Planet Russell


Cryptogram: Using Machine Learning to Detect IP Hijacking

This is interesting research:

In a BGP hijack, a malicious actor convinces nearby networks that the best path to reach a specific IP address is through their network. That's unfortunately not very hard to do, since BGP itself doesn't have any security procedures for validating that a message is actually coming from the place it says it's coming from.

[...]

To better pinpoint serial attacks, the group first pulled data from several years' worth of network operator mailing lists, as well as historical BGP data taken every five minutes from the global routing table. From that, they observed particular qualities of malicious actors and then trained a machine-learning model to automatically identify such behaviors.

The system flagged networks that had several key characteristics, particularly with respect to the nature of the specific blocks of IP addresses they use:

  • Volatile changes in activity: Hijackers' address blocks seem to disappear much faster than those of legitimate networks. The average duration of a flagged network's prefix was under 50 days, compared to almost two years for legitimate networks.

  • Multiple address blocks: Serial hijackers tend to advertise many more blocks of IP addresses, also known as "network prefixes."

  • IP addresses in multiple countries: Most networks don't have foreign IP addresses. In contrast, the address blocks that serial hijackers advertised were much more likely to be registered in different countries and continents.
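
For illustration only, the three flagged characteristics above could be combined into a simple rule-based screen like the sketch below. This is not the researchers' actual model, and the thresholds (50 days, 100 prefixes) are hypothetical, chosen only to mirror the article:

```python
# Illustrative only: a rule-based screen mirroring the three characteristics
# above. The thresholds are hypothetical; the researchers trained an actual
# machine-learning model on far richer historical BGP data.

def flag_suspicious(prefixes):
    """prefixes: list of dicts with 'days_active' and 'country' keys,
    one per address block the network advertises."""
    avg_duration = sum(p["days_active"] for p in prefixes) / len(prefixes)
    countries = {p["country"] for p in prefixes}
    volatile = avg_duration < 50          # short-lived announcements
    many_blocks = len(prefixes) > 100     # advertises many prefixes
    multinational = len(countries) > 1    # blocks registered across countries
    return sum([volatile, many_blocks, multinational]) >= 2

# A network advertising many short-lived blocks across two countries:
suspect = ([{"days_active": 10, "country": "A"}] * 60
           + [{"days_active": 20, "country": "B"}] * 60)
print(flag_suspicious(suspect))  # True
```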

Note that this is much more likely to detect criminal attacks than nation-state activities. But it's still good work.

Academic paper.

Planet Debian: Jonathan Carter: Calamares Plans for Debian 11

Brief history of Calamares in Debian

Before Debian 9 was released, I was preparing a release for a derivative of Debian that was a bit different than other Debian systems I’ve prepared for redistribution before. This was targeted at end-users, some of whom might have used Ubuntu before, but otherwise had no Debian related experience. I needed to find a way to make Debian really easy for them to install. Several options were explored, and I found that Calamares did a great job of making it easy for typical users to get up and running fast.

After Debian 9 was released, I learned that other Debian derivatives were also using Calamares or planning to do so. It started to make sense to package Calamares in Debian so that we don’t duplicate work across all these projects. On its own, Calamares isn’t very useful: if you ran the pure upstream version in Debian, it would crash before it starts to install anything, because Calamares needs some distribution-specific configuration and helpers. Most notably in Debian’s case, this means setting the location of the squashfs image we want to copy over, and some scripts to install either grub-pc or grub-efi depending on how we boot. Since I had already done most of the work to figure this out, I created a package called calamares-settings-debian, which contains enough configuration to install Debian using Calamares, so that derivatives can easily copy and adapt it into their own calamares-settings-* packages for use in their systems.
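
As a rough sketch of the grub-pc/grub-efi decision described above (the actual helper scripts in calamares-settings-debian differ; /sys/firmware/efi is the standard Linux marker for a UEFI boot):

```python
# Rough sketch of the boot-mode check behind the grub-pc/grub-efi choice;
# the actual helpers in calamares-settings-debian differ. On Linux,
# /sys/firmware/efi only exists when the system was booted via UEFI.
import os

def grub_package(efi_marker="/sys/firmware/efi"):
    """Return the GRUB package matching how the live system was booted."""
    return "grub-efi" if os.path.isdir(efi_marker) else "grub-pc"
```

The installer-side script would then install the returned package into the target system.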

In Debian 9, the live images were released without an installer available in the live session. Unfortunately the debian-installer live session used in previous releases had become hard to maintain and had a growing number of bugs that made it unsuitable for release, so Steve from the live team suggested that we add Calamares to the Debian 10 test builds and give it a shot. This surprised me, because I never thought that Calamares would actually ship on official Debian media. We tried it out, and it worked well, so the Debian 10 live media was released with it. It turned out great: every review of Debian 10 I’ve seen so far had very good things to say about it, and the very few problems people found have already been fixed upstream (I plan to backport those fixes to buster soon).

Plans for Debian 11 (bullseye)

New slideshow

If I had to pick my biggest regret regarding the Debian 10 release, this slideshow would probably be it: it’s just the one slide depicted above. The time needed to create a nice slideshow was one constraint, but I also didn’t have enough time to figure out how its translations work and do a proper call for translations before the hard freeze. I consider the slideshow a golden opportunity to explain to new users what the Debian project is about and what this new Debian system they’re installing is capable of, so this was a huge missed opportunity that I don’t want to repeat.

I intend to pull in some help from the web team, publicity team and anyone else who might be interested to cover slides along the topics of (just a quick braindump, final slides will likely have significantly different content):

  • The Debian project, and what it’s about
    • Who develops Debian and why
    • Cover the social contract, and perhaps touch on the DFSG
  • Who uses Debian? Mention notable users and use cases
    • Big and small commercial companies
    • Educational institutions
    • …even NASA?
  • What Debian can do
    • Explain vast package library
    • Provide some tips and tricks on what to do next once the system is installed
  • Where to get help
    • Where to get documentation
    • Where to get additional help

It shouldn’t get too heavy and shouldn’t run longer than a maximum of three minutes or so, because in some cases that might be all the time we have during this stage of the installation.

Try out RAID support

Calamares now has RAID support. It’s still very new and as far as I know it’s not yet widely tested. It needs to be enabled as a compile-time option and depends on kpmcore 4.0.0, which Calamares uses for its partitioning backend. kpmcore 4.0.0 just entered unstable this week, so I plan to do an upload to test this soon.

RAID support is one of the biggest features missing from Calamares, and enabling it would make it a lot more useful for typical office environments where RAID 1 is commonly used on workstations. Some consider RAID on desktops somewhat less important than it used to be: with fast SSDs and network provisioning over gigabit ethernet, it’s really quick to recover from a failed disk, but you still have downtime until the person responsible pops over to replace that disk. With RAID 1 you can avoid or drastically decrease that downtime, which makes the cost of the extra disk completely worthwhile.

Add Debian-specific options

The intent is to keep the installer simple, so adding new options is a tricky business, but it would be nice to cover some Debian-specific options in the installer just like debian-installer does. At this point I’m considering adding:

  • Choosing a mirror. Currently it just defaults to writing a sources.list file that uses deb.debian.org, which is usually just fine.
  • Add an option to enable popularity contest (popcon). As a Debian developer I find popcon stats quite useful. Even though just a small percentage of people enable it, it provides enough data to help us understand how widely packages are used, especially relative to other packages. I’m slightly concerned that desktop users who now use Calamares instead of d-i, and who forget to enable popcon after installation, will skew popcon results for desktop packages compared to previous releases.

Skip files that we’re deleting anyway

At DebConf19, I gave a lightning talk titled “Is it possible to install Debian in a lightning talk slot?”. The answer was sadly “No.”. The idea is that you should be able to install a full Debian desktop system within 5 minutes. In my preparations for the talk, I got it down to just under 6 minutes. It ended up taking just under 7 minutes during my lightning talk, probably because I forgot to plug my laptop into a power source and somehow got throttled to save power. Under 7 minutes is fast, but the exercise got me looking at what wasted the most time during installation.

Of the avoidable things that happen during installation, the one that takes the most time by a large margin is removing packages that we don’t want on the installed system. During installation, the whole live system is copied from the installation media over to the hard disk, and then the live packages (including Calamares itself) are removed from that installation. APT (or more specifically, dpkg) is notorious for playing it safe with filesystem operations, so removing all these live packages takes quite some time (more, even, than copying them there in the first place).

The contents of the squashfs image are copied over to the filesystem using rsync, so it is possible to provide an exclude list of files that we don’t want. I filed a bug in Calamares to add support for such an exclude list, and it was added in version 3.2.15, released this week. We still need to add support in the live image build scripts to generate these file lists based on the packages we want to remove, but that’s a different long blog post altogether.
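
The build-script side could look something like the following sketch. The helper is hypothetical: a real build script would obtain each package's file list from dpkg (e.g. /var/lib/dpkg/info/ or dpkg-query) rather than a hard-coded dict:

```python
# Hypothetical sketch of turning "packages to remove after install" into
# an rsync exclude list; a real build script would read each package's
# file list from dpkg instead of a dict.

def rsync_excludes(package_files):
    """package_files: dict of package name -> absolute file paths.

    Returns patterns suitable for rsync --exclude-from, relative to the
    transfer root (rsync matches them against the copied tree)."""
    patterns = []
    for pkg in sorted(package_files):
        for path in package_files[pkg]:
            patterns.append(path.lstrip("/"))
    return patterns

live_only = {"calamares": ["/usr/bin/calamares", "/usr/share/calamares"]}
print("\n".join(rsync_excludes(live_only)))
```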

This feature also opens the door for a minimal mode option, where you could choose to skip non-essential packages such as LibreOffice and GIMP. In reality these packages will still be removed using APT in the destination filesystem, but it will be significantly faster since APT won’t have to remove any real files. The Ubuntu installer (Ubiquity) has done something similar for a few releases now.

Add a framebuffer session

As is the case with most Qt5 applications, Calamares can run directly on the Linux framebuffer without the need for Xorg or Wayland. To try it out, all you need to do is run “sudo calamares -platform linuxfb” on a live console and you’ll get Calamares right there in your framebuffer. It’s not tested upstream so it looks a bit rough. As far as I know I’m the only person so far to have installed a system using Calamares on the framebuffer.

The plan is to create a systemd unit that launches this at startup if ‘calamares’ is passed as a boot parameter. This way, derivatives that want this and that use calamares-settings-debian (or their own fork) can just create a boot menu entry to activate the framebuffer installation without any additional work. I don’t think it should be too hard to make it look decent in this mode either.
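
The condition such a unit needs is simple; here is a sketch of the check (note that systemd units can also express this natively with a ConditionKernelCommandLine=calamares directive):

```python
# Sketch of the boot-parameter check the launcher would need; a systemd
# unit can express the same condition with ConditionKernelCommandLine=.

def calamares_requested(cmdline=None):
    """True if 'calamares' appears as a token on the kernel command line."""
    if cmdline is None:
        with open("/proc/cmdline") as f:
            cmdline = f.read()
    return "calamares" in cmdline.split()

print(calamares_requested("quiet splash calamares"))  # True
```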

Calamares on the framebuffer might also be useful for people who ship headless appliances based on Debian but who still need a simple installer.

Document calamares-settings-debian for derivatives

As the person who put together most of calamares-settings-debian, I consider it quite easy to understand and adapt calamares-settings-debian for other distributions. But even so, it takes a lot more time for someone who wants to adapt it for a derivative to delve into it than it would to just read some quick documentation on it first.

I plan to document calamares-settings-debian on the Debian wiki that covers everything that it does and how to adapt it for derivatives.

Improve Scripts

When writing helper scripts for Calamares in Debian, I focused on getting everything working reliably and in time for the hard freeze. I cringed when looking at some of these again after the buster release: they’re not entirely horrible, but they could use better conventions and be easier to follow, so I want to get this right for bullseye. Some scripts might even be eliminated if we can build better images. For example, we install either grub-efi or grub-pc from the package pool on the installation media based on the boot method used, because in the past you couldn’t have both installed at the same time, so they were just shipped as additional available packages. With changes in the GRUB packaging (in place for a few releases now), it’s possible to have grub-efi and grub-pc-bin installed at the same time, so if we install both at build time it may be possible to simplify those pieces (and save another few precious seconds of install time).

Feature Requests

I’m sure some people reading this will have more ideas, but I’m not in a position to accept feature requests right now; Calamares is just one of a whole bunch of areas in Debian I’m working on for this release. If you have ideas or feature requests, consider filing them in Calamares’ upstream bug tracker on GitHub, or get involved in the efforts. Calamares has an IRC channel on freenode (#calamares), and for Debian-specific matters you can join the Debian live channel on OFTC (#debian-live).

Worse Than Failure: CodeSOD: A Context for Logging

When logging in Java, especially frameworks like Spring, making sure the logging statement has access to the full context of the operation in flight is important. Instead of spamming piles of logging statements in your business logic, you can use a “mapped diagnostic context” to cache useful bits of information during an operation, such that any logging statement can access it.

One of the tools for this is the “Mapped Diagnostic Context”, MDC. Essentially, it’s very much like a great big hash map that happens to be thread-local and is meant to be used by the logging framework. It’s a global-ish variable, but without the worst side effects of being global.

And you know people just love to use global variables.

Lothar was trying to figure out some weird requests coming out of an API, and needed to know where certain session ID values were coming from. There are a lot of “correct” ways to store session information in your Java Spring applications, and he assumed that was how they were storing those things. Lothar was wrong.

He provided this anonymized/generalized example of how pretty much every one of their REST request methods looked:

  @Override
  public Wtf getWtf(String wtfId) {

      Map<String, Object> params = new HashMap<>();
      params.put("wtfId", wtfId);
      params.put("sessId", MDC.get(MDC_LABEL_SESSION_ID));
      params.put(MDC_LABEL_SESSION_ID, MDC.get(MDC_LABEL_SESSION_ID));

      UriComponents uriComponents = UriComponentsBuilder
              .fromUriString("https://thedailywtf.com")
              .buildAndExpand(params);
      String urlString = uriComponents.toUriString();
      ResponseEntity<byte[]> responseEntity = restTemplate.getForEntity(urlString, byte[].class);
      // The anonymized example stops here; presumably the real code
      // deserialized responseEntity.getBody() into a Wtf and returned it.
  }

Throughout their application, they (ab)used their logging framework as a thread-local storage system for passing user session data around.

Sure, the code was stupid, but the worst part about this code was that it worked. It did everything it needed to do, and it also meant that all of their log messages had rich context which made it easier to diagnose issues.

If it’s stupid and it works, that means you ship it.


Krebs on Security: When Card Shops Play Dirty, Consumers Win

Cybercrime forums have been abuzz this week over news that BriansClub — one of the underground’s largest shops for stolen credit and debit cards — has been hacked, and its inventory of 26 million cards shared with security contacts in the banking industry. Now it appears this brazen heist may have been the result of one of BriansClub’s longtime competitors trying to knock out a rival.

An advertisement for BriansClub that for years has used my name and likeness to peddle stolen cards.

Last month, KrebsOnSecurity was contacted by an anonymous source who said he had the full database of 26M cards stolen from BriansClub, a carding site that has long used this author’s name and likeness in its advertising. The stolen database included cards added to the site between mid-2015 and August 2019.

This was a major event in the underground, as experts estimate that the stolen cards leaked from BriansClub represent almost 30 percent of the cards on the black market today.

The purloined database revealed BriansClub sold roughly 9.1 million stolen credit cards, earning the site and its resellers a cool $126 million in sales over four years.

In response to questions from KrebsOnSecurity, the administrator of BriansClub acknowledged that the data center serving his site had been hacked earlier in the year (BriansClub claims this happened in February), but insisted that all of the cards stolen by the hacker had been removed from BriansClub store inventories.

However, as I noted in Tuesday’s story, multiple sources confirmed they were able to find plenty of card data included in the leaked database that was still being offered for sale at BriansClub.

Perhaps inevitably, the admin of BriansClub took to the cybercrime forums this week to defend his business and reputation, re-stating his claim that all cards included in the leaked dump had been cleared from store shelves.

The administrator of BriansClub, who’s appropriated the name and likeness of Yours Truly for his advertising, fights to keep his business alive.

Meanwhile, some of BriansClub’s competitors gloated about the break-in. According to the administrator of Verified, one of the longest running Russian language cybercrime forums, the hack of BriansClub was perpetrated by a fairly established ne’er-do-well who uses the nickname “MrGreen” and runs a competing card shop by the same name.

The Verified site admin said MrGreen had been banned from the forum, and added that “sending anything to Krebs is the lowest of all lows” among accomplished and self-respecting cybercriminals. I’ll take that as a compliment.

This would hardly be the first time some cybercriminal has used me to take down one of his rivals. In most cases, I’m less interested in the drama and more keen on validating the data and getting it into the proper hands to do some good.

That said, if the remainder of BriansClub’s competitors want to use me to take down the rest of the carding market, I’m totally fine with that.

The BriansClub admin, defending the honor of his stolen cards shop after a major breach.

Planet Debian: Louis-Philippe Véronneau: Montreal Subway Foot Traffic Data

Two weeks ago, I took the Montreal subway with my SO and casually mentioned it would be nice to understand why the Joliette subway station has more foot traffic than the next one, Pie-IX. Is the part of the neighborhood served by the Joliette station denser? Would there be a correlation between mean household income and foot traffic? Has the more aggressive gentrification around the Joliette station affected its ridership?

Much to my surprise, instead of sharing my urbanistical enthusiasm, my SO readily disputed what I thought was an irrefutable fact: "Pie-IX has more foot traffic than Joliette, mainly because of the superior amount of bus lines departing from it" she told me.

Shaken to the core, I decided to prove to the world I was right and asked Société de Transport de Montréal (STM) for the foot traffic data of the Montreal subway.

Turns out I was wrong (Pie-IX is about twice as big as Joliette...) and individual observations are often untrue. Shocking, right?!

Visualisations

STM kindly sent me daily values for each subway station from 2001 to 2018. Armed with all this data, I decided to play a little with R and came up with some interesting graphs.
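
The analysis was done in R; for readers curious about the shape of the computation, here is the same kind of per-station aggregation as a tiny Python sketch. The numbers are invented, not STM's:

```python
# The same shape of computation as the R analysis, with invented numbers:
# average the daily entries per station, then compare stations.
from collections import defaultdict

rows = [  # (station, year, mean daily entries) -- invented values
    ("Joliette", 2017, 9000), ("Joliette", 2018, 9500),
    ("Pie-IX", 2017, 17500), ("Pie-IX", 2018, 18500),
]

totals = defaultdict(list)
for station, year, entries in rows:
    totals[station].append(entries)

means = {s: sum(v) / len(v) for s, v in totals.items()}
print(means["Pie-IX"] / means["Joliette"])  # about 1.95: roughly twice as busy
```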

Behold this semi-interactive map of Montreal's subway! By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

Interactive Map of Montreal's Subway

I also made per-line graphs that include data from multiple stations. Some of them (like the Orange line) are quite crowded though:

Licences

  • The subway map displayed on this page, the original dataset and my modified dataset are licensed under CC0 1.0: they are in the public domain.

  • The R code I wrote is licensed under the GPLv3+. Feel free to reuse it if you get a more up to date dataset from the STM.


Planet Debian: Antoine Beaupré: Theory: average bus factor = 1

Two articles recently made me realize that all my free software projects basically have a bus factor of one. I am the sole maintainer of every piece of software I have ever written that I still maintain. There are projects that I have been the maintainer of which have other maintainers now (most notably AlternC, Aegir and Linkchecker), but I am not the original author of any of those projects.

Now that I have a full time job, I feel the pain. Projects like Gameclock, Monkeysign, Stressant, and (to a lesser extent) Wallabako all need urgent work: the first three need to be ported to Python 3, the first two to GTK 3, and the latter will probably die because I am getting a new e-reader. (For the record, more recent projects like undertime and feed2exec are doing okay, mostly because they were written in Python 3 from the start, and the latter has extensive unit tests. But they do suffer from the occasional bitrot (the latter in particular) and need constant upkeep.)

Now that I barely have time to keep up with just the upkeep, I can't help but think all of my projects will just die if I stop working on them. I have the same feeling about the packages I maintain in Debian.

What does that mean? Does that mean those packages are useless? That no one cares enough to get involved? That I'm not doing a good job at including contributors?

I don't think so. I think I'm a friendly person online, and I try my best at doing good documentation and followup on my projects. What I have come to understand is even more depressing and scary than this being a personal failure: it is the situation for everyone, everywhere. The LWN article is not talking about silly things like a chess clock or a feed reader: we're talking about the Linux input drivers, a very deep, core component of the vast majority of computers running on the planet, which depends on that single maintainer. And I'm not talking about whether those people are paid or not; that's related, but not directly the question here. The same realization occurred with OpenSSL and NTP, GnuPG is in a similar situation, and the list just goes on and on.

A single guy maintains those projects! Is that a fluke? A statistical anomaly? Everything I feel, and read, and know in my decades of experience with free software show me a reality that I've been trying to deny for all that time: it's the average.

My theory is this: our average bus factor is one. I don't have any hard evidence to back this up, no hard research to rely on. I'd love to be proven wrong. I'd love for this to change.
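
For what it's worth, one rough way to put a number on a project's bus factor from its commit history is shown below. This is only a heuristic (it ignores review, release, and infrastructure duties), but it captures why single-author projects score 1:

```python
# Heuristic bus-factor estimate: the smallest set of authors responsible
# for more than half of all commits. Ignores review/release duties.
from collections import Counter

def bus_factor(commit_authors, threshold=0.5):
    counts = Counter(commit_authors)
    total = sum(counts.values())
    covered = 0
    for rank, (_, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered > total * threshold:
            return rank
    return len(counts)

# A typical hobby project: one author dominates.
print(bus_factor(["solo-dev"] * 90 + ["drive-by"] * 10))  # 1
```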

But unless the economics of technology production change significantly in the coming decades, this problem will remain, and probably worsen, as we keep scaffolding an entire civilization on the shoulders of hobbyists who are barely aware their work is being used to power phones, cars, airplanes and hospitals. A lot has been written on this, but nothing seems to be moving.

And if that doesn't scare you, it damn well should. As a user, one thing you can do is, instead of wondering if you should buy a bit of proprietary software, consider using free software and donating that money to free software projects instead. Lobby governments and research institutions to sponsor only free software projects. Otherwise this civilization will collapse in a crash of spaghetti code before it even has time to get flooded over.

Google AdSense: Your Google AdSense seasonal guide

This guide is here to help you learn about the importance of seasonality as well as provide you with tips on how to get more from your Google AdSense account and site during peak seasonal traffic. This guide is filled with easy, actionable insights you can start implementing today.

Here’s a sneak peek of what will be posted over the next couple of weeks to prepare you well for this busy time of the year. Let’s get holiday season ready!

  • Pre-holiday season: How to prepare? 
  • During the holiday season: How to maximize the opportunities 
  • Post holidays: Looking at 2020 


Defining seasonal periods 

What is seasonality?

Any predictable fluctuation or pattern that recurs over a calendar year is defined as seasonality. It can be categorized into 3 types of events and holidays:

  1. Cultural (e.g. Ramadan, Thanksgiving, Christmas) 
  2. Commercial (e.g. Black Friday, Singles’ Day, Mother’s Day)
  3. Ad-hoc events (e.g. Olympics, Elections, TV series) 

By spotting events and holidays impacting your audience, you can identify your seasonality opportunities to increase revenue and attract new users.

What drives seasonality? 

Publisher revenue is driven by two interlinked factors: RPMs and traffic. During seasonal peaks, advertisers are willing to pay more for inventory, leading to higher RPMs, while increased traffic leads to more impressions. Together, these two factors drive the seasonal spikes and dips in publisher earnings.
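
The relationship is simple arithmetic: earnings = impressions / 1000 × RPM, so a seasonal lift in both factors multiplies. The numbers below are invented, purely to show the effect:

```python
# Earnings scale with both factors: impressions / 1000 * RPM.
# The numbers are invented, purely to show the multiplication.

def earnings(impressions, rpm):
    return impressions / 1000 * rpm

print(earnings(40_000, 2.50))              # baseline: 100.0
print(earnings(40_000 * 1.5, 2.50 * 1.4))  # lift in both factors: 210.0
```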


Why is it important for publishers?

Seasonality offers publishers opportunities to attract new audiences and maximize revenues. The upcoming holiday season (October - December) is the biggest annual seasonal opportunity. Let’s explore why!

  1. Large increase in internet traffic (higher query opportunity)
    Over the past 5 years, the holiday season internet traffic has grown as users flock online to purchase gifts for themselves and others.

  2. Higher competition between advertisers (higher query opportunity)
    The holiday shopping season is the busiest retail period of the year. Advertisers spend more during this time to ensure their products and brands are well-positioned at key events like Black Friday and Cyber Monday. 
    For example, consumers around the world and especially in Asia reportedly spent more than $1 billion in the first 90 seconds of Singles’ Day in 2018. Advertisers are eager to be there for moments like this.

Starting the holiday season marathon

Let’s focus on the upcoming season. The October-December period is a global seasonal marathon offering the greatest annual opportunity to maximize your earnings. In fact, this seasonal opportunity keeps growing: in 2018, average global interest in Black Friday was more than double what it had been five years earlier.

Get your calendar open and mark the following dates:




Let’s see what you can do to make the most out of your site and AdSense account before, during and after the holiday season.

In next week’s blog post, we’ll provide you with tips on how to prepare your site and AdSense account for the rush of the holiday season.

Posted by:
Daryna Chushko - Publisher Growth Marketing Manager

Planet Debian: Jonathan Dowland: PhD Poster

Thumbnail of the poster

Today the School of Computing organised a poster session for stage 2 & 3 PhD candidates. Here's the poster I submitted: jdowland-phd-poster.pdf (692K)

This is the first poster I've prepared for my PhD work. I opted to follow the "BetterPoster" principles established by Mike Morrison. These are best summarized in his #BetterPoster 2 minute YouTube video. I adapted this LaTeX #BetterPoster template. The template is licensed under the GPL v3.0, which requires me to provide the source of the poster, so here it is.

After browsing other students' posters, two things I would now add to mine are a mugshot (so people could easily determine whose poster it was, if they wanted to ask questions) and Red Hat's logo, to acknowledge their support of my work.

LongNow: Andrew McAfee Bets on More from Less

The Mount Whaleback Iron Ore Mine in the Pilbara region of Western Australia. One of McAfee’s Long Bets predicts that by 02029, we’ll use less iron and steel than we do today. Daily Overview.

Our next Seminar speaker, Andrew McAfee, has offered a group of 14 predictions on Long Bets related to the issues of resource consumption explored in his new book, More From Less. McAfee, co-founder and co-director of MIT’s Initiative on the Digital Economy and a Principal Research Scientist at the MIT Sloan School of Management, studies how digital technologies are changing the world.

In More From Less, McAfee argues that “we have at last learned how to increase human prosperity while treading more lightly on our planet.”

McAfee’s Long Bets all share the same duration (10 years; 02019-02029) and are all predicated on the same central thesis: that economic growth in technology-intensive economies will lead to dematerialization:

During the Industrial Era, economies around the world grew rapidly. And as they grew, they used more of the Earth’s resources year after year: metals, minerals, fertilizers, trees, fossil fuels, cropland, and so on.

Then we invented the computer and its kin, and the pattern changed.

Hardware, software, and networks allow companies to use fewer materials as they produce their goods and services. Profit-seeking companies in competitive markets are eager to pursue these opportunities to dematerialize because they bring cost savings, and a penny saved is a penny earned. Dematerialization accumulates over time, and the economy as a whole eventually moves past “peak stuff” with respect to more and more resources.

The counterintuitive result is that capitalist, technology-intensive economies like America’s are now dematerializing across a wide range of resources, and will continue to do so for the foreseeable future.

Andrew McAfee, Long Bet 795

McAfee is predicting that by 02029, the United States will consume less energy and produce fewer CO2 emissions than it did a decade prior. Crop tonnage will be greater, but we’ll use less total cropland, fertilizer, and water for irrigation in agriculture. We’ll use less iron and steel, aluminum, nickel, copper, gold, rare earth elements, chromium, tin and tungsten, all of which will become more affordable to the world’s average person. We’ll also use less timber and paper.

The predictions call to mind the bet that biologist and environmentalist Paul Ehrlich made with economist Julian Simon in 01980. Ehrlich bet Simon $10,000 that the prices of five metals (copper, chromium, nickel, tin, and tungsten) would increase over a decade. The prices declined sharply, and Simon won the bet.

The Simon-Ehrlich wager.

The wager received a lot of publicity over the decade, and the result ultimately shaped societal thinking around limited resources. “Simon was a prolific skeptic of environmentalism,” Kevin Kelly wrote, “yet nothing that he ever wrote had as much impact on the course of culture as his wager with Ehrlich. That single, relatively small bet transformed the environmental movement by casting doubt on the notion of resource scarcity.” (Although, had the bet been repeated in subsequent decades, Ehrlich would have won, given the rise in commodity prices. On a long enough timescale, however, that is one more blip in a multiple-centuries-long trend toward decreasing prices.)

McAfee is now seeking challengers for his bets. “If you think that economic growth is incompatible with taking less from the planet,” he tweeted, “these bets are for you!”

Learn More

Worse Than Failure: CodeSOD: The Replacements

Nobody wants to have a Bobby Tables moment in their database, so we need to sanitize our inputs. Ted C noticed a bunch of stored procedures which contained lines like this:

  @scrubbed = fn_ScrubInput(fn_ScrubInput(@input))

Obviously, they wanted to be super careful and make sure their inputs were clean. But it got Ted curious, so he checked out how the function was implemented. The function body had one line, the RETURN line, which looked like this:

  RETURN REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
         REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
         REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
         @input,
         '"', '\"'), '*', '\*'), '~', '\~'), '@', '\@'),
         '#', '\#'), '$', '\$'), '%', '\%'), '^', '\^'),
         '&', '\&'), '(', '\('), ')', '\)'), '_', '\_'),
         '+', '\+'), '=', '\='), '>', '\>'), '<', '\<'),
         '?', '\?'), '/', '\/')

Whitespace added.

Ted REPLACE REPLACE REPLACEd this with a call to the built-in STRING_ESCAPE function, which handled the escaping they needed.



    CryptogramCracking the Passwords of Early Internet Pioneers

    Lots of them weren't very good:

    Unix co-inventor Dennis Ritchie, for instance, used "dmac" (his middle name was MacAlistair); Stephen R. Bourne, creator of the Bourne shell command line interpreter, chose "bourne"; Eric Schmidt, an early developer of Unix software and now the executive chairman of Google parent company Alphabet, relied on "wendy!!!" (the name of his wife); and Stuart Feldman, author of Unix automation tool make and the first Fortran 77 compiler, used "axolotl" (the name of a Mexican salamander).

    Weakest of all was the password for Unix contributor Brian W. Kernighan: "/.,/.," representing a three-character string repeated twice using adjacent keys on a QWERTY keyboard. (None of the passwords included the quotation marks.)

    I don't remember any of my early passwords, but they probably weren't much better.

    LongNowLong Short: Lil Buck with Icons of Modern Art

    Lil Buck with Icons Of Modern Art from NOWNESS on Vimeo.

    The Long Short for Suhanya Raffel’s seminar, World Art Through the Asian Perspective, featured American dancer Lil Buck dancing his way through the Fondation Louis Vuitton in Paris.

    “We’re going to hear many interventions that happen at museums,” Alexander Rose said, “and this is one of the more, let’s say, disruptive ones.”

    The Long Short even inspired Stewart Brand to try his hand at Lil Buck’s “gangsta walk.”

    Krebs on Security“BriansClub” Hack Rescues 26M Stolen Cards

    “BriansClub,” one of the largest underground stores for buying stolen credit card data, has itself been hacked. The data stolen from BriansClub encompasses more than 26 million credit and debit card records taken from hacked online and brick-and-mortar retailers over the past four years, including almost eight million records uploaded to the shop in 2019 alone.

    An ad for BriansClub has been using my name and likeness for years to peddle millions of stolen credit cards.

    Last month, KrebsOnSecurity was contacted by a source who shared a plain text file containing what was claimed to be the full database of cards for sale both currently and historically through BriansClub[.]at, a thriving fraud bazaar named after this author. Imitating my site, likeness and namesake, BriansClub even dubiously claims a copyright with a reference at the bottom of each page: “© 2019 Crabs on Security.”

    Multiple people who reviewed the database shared by my source confirmed that the same credit card records also could be found in a more redacted form simply by searching the BriansClub Web site with a valid, properly-funded account.

    All of the card data stolen from BriansClub was shared with multiple sources who work closely with financial institutions to identify and monitor or reissue cards that show up for sale in the cybercrime underground.

    The leaked data shows that in 2015, BriansClub added just 1.7 million card records for sale. But business would pick up in each of the years that followed: In 2016, BriansClub uploaded 2.89 million stolen cards; 2017 saw some 4.9 million cards added; 2018 brought in 9.2 million more.

    Between January and August 2019 (when this database snapshot was apparently taken), BriansClub added roughly 7.6 million cards.

    Most of what’s on offer at BriansClub are “dumps,” strings of ones and zeros that — when encoded onto anything with a magnetic stripe the size of a credit card — can be used by thieves to purchase electronics, gift cards and other high-priced items at big box stores.

    As shown in the table below (taken from this story), many federal hacking prosecutions involving stolen credit cards will for sentencing purposes value each stolen card record at $500, which is intended to represent the average loss per compromised cardholder.

    The black market value, impact to consumers and banks, and liability associated with different types of card fraud.

    STOLEN BACK FAIR AND SQUARE

    An extensive analysis of the database indicates BriansClub holds approximately $414 million worth of stolen credit cards for sale, based on the pricing tiers listed on the site. That’s according to an analysis by Flashpoint, a security intelligence firm based in New York City.

    Allison Nixon, the company’s director of security research, said the data suggests that between 2015 and August 2019, BriansClub sold roughly 9.1 million stolen credit cards, earning the site $126 million in sales (all sales are transacted in bitcoin).

    If we take just the 9.1 million cards that were confirmed sold through BriansClub, we’re talking about more than $4 billion in likely losses at the $500 average loss per card figure from the Justice Department.

    Also, it seems likely the total number of stolen credit cards for sale on BriansClub and related sites vastly exceeds the number of criminals who will buy such data. Shame on them for not investing more in marketing!

    There’s no easy way to tell how many of the 26 million or so cards for sale at BriansClub are still valid, but the closest approximation of that — how many unsold cards have expiration dates in the future — indicates more than 14 million of them could still be valid.

    The archive also reveals the proprietor(s) of BriansClub frequently uploaded new batches of stolen cards — some just a few thousand records, and others tens of thousands.

    That’s because like many other carding sites, BriansClub mostly resells cards stolen by other cybercriminals — known as resellers or affiliates — who earn a percentage from each sale. It’s not yet clear how that revenue is shared in this case, but perhaps this information will be revealed in further analysis of the purloined database.

    BRIANS CHAT

    In a message titled “Your site is hacked,” KrebsOnSecurity requested comment from BriansClub via the “Support Tickets” page on the carding shop’s site, informing its operators that all of their card data had been shared with the card-issuing banks.

    I was surprised and delighted to receive a polite reply a few hours later from the site’s administrator (“admin”):

    “No. I’m the real Brian Krebs here 🙂

    Correct subject would be the data center was hacked.

    Will get in touch with you on jabber. Should I mention that all information affected by the data-center breach has been since taken off sales, so no worries about the issuing banks.”

    Flashpoint’s Nixon said a spot check comparison between the stolen card database and the card data advertised at BriansClub suggests the administrator is not being truthful in his claims of having removed the leaked stolen card data from his online shop.

    The admin hasn’t yet responded to follow-up questions, such as why BriansClub chose to use my name and likeness to peddle millions of stolen credit cards.

    Almost certainly, at least part of the appeal is that my surname means “crab” (or cancer), and crab is Russian hacker slang for “carder,” a person who engages in credit card fraud.

    Many of the cards for sale on BriansClub are not visible to all customers. Those who wish to see the “best” cards in the shop need to maintain certain minimum balances, as shown in this screenshot.

    HACKING BACK?

    Nixon said breaches of criminal website databases often lead not just to prevented cybercrimes, but also to arrests and prosecutions.

    “When people talk about ‘hacking back,’ they’re talking about stuff like this,” Nixon said. “As long as our government is hacking into all these foreign government resources, they should be hacking into these carding sites as well. There’s a lot of attention being paid to this data now and people are remediating and working on it.”

    By way of example on hacking back, she pointed to the 2016 breach of vDOS — at the time the largest and most powerful service for knocking Web sites offline in large-scale cyberattacks.

    Soon after vDOS’s database was stolen and leaked to this author, its two main proprietors were arrested. Also, the database added to evidence of criminal activity for several other individuals who were persons of interest in unrelated cybercrime investigations, Nixon said.

    “When vDOS got breached, that basically reopened cases that were cold because [the leak of the vDOS database] supplied the final piece of evidence needed,” she said.

    THE TARGET BREACH OF THE UNDERGROUND?

    After many hours spent poring over this data, it became clear I needed some perspective on the scope and impact of this breach. As a major event in the cybercrime underground, was it somehow the reverse analog of the Target breach — which negatively impacted tens of millions of consumers and greatly enriched a large number of bad guys? Or was it more prosaic, like a Jimmy Johns-sized debacle?

    For that insight, I spoke with Gemini Advisory, a New York-based company that works with financial institutions to monitor dozens of underground markets trafficking in stolen card data.

    Andrei Barysevich, co-founder and CEO at Gemini, said the breach at BriansClub is certainly significant, given that Gemini currently tracks a total of 87 million credit and debit card records for sale across the cybercrime underground.

    Gemini is monitoring most underground stores that peddle stolen card data — including such heavy hitters as Joker’s Stash, Trump’s Dumps, and BriansDump.

    Contrary to popular belief, when these shops sell a stolen credit card record, that record is then removed from the inventory of items for sale. This allows companies like Gemini to determine roughly how many new cards are put up for sale and how many have sold.

    Barysevich said the loss of so many valid cards may well impact how other carding stores compete and price their products.

    “With over 78% of the illicit trade of stolen cards attributed to only a dozen of dark web markets, a breach of this magnitude will undoubtedly disturb the underground trade in the short term,” he said. “However, since the demand for stolen credit cards is on the rise, other vendors will undoubtedly attempt to capitalize on the disappearance of the top player.”

    Liked this story and want to learn more about how carding shops operate? Check out Peek Inside a Professional Carding Shop.

    Planet DebianJulien Danjou: Sending Emails in Python — Tutorial with Code Examples


    What do you need to send an email with Python? Some basic programming and web knowledge, along with elementary Python skills. I assume you’ve already built a web app with this language and now you need to extend its functionality with notifications or other email sending. This tutorial will guide you through the most essential steps of sending emails via an SMTP server:

    1. Configuring a server for testing (do you know why it’s important?)
    2. Local SMTP server
    3. Mailtrap test SMTP server
    4. Different types of emails: HTML, with images, and attachments
    5. Sending multiple personalized emails (Python is just invaluable for email automation)
    6. Some popular email sending options like Gmail and transactional email services

    Served with numerous code examples written and tested on Python 3.7!

    Sending an email using an SMTP server

    The first good news about Python is that it has a built-in module for sending emails via SMTP in its standard library. No extra installations or tricks are required. You can import the module using the following statement:

    import smtplib

    To make sure that the module has been imported properly and get the full description of its classes and arguments, type in an interactive Python session:

    help(smtplib)

    At our next step, we will talk a bit about servers: choosing the right option and configuring it.

    An SMTP server for testing emails in Python

    When creating a new app or adding any functionality, especially when doing it for the first time, it’s essential to experiment on a test server. Here is a brief list of reasons:

    1. You won’t hit your friends’ and customers’ inboxes. This is vital when you test bulk email sending or work with an email database.
    2. You won’t flood your own inbox with testing emails.
    3. Your domain won’t be blacklisted for spam.

    Local SMTP server

    If you prefer working in the local environment, the local SMTP debugging server might be an option. For this purpose, Python offers an smtpd module. It has a DebuggingServer feature, which will discard the messages you are sending out and print them to stdout. It is compatible with all operating systems.

    Set your SMTP server to localhost:1025

    python -m smtpd -n -c DebuggingServer localhost:1025

    In order to run the SMTP server on port 25, you’ll need root permissions:

    sudo python -m smtpd -n -c DebuggingServer localhost:25

    It will help you verify whether your code is working and point out the possible problems if there are any. However, it won’t give you the opportunity to check how your HTML email template is rendered.
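    For example, a minimal script pointed at the debugging server could look like this. The addresses are placeholders, and the server started with the command above must already be running before send_local() is called:

```python
import smtplib

# A message for the local DebuggingServer; no authentication is needed,
# and the server simply prints whatever it receives to stdout.
sender = "from@example.com"
receiver = "to@example.com"
message = f"""\
Subject: Local test
To: {receiver}
From: {sender}

Hello from the local debugging server."""

def send_local(host="localhost", port=1025):
    # Connects to the server started with:
    #   python -m smtpd -n -c DebuggingServer localhost:1025
    with smtplib.SMTP(host, port) as server:
        server.sendmail(sender, receiver, message)
```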

    Fake SMTP server

    A fake SMTP server imitates the work of a real third-party SMTP server. In further examples in this post, we will use Mailtrap. Beyond testing email sending, it lets us check how the email will be rendered and displayed, review the raw message data, and get a spam report. Mailtrap is very easy to set up: you just need to copy the credentials generated by the app and paste them into your code.


    Here is how it looks in practice:

    import smtplib
    
    port = 2525
    smtp_server = "smtp.mailtrap.io"
    login = "1a2b3c4d5e6f7g" # your login generated by Mailtrap
    password = "1a2b3c4d5e6f7g" # your password generated by Mailtrap

    Mailtrap makes things even easier. Go to the Integrations section in the SMTP settings tab and get a ready-to-use template of a simple message, with your Mailtrap credentials in it. The most basic way of instructing your Python script on who sends what to whom is the sendmail() instance method:

    [Screenshot: the ready-to-use template in Mailtrap’s Integrations section]

    The code looks pretty straightforward, right? Let’s take a closer look at it and add some error handling (see the comments in between). To catch errors, we use the try and except blocks.

    # The first step is always the same: import all necessary components:
    import smtplib
    from socket import gaierror
    
    # Now you can play with your code. Let’s define the SMTP server separately here:
    port = 2525
    smtp_server = "smtp.mailtrap.io"
    login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
    password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap
    
    # Specify the sender’s and receiver’s email addresses:
    sender = "from@example.com"
    receiver = "mailtrap@example.com"
    
    # Type your message: use a blank line (\n\n) to separate the headers from the message body, and use an f-string to insert variables in the text
    message = f"""\
    Subject: Hi Mailtrap
    To: {receiver}
    From: {sender}

    This is my first message with Python."""
    
    try:
      # Send your message with credentials specified above
      with smtplib.SMTP(smtp_server, port) as server:
        server.login(login, password)
        server.sendmail(sender, receiver, message)
    except (gaierror, ConnectionRefusedError):
      # tell the script to report if your message was sent or which errors need to be fixed
      print('Failed to connect to the server. Bad connection settings?')
    except smtplib.SMTPServerDisconnected:
      print('Failed to connect to the server. Wrong user/password?')
    except smtplib.SMTPException as e:
      print('SMTP error occurred: ' + str(e))
    else:
      print('Sent')

    Once you get the Sent result in Shell, you should see your message in your Mailtrap inbox:

    [Screenshot: the message in the Mailtrap inbox]

    Sending emails with HTML content

    In most cases, you need to add some formatting, links, or images to your email notifications. We can simply include all of these in the HTML content. For this purpose, Python has an email package.

    We will deal with the MIME message type, which is able to combine HTML and plain text. In Python, it is handled by the email.mime module.

    It is better to write a text version and an HTML version separately, and then merge them with the MIMEMultipart("alternative") instance. Such a message then has two rendering options. If the HTML isn’t rendered successfully for some reason, the text version will still be available.

    import smtplib
    from email.mime.text import MIMEText
    from email.mime.multipart import MIMEMultipart
    
    port = 2525
    smtp_server = "smtp.mailtrap.io"
    login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
    password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap
    
    sender_email = "mailtrap@example.com"
    receiver_email = "new@example.com"
    
    message = MIMEMultipart("alternative")
    message["Subject"] = "multipart test"
    message["From"] = sender_email
    message["To"] = receiver_email
    # Write the plain text part
    text = """\
    Hi,
    Check out the new post on the Mailtrap blog:
    SMTP Server for Testing: Cloud-based or Local?
    https://blog.mailtrap.io/2018/09/27/cloud-or-local-smtp-server/
    Feel free to let us know what content would be useful for you!"""
    
    # write the HTML part
    html = """\
    <html>
      <body>
        <p>Hi,<br>
           Check out the new post on the Mailtrap blog:</p>
        <p><a href="https://blog.mailtrap.io/2018/09/27/cloud-or-local-smtp-server">SMTP Server for Testing: Cloud-based or Local?</a></p>
        <p>Feel free to <strong>let us</strong> know what content would be useful for you!</p>
      </body>
    </html>
    """
    
    # convert both parts to MIMEText objects and add them to the MIMEMultipart message
    part1 = MIMEText(text, "plain")
    part2 = MIMEText(html, "html")
    message.attach(part1)
    message.attach(part2)
    
    # send your email
    with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
      server.login(login, password)
      server.sendmail( sender_email, receiver_email, message.as_string() )
    
    print('Sent')
    The resulting output

    Sending Emails with Attachments in Python

    The next step in mastering sending emails with Python is attaching files. Attachments are still MIME objects, but we need to encode them with the base64 module. A couple of important points about attachments:

    1. Python lets you attach text files, images, audio files, and even applications. You just need to use the appropriate email class like email.mime.audio.MIMEAudio or email.mime.image.MIMEImage. For the full information, refer to this section of the Python documentation.
    2. Remember about the file size: sending files over 20MB is a bad practice.

    In transactional emails, PDF files are the most frequently used: we usually get receipts, tickets, boarding passes, order confirmations, etc. So let’s review how to send a boarding pass as a PDF file.

    import smtplib
    from email import encoders
    from email.mime.base import MIMEBase
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
    
    port = 2525
    smtp_server = "smtp.mailtrap.io"
    login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
    password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap
    
    subject = "An example of boarding pass"
    sender_email = "mailtrap@example.com"
    receiver_email = "new@example.com"
    
    message = MIMEMultipart()
    message["From"] = sender_email
    message["To"] = receiver_email
    message["Subject"] = subject
    
    # Add body to email
    body = "This is an example of how you can send a boarding pass in attachment with Python"
    message.attach(MIMEText(body, "plain"))
    
    filename = "yourBP.pdf"
    # Open PDF file in binary mode
    # We assume that the file is in the directory where you run your Python script from
    with open(filename, "rb") as attachment:
      # The content type "application/octet-stream" means that a MIME attachment is a binary file
      part = MIMEBase("application", "octet-stream")
      part.set_payload(attachment.read())

    # Encode to base64
    encoders.encode_base64(part)
    # Add header
    part.add_header("Content-Disposition", f"attachment; filename={filename}")
    # Add attachment to your message and convert it to string
    message.attach(part)
    
    text = message.as_string()
    # send your email
    with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
      server.login(login, password)
      server.sendmail(sender_email, receiver_email, text)
    
    print('Sent')
    The received email with your PDF

    To attach several files, you can call the message.attach() method several times.
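    For example, a loop over several files does the trick. The file names and contents below are hypothetical in-memory stand-ins for real open(name, "rb") reads:

```python
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

message = MIMEMultipart()
message["Subject"] = "Two attachments"
message["From"] = "mailtrap@example.com"
message["To"] = "new@example.com"
message.attach(MIMEText("Both files are attached.", "plain"))

# Hypothetical in-memory files; in practice you would read each one
# from disk with open(name, "rb") as in the example above
files = {"report.pdf": b"%PDF-1.4 fake content", "data.csv": b"name,email\n"}
for name, payload in files.items():
    part = MIMEBase("application", "octet-stream")
    part.set_payload(payload)
    encoders.encode_base64(part)
    part.add_header("Content-Disposition", f"attachment; filename={name}")
    message.attach(part)
```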

    How to send an email with image attachment

    Images, even if they are a part of the message body, are attachments as well. There are three types of them: CID attachments (embedded as a MIME object), base64 images (inline embedding), and linked images.

    For adding a CID attachment, we will create a MIME multipart message with MIMEImage component:

    import smtplib
    from email.mime.text import MIMEText
    from email.mime.image import MIMEImage
    from email.mime.multipart import MIMEMultipart
    
    port = 2525
    smtp_server = "smtp.mailtrap.io"
    login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
    password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap
    
    sender_email = "mailtrap@example.com"
    receiver_email = "new@example.com"
    
    message = MIMEMultipart("alternative")
    message["Subject"] = "CID image test"
    message["From"] = sender_email
    message["To"] = receiver_email
    
    # write the HTML part
    html = """\
    <html>
    <body>
    <img src="cid:myimage">
    </body>
    </html>
    """
    part = MIMEText(html, "html")
    message.attach(part)
    
    # We assume that the image file is in the same directory that you run your Python script from
    with open('mailtrap.jpg', 'rb') as img:
      image = MIMEImage(img.read())
    # Specify the ID according to the img src in the HTML part
    image.add_header('Content-ID', '<myimage>')
    message.attach(image)
    
    # send your email
    with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
      server.login(login, password)
      server.sendmail(sender_email, receiver_email, message.as_string())
    
    print('Sent')
    The received email with CID image

    The CID image is shown both as a part of the HTML message and as an attachment. Messages with this image type are often considered spam: check the Analytics tab in Mailtrap to see the spam rate and recommendations on its improvement. Many email clients — Gmail in particular — don’t display CID images in most cases. So let’s review how to embed a base64 encoded image instead.

    Here we will use the base64 module and experiment with the same image file:

    import smtplib
    from email.mime.text import MIMEText
    from email.mime.multipart import MIMEMultipart
    import base64
    
    port = 2525
    smtp_server = "smtp.mailtrap.io"
    login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
    password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap
    sender_email = "mailtrap@example.com"
    receiver_email = "new@example.com"
    
    message = MIMEMultipart("alternative")
    message["Subject"] = "inline embedding"
    message["From"] = sender_email
    message["To"] = receiver_email
    
    # We assume that the image file is in the same directory that you run your Python script from
    with open("image.jpg", "rb") as image:
      encoded = base64.b64encode(image.read()).decode()
    
    html = f"""\
    <html>
    <body>
    <img src="data:image/jpg;base64,{encoded}">
    </body>
    </html>
    """
    part = MIMEText(html, "html")
    message.attach(part)
    
    # send your email
    with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
      server.login(login, password)
      server.sendmail(sender_email, receiver_email, message.as_string())
    
    print('Sent')
    A base64 encoded image

    Now the image is embedded into the HTML message and is not available as an attached file. Python has encoded our JPEG image, and if we go to the HTML Source tab, we will see the long image data string in the img src attribute.
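    The third type from the list above, linked images, needs no encoding or attachment at all: the HTML simply references a URL (a placeholder below), and the recipient’s client downloads the image when the message is opened. This keeps the message small, but keep in mind that many clients block remote images by default:

```python
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

message = MIMEMultipart("alternative")
message["Subject"] = "linked image"
message["From"] = "mailtrap@example.com"
message["To"] = "new@example.com"

# The image lives at a URL (a placeholder here); nothing is attached,
# so the message stays tiny regardless of the image size.
html = """\
<html>
<body>
<img src="https://example.com/logo.png" alt="Logo">
</body>
</html>
"""
message.attach(MIMEText(html, "html"))
```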

    How to Send Multiple Emails

    Sending personalized emails to multiple recipients is where Python really shines.

    To add several more recipients, you can simply list their addresses separated by commas, or add Cc and Bcc headers. But if you are doing bulk email sending, Python’s loops will save you.
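    A sketch of the comma-separated approach (all addresses are placeholders): the To and Cc headers only control what recipients see, while the recipient list passed to sendmail() controls actual delivery, which is why Bcc addresses are deliberately left out of the headers:

```python
from email.mime.text import MIMEText

to = ["john@johnson.com", "peter@peterson.com"]
cc = ["manager@example.com"]
bcc = ["archive@example.com"]  # Bcc recipients get the mail but stay invisible

message = MIMEText("Thanks for your order!", "plain")
message["Subject"] = "Order confirmation"
message["From"] = "new@example.com"
message["To"] = ", ".join(to)
message["Cc"] = ", ".join(cc)
# Deliberately no "Bcc" header: delivery is decided by the envelope below

# The envelope recipient list passed to sendmail() decides who receives the mail:
envelope = to + cc + bcc
# server.sendmail("new@example.com", envelope, message.as_string())
```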

    One of the options is to create a database in a CSV format (we assume it is saved to the same folder as your Python script).

    We often see our names in transactional or even promotional examples. Here is how we can make it with Python.

    Let’s organize the list in a simple table with just two columns: name and email address. It should look like the following example:

    #name,email
    John Johnson,john@johnson.com
    Peter Peterson,peter@peterson.com

    The code below will open the file and loop over its rows line by line, replacing the {name} with the value from the “name” column.

    import csv
    import smtplib
    
    port = 2525
    smtp_server = "smtp.mailtrap.io"
    login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
    password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap
    
    message = """Subject: Order confirmation
    To: {recipient}
    From: {sender}

    Hi {name}, thanks for your order! We are processing it now and will contact you soon"""
    sender = "new@example.com"
    with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
      server.login(login, password)
      with open("contacts.csv") as file:
        reader = csv.reader(file)
        next(reader)  # skip the header row
        for name, email in reader:
          server.sendmail(
            sender,
            email,
            message.format(name=name, recipient=email, sender=sender),
          )
          print(f'Sent to {name}')

    In our Mailtrap inbox, we see two messages: one for John Johnson and another for Peter Peterson, delivered simultaneously:

    [Screenshot: the two personalized messages in the Mailtrap inbox]


    Sending emails with Python via Gmail

    When you are ready to send emails to real recipients, you can configure a production server. The choice depends on your needs, goals, and preferences: your localhost or any external SMTP server.

    One of the most popular options is Gmail so let’s take a closer look at it.

    We can often see titles like “How to set up a Gmail account for development”. In fact, it means that you will create a new Gmail account and will use it for a particular purpose.

    To be able to send emails via your Gmail account, you need to provide your application access to it. You can allow less secure apps or take advantage of the OAuth2 authorization protocol. The latter is more difficult, but it is recommended for security reasons.

    Further, to use a Gmail server, you need to know:

    • the server name = smtp.gmail.com
    • port = 465 for SSL/TLS connection (preferred)
    • or port = 587 for STARTTLS connection
    • username = your Gmail email address
    • password = your password

    import smtplib
    import ssl
    
    port = 465
    password = input("your password")
    context = ssl.create_default_context()
    
    with smtplib.SMTP_SSL("smtp.gmail.com", port, context=context) as server:
      server.login("my@gmail.com", password)
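    For the port 587 STARTTLS option listed above, you connect in plain text first and then upgrade the session to TLS. A minimal sketch, wrapped in a function (the function name is ours) so nothing connects at import time:

```python
import smtplib
import ssl

def send_via_starttls(sender, password, recipients, message,
                      smtp_server="smtp.gmail.com", port=587):
    """Open a plain connection on port 587, then upgrade it with STARTTLS."""
    context = ssl.create_default_context()
    with smtplib.SMTP(smtp_server, port) as server:
        server.starttls(context=context)  # upgrade the connection to TLS
        server.login(sender, password)
        server.sendmail(sender, recipients, message)
```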

    If you prefer simplicity, you can use Yagmail, a dedicated Gmail/SMTP client library. It makes email sending really easy. Just compare the above examples with these several lines of code:

    import yagmail
    
    yag = yagmail.SMTP()
    contents = [
      "This is the body, and here is just text http://somedomain/image.png",
      "You can find an audio file attached.",
      '/local/path/to/song.mp3'
    ]
    yag.send('to@someone.com', 'subject', contents)

    Next steps with Python

    Those are just the basic options for sending emails with Python. To get great results, review the Python documentation and experiment with your own code!

    There are a bunch of Python frameworks and libraries that make creating apps more elegant. In particular, some of them can help improve your experience with building email sending functionality. The most popular are:

    1. Flask, which offers a simple interface for email sending: Flask Mail.
    2. Django, which can be a great option for building HTML templates.
    3. Zope comes in handy for website development.
    4. Marrow Mailer is a dedicated mail delivery framework adding various helpful configurations.
    5. Plotly and its Dash can help with mailing graphs and reports.

    Also, here is a handy list of Python resources sorted by their functionality.

    Good luck and don’t forget to stay on the safe side when sending your emails!

    This article was originally published at Mailtrap’s blog: Sending emails with Python

    Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2019

    A Debian LTS logo
    Like each month, here comes a report about
    the work of paid contributors
    to Debian LTS.

    Individual reports

    In September, 212.75 work hours have been dispatched among 12 paid contributors. Their reports are available:

    • Adrian Bunk did nothing (and got no hours assigned), but has been carrying 26h from August to October.
    • Ben Hutchings did 20h (out of 20h assigned).
    • Brian May did 10h (out of 10h assigned).
    • Chris Lamb did 18h (out of 18h assigned).
    • Emilio Pozuelo Monfort did 30h (out of 23.75h assigned and 5.25h from August), thus anticipating 1h from October.
    • Hugo Lefeuvre did nothing (out of 23.75h assigned), thus carrying over 23.75h to October.
    • Jonas Meurer did 5h (out of 10h assigned and 9.5h from August), thus carrying over 14.5h to October.
    • Markus Koschany did 23.75h (out of 23.75h assigned).
    • Mike Gabriel did 11h (out of 12h assigned + 0.75h remaining), thus carrying over 1.75h to October.
    • Ola Lundqvist did 2h (out of 8h assigned and 8h from August), thus carrying over 14h to October.
    • Roberto C. Sánchez did 16h (out of 16h assigned).
    • Sylvain Beucler did 23.75h (out of 23.75h assigned).
    • Thorsten Alteholz did 23.75h (out of 23.75h assigned).

    Evolution of the situation

    September was more like a regular month again, though two contributors were not able to dedicate any time to LTS work.

    For October we are welcoming Utkarsh Gupta as a new paid contributor. Welcome to the team, Utkarsh!

    This month, we’re glad to announce that Cloudways is joining us as a new silver level sponsor! With the reduced involvement of another long term sponsor, we are still at the same funding level (roughly 216 hours sponsored per month).

    The security tracker currently lists 32 packages with a known CVE and the dla-needed.txt file has 37 packages needing an update.

    Thanks to our sponsors

    New sponsors are in bold.


    Worse Than FailureCodeSOD: Cast Away

    The accountants at Gary's company had a problem: sometimes, when they wanted to check the price to ship a carton of product, that price was zero. No one had, as of yet, actually shipped product for free, but they needed to understand why certain cartons were showing up as having zero cost.

    The table which tracks this, CartonFee, has three fields: ID, Carton, and Cost. Carton names are unique, and look like 12x3x6, or Box1, or even Large box. So, given a carton name, it should be pretty easy to update the cost, yes? The stored procedure which does this, spQuickBooks_UpdateCartonCost, should be pretty simple.

    ALTER PROCEDURE [dbo].[spQuickBooks_UpdateCartonCost]
        @Carton varchar(100),
        @Fee decimal(6,2)
    AS
    BEGIN
        DECLARE @Cost decimal(8,3) =
            LEFT(CAST(CAST((CAST(@Fee AS NUMERIC(36,3))/140) * 100 AS NUMERIC(36,3)) AS VARCHAR),
                 LEN(CAST(CAST((CAST(@Fee AS NUMERIC(36,3))/140) * 100 AS NUMERIC(36,3)) AS VARCHAR)) - 1)
            + CASE WHEN RIGHT(LEFT(CAST(CAST((CAST(@Fee AS NUMERIC(36,3))/140) * 100 AS NUMERIC(36,4)) AS VARCHAR),
                              LEN(CAST(CAST((CAST(@Fee AS NUMERIC(36,3))/140) * 100 AS NUMERIC(36,4)) AS VARCHAR)) - 1), 1) > 5
                   THEN '5' ELSE '0' END

        IF NOT EXISTS (SELECT 1 FROM CartonFee WHERE Carton = @Carton)
        BEGIN
            INSERT INTO CartonFee VALUES (@Carton, @Cost)
        END
        ELSE
        BEGIN
            UPDATE CartonFee SET Cost = @Cost WHERE Carton = @Carton
        END
    END

    Just stare at that chain of casts for a moment. It teeters on the verge of making sense: calls to LEFT and RIGHT, multiplying by 100; we're just doing string munging to round off, that must be what's going on. If I count the parentheses and really sit down and sketch this out, I can figure out what's going on. It must make sense, right?
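For the record, the cast chain can be untangled: render @Fee/140*100 with three decimals, drop the final character, then append a '5' or a '0' depending on whether the third decimal of a four-decimal rendering is greater than 5. A rough shell re-creation (my own sketch, not code from the article; it assumes SQL Server's NUMERIC casts round half away from zero, which printf's %f also does for these inputs):

```shell
# Hypothetical re-creation of the stored procedure's string munging
wtf_cost() {
  awk -v fee="$1" 'BEGIN {
    x  = (fee / 140) * 100
    s3 = sprintf("%.3f", x)                  # CAST(... AS NUMERIC(36,3)) AS VARCHAR
    s4 = sprintf("%.4f", x)                  # CAST(... AS NUMERIC(36,4)) AS VARCHAR
    head   = substr(s3, 1, length(s3) - 1)   # LEFT(s3, LEN(s3) - 1)
    third  = substr(s4, length(s4) - 1, 1)   # RIGHT(LEFT(s4, LEN(s4) - 1), 1)
    suffix = (third + 0 > 5) ? "5" : "0"
    print head suffix
  }'
}

wtf_cost 7.00   # 5.000
wtf_cost 1.09   # 0.775 (0.77857... is snapped DOWN, not rounded)
```

In other words: truncate to two decimals and snap the third to 0 or 5, which is why 0.77857 comes out as 0.775 rather than 0.780.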

    And then you spot the /140. Divide by 140. Why? Why that very specific number? Is it a secret code? Is it a signal to the Illuminated Seers of Bavaria such that they know the stars are right and they may leave Aghartha to sit upon the Throne of the World? After all, 1 + 4 + 0 is five, and as we know, the law of fives is never wrong.

    As it turns out, this stored procedure wasn't the problem. While it looks like it's responsible for updating the cost field, it's never actually called anywhere. It was, at one point, but it caused so much confusion that the users just started updating the table by hand. Somebody thought they'd get clever and use an UPDATE statement and messed up.

    [Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

    Planet DebianNorbert Preining: State of Calibre in Debian

    To counter some recent FUD spread about Calibre in general, and Calibre in Debian in particular, here is a concise explanation of the current state.

    Many might have read my previous post on Calibre as a moratorium, but that was not my intention. Development of Calibre in Debian is continuing, despite the current stall.

    Since it seems to be unclear what the current blockers are, there are two orthogonal problems regarding recent Calibre in Debian: One is the update to version 4 and the switch to qtwebengine, one is the purge of Python 2 from Debian.

    Current state

    Debian sid and testing currently hold Calibre 3.48, based on Python 2. Due to the ongoing purge, necessary modules (in particular python-cherrypy3) have been removed from Debian/sid, making the current Calibre package RC-buggy (see this bug report). That means that, within a reasonable time frame, Calibre will be removed from testing.


    Now for the two orthogonal problems we are facing:

    Calibre 4 packaging

    Calibre 4 is already packaged for Debian (see the master-4.0 branch in the git repository). Uploading was first blocked by a disappearing python-pyqt5.qwebengine, which was extracted from the PyQt5 package into its own. Thanks to the maintainers we now have a Python 2 version built from the qtwebengine-opensource-src package.

    But that still doesn’t cut it for Calibre 4, because it requires Qt 5.12, while Debian still carries 5.11 (released 1.5 years ago).

    So the above mentioned branch is ready for upload as soon as Qt 5.12 is included in Debian.

    Python 3

    The other big problem is the purge of Python 2 from Debian. Upstream Calibre has supported building Python 3 versions for some months now, with ongoing bug fixes. But including this in Debian poses some problems: the first stumbling block was a missing Python 3 version of mechanize, which I have adopted after a 7-year hiatus, updated to the newest version, and provided Python 3 modules for.

    Packaging for Debian is done in the experimental branch of the git repository, and is again ready to be uploaded to unstable.

    But the much bigger concern here is that practically none of Calibre’s external plugins are ready for Python 3. Paired with the fact that probably most users of Calibre use one external plugin or another (the Kepub plugin, DeDRM, …, to name just two), uploading a Python 3 based version of Calibre would break usage for practically all users.


    Since I put our (Debian’s) users first, I have decided to keep Calibre based on Python 2 for as long as Debian allows. Unfortunately, the overzealous purge spree has already introduced RC bugs, which means I am now forced to decide whether I upload a version of Calibre that breaks most users, or don’t upload and see Calibre removed from testing. Not an easy decision.

    Thus, my original plan was to keep Calibre based on Python 2 as long as possible, and to hope that upstream switches to Python 3 in time for the next Debian release. This would trigger a continuous update of most plugins and would allow Debian users a seamless transition without complete breakage. Unfortunately, this plan no longer seems executable.

    Now let us return to the FUD being spread:

    • Calibre is actively developed upstream
    • Calibre in Debian is actively maintained
    • Calibre is Python 3 ready, but the plugins are not
    • Calibre 4 is ready for Debian as soon as the dependencies are updated
    • Calibre/Python3 is ready for upload to Debian, but breaks practically all users

    Hope that helps everyone to gain some understanding about the current state of Calibre in Debian.

    Planet DebianSergio Durigan Junior: Installing Gerrit and Keycloak for GDB

    Back in September, we had the GNU Tools Cauldron in the gorgeous city of Montréal (perhaps I should write a post specifically about it...). One of the sessions we had was the GDB BoF, where we discussed, among other things, how to improve our patch review system.

    I have my own personal opinions about the current review system we use (mailing list-based, in a nutshell), and I didn't feel very confident expressing them during the discussion. Anyway, the outcome was that at least 3 global maintainers have used or are currently using the Gerrit Code Review system for other projects, are happy with it, and that we should give it a try. Then, when it was time to decide who wanted to configure and set things up for the community, I volunteered. Hey, I'm already running the Buildbot master for GDB; what's one more service to manage? Oh, well.

    Before we dive into the details involved in configuring and running gerrit on a machine, let me first say that I don't totally support the idea of migrating from the mailing list to gerrit. I volunteered to set things up because I felt the community (or at least its most active members) wanted to try it out. I don't necessarily agree with the choice.

    Ah, and I'm writing this post mostly because I want to be able to close the 300+ tabs I had to open on my Firefox during these last weeks, when I was searching how to solve the myriad of problems I faced during the set up!

    The initial plan

    My very initial plan after I left the session room was to talk to the sourceware.org folks and ask them if it would be possible to host our gerrit there. Surprisingly, they already have a gerrit instance up and running. It was set up back in 2016, it's running an old version of gerrit, and is pretty much abandoned. Actually, saying that it has been configured is an overstatement: it doesn't support authentication, user registration, barely supports projects, etc. It's basically what you get from a pristine installation of the gerrit RPM package in RHEL 6.

    I won't go into details here, but after some discussion it was clear to me that the instance on sourceware would not be able to meet our needs (or at least what I had in mind for us), and that it would be really hard to bring it to the quality level I wanted. I decided to go look for other options.

    The OSCI folks

    Have I mentioned the OSCI project before? They are absolutely awesome. I really love working with them, because so far they've been able to meet every request I made! So, kudos to them! They're the folks that host our GDB Buildbot master. Their infrastructure is quite reliable (I never had a single problem), and Marc Dequénes (Duck) is very helpful, friendly and quick when replying to my questions :-).

    So, it shouldn't come as a surprise that when I decided to look for another place to host gerrit, they were my first choice. And again, they delivered :-).

    Now, it was time to start thinking about the gerrit set up.

    User registration?

    Over the course of these past 4 weeks, I had the opportunity to learn a bit more about how gerrit does things. One of the first things that negatively impressed me was the fact that gerrit doesn't handle user registration by itself. It is possible to have a very rudimentary user registration "system", but it relies on the site administrator manually registering the users (via htpasswd) and managing everything by him/herself.

    It was quite obvious to me that we would need some kind of access control (we're talking about a GNU project, with a copyright assignment requirement in place, after all), and the best way to implement it is by having registered users. And so my quest for the best user registration system began...

    Gerrit supports some user authentication schemes, such as OpenID (not OpenID Connect!), OAuth2 (via plugin) and LDAP. I remembered hearing about FreeIPA a long time ago, and thought it made sense using it. Unfortunately, the project's community told me that installing FreeIPA on a Debian system is really hard, and since our VM is running Debian, it quickly became obvious that I should look somewhere else. I felt a bit sad at the beginning, because I thought FreeIPA would really be our silver bullet here, but then I noticed that it doesn't really offer a self-service user registration.

    After exchanging a few emails with Marc, he told me about Keycloak. It's a full-fledged Identity Management and Access Management software, supports OAuth2, LDAP, and provides a self-service user registration system, which is exactly what we needed! However, upon reading the description of the project, I noticed that it is written in Java (JBoss, to be more specific), and I was afraid that it was going to be very demanding on our system (after all, gerrit is also a Java program). So I decided to put it on hold and take a look at using LDAP...

    Oh, man. Where do I start? Actually, I think it's enough to say that I just tried installing OpenLDAP, but gave up because it was too cumbersome to configure. Have you ever heard that LDAP is really complicated? I'm afraid this is true. I just didn't feel like wasting a lot of time trying to understand how it works, only to have to solve the "user registration" problem later (because of course, OpenLDAP is just an LDAP server).

    OK, so what now? Back to Keycloak it is. I decided that instead of thinking that it was too big, I should actually install it and check it for real. Best decision, by the way!

    Setting up Keycloak

    It's pretty easy to set Keycloak up. The official website provides a .tar.gz file which contains the whole directory tree for the project, along with helper scripts, .jar files, configuration, etc. From there, you just need to follow the documentation, edit the configuration, and voilà.

    For our specific setup I chose to use PostgreSQL instead of the built-in database. This is a bit more complicated to configure, because you need to download the JDBC driver, and install it in a strange way (at least for me, who is used to just editing a configuration file). I won't go into details on how to do this here, because it's easy to find on the internet. Bear in mind, though, that the official documentation is really incomplete when covering this topic! This is one of the guides I used, along with this other one (which covers MariaDB, but can be adapted to PostgreSQL as well).

    Another interesting thing to notice is that Keycloak expects to be running on its own virtual domain, and not under a subdirectory (e.g., https://example.org instead of https://example.org/keycloak). For that reason, I chose to run our instance on another port. It is supposedly possible to configure Keycloak to run under a subdirectory, but it involves editing a lot of files, and I confess I couldn't make it fully work.

    A last thing worth mentioning: the official documentation says that Keycloak needs Java 8 to run, but I've been using OpenJDK 11 without problems so far.

    Setting up Gerrit

    The fun begins now!

    The gerrit project also offers a .war file ready to be deployed. After you download it, you can execute it and initialize a gerrit project (or application, as it's called). Gerrit will create a directory full of interesting stuff; the most important for us is the etc/ subdirectory, which contains all of the configuration files for the application.

    After initializing everything, you can try starting gerrit to see if it works. This is where I had my first trouble. Gerrit also requires Java 8, but unlike Keycloak, it doesn't work out of the box with OpenJDK 11. I had to make a small but important addition in the file etc/gerrit.config:

    [container]
        ...
        javaOptions = "--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED"
        ...
    

    After that, I was able to start gerrit. And then I started trying to set it up for OAuth2 authentication using Keycloak. This took a very long time, unfortunately. I was having several problems with Gerrit, and I wasn't sure how to solve them. I tried asking for help on the official mailing list, and was able to make some progress, but in the end I figured out what was missing: I had forgotten to add AllowEncodedSlashes On in the Apache configuration file! This was causing a very strange error on Gerrit (a java.lang.StringIndexOutOfBoundsException!), which didn't make sense. In the end, my Apache config file looks like this:

    <VirtualHost *:80>
        ServerName gnutoolchain-gerrit.osci.io
    
        RedirectPermanent / https://gnutoolchain-gerrit.osci.io/r/
    </VirtualHost>
    
    <VirtualHost *:443>
        ServerName gnutoolchain-gerrit.osci.io
    
        RedirectPermanent / /r/
    
        SSLEngine On
        SSLCertificateFile /path/to/cert.pem
        SSLCertificateKeyFile /path/to/privkey.pem
        SSLCertificateChainFile /path/to/chain.pem
    
        # Good practices for SSL
        # taken from: <https://mozilla.github.io/server-side-tls/ssl-config-generator/>
    
        # intermediate configuration, tweak to your needs
        SSLProtocol             all -SSLv3
        SSLCipherSuite          ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
        SSLHonorCipherOrder     on
        SSLCompression          off
        SSLSessionTickets       off
    
        # OCSP Stapling, only in httpd 2.3.3 and later
        #SSLUseStapling          on
        #SSLStaplingResponderTimeout 5
        #SSLStaplingReturnResponderErrors off
        #SSLStaplingCache        shmcb:/var/run/ocsp(128000)
    
        # HSTS (mod_headers is required) (15768000 seconds = 6 months)
        Header always set Strict-Transport-Security "max-age=15768000"
    
        ProxyRequests Off
        ProxyVia Off
        ProxyPreserveHost On
        <Proxy *>
            Require all granted
        </Proxy>
    
        AllowEncodedSlashes On
            ProxyPass /r/ http://127.0.0.1:8081/ nocanon
            #ProxyPassReverse /r/ http://127.0.0.1:8081/r/
    </VirtualHost>
    

    I confess I was almost giving up on Keycloak when I finally found the problem...

    Anyway, after that things went more smoothly. I was finally able to make the user authentication work, then I made sure Keycloak's user registration feature also worked OK...

    Ah, one interesting thing: the user logout wasn't really working as expected. The user was able to logout from gerrit, but not from Keycloak, so when the user clicked on "Sign in", Keycloak would tell gerrit that the user was already logged in, and gerrit would automatically log the user in again! I was able to solve this by redirecting the user to Keycloak's logout page, like this:

    [auth]
        ...
        logoutUrl = https://keycloak-url:port/auth/realms/REALM/protocol/openid-connect/logout?redirect_uri=https://gerrit-url/
        ...
    

    After that, it was already possible to start worrying about configuring gerrit itself. I don't know if I'll write a post about that, but let me know if you want me to.

    Conclusion

    If you ask me if I'm totally comfortable with the way things are set up now, I can't say that I am 100%. I mean, the setup seems robust enough that it won't cause problems in the long run, but what bothers me is the fact that I'm using technologies that are alien to me. I'm used to setting up things written in Python, C, C++, with very simple yet powerful configuration mechanisms, and an easy way to discover what's wrong when something bad happens.

    I am reasonably satisfied with the way Keycloak logs things, but Gerrit leaves a lot to be desired in that area. And both projects are written in languages/frameworks that I am absolutely not comfortable with. Like, it's really tough to debug something when you don't even know where the code is or how to modify it!

    All in all, I'm happy that this whole adventure has come to an end, and now all that's left is to maintain it. I hope that the GDB community can make good use of this new service, and I hope that we can see a positive impact in the quality of the whole patch review process.

    My final take is that this is all worth it as long as Free Software and User Freedom are the ones who benefit.

    P.S.: Before I forget, our gerrit instance is running at https://gnutoolchain-gerrit.osci.io.

    Planet DebianChris Lamb: Tour d'Orwell: The River Orwell

    Continuing my Orwell-themed peregrination, a certain Eric Blair took his pen name "George Orwell" because of his love for a certain river just south of Ipswich, Suffolk. With sheepdog trials being undertaken in the field underneath, even the concrete Orwell Bridge looked pretty majestic from the garden centre-cum-food hall.

    Planet DebianMartin Pitt: Hardening Cockpit with systemd (socket activation)³

    Background

    A major future goal for Cockpit is support for client-side TLS authentication, primarily with smart cards. I created a Proof of Concept and a demo long ago, but before this can be called production-ready, we first need to harden Cockpit’s web server cockpit-ws to be much more tamper-proof than it is today. This heavily uses systemd’s socket activation. I believe we are now using this in quite a unique and interesting way that helped us to achieve our goal rather elegantly and robustly.
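In general terms, socket activation means a .socket unit owns the listening port and systemd starts the matching service only when the first connection arrives. A minimal generic sketch of such a pair (unit names, port, and binary path are illustrative, not Cockpit's actual units):

```ini
# /etc/systemd/system/example-ws.socket -- systemd owns the listening socket
[Socket]
ListenStream=9090

[Install]
WantedBy=sockets.target

# /etc/systemd/system/example-ws.service -- started on the first connection;
# the already-bound socket is handed over as a file descriptor (sd_listen_fds)
[Service]
ExecStart=/usr/libexec/example-ws
```

`systemctl enable --now example-ws.socket` then arms the listener without starting the service itself.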

    ,

    Cory DoctorowFalse Flag

    In my latest podcast (MP3), I read False Flag, my Green European Journal short story about the terrible European Copyright Directive which passed last March. Published in December 2018, the story highlights the ways in which this badly considered law creates unlimited opportunities for abuse, especially censorship by corporations who’ve been embarrassed by whistleblowers and activists.

    The crew couldn’t even supply their videos to friendly journalists to rebut the claims from the big corporate papers. Just *linking* to a major newspaper required a paid license, and while the newspapers licensed to one another so they could reference articles in rival publications, the kinds of dissident, independent news outlets that had once provided commentary and analysis of what went into the news and what didn’t had all disappeared once the news corporations had refused to license the right to link to them.

    Agata spoke with a lawyer she knew, obliquely, in guarded hypotheticals, and the lawyer confirmed what she’d already intuited.

    “Your imaginary friend has no hope. They’d have to out themselves in order to file a counterclaim, tell everyone their true identity and reveal that they were behind the video. Even so, it would take six months to get the platforms to hear their case, and by then the whole story would have faded from the public eye. And if they *did* miraculously get people to pay attention again? Well, the fakers would just get the video taken offline again. It takes an instant for a bot to file a fake copyright claim. It takes months for humans to get the claim overturned. It’s asymmetrical warfare, and you’ll always be on the losing side.”

    MP3

    Planet DebianArturo Borrero González: What to expect in Debian 11 Bullseye for nftables/iptables

    Logo

    Debian 11, codename Bullseye, is already in the works. It is interesting to make decisions early in the development cycle to give people time to accommodate and integrate accordingly, and this post brings you the latest update on the plans for Netfilter software in Debian 11 Bullseye. Mind that Bullseye is expected to be released somewhere in 2021, so there is still plenty of time ahead.

    The situation at the release of Debian 10 Buster is that iptables uses the -nft backend by default, and one must explicitly select -legacy in the alternatives system in case of any problem. That was intended to help people migrate from iptables to nftables. Now the question is what to do next.
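Selecting a backend on such a system goes through update-alternatives; a quick sketch (run as root; the paths are the ones the Debian packages ship):

```shell
# Show which backend is currently active
update-alternatives --display iptables

# Explicitly fall back to the legacy x_tables-based backend ...
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

# ... or return to the default nf_tables-based backend
update-alternatives --set iptables /usr/sbin/iptables-nft
update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
```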

    Back in July 2019, I started an email thread in the debian-devel@lists.debian.org mailing lists looking for consensus on lowering the archive priority of the iptables package in Debian 11 Bullseye. My proposal is to drop iptables from Priority: important and promote nftables instead.

    In general, having such a priority level means the package is installed by default in every single Debian installation. Given that we aim to deprecate iptables, and that starting with Debian 10 Buster iptables is not even using the x_tables kernel subsystem but nf_tables, keeping such a priority level seems pointless and inconsistent. There was agreement, and I have already made the changes to both packages.

    This is another step in deprecating iptables and welcoming nftables. But it does not mean that iptables won’t be available in Debian 11 Bullseye. If you need it, you will need to use aptitude install iptables to download and install it from the package repository.

    The second part of my proposal was to promote firewalld as the default ‘wrapper’ for firewalling in Debian. I think this is in line with the direction other distros are moving. It turns out firewalld integrates pretty well with the system, includes a DBus interface, and many system daemons (like libvirt) already have native integration with firewalld. Also, I believe the days of creating custom-made scripts and hacks to handle the local firewall may be long gone, and firewalld should be very helpful here too.
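For readers who have not used it, day-to-day interaction with firewalld goes through firewall-cmd; a small illustrative session (standard firewalld commands; requires root and a running daemon):

```shell
firewall-cmd --state                        # is the daemon running?
firewall-cmd --get-default-zone             # typically "public"
firewall-cmd --permanent --add-service=ssh  # persist a rule ...
firewall-cmd --reload                       # ... and make it active
firewall-cmd --list-all                     # inspect the current zone
```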

    TEDHelping learners of English find their own voice

    Transportation Teahouse at Huangjueping Street in Chongqing, China. (Photo: National Geographic Learning – Life as Lived)

    Since its inception, TED has been zealous in its mission of spreading ideas that inspire. It was out of this passion that a partnership between National Geographic Learning and TED emerged to create materials for the English language learning classroom — and help English learners to find their own voice.

    National Geographic Learning’s goal is to bring the world to the classroom and the classroom to life. They create English programs that are inspiring, real and relevant. Students learn about their world by experiencing it through the stories, ideas, photography and video of both National Geographic and TED.

    The language learning classroom is meant to be a safe place where learners can make mistakes and build confidence before going out into the world. But it can also be a place where learners can struggle to see a connection between the real world and the language they’re learning.

    National Geographic Learning believes that if we want learners to understand the value of learning English — a language that connects them to the world — then we need to bring the real world into the classroom and show them the opportunity learning a language brings. The teaching and learning programs created by National Geographic Learning with TED Talks give learners of English (and their teachers) a way to talk about ideas that are relevant to them and help them develop a voice of their own in English.

    This partnership has resulted in five textbook programs for the English language learning classroom so far. National Geographic Learning and TED have also collaborated to create a unique classroom supplement, Learn English with TED Talks — a language learning app with a difference.

    For more information about all of the English language learning materials made with TED Talks please visit ELTNGL.com/TED.

    Happy learning!

    CryptogramI Have a New Book: We Have Root

    I just published my third collection of essays: We Have Root. This book covers essays from 2013 to 2017. (The first two are Schneier on Security and Carry On.)

    There is nothing in this book that is not available for free on my website; but if you'd like these essays in an easy-to-carry paperback book format, you can order a signed copy here. External vendor links, including for ebook versions, are here.

    CryptogramFactoring 2048-bit Numbers Using 20 Million Qubits

    This theoretical paper shows how to factor 2048-bit RSA moduli with a 20-million-qubit quantum computer in eight hours. It's interesting work, but I don't want to overstate the risk.

    We know from Shor's Algorithm that both factoring and discrete logs are easy to solve on a large, working quantum computer. Both of those are currently beyond our technological abilities. We barely have quantum computers with 50 to 100 qubits. Extending this requires advances not only in the number of qubits we can work with, but in making the system stable enough to read any answers. You'll hear this called "error rate" or "coherence" -- this paper talks about "noise."

    Advances are hard. At this point, we don't know if they're "send a man to the moon" hard or "faster-than-light travel" hard. If I were guessing, I would say they're the former, but still harder than we can accomplish with our current understanding of physics and technology.

    I write about all this generally, and in detail, here. (Short summary: Our work on quantum-resistant algorithms is outpacing our work on quantum computers, so we'll be fine in the short run. But future theoretical work on quantum computing could easily change what "quantum resistant" means, so it's possible that public-key cryptography will simply not be possible in the long run. That's not terrible, though; we have a lot of good scalable secret-key systems that do much the same things.)

    Planet DebianRitesh Raj Sarraf: Bpfcc New Release

    BPF Compiler Collection 0.11.0

    bpfcc version 0.11.0 has been uploaded to Debian Unstable and should be accessible in the repositories by now. After the 0.8.0 release, this is the next one to be uploaded to Debian.

    Multiple source repositories

    This release brought in dependencies on another set of sources from the libbpf project. In the upstream repo, it is still a topic of discussion how to release tools that depend on one another in unison. Right now, libbpf is configured as a git submodule in the bcc repository, so anyone using the upstream git repository should be able to build it.
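In practice this means the libbpf sources have to be fetched together with bcc; with the submodule layout described above, something along these lines (upstream GitHub URL assumed):

```shell
git clone https://github.com/iovisor/bcc.git
cd bcc
git submodule update --init   # pulls the libbpf sources into src/cc/libbpf
```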

    Multiple source archive for a Debian package

    So I had read in the past about multiple source tarballs for a single package in Debian, but never tried it because I wasn’t maintaining anything in Debian that needed it. With bpfcc there was now a good opportunity to try it out. First, I came across this post from Raphaël Hertzog, which gives a good explanation of what all has been done; the article was very clear and concise on the topic.

    Git Buildpackage

    gbp is my tool of choice for packaging in Debian. So I took a quick look to check how gbp would take care of it. And everything was in place and Just Worked.

    rrs@priyasi:~/rrs-home/Community/Packaging/bpfcc (master)$ gbp buildpackage --git-component=libbpf
    gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig.tar.gz
    gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig-libbpf.tar.gz
    gbp:info: Performing the build
    dpkg-checkbuilddeps: error: Unmet build dependencies: arping clang-format cmake iperf libclang-dev libedit-dev libelf-dev libzip-dev llvm-dev libluajit-5.1-dev luajit python3-pyroute2
    W: Unmet build-dependency in source
    dpkg-source: info: using patch list from debian/patches/series
    dpkg-source: info: applying fix-install-path.patch
    dh clean --buildsystem=cmake --with python3 --no-parallel
       dh_auto_clean -O--buildsystem=cmake -O--no-parallel
       dh_autoreconf_clean -O--buildsystem=cmake -O--no-parallel
       dh_clean -O--buildsystem=cmake -O--no-parallel
    dpkg-source: info: using source format '3.0 (quilt)'
    dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig-libbpf.tar.gz
    dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig.tar.gz
    dpkg-source: info: using patch list from debian/patches/series
    dpkg-source: warning: ignoring deletion of directory src/cc/libbpf
    dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.debian.tar.xz
    dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.dsc
    I: Generating source changes file for original dsc
    dpkg-genchanges: info: including full source code in upload
    dpkg-source: info: unapplying fix-install-path.patch
    ERROR: ld.so: object 'libeatmydata.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
    W: cgroups are not available on the host, not using them.
    I: pbuilder: network access will be disabled during build
    I: Current time: Sun Oct 13 19:53:57 IST 2019
    I: pbuilder-time-stamp: 1570976637
    I: Building the build Environment
    I: extracting base tarball [/var/cache/pbuilder/sid-amd64-base.tgz]
    I: copying local configuration
    I: mounting /proc filesystem
    I: mounting /sys filesystem
    I: creating /{dev,run}/shm
    I: mounting /dev/pts filesystem
    I: redirecting /dev/ptmx to /dev/pts/ptmx
    I: Mounting /var/cache/apt/archives/
    I: policy-rc.d already exists
    W: Could not create compatibility symlink because /tmp/buildd exists and it is not a directory
    I: using eatmydata during job
    I: Using pkgname logfile
    I: Current time: Sun Oct 13 19:54:04 IST 2019
    I: pbuilder-time-stamp: 1570976644
    I: Setting up ccache
    I: Copying source file
    I: copying [../bpfcc_0.11.0-1.dsc]
    I: copying [../bpfcc_0.11.0.orig-libbpf.tar.gz]
    I: copying [../bpfcc_0.11.0.orig.tar.gz]
    I: copying [../bpfcc_0.11.0-1.debian.tar.xz]
    I: Extracting source
    dpkg-source: warning: extracting unsigned source package (bpfcc_0.11.0-1.dsc)
    dpkg-source: info: extracting bpfcc in bpfcc-0.11.0
    dpkg-source: info: unpacking bpfcc_0.11.0.orig.tar.gz
    dpkg-source: info: unpacking bpfcc_0.11.0.orig-libbpf.tar.gz
    dpkg-source: info: unpacking bpfcc_0.11.0-1.debian.tar.xz
    dpkg-source: info: using patch list from debian/patches/series
    dpkg-source: info: applying fix-install-path.patch
    I: Not using root during the build.
    

    Worse Than FailureCodeSOD: I See What Happened

    Graham picked up a ticket regarding their password system. It seemed that several users had tried to put in a perfectly valid password, according to the rules, but it was rejected.

    Graham's first step was to attempt to replicate the issue on his own, but he couldn't. So he followed up with one of the end users, and got them to reveal the password they had tried to use. That allowed him to trigger the bug, so he dug into the debugger to find the root cause.

    private static final String UPPERCASE_LETTERS = "ABDEFGHIJKLMNOPQRSTUVWXYZ";

    private int countMatches(String string, String charList) {
        int count = 0;
        for (char c : charList.toCharArray()) {
            count += StringUtils.countMatches(string, String.valueOf(c));
        }
        return count;
    }

    This isn't a great solution, but it at least works. Well, it "works" if you are able to remember how to recite the alphabet. If you look closely, you can tell that there are no pirates on their development team, because while pirates are fond of the letter "R", their first love will always be the "C".
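A less typo-prone sketch (hypothetical, not from the article) would count uppercase characters by range instead of spelling the alphabet out by hand, so no single letter can silently go missing:

```java
// Hypothetical rewrite: count uppercase ASCII letters by character range
// instead of enumerating the alphabet, so a letter can't be dropped by a
// typo like the missing 'C' in the constant above.
public class PasswordRules {
    static long countUppercase(String s) {
        // chars() yields an IntStream of the string's characters
        return s.chars().filter(c -> c >= 'A' && c <= 'Z').count();
    }

    public static void main(String[] args) {
        // "Crab" contains exactly one uppercase letter: the 'C' the
        // original constant forgot.
        System.out.println(countUppercase("Crab")); // prints 1
    }
}
```

For non-ASCII passwords, `Character.isUpperCase(c)` would be the safer predicate; the range check above assumes ASCII-only rules.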


    Planet DebianUtkarsh Gupta: Joining Debian LTS!

    Hey,

    (DPL Style):
    TL;DR: I joined Debian LTS as a trainee in July (during DebConf) and finally as a paid contributor from this month onward! :D


    Here’s something interesting that happened last weekend!
    Back during the good days of DebConf19, I finally got a chance to meet Holger! As amazing and inspiring a person as he is, it was an absolute pleasure meeting him, and I also got a chance to talk about Debian LTS in more detail.

    I was introduced to Debian LTS by Abhijith during his talk in MiniDebConf Delhi. And since then, I’ve been kinda interested in that project!
    But finally it was here that things got a little “official” and after a couple of mail exchanges with Holger and Raphael, I joined in as a trainee!

    I had almost no idea what to do next, so the next month I stayed silent, observing the workflow as people kept committing and announcing updates.
    And finally in September, I started triaging and fixing the CVEs for Jessie and Stretch (mostly the former).

    Thanks to Abhijith, who explained the basics of what a DLA is and how we go about fixing bugs and then announcing them.

    With that, I could fix a couple of CVEs and thanks to Holger (again) for reviewing and sponsoring the uploads! :D

    I mostly worked (as a trainee) on:

    • CVE-2019-10751, affecting httpie, and
    • CVE-2019-16680, affecting file-roller.

    And finally this happened:
    Commit that mattered!
    (Though there’s a little hiccup that happened there, but that’s something we can ignore!)

    So finally, I’ll be working with the team from this month on!
    As Holger says, very much yay! :D

    Until next time.
    :wq for today.

    ,

    Planet DebianDebian XMPP Team: New Dino in Debian

    Dino (dino-im in Debian), the modern and beautiful chat client for the desktop, has some nice new features. Users of Debian testing (bullseye) might like to try them:

    • XEP-0391: Jingle Encrypted Transports (explained here)
    • XEP-0402: Bookmarks 2 (explained here)

    Note that users of Dino on Debian 10 (buster) should upgrade to version 0.0.git20181129-1+deb10u1, because of a number of security issues that have been found (CVE-2019-16235, CVE-2019-16236, CVE-2019-16237).

    There have been other XMPP related updates in Debian since the release of buster, among them:

    You might be interested in the October XMPP newsletter, also available in German.

    Planet DebianIustin Pop: Actually fixing a bug

    One of the outcomes of my recent (last few years) sports ramp-up is that my opensource work is almost entirely left aside. Having an office job makes it very hard to spend more time sitting at the computer at home too…

    So even my Travis dashboard has been red for a while now, but I didn't look into it until today. Since I didn't change anything recently and the Travis builds just started to fail, I was sure it was just environment changes that needed to be taken into account.

    And indeed it was so, for two out of three projects. The third one… I actually got to fix a bug, introduced at the beginning of the year, which gcc (the same gcc that originally passed it) started to trip on a while back. I even had to read the man page of snprintf! Was fun ☺, too bad I don't have enough time to do this more often…

    My Travis dashboard is green again, and the "test suite" (if you can call it that) has been expanded to explicitly catch this specific problem in the future.

    ,

    Planet DebianShirish Agarwal: Social media, knowledge and some history of Banking

    First of all, Happy Dusshera to everybody. While Dusshera in India is a symbol of many things, it is a symbol of forgiveness and new beginnings. While I don't know about new beginnings, I do feel there is still a lot of baggage which needs to be left behind. I will try to share some insights I uncovered over the last few months and a few realizations I came across.

    First of all, thank you to the Debian GNOME team for continuing to work on new versions of packages. While there are still a bunch of bugs which need to be fixed, especially #895990 and #913978 among others, kudos for working at it. Hopefully, those bugs and others will be fixed soon so we can install GNOME without a hiccup. I have not been on IRC because my riot-web has been broken for several days now. Also, most of the IRC and Telegram channels, at least those related to Debian, become mostly echo chambers one way or the other, as you do not get any serious opposition. On Twitter, while it's highly toxic, you also get the urge to fight the good fight when, either due to principles or for some other reason (usually paid trolls), people fight. While I follow my own rules on Twitter apart from their TOS, here are some rules I feel new people going on social media, in India or perhaps elsewhere as well, could use –

    1. It is difficult to remain neutral and stick to the facts. Even if you just stick to the facts, you will be branded as an urban naxal or some such name.
    2. I find many times that if you are calm and don't react, the other side turns out to be curious and displays gaps in knowledge which you thought everyone had. Whether that is due to lack of education, lack of knowledge or pretension, if it is pretension, they are caught out sooner or later.
    3. Be civil at all times. If somebody harasses you or calls you names, report them and block them, although Twitter still needs to fix the reporting process a whole lot more. When even somebody like me (with a bit of understanding of law, technology, language etc.) had a hard time figuring out Twitter's reporting ways, I dunno how many people would be able to use it successfully. Maybe they make it so unhelpful so that the traffic flows no matter what. I do realize they still haven't figured out their business model, but that's a question for another day. In short, they need to make it far simpler than it is today.
    4. You always have an option to block people but it has its own consequences.
    5. Be passive-aggressive if the situation demands it.
    6. Most importantly though, if somebody starts making jokes about you or starts abusing you, it is certain that the person on the other side doesn't have any more arguments and you have won.

    Banking

    Before I start, let me share why I am putting up a blog post on the topic. The reason is pretty simple. It seems a huge number of Indians don't know either the history of how banking started, or the various turns it took, and so on and so forth. In fact, nowadays history is being hotly contested and perhaps even re-written. Hence for some things I would be sharing some sources, but even within them there is a possibility of contestation. One of the contestations for a long time is when ancient coinage and the techniques of smelting and flattening came to India. Depending on whom you ask, you get different answers. A lot of people are waiting to get more insight from the Keezhadi excavation, which may also give some insight into the topic. There are rumors that the funding is being stopped, but I hope that isn't true and that we gain some more insight into Indian history. In fact, in South India there seems to be a lot of curiosity and attraction towards the site. It is possible that the next time I get a chance to see South India, I may try to see this unique location if a museum gets built somewhere nearby. Sorry for deviating from the topic, but it seems that ancient coinage started anywhere between the 1st millennium BCE and the 6th century BCE, so it could be anywhere between 2,000 and 3,000 years old in India. While we can't say anything for sure, it's possible that there was barter before that. There has also been some history of sharing tokens in different parts of the world as well. The various timelines get all jumbled up, hence I would suggest people use the Wikipedia page on the History of Money as a starting point. While it may not give a complete picture, it would probably broaden the understanding a little bit. One of the reasons why history is so hotly contested could also perhaps lie in the destruction of the Ancient Library of Alexandria. Who knows what more we would have known of our ancients if it had not been destroyed 😦

    Hundi (16th Century)

    I am jumping to the 16th century as it is closer to today's modern banking; otherwise the blog post would be too long. Hundi was a financial instrument which was used from the 16th century onwards. It could be seen as a forerunner of either the cheque or the traveller's cheque. There doesn't seem to be much in the way of information on whether this was introduced by the Britishers, brought earlier by the Portuguese when they came to India, or was in prevalence before that. There is, though, a fascinating in-depth study of Hundi between 1858 and 1978, done by Marina Bernadette for the London School of Economics as her dissertation paper.

    Banias and Sarafs

    As I had shared before, history in India is intertwined with mythology. While it is possible that a lot of the history behind this is documented somewhere, I haven't been able to find it. As I come from the Bania caste, I had learnt a lot of stories about the migratory strain that Banias had, as well as how Banias used to split their children across adjoining states. Before the Britishers ruled over India, popular history tells us that it was the Mughal empire that ruled over us. What it doesn't tell us, though, is that both during the Mughal empire and under the Britishers, Banias and Sarafs, who were moneylenders and bullion traders respectively, hedged their bets, more so if they were in royal service or bound to be close to the power of administration of the state or mini-kingdoms. What they used to do is make sure that one of the sons would obey the king here while the other son might serve the Muslim ruler. The idea behind that was that irrespective of who won, the Banias or Sarafs would be able to continue their traditions, and it was very much possible that the other brother would not be killed, or even if he was, any or all wealth would pass to the victorious brother and the family name would live on. If I were to look into it, I'm sure I'd find the same not only among Banias and Sarafs but perhaps in other castes and communities as well. Modern history also tells of the Rothschilds, who did and continue to be an influence on the world today.

    As to why I shared how people acted in their self-interest: nowadays on Indian social media, many people choose to believe a very simplistic black-and-white narrative, and they are being fed that by today's dominant political party in power. What I'm trying to simply say is that history is much more complex than that. While you may choose to believe either of the beliefs, it might open a window in at least some Indians' minds that there is a possibility that things were done another way, and that people acted in other ways, than what is being perceived today. It is also possible this may be contested today, as a lot of people would like to appear on the 'right side' of history as it seems today.

    Banking in British Raj till nationalization

    When the Britishers came, they brought the modern banking system with them. This led to the creation of various banks like the Bank of Bengal, Bank of Bombay and Bank of Madras, which were later subsumed under the Imperial Bank of India, which in turn became the State Bank of India in 1955. While I will not go into details, I do have curiosity, so if life permits I would surely want to visit either the Banca Monte dei Paschi di Siena S.p.A of Italy or the Berenberg Bank, both of which probably have a lot more history than what is written on their Wikipedia pages. Soon there was a whole clutch of banks which would continue to facilitate the British till independence, and leak money overseas even afterwards, till the banks were nationalized in 1956 following the recommendation of the 'Gorwala Committee'. Apart from the opaqueness of private banking and the leakages, there was non-provision of loans to the priority sector, i.e. farming in India; A.D. Gorwala recommended nationalization to weed out both issues in a single solution. One could debate the efficacy of the same, but history has shown us that privatization in the financial sector has many a time been costly to depositors. The financial crisis of 2008 and its aftermath in many of the financial markets, more so for private banks, is a testament to it. Even the documentary Plenary's Men gives a whole lot of insight into the corruption that private banks engage in today.

    The Plenary's Men, on YouTube, is at least to my mind evidence enough that India should be cautious in its dealings with private banks.

    Co-operative banks and their Rise

    The rise of co-operative banks in India was largely due to the rise of co-operative societies; the Co-operative Societies Act was passed in 1904 itself. While there were quite a few co-operative societies and banks, arguably the real fillip to co-operative banking was given by Amul when it started in 1946, along with the milk credit society it started with it. I dunno how many people saw 'Manthan', which chronicled the story and brought the story of both co-operative societies and co-operative banks to millions of Indians. It is a classic movie which a lot of today's youth probably don't know and, even if they did, would take time to identify with, although people of my generation and earlier generations do get it. One of the things that many people don't get is that for a lot of people even today, especially marginal farmers and such in rural areas, co-operative banks are still the only solution. While in recent times the Govt. of the day has tried something called the Jan Dhan Yojana, it hasn't been as much of a success as they were hoping for. While reams of paper have been written about it, like most policies it didn't deliver to the last person whom such inclusion programs try to reach. Issues from design to implementation are many, but perhaps some other time. I am sharing about co-operative banks as a recent scam took place in one of them, probably one of the most respected and widely held co-operative banks. I would rather share Sucheta Dalal's excellent analysis of the PMC bank crisis, which is unfolding and will perhaps continue to unfold in days to come.

    Conclusion

    At the end, I have to admit I took a lot of short-cuts to reach here. There is a possibility that there may be details people might want me to incorporate; if so, please let me know and I will try to add them. I did try to compress as much as possible while trying to be as exhaustive as possible. I also haven't used any data, as I wanted to keep the explanations as simple as possible, and I have tried to keep politics to a minimum, even though the biases which are there, are there.

    Planet DebianDirk Eddelbuettel: GitHub Streak: Round Six

    Five years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

    This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

    and then showed the first chart of GitHub streaking

    github activity october 2013 to october 2014

    And four years ago a first follow-up appeared in this post:

    github activity october 2014 to october 2015

    And three years ago we had a followup

    github activity october 2015 to october 2016

    And two years ago we had another one

    github activity october 2016 to october 2017

    And last year another one

    github activity october 2017 to october 2018

    As today is October 12, here is the newest one from 2018 to 2019:

    github activity october 2018 to october 2019

    Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

    Planet DebianLouis-Philippe Véronneau: Alpine MusicSafe Classic Hearing Protection Review

    Yesterday, I went to a punk rock show and had tons of fun. One of the bands playing (Jeunesse Apatride) hadn't played in 5 years and the crowd was wild. The other bands playing were also great. Here's a few links if you enjoy Oi! and Ska:

    Sadly, those kinds of concerts are always waaaaayyyyy too loud. I mostly go to small venue concerts, and for some reason the sound technicians think it's a good idea to make everyone's ears bleed. You really don't need to amplify the drums when the whole concert venue is 50m²...

    So I bought hearing protection. It was the first time I wore earplugs at a concert and it was great! I can't really compare the model I got (Alpine MusicSafe Classic earplugs) to other brands since it's the only one I tried out, but:

    • They were very comfortable. I wore them for about 5 hours and didn't feel any discomfort.

    • They came with two sets of plastic tips you insert in the silicone earbuds. I tried the -17db ones but I decided to go with the -18db inserts as it was still freaking loud.

    • They fitted very well in my ears even though I was in the roughest mosh pit I've ever experienced (and I've seen quite a few). I was sweating profusely from all the heavy moshing and never once did I fear losing them.

    • My ears weren't ringing when I came back home so I guess they work.

    • The earplugs didn't distort sound, only reduce the volume.

    • They came with a handy aluminium carrying case that's really durable. You can put it on your keychain and carry them around safely.

    • They only cost me ~25 CAD with taxes.

    The only thing I disliked was that I found it pretty much impossible to sing while wearing them, as I couldn't really hear myself. With a bit of practice, I was able to sing true, but it wasn't great :(

    All in all, I'm really happy with my purchase and I don't think I'll ever go to another concert without earplugs.

    ,

    Planet DebianMolly de Blanc: Conferences

    I think there are too many conferences.

    I conducted this very scientific Twitter poll and out of 52 respondents, only 23% agreed with me. Some people who disagreed with me pointed out specifically what they think is lacking: more regional events, more in specific countries, and more “generic” FLOSS events.

    Many projects have a conference, and then there are “generic” conferences, like FOSDEM, LibrePlanet, LinuxConfAU, and FOSSAsia. Some are more corporate (OSCON), while others are more community focused (e.g. SeaGL).

    There are just a lot of conferences.

    I average a conference a month, with most of them being more general sorts of events, and a few being project specific, like DebConf and GUADEC.

    So far in 2019, I went to: FOSDEM, CopyLeft Conf, LibrePlanet, FOSS North, Linux Fest Northwest, OSCON, FrOSCon, GUADEC, and GitLab Commit. I’m going to All Things Open next week. In November I have COSCon scheduled. I’m skipping SeaGL this year. I am not planning on attending 36C3 unless my talk is accepted. I canceled my trip to DebConf19. I did not go to Camp this year. I also had a board meeting in NY, an upcoming one in Berlin, and a Debian meeting in the other Cambridge. I’m skipping LAS and likely going to SFSCon for GNOME.

    So 9 so far this year, and somewhere between 1-4 more, depending on some details.

    There are also conferences that don’t happen every year, like HOPE and CubaConf. There are some that I haven’t been to yet, like PyCon, and more regional events like Ohio Linux Fest, SCALE, and FOSSCon in Philadelphia.

    I think I travel too much, and plenty of people travel more than I do. This is one of the reasons why we have too many events: the same people are traveling so much.

    When you're nose deep in it, when you think that what you're doing is important, you keep going to them as long as you're invited. I really believe in the messages I share during my talks, and I know that by speaking I am reaching audiences I wouldn't otherwise. As long as I keep getting invited places, I'll probably keep going.

    Finding sponsors is hard(er).

    It is becoming increasingly difficult to find sponsors for conferences. This is my experience, and what I’ve heard from speaking with others about it. Lower response rates to requests and people choosing lower sponsorship levels than they have in past years.

    CFP responses are not increasing.

    I sort of think the Tweet says it all. Some conferences aren't having this experience. Ones I've been involved with, or spoken to the organizers of, have needed to extend their deadlines and generally have lower response rates.

    Do I think we need fewer conferences?

    Yes and no. I think smaller, regional conferences are really important to reaching communities and individuals who don’t have the resources to travel. I think it gives new speakers opportunities to share what they have to say, which is important for the growth and robustness of FOSS.

    Project specific conferences are useful for those projects. They give us a time to have meetings and sprints, to work and plan, to learn specifically about our project, and to feel more connected to our collaborators.

    On the other hand, I do think we have more conferences than even a global movement can actively support in terms of speakers, organizer energy, and sponsorship dollars.

    What do I think we can do?

    Not all of these are great ideas, and not all of them would work for every event. However, I think some combination of them might make a difference for the ecosystem of conferences.

    More single-track or two-track conferences. All Things Open has 27 sessions occurring concurrently. Twenty-seven! It’s a huge event that caters to many people, but seriously, that’s too much going on at once. More 3-4 track conferences should consider dropping to 1-2 tracks, and conferences with more should consider dropping their numbers down as well. This means fewer speakers at a time.

    Stop trying to grow your conference. Growth feels like a sign of success, but it’s not. It’s a sign of getting more people to show up. It helps you make arguments to sponsors, because more attendees means more people being reached when a company sponsors.

    Decrease sponsorship levels. I’ve seen conferences increasing their sponsorship levels. I think we should all agree to decrease those numbers. While we’ll all have to try harder to get more sponsors, companies will be able to sponsor more events.

    Stop serving meals. I appreciate a free meal. It makes it easier to attend events, but meals are expensive and difficult to logisticate. I know meals make it easier for some people, especially students, to attend. Consider offering special RSVP lunches for students, recent grads, and people looking for work.

    Ditch the fancy parties. Okay, I also love a good conference party. They’re loads of fun and can be quite expensive. They also encourage drinking, which I think is bad for the culture.

    Ditch the speaker dinners. Okay, I also love a good speaker dinner. It's fun to relax, see my friends, and have a nice meal that isn't too loud or overwhelming. These are really expensive. I've been trying to donate to local food banks/food insecurity charities an amount equal to the cost of dinner per person, but people are rarely willing to share that information! Paying for a nice dinner out of pocket — with multiple bottles of wine — usually runs $50-80 with tip. I know one dinner I went to was $150 a person. I think the community would be better served if we spent that money on travel grants. If you want to be nice to speakers, I enjoy a box of chocolates I can take home and share with my roommates.

     Give preference to local speakers. One of the things conferences do is bring in speakers from around the world to share their ideas with your community, or with an equally global community. This is cool. By giving preference to local speakers, you’re building expertise in your geography.

    Consider combining your community conferences. Rather than having many conferences for smaller communities, consider co-locating conferences and sharing resources (and attendees). This requires even more coordination to organize, but could work out well.

    Volunteer for your favorite non-profit or project. A lot of us have booths at conferences, and send people around the world in order to let you know about the work we’re doing. Consider volunteering to staff a booth, so that your favorite non-profits and projects have to send fewer people.

    While most of those things are not “have fewer conferences,” I think they would help solve the problems conference saturation is causing: it's expensive for sponsors, it's expensive for speakers, it creates a large carbon footprint, and it increases burnout among organizers and speakers.

    I must enjoy traveling because I do it so much. I enjoy talking about FOSS, user rights, and digital rights. I like meeting people and sharing with them and learning from them. I think what I have to say is important. At the same time, I think I am part of an unhealthy culture in FOSS, that encourages burnout, excessive travel, and unnecessary spending of money, that could be used for better things.

    One last thing you can do, to help me, is submit talks to your local conference(s). This will help with some of these problems as well, can be a great experience, and is good for your conference and your community!

    CryptogramFriday Squid Blogging: Apple Fixes Squid Emoji

    Apple fixed the squid emoji in iOS 13.1:

    A squid's siphon helps it move, breathe, and discharge waste, so having the siphon in back makes more sense than having it in front. Now, the poor squid emoji will look like it should, without a siphon on its front.

    As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

    Read my blog posting guidelines here.

    Cory DoctorowPart two of my novella “Martian Chronicles” on Escape Pod: who cleans the toilets in libertopia?


    Last week, the Escape Pod podcast published part one of a reading of my YA novella “Martian Chronicles,” which I wrote for Jonathan Strahan’s Life on Mars anthology: it’s a story about libertarian spacesteaders who move to Mars to escape “whiners” and other undesirables, only to discover that the colonists that preceded them expect them to clean the toilets when they arrive.

    Last night, they published the conclusion in part two (MP3) of Adam Pracht’s reading of the story, along with some lovely commentary by Mur Lafferty.

    I'm an enormous fan of Escape Pod and of audiobooks in general, and it's such a treat to hear my work adapted by talented readers who bring new things to the material.

    “We’re all poves now, Dad.” I swallowed, looked into his eyes. It was hard to do. “We’re headed to Mars to clean the toilets. That’s the thing that we discovered. And the people Mars-side, they’re fine with that. After all, if we were too good for toilet cleaning, we would have been in the first wave. They’ll say that they’re too good to clean toilets, and they’ll prove it by pointing out that we’re all broke and the only jobs they have for us are the worst, crappiest jobs. Anyone who disagrees will be a whiner.”

    That had been the real surprise, once Mars OS was running on all my devices: the message boards filled with Martians fantasizing about how great it would be once the next wave of colonists arrived, how they’d be able to “solve the labor shortage” and finally hire people at “affordable wages” to do the real work of running the colony.

    Escape Pod 701: Martian Chronicles (Part 2 of 2) [Escape Pod]

    CryptogramDetails on Uzbekistan Government Malware: SandCat

    Kaspersky has uncovered an Uzbeki hacking operation, mostly due to incompetence on the part of the government hackers.

    The group's lax operational security includes using the name of a military group with ties to the SSS to register a domain used in its attack infrastructure; installing Kaspersky's antivirus software on machines it uses to write new malware, allowing Kaspersky to detect and grab malicious code still in development before it's deployed; and embedding a screenshot of one of its developer's machines in a test file, exposing a major attack platform as it was in development. The group's mistakes led Kaspersky to discover four zero-day exploits SandCat had purchased from third-party brokers to target victim machines, effectively rendering those exploits ineffective. And the mistakes not only allowed Kaspersky to track the Uzbek spy agency's activity but also the activity of other nation-state groups in Saudi Arabia and the United Arab Emirates who were using some of the same exploits SandCat was using.

    LongNowHow to Send Messages 10,000 Years into the Future

    Popular Science recently profiled our Rosetta and 10,000 Year Clock projects, as well as a number of related long-term thinking projects, such as Martin Kunze’s Memory of Mankind, the Apollo 12 MoonArk, nuclear waste ray cats, and the Star Map at Hoover Dam.

    Corroded, wrecked, and half-buried for 2,000 years like an accidental time capsule, the Statue of Liberty that looms over Charlton Heston in the final scene of the original Planet of the Apes is a literal symbol of humanity’s missteps: a horrible communiqué from the distant past about atomic annihilation. In the real world, many linguists, designers, and scientists are puzzling over how to intentionally send millennia-spanning messages to recipients whose languages, senses, and fears could bear little resemblance to our own. The projects these ­forward-​­thinkers dream up aim to convey clues about our existence, hellos to extraterrestrials, or warnings about nuclear waste—like postcards that will be legible to beings 1,000, 5,000, even 10,000 years ahead.

    A highlight of the piece is the set of illustrations of the various projects, presented as postcards from the future.

    Learn More

    • Re: Ray Cats: The Other 10,000 Year Project: Long-term Thinking and Nuclear Waste (Long Now)
    • Re: The MoonArk:
    • Re: the Star Map of Hoover Dam: The 26,000 Year Astronomical Monument Hidden in Plain Sight (Long Now)
    • Re: the Voyager Golden Record: Billion Year Mashup (Long Now)
    • Re: the Memory of Mankind Project: The Time Capsule That’s As Big As Human History (GQ)

    Worse Than FailureError'd: The WTF Experience

    "As it turns out, they've actually been singing Purple Haze before the start of all of those sportsball games," Adam writes.

     

    Andrew C. writes, "When you buy from 'Best Pool Supplies', make no mistake...you're going to pay for that level of quality."

     

    Jared wrote, "Pulling invalid data is forgivable, but using a loop is not."

     

    "VMware ESXi seems a little confused about how power state transitions work," writes Paul N.

     

    "At first I was annoyed I didn't get the job, but now I really want to go in for free and fix their systems for them!" Mark wrote.

     

    Peter M. writes, "Oh yes, Verizon! I am very excited! ...I'm just having a difficult time defining why."

     

    [Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

    ,

    Planet DebianMarkus Koschany: My Free Software Activities in September 2019

    Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

    Debian Games

    • Reiner Herrmann investigated a build failure of supertuxkart on several architectures and prepared an update to link against libatomic. I reviewed and sponsored the new revision which allowed supertuxkart 1.0 to migrate to testing.
    • Python 3 ports: Reiner also ported bouncy, a game for small kids, to Python3 which I reviewed and uploaded to unstable.
    • I upgraded atomix to version 3.34.0 as requested, although it is unlikely that you will find a major difference from the previous version.

    Debian Java

    Misc

    • I packaged new upstream releases of ublock-origin and privacybadger, two popular Firefox/Chromium addons and
    • packaged a new upstream release of wabt, the WebAssembly Binary Toolkit.

    Debian LTS

    This was my 43rd month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

    • From 11.09.2019 until 15.09.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in libonig, bird, curl, openssl, wpa, httpie, asterisk, wireshark and libsixel.
    • DLA-1922-1. Issued a security update for wpa fixing 1 CVE.
    • DLA-1932-1. Issued a security update for openssl fixing 2 CVE.
    • DLA-1900-2. Issued a regression update for apache fixing 1 CVE.
    • DLA-1943-1. Issued a security update for jackson-databind fixing 4 CVE.
    • DLA-1954-1. Issued a security update for lucene-solr fixing 1 CVE. I triaged CVE-2019-12401 and marked Jessie as not-affected because we use the system libraries of woodstox in Debian.
    • DLA-1955-1. Issued a security update for tcpdump fixing 24 CVE by backporting the latest upstream release to Jessie. I discovered several test failures but after more investigation I came to the conclusion that the test cases were simply created with a newer version of libpcap which causes the test failures with Jessie’s older version.

    ELTS

    Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my sixteenth month and I have been assigned to work 15 hours on ELTS plus five hours from August. I used 15 of them for the following:

    • I was in charge of our ELTS frontdesk from 30.09.2019 until 06.10.2019 and I triaged CVEs in tcpdump. There were no reports of other security vulnerabilities for supported packages in this week.
    • ELA-163-1. Issued a security update for curl fixing 1 CVE.
    • ELA-171-1. Issued a security update for openssl fixing 2 CVE.
    • ELA-172-1. Issued a security update for linux fixing 23 CVE.
    • ELA-174-1. Issued a security update for tcpdump fixing 24 CVE.

    CryptogramNew Reductor Nation-State Malware Compromises TLS

    Kaspersky has a detailed blog post about a new piece of sophisticated malware that it's calling Reductor. The malware is able to compromise TLS traffic by infecting the computer with a hacked TLS engine substituted on the fly, "marking" infected TLS handshakes by compromising the underlying random-number generator, and adding new digital certificates. The result is that the attacker can identify, intercept, and decrypt TLS traffic from the infected computer.

    The Kaspersky Attribution Engine shows strong code similarities between this family and the COMPfun Trojan. Moreover, further research showed that the original COMpfun Trojan most probably is used as a downloader in one of the distribution schemes. Based on these similarities, we're quite sure the new malware was developed by the COMPfun authors.

    The COMpfun malware was initially documented by G-DATA in 2014. Although G-DATA didn't identify which actor was using this malware, Kaspersky tentatively linked it to the Turla APT, based on the victimology. Our telemetry indicates that the current campaign using Reductor started at the end of April 2019 and remained active at the time of writing (August 2019). We identified targets in Russia and Belarus.

    [...]

    Turla has in the past shown many innovative ways to accomplish its goals, such as using hijacked satellite infrastructure. This time, if we're right that Turla is the actor behind this new wave of attacks, then with Reductor it has implemented a very interesting way to mark a host's encrypted TLS traffic by patching the browser without parsing network packets. The victimology for this new campaign aligns with previous Turla interests.

    We didn't observe any MitM functionality in the analyzed malware samples. However, Reductor is able to install digital certificates and mark the targets' TLS traffic. It uses infected installers for initial infection through HTTP downloads from warez websites. The fact the original files on these sites are not infected also points to evidence of subsequent traffic manipulation.

    The attribution chain from Reductor to COMPfun to Turla is thin. Speculation is that the attacker behind all of this is Russia.

    CryptogramWi-Fi Hotspot Tracking

    Free Wi-Fi hotspots can track your location, even if you don't connect to them. This is because your phone or computer broadcasts a unique MAC address.

    What distinguishes location-based marketing hotspot providers like Zenreach and Euclid is that the personal information you enter in the captive portal -- like your email address, phone number, or social media profile -- can be linked to your laptop or smartphone's Media Access Control (MAC) address. That's the unique alphanumeric ID that devices broadcast when Wi-Fi is switched on.

    As Euclid explains in its privacy policy, "...if you bring your mobile device to your favorite clothing store today that is a Location -- and then a popular local restaurant a few days later that is also a Location -- we may know that a mobile device was in both locations based on seeing the same MAC Address."

    MAC addresses alone don't contain identifying information besides the make of a device, such as whether a smartphone is an iPhone or a Samsung Galaxy. But as long as a device's MAC address is linked to someone's profile, and the device's Wi-Fi is turned on, the movements of its owner can be followed by any hotspot from the same provider.

    "After a user signs up, we associate their email address and other personal information with their device's MAC address and with any location history we may previously have gathered (or later gather) for that device's MAC address," according to Zenreach's privacy policy.

    The defense is to turn Wi-Fi off on your phone when you're not using it.

    EDITED TO ADD: Note that the article is from 2018. Not that I think anything is different today....

    Worse Than FailureCodeSOD: Parse, Parse Again

    Sometimes, a block of terrible code exists for a good reason. Usually, it exists because someone was lazy or incompetent, which while not a good reason, at least makes sense. Sometimes, it exists for a stupid reason.

    Janet’s company recently bought another company, and now the new company had to be integrated into their IT operations. One of the little, tiny, minuscule known-issues in the new company’s system was that their logging was mis-configured. Instead of putting a new-line after each logging message, it put only a single space.

    That tiny problem was a little bit larger, as each log message was a JSON object. The whole point of logging out a single JSON document per line was that it would be easy to parse/understand the log messages, but since they were all on a single line, it was impossible to just do that.

    The developers at the acquired company were left with a choice: they could fix the glitch in the logging system so that it output a newline after each message, or they could just live with this. For some reason, they decided to live with it, and they came up with this solution for parsing the log files:

    def parse(string):
      obs = []
      j = ""
      for c in string.split():
        j += c
        try:
          obs.append(json.loads(j))
          j = ""
        except ValueError:
          pass
     
      return obs

    This splits the string on spaces. Then, for each substring, it tries to parse it as a JSON object. If it succeeds, great. If it throws an exception, append the next substring to this one, and then try parsing again. Repeat until we’ve built a valid JSON document, then clear out the accumulator and repeat the process for all the rest of the messages. Eventually, return all the log messages parsed as JSON.

    As a fun side effect, .split is going to throw the spaces away, so when they j += c, if your log message looked like:

    {"type": "Error", "message": "Unable to parse JSON document"}

    After parsing that into JSON, the message becomes UnabletoparseJSONdocument.

    But at least they didn’t have to solve that newline bug.
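For what it's worth, the standard library can parse such a log losslessly: `json.JSONDecoder.raw_decode` parses one document starting at a given offset and reports where it stopped, so the spaces inside messages survive. A sketch, not from the original article:

```python
import json

def parse(stream):
    """Parse a string of concatenated JSON documents separated by whitespace."""
    decoder = json.JSONDecoder()
    obs = []
    idx = 0
    while idx < len(stream):
        # Skip the inter-document whitespace instead of destroying it.
        if stream[idx].isspace():
            idx += 1
            continue
        # raw_decode returns the parsed object and the index just past it.
        obj, idx = decoder.raw_decode(stream, idx)
        obs.append(obj)
    return obs
```

With this, `{"message": "Unable to parse JSON document"}` keeps its spaces intact.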

    [Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

    Planet DebianNorbert Preining: R with TensorFlow 2.0 on Debian/sid

    I recently posted on getting TensorFlow 2.0 with GPU support running on Debian/sid. At that time I didn’t manage to get the tensorflow package for R running properly. It didn’t need much to get it running, though.

    The biggest problem I faced was that the R/TensorFlow package recommends using install_tensorflow, which can use either auto, conda, virtualenv, or system (at least according to the linked web page). I didn’t want to set up either a conda or a virtualenv environment, since TensorFlow was already installed, so I thought system would be correct, but then, I had it already installed. Anyway, the system option is gone and not accepted, but I still got errors, in particular because the code mentioned on the installation page is incorrect for TF2.0!

    It turned out to be a simple error on my side – the default is to use the program python which in Debian is still Python2, while I have TF only installed for Python3. The magic incantation to fix that is use_python("/usr/bin/python3") and one is set.

    So here is a full list of commands to get R/TensorFlow running on top of an already installed TensorFlow for Python3 (as usual either as root to be installed into /usr/local or as user to have a local installation):

    devtools::install_github("rstudio/tensorflow")

    And if you want to run some TF program:

    library(tensorflow)
    use_python("/usr/bin/python3")
    tf$math$cumprod(1:5)

    This gives lots of output, but also mentions that it is running on my GPU.

    At least for the (probably very short) time being this looks like a workable system. Now off to convert my TF1.N code to TF2.0.

    Planet DebianLouis-Philippe Véronneau: Trying out Sourcehut

    Last month, I decided it was finally time to move a project I maintain from Github1 to another git hosting platform.

    While polling other contributors (I proposed moving to gitlab.com), someone suggested moving to Sourcehut, a newish git hosting platform written and maintained by Drew DeVault. I've been following Drew's work for a while now and although I had read a few blog posts on Sourcehut's development, I had never really considered giving it a try. So I did!

    Sourcehut is still in alpha and I'm expecting a lot of things to change in the future, but here's my quick review.

    Things I like

    Sustainable FOSS

    Sourcehut is 100% Free Software. Github is proprietary and I dislike Gitlab's Open Core business model.

    Sourcehut's business model also seems sustainable to me, as it relies on people paying a monthly fee for the service. You'll need to pay if you want your code hosted on https://sr.ht once Sourcehut moves into beta. As I've written previously, I like that a lot.

    In comparison, Gitlab is mainly funded by venture capital and I'm afraid of the long term repercussions this choice will have.

    Continuous Integration

    Continuous Integration is very important to me and I'm happy to say Sourcehut's CI is pretty good! Like Travis and Gitlab CI, you declare what needs to happen in a YAML file. The CI uses real virtual machines backed by QEMU, so you can run many different distros and CPU archs!
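    The declarative manifest is a plain YAML file checked into the repository (conventionally `.build.yml` on builds.sr.ht). A minimal sketch, with the image, package list, repository URL, and task body all as placeholder assumptions:

```yaml
# .build.yml -- a minimal builds.sr.ht manifest (values below are placeholders)
image: debian/stable          # VM image to boot; other distros and arches exist
packages:
  - python3                   # distro packages installed before tasks run
sources:
  - https://git.sr.ht/~example/project
tasks:
  - test: |                   # each named task runs as its own shell script
      cd project
      python3 -m unittest discover
```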

    Even nicer, you can actually SSH into a failed CI job to debug things. In comparison, Gitlab CI's Interactive Web Terminal is ... web based and thus not as nice. Worse, it seems it's still somewhat buggy as Gitlab still hasn't enabled it on their gitlab.com instance.

    Here's what the instructions to SSH into the CI look like when a job fails:

    This build job failed. You may log into the failed build environment within 10
    minutes to examine the results with the following command:
    
    ssh -t builds@foo.bar connect NUMBER
    

    Sourcehut's CI is not as feature-rich or as flexible as Gitlab CI, but I feel it is more powerful than Gitlab CI's default docker executor. Folks that run integration tests or more complicated setups where Docker fails should definitely give it a try.

    From the few tests I did, Sourcehut's CI is also pretty quick (it's definitely faster than Travis or Gitlab CI).

    No JS

    Although Sourcehut's web interface does bundle some Javascript, all features work without it. Three cheers for that!

    Things I dislike

    Features division

    I'm not sure I like the way features (the issue tracker, the CI builds, the git repository, the wikis, etc.) are subdivided in different subdomains.

    For example, when you create a git repository on git.sr.ht, you only get a git repository. If you want an issue tracker for that git repository, you have to create one at todo.sr.ht with the same name. That issue tracker isn't visible from the git repository web interface.

    That's the same for all the features. For example, you don't see the build status of a merged commit when you look at it. This design choice makes you feel like the different features aren't integrated to one another.

    In comparison, Gitlab and Github use a more "centralised" approach: everything is centered around a central interface (your git repository) and it feels more natural to me.

    Discoverability

    I haven't seen a way to search sr.ht for things hosted there. That makes it hard to find repositories, issues or even the Sourcehut source code!

    Merge Request workflow

    I'm a sucker for the Merge Request workflow. I really like to have a big green button I can click on to merge things. I know some people prefer a more manual workflow that uses git merge and stuff, but I find that tiresome.

    Sourcehut chose a workflow based on sending patches by email. It's neat since you can submit code without having an account. Sourcehut also provides mailing lists for projects, so people can send patches to a central place.

    I find that workflow harder to work with, since to me it makes it more difficult to see what patches have been submitted. It also makes the review process more tedious, since the CI isn't run automatically on email patches.

    Summary

    All in all, I don't think I'll be moving ISBG to Sourcehut (yet?). At the moment it doesn't quite feel as ready as I'd want it to be, and that's OK. Most of the things I disliked about the service can be fixed by some UI work and I'm sure people are already working on it.

    Github was bought by MS for 7.5 billion USD and Gitlab is currently valued at 2.7 billion USD. It's not really fair to ask Sourcehut to fully compete just yet :)

    With Sourcehut, Drew DeVault is fighting the good fight and I wish him the most resounding success. Who knows, maybe I'll really migrate to it in a few years!


    1. Github is a proprietary service, has been bought by Microsoft and gosh darn do I hate Travis CI. 

    Planet DebianDirk Eddelbuettel: RcppArmadillo 0.9.800.1.0

    armadillo image

    Another month, another Armadillo upstream release! Hence a new RcppArmadillo release arrived on CRAN earlier today, and was just shipped to Debian as well. It brings a faster solve() method and other goodies. We also switched to the (awesome) tinytest unit test framework, and Min Kim made the configure.ac script more portable for the benefit of NetBSD and other non-bash users; see below for more details. Once again we ran two full sets of reverse-depends checks, no issues were found, and the package was similarly auto-admitted at CRAN after less than two hours despite there being 665 reverse depends. Impressive stuff, so a big Thank You! as always to the CRAN team.

    Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 665 other packages on CRAN.

    Changes in RcppArmadillo version 0.9.800.1.0 (2019-10-09)

    • Upgraded to Armadillo release 9.800 (Horizon Scraper)

      • faster solve() in default operation; iterative refinement is no longer applied by default; use solve_opts::refine to explicitly enable refinement

      • faster expmat()

      • faster handling of triangular matrices by rcond()

      • added .front() and .back()

      • added .is_trimatu() and .is_trimatl()

      • added .is_diagmat()

    • The package now uses tinytest for unit tests (Dirk in #269).

    • The configure.ac script is now more careful about shell portability (Min Kim in #270).

    Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

    ,

    CryptogramSpeakers Censored at AISA Conference in Melbourne

    Two speakers were censored at the Australian Information Security Association's annual conference this week in Melbourne. Thomas Drake, former NSA employee and whistleblower, was scheduled to give a talk on the golden age of surveillance, both government and corporate. Suelette Dreyfus, lecturer at the University of Melbourne, was scheduled to give a talk on her work -- funded by the EU government -- on anonymous whistleblowing technologies like SecureDrop and how they reduce corruption in countries where that is a problem.

    Both were put on the program months ago. But just before the event, the Australian government's ACSC (the Australian Cyber Security Centre) demanded they both be removed from the program.

    It's really kind of stupid. Australia has been benefiting a lot from whistleblowers in recent years -- exposing corruption and bad behavior on the part of the government -- and the government doesn't like it. It's cracking down on the whistleblowers and reporters who write their stories. My guess is that someone high up in ACSC saw the word "whistleblower" in the descriptions of those two speakers and talks and panicked.

    You can read details of their talks, including abstracts and slides, here. Of course, now everyone is writing about the story. The two censored speakers spent a lot of the day yesterday on the phone with reporters, and they have a bunch of TV and radio interviews today.

    I am at this conference, speaking on Wednesday morning (today in Australia, as I write this). ACSC used to have its own government cybersecurity conference. This is the first year it combined with AISA. I hope it's the last. And that AISA invites the two speakers back next year to give their censored talks.

    EDITED TO ADD (10/9): More on the censored talks, and my comments from the stage at the conference.

    Slashdot thread.

    CryptogramIllegal Data Center Hidden in Former NATO Bunker

    Interesting:

    German investigators said Friday they have shut down a data processing center installed in a former NATO bunker that hosted sites dealing in drugs and other illegal activities. Seven people were arrested.

    [...]

    Thirteen people aged 20 to 59 are under investigation in all, including three German and seven Dutch citizens, Brauer said.

    Authorities arrested seven of them, citing the danger of flight and collusion. They are suspected of membership in a criminal organization because of a tax offense, as well as being accessories to hundreds of thousands of offenses involving drugs, counterfeit money and forged documents, and accessories to the distribution of child pornography. Authorities didn't name any of the suspects.

    The data center was set up as what investigators described as a "bulletproof hoster," meant to conceal illicit activities from authorities' eyes.

    Investigators say the platforms it hosted included "Cannabis Road," a drug-dealing portal; the "Wall Street Market," which was one of the world's largest online criminal marketplaces for drugs, hacking tools and financial-theft wares until it was taken down earlier this year; and sites such as "Orange Chemicals" that dealt in synthetic drugs. A botnet attack on German telecommunications company Deutsche Telekom in late 2016 that knocked out about 1 million customers' routers also appears to have come from the data center in Traben-Trarbach, Brauer said.

    EDITED TO ADD (10/9): This is a better article.

    CryptogramCheating at Professional Poker

    Interesting story about someone who is almost certainly cheating at professional poker.

    But then I start to see things that seem so obvious, but I wonder whether they aren't just paranoia after hours and hours of digging into the mystery. Like the fact that he starts wearing a hat that has a strange bulge around the brim -- one that vanishes after the game when he's doing an interview in the booth. Is it a bone-conducting headset, as some online have suggested, sending him messages directly to his inner ear by vibrating on his skull? Of course it is! How could it be anything else? It's so obvious! Or the fact that he keeps his keys in the same place on the table all the time. Could they contain a secret camera that reads electronic sensors on the cards? I can't see any other possibility! It is all starting to make sense.

    In the end, though, none of this additional evidence is even necessary. The gaggle of online Jim Garrisons have simply picked up more momentum than is required and they can't stop themselves. The fact is, the mystery was solved a long time ago. It's just like De Niro's Ace Rothstein says in Casino when the yokel slot attendant gets hit for three jackpots in a row and tells his boss there was no way for him to know he was being scammed. "Yes there is," Ace replies. "An infallible way. They won." According to one poster on TwoPlusTwo, in 69 sessions on Stones Live, Postle has won in 62 of them, for a profit of over $250,000 in 277 hours of play. Given that he plays such a large number of hands, and plays such an erratic and, by his own admission, high-variance style, one would expect to see more, well, variance. His results just aren't possible even for the best players in the world, which, if he isn't cheating, he definitely is among. Add to this the fact that it has been alleged that Postle doesn't play in other nonstreamed live games at Stones, or anywhere else in the Sacramento area, and hasn't been known to play in any sizable no-limit games anywhere in a long time, and that he always picks up his chips and leaves as soon as the livestream ends. I don't really need any more evidence than that. If you know poker players, you know that this is the most damning evidence against him. Poker players like to play poker. If any of the poker players I know had the win rate that Mike Postle has, you'd have to pry them up from the table with a crowbar. The guy is making nearly a thousand dollars an hour! He should be wearing adult diapers so he doesn't have to take a bathroom break and cost himself $250.

    This isn't the first time someone has been accused of cheating because they are simply playing significantly better than computer simulations predict that even the best player would play.

    News article. BoingBoing post.

    Planet DebianAdnan Hodzic: Hello world!

    Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

    Planet DebianEnrico Zini: Fixed XSS issue on debtags.debian.org

    Thanks to Moritz Naumann who found the issues and wrote a very useful report, I fixed a number of Cross Site Scripting vulnerabilities on https://debtags.debian.org.

    The core of the issue was code like this in a Django view:

    def pkginfo_view(request, name):
        pkg = bmodels.Package.by_name(name)
        if pkg is None:
            return http.HttpResponseNotFound("Package %s was not found" % name)
        # …
    

    The default content-type of HttpResponseNotFound is text/html, and the string passed is the raw HTML with clearly no escaping, so this allows injection of arbitrary HTML/<script> code in the name variable.

    I was so used to Django doing proper auto-escaping that I missed this place in which it can't do that.

    There are various things that can be improved in that code.

    One could introduce escaping (and while one's at it, migrate the old % to format):

    from django.utils.html import escape
    
    def pkginfo_view(request, name):
        pkg = bmodels.Package.by_name(name)
        if pkg is None:
            return http.HttpResponseNotFound("Package {} was not found".format(escape(name)))
        # …
    

    Alternatively, set content_type to text/plain:

    def pkginfo_view(request, name):
        pkg = bmodels.Package.by_name(name)
        if pkg is None:
            return http.HttpResponseNotFound("Package {} was not found".format(name), content_type="text/plain")
        # …
    

    Even better, raise Http404:

    from django.http import Http404
    
    def pkginfo_view(request, name):
        pkg = bmodels.Package.by_name(name)
        if pkg is None:
            raise Http404(f"Package {name} was not found")
        # …
    

    Even better, use standard shortcuts and model functions if possible:

    from django.shortcuts import get_object_or_404
    
    def pkginfo_view(request, name):
        pkg = get_object_or_404(bmodels.Package, name=name)
        # …
    

    And finally, though not security related, it's about time to switch to class-based views:

    from django.views.generic import TemplateView

    class PkgInfo(TemplateView):
        template_name = "reports/package.html"

        def get_context_data(self, **kw):
            ctx = super().get_context_data(**kw)
            ctx["pkg"] = get_object_or_404(bmodels.Package, name=self.kwargs["name"])
            # …
            return ctx
    

    I proceeded with a review of the other Django sites I maintain in case I reproduced this mistake also there.

    Worse Than FailureCoded Smorgasbord: Driven to Substraction

    Deon (previously) has some good news. His contract at Initrode is over, and he’s on his way out the door. But before he goes, he wants to share more of his pain with us.

    You may remember that the StringManager class had a bunch of data type conversions to numbers and dates. Well guess what, there’s also a DateManager class, which is another 1600 lines of methods to handle dates.

    As you might expect, there are a pile of re-invented conversion and parsing methods which do the same thing as the built-in methods. But there’s also utility methods to help us handle date-related operations.

    		public static int subStractFromCurrentDate(System.DateTime dateTimeParm) 
    		{
    			//get now
    			System.DateTime now = System.DateTime.Now;
    			//now compare days
    			int daysDifference  = now.Day - dateTimeParm.Day;
    			return daysDifference ;
    		}

    Fun fact: the Day property returns the day of the month. So this method might “subStract”, but if these two dates fall in different months, we’re going to get unexpected results.
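    A corrected version would subtract the full DateTime values: the resulting TimeSpan counts whole days across month and year boundaries. A sketch of what was presumably intended (the method name is ours, not from the original submission):

```csharp
// Hypothetical fix: DateTime subtraction yields a TimeSpan, whose Days
// property is the whole-day difference even across month/year boundaries.
public static int DaysFromCurrentDate(System.DateTime dateTimeParm)
{
    return (System.DateTime.Now.Date - dateTimeParm.Date).Days;
}
```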

    One of the smaller string formatters included is this one:

    		public static string formatEnglishDate (System.DateTime inputDateTime) 
    		{
    			Hashtable _monthsInEnglishByMonthNumber = new Hashtable();
    			_monthsInEnglishByMonthNumber[1] = "January";
    			_monthsInEnglishByMonthNumber[2] = "February";
    			_monthsInEnglishByMonthNumber[3] = "March";
    			_monthsInEnglishByMonthNumber[4] = "April";
    			_monthsInEnglishByMonthNumber[5] = "May";
    			_monthsInEnglishByMonthNumber[6] = "June";
    			_monthsInEnglishByMonthNumber[7] = "July";
    			_monthsInEnglishByMonthNumber[8] = "August";
    			_monthsInEnglishByMonthNumber[9] = "September";
    			_monthsInEnglishByMonthNumber[10] = "October";
    			_monthsInEnglishByMonthNumber[11] = "November";
    			_monthsInEnglishByMonthNumber[12] = "December";
    
    			StringBuilder _dateBldr = new StringBuilder();
    			_dateBldr.Append(_monthsInEnglishByMonthNumber[inputDateTime.Month].ToString());
    			_dateBldr.Append(" ");
    			_dateBldr.Append(inputDateTime.Day.ToString());
    			_dateBldr.Append(", ");
    			_dateBldr.Append(inputDateTime.Year.ToString());
    
    			return _dateBldr.ToString();
    		}

    Among all the bad things implied here, I really like that they used a Hashtable as an array.
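
    Also worth noting: the framework ships with the English month names. Assuming the goal was an invariant-English date like “October 15, 2019”, the entire method collapses to one format call:

    		using System;
    		using System.Globalization;

    		// Hypothetical replacement: "MMMM d, yyyy" with the invariant culture
    		// yields the English month name, no Hashtable required.
    		public static string FormatEnglishDate(DateTime dt)
    		{
    			return dt.ToString("MMMM d, yyyy", CultureInfo.InvariantCulture);
    		}
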

            public static bool  currentDateIsFirstBusinessDateOfTheMonth 
                                (
                                    Hashtable inputHolidayHash
                                )
            {
                /*
                 * If current date is not a business date, then it cannot
                 * be the first business date of the month.
                 */
                DateTime _currentDate = new DateTime(DateTime.Now.Year, DateTime.Now.Month, DateTime.Now.Day);
                _currentDate =
                    new DateTime(2010, 5, 6);
                if (
                        _currentDate.DayOfWeek == DayOfWeek.Saturday
                        ||
                        _currentDate.DayOfWeek == DayOfWeek.Sunday
                        ||
                        inputHolidayHash[_currentDate] != null
                    )
                    return false;
    
                /*
                 * If current date is a business date, and if it is also
                 * the first calendar date of the month, then the
                 * current date is the first business date of the month.
                 */
    
                DateTime _firstDayOfTheMonth =
                    _currentDate.AddDays(1 - _currentDate.Day);
                if (_firstDayOfTheMonth == _currentDate)
                    return true;
    
                /*
                 * If current date is a business date, but is not the 1st calendar
                 * date of the month, and, if, in stepping back day by day 
                 * from the current date,  we encounter a business day before 
                 * encountering the last calendar day of the preceding month, then the 
                 * current date is NOT the first business date of the month.
                */
                DateTime _tempDate = _currentDate.AddDays(-1);
                while (_tempDate >= _firstDayOfTheMonth)
                {
                    if (
                            _tempDate.DayOfWeek != DayOfWeek.Saturday
                            &&
                            _tempDate.DayOfWeek != DayOfWeek.Sunday
                            &&
                            inputHolidayHash[_tempDate] == null
                        )
                        return false;
                    _tempDate = _tempDate.AddDays(-1);
                }
                /*
                 * * If current date is a business date, but is not the 1st calendar
                 * date,and, if, in stepping back day by day from the current date, 
                 * we encounter no business day before encountering the 
                 * 1st calendar day of the month, then the current date 
                 * IS the first business date of the month.
                */
                return true;
            }

    This one has loads of comments, and honestly, I still have no idea what it’s doing. If it’s checking the current date, why does it need to cycle through other days? Not that the answer matters: while debugging, somebody hard-coded a test date (new DateTime(2010, 5, 6)) and left it in, so the “current date” is always May 6, 2010.
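
    Stripped of the debugging leftovers, the question being asked has a much shorter answer: today is the first business day of the month if it’s a business day and no earlier day this month was one. A sketch of that logic (using a HashSet&lt;DateTime&gt; of holidays in place of the original Hashtable):

    		using System;
    		using System.Collections.Generic;

    		// Hypothetical cleanup: a business day is a non-weekend, non-holiday day;
    		// walk forward from the 1st and see if any earlier business day exists.
    		public static bool IsFirstBusinessDayOfMonth(DateTime date, HashSet<DateTime> holidays)
    		{
    			bool IsBusinessDay(DateTime d) =>
    				d.DayOfWeek != DayOfWeek.Saturday
    				&& d.DayOfWeek != DayOfWeek.Sunday
    				&& !holidays.Contains(d.Date);

    			if (!IsBusinessDay(date)) return false;
    			for (var d = new DateTime(date.Year, date.Month, 1); d < date.Date; d = d.AddDays(1))
    				if (IsBusinessDay(d)) return false;
    			return true;
    		}
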

    I’m not the only one getting confused. Check out this comment:

    
            //@??
            public static DateTime givenPeriodEndDateFindLastBusinessDateInPeriod
                                    (
                                        DateTime inputPeriodEndDate
                                        , Hashtable inputHolidayHash
                                    )
            {
              ...
            }

    And if you’re missing good old StringManager, don’t worry, we use it here:

        /**
    		 * @description format date
    		 * */
    		public static string formatYYYYMMDD (System.DateTime inputDateTime) 
    		{
    			StringBuilder _bldr = new StringBuilder();
    			_bldr.Append(inputDateTime.Year.ToString());
    			_bldr.Append(initrode.utilities.StringManager.Fill(inputDateTime.Month.ToString(),
    															"0",       // Zero-Fill
    															true,	   // Fill from left
    															2));        // String length
    
    			_bldr.Append(initrode.utilities.StringManager.Fill(inputDateTime.Day.ToString(),
    															"0",       // Zero-Fill
    															true,	   // Fill from left
    															2));       // String length
    			return _bldr.ToString();
    		}

    And all of this is from just about the first third of the code. I’m trying to keep to shorter methods before posting the whole blob of ugly. So with that in mind, what if you wanted to compare dates?

    		public static DateComparison date1ComparedToDate2(DateTime inputDate1, 
    															DateTime inputDate2)
    		{
    			if (inputDate1.Year > inputDate2.Year) return DateComparison.gt;
    			if (inputDate1.Year < inputDate2.Year) return DateComparison.lt;
    			if (inputDate1.DayOfYear > inputDate2.DayOfYear) return DateComparison.gt;
    			if (inputDate1.DayOfYear < inputDate2.DayOfYear) return DateComparison.lt;
    			return DateComparison.eq;
    		
    		}

    Oh yeah, not only do we break the dates up into parts to compare them, we also have a custom enumerated type to represent the result of the comparison. And it’s not just dates, we do it with times, too.

    		public static DateComparison timestamp1ComparedToTimestamp2(DateTime inputTimestamp1, 
    																	DateTime inputTimestamp2)
    		{
    			if (inputTimestamp1.Year > inputTimestamp2.Year) return DateComparison.gt;
    			if (inputTimestamp1.Year < inputTimestamp2.Year) return DateComparison.lt;
    			if (inputTimestamp1.DayOfYear > inputTimestamp2.DayOfYear) return DateComparison.gt;
    			if (inputTimestamp1.DayOfYear < inputTimestamp2.DayOfYear) return DateComparison.lt;
    			if (inputTimestamp1.Hour > inputTimestamp2.Hour) return DateComparison.gt;
    			if (inputTimestamp1.Hour < inputTimestamp2.Hour) return DateComparison.lt;
    			if (inputTimestamp1.Minute > inputTimestamp2.Minute) return DateComparison.gt;
    			if (inputTimestamp1.Minute < inputTimestamp2.Minute) return DateComparison.lt;
    			if (inputTimestamp1.Second > inputTimestamp2.Second) return DateComparison.gt;
    			if (inputTimestamp1.Second < inputTimestamp2.Second) return DateComparison.lt;
    			if (inputTimestamp1.Millisecond > inputTimestamp2.Millisecond) return DateComparison.gt;
    			if (inputTimestamp1.Millisecond < inputTimestamp2.Millisecond) return DateComparison.lt;
    			return DateComparison.eq;
    		
    		}
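
    Both of these if-ladders reduce to the comparison DateTime has supported since .NET 1.0. If you must keep the custom enum, a sketch mapping onto it:

    		using System;

    		// Hypothetical replacement: DateTime.Compare returns negative, zero,
    		// or positive, which maps directly onto the article's lt/eq/gt enum.
    		// For the date-only variant, compare a.Date to b.Date instead.
    		public static DateComparison Compare(DateTime a, DateTime b)
    		{
    			int c = DateTime.Compare(a, b);
    			return c < 0 ? DateComparison.lt
    				 : c > 0 ? DateComparison.gt
    				 : DateComparison.eq;
    		}
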

    Initrode has a bright future with this product. Deon adds:

    The contractor who is replacing me has rolled his own piece of software to try and replace Entity Framework because his version is “better” despite being written around a decade ago, so I’m sure he’ll fit right in.

    The future’s so bright I’ve gotta wear shades.

    Here’s the full block, if you want to suffer through that:

    /*
      Changes Log:
    
      @01 - 01/23/2009 - {some initials were here} - Improve performance of approval screens.
    */
    using System;
    using System.Collections;
    using System.Globalization; 
    using System.Text;
    
    namespace initrode.utilities
    {
    	/// <summary>
    	/// Summary description for DateManager.
    	/// </summary>
    	public class DateManager
    	{
    		public enum	DateComparison {gt = 1, eq = 0, lt = -1}
            public enum DateTimeParts
            {
                dateOnly
                , dateAndTime
                , dateTimeAndAMOrPM
            }
    						
    		/*
    			* @description return the days difference from today date
    			* @parm int amount of days in the past
    			* @return int the amount of days difference
    			* 
    			**/
    		public static int subStractFromCurrentDate(System.DateTime dateTimeParm) 
    		{
    			//get now
    			System.DateTime now = System.DateTime.Now;
    			//now compare days
    			int daysDifference  = now.Day - dateTimeParm.Day;
    			return daysDifference ;
    		}
    		/**
    		 * @description format date
    		 * */
    		public static string format (System.DateTime dateTime, string format) 
    		{
    			string dateFormat;
    			dateFormat = dateTime.ToString(format,DateTimeFormatInfo.InvariantInfo);
    			return dateFormat;
    		}
            public static DateTime  convertDateStringInSlashedFormatToDateTime
                                    (
                                        string inputDateStringInSlashedFormat
                                    )
            {
                inputDateStringInSlashedFormat =
                    initrode.utilities.StringManager.StripWhitespace
                    (
                        inputDateStringInSlashedFormat
                    );
                ArrayList _dateParts =
                    initrode.utilities.StringManager.splitIntoArrayList
                    (
                        inputDateStringInSlashedFormat
                        ,@"/"
                    );
                if (_dateParts.Count != 3) return new DateTime(1900, 1, 1);
    
                string _monthString =
                    initrode.utilities.StringManager.StripWhitespace
                    (
                        _dateParts[0].ToString()
                    );
                if (
                        initrode.utilities.StringManager.IsValidNumber
                        (
                            _monthString
                        ) == false
                    )
                    new DateTime(1900, 1, 1);
    
                string _dayString =
                    initrode.utilities.StringManager.StripWhitespace
                    (
                        _dateParts[1].ToString()
                    );
                if (
                        initrode.utilities.StringManager.IsValidNumber
                        (
                            _dayString
                        ) == false
                    )
                    new DateTime(1900, 1, 1);
    
                string _yearString =
                    initrode.utilities.StringManager.StripWhitespace
                    (
                        _dateParts[2].ToString()
                    );
                if (
                        initrode.utilities.StringManager.IsValidNumber
                        (
                            _yearString
                        ) == false
                    )
                    new DateTime(1900, 1, 1);
                return new DateTime
                            (
                                Convert.ToInt32
                                (
                                    _yearString
                                )
                                , Convert.ToInt32
                                (
                                    _monthString
                                )
                                , Convert.ToInt32
                                (
                                    _dayString
                                )
                            );
            }            
    		/**
    		 * @description format date
    		 * */
    		public static string formatEnglishDate (System.DateTime inputDateTime) 
    		{
    			Hashtable _monthsInEnglishByMonthNumber = new Hashtable();
    			_monthsInEnglishByMonthNumber[1] = "January";
    			_monthsInEnglishByMonthNumber[2] = "February";
    			_monthsInEnglishByMonthNumber[3] = "March";
    			_monthsInEnglishByMonthNumber[4] = "April";
    			_monthsInEnglishByMonthNumber[5] = "May";
    			_monthsInEnglishByMonthNumber[6] = "June";
    			_monthsInEnglishByMonthNumber[7] = "July";
    			_monthsInEnglishByMonthNumber[8] = "August";
    			_monthsInEnglishByMonthNumber[9] = "September";
    			_monthsInEnglishByMonthNumber[10] = "October";
    			_monthsInEnglishByMonthNumber[11] = "November";
    			_monthsInEnglishByMonthNumber[12] = "December";
    
    			StringBuilder _dateBldr = new StringBuilder();
    			_dateBldr.Append(_monthsInEnglishByMonthNumber[inputDateTime.Month].ToString());
    			_dateBldr.Append(" ");
    			_dateBldr.Append(inputDateTime.Day.ToString());
    			_dateBldr.Append(", ");
    			_dateBldr.Append(inputDateTime.Year.ToString());
    
    			return _dateBldr.ToString();
    		}
            public static bool currentDateIsFirstSaturdayOfTheMonth()
            {
                /*
                 * If current date is not a business date, then it cannot
                 * be the first business date of the month.
                 */
                DateTime _currentDate = new DateTime(DateTime.Now.Year, DateTime.Now.Month, DateTime.Now.Day);
                if (
                        _currentDate.DayOfWeek == DayOfWeek.Saturday
                        &&
                        _currentDate.Day <= 7
                    )
                    return true;
    
                return false;
            }
    
            public static bool  currentDateIsFirstBusinessDateOfTheMonth 
                                (
                                    Hashtable inputHolidayHash
                                )
            {
                /*
                 * If current date is not a business date, then it cannot
                 * be the first business date of the month.
                 */
                DateTime _currentDate = new DateTime(DateTime.Now.Year, DateTime.Now.Month, DateTime.Now.Day);
                _currentDate =
                    new DateTime(2010, 5, 6);
                if (
                        _currentDate.DayOfWeek == DayOfWeek.Saturday
                        ||
                        _currentDate.DayOfWeek == DayOfWeek.Sunday
                        ||
                        inputHolidayHash[_currentDate] != null
                    )
                    return false;
    
                /*
                 * If current date is a business date, and if it is also
                 * the first calendar date of the month, then the
                 * current date is the first business date of the month.
                 */
    
                DateTime _firstDayOfTheMonth =
                    _currentDate.AddDays(1 - _currentDate.Day);
                if (_firstDayOfTheMonth == _currentDate)
                    return true;
    
                /*
                 * If current date is a business date, but is not the 1st calendar
                 * date of the month, and, if, in stepping back day by day 
                 * from the current date,  we encounter a business day before 
                 * encountering the last calendar day of the preceding month, then the 
                 * current date is NOT the first business date of the month.
                */
                DateTime _tempDate = _currentDate.AddDays(-1);
                while (_tempDate >= _firstDayOfTheMonth)
                {
                    if (
                            _tempDate.DayOfWeek != DayOfWeek.Saturday
                            &&
                            _tempDate.DayOfWeek != DayOfWeek.Sunday
                            &&
                            inputHolidayHash[_tempDate] == null
                        )
                        return false;
                    _tempDate = _tempDate.AddDays(-1);
                }
                /*
                 * * If current date is a business date, but is not the 1st calendar
                 * date,and, if, in stepping back day by day from the current date, 
                 * we encounter no business day before encountering the 
                 * 1st calendar day of the month, then the current date 
                 * IS the first business date of the month.
                */
                return true;
            }
            //@??
            public static DateTime givenPeriodEndDateFindLastBusinessDateInPeriod
                                    (
                                        DateTime inputPeriodEndDate
                                        , Hashtable inputHolidayHash
                                    )
            {
                if (inputHolidayHash[inputPeriodEndDate] == null)
                    return inputPeriodEndDate;
                DateTime _tempDate = inputPeriodEndDate.AddDays(-1);
    
                while (
                            (
                                _tempDate.DayOfWeek == DayOfWeek.Saturday
                                ||
                                _tempDate.DayOfWeek == DayOfWeek.Sunday
                            )
                            ||
                            inputHolidayHash[_tempDate] != null
                        )
                {
                    _tempDate = _tempDate.AddDays(-1);
                }
                return _tempDate;
            }
    
    		/**
    		 * @description format date
    		 * */
            public static string convertDateTimeToSQLDate
                                    (
                                        DateTime inputDateTime
                                    )
            {
                StringBuilder _sqlDateBldr =
                    new StringBuilder();
                _sqlDateBldr.AppendFormat
                (
                    "{0}/{1}/{2}"
                    ,inputDateTime.Month.ToString()
                    ,inputDateTime.Day.ToString()
                    ,inputDateTime.Year.ToString()
                );
                return _sqlDateBldr.ToString();
            }
            /**
             * @description format date
             * */
            public static string convertDateTimeToDB2Timestamp
                                    (
                                        DateTime inputDateTime
                                    )
            {
                StringBuilder _sqlDateBldr =
                    new StringBuilder();
                _sqlDateBldr.AppendFormat
                (
                    "{0}-{1}-{2}.{3}:{4}:{5}.{6}"
                    , inputDateTime.Year.ToString()
                    ,   initrode.utilities.StringManager.Fill
                        (
                            inputDateTime.Month.ToString()
                            ,"0"
                            ,true  //boolFromLeft
                            ,2
                        )
                    ,   initrode.utilities.StringManager.Fill
                        (
                            inputDateTime.Day.ToString()
                            ,"0"
                            ,true  //boolFromLeft
                            ,2
                        )
                    , initrode.utilities.StringManager.Fill
                        (
                            inputDateTime.Hour.ToString()
                            ,"0"
                            ,true  //boolFromLeft
                            ,2
                        )
                    , initrode.utilities.StringManager.Fill
                        (
                            inputDateTime.Minute.ToString()
                            ,"0"
                            ,true  //boolFromLeft
                            ,2
                        )
                    ,   initrode.utilities.StringManager.Fill
                        (
                            inputDateTime.Second.ToString()
                            , "0"
                            , true  //boolFromLeft
                            , 2
                        )
                    ,   initrode.utilities.StringManager.Fill
                        (
                            inputDateTime.Millisecond.ToString()
                            , "0"
                            , true  //boolFromLeft
                            , 2
                        )
                );
                return _sqlDateBldr.ToString();
            }
    
    		/**
    		 * @description format date
    		 * */
    		public static string formatYYYYMMDD (System.DateTime inputDateTime) 
    		{
    			StringBuilder _bldr = new StringBuilder();
    			_bldr.Append(inputDateTime.Year.ToString());
    			_bldr.Append(initrode.utilities.StringManager.Fill(inputDateTime.Month.ToString(),
    															"0",       // Zero-Fill
    															true,	   // Fill from left
    															2));        // String length
    
    			_bldr.Append(initrode.utilities.StringManager.Fill(inputDateTime.Day.ToString(),
    															"0",       // Zero-Fill
    															true,	   // Fill from left
    															2));       // String length
    			return _bldr.ToString();
    		}
    		//@01
    		public static DateTime givenDateGetPeriodStartDate(DateTime inputDate1)
    		{
    			if (inputDate1.Day > 15) return new DateTime(inputDate1.Year,inputDate1.Month,16);
    			return new DateTime(inputDate1.Year,inputDate1.Month,1);
    		}
    		//@01
    		public static DateTime givenDateGetPeriodEndDate(DateTime inputDate1)
    		{
    			if (inputDate1.Day < 16) return new DateTime(inputDate1.Year,inputDate1.Month,15);
    			inputDate1 = inputDate1.AddMonths(1);
    			inputDate1 = new DateTime(inputDate1.Year,inputDate1.Month,1).AddDays(-1);
    			return inputDate1;
    		}
    
    		/**
    		 * @description add days to a date
    		 * */
    		public static DateTime addDays (DateTime dateTime, int days) 
    		{
    			DateTime newDate = dateTime.AddDays(days);
    			return newDate;
    		}
    		/** 
    		 * @description get first day of the  month from mm-dd-yyyy formatted string
    		 * **/
    		public static DateTime getFirstDayofTheMonthFromMM_DD_YYYYFormattedString(
    									string inputDateTimeInMM_DD_YYYYFormatString) 
    		{
    			if (initrode.utilities.StringManager.IsValidDateInMM_DD_YYYYFormat(inputDateTimeInMM_DD_YYYYFormatString) == false)
    			{
    				return initrode.utilities.DateManager.getFirstDayofTheCurrentMonth();
    			}
    			return initrode.utilities.DateManager.getFirstDayofTheMonth(Convert.ToDateTime(inputDateTimeInMM_DD_YYYYFormatString));
    		}
    
    
    		/** 
    		 * @description get first day of the  month
    		 * **/
    		public static DateTime getFirstDayofTheMonth(System.DateTime inputDateTime) 
    		{
    			return new DateTime(inputDateTime.Year,
    								inputDateTime.Month,
    								1);
    		}
    
            public static DateTime  convertTimestampOrDateInAnyStringFormatToDateTime
                                    (
                                        string inputTimestampOrDateInAnyStringFormat
                                    )
            {
                DateTime _returnDateTime = 
                    new DateTime(1900, 1, 1);
                ArrayList _splitDateTimeParts = new ArrayList();
                inputTimestampOrDateInAnyStringFormat =
                    initrode.utilities.StringManager.StripWhitespaceFromEnds
                    (
                        inputTimestampOrDateInAnyStringFormat
                    );
                DateTimeParts _myDateTimeParts = DateTimeParts.dateOnly;
                string _timeParts = "";
                string _amOrPMParts = "";
                if (inputTimestampOrDateInAnyStringFormat.Contains(" "))
                {
                    _splitDateTimeParts =
                        initrode.utilities.StringManager.splitIntoArrayList
                        (
                            inputTimestampOrDateInAnyStringFormat
                            , " "
                        );
                }
                else
                {
                    _splitDateTimeParts.Add
                    (
                        inputTimestampOrDateInAnyStringFormat
                    );
                }
                DateTime _dateOnly = new DateTime(1900, 1, 1);
                switch (_splitDateTimeParts.Count)
                {
                    case 1:
                        _myDateTimeParts = DateTimeParts.dateOnly;
                        _dateOnly =
                            initrode.utilities.DateManager.convertDateInAnyStringFormatIntoDateTime
                            (
                                inputTimestampOrDateInAnyStringFormat
                            );
                        break;
                    case 2:
                        _myDateTimeParts = DateTimeParts.dateAndTime;
                        _dateOnly =
                            initrode.utilities.DateManager.convertDateInAnyStringFormatIntoDateTime
                            (
                                _splitDateTimeParts[0].ToString()
                            );
                        _timeParts =
                            initrode.utilities.StringManager.StripWhitespace
                            (
                                _splitDateTimeParts[1].ToString()
                            );
                        break;
                    case 3:
                        _myDateTimeParts = DateTimeParts.dateTimeAndAMOrPM;
                        _dateOnly =
                            initrode.utilities.DateManager.convertDateInAnyStringFormatIntoDateTime
                            (
                                _splitDateTimeParts[0].ToString()
                            );
                        _timeParts =
                            initrode.utilities.StringManager.StripWhitespace
                            (
                                _splitDateTimeParts[1].ToString()
                            );
                        _amOrPMParts =
                            initrode.utilities.StringManager.StripWhitespace
                            (
                                _splitDateTimeParts[2].ToString()
                            ).ToUpper();
    
                        break;
                    default:
                        return _returnDateTime;
                }
                if (_myDateTimeParts == DateTimeParts.dateOnly) return _dateOnly;
                if (_dateOnly == new DateTime(1900, 1, 1)) return _returnDateTime;
    
                if (
                        _myDateTimeParts == DateTimeParts.dateTimeAndAMOrPM
                        &&
                        _amOrPMParts.CompareTo("AM") != 0
                        &&
                        _amOrPMParts.CompareTo("PM") != 0
                    ) return _returnDateTime;
                    
                switch (_myDateTimeParts)
                {
                    case DateTimeParts.dateAndTime:
                    return  initrode.utilities.DateManager.convertTimeInStringFormatAlongWithDateOnlyDateTimeIntoDateTime
                            (
                                _timeParts //string inputStrTime
                                , false //bool inputAMOrPMFormat
                                , "" //string inputAMOrPM
                                , _dateOnly //DateTime inputDateOnlyDateTime
                            );
                    case DateTimeParts.dateTimeAndAMOrPM:
                    return initrode.utilities.DateManager.convertTimeInStringFormatAlongWithDateOnlyDateTimeIntoDateTime
                            (
                                _timeParts //string inputStrTime
                                , true //bool inputAMOrPMFormat
                                , _amOrPMParts //string inputAMOrPM
                                , _dateOnly //DateTime inputDateOnlyDateTime
                            );
                }
                return _returnDateTime;
            }
            public static DateTime convertTimeInStringFormatAlongWithDateOnlyDateTimeIntoDateTime
                                    (
                                        string inputStrTime
                                        ,bool inputAMOrPMFormat
                                        ,string inputAMOrPM
                                        ,DateTime inputDateOnlyDateTime
                                    )
            {
                DateTime _returnDateTime = inputDateOnlyDateTime;
                if (inputStrTime.Contains(":") == false) return _returnDateTime;
    
                int _intMillisecondsPart = 0;
                if (inputStrTime.Contains(".") == true)
                {
                    ArrayList _millisecondsAndTimeParts =
                        initrode.utilities.StringManager.splitIntoArrayList
                        (
                            inputStrTime
                            ,@"."
                        );
                    if (_millisecondsAndTimeParts.Count != 2) return _returnDateTime;
                    string _strMillisecondsPart =
                        initrode.utilities.StringManager.StripWhitespace
                        (
                            _millisecondsAndTimeParts[1].ToString()
                        );
                    if (initrode.utilities.StringManager.IsValidNumber(_strMillisecondsPart) == true)
                        _intMillisecondsPart =
                            Convert.ToInt32
                            (
                                _strMillisecondsPart
                            );
                    inputStrTime =
                        initrode.utilities.StringManager.StripWhitespace
                        (
                            _millisecondsAndTimeParts[0].ToString()
                        );
                }
                ArrayList _timeParts =
                    initrode.utilities.StringManager.splitIntoArrayList
                    (
                        inputStrTime
                        ,":"
                    );
                if (_timeParts.Count != 3) return _returnDateTime;
    
    
                string _strHoursPart =
                    initrode.utilities.StringManager.StripWhitespace
                    (
                        _timeParts[0].ToString()
                    );
                string _strMinutesPart =
                    initrode.utilities.StringManager.StripWhitespace
                    (
                        _timeParts[1].ToString()
                    );
                string _strSecondsPart =
                    initrode.utilities.StringManager.StripWhitespace
                    (
                        _timeParts[2].ToString()
                    );
                if (
                        initrode.utilities.StringManager.IsValidNumber
                        (
                            _strHoursPart
                        ) == false
                        ||
                        initrode.utilities.StringManager.IsValidNumber
                        (
                            _strMinutesPart
                        ) == false
                        ||
                        initrode.utilities.StringManager.IsValidNumber
                        (
                            _strSecondsPart
                        ) == false
                    ) return _returnDateTime;
    
                int _intHoursPart =
                    Convert.ToInt32
                    (
                        _strHoursPart
                    );
                int _intMinutesPart =
                    Convert.ToInt32
                    (
                        _strMinutesPart
                    );
                int _intSecondsPart =
                    Convert.ToInt32
                    (
                        _strSecondsPart
                    );
    
                if (_intHoursPart > 23) return _returnDateTime;
                if (inputAMOrPMFormat == true)
                {
                    if (_intHoursPart > 12) return _returnDateTime;
                }
                if (_intMinutesPart > 59) return _returnDateTime;
                if (_intSecondsPart > 59) return _returnDateTime;
    
                if (inputAMOrPMFormat == true)
                {
                    if (inputAMOrPM.CompareTo("PM") == 0)
                    {
                        //1-11 PM map to hours 13-23; 12 PM is already hour 12.
                        //(Unconditionally adding 12 would turn 12 PM into the
                        //invalid hour 24.)
                        if (_intHoursPart != 12)
                        {
                            _intHoursPart += 12;
                        }
                    }
                    else if (inputAMOrPM.CompareTo("AM") == 0)
                    {
                        //12 AM is midnight, i.e. hour 0 of the same day.
                        if (_intHoursPart == 12)
                        {
                            _intHoursPart = 0;
                        }
                    }
                }
                _returnDateTime =
                    new DateTime
                        (
                            inputDateOnlyDateTime.Year
                            , inputDateOnlyDateTime.Month
                            , inputDateOnlyDateTime.Day
                            , _intHoursPart
                            , _intMinutesPart
                            , _intSecondsPart
                            , _intMillisecondsPart
                        );
                return _returnDateTime;
            }
            public static DateTime convertDateInAnyStringFormatIntoDateTime
                                    (
                                        string inputDateInAnyStringFormat   
                                    )
            {
                DateTime _returnDateTime = new DateTime(1900, 1, 1);
                inputDateInAnyStringFormat =
                    initrode.utilities.StringManager.StripWhitespace
                    (
                        inputDateInAnyStringFormat
                    );
                ArrayList _dateParts = new ArrayList();
                string _strMonth = "";
                string _strDay = "";
                string _strYear = "";
                if (inputDateInAnyStringFormat.Contains("/") == true)
                {
                    _dateParts =
                        initrode.utilities.StringManager.splitIntoArrayList
                        (
                            inputDateInAnyStringFormat
                            ,@"/"
                        );
                    if (_dateParts.Count != 3) return _returnDateTime;
                    _strMonth =
                            initrode.utilities.StringManager.StripWhitespace
                            (
                                _dateParts[0].ToString()
                            );
                    _strDay =
                            initrode.utilities.StringManager.StripWhitespace
                            (
                                _dateParts[1].ToString()
                            );
                    _strYear =
                            initrode.utilities.StringManager.StripWhitespace
                            (
                                _dateParts[2].ToString()
                            );
                    return initrode.utilities.DateManager.convertDateTimeStringPartsIntoDateTime
                            (
                                _strMonth
                                ,_strDay
                                ,_strYear
                            );
                }
                if (inputDateInAnyStringFormat.Contains("-") == true)
                {
                    _dateParts =
                        initrode.utilities.StringManager.splitIntoArrayList
                        (
                            inputDateInAnyStringFormat
                            , @"-"
                        );
                    if (_dateParts.Count != 3) return _returnDateTime;
                    _strYear =
                            initrode.utilities.StringManager.StripWhitespace
                            (
                                _dateParts[0].ToString()
                            );
                    _strMonth =
                            initrode.utilities.StringManager.StripWhitespace
                            (
                                _dateParts[1].ToString()
                            );
                    _strDay =
                            initrode.utilities.StringManager.StripWhitespace
                            (
                                _dateParts[2].ToString()
                            );
                    return initrode.utilities.DateManager.convertDateTimeStringPartsIntoDateTime
                            (
                                _strMonth
                                , _strDay
                                , _strYear
                            );
                }
                if (inputDateInAnyStringFormat.Length == 8)
                {
                    _strYear =
                        inputDateInAnyStringFormat.Substring(0, 4);
                    _strMonth =
                        inputDateInAnyStringFormat.Substring(4, 2);
                    _strDay =
                        inputDateInAnyStringFormat.Substring(6, 2);
                    return initrode.utilities.DateManager.convertDateTimeStringPartsIntoDateTime
                            (
                                _strMonth
                                , _strDay
                                , _strYear
                            );
                }
                if (inputDateInAnyStringFormat.Length == 6)
                {
                    _strYear =
                        inputDateInAnyStringFormat.Substring(0, 2);
                    _strMonth =
                        inputDateInAnyStringFormat.Substring(2, 2);
                    _strDay =
                        inputDateInAnyStringFormat.Substring(4, 2);
                    return initrode.utilities.DateManager.convertDateTimeStringPartsIntoDateTime
                            (
                                _strMonth
                                , _strDay
                                , _strYear
                            );
                }
                return _returnDateTime;
            }
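            /**
             * Illustrative usage (assumed call sites, not part of this class):
             * the parser above accepts MM/DD/YYYY, YYYY-MM-DD, YYYYMMDD and
             * YYMMDD inputs; anything else yields the 1900-01-01 sentinel.
             *
             *     convertDateInAnyStringFormatIntoDateTime("03/05/2024") //-> 2024-03-05
             *     convertDateInAnyStringFormatIntoDateTime("2024-03-05") //-> 2024-03-05
             *     convertDateInAnyStringFormatIntoDateTime("20240305")   //-> 2024-03-05
             *     convertDateInAnyStringFormatIntoDateTime("bogus")      //-> 1900-01-01
             * **/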
            public static DateTime convertDateTimeStringPartsIntoDateTime
                                    (
                                        string inputStrMonth
                                        , string inputStrDay
                                        , string inputStrYear
                                    )
            {
                DateTime _returnDateTime = new DateTime(1900, 1, 1);
                if (
                        initrode.utilities.StringManager.IsValidNumber
                        (
                            inputStrMonth
                        ) == false
                        ||
                        initrode.utilities.StringManager.IsValidNumber
                        (
                            inputStrDay
                        ) == false
                        ||
                        initrode.utilities.StringManager.IsValidNumber
                        (
                            inputStrYear
                        ) == false
                    ) return _returnDateTime;
    
                int _intYear =
                    Convert.ToInt32
                    (
                        inputStrYear
                    );
                if (_intYear <= 100)
                {
                    //Two-digit year pivot: 90-99 map to 19xx, 00-89 to 20xx.
                    if (_intYear >= 90)
                    {
                        _intYear += 1900;
                    }
                    else
                    {
                        _intYear += 2000;
                    }
                }
                inputStrYear = _intYear.ToString();
    
                inputStrMonth = 
                    initrode.utilities.StringManager.Fill
                    (
                        inputStrMonth
                        ,"0"
                        ,true //fromLeft
                        ,2
                    );
    
                inputStrDay = 
                    initrode.utilities.StringManager.Fill
                    (
                        inputStrDay
                        ,"0"
                        ,true //fromLeft
                        ,2
                    );
    
                if (
                        initrode.utilities.StringManager.IsValidDate
                        (
                            inputStrMonth
                            ,inputStrDay
                            ,inputStrYear
                        ) == false
                    ) return _returnDateTime;
    
                _returnDateTime =
                    new DateTime
                        (
                            Convert.ToInt32
                            (
                                inputStrYear
                            )
                            ,   Convert.ToInt32
                                (
                                    inputStrMonth
                                )
                            , Convert.ToInt32
                                (
                                    inputStrDay
                                )
                        );
                return _returnDateTime;
            }
            public static DateTime convertDateIn_MM_Slash_DD_Slash_YYYY_FormatToDateTime
                                    (
                                        string inputDateIn_MM_Slash_DD_Slash_YYYY_Format
                                    )
            {
                if (initrode.utilities.StringManager.IsValidDateInMM_DD_YYYYFormat(inputDateIn_MM_Slash_DD_Slash_YYYY_Format) == false)
                    return new DateTime(1900, 1, 1); //avoid culture-dependent Convert.ToDateTime parsing
                ArrayList _dateParts =
                    initrode.utilities.StringManager.splitIntoArrayList
                    (
                        inputDateIn_MM_Slash_DD_Slash_YYYY_Format
                        ,"/"
                    );
                string _mm = _dateParts[0].ToString();
                if (_mm.Substring(0, 1).CompareTo("0") == 0)
                    _mm = _mm.Substring(1, 1);
                string _dd = _dateParts[1].ToString();
                if (_dd.Substring(0, 1).CompareTo("0") == 0)
                    _dd = _dd.Substring(1, 1);
                string _yyyy = _dateParts[2].ToString();
    
                return new DateTime
                            (
                                    Convert.ToInt32
                                    (
                                        _yyyy
                                    )
                                ,   Convert.ToInt32
                                    (
                                        _mm
                                    )
                                ,   Convert.ToInt32
                                    (
                                        _dd
                                    )
                            );
            }
            public static bool  isInputtedDateTheLastBusinessDateOfTheMonth
                                (
                                    DateTime inputDateTime
                                    , Hashtable inputHolidayHash
                                )
            {
                inputDateTime =
                    new DateTime
                        (
                            inputDateTime.Year
                            , inputDateTime.Month
                            , inputDateTime.Day
                        );
    
                DateTime _lastBusinessDate =
                    initrode.utilities.DateManager.getLastBusinessDateOfMonthForInputtedDate
                    (
                        inputDateTime
                        ,inputHolidayHash
                    );
                if (
                        inputDateTime.Year == _lastBusinessDate.Year
                        && inputDateTime.Month == _lastBusinessDate.Month
                        && inputDateTime.Day == _lastBusinessDate.Day
                    )
                    return true;
                return false;
            }
    
            public static DateTime  getLastBusinessDateOfMonthForInputtedDate
                                    (
                                        DateTime inputDateTime
                                        , Hashtable inputHolidayHash
                                    )
            {
                inputDateTime = 
                    new DateTime
                        (
                            inputDateTime.Year
                            ,inputDateTime.Month
                            ,inputDateTime.Day
                        );
                DateTime _lastBusinessDate;
                if (
                        initrode.utilities.DateManager.isInputtedDateABusinessDate
                        (
                            inputDateTime
                            , inputHolidayHash
                        ) == true
                    )
                    _lastBusinessDate =
                        inputDateTime;
                else
                    _lastBusinessDate =
                        initrode.utilities.DateManager.getNextBusinessDateFromInputtedDate
                        (
                            inputDateTime
                            , inputHolidayHash
                        );
                if (_lastBusinessDate.Month != inputDateTime.Month)
                {
                    //The next business date rolled into the following month, so
                    //the input month has no business dates on or after the input
                    //date: the month's last business date lies before it.
                    //(At this point the input date is known not to be a business
                    //date, so no re-check is needed.)
                    return
                        initrode.utilities.DateManager.getPreviousBusinessDateFromInputtedDate
                        (
                            inputDateTime
                            , inputHolidayHash
                        );
                }
                DateTime _nextBusinessDate =
                    initrode.utilities.DateManager.getNextBusinessDateFromInputtedDate
                    (
                        _lastBusinessDate
                        , inputHolidayHash
                    );
                while (_nextBusinessDate.Month == inputDateTime.Month)
                {
                    _lastBusinessDate =
                        _nextBusinessDate;
    
                    _nextBusinessDate =
                        initrode.utilities.DateManager.getNextBusinessDateFromInputtedDate
                        (
                            _lastBusinessDate
                            , inputHolidayHash
                        );
                }
                return _lastBusinessDate;
            }
            public static DateTime  getPreviousBusinessDateFromInputtedDate
                                    (
                                        DateTime inputDateTime
                                        , Hashtable inputHolidayHash
                                    )
            {
                DateTime _dateWithTimeOmitted =
                    new DateTime
                        (
                            inputDateTime.Year
                            , inputDateTime.Month
                            , inputDateTime.Day
                        );
                _dateWithTimeOmitted =
                    _dateWithTimeOmitted.AddDays(-1);
                while (
                            initrode.utilities.DateManager.isInputtedDateABusinessDate
                            (
                                _dateWithTimeOmitted
                                , inputHolidayHash
                            ) == false
                        )
                {
                    _dateWithTimeOmitted =
                        _dateWithTimeOmitted.AddDays(-1);
                }
                return _dateWithTimeOmitted;
            }
    
            public static DateTime  getNextBusinessDateFromInputtedDate
                                    (
                                        DateTime inputDateTime
                                        , Hashtable inputHolidayHash
                                    )
            {
                DateTime _dateWithTimeOmitted =
                    new DateTime
                        (
                            inputDateTime.Year
                            , inputDateTime.Month
                            , inputDateTime.Day
                        );
                //DateTime is immutable: AddDays returns a new value, so the
                //result must be assigned back or the date never advances.
                _dateWithTimeOmitted =
                    _dateWithTimeOmitted.AddDays(1);
                while   (
                            initrode.utilities.DateManager.isInputtedDateABusinessDate
                            (
                                _dateWithTimeOmitted
                                ,inputHolidayHash
                            ) == false
                        )
                {
                    _dateWithTimeOmitted =
                        _dateWithTimeOmitted.AddDays(1);
                }
                return _dateWithTimeOmitted;
            }
    
            public static bool      isInputtedDateABusinessDate
                                    (
                                        DateTime inputDateTime
                                        ,Hashtable inputHolidayHash
                                    )
            {
                DateTime _dateWithTimeOmitted =
                    new DateTime
                        (
                            inputDateTime.Year
                            ,inputDateTime.Month
                            ,inputDateTime.Day
                        );
                if (_dateWithTimeOmitted.DayOfWeek == DayOfWeek.Saturday
                    || _dateWithTimeOmitted.DayOfWeek == DayOfWeek.Sunday)
                    return false;
                foreach (DateTime _holidayDate in inputHolidayHash.Keys)
                {
                    if (
                            _holidayDate.Year == inputDateTime.Year
                            && _holidayDate.Month == inputDateTime.Month
                            && _holidayDate.Day == inputDateTime.Day
                        )
                    {
                        return false;
                    }
                }
                return true;
            }
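            /**
             * Illustrative usage (assumed values): the business-date helpers
             * above treat the Hashtable keys as date-only DateTime values, so
             * holidays should be added with the time component omitted.
             *
             *     Hashtable _holidays = new Hashtable();
             *     _holidays.Add(new DateTime(2024, 12, 25), "Christmas");
             *     //2024-12-24 is a Tuesday; the 25th is a holiday, so the next
             *     //business date is Thursday the 26th.
             *     DateTime _next =
             *         initrode.utilities.DateManager.getNextBusinessDateFromInputtedDate
             *         (
             *             new DateTime(2024, 12, 24)
             *             , _holidays
             *         );
             * **/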
            public static string    convertDateTimeToMMDDYYYY_WithoutSlashesOrDashes
                                    (
                                        DateTime inputDateTime
                                    )
            {
                StringBuilder _dateBldr =
                    new StringBuilder();
                _dateBldr.AppendFormat
                (
                    "{0}{1}{2}"
                    , initrode.utilities.StringManager.Fill
                    (
                        inputDateTime.Month.ToString()
                        , "0"
                        , true //from left
                        , 2
                    )
                    ,initrode.utilities.StringManager.Fill
                    (
                        inputDateTime.Day.ToString()
                        , "0"
                        , true //from left
                        , 2
                    )
                    ,inputDateTime.Year.ToString()
                );
                return _dateBldr.ToString();
            }
            public static DateTime  convertMMDDYYYY_WithoutSlashesOrDashesToDateTime
                                    (
                                        string inputMMDDYYYY
                                    )
            {
                inputMMDDYYYY = 
                    initrode.utilities.StringManager.StripWhitespace
                    (
                        inputMMDDYYYY
                    );
                StringBuilder _mmSlashddSlashyyyyBldr =
                    new StringBuilder();
                _mmSlashddSlashyyyyBldr.AppendFormat
                (
                    "{0}/{1}/{2}"
                    ,inputMMDDYYYY.Substring(0,2)
                    ,inputMMDDYYYY.Substring(2,2)
                    ,inputMMDDYYYY.Substring(4,4)
                );
                if (
                        initrode.utilities.StringManager.IsValidDateInMM_DD_YYYYFormat
                        (
                            _mmSlashddSlashyyyyBldr.ToString()
                        ) == false
                    )
                    return new DateTime(1900, 1, 1);
                DateTime _returnDateTime =
                    new DateTime
                        (
                            Convert.ToInt32
                            (
                                inputMMDDYYYY.Substring(4, 4)
                            )
                            , Convert.ToInt32
                            (
                                inputMMDDYYYY.Substring(0, 2)
                            )
                            , Convert.ToInt32
                            (
                                inputMMDDYYYY.Substring(2, 2)
                            )
                        );
                return _returnDateTime;
            }
    
    		public static DateTime	convertDateInYYYYMMDDFormatToDateTime
    								(
    									string inputDateInYYYYMMDDFormat
    								)
    		{
			if (initrode.utilities.StringManager.IsValidDateInYYYYMMDDFormat(inputDateInYYYYMMDDFormat) == false)
				return new DateTime(1900, 1, 1); //avoid culture-dependent Convert.ToDateTime parsing
    			return new	DateTime
    						(
    							Convert.ToInt32(inputDateInYYYYMMDDFormat.Substring(0,4))
    							,Convert.ToInt32(inputDateInYYYYMMDDFormat.Substring(4,2))
    							,Convert.ToInt32(inputDateInYYYYMMDDFormat.Substring(6,2))
    						);
    		}
    		public static DateTime	getNextPeriodStartDateFromGivenDate
    								(
    									DateTime inputDate
    								)
    		{
    			if (inputDate.Day == 1) return inputDate;
    			if (inputDate.Day == 16) return inputDate;
    			if (inputDate.Day <= 15) return inputDate.AddDays(16 - inputDate.Day);
			//Rewind to the 1st before adding a month: AddMonths clamps month-end
			//dates (e.g. Jan 31 -> Feb 28), so the reverse order would land
			//before the 1st of the next month.
			return inputDate.AddDays(1 - inputDate.Day).AddMonths(1);
    		}
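		/**
		 * Illustrative usage: periods run the 1st-15th and the 16th through
		 * month-end, so the next period start is the upcoming boundary (or the
		 * given date itself when it already falls on one).
		 *
		 *     getNextPeriodStartDateFromGivenDate(new DateTime(2024, 1, 10)) //-> 2024-01-16
		 *     getNextPeriodStartDateFromGivenDate(new DateTime(2024, 1, 20)) //-> 2024-02-01
		 * **/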
    		public static DateTime	getNextPeriodEndDateFromGivenPeriodStartDate
    								(
    									DateTime inputPeriodStartDate
    								)
    		{
    			if (inputPeriodStartDate.Day == 1) return inputPeriodStartDate.AddDays(15 - inputPeriodStartDate.Day);
    			return inputPeriodStartDate.AddMonths(1).AddDays(0 - inputPeriodStartDate.Day);
    		}
    
    		public static DateTime	convertDateInYYYYMMDDFormatAndTimeInHHColonMIColonSSFormatToDateTime
    								(
    									string inputDateInYYYYMMDDFormat,
    									string inputTimeInHHColonMIColonSSFormat
    								)
    		{
    			inputDateInYYYYMMDDFormat = initrode.utilities.StringManager.StripWhitespaceFromEnds(inputDateInYYYYMMDDFormat);
    			inputTimeInHHColonMIColonSSFormat = initrode.utilities.StringManager.StripWhitespaceFromEnds(inputTimeInHHColonMIColonSSFormat);
    			if (inputDateInYYYYMMDDFormat.Length != 8 ||
    				initrode.utilities.StringManager.IsValidDateInYYYYMMDDFormat(inputDateInYYYYMMDDFormat) == false) return new DateTime(1900,1,1);
    		
    			if (inputTimeInHHColonMIColonSSFormat.Length != 8 ||
    				initrode.utilities.StringManager.IsValidTimeInHHColonMIColonSSFormat(inputTimeInHHColonMIColonSSFormat) == false)
    					return new	DateTime 
    								(
    									Convert.ToInt32(inputDateInYYYYMMDDFormat.Substring(0,4)),
    									Convert.ToInt32(inputDateInYYYYMMDDFormat.Substring(4,2)),
    									Convert.ToInt32(inputDateInYYYYMMDDFormat.Substring(6,2))
    								);
    
    			return new	DateTime 
    						( 
    							Convert.ToInt32(inputDateInYYYYMMDDFormat.Substring(0,4)),
    							Convert.ToInt32(inputDateInYYYYMMDDFormat.Substring(4,2)),
    							Convert.ToInt32(inputDateInYYYYMMDDFormat.Substring(6,2)),
    							Convert.ToInt32(inputTimeInHHColonMIColonSSFormat.Substring(0,2)),
    							Convert.ToInt32(inputTimeInHHColonMIColonSSFormat.Substring(3,2)),
    							Convert.ToInt32(inputTimeInHHColonMIColonSSFormat.Substring(6,2))
    						);
    		
    		}
            public static bool validateTimestampInODBCCanonicalFormat
                               (
                                  string inputTimestampInODBCCanonicalFormat
                               )
            {
                if (inputTimestampInODBCCanonicalFormat.Length != 23)
                    return false;
    
                if (initrode.utilities.StringManager.IsValidNumber(inputTimestampInODBCCanonicalFormat.Substring(0, 4)) == false
                    || initrode.utilities.StringManager.IsValidNumber(inputTimestampInODBCCanonicalFormat.Substring(5, 2)) == false
                    || initrode.utilities.StringManager.IsValidNumber(inputTimestampInODBCCanonicalFormat.Substring(8, 2)) == false
                    || initrode.utilities.StringManager.IsValidNumber(inputTimestampInODBCCanonicalFormat.Substring(11, 2)) == false
                    || initrode.utilities.StringManager.IsValidNumber(inputTimestampInODBCCanonicalFormat.Substring(14, 2)) == false
                    || initrode.utilities.StringManager.IsValidNumber(inputTimestampInODBCCanonicalFormat.Substring(17, 2)) == false
                    || initrode.utilities.StringManager.IsValidNumber(inputTimestampInODBCCanonicalFormat.Substring(20, 3)) == false)
                    return false;
    
                string _yyyy =
                    inputTimestampInODBCCanonicalFormat.Substring(0, 4);
                string _mm =
                    inputTimestampInODBCCanonicalFormat.Substring(5, 2);
                string _dd =
                    inputTimestampInODBCCanonicalFormat.Substring(8, 2);
                if (initrode.utilities.StringManager.IsValidDate
                    (
                        _mm
                        ,_dd
                        ,_yyyy
                    ) == false)
                    return false;
    
                StringBuilder _timeBldr = new StringBuilder();
                _timeBldr.Append(inputTimestampInODBCCanonicalFormat.Substring(11, 2));
                _timeBldr.Append(":");
                _timeBldr.Append(inputTimestampInODBCCanonicalFormat.Substring(14, 2));
                _timeBldr.Append(":");
                _timeBldr.Append(inputTimestampInODBCCanonicalFormat.Substring(17, 2));
                if (initrode.utilities.StringManager.IsValidTimeInHHColonMIColonSSFormat
                    (
                        _timeBldr.ToString()
                    ) == false)
                    return false;
                return true;
            }
            public static DateTime  convertTimestampInODBCCanonicalFormatToDateTime
                                    (
                                        string inputTimestampInODBCCanonicalFormat
                                    )
            {
                if (validateTimestampInODBCCanonicalFormat
                        (
                            inputTimestampInODBCCanonicalFormat
                         ) == false)
                    return new DateTime(1900, 1, 1);
    
                int _yyyy = 
                    Convert.ToInt32(inputTimestampInODBCCanonicalFormat.Substring(0,4));
                int _mm = 
                    Convert.ToInt32(inputTimestampInODBCCanonicalFormat.Substring(5,2));
                int _dd = 
                    Convert.ToInt32(inputTimestampInODBCCanonicalFormat.Substring(8,2));
    
                int _hh =
                    Convert.ToInt32(inputTimestampInODBCCanonicalFormat.Substring(11, 2));
                int _mi =
                    Convert.ToInt32(inputTimestampInODBCCanonicalFormat.Substring(14, 2));
                int _ss =
                    Convert.ToInt32(inputTimestampInODBCCanonicalFormat.Substring(17, 2));
                int _ms =
                    Convert.ToInt32(inputTimestampInODBCCanonicalFormat.Substring(20, 3));
                return new DateTime
                            (
                                _yyyy
                                , _mm
                                , _dd
                                , _hh
                                , _mi
                                , _ss
                                , _ms
                             );   
            }
    
    		/** 
    		 * 
    		 * @description get first day of the current month
    		 * **/
    		public static DateTime getFirstDayofTheCurrentMonth() 
    		{
    			return initrode.utilities.DateManager.getFirstDayofTheMonth(System.DateTime.Now);
    		}
    
            public static DateTime convertDateTimeToDate
                                    (
                                        DateTime inputTimestamp
                                    )
            {
                DateTime _returnDate =
                    new DateTime
                        (
                            inputTimestamp.Year
                            ,inputTimestamp.Month
                            ,inputTimestamp.Day
                        );
                return _returnDate;
            }
    		/**
    		 * @description get the last day of the month
    		 * */
    							   	
    		public static DateTime getLastDayOfTheMonth( System.DateTime inputDateTime) 
    		{
    			return initrode.utilities.DateManager.getFirstDayofNextMonth(inputDateTime).AddDays(-1);
    		}
    		/** 
    		 * @description get last day of the current month
    		 * **/
    		public static DateTime getLastDayofTheCurrentMonth() 
    		{
    			return initrode.utilities.DateManager.getLastDayOfTheMonth(DateTime.Now);
    		}
    
    		/** 
    		 * Convert the DateTime value to YYYYMMDD format
    		 * **/
    		public static string convertDateTimeToYYYYMMDDFormat(DateTime inputDateTime)
    		{
    			StringBuilder _dateBldr = new StringBuilder();
    			_dateBldr.Append(inputDateTime.Year.ToString());
    			_dateBldr.Append(initrode.utilities.StringManager.Fill(inputDateTime.Month.ToString(),"0",true,2));
    			_dateBldr.Append(initrode.utilities.StringManager.Fill(inputDateTime.Day.ToString(),"0",true,2));
    			return _dateBldr.ToString();
    		}
    		/** 
    		 * Convert the DateTime value to MM, DD, YYYY character parts
    		 * **/
    		public static void convertDateTimeToMM_DD_YYYYStringParts(DateTime inputDateTime,
    																	out string outputMM,
    																	out string outputDD,
    																	out string outputYYYY)
    		{
    			string _date_in_MM_DD_YYYY_Format = 
    				convertDateTimeToMM_DD_YYYYFormat(inputDateTime);
			outputMM = _date_in_MM_DD_YYYY_Format.Substring(0,2);
			outputDD = _date_in_MM_DD_YYYY_Format.Substring(3,2);
			outputYYYY = _date_in_MM_DD_YYYY_Format.Substring(6,4);
    		}
		/** 
		 * Convert a date string in MM/DD/YYYY format to a DateTime value.
		 * **/
    		public static DateTime convertMM_DD_YYYYFormatToDateTime(string inputDateInMM_DD_YYYYFormat)
    		{
			inputDateInMM_DD_YYYYFormat = initrode.utilities.StringManager.StripWhitespaceFromEnds(inputDateInMM_DD_YYYYFormat);
			//Note: unlike the other converters, invalid input here falls back to
			//today's date rather than the 1900-01-01 sentinel.
			if (initrode.utilities.StringManager.IsValidDateInMM_DD_YYYYFormat(inputDateInMM_DD_YYYYFormat) == false) return new DateTime(DateTime.Now.Year, DateTime.Now.Month, DateTime.Now.Day);
    			int _intMM = Convert.ToInt32(inputDateInMM_DD_YYYYFormat.Substring(0,2));
    			int _intDD = Convert.ToInt32(inputDateInMM_DD_YYYYFormat.Substring(3,2));
    			int _intYYYY = Convert.ToInt32(inputDateInMM_DD_YYYYFormat.Substring(6,4));
    			return new DateTime(_intYYYY,_intMM, _intDD);
    		}
            public static int calculateMonthsDifferenceBetweenTwoDates
                                (
                                    DateTime inputOlderDate
                                    , DateTime inputNewerDate
                                )
            {
                DateTime _tempDate = inputOlderDate;
                int _numberOfMonthsDifference = 0;
                // Count whole months: advance one month at a time for as long as
                // the advanced date does not pass the newer date.
                while (_tempDate.AddMonths(1) <= inputNewerDate)
                {
                    _tempDate = _tempDate.AddMonths(1);
                    _numberOfMonthsDifference++;
                }
                return _numberOfMonthsDifference;
            }
    
		/** 
		 * Convert the DateTime value to an MM/DD/YYYY HH:MM:SS.mmm timestamp string.
		 * **/
    		public static string convertTimestampToStringFormat(DateTime inputDateTime)
    		{
    			StringBuilder _bldr = new StringBuilder();
    			_bldr.Append(initrode.utilities.StringManager.Fill(	
    												inputDateTime.Month.ToString(),
    												"0",
    												true,           //Fill from left
    												2));
    			_bldr.Append("/");
    			_bldr.Append(initrode.utilities.StringManager.Fill(	
    												inputDateTime.Day.ToString(),
    												"0",
    												true,           //Fill from left
    												2));
    			_bldr.Append("/");
    			_bldr.Append(inputDateTime.Year.ToString());
    			_bldr.Append(" ");
    			_bldr.Append(initrode.utilities.StringManager.Fill(	
    												inputDateTime.Hour.ToString(),
    												"0",
    												true,           //Fill from left
    												2));
    			_bldr.Append(":");
    			_bldr.Append(initrode.utilities.StringManager.Fill(	
    												inputDateTime.Minute.ToString(),
    												"0",
    												true,           //Fill from left
    												2));
    			_bldr.Append(":");
    			_bldr.Append(initrode.utilities.StringManager.Fill(	
    												inputDateTime.Second.ToString(),
    												"0",
    												true,           //Fill from left
    												2));
    			_bldr.Append(".");
    			_bldr.Append(initrode.utilities.StringManager.Fill(	
    												inputDateTime.Millisecond.ToString(),
    												"0",
    												true,           //Fill from left
    												3));
    
    			return _bldr.ToString();
    		}
    
    		/** 
    		 * Convert the DateTime value to MM_DD_YYYY format.
    		 * **/
    		public static string convertDateTimeToMM_DD_YYYYFormat(DateTime inputDateTime)
    		{
    			StringBuilder _bldr = new StringBuilder();
    			_bldr.Append(initrode.utilities.StringManager.Fill(	
    												inputDateTime.Month.ToString(),
    												"0",
    												true,           //Fill from left
    												2));
    			_bldr.Append("/");
    			_bldr.Append(initrode.utilities.StringManager.Fill(	
    												inputDateTime.Day.ToString(),
    												"0",
    												true,           //Fill from left
    												2));
    			_bldr.Append("/");
    			_bldr.Append(inputDateTime.Year.ToString());
    			return _bldr.ToString();
    		}
    
    		/** 
    		 * @description get first day of the next  month
    		 * **/
    		public static DateTime getFirstDayofNextMonth(DateTime inputDateTime) 
    		{
    			return  initrode.utilities.DateManager.getFirstDayofTheMonth(inputDateTime).AddMonths(1);
    		}
    
    
    		/** 
    		 * @description get first day of the next  month
    		 * **/
    		public static DateTime getFirstDayofNextMonth() 
    		{
    			return  getFirstDayofNextMonth(DateTime.Now); 
    		}
    
    		/** 
    		 * @description add days to a date
    		 * **/
    		public static DateTime daysFromTheFirst(int days, System.DateTime date)
    		{
			DateTime nextDate = date.AddDays(days); // add the requested number of days
    			return nextDate;
    		}
    		/** 
    		 * 
    		 * @description get first day of the current year
    		 * **/
    		public static DateTime getFirstDayofTheCurrentYear() 
    		{
    			return initrode.utilities.DateManager.getFirstDayofTheInputtedDatesYear(System.DateTime.Now);
    		}
    		/** 
    		 * 
    		 * @description get last day of the current year
    		 * **/
    		public static DateTime getLastDayofTheCurrentYear() 
    		{
    			return initrode.utilities.DateManager.getFirstDayofTheCurrentYear().AddYears(1).AddDays(-1);
    		}
    
    		/** 
    		 * 
    		 * @description get first day of the inputted year
    		 * **/
    		public static DateTime getFirstDayofTheInputtedDatesYear(System.DateTime inputDateTime) 
    		{
    			return new DateTime(inputDateTime.Year,1,1);
    		}
    		/** 
    		 * 
    		 * @description get last day of the inputted year
    		 * **/
    		public static DateTime getLastDayofTheInputtedDatesYear(System.DateTime inputDateTime) 
    		{
    			return new DateTime(inputDateTime.Year,12,31);
    		}
    
    		public static DateComparison timestamp1ComparedToTimestamp2(DateTime inputTimestamp1, 
    																	DateTime inputTimestamp2)
    		{
    			if (inputTimestamp1.Year > inputTimestamp2.Year) return DateComparison.gt;
    			if (inputTimestamp1.Year < inputTimestamp2.Year) return DateComparison.lt;
    			if (inputTimestamp1.DayOfYear > inputTimestamp2.DayOfYear) return DateComparison.gt;
    			if (inputTimestamp1.DayOfYear < inputTimestamp2.DayOfYear) return DateComparison.lt;
    			if (inputTimestamp1.Hour > inputTimestamp2.Hour) return DateComparison.gt;
    			if (inputTimestamp1.Hour < inputTimestamp2.Hour) return DateComparison.lt;
    			if (inputTimestamp1.Minute > inputTimestamp2.Minute) return DateComparison.gt;
    			if (inputTimestamp1.Minute < inputTimestamp2.Minute) return DateComparison.lt;
    			if (inputTimestamp1.Second > inputTimestamp2.Second) return DateComparison.gt;
    			if (inputTimestamp1.Second < inputTimestamp2.Second) return DateComparison.lt;
    			if (inputTimestamp1.Millisecond > inputTimestamp2.Millisecond) return DateComparison.gt;
    			if (inputTimestamp1.Millisecond < inputTimestamp2.Millisecond) return DateComparison.lt;
    			return DateComparison.eq;
    		
    		}
    
    		public static DateComparison date1ComparedToDate2(DateTime inputDate1, 
    															DateTime inputDate2)
    		{
    			if (inputDate1.Year > inputDate2.Year) return DateComparison.gt;
    			if (inputDate1.Year < inputDate2.Year) return DateComparison.lt;
    			if (inputDate1.DayOfYear > inputDate2.DayOfYear) return DateComparison.gt;
    			if (inputDate1.DayOfYear < inputDate2.DayOfYear) return DateComparison.lt;
    			return DateComparison.eq;
    		
    		}
    
		/** 
		 * 
		 * @description get the date before the first future period
		 * **/
    		public static DateTime getTheDateBeforeTheFirstFuturePeriod() 
    		{
    			// If date is less than the 16th, the 15th is the date.
    
    			DateTime _date = DateTime.Now;
    			if (_date.Day < 16)
    			{
    				return _date.AddDays(15 - _date.Day);
    			}
    
			// If the date is the 16th or later, the 1st of the following month
			// begins the first future period, so return the last day of this month.
    
    			return _date.AddDays(1 - _date.Day).AddMonths(1).AddDays(-1);
    		}
    	}
    }

    Krebs on SecurityPatch Tuesday Lowdown, October 2019 Edition

    On Tuesday Microsoft issued software updates to fix almost five dozen security problems in Windows and software designed to run on top of it. By most accounts, it’s a relatively light patch batch this month. Here’s a look at the highlights.

    Happily, only about 15 percent of the bugs patched this week earned Microsoft’s most dire “critical” rating. Microsoft labels flaws critical when they could be exploited by miscreants or malware to seize control over a vulnerable system without any help from the user.

    Also, Adobe has kindly granted us another month’s respite from patching security holes in its Flash Player browser plugin.

    Included in this month’s roundup is something Microsoft actually first started shipping in the third week of September, when it released an emergency update to fix a critical Internet Explorer zero-day flaw (CVE-2019-1367) that was being exploited in the wild.

    That out-of-band security update for IE caused printer errors for many Microsoft users whose computers applied the emergency update early on, according to Windows update expert Woody Leonhard. Apparently, the fix available through this month’s roundup addresses those issues.

    Security firm Ivanti notes that the patch for the IE zero-day flaw was released prior to today for Windows 10 through cumulative updates, but that an IE rollup for any pre-Windows 10 systems needs to be manually downloaded and installed.

    Once again, Microsoft is fixing dangerous bugs in its Remote Desktop Client, the Windows feature that lets a user interact with a remote desktop as if they were sitting in front of the other PC. On the bright side, this critical bug can only be exploited by tricking a user into connecting to a malicious Remote Desktop server — not exactly the most likely attack scenario.

    Other notable vulnerabilities addressed this month include a pair of critical security holes in Microsoft Excel versions 2010-2019 for Mac and Windows, as well as Office 365. These flaws would allow an attacker to install malware just by getting a user to open a booby-trapped Office file.

    Windows 10 likes to install patches all in one go and reboot your computer on its own schedule. Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update. To get there, click the Windows key on your keyboard and type “windows update” into the box that pops up.

    Staying up-to-date on Windows patches is good. Updating only after you’ve backed up your important data and files is even better. A reliable backup means you’re not pulling your hair out if the odd buggy patch causes problems booting the system. So do yourself a favor and backup your files before installing any patches.

    As always, if you experience any problems installing any of the patches this month, please feel free to leave a comment about it below; there’s a decent chance other readers have experienced the same and may even chime in here with some helpful tips.

    Planet DebianChris Lamb: Tour d'Orwell: Southwold

    I recently read that during 1929 George Orwell returned to his family home in the Suffolk town of Southwold. When I further learned that he had acquired a motorbike during this time to explore the surrounding villages, I could not resist visiting by the same mode of transport myself.

    Orwell would end up writing his first novel here ("Burmese Days"), followed by his first passable one ("A Clergyman's Daughter"), but unfortunately the local bookshop only had the former in stock. He moved back to London in 1934 to work in a bookshop in Hampstead, now a «Le Pain Quotidien».

    If you are thinking of visiting, Southwold has some lovely quaint beach huts and a brewery. However, the officially signposted A1120 "Scenic Route" I took on the way out was neither as picturesque nor as fun to ride as the A1066.


    TEDWe the Future 2019: Talks from TED, the Skoll Foundation and the United Nations Foundation

    Hosts Rajesh Mirchandani and Chee Pearlman wave to “We The Future” attendees who watched the salon live from around the world through TED World Theater technology. (Photo: Ryan Lash / TED)

    At “We the Future,” a day of talks from TED, the Skoll Foundation and the United Nations Foundation at the TED World Theater in New York City, 18 speakers and performers shared daring ideas, deep analysis, cautionary tales and behavior-changing strategies aimed at meeting the UN Sustainable Development Goals (SDGs), the global goals created in partnership with individuals around the world and adopted at the United Nations in 2015.

    The event: We the Future, presented by TED, the Skoll Foundation and the United Nations Foundation to share ingenious efforts of people from every corner of the globe

    When and where: Tuesday, September 24, 2019, at the TED World Theater in New York, NY

    Music: Queen Esther with Hilliard Greene and Jeff McGlaughlin, performing the jazzy “Blow Blossoms” and the protest song “All That We Are”

    The talks in brief:


    David Wallace-Wells, journalist

    Big idea: The climate crisis is too vast and complicated to solve with a silver bullet. We need a shift in how we live: a whole new politics, economics and relationship to technology and nature.

    Why? The climate crisis isn’t the legacy of our ancestors, but the work of a single generation — ours, says Wallace-Wells. Half of all the emissions from the burning of fossil fuels in the history of humanity were produced in the last 30 years. We clearly have immense power over the climate, and it’s put us on the brink of catastrophe — but it also means we’re the ones writing the story of our planet’s future. If we are to survive, we’ll need to reshape society as we know it — from building entirely new electric grids, planes and infrastructures to rethinking the way the global community comes together to support those hit hardest by climate change. If we do that, we just might build a new world that’s livable, prosperous and green.

    Quote of the talk: “We won’t be able to beat climate change — only live with it and limit it.”


    “When the cost of inaction is that innocent children are left unprotected, unvaccinated, unable to go to school … trapped in a cycle of poverty, exclusion and invisibility, it’s on us to take this issue out of darkness and into the light,” says legal identity expert Kristen Wenz. She speaks at “We The Future” on September 24, 2019, at the TED World Theater in New York, NY. (Photo: Ryan Lash / TED)

    Kristen Wenz, legal identity expert

    Big idea: More than one billion people — mostly children — don’t have legal identities or birth certificates, which means they can’t get vital government services like health care and schooling. It’s a massive human rights violation we need to fix.

    How? There are five key approaches to ensuring children are registered and protected — reduce distance, reduce cost, simplify the process, remove discrimination and increase demand. In Tanzania, the government helped make it easier for new parents to register their child by creating an online registration system and opening up registration hubs in communities. The results were dramatic: the number of children with birth certificates went from 16 to 83 percent in just a few years. By designing solutions with these approaches in mind, we can provide better protection and brighter opportunities for children across the world.

    Quote of the talk: “When the cost of inaction is that innocent children are left unprotected, unvaccinated, unable to go to school … trapped in a cycle of poverty, exclusion and invisibility, it’s on us to take this issue out of darkness and into the light.”


    Don Gips, CEO of the Skoll Foundation, in conversation with TEDWomen curator and author Pat Mitchell

    Big idea: Don Gips turned away from careers in both government and business and became CEO of the Skoll Foundation for one reason: the opportunity to take charge of investing in solutions to the most urgent issues humanity faces. Now, it’s the foundation’s mission to identify the investments that will spark the greatest changes.

    How?

    By reaching deeper into communities and discovering and investing in social entrepreneurs and other changemakers, the Skoll Foundation supports promising solutions to urgent global problems. As their investments yield positive results, Gips hopes to inspire the rest of the philanthropic community to find better ways to direct their resources.

    Quote of the interview: “We don’t tell the changemaker what the solution is. We invest in their solution, and go along on the journey with them.”


    “By making aesthetic, some might say beautiful, arrangements out of the world’s waste, I hope to hook the viewer, to draw in those that are numb to the horrors of the world, and give them a different way to understand what is happening,” says artist Alejandro Durán. He speaks at “We The Future” on September 24, 2019, at the TED World Theater in New York, NY. (Photo: Ryan Lash / TED)

    Alejandro Durán, artist

    Big Idea: Art can spotlight the environmental atrocities happening to our oceans — leaving viewers both mesmerized and shocked.

    Why? From prosthetic legs to bottle caps, artist Alejandro Durán makes ephemeral environmental artworks out of objects he finds polluting the waters of his native region of Sian Ka’an, Mexico. He meticulously organizes materials by color and curates them into site-specific work. Durán put on his first “Museo de la Basura” (Museum of Garbage) exhibition in 2015, which spoke to the horrors of the Great Pacific Garbage Patch, and he’s still making art that speaks to the problem of ocean trash. By endlessly reusing objects in his art, Durán creates new works that engage communities in environmental art-making, attempting to depict the reality of our current environmental predicament and make the invisible visible.

    Quote of the talk: “By making aesthetic, some might say beautiful, arrangements out of the world’s waste, I hope to hook the viewer, to draw in those that are numb to the horrors of the world, and give them a different way to understand what is happening.”


    Andrew Forrest, entrepreneur, in conversation with head of TED Chris Anderson

    Big idea: The true — and achievable! — business case for investing in plastic recycling.

    How? Since earning his PhD in marine ecology, Forrest has dedicated his time and money to solving the global plastic problem, which is choking our waterways and oceans with toxic material that never biodegrades. “I learned a lot about marine life,” he says of his academic experience. “But it taught me more about marine death.” To save ourselves and our underwater neighbors from death by nanoplastics, Forrest says we need the big corporations of the world to fund a massive environmental transition that includes increasing the price of plastic and turning the tide on the recycling industry.

    Quote of the talk: “[Plastic] is an incredible substance designed for the economy. It’s the worst substance possible for the environment.”


    Raj Panjabi, cofounder of medical NGO Last Mile Health

    Big idea: Community health workers armed with training and technology are our first line of defense against deadly viral surges. If we are to fully protect the world from killer diseases, we must ensure that people living in the most remote areas of the planet are never far from a community health worker trained to throttle epidemics at their outset.

    How? In December 2013, Ebola broke out in West Africa and began a transborder spread that threatened to wipe out millions of people. Disease fighters across Africa joined the battle to stop it — including Liberian health workers trained by Last Mile Health and armed with the technology, knowledge and support necessary to serve their communities. With their help, Ebola was stopped (for now), after killing 11,000 people. Panjabi believes that if we train and pay more community health workers, their presence in underserved areas will not only stop epidemics but also save the lives of the millions of people threatened by diseases like malaria, pneumonia and diarrhea.

    Quote of the talk: “We dream of a future when millions of people … can gain dignified jobs as community health workers, so they can serve their neighbors in the forest communities of West Africa to the fishing villages of the Amazon; from the hilltops of Appalachia to the mountains of Afghanistan.”


    “Indigenous people have the answer. If we want to save the Amazon, we have to act now,” says Tashka Yawanawá, speaking at “We The Future” with his wife, Laura, on September 24, 2019, at the TED World Theater in New York, NY. (Photo: Ryan Lash / TED)

    Tashka and Laura Yawanawá, leaders of the Yawanawá in Acre, Brazil

    Big idea: To save the Amazon rainforest, let’s empower indigenous people who have been coexisting with the rainforest for centuries.

    Why? Tashka Yawanawá is chief of the Yawanawá people in Acre, Brazil, leading 900 people who steward 400,000 acres of Brazilian Amazon rainforest. As footage of the Amazon burning shocks the world’s conscience, Tashka and his wife, Laura, call for us to transform this moment into an opportunity to support indigenous people who have the experience, knowledge and tools to protect the land.

    Quote of the talk: “Indigenous people have the answer. If we want to save the Amazon, we have to act now.”


    Alasdair Harris, ocean conservationist

    Big idea: To the impoverished fishers that rely on the sea for their food, and who comprise 90 percent of the world’s fishing fleet, outside interference by scientists and marine managers can seem like just another barrier to their survival. Could the world rejuvenate its marine life and replenish its fish stocks by inspiring coastal communities rather than simply regulating them?

    How? When he first went to Madagascar, marine biologist Alasdair Harris failed to convince local leaders to agree to a years-long plan to close their threatened coral reefs to fishing. But when a contained plan to preserve a breeding ground for an important local species of octopus led to rapid growth in catches six months later, the same elders banded together with leaders across Madagascar to spearhead a conservation revolution. Today, Harris’s organization Blue Ventures works to help coastal communities worldwide take control of their own ecosystems.

    Quote of the talk: “When we design it right, marine conservation reaps dividends that go far beyond protecting nature — improving catches, driving waves of social change along entire coastlines, strengthening confidence, cooperation and the resilience of communities to face the injustice of poverty and climate change.”


    Bright Simons, social entrepreneur and product security expert

    Big idea: A global breakdown of the trustworthiness of markets and regulatory institutions has led to a flurry of counterfeit drugs, mislabeled food and defective parts. Africa has been dealing with counterfeit goods for years, and entrepreneurs like Bright Simons have developed myriad ways consumers can confirm that their food and drug purchases are genuine. Why are these methods ignored in the rest of the world?

    How? Bright Simons demonstrates some of the innovative solutions Africans use to restore trust in their life-giving staples, such as text hotlines to confirm medications are real and seed databases to certify the authenticity of crops. Yet in the developed world, these solutions are often overlooked because they “don’t scale” — an attitude Simons calls “mental latitude imperialism.” It’s time to champion “intellectual justice” — and look at these supposedly non-scalable innovations with new respect.

    Quote of the talk: “It just so happens that today, the most advanced and most progressive solutions to these problems are being innovated in the developing world.”


    “Water is life. It is the spirit that binds us from sickness, death and destruction,” says LaToya Ruby Frazier. She speaks at “We The Future” on September 24, 2019, at the TED World Theater in New York, NY. (Photo: Ryan Lash / TED)

    LaToya Ruby Frazier, artist 

    Big Idea: LaToya Ruby Frazier’s powerful portraits of women in Flint, Michigan document the reality of the Flint water crisis, bringing awareness to the ongoing issue and creating real, positive change.

    How? Frazier’s portraits of the daily lives of women affected by the Flint water crisis are striking reminders that, after all the news crews were gone, the people of Flint still did not have clean water. For one photo series, she closely followed the lives of Amber Hasan and Shea Cobb — two activists, poets and best friends — who were working to educate the public about the water crisis. Frazier has continued collaborating with Hasan and Cobb to seek justice and relief for those suffering in Flint. In 2019, they helped raise funds for an atmospheric water generator that provided 120,000 gallons of water to Flint residents. 

    Quote of the talk: “Water is life. It is the spirit that binds us from sickness, death and destruction. Imagine how many millions of lives we could save if [the atmospheric water generator] were in places like Newark, New Jersey, South Africa and India — with compassion instead of profit motives.”


    Cassie Flynn, global climate change advisor

    Big idea: We need a new way to get citizen consensus on climate change and connect them with governments and global leaders.

    How? The United Nations is taking on an entirely new model of reaching the masses: mobile phone games. Flynn shares how their game “Mission 1.5” can help people learn about their policy choices on climate change by allowing them to play as heads of state. From there, the outcomes of their gameplay will be compiled and shared with their national leaders and the public. Flynn foresees this as a fresh, feasible way to meet citizens where they are, to educate them about climate change and to better connect them to the people who are making those tough decisions.

    Quote of the talk: “Right now, world leaders are faced with the biggest and most impactful decisions of their entire lives. What they decide to do on climate change will either lead to a riskier, more unstable planet or a future that is more prosperous and sustainable for us all.”


    Wanjira Mathai, entrepreneur

    Big Idea: Corruption is a constant threat in Kenya. To defeat it there and anywhere, we need to steer youth towards integrity through education and help them understand the power of the individual.

    Why? In 1989, the Karura Forest, a green public oasis in Nairobi, Kenya, was almost taken away by a corrupt government until political activist Wangari Maathai, Nobel Prize recipient and founder of the Greenbelt Movement, fought back fiercely and won. Continuing Maathai’s legacy, her daughter Wanjira explains how corruption is still very much alive in Kenya — a country that loses a third of its state budget to corruption every year. “Human beings are not born corrupt. At some point these behaviors are fostered by a culture that promotes individual gain over collective progress,” she says. She shares a three-pronged strategy for fighting corruption before it takes root by addressing why it happens, modeling integrity and teaching leadership skills.

    Quote of the talk: “We cannot complain forever. We either decide that we are going to live with it, or we are going to change it. And if we are going to change it, we know that today, most of the world’s problems are caused by corruption and greed and selfishness.”

    TEDThe TED Interview podcast kicks off season 3

    The TED Interview launches its newest season on October 9, 2019. Last season notably featured Bill Gates, Monica Lewinsky and Susan Cain — and you can expect another thoughtful lineup of scientists, thinkers and artists for the new season.

    Season 3 features eight episodes, during which head of TED Chris Anderson will continue to inspire curiosity with in-depth conversations on our consciousness, the ways we navigate community and the power of embracing paradox.

    During Season 3, Harvard psychologist Dan Gilbert expands on his TED Talk concerning the science of happiness; Turkish-British author Elif Shafak deconstructs storytelling and global community; and Michael Tubbs, one of the world’s youngest mayors, makes a case for universal basic income.

    Listen to the first episode with happiness expert Dan Gilbert on Apple Podcasts or Spotify.

    With a diverse lineup of global thought leaders, TED’s podcasts are downloaded in more than 190 countries (nearly every place on Earth!). “Just like the ideas we explore, The TED Interview continues to grow with even more thoughtful and challenging conversations this season,” says Chris Anderson. “We’ve hit our stride and will be delving deeper into the minds of some of TED’s most remarkable speakers.”

    More speakers will be unveiled throughout the season, and you can listen to them on The TED Interview for free on Apple Podcasts or Spotify. New hour-long episodes air every Wednesday. 

    TED’s content programming extends beyond its signature TED Talk format with six original podcasts. In August 2019, TED was ranked among Podtrac’s Top 10 Publishers in the US.

    The TED Interview is proudly sponsored by Lexus, whose passion for brave design, imaginative technology and exhilarating performance enables the luxury lifestyle brand to create amazing experiences for its customers.

    Planet DebianSteve Kemp: A blog overhaul

    When this post becomes public I'll have successfully redeployed my blog!

    My blog originally started in 2005 as a Wordpress installation, at some point I used Mephisto, and then I wrote my own solution.

    My project was pretty cool; I'd parse a directory of text-files, one file for each post, and insert them into an SQLite database. From there I'd initiate a series of plugins, each one to generate something specific:

    • One plugin would output an archive page.
    • Another would generate a tag cloud.
    • Yet another would generate the actual search-results for a particular month/year, or tag-name.

    All in all the solution was flexible and it wasn't too slow because finding posts via the SQLite database was pretty good.
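    In case it helps picture the approach, here is a minimal sketch of the parse-and-index step (in Python here; the schema, file layout and title convention are invented for illustration and don't reflect the real code):

    ```python
    import sqlite3
    from pathlib import Path

    def index_posts(posts_dir, db_path=":memory:"):
        """Load one-post-per-file text files into SQLite for fast querying.

        Illustrative convention: each file's first line is the title and
        its filename stem is the post slug.
        """
        db = sqlite3.connect(db_path)
        db.execute(
            "CREATE TABLE IF NOT EXISTS posts (slug TEXT PRIMARY KEY, title TEXT, body TEXT)"
        )
        for path in sorted(Path(posts_dir).glob("*.txt")):
            # Split off the first line as the title; the rest is the body.
            title, _, body = path.read_text().partition("\n")
            db.execute(
                "INSERT OR REPLACE INTO posts VALUES (?, ?, ?)",
                (path.stem, title, body),
            )
        db.commit()
        return db
    ```

    From a table like this, each plugin (archive page, tag cloud, per-month results) becomes a simple SELECT.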

    Anyway, I've come to realize that all that freedom and architecture were overkill. I don't need to do fancy presentation, and I don't need a loosely-coupled set of plugins.

    So now I have a simpler solution which uses my existing template, uses my existing posts - with only a few cleanups - and generates the site from scratch, including all the comments, in less than 2 seconds.

    After running make clean a complete rebuild via make upload (which deploys the generated site to the remote host via rsync) takes 6 seconds.

    I've lost the ability to be flexible in some areas, but I've gained all the speed. The old project took somewhere between 20 and 60 seconds to build, depending on what had changed.

    In terms of simplifying my life I've dropped the remote installation of a site-search which means I can now host this site on a static site with only a single handler to receive any post-comments. (I was 50/50 on keeping comments. I didn't want to lose those I'd already received, and I do often find valuable and interesting contributions from readers, but being 100% static had its appeal too. I guess they stay for the next few years!)

    Planet DebianAntoine Beaupré: Tip of the day: batch PDF conversion with LibreOffice

    Someone asked me today why they couldn't write on the DOCX document they received from a student using the pen in their Onyx Note Pro reader. The answer, of course, is that while the Onyx can read those files, it can't annotate them: that only works with PDFs.

    Next question then, is of course: do I really need to open each file separately and save them as PDF? That's going to take forever, I have 30 students per class!

    Fear not: shell scripting and headless mode fly in to the rescue!

    As it turns out, one of LibreOffice's parameters allows you to run batch operations on files. By calling:

    libreoffice --headless --convert-to pdf *.docx
    

    LibreOffice will happily convert all the *.docx files in the current directory to PDF. But because navigating the commandline can be hard, I figured I could push this a tiny little bit further and wrote the following script:

    #!/bin/sh
    
    exec libreoffice --headless --convert-to pdf "$@"
    

    Drop this in ~/.local/share/nautilus/scripts/libreoffice-pdf, mark it executable, and voilà! You can batch-convert basically any text file (or anything supported by LibreOffice, really) into PDF.
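    The same batch conversion can also be driven from Python, which is handy if you want to add logging or filtering around it. A sketch, assuming the libreoffice binary is on PATH; build_convert_command is a hypothetical helper, and --outdir selects where the PDFs land:

```python
import subprocess
from pathlib import Path


def build_convert_command(files, outdir="."):
    """Argv for a headless LibreOffice batch conversion to PDF."""
    return ["libreoffice", "--headless", "--convert-to", "pdf",
            "--outdir", str(outdir), *map(str, files)]


# Convert every .docx in the current directory:
# subprocess.run(build_convert_command(sorted(Path(".").glob("*.docx"))),
#                check=True)
```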

    Now I wonder if this would be a useful addition to the Debian package, anyone?

    Planet DebianJonas Meurer: debian lts report 2019.09

    Debian LTS report for September 2019

    This month I was allocated 10 hours and carried over 9.5 hours from August. Unfortunately, again I didn't find much time to work on LTS issues, partially because I was travelling. I spent 5 hours on the task listed below. That means that I carry over 14.5 hours to October.

    Planet DebianJamie McClelland: Editing video without a GUI? Really?

    It seems counterintuitive: if ever there was a program in need of a graphical user interface, it's a non-linear video editing program.

    However, as part of the May First board elections, I discovered otherwise.

    We asked each board candidate to submit a one- to two-minute video introduction about why they want to be on the board. My job was to connect them all into a single video.

    I had an unrealistic thought that I could find some simple tool that could concatenate them all together (like mkvmerge) but I soon realized that this approach requires that everyone use the exact same format, codec, bit rate, sample rate and blah blah blah.

    I soon realized that I needed to actually make a video, not compile one. I create videos so infrequently that I often forget the name of the video editing software I used last time, so it takes some searching. This time I found that I had openshot-qt installed, but when I tried to run it, I got a backtrace (which someone else has already reported).

    I considered looking for another GUI editor, but I wasn't that interested in learning what might be a complicated user interface when what I need is so simple.

    So I kept searching and found melt. Wow.

    I ran:

    melt originals/* -consumer avformat:all.webm acodec=libopus vcodec=libvpx
    

    And a while later I had a video. Impressive. It handled people who submitted their videos in portrait mode on their cell phones in mp4 as well as web cam submissions using webm/vp9 on landscape mode.
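    Wrapped in Python for reuse, the same invocation might look like this (a sketch; build_melt_command is a hypothetical helper, and it assumes the melt binary from the MLT framework is installed):

```python
import subprocess
from pathlib import Path


def build_melt_command(clips, output="all.webm"):
    """Argv for concatenating clips with melt, as in the command above."""
    return ["melt", *clips,
            "-consumer", "avformat:%s" % output,
            "acodec=libopus", "vcodec=libvpx"]


# clips = sorted(str(p) for p in Path("originals").iterdir())
# subprocess.run(build_melt_command(clips), check=True)
```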

    Thank you melt developers!

    Google AdsenseIntroducing the new and improved Auto ads

    This time last year, we committed to making AdSense more assistive and powerful. We're excited to announce a significant update to AdSense: Auto ads are now easier to use and come with better controls to customize the ad experience for your users.

    What’s changing

    Easy setup and management
    - No more messing with ad codes! Auto ads now work through any AdSense ad unit code, so using them on your existing sites is as easy as turning them on in your account.
    - For new sites, just add them in the Sites page, copy-paste the AdSense code and turn on Auto ads.

    - For pages where you don't want to use Auto ads, add those URLs to your “Page exclusions” list. You can exclude individual pages or entire sections of your site.
    - You can see the full summary of your Auto ads settings for each of your sites as an overview.


    Greater customization

    - You can preview how Auto ads look on your site before they go live.
    - You can delete specific ad placements inside the preview using the “delete” button. Auto ads will immediately generate a placement in a new location for you to review.
    - You can easily specify the Ad formats that Auto ads places on your site, including Matched content.
    - You can control the number of Auto ads you'd like to show on your pages by using the Ad load slider.


    Updated reporting

    - We're updating our reports to allow you to easily see Auto ads and manual ad unit performance side by side for each of your sites.


    Don’t forget to check your Auto ads settings in the new Ads > Overview page to see these changes applied to your sites.

    Over the coming weeks, we'll email you to let you know that your account has the new Auto ads, and provide further information.


    Experiments are coming!


    We've received many requests to make it easier to test Auto ads and get accurate performance metrics. We're currently working on creating a new experiment type to make this possible. 
    Together with the changes highlighted above and the new experiment type, you'll be able to see how Auto ads will look on a site and test their performance on a portion of that site's traffic before enabling them fully.



    Posted by:
    Google AdSense Product Team

    CryptogramNew Unpatchable iPhone Exploit Allows Jailbreaking

    A new iOS exploit allows jailbreaking of pretty much all versions of the iPhone. This is a huge deal for Apple, but at least it doesn't allow someone to remotely hack people's phones.

    Some details:

    I wanted to learn how Checkm8 will shape the iPhone experience­ -- particularly as it relates to security­ -- so I spoke at length with axi0mX on Friday. Thomas Reed, director of Mac offerings at security firm Malwarebytes, joined me. The takeaways from the wide-ranging interview are:

    • Checkm8 requires physical access to the phone. It can't be remotely executed, even if combined with other exploits.

    • The exploit allows only tethered jailbreaks, meaning it lacks persistence. The exploit must be run each time an iDevice boots.

    • Checkm8 doesn't bypass the protections offered by the Secure Enclave and Touch ID.

    • All of the above means people will be able to use Checkm8 to install malware only under very limited circumstances. The above also means that Checkm8 is unlikely to make it easier for people who find, steal or confiscate a vulnerable iPhone, but don't have the unlock PIN, to access the data stored on it.

    • Checkm8 is going to benefit researchers, hobbyists, and hackers by providing a way not seen in almost a decade to access the lowest levels of iDevices.

    Also:

    "The main people who are likely to benefit from this are security researchers, who are using their own phone in controlled conditions. This process allows them to gain more control over the phone and so improves visibility into research on iOS or other apps on the phone," Wood says. "For normal users, this is unlikely to have any effect, there are too many extra hurdles currently in place that they would have to get over to do anything significant."

    If a regular person with no prior knowledge of jailbreaking wanted to use this exploit to jailbreak their iPhone, they would find it extremely difficult, simply because Checkm8 just gives you access to the exploit, but not a jailbreak in itself. It's also a 'tethered exploit', meaning that the jailbreak can only be triggered when connected to a computer via USB and will become untethered once the device restarts.


    Planet DebianJonathan McDowell: Ada Lovelace Day: 5 Amazing Women in Tech


    It’s Ada Lovelace day and I’ve been lax in previous years about celebrating some of the talented women in technology I know or follow on the interwebs. So, to make up for it, here are 5 amazing technologists.

    Allison Randal

    I was initially aware of Allison through her work on Perl, was vaguely aware of the fact she was working on Ubuntu, briefly overlapped with her at HPE (and thought it was impressive HP were hiring such a high calibre of Free Software folk) when she was working on OpenStack, and have had the pleasure of meeting her in person due to the fact we both work on Debian. In the continuing theme of being able to do all things tech she’s currently studying for a PhD at Cambridge (the real one), and has already written a fascinating paper about the security misconceptions around virtual machines and containers. She’s also been doing things with home automation, properly, with local speech recognition rather than relying on any external assistant service (I will, eventually, find the time to follow her advice and try this out for myself).

    Alyssa Rosenzweig

    Graphics are one of the many things I just can’t do. I’m not artistic and I’m in awe of anyone who is capable of wrangling bits to make computers do graphical magic. People who can reverse engineer graphics hardware that would otherwise only be supported by icky binary blobs impress me even more. Alyssa is such a person, working on the Panfrost driver for ARM’s Mali Midgard + Bifrost GPUs. The lack of a Free driver stack for this hardware is a real problem for the ARM ecosystem and she has been tirelessly working to bring this to many ARM based platforms. I was delighted when I saw one of my favourite Free Software consultancies, Collabora, had given her an internship over the summer. (Selfishly I’m hoping it means the Gemini PDA will eventually be able to run an upstream kernel with accelerated graphics.)

    Angie McKeown

    The first time I saw Angie talk it was about the user experience of Virtual Reality, and how it had an entirely different set of guidelines to conventional user interfaces. In particular the premise of not trying to shock or surprise the user while they’re in what can be a very immersive environment. Obvious once someone explains it to you! Turns out she was also involved in the early days of custom PC builds and internet cafes in Northern Ireland, and has interesting stories to tell. These days she’s concentrating on cyber security - I have her to thank for convincing me to persevere with Ghidra - having looked at Bluetooth security as part of her Masters. She’s also deeply aware of the implications of the GDPR and has done some interesting work on thinking about how it affects the computer gaming industry - both from the perspective of the author, and the player.

    Claire Wilgar

    I’m not particularly fond of modern web design. That’s unfair of me, but web designers seem happy to load megabytes of Javascript from all over the internet just to display the most basic of holding pages. Indeed it seems that such things now require all the includes rather than being simply a matter of HTML, CSS and some graphics, all from the same server. Claire talked at Women Techmakers Belfast about moving away from all of this bloat and back to a minimalistic approach with improved performance, responsiveness and usability, without sacrificing functionality or presentation. She said all the things I want to say to web designers, but from a position of authority, being a front end developer as her day job. It’s great to see someone passionate about front-end development who wants to do things the right way, and talks about it in a way that even people without direct experience of the technologies involved (like me) can understand and appreciate.

    Karen Sandler

    There aren’t enough people out there who understand law and technology well. Karen is one of the few I’ve encountered who do, and not only that, but really, really gets Free software and the impact of the four freedoms on users in a way many pure technologists do not. She’s had a successful legal career that’s transitioned into being the general counsel for the Software Freedom Law Center, been the executive director of GNOME and is now the executive director of the Software Freedom Conservancy. As someone who likes to think he knows a little bit about law and technology I found Karen’s wealth of knowledge and eloquence slightly intimidating the first time I saw her speak (I think at some event in San Francisco), but I’ve subsequently (gratefully) discovered she has an incredible amount of patience (and ability) when trying to explain the nuances of free software legal issues.

    Worse Than FailureCodeSOD: Compiled Correctly

    Properly used, version history can easily help you track down and identify the source of a bug. Improperly used, it still can. As previously established, the chief architect Dana works with has some issues with source control.

    Dana works on a large, complex embedded system. “Suddenly”, her team started to spot huge piles of memory corruption problems. Something was misbehaving, but it was hard to see exactly what.

    They ported Valgrind to their platform, just so they could try and figure out what was going wrong. Eventually, they tracked the problem down to a pair of objects.

    In the flow of the code, the correct path was that object A, which we’ll call Monster, would be allocated. Then a second object would be allocated. Somehow, Monster instances were corrupting the memory of the second object.

    How does an object allocated earlier corrupt the memory of an object allocated later? Well, “before” and “after” have different meanings when your code is multi-threaded, which this was. Worse, the Monster class was a katamari of functionality rolled up across thousands of lines of code. Obviously, there had to be a race condition- but a quick glance at all the Monster methods showed that they were using a mutex to avoid the race condition.

    Or were they? Dana looked more closely. One of the methods called during the initialization process, doSomething, was marked const. In C++, that should mean that the method doesn’t change any property values. But if it doesn’t change any property values, how can it lock the mutex?

    This is where walking through the commit history tells a story. “Fortunately” this was before Jerry learned you could amend a commit, so each step of his attempts to get the code to compile is recorded for posterity.

    The chain of commits started with one labeled “Add Feature $X”, and our doSomething method looked like this.

      void doSomething() const {
          Mutex::ScopedLock lock(mutex);
          // Dozens of lines of code
      }

    Now, the intent here was to create a ScopedLock object based off a mutex property. But that required the mutex property to change, which violated const, which meant this didn’t even compile.

    Which brings up our next commit, labeled “Fix compile failure”:

      void doSomething() const {
          Mutex::ScopedLock lock(mutex) const;
          // Dozens of lines of code
      }

    Surprisingly, just slapping the const declaration on the variable initialization didn’t do anything. The next commit, also helpfully labeled “Fix compile failure”:

      void doSomething() const {
          Mutex::ScopedLock lock(const mutex);
          // Dozens of lines of code
      }

    Again, this didn’t work. Which brings us to the last “Fix compile failure” commit in this chain:

      void doSomething() const {
          Mutex::ScopedLock lock(const Mutex mutex);
          // Dozens of lines of code
      }

    By randomly adding and subtracting symbols, Jerry was able to finally write a function which compiles. Unfortunately, it also doesn’t work, because this time, the line of code is a function declaration for a function with no implementation. It takes a mutex as a parameter, and returns a lock on that mutex. Since the declaration has no implementation, if we ever tried to call this in doSomething, we’d get an error, but we don’t, because this was always meant to be a constructor.

    The end result is that nothing gets locked. Thus, the race condition means that sometimes, two threads contend with each other and corrupt memory. Dana was able to fix this method, but the root cause was only fixed when Jerry left Initech to be a CTO elsewhere.

    [Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

    Cory DoctorowWhy do people believe the Earth is flat?

    In my latest podcast (MP3), I read my Globe and Mail column, Why do people believe the Earth is flat?, which connects the rise of conspiratorial thinking to the rise in actual conspiracies, in which increasingly concentrated industries are able to come up with collective lobbying positions that result in everything from crashing 737s to toxic baby-bottle liners to the opioid epidemic.

    From climate denial to anti-vax to a resurgent eugenics movement, we are in a golden age of terrible conspiratorial thinking, with real consequences for our species’ continued survival on our (decidedly round) planet.

    Ideas spread because of some mix of ideology and material circumstances: either ideas are convincingly argued, or they are delivered to people whose circumstances make them susceptible to those ideas, or both.

    Conspiracies aren’t on the rise because the arguments for them got better. The arguments for “alternative medicine” or against accepted climate science are no better than those that have lurked in the fringes for generations. Look up the 19th-century skeptics who decried the smallpox vaccine and you’ll find that anti-vax arguments have progressed very little in more than a century.

    MP3

    ,

    Cory DoctorowRevealing the cover of “Poesy the Monster Slayer,” my first-ever picture book!

    Firstsecond (publishers of In Real Life, the bestselling middle-grades graphic novel Jen Wang and I made) have just revealed the cover for Poesy the Monster Slayer, my first-ever picture book, illustrated by Matt Rockefeller and scheduled for publication in July 2020.

    Poesy is a book about a little girl who is obsessed with monsters, and who uses her deep knowledge of monsters’ weaknesses to repurpose her toys — a princess tiara, bubblegum-scented perfume, a doll-house’s roof, and more — as field-expedient monster-slaying weapons. She does nightly battle with the monsters who come into her room, to the great consternation of her parents, who only want to get a good night’s sleep.

    This book has been in the works for a long time, and I’m so glad to see it finally heading to the finish line! Matt’s illustrations are perfect — a kid-friendly, 21st century update on my favorite monster drawings, from Universal’s classic monsters to Marc Davis’s Haunted Mansion spook designs.

    Once her parents are off to bed, Poesy excitedly awaits the monsters that creep into her room. With the knowledge she’s gained from her trusty Monster Book and a few of her favorite toys, Poesy easily fends off a werewolf, a vampire, and much more.

    But not even Poesy’s bubblegum perfume can defeat her sleep-deprived parents!

    CryptogramEdward Snowden's Memoirs

    Ed Snowden has published a book of his memoirs: Permanent Record. I have not read it yet, but I want to point you all towards two pieces of writing about the book. The first is an excellent review of the book and Snowden in general by SF writer and essayist Jonathan Lethem, who helped make a short film about Snowden in 2014. The second is an essay looking back at the Snowden revelations and what they mean. Both are worth reading.

    As to the book, there are lots of other reviews.

    The US government has sued to seize Snowden's royalties from book sales.

    Planet DebianJulien Danjou: Python and fast HTTP clients


    Nowadays, it is more than likely that you will have to write an HTTP client for your application that will have to talk to another HTTP server. The ubiquity of REST APIs makes HTTP a first-class citizen. That's why knowing optimization patterns is a prerequisite.

    There are many HTTP clients in Python; the most widely used and easiest to
    work with is requests. It is the de facto standard nowadays.

    Persistent Connections

    The first optimization to take into account is the use of a persistent connection to the Web server. Persistent connections have been standard since HTTP 1.1, though many applications do not leverage them. This lack of optimization is simple to explain if you know that when using requests in its simple mode (e.g. with the get function) the connection is closed on return. To avoid that, an application needs to use a Session object that allows reusing an already opened connection.

    import requests
    
    session = requests.Session()
    session.get("http://example.com")
    # Connection is re-used
    session.get("http://example.com")
    Using Session with requests

    Each connection is stored in a pool of connections (10 by default), the size of
    which is also configurable:

    import requests
    
    
    session = requests.Session()
    adapter = requests.adapters.HTTPAdapter(
        pool_connections=100,
        pool_maxsize=100)
    session.mount('http://', adapter)
    response = session.get("http://example.org")
    Changing pool size

    Reusing the TCP connection to send out several HTTP requests offers a number of performance advantages:

    • Lower CPU and memory usage (fewer connections opened simultaneously).
    • Reduced latency in subsequent requests (no TCP handshaking).
    • Exceptions can be raised without the penalty of closing the TCP connection.

    The HTTP protocol also provides pipelining, which allows sending several requests on the same connection without waiting for the replies to come (think batch). Unfortunately, this is not supported by the requests library. However, pipelining requests may not be as fast as sending them in parallel. Indeed, the HTTP 1.1 protocol forces the replies to be sent in the same order as the requests were sent – first-in first-out.

    Parallelism

    requests also has one major drawback: it is synchronous. Calling requests.get("http://example.org") blocks the program until the HTTP server replies completely. Having the application waiting and doing nothing can be a drawback here. It is possible that the program could do something else rather than sitting idle.

    A smart application can mitigate this problem by using a pool of threads like the ones provided by concurrent.futures. It allows parallelizing the HTTP requests in a very rapid way.

    from concurrent import futures
    
    import requests
    
    
    with futures.ThreadPoolExecutor(max_workers=4) as executor:
        pending = [
            executor.submit(requests.get, "http://example.org")
            for _ in range(8)
        ]
    
    results = [
        f.result().status_code
        for f in pending
    ]
    
    print("Results: %s" % results)
    Using futures with requests

    This pattern being quite useful, it has been packaged into a library named requests-futures. The usage of Session objects is made transparent to the developer:

    from requests_futures import sessions
    
    
    session = sessions.FuturesSession()
    
    futures = [
        session.get("http://example.org")
        for _ in range(8)
    ]
    
    results = [
        f.result().status_code
        for f in futures
    ]
    
    print("Results: %s" % results)
    Using requests-futures

    By default, an executor with two worker threads is created, but a program can easily customize this value by passing the max_workers argument or even its own executor to the FuturesSession object – for example like this: FuturesSession(executor=ThreadPoolExecutor(max_workers=10)).

    Asynchronicity

    As explained earlier, requests is entirely synchronous. That blocks the application while waiting for the server to reply, slowing down the program. Making HTTP requests in threads is one solution, but threads do have their own overhead and this implies parallelism, which is not something everyone is always glad to see in a program.

    Starting with version 3.5, Python offers asynchronicity at its core through asyncio. The aiohttp library provides an asynchronous HTTP client built on top of asyncio. This library allows sending requests in series but without waiting for the first reply to come back before sending a new one. In contrast to HTTP pipelining, aiohttp sends the requests over multiple connections in parallel, avoiding the ordering issue explained earlier.

    import aiohttp
    import asyncio
    
    
    async def get(url):
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                return response
    
    
    loop = asyncio.get_event_loop()
    
    coroutines = [get("http://example.com") for _ in range(8)]
    
    results = loop.run_until_complete(asyncio.gather(*coroutines))
    
    print("Results: %s" % results)
    Using aiohttp

    All those solutions (using Session, threads, futures or asyncio) offer different approaches to making HTTP clients faster.

    Performances

    The snippet below is an HTTP client sending requests to httpbin.org, an HTTP API that provides (among other things) an endpoint simulating a long request (a second here). This example implements all the techniques listed above and times them.

    import contextlib
    import time
    
    import aiohttp
    import asyncio
    import requests
    from requests_futures import sessions
    
    URL = "http://httpbin.org/delay/1"
    TRIES = 10
    
    
    @contextlib.contextmanager
    def report_time(test):
        t0 = time.time()
        yield
        print("Time needed for `%s' called: %.2fs"
              % (test, time.time() - t0))
    
    
    with report_time("serialized"):
        for i in range(TRIES):
            requests.get(URL)
    
    
    session = requests.Session()
    with report_time("Session"):
        for i in range(TRIES):
            session.get(URL)
    
    
    session = sessions.FuturesSession(max_workers=2)
    with report_time("FuturesSession w/ 2 workers"):
        futures = [session.get(URL)
                   for i in range(TRIES)]
        for f in futures:
            f.result()
    
    
    session = sessions.FuturesSession(max_workers=TRIES)
    with report_time("FuturesSession w/ max workers"):
        futures = [session.get(URL)
                   for i in range(TRIES)]
        for f in futures:
            f.result()
    
    
    async def get(url):
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                await response.read()
    
    loop = asyncio.get_event_loop()
    with report_time("aiohttp"):
        loop.run_until_complete(
            asyncio.gather(*[get(URL)
                             for i in range(TRIES)]))
    Program to compare the performances of different requests usage

    Running this program gives the following output:

    Time needed for `serialized' called: 12.12s
    Time needed for `Session' called: 11.22s
    Time needed for `FuturesSession w/ 2 workers' called: 5.65s
    Time needed for `FuturesSession w/ max workers' called: 1.25s
    Time needed for `aiohttp' called: 1.19s

    Without any surprise, the slowest result comes from the dumb serialized version, since all the requests are made one after another without reusing the connection — 12 seconds to make 10 requests.

    Using a Session object and therefore reusing the connection means saving 8% in terms of time, which is already a big and easy win. At a minimum, you should always use a session.

    If your system and program allow the usage of threads, it is a good call to use them to parallelize the requests. However, threads have some overhead and they are not weightless. They need to be created, started and then joined.

    Unless you are still using old versions of Python, without a doubt using aiohttp should be the way to go nowadays if you want to write a fast and asynchronous HTTP client. It is the fastest and the most scalable solution as it can handle hundreds of parallel requests. The alternative, managing hundreds of threads in parallel, is not a great option.

    Streaming

    Another speed optimization that can be efficient is streaming the requests. When making a request, by default the body of the response is downloaded immediately. The stream parameter provided by the requests library or the content attribute for aiohttp both provide a way to not load the full content in memory as soon as the request is executed.

    import requests
    
    
    # Use `with` to make sure the response stream is closed and the connection can
    # be returned back to the pool.
    with requests.get('http://example.org', stream=True) as r:
        print(list(r.iter_content()))
    Streaming with requests
    import aiohttp
    import asyncio
    
    
    async def get(url):
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as response:
                return await response.content.read()
    
    loop = asyncio.get_event_loop()
    tasks = [asyncio.ensure_future(get("http://example.com"))]
    loop.run_until_complete(asyncio.wait(tasks))
    print("Results: %s" % [task.result() for task in tasks])
    Streaming with aiohttp

    Not loading the full content is extremely important in order to avoid allocating potentially hundreds of megabytes of memory for nothing. If your program does not need to access the entire content as a whole but can work on chunks, it is probably better to just use those methods. For example, if you're going to save and write the content to a file, reading only a chunk and writing it at the same time is going to be much more memory efficient than reading the whole HTTP body, allocating a giant pile of memory, and then writing it to disk.
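    For instance, a chunked download to disk with requests never holds more than one chunk in memory at a time (a sketch using the stream parameter described above; download is a hypothetical helper):

```python
import requests


def download(url, dest, chunk_size=64 * 1024):
    """Stream an HTTP body straight to disk, one chunk at a time."""
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
```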

    I hope that'll make it easier for you to write proper HTTP clients and requests. If you know any other useful technique or method, feel free to write it down in the comment section below!

    Worse Than FailureCodeSOD: Generically Bad

    The first two major releases of the .NET Framework, 1.0 and 1.1, were… not good. It's so long ago now that they're easily forgotten, but it's important to remember that a lot of core language features weren't in the framework until .NET 2.0.

    Like generics. Generics haven't always been part of the language, but they've been around since 2006. The hope would be that, in the course of 13 years, developers would learn to use this feature.

    Russell F (recently) has a co-worker who is still working on it.

    public static DataTable ClassRegionDToDatatable<POSInvoiceRegionD>(string tableName)
        where POSInvoiceRegionD : class
    {
        Type classType = typeof(POSInvoiceRegionD);
        DataTable result = new DataTable(tableName);
        foreach (PropertyInfo property in classType.GetProperties())
        {
            DataColumn column = new DataColumn();
            column.ColumnName = property.Name;
            column.DataType = property.PropertyType;
            result.Columns.Add(column);
        }
        return result;
    }
    
    public static DataTable ClassRegionFToDatatable<POSInvoiceRegionF>(string tableName)
        where POSInvoiceRegionF : class
    {
        Type classType = typeof(POSInvoiceRegionF);
        DataTable result = new DataTable(tableName);
        foreach (PropertyInfo property in classType.GetProperties())
        {
            DataColumn column = new DataColumn();
            column.ColumnName = property.Name;
            column.DataType = property.PropertyType;
            result.Columns.Add(column);
        }
        return result;
    }
    
    public static DataTable ClassRegionGToDatatable<POSInvoiceRegionG>(string tableName)
        where POSInvoiceRegionG : class
    {
        Type classType = typeof(POSInvoiceRegionG);
        DataTable result = new DataTable(tableName);
        foreach (PropertyInfo property in classType.GetProperties())
        {
            DataColumn column = new DataColumn();
            column.ColumnName = property.Name;
            column.DataType = property.PropertyType;
            result.Columns.Add(column);
        }
        return result;
    }
    
    public static DataTable ClassRegionKToDatatable<POSInvoiceRegionK>(string tableName)
        where POSInvoiceRegionK : class
    {
        Type classType = typeof(POSInvoiceRegionK);
        DataTable result = new DataTable(tableName);
        foreach (PropertyInfo property in classType.GetProperties())
        {
            DataColumn column = new DataColumn();
            column.ColumnName = property.Name;
            column.DataType = property.PropertyType;
            result.Columns.Add(column);
        }
        return result;
    }

    Now, the core idea behind generics is that code which is generic doesn't particularly care about what data-type it's working on. A generic list handles inserts and other list operations without thinking about what it's actually touching.

    So, right off the bat, the fact that we have a pile of generic methods which all contain the same code is a simple hint that something's gone terribly wrong.

    In this case, each of these methods takes a type parameter (which happens, in this case, to be named just like one of the actual classes we use), and then generates an empty DataTable with the columns configured to match the class. So, for example, you might do:

    DataTable d = POSInvoiceRegionUtils.ClassRegionDToDatatable<POSInvoiceRegionD>("the_d");

    Of course, because these methods are all generic and accept type parameters, you could just as easily…

    DataTable d = POSInvoiceRegionUtils.ClassRegionKToDatatable<POSInvoiceRegionD>("the_d");

    Not that such a counterintuitive thing ever happens. By the way, did you notice how these regions are named with letters? And you know how the alphabet has 26 of them? Well, while they're not using all 26 letters, there are a lot more regions than illustrated here, and they all get the same ClassRegion{x}ToDatatable implementation.

    So yes, we could boil all of these implementations down into one. Then again, should we? GetProperties is one of .NET's reflection methods, which lets us examine the definition of class objects. Using it isn't wrong, but it's always suspicious. Perhaps we don't need any of this code? Without more information, it's hard to say, but Russell adds:

    I'm going to leave aside the question of whether this is something that should be done at all to focus on the fact that it's being done in a really bizarre way.

    I'm not sure about "bizarre", but wrong? Definitely. Definitely wrong.
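    Since none of the type parameters are actually constrained to anything but class, all of the copy-pasted methods above could collapse into a single generic method. The name ClassToDatatable below is hypothetical; the body is taken from the originals, and like them it assumes the properties are plain, non-nullable types that a DataColumn can represent:

```csharp
// Hypothetical consolidation: one generic method replaces every
// ClassRegion{x}ToDatatable variant. Only the type parameter varies.
public static DataTable ClassToDatatable<T>(string tableName) where T : class
{
    DataTable result = new DataTable(tableName);
    // Reflect over the class's public properties and mirror each
    // one as a column, exactly as the original methods did.
    foreach (PropertyInfo property in typeof(T).GetProperties())
    {
        result.Columns.Add(new DataColumn
        {
            ColumnName = property.Name,
            DataType = property.PropertyType,
        });
    }
    return result;
}

// Usage, for any of the region classes:
// DataTable d = POSInvoiceRegionUtils.ClassToDatatable<POSInvoiceRegionD>("the_d");
```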


    Planet DebianAntoine Beaupré: This is why native apps matter

    I was watching a web stream on Youtube today and wondered why my CPU was so busy. So I fired up top and saw that my web browser (Firefox) was taking up around 70% of a CPU to play the stream.

    I thought, "this must be some high resolution crazy stream! how modern! such wow!" Then I thought, wait, this is the web, there must be something insane going on.

    So I did a little experiment: I started chromium --temp-profile on the stream, alongside vlc (which can also play Youtube streams!). Then I took a snapshot of the top(1) command after 5 minutes. Here are the results:

      PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    16332 anarcat   20   0 1805160 269684 102660 S  60,2   1,7   3:34.96 chromium
    16288 anarcat   20   0  974872 119752  87532 S  33,2   0,7   1:47.51 chromium
    16410 anarcat   20   0 2321152 176668  80808 S  22,0   1,1   1:15.83 vlc
     6641 anarcat   20   0   21,1g 520060 137580 S  13,8   3,2  55:36.70 x-www-browser
    16292 anarcat   20   0  940340  83980  67080 S  13,2   0,5   0:41.28 chromium
     1656 anarcat   20   0 1970912  18736  14576 S  10,9   0,1   4:47.08 pulseaudio
     2256 anarcat   20   0  435696  93468  78120 S   7,6   0,6  16:03.57 Xorg
    16262 anarcat   20   0 3240272 165664 127328 S   6,2   1,0   0:31.06 chromium
      920 message+  20   0   11052   5104   2948 S   1,3   0,0   2:43.37 dbus-daemon
    17915 anarcat   20   0   16664   4164   3276 R   1,3   0,0   0:02.07 top
    

    To deconstruct this, you can see my Firefox process (masquerading as x-www-browser), which has been running for a long time. It has used over 55 minutes of CPU time, but let's ignore that for now as it's not part of the benchmark. What I find fascinating is that there are at least 4 chromium processes running here, and they collectively used some six and a half minutes of CPU time.

    Compare this to a little over one (1!!!11!!!) minute of CPU time for VLC, and you realize why people are so ranty about everything being packaged as web apps these days. It's basically using up an order of magnitude more processing power (and therefore electric power and slave labor) to watch those silly movies in your web browser than in a proper video player.

    Keep that in mind next time you let Youtube go on an "autoplay Donald Drumpf" playlist...
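    For anyone repeating the experiment, a snapshot like the one above can be captured non-interactively with top in batch mode. This is a sketch; the -o sort flag needs a reasonably recent procps-ng:

```shell
# One batch-mode iteration of top, sorted by CPU usage; keep the
# summary header plus roughly the first ten process lines.
top -b -n 1 -o %CPU | head -n 17
```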

    ,

    Planet DebianIustin Pop: IronBike Einsiedeln 2019

    The previous race I did, at the end of August, was soo much fun—I had forgotten how much fun this is—that I did the unthinkable and registered for the IronBike as well, at a short (but not the shortest) distance.

    Why unthinkable? Last time I was here, three years ago, I apparently told my wife I would never, ever do this race again, since it’s not a fun one: steep up, steep down, or flat, but no flowing sections.

    Well, it seems I forgot I said that; I only remembered “it’s a hard race”, so I registered for the 53km distance, where there are no age categories. Just “Herren Fun” ☺

    September

    The month of September was a pretty good one overall: I even managed to lose 2kg (of fat, I’m 100% sure) and continued to increase my overall fitness - 23→34 CTL in Training Peaks.

    I also did the Strava Escape challenge and the Sufferfest Yoga challenge, so core fitness improved as well. And I was more rested: Training Peaks form was -8, up from -18 for the previous race.

    Overall, I felt reasonably confident to complete (compete?) the shorter distance; official distance 53km, 1’400m altitude.

    Race (day)

    The race for this distance starts very late (around half past ten), so it was an easy morning, breakfast, drive to place, park, get dressed, etc.

    The weather was actually better than I feared, it was plain shorts and t-shirt, no need for arm warmers, or anything like that. And it got sunnier as the race went on, which reminded me why on my race prep list it says “Sunscreen (always)”, and not just sunscreen.

    And yes, this race as well, even at this “fun” distance, started very strong. The first 8.7km are flat (total altitude gain 28m, i.e. nothing), and I did them at an average speed of 35.5km/h! Again, this is on a full-suspension mountain bike, with thick tires. I actually drafted, a lot, and it was the first time I realised even with MTB, knowing how to ride in a group is useful.

    And then the first climb starts: about 8.5km, gaining 522m of altitude (only around a third of the total…), and it was tiring. In the few places where the climb flattened, I saw again that one of the few ways I can gain on people is by timing my effort so that when it flattens, I can start strongly and thus gain, sometimes significantly.

    I don’t know if this is related to the fact that I’m on the heavier but also stronger side (this is comparing peak 5s power with other people on Sufferfest/etc.), or just I was fresh enough, but I was able to apply this repeatedly over the first two thirds of the race.

    At this moment I was still well, and the only negative aspect—my lower back pain, which started very early—was on and off, not continuous, so all good.

    First descent, 4.7km, average almost -12%, thus losing almost all the gain, ~450m. And this was an interesting descent: a lot of it was on a quite wide trail, but “paved” with square wood logs. And the distance between the wood logs, about 3-4cm, was not filled in well enough with soil, so it was a very jittery ride. I saw a lot of people unable to ride there (to my surprise), and, an even bigger surprise, probably 10 or 15 people with flats. I guess they ran pressures low enough to get snake bites (pinch flats) and thus a flat. I had no issues: I was running a bit high pressure (lucky guess), which I lowered later, but here it served me very well.

    Another short uphill, 1.8km/230m altitude, but I gave up and pushed the bike. Not fancy, but when I got off the bike the gradient was 16%!! And that’s not my favourite place in the world to be. Pushing the bike was not that bad: I was going 4.3km/h versus 5.8km/h beforehand, so I wasn’t losing much time. But yes, I was losing time.

    Then another downhill, thankfully not so steep, so I could enjoy going down for longer, and then a flat section, almost 8km overall, which I remembered quite well. Actually, I remembered quite a few things from the previous race, despite the different route, but this flat section in particular I remembered in two ways:

    • that I was already very, very tired back then
    • and that I didn’t remember at all what was coming next

    So I start on this flat section, expecting good progress, only to find myself going slower and slower. I couldn’t understand why it was so tiring to go straight, until I started approaching the group ahead. I realised then it was a “false flat”; I knew the term before, but didn’t understand well what it means, but now I got it ☺ About 1km, at a mean grade of 4.4%, which is not high when you know it’s a climb, but it is not flat. Once I realised this, I felt better. The other ~6.5km I spent pursuing a large group ahead, which I managed to catch about half a kilometre before the end of the flat.

    I enjoy this section, then the road turns and I saw what my brain didn’t want to remember: the last climb, 4km long, only ~180m altitude, but a mean grade of 10%, and I didn’t have energy for all of it. Which is kind of hilarious: 180m of altitude is only twice and a bit my daily one-way commute, so peanuts, but I was quite tired from the two previous climbs and from chasing that group, so after 11 minutes I got off the bike and started pushing.

    And the pushing was the same as before, actually even better: I was going 5.2km/h on average here, versus 6.1km/h before, so less than 1km/h difference. That is faster than a usual walking pace; as it was not very steep, I could walk fast. Since I was able to walk fast and felt recovered, I got back on the bike before the climb finished, only to feel a solid cramp in the right leg as soon as I tried to start pedalling. OK, let me pedal with the left… ouch, cramp as well. So I was going almost as slow as walking, just managing the cramps and keeping them below critical, and after about 5 minutes they went away.

    It was the first time I got such cramps, or any cramps, during a race. I’m not sure I understand why; I did drink energy drinks, not just water (so I assume it was not an acute lack of electrolytes). From reading on the internet, it seems cramps are not a solved problem, and are most likely just due to hard efforts at high intensity. It was scary for about ten seconds, as I feared a full cramping of one or both legs (and being clipped in… not funny), but once I realised I could keep it under control, it was just interesting.

    I now remembered the rest of the route (ahem, see below), so I knew I was looking at a long (7km), flat-ish section (100m gain) before the last, relatively steep descent. Steep as in -24.4% max gradient, -15% average. That done, I was on relatively flat roads (paved, even), and my Garmin was showing 1’365m of altitude gain since the start of the race. The difference from the official number of 1’400m was only 35m, which I was sure was due to simple measurement errors; the distance so far was 50.5km, so I expected the remaining 2.5km to be a fast run towards the finish.

    Then, in a village, we exited the main road and took a small road going up a bit, and then up a bit more, and then—I was pretty annoyed here already—the road opened onto a climb. A CLIMB!!!! I could see the top of the hill and people going up, and it was not a small climb! It was not 35m, nor anywhere near that; it was a short but real climb!

    I was swearing with all my power in various languages. I had completely and totally forgotten this last climb, I had not planned for it, and I was pissed off ☺ It turned out to be 1.3km long, with a mean grade of 11.3% (!) and 123m of elevation gain. That was almost 100m more than I had planned for… I got off the bike again, especially as it was a bit tricky to pedal where the gradient hit 20% (!!!), so I pushed, then got back on, then back off, and then back on. At the end of the climb there was a photographer again, so I got on the bike and hoped I might get a smiling picture, which almost happened:

    Pretending I’m enjoying this, and showing how much weight I should get rid of :/

    And then from here on it was clean, easy downhill, at most -20% grade, back in Einsiedeln, and back to the start with a 10m climb, which I pushed hard, overtook a few people (lol), passed the finish, and promptly found first possible place to get off the bike and lie down. This was all behind me now:

    Thanks VeloViewer!

    And my Garmin said 1’514m of altitude gain, more than 100m above the official number. Boo!

    Aftermath

    It was the first race after which I actually had to lie down. The first race with cramps, too ☺ My third-best 1h30m heart rate value ever, plus a lot of other 2019 peaks. And I was dead tired.

    I learned it’s very important to know well the route altitude profile, in order to be able to time your efforts well. I got reminded I suck at climbing (so I need to train this over and over), but I learned that I can push better than other people at the end of a climb, so I need to train this strength as well.

    I also learned that you can’t rely on the official race data, since it can be off by more than 100m. Or maybe I can’t rely on my Garmin(s), but with a barometric altimeter, I don’t think the problem was there.

    I think I ate a bit too much during the race, which was not optimal, but I was very hungry already after the first climb…

    But the biggest thing overall was that, despite there being no age groups, I got a better placement than I expected. I finished in 3h:37m.14s, 1h:21m.56s behind first place, but a good enough time to land me in 325th place out of 490 finishers.

    By my calculations, 490 finishers means the thirds are 1-163, 164-326, and 327-490. Which means, I was in the 2nd third, not in the last one!!! Hence the subtitle of this post, moving up, since I usually am in the bottom third, not the middle one. And there were also 11 DNF people, so I’m well in the middle third ☺

    Joke aside, this made me very happy, as it’s the first time I feel like efforts (in a bad year like this, even) do amount to something. So good.

    Speaking of DNFs, I was shocked at the number of people I saw either trying to fix their bikes on the side, or walking their bike (to the next repair post), or just shouting angrily at a broken component. I counted close to 20 such people, many of them after that first descent, but given only 11 DNFs, I guess most people do carry spare tubes. I didn’t; hope is a strategy, sometimes ☺

    Now, with the season really closed, time to start thinking about next year. And how to lose those 10kg of fat I still need to lose…

    Planet DebianAntoine Beaupré: Calibre replacement considerations

    Summary

    TL;DR: I'm considering replacing those various Calibre components with...

    My biggest blockers, which don't really have good alternatives, are:

    • collection browser
    • metadata editor
    • device sync

    See below why and a deeper discussion on all the features.

    Problems with Calibre

    Calibre is an amazing piece of software: it allows users to manage ebooks on their desktop and on a multitude of ebook readers. It's used by Linux geeks as well as Windows power-users and vastly surpasses any native app shipped by ebook manufacturers. I know almost exactly zero people with an ebook reader who do not use Calibre.

    However, it has had many problems over the years:

    Update: a previous version of that post claimed that all of Calibre had been removed from Debian. This was inaccurate, as the Debian Calibre maintainer pointed out. What happened was Calibre 4.0 was uploaded to Debian unstable, then broke because of missing Python 2 dependencies, and an older version (3.48) was uploaded in its place. So Calibre will stay around in Debian for the foreseeable future, hopefully, but the current latest version (4.0) cannot get in because it depends on older Python 2 libraries.

    The latest issue (Python 3) is the last straw for me. While Calibre is an awesome piece of software, I can't help but think it's doing too much, and in the wrong way. It's one of those tools that look amazing on the surface, but when you look underneath, it's a monster that is impossible to maintain, a liability that is bound to cause more problems in the future.

    What does Calibre do anyways

    So let's say I wanted to get rid of Calibre, what would that mean exactly? What do I actually use Calibre for anyways?

    Calibre is...

    • an ebook viewer: Calibre ships with the ebook-viewer command, which allows one to browse a vast variety of ebook formats. I rarely use this feature, since I read my ebooks on an e-reader, on purpose. There is, besides, a good variety of ebook readers, on different platforms, that can replace Calibre here:

      • Atril, MATE's version of Evince, supports ePUBs (Evince doesn't seem to), but fails to load certain ebooks (book #1459 for example)
      • Bookworm looks very promising; it is not in Debian (Debian bug #883867), but is on Flathub. It scans books on exit, and can take a loong time to scan an entire library (24+ hours here, and I had to kill pdftohtml a few times), without a progress bar. It has a nice library browser, although covers seem to be sorted randomly; search works okay, however. It's unclear what happens when you add a book: it doesn't end up in the chosen on-disk library.
      • Buka is another "ebook" manager written in Javascript, but only supports PDFs for now.
      • coolreader is another alternative, not yet in Debian (#715470)
      • Emacs (of course) supports ebooks through nov.el
      • fbreader also supports ePUBs, but is much slower than all the others, and it turned proprietary, so it is unmaintained
      • Foliate looks gorgeous and is built on top of the ePUB.js library, not in Debian, but Flathub.
      • GNOME Books is interesting, but relies on the GNOME search engine and doesn't find my books (instead finding lots of other garbage). It's been described as "basic" and "the least mature" in this OMG Ubuntu review
      • koreader is a good alternative reader for the Kobo devices and now also has builds for Debian, but no Debian package
      • lucidor is a Firefox extension that can read and organize books, but is not packaged in Debian either (although upstream provides a .deb). It depends on older Firefox releases (or "Pale moon", a Firefox fork), see also the firefox XULocalypse for details
      • MuPDF also reads ePUBs and is really fast, but the user interface is extremely minimal, and copy-paste doesn't work so well (think "Xpdf"). It also failed to load certain books (e.g. 1359) and warns about 3.0 ePUBs (e.g. book 1162)
      • Okular supports ePUBs when okular-extra-backends is installed
      • plato is another alternative reader for Kobo readers, not in Debian
    • an ebook editor: Calibre also ships with an ebook-edit command, which allows you to do all sorts of nasty things to your ebooks. I have rarely used this tool, having found it hard to use and not giving me the results I needed, in my use case (which was to reformat ePUBs before publication). For this purpose, Sigil is a much better option, now packaged in Debian. There are also various tools that render to ePUB: I often use the Sphinx documentation system for that purpose, and have been able to produce ePUBs from LaTeX for some projects.

    • a file converter: Calibre can convert between many ebook formats, to accommodate the various readers. In my experience, this doesn't work very well: the layout is often broken and I have found it's much better to find pristine copies of ePUB books than fight with the converter. There are, however, very few alternatives to this functionality, unfortunately.

    • a collection browser: this is the main functionality I would miss from Calibre. I am constantly adding books to my library, and Calibre does have this incredibly nice functionality of just hitting "add book" and Just Do The Right Thingâ„¢ after that. Specifically, what I like is that it:

      • sort, view, and search books in folders, per author, date, editor, etc
      • quick search is especially powerful
      • allows downloading and editing metadata (like covers) easily
      • track read/unread status (although that's a custom field I had to add)

      Calibre is, as far as I know, the only tool that goes so deep in solving that problem. The Liber web server, however, does provide similar search and metadata functionality. It also supports migrating from an existing Calibre database as it can read the Calibre metadata stores. When no metadata is found, it fetches some from online sources (currently Google Books).

      One major limitation of Liber in this context is that it's solely search-driven: it will not allow you to see (for example) the "latest books added" or "browse by author". It also doesn't support "uploading" books although it will incrementally pick up new books added by hand in the library. It somewhat assumes Calibre already exists, in a way, to properly curate the library and is more designed to be a search engine and book sharing system between liber instances.

      This also connects with the more general "book inventory" problem I have, which involves an inventory of physical books and a directory of online articles. See also firefox (Zotero section) and ?bookmarks for a longer discussion of that problem.

    • a metadata editor: the "collection browser" is based on a lot of metadata that Calibre indexes from the books. It can magically find a lot of stuff in the multitude of file formats it supports, something that is pretty awesome and impressive. For example, I just added a PDF file, and it found the book cover, author, publication date, publisher, language and the original mobi book id (!). It also added the book in the right directory and dumped that metadata and the cover in a file next to the book. And if that's not good enough, it can poll that data from various online sources like Amazon, and Google books.

      Maybe the work Peter Keel did could be useful in creating some tool which would do this automatically? Or maybe Sigil could help? Liber can also fetch metadata from Google books, but not interactively.

      I still use Calibre mostly for this.

    • a device synchronization tool: I mostly use Calibre to synchronize books with an ebook-reader. It can also automatically update the database on the ebook with relevant metadata (e.g. collection or "shelves"), although I do not really use that feature. I do like to use Calibre to quickly search and prune books from my ebook reader, however. I might be able to use git-annex for this, however, given that I already use it to synchronize and backup my ebook collection in the first place...

    • an RSS reader: I used this for a while to read RSS feeds on my ebook-reader, but it was pretty clunky. Calibre would be continuously generating new ebooks based on those feeds and I would never read them, because I would never find the time to transfer them to my ebook viewer in the first place. Instead, I use a regular RSS feed reader. I ended up writing my own (feed2exec), and when I find an article I like, I add it to Wallabag, which gets sync'd to my reader using wallabako, another tool I wrote.

    • an ebook web server: Calibre can also act as a web server, presenting your entire ebook collection as a website. It also supports acting as an OPDS directory, which is kind of neat. There are, as far as I know, no alternatives for such a system, although there are servers to share and store ebooks, like Trantor or Liber.

    Note that I might have forgotten functionality in Calibre in the above list: I'm only listing the things I have used or am using on a regular basis. For example, you can have a USB stick with Calibre on it to carry the actual software, along with the book library, around on different computers, but I never used that feature.

    So there you go. It's a colossal task! And while it's great that Calibre does all those things, I can't help but think that it would be better if Calibre was split up in multiple components, each maintained separately. I would love to use only the document converter, for example. It's possible to do that on the commandline, but it still means I have the entire Calibre package installed.

    Maybe a simple solution, from Debian's point of view, would be to split the package into multiple components, with the GUI and web servers packaged separately from the commandline converter. This way I would be able to install only the parts of Calibre I need and have limited exposure to other security issues. It would also make it easier to run Calibre headless, in a virtual machine or remote server for extra isolation, for example.

    Update: this post generated some activity on Mastodon, follow the conversation here or on your favorite Mastodon instance.

    Rondam RamblingsWhatever happened to "no collusion"?

    Funny how fast the "no collusion" slogan evaporated after the recent revelations about Trump trying to shake down the president of Ukraine to fabricate a smear campaign about Joe Biden.  Two years of hand-wringing about the Mueller report are suddenly moot.  Instead of "no collusion" it's now, "Sure I colluded, but it was for a good cause.  Collusion with a foreign government is perfectly

    Planet DebianNorbert Preining: RIP (for now) Calibre in Debian

    The current purge of all Python2 related packages has a direct impact on Calibre. The latest version of Calibre requires Python modules that are not (anymore) available for Python 2, which means that Calibre >= 4.0 will for the foreseeable future not be available in Debian.

    I have just uploaded version 3.48, which is the last version that can run on Debian. Until upstream Calibre switches to Python 3, this will remain the latest version of Calibre in Debian.

    In case you need newer features (including the occasional security fixes), I recommend switching to the upstream installer, which is rather clean (it installs into /opt/calibre, creates some links to the startup programs, and installs completions for zsh and bash). It also provides an uninstaller that reverts these changes.

    Enjoy.

    ,

    Planet DebianReproducible Builds: Reproducible Builds in September 2019

    Welcome to the September 2019 report from the Reproducible Builds project!

    In these reports we outline the most important things that we have been up to over the past month. As a quick refresher of what our project is about, whilst anyone can inspect the source code of free software for malicious changes, most software is distributed to end users or servers as precompiled binaries. The motivation behind the reproducible builds effort is to ensure zero changes have been introduced during these compilation processes. This is achieved by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

    In September’s report, we cover:

    • Media coverage & eventsmore presentations, preventing Stuxnet, etc.
    • Upstream newskernel reproducibility, grafana, systemd, etc.
    • Distribution workreproducible images in Arch Linux, policy changes in Debian, etc.
    • Software developmentyet more work on diffoscope, upstream patches, etc.
    • Misc news & getting in touchfrom our mailing list how to contribute, etc

    If you are interested in contributing to our project, please visit our Contribute page on our website.


    Media coverage & events

    This month Vagrant Cascadian attended the 2019 GNU Tools Cauldron in Montréal, Canada and gave a presentation entitled Reproducible Toolchains for the Win (video).

    In addition, our project was highlighted as part of a presentation by Andrew Martin at the All Systems Go conference in Berlin titled Rootless, Reproducible & Hermetic: Secure Container Build Showdown, and Björn Michaelsen from the Document Foundation presented at the 2019 LibreOffice Conference in Almería in Spain on the status of reproducible builds in the LibreOffice office suite.

    In academia, Anastasis Keliris and Michail Maniatakos from the New York University Tandon School of Engineering published a paper titled ICSREF: A Framework for Automated Reverse Engineering of Industrial Control Systems Binaries (PDF) that speaks to concerns regarding the security of Industrial Control Systems (ICS) such as those attacked via Stuxnet. The paper outlines their ICSREF tool for reverse-engineering binaries from such systems and furthermore demonstrates a scenario whereby a commercial smartphone equipped with ICSREF could be easily used to compromise such infrastructure.

    Lastly, It was announced that Vagrant Cascadian will present a talk at SeaGL in Seattle, Washington during November titled There and Back Again, Reproducibly.


    2019 Summit

    Registration for our fifth annual Reproducible Builds summit that will take place between 1st → 8th December in Marrakesh, Morocco has opened and personal invitations have been sent out.

    Similar to previous incarnations of the event, the heart of the workshop will be three days of moderated sessions with surrounding “hacking” days and will include a huge diversity of participants from Arch Linux, coreboot, Debian, F-Droid, GNU Guix, Google, Huawei, in-toto, MirageOS, NYU, openSUSE, OpenWrt, Tails, Tor Project and many more. If you would like to learn more about the event and how to register, please visit our dedicated event page.


    Upstream news

    Ben Hutchings added documentation to the Linux kernel regarding how to make the build reproducible. As he mentioned in the commit message, the kernel is “actually” reproducible but the end-to-end process was not previously documented in one place and thus Ben describes the workflow and environment needed to ensure a reproducible build.

    Daniel Edgecumbe submitted a pull request which was subsequently merged to the logging/journaling component of systemd in order that the output of e.g. journalctl --update-catalog does not differ between subsequent runs despite there being no changes in the input files.

    Jelle van der Waa noticed that if the grafana monitoring tool was built within a source tree devoid of Git metadata then the current timestamp was used instead, leading to an unreproducible build. To avoid this, Jelle submitted a pull request in order that it use SOURCE_DATE_EPOCH if available.

    Mes (a Scheme-based compiler for our “sister” bootstrappable builds effort) announced their 0.20 release.


    Distribution work

    Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. Thunderbird and kernel-vanilla packages will be among the larger ones to become reproducible soon and there were additional Python patches to help reproducibility issues of modules written in this language that have C bindings.

    OpenWrt is a Linux-based operating system targeting embedded devices such as wireless network routers. This month, Paul Spooren (aparcar) switched the toolchain to use GCC version 8 by default in order to support the -ffile-prefix-map= option, which permits a varying build path without affecting the binary result of the build []. In addition, Paul updated the kernel-defaults package to ensure that the SOURCE_DATE_EPOCH environment variable is considered when creating the /init directory.

    Alexander “lynxis” Couzens began working on a set of build scripts for creating firmware and operating system artifacts in the coreboot distribution.

    Lukas Pühringer prepared an upload which was sponsored by Holger Levsen of python-securesystemslib version 0.11.3-1 to Debian unstable. python-securesystemslib is a dependency of in-toto, a framework to protect the integrity of software supply chains.

    Arch Linux

    The mkinitcpio component of Arch Linux was updated by Daniel Edgecumbe in order that it generates reproducible initramfs images by default, meaning that two subsequent runs of mkinitcpio produce files that are identical at the binary level. The commit message elaborates on its methodology:

    Timestamps within the initramfs are set to the Unix epoch of 1970-01-01. Note that in order for the build to be fully reproducible, the compressor specified (e.g. gzip, xz) must also produce reproducible archives. At the time of writing, as an inexhaustive example, the lzop compressor is incapable of producing reproducible archives due to the insertion of a runtime timestamp.
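The properties quoted above, fixed timestamps plus a reproducible compressor, can be sketched in Python (an illustration of the technique, not mkinitcpio's actual shell implementation; assumes Python 3.8+ for gzip's mtime parameter):

```python
import gzip
import io
import tarfile

def deterministic_tar_gz(members, epoch=0):
    """Build a .tar.gz whose bytes depend only on the input contents:
    member order is sorted, mtimes are clamped to the Unix epoch,
    owner information is dropped, and the gzip header timestamp is
    zeroed (gzip would otherwise embed the build time)."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(members):
            data = members[name]
            info = tarfile.TarInfo(name)
            info.size = len(data)
            info.mtime = epoch          # clamp timestamps
            info.uid = info.gid = 0     # drop builder-specific ownership
            info.uname = info.gname = ""
            tar.addfile(info, io.BytesIO(data))
    return gzip.compress(buf.getvalue(), mtime=0)
```

Two invocations with the same contents, regardless of insertion order or wall-clock time, yield byte-identical archives.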

    In addition, a bug was created to track progress on making the Arch Linux ISO images reproducible.

    Debian

    In July, Holger Levsen filed a bug against the underlying tool that maintains the Debian archive (“dak”) after he noticed that .buildinfo metadata files were not being automatically propagated in the case that packages had to be manually approved in “NEW queue”. After it was pointed out that the files were being retained in a separate location, Benjamin Hof proposed a patch for the issue that was merged and deployed this month.

    Aurélien Jarno filed a bug against the Debian Policy (#940234) to request a section be added regarding the reproducibility of source packages. Whilst there is already a section about reproducibility in the Policy, it only mentions binary packages. Aurélien suggests that it:

    … might be a good idea to add a new requirement that repeatedly building the source package in the same environment produces identical .dsc files.

    In addition, 51 reviews of Debian packages were added, 22 were updated and 47 were removed this month adding to our knowledge about identified issues. Many issue types were added by Chris Lamb including buildpath_in_code_generated_by_bison, buildpath_in_postgres_opcodes and ghc_captures_build_path_via_tempdir.

    Software development

    Upstream patches

    The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

    Diffoscope

    diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.

    This month, Chris Lamb uploaded versions 123, 124 and 125 and made the following changes:

    • New features:

      • Add /srv/diffoscope/bin to the Docker image path. (#70)
      • When skipping tests due to the lack of installed tool, print the package that might provide it. []
      • Update the “no progressbar” logging message to match the parallel missing tlsh module warnings. []
      • Update “requires foo” messages to clarify that they are referring to Python modules. []
    • Testsuite updates:

      • The test_libmix_differences ELF binary test requires the xxd tool. (#940645)
      • Build the OCaml test input files on-demand rather than shipping them with the package in order to prevent test failures with OCaml 4.08. (#67)
      • Also conditionally skip the identification and “no differences” tests as we require the Ocaml compiler to be present when building the test files themselves. (#940471)
      • Rebuild our test squashfs images to exclude the character device as they require root or fakeroot to extract. (#65)
    • Many code cleanups, including dropping some unnecessary control flow [], dropping unnecessary pass statements [] and dropping explicit inheritance from the object class as it is unnecessary in Python 3 [].

    In addition, Marc Herbert completely overhauled the handling of ELF binaries, particularly around many assumptions that were previously being made via file extensions, etc. [][][], and updated the testsuite to support a newer version of the coreboot utilities []. Mattia Rizzolo then ensured that diffoscope does not crash when the progress bar module is missing but the functionality was requested [] and made our version checking code more lenient []. Lastly, Vagrant Cascadian not only updated diffoscope to versions 123 and 125 but also enabled a more complete test suite in the GNU Guix distribution. [][][][][][]

    Project website

    There was yet more effort put into our website this month, including:

    In addition, Cindy Kim added in-toto to our “Who is Involved?” page, James Fenn updated our homepage to fix a number of spelling and grammar issues [] and Peter Conrad added BitShares to our list of projects interested in Reproducible Builds [].

    strip-nondeterminism

    strip-nondeterminism is our tool to remove specific non-deterministic results from successful builds. This month, Marc Herbert made a huge number of changes including:

    • GNU ar handler:
      • Don’t corrupt the pseudo file mode of the symbols table.
      • Add test files for “symtab” (/) and long names (//).
      • Don’t corrupt the SystemV/GNU table of long filenames.
    • Add a new $File::StripNondeterminism::verbose global and, if enabled, tell the user that ar(1) could not set the symbol table’s mtime.
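The mtime normalization being discussed can be sketched in Python (strip-nondeterminism itself is written in Perl; this sketch walks a simplified GNU ar archive and ignores the complications, such as extended name tables, that the real handler must take care not to corrupt):

```python
def zero_ar_mtimes(data: bytes) -> bytes:
    """Overwrite the 12-byte decimal mtime field of every member header
    in an ar archive with '0', leaving all other fields untouched.
    ar member headers are 60 bytes: name[16] mtime[12] uid[6] gid[6]
    mode[8] size[10] magic[2]."""
    out = bytearray(data)
    off = 8                                        # skip the "!<arch>\n" magic
    while off + 60 <= len(out):
        size = int(out[off + 48:off + 58].split()[0])
        out[off + 16:off + 28] = b"0".ljust(12)    # normalize the mtime field
        off += 60 + size + (size % 2)              # members are 2-byte aligned
    return bytes(out)
```

The symbol table (`/`) and long-name table (`//`) members mentioned above are exactly the cases where a naive rewrite like this could corrupt the archive, which is what the test files guard against.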

    In addition, Chris Lamb performed some issue investigation with the Debian Perl Team regarding issues in the Archive::Zip module, including a problem with corruption of members that use bzip compression, as well as a regression, reported in/around Debian bug #940973, whereby various metadata fields were not being updated.

    Test framework

    We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org.

    • Alexander “lynxis” Couzens:
      • Fix missing .xcompile in the build system. []
      • Install the GNAT Ada compiler on all builders. []
      • Don’t install the iasl ACPI power management compiler/decompiler. []
    • Holger Levsen:
      • Correctly handle the $DEBUG variable in OpenWrt builds. []
      • Refactor and notify the #archlinux-reproducible IRC channel of problems in this distribution. []
      • Ensure that only one mail is sent when rebooting nodes. []
      • Unclutter the output of a Debian maintenance job. []
      • Drop a “todo” entry as we have been varying on a merged /usr for some time now. []

    In addition, Paul Spooren added an OpenWrt snapshot build script which downloads .buildinfo and related checksums from the relevant download server and attempts to rebuild and then validate them for reproducibility. []

    The usual node maintenance was performed by Holger Levsen [][][], Mattia Rizzolo [] and Vagrant Cascadian [][].

    reprotest

    reprotest is our end-user tool to build the same source code twice in different environments and then check the binaries produced by each build for differences. This month, a change by Dmitry Shachnev was merged to not use the faketime wrapper at all when asked to not vary time [] and Holger Levsen subsequently released this as version 0.7.9, after dramatically overhauling the packaging [][].


    Misc news & getting in touch

    On our mailing list Rebecca N. Palmer started a thread titled Addresses in IPython output which points out and attempts to find a solution to a problem with Python packages, whereby objects that don’t have an explicit string representation have a default one that includes their memory address. This causes problems with reproducible builds if/when such output appears in generated documentation.
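The problem is easy to demonstrate, and so is the usual fix; a minimal sketch (the class names here are invented for illustration):

```python
class Widget:
    # No explicit __repr__: the default repr embeds the object's memory
    # address, e.g. "<__main__.Widget object at 0x7f3a...>", which
    # varies between runs and so breaks reproducible documentation.
    pass

class StableWidget:
    def __repr__(self):
        # An explicit repr contains no address, so any generated
        # documentation that includes it is stable across builds.
        return "<StableWidget>"
```

Whenever such repr output ends up in generated documentation, only the second form produces reproducible files.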

    If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:



    This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa, Mattia Rizzolo and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

    Planet DebianRitesh Raj Sarraf: Setting a Lotus Pot

    Experiences setting up a Lotus Pond Pot

    A novice’s first time experience setting up a Lotus pond and germinating the Lotus seeds to a full plant

    The trigger

    Our neighbors have a very nice Lotus setup in the front of their garden, with flowers blooming in it. It is really a pleasing experience to see it. With lifestyles limiting to specifics, I’m glad to be around with like minded people. So we decided to set up a Lotus Pond Pot in our garden too.

    Hunting the pot

    About 2/3rds of our garden has been laid out with Mexican Grass, with the exception of some odd spots where there are two large concrete tanks in the ground and some other small ones. The large one is fairly big, with dimensions of around 3.5 x 3.5 ft. We wanted to make use of that spot. I checked out some of the available pots and found a good-looking circular pot carved in stone, around 2 ft across, but it was quite expensive and did not fit my budget.

    With the stone pot out of my budget, the remaining options were:

    • One molded in cement
    • Granite

    We looked at another pot maker who molded pots in cement. They had some small sample pots of around 1 x 1 ft, but the finished products didn’t look very good. From the available samples, we couldn’t tell how the pot we wanted would actually turn out, and the price difference wasn’t proportionate either. The vendor would also take a month to make one. With no ready-made sample of that size in place, this wasn’t an option we were very enthusiastic to explore.

    We instead chose to explore the possibility of building one with granite slabs. First, granite vendors were more easily available. Second, and most important, given the spot I had chosen, I felt a square granite-based setup would be an equally good fit. The granite pot also fit our budget. Finally, we settled on a 3 x 3 ft granite pot.

    And this is what our initial Lotus Pot looked like.

    Note: The granite pot was quite heavy, close to around 200 Kilograms. It took us 3 people to place it at the designated spot

    Actual Lotus pot

    As you can see from the picture above, there’s another pot inside the larger granite pot itself. Lotuses grow in water and sludge, so we needed to provide the same setup. We used one of our Biryani pots to hold the sludge. This is where the Lotus’s roots and tuber will live and grow, in the water and sludge.

    With an open pot, when you have water stored, you have other challenges to take care of.

    • Aeration of the water
    • Mosquitoes

    We bought a solar water fountain to take care of aerating the water. It is a nice device that works very well under sunlight. Remember, for your Lotus, you need a well sun-lit area. So, the solar water fountain was a perfect fit for this scenario.

    But with standing water come mosquitoes. As most would do, we chose to put some fish into the pond to take care of that problem. We put in some Mollies, which we hope will multiply. The waste they produce should also work as a supplemental fertilizer for the Lotus plant.

    Aeration is very important for the fish and so far our Solar Water Fountain seems to be doing a good job there, keeping the fishes happy and healthy.

    So in around a week, we had the Lotus Pot in place along with a water fountain and some fish.

    The Lotus Plant itself

    With the setup in place, it was now time to head to the nearby nursery and get the Lotus plant. It was an utter surprise not to find the Lotus plant in any of the nurseries. After quite a lot of searching around, we came across one nursery that had some lotus plants. They didn’t look to be in good health, but that was the only option we had. The bigger disappointment was the price, which was insanely high for a plant. We returned home without the Lotus, a little disheartened.

    Thank you Internet

    The internet has connected the world so well. I looked it up and was delighted to see so many people sharing experiences, in the form of articles and YouTube videos, about Lotus. You can find all the necessary information there. With it, we were all charged up for the next step: to grow the lotus from seeds instead.

    The Lotus seeds

    First, we looked up the internet and placed an order for 15 lotus seeds. Soon, they were delivered. And this is what Lotus seeds look like.

    I had never in my life seen lotus seeds before. The shell is very, very hard. An impressive remnant of the Lotus plant.

    Germinating the seed

    Germinating the seed is an experience of its own, given how hard the lotus’ shell is. There are very good articles and videos on the internet explaining the steps to germinate a seed. In a gist, you need to scratch the pointy end of the seed’s shell enough to see the inner membrane, and then you need to submerge it in water for around 7-10 days. Every day, you’ll be able to witness the germination process.

    Make sure to scratch the pointy end well enough, while also ensuring to not damage the inner membrane of the seed. You also need to change the water in the container on a daily basis and keep it in a decently lit area.

    Here’s an interesting bit, specific to my own experience. Turned out that the seeds we purchased online weren’t of good quality. Except for one, none of the seeds germinated. And the one that did, did not sprout properly. It popped a single shoot, but the shoot did not grow much. In all, it didn’t work.

    But while we were waiting for the seeds to be delivered, my wife looked at the pictures of the seeds that I was studying online and realized that we already had lotus seeds at home. Turns out these seeds are used in our Hindu Puja rituals; they are called कमल गट्टा in Hindi. We had some of the seeds remaining from a puja, so in parallel we used those seeds for germination, and they sprouted very well.

    Unfortunately, I had not taken any pictures of them during the germination phase. Update: Sat 12 Oct 2019

    The sprouting phase should be done in a tall glass. This will allow the shoots to grow long, as eventually they need to be set into the actual pot. During the germination phase, the shoot will grow around an inch a day, ultimately aiming to reach the surface of the water. Once it reaches the surface, it will eventually start developing roots at the base.

    Now it is time to prepare to sow the seed into the sub-pot that holds the sludge.

    Sowing your seeds

    This picture is from a couple of days after I sowed the seeds. When you transfer the shoots into the sludge, press your finger into the sludge to make room for the seed. Gently place the seed into the sludge. You can also cover the sub-pot with some gravel just to keep the sludge intact and clean.

    Once your shoots get accustomed to the new environment, they start to grow again. That is what you see in the picture above, where the shoot reaches the surface of the water and then starts developing into the beautiful lotus leaf.

    Lotus leaf

    It is a pleasure to see the Lotus leaves floating on the water. The flowering is going to take well over a year, from what I have read on the internet. But the Lotus leaves, their veins, and the water droplets on them are, for now, very soothing to watch.

    Final Result

    As of today, this is what the final setup looks like. Hopefully, in a year’s time, there’ll be flowers.

    Update Sat 12 Oct 2019

    So I was able to capture some pictures from the Lotus pot, in which you can see the Lotus shoot while still underwater, then as it reaches the surface, and finally as it develops its leaf.

    ,

    Planet DebianJohn Goerzen: Resurrecting Ancient Operating Systems on Debian, Raspberry Pi, and Docker

    I wrote recently about my son playing Zork on a serial terminal hooked up to a PDP-11, and how I eventually bought a vt420 (ok, some vt420s and vt510s, I couldn’t stop at one) and hooked it up to a Raspberry Pi.

    This led me down another path: there is a whole set of hardware and software that I’ve never used. For some, it fell out of favor before I could read (and for others, before I was even born).

    The thing is – so many of these old systems have a legacy that we live in today. So much so, in fact, that we are now seeing articles about how modern CPUs are fast PDP-11 emulators in a sense. The PDP-11, and its close association with early Unix, lives on in the sense that its design influenced microprocessors and operating systems to this day. The DEC vt100 terminal is, nowadays, known far better as that thing that is emulated, but it was, in fact, a physical thing. Some of it goes back into even mistier times; Emacs, for instance, grew out of the MIT ITS project but was later ported to TOPS-20 before being associated with Unix. vi grew up in 2BSD, and according to Wikipedia, was so large it could barely fit in the memory of a PDP-11/70. Also in 2BSD, a buggy version of Zork appeared — so buggy, in fact, that the save game option was broken. All of this happened in the late 70s.

    When we think about the major developments in computing, we often hear of companies like IBM, Microsoft, and Apple. Of course their contributions are undeniable, and emulators for old versions of DOS are easily available for every major operating system, plus many phones and tablets. But as the world is moving heavily towards Unix-based systems, the Unix heritage is far more difficult to access.

    My plan with purchasing and setting up an old vt420 wasn’t just to do that and then leave. It was to make it useful for modern software, and also to run some of these old systems under emulation.

    To that end, I have released my vintage computing collection – both a script for setting up on a system, and a docker image. You can run Emacs and TECO on TOPS-20, zork and vi on 2BSD, even Unix versions 5, 6, and 7 on a PDP-11. And for something particularly rare, RDOS on a Data General Nova. I threw in some old software compiled for modern systems: Zork, Colossal Cave, and Gopher among them. The bsdgames collection and some others are included as well.

    I hope you enjoy playing with the emulated big-iron systems of the 70s and 80s. And in a dramatic turnabout of scale and cost, those machines which used to cost hundreds of thousands of dollars can now be run far faster under emulation on a $35 Raspberry Pi.

    CryptogramFriday Squid Blogging: Hawaiian Bobtail Squid Squirts Researcher

    Cute video.

    As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

    Read my blog posting guidelines here.

    Cory Doctorow“Martian Chronicles”: Escape Pod releases a reading of my YA story about rich sociopaths colonizing Mars


    Back in 2011, I wrote a young adult novella called “Martian Chronicles,” which I podcasted as it was in progress; it’s a story about the second wave of wealthy colonists lifting off from climate-wracked, inequality-riven Earth to live in a libertarian utopia on Mars.

    The story (part of a series of stories that use titles of famous stories as jumping off points) was published in Jonathan Strahan’s excellent YA anthology Life on Mars: Tales from the New Frontier.

    Now, it’s getting a second life in podcast form, as the wonderful and venerable Escape Pod has produced a reading by Adam Pracht, whose first installment has just gone live (MP3). It came in via my podcatcher last night and I was so pleased with it. Now I’m on tenterhooks for part II!

    They say you can’t smell anything through a launch-hood, but I still smelled the pove in the next seat as the space-attendants strapped us into our acceleration couches and shone lights in our eyes and triple-checked the medical readouts on our wristlets to make sure our hearts wouldn’t explode when the rocket boosted us into orbit for transfer to the *Eagle* and the long, long trip to Mars.

    He was skinny, but not normal-skinny, the kind of skinny you get from playing a lot of sports and taking the metabolism pills your parents got for you so you wouldn’t get teased at school. He was kind of pot-bellied with scrawny arms and sunken cheeks and he was brown-brown, like the brown Mom used to slather on after a day at the beach covered in factor-500 sunblock. Only he was the kind of all-over-even brown that you only got by being *born* brown.

    He gave me a holy-crap-I’m-going-to-MARS smile and a brave thumbs-up and I couldn’t bring myself to snub him because he looked so damned happy about it. So I gave him the same thumbs up, rotating my wrist in the strap that held it onto the arm-rest so that I didn’t accidentally break my nose with my own hand when we “clawed our way out of the gravity well” (this was a phrase from the briefing seminars that they liked to repeat a lot. It had a lot of macho going for it).

    The pove smelled like garbage. There, I said it. No nice way of saying it. Like the smell out of the trash-chute at the end of our property line. It had been my job to haul our monster-sized tie-and-toss bags to the curb every day and toss them down that chute and into the tunnel-system that took them out to the Spruce Sunset Meadows recycling center, which was actually *outside* the Spruce Sunset Meadows wall, all the way in Springville, where there was a gigantic mega-prison. The prisoners sorted all our trash for us, which was good for the environment, since they sorted it into about 400 different categories for recycling; and good for us because it meant we didn’t have to do all that separating in our kitchen. On the other hand, it did mean that we had to have a double cross-cut shredder for anything like a bill or a legal document so that some crim didn’t use it to steal our identities when he got out of jail. I always wondered how they handled the confetti that came out of the shredder, if they had to pick up each little dot of it with their fingernails and drop it into a big hopper labelled “paper.”

    Escape Pod 700: Martian Chronicles (Part 1 of 2) [Escape Pod]

    CryptogramMore Cryptanalysis of Solitaire

    In 1999, I invented the Solitaire encryption algorithm, designed to manually encrypt data using a deck of cards. It was written into the plot of Neal Stephenson's novel Cryptonomicon, and I even wrote an afterward to the book describing the cipher.

    I don't talk about it much, mostly because I made a dumb mistake that resulted in the algorithm not being reversible. Still, for the short message lengths you're likely to use a manual cipher for, it's still secure and will likely remain secure.

    Here's some new cryptanalysis:

    Abstract: The Solitaire cipher was designed by Bruce Schneier as a plot point in the novel Cryptonomicon by Neal Stephenson. The cipher is intended to fit the archetype of a modern stream cipher whilst being implementable by hand using a standard deck of cards with two jokers. We find a model for repetitions in the keystream in the stream cipher Solitaire that accounts for the large majority of the repetition bias. Other phenomena merit further investigation. We have proposed modifications to the cipher that would reduce the repetition bias, but at the cost of increasing the complexity of the cipher (probably beyond the goal of allowing manual implementation). We have argued that the state update function is unlikely to lead to cycles significantly shorter than those of a random bijection.
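For readers curious about the mechanics being analysed, here is a from-scratch Python sketch of the keystream generator as commonly described (unkeyed deck; card values 1-52, jokers A=53 and B=54). This is an unaudited illustration, not reference code, and of course should never be used for anything but study:

```python
def _move_down(deck, card, n):
    # Move a card down n places; a card that passes the bottom wraps
    # around to just below the top card, never to the very top.
    i = deck.index(card)
    deck.pop(i)
    deck.insert((i + n - 1) % len(deck) + 1, card)

def solitaire_keystream(deck, count):
    """Generate `count` keystream values (each in 1..52) from a deck."""
    deck = list(deck)
    out = []
    while len(out) < count:
        _move_down(deck, 53, 1)                         # joker A down one
        _move_down(deck, 54, 2)                         # joker B down two
        i, j = sorted((deck.index(53), deck.index(54)))
        deck = deck[j + 1:] + deck[i:j + 1] + deck[:i]  # triple cut
        v = min(deck[-1], 53)                           # jokers count as 53
        deck = deck[v:-1] + deck[:v] + deck[-1:]        # count cut
        t = min(deck[0], 53)
        if deck[t] < 53:                                # jokers are skipped
            out.append(deck[t])
    return out
```

The repetition bias studied in the paper arises from this state update: the same deck state always yields the same next output, and the update function is not a random bijection.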

    Sociological ImagesHarder, Better, Faster, Stronger

    And the hits start coming and they don’t stop coming. Research published in Royal Society Open Science (thanks to @MattGrossmann for sharing on Twitter) compared music charts in the US, the UK, Germany, and the Netherlands. The authors found that more albums are climbing these charts faster than they did in the past.

    Schneider, Lukas and Claudius Gros. “Five Decades of US, UK, German and Dutch Music Charts Show That Cultural Processes Are Accelerating.” Royal Society Open Science 6(8):190944.

    Last week we looked at cultural hybridity and the mixing of music genres. Here, the authors point out that these trends indicate cultural acceleration as more hits happen in a shorter time. This creates new pressures on the music production side. From the article:

    In the past, essentially no number one album would start at the top of a chart. Reaching the top was instead a tedious climbing process that would take on the average an entire month, or more. Nowadays, the situation is the opposite. If an album is not the number one the first week of its listing, it has only a marginal chance to climb to the top later on.

    This cultural acceleration is having a big impact on the kinds of hits we end up hearing, because creativity always happens in a particular social context. One of my favorite episodes of the Switched on Pop podcast recently looked at how songwriting is changing in the era of the quick streaming hit, including the rise of the “pop overture.” What’s a pop overture, you ask? Lizzo can tell you.

    Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

    (View original at https://thesocietypages.org/socimages)

    Planet DebianBen Hutchings: Kernel Recipes 2019, part 2

    This conference only has a single track, so I attended almost all the talks. This time I didn't take notes but I've summarised all the talks I attended. This is the second and last part of that; see part 1 if you missed it.

    XDP closer integration with network stack

    Speaker: Jesper Dangaard Brouer

    Details and slides: https://kernel-recipes.org/en/2019/xdp-closer-integration-with-network-stack/

    Video: Youtube

    The speaker introduced XDP and how it can improve network performance.

    The Linux network stack is extremely flexible and configurable, but this comes at some performance cost. The kernel has to generate a lot of metadata about every packet and check many different control hooks while handling it.

    The eXpress Data Path (XDP) was introduced a few years ago to provide a standard API for doing some receive packet handling earlier, in a driver or in hardware (where possible). XDP rules can drop unwanted packets, forward them, pass them directly to user-space, or allow them to continue through the network stack as normal.

    He went on to talk about how recent and proposed future extensions to XDP allow re-using parts of the standard network stack selectively.

    This talk was supposed to be meant for kernel developers in general, but I don't think it would be understandable without some prior knowledge of the Linux network stack.

    Faster IO through io_uring

    Speaker: Jens Axboe

    Details and slides: https://kernel-recipes.org/en/2019/talks/faster-io-through-io_uring/

    Video: Youtube. (This is part way through the talk, but the earlier part is missing audio.)

    The normal APIs for file I/O, such as read() and write(), are blocking, i.e. they make the calling thread sleep until I/O is complete. There is a separate kernel API and library for asynchronous I/O (AIO), but it is very restricted; in particular it only supports direct (uncached) I/O. It also requires two system calls per operation, whereas blocking I/O only requires one.

    Recently the io_uring API was introduced as an entirely new API for asynchronous I/O. It uses ring buffers, similar to hardware DMA rings, to communicate operations and completion status between user-space and the kernel, which is far more efficient. It also removes most of the restrictions of the current AIO API.

    The speaker went into the details of this API and showed performance comparisons.
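The ring-buffer idea can be caricatured in a few lines of Python. This is a toy model only: the names (ToyRing, submit, reap) are invented and bear no relation to the real io_uring or liburing APIs, but it shows how many operations can be queued and completed without one syscall per I/O:

```python
from collections import deque

class ToyRing:
    """Toy model of io_uring's two rings: the application fills a
    submission queue, the 'kernel' drains it and fills a completion
    queue with (user_data, result) pairs."""

    def __init__(self):
        self.sq = deque()   # submission queue entries (SQEs)
        self.cq = deque()   # completion queue entries (CQEs)

    def submit(self, op, user_data):
        # Queue an operation; no work happens yet.
        self.sq.append((op, user_data))

    def kernel_step(self, files):
        # Stand-in for the kernel draining the SQ and performing I/O.
        while self.sq:
            (kind, name), user_data = self.sq.popleft()
            if kind == "read":
                self.cq.append((user_data, files.get(name, b"")))

    def reap(self):
        # The application harvests completions in its own time.
        return self.cq.popleft() if self.cq else None
```

The real interface shares the rings between user-space and the kernel via mmap, so in the best case neither submission nor completion requires a system call at all.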

    The Next Steps toward Software Freedom for Linux

    Speaker: Bradley Kuhn

    Details: https://kernel-recipes.org/en/2019/talks/the-next-steps-toward-software-freedom-for-linux/

    Slides: http://ebb.org/bkuhn/talks/Kernel-Recipes-2019/kernel-recipes.html

    Video: Youtube

    The speaker talked about the importance of the GNU GPL to the development of Linux, in particular the ability of individual developers to get complete source code and to modify it to their local needs.

    He described how, for a large proportion of devices running Linux, the complete source for the kernel is not made available, even though this is required by the GPL. So there is a need for GPL enforcement—demanding full sources from distributors of Linux and other works covered by GPL, and if necessary suing to obtain them. This is one of the activities of his employer, Software Freedom Conservancy, and has been carried out by others, particularly Harald Welte.

    In one notable case, the Linksys WRT54G, the release of source after a lawsuit led to the creation of the OpenWRT project. This is still going many years later and supports a wide range of networking devices. He proposed that the Conservancy's enforcement activity should, in the short term, concentrate on a particular class of device where there would likely be interest in creating a similar project.

    Suricata and XDP

    Speaker: Eric Leblond

    Details and slides: https://kernel-recipes.org/en/2019/talks/suricata-and-xdp/

    Video: Youtube

    The speaker described briefly how an Intrusion Detection System (IDS) interfaces to a network, and why it's important to be able to receive and inspect all relevant packets.

    He then described how the Suricata IDS uses eXpress Data Path (XDP, explained in an earlier talk) to filter and direct packets, improving its ability to handle very high packet rates.

    CVEs are dead, long live the CVE!

    Speaker: Greg Kroah-Hartman

    Details and slides: https://kernel-recipes.org/en/2019/talks/cves-are-dead-long-live-the-cve/

    Video: Youtube

    Common Vulnerabilities and Exposures Identifiers (CVE IDs) are a standard, compact way to refer to specific software and hardware security flaws.

    The speaker explained problems with the way CVE IDs are currently assigned and described, including assignments for bugs that don't impact security, lack of assignment for many bugs that do, incorrect severity scores, and missing information about the changes required to fix the issue. (My work on CIP's kernel CVE tracker addresses some of these problems.)

    The average time between assignment of a CVE ID and a fix being published is apparently negative for the kernel, because most such IDs are being assigned retrospectively.

    He proposed to replace CVE IDs with "change IDs" (i.e. abbreviated git commit hashes) identifying bug fixes.

    Driving the industry toward upstream first

    Speaker: Enric Balletbo i Serra

    Details and slides: https://kernel-recipes.org/en/2019/talks/driving-the-industry-toward-upstream-first/

    Video: Youtube

    The speaker talked about how the Chrome OS developers have tried to reduce the difference between the kernels running on Chromebooks, and the upstream kernel versions they are based on. This has succeeded to the point that it is possible to run a current mainline kernel on at least some Chromebooks (which he demonstrated).

    Formal modeling made easy

    Speaker: Daniel Bristot de Oliveira

    Details and slides: https://kernel-recipes.org/en/2019/talks/formal-modeling-made-easy/

    Video: Youtube

    The speaker explained how formal modelling of (parts of) the kernel could be valuable. A formal model will describe how some part of the kernel works, in a way that can be analysed and proven to have certain properties. It is also necessary to verify that the model actually matches the kernel's implementation.

    He explained the methodology he used for modelling the real-time scheduler provided by the PREEMPT_RT patch set. The model used a number of finite state machines (automata), with conditions on state transitions that could refer to other state machines. He added (I think) tracepoints for all state transitions in the actual code and a kernel module that verified that at each such transition the model's conditions were met.
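The verification approach can be sketched in miniature (a toy of my own, with invented states and events; the real work instruments the scheduler with tracepoints and checks transitions in a kernel module):

```python
# Toy runtime verifier in the spirit of the talk: the model is a set of
# allowed automaton transitions, and every traced event must match one.
# The states and events here are invented for illustration only.
MODEL = {
    ("sleeping", "wakeup"): "runnable",
    ("runnable", "schedule"): "running",
    ("running", "preempt"): "runnable",
    ("running", "block"): "sleeping",
}

def verify(trace, start="sleeping"):
    """Replay a list of events against MODEL; return (ok, state_or_error)."""
    state = start
    for event in trace:
        nxt = MODEL.get((state, event))
        if nxt is None:
            return False, f"illegal transition: {event!r} in state {state!r}"
        state = nxt
    return True, state

ok, result = verify(["wakeup", "schedule", "preempt", "schedule", "block"])
print(ok, result)   # True sleeping
ok, result = verify(["wakeup", "preempt"])
print(ok, result)   # the model flags the violation
```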

    In the process of this he found a number of bugs in the scheduler.

    Kernel documentation: past, present, and future

    Speaker: Jonathan Corbet

    Details and slides: https://kernel-recipes.org/en/2019/kernel-documentation-past-present-and-future/

    Video: Youtube

    The speaker is the maintainer of the Linux kernel's in-tree documentation. He spoke about how the documentation has been reorganised and reformatted in the past few years, and what work is still to be done.

    GNU poke, an extensible editor for structured binary data

    Speaker: Jose E Marchesi

    Details and slides: https://kernel-recipes.org/en/2019/talks/gnu-poke-an-extensible-editor-for-structured-binary-data/

    Video: Youtube

    The speaker introduced and demonstrated his project, the "poke" binary editor, which he thinks is approaching a first release. It has a fairly powerful and expressive language which is used for both interactive commands and scripts. Type definitions are somewhat C-like, but poke adds constraints, offset/size types with units, and types of arbitrary bit width.

    The expected usage seems to be that you write a script ("pickle") that defines the structure of a binary file format, use poke interactively or through another script to map the structures onto a specific file, and then read or edit specific fields in the file.

    CryptogramTracking by Smart TVs

    Long Twitter thread about the tracking embedded in modern digital televisions. The thread references three academic papers.

    Planet DebianNorbert Preining: TensorFlow 2.0 with GPU on Debian/sid

    Some time ago I wrote about how to get TensorFlow (1.x) running on the then-current Debian/sid. It turned out that this isn’t correct anymore and needs an update, so here it is: getting the most up-to-date TensorFlow 2.0 with nVidia GPU support running on Debian/sid.

    Step 1: Install CUDA 10.0

    Follow more or less the instructions here and do

    wget -O- https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | sudo tee /etc/apt/trusted.gpg.d/nvidia-cuda.asc
    echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /" | sudo tee /etc/apt/sources.list.d/nvidia-cuda.list
    sudo apt-get update
    sudo apt-get install cuda-libraries-10-0

    Warning! Don’t install the 10-1 version since the TensorFlow binaries need 10.0.

    This will install lots of libs into /usr/local/cuda-10.0 and add the respective directory to the ld.so path by creating a file /etc/ld.so.conf.d/cuda-10-0.conf.

    Step 2: Install CUDA 10.0 CuDNN

    One difficult-to-satisfy dependency is the CuDNN libraries. In our case we need the version 7 library for CUDA 10.0. Downloading these files requires an NVIDIA developer account, which is quick and painless to create. After that, go to the CuDNN page, select Download cuDNN v7.N.N (xxxx NN, YYYY), for CUDA 10.0, and then cuDNN Runtime Library for Ubuntu18.04 (Deb).

    At the moment (as of today) this will download a file libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb which needs to be installed with dpkg -i libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb.

    Step 3: Install Tensorflow for GPU

    This is the easiest one and can be done as explained on the TensorFlow installation page using

    pip3 install --upgrade tensorflow-gpu

    This will install several other dependencies, too.

    Step 4: Check that everything works

    Last but not least, make sure that TensorFlow can be loaded and finds your GPU. This can be done with the following one-liner, which in my case gives the following output:

    $ python3 -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
    ....(lots of output)
    2019-10-04 17:29:26.020013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3390 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
    tf.Tensor(444.98087, shape=(), dtype=float32)
    $

    I haven’t tried to get R working with the newest TensorFlow/Keras combination, though. Hope the above helps.

    Worse Than FailureError'd: An Error Storm of Monstrous Proportions

    "Move over NOAA, Google News shows us, unfortunately after the fact that The Daily Beast is the TRUEST hurricane prognosticator," Alejandro D. writes.

     

    "Um...So, these are so my car can listen to music, wirelessly, because its mirrors are its...er...ears??" Paul writes.

     

    Jyri B. wrote, "You know, it's really nice to see that the Eurovision people are embracing all the European languages."

     

    "Wow! Maltese looks like a tough language to learn. Glad I don't have to know it. Thank YOU Google Translate!" Peter K. writes.

     

    "At Gamestop, you can pre-order figurines of all your favoirte characters from MSI!" wrote Chris A.

     

    Mikkel H. writes, "I don't want to hear about timezone issues. The only thing possible that happened here was that my FedEx package was teleported from Beijing to Anchorage and back again."

     


    Planet DebianMatthew Garrett: Investigating the security of Lime scooters

    (Note: to be clear, this vulnerability does not exist in the current version of the software on these scooters. Also, this is not the topic of my Kawaiicon talk.)

    I've been looking at the security of the Lime escooters. These caught my attention because:
    (1) There's a whole bunch of them outside my building, and
    (2) I can see them via Bluetooth from my sofa
    which, given that I'm extremely lazy, made them more attractive targets than something that would actually require me to leave my home. I did some digging. Limes run Linux and have a single running app that's responsible for scooter management. They have an internal debug port that exposes USB and which, until this happened, ran adb (as root!) over this USB. As a result, there's a fair amount of information available in various places, which made it easier to start figuring out how they work.

    The obvious attack surface is Bluetooth (Limes have wifi, but only appear to use it to upload lists of nearby wifi networks, presumably for geolocation if they can't get a GPS fix). Each Lime broadcasts its name as Lime-12345678 where 12345678 is 8 digits of hex. They implement Bluetooth Low Energy and expose a custom service with various attributes. One of these attributes (0x35 on at least some of them) sends Bluetooth traffic to the application processor, which then parses it. This is where things get a little more interesting. The app has a core event loop that can take commands from multiple sources and then makes a decision about which component to dispatch them to. Each command is of the following form:

    AT+type,password,time,sequence,data$

    where type is one of either ATH, QRY, CMD or DBG. The password is a TOTP derived from the IMEI of the scooter, the time is simply the current date and time of day, the sequence is a monotonically increasing counter and the data is a blob of JSON. The command is terminated with a $ sign. The code is fairly agnostic about where the command came from, which means that you can send the same commands over Bluetooth as you can over the cellular network that the Limes are connected to. Since locking and unlocking is triggered by one of these commands being sent over the network, it ought to be possible to do the same by pushing a command over Bluetooth.
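A minimal parser for this framing might look like the following sketch. The field semantics come from the description above; the function name, error handling, and return shape are my own choices:

```python
import json

VALID_TYPES = {"ATH", "QRY", "CMD", "DBG"}

def parse_command(raw: str) -> dict:
    """Parse the 'AT+type,password,time,sequence,data$' framing.
    The data field is a JSON blob, so it is split off last (it may
    itself contain commas)."""
    if not raw.startswith("AT+") or not raw.endswith("$"):
        raise ValueError("bad framing")
    body = raw[3:-1]
    ctype, password, time_, sequence, data = body.split(",", 4)
    if ctype not in VALID_TYPES:
        raise ValueError(f"unknown command type {ctype!r}")
    return {
        "type": ctype,
        "password": password,
        "time": time_,
        "sequence": int(sequence),
        "data": json.loads(data),
    }

cmd = parse_command('AT+CMD,123456,2019-10-11 12:00:00,42,{"action":"unlock"}$')
print(cmd["type"], cmd["data"]["action"])  # CMD unlock
```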

    Unfortunately for nefarious individuals, all commands sent over Bluetooth are ignored until an authentication step is performed. The code I looked at had two ways of performing authentication - you could send an authentication token that was derived from the scooter's IMEI and the current time and some other stuff, or you could send a token that was just an HMAC of the IMEI and a static secret. Doing the latter was more appealing, both because it's simpler and because doing so flipped the scooter into manufacturing mode at which point all other command validation was also disabled (bye bye having to generate a TOTP). But how do we get the IMEI? There's actually two approaches:

    1) Read it off the sticker that's on the side of the scooter (obvious, uninteresting)
    2) Take advantage of how the scooter's Bluetooth name is generated

    Remember the 8 digits of hex I mentioned earlier? They're generated by taking the IMEI, encrypting it using DES and a static key (0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88), discarding the first 4 bytes of the output and turning the last 4 bytes into 8 digits of hex. Since we're discarding information, there's no way to immediately reverse the process - but IMEIs for a given manufacturer are all allocated from the same range, so we can just take the entire possible IMEI space for the modem chipset Lime use, encrypt all of them and end up with a mapping of name to IMEI (it turns out this doesn't guarantee that the mapping is unique - for around 0.01%, the same name maps to two different IMEIs). So we now have enough information to generate an authentication token that we can send over Bluetooth, which disables all further authentication and enables us to send further commands to disconnect the scooter from the network (so we can't be tracked) and then unlock and enable the scooter.

    (Note: these are actual crimes)
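The name derivation described above can be sketched as follows. Note that des_encrypt here is a placeholder (real DES under the static key needs a crypto library); only the discard-four-bytes/hex-encode step and the brute-force name-to-IMEI mapping idea are illustrated:

```python
import hashlib

def des_encrypt(imei: str) -> bytes:
    # Placeholder standing in for DES-ECB under the static key
    # 0x11 22 33 44 55 66 77 88 described above; NOT the real cipher.
    return hashlib.sha256(imei.encode()).digest()[:8]

def ble_name(imei: str) -> str:
    ct = des_encrypt(imei)
    # Discard the first 4 bytes, turn the last 4 into 8 hex digits.
    return "Lime-" + ct[4:].hex().upper()

def build_mapping(imei_range):
    """Precompute name -> candidate IMEIs for an IMEI space. Collisions
    are possible, as the post notes (~0.01% of names map to two IMEIs)."""
    mapping = {}
    for imei in imei_range:
        mapping.setdefault(ble_name(imei), []).append(imei)
    return mapping

names = build_mapping(str(i) for i in range(860000000000000, 860000000001000))
print(len(names))  # one name per candidate IMEI here; real DES can collide
```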

    This all seemed very exciting, but then a shock twist occurred - earlier this year, Lime updated their authentication method and now there's actual asymmetric cryptography involved and you'd need to engage in rather more actual crimes to obtain the key material necessary to authenticate over Bluetooth, and all of this research becomes much less interesting other than as an example of how other companies probably shouldn't do it.

    In any case, congratulations to Lime on actually implementing security!


    Dave HallAnnouncing the DrupalSouth Diversity Scholarship

    Over the years I have benefited greatly from the generosity of the Drupal Community. In 2011 people sponsored me to write lines of code to get me to DrupalCon Chicago.

    Today Dave Hall Consulting is a very successful small business. We have contributed code, time and content to Drupal. It is time for us to give back in more concrete terms.

    We want to help someone from an under represented group take their career to the next level. This year we will provide a Diversity Scholarship for one person to attend DrupalSouth, our 2 day Gettin’ Git training course and 5 nights at the conference hotel. This will allow this person to attend the premier Drupal event in the region while also learning everything there is to know about git.

    To apply for the scholarship, fill out the form by 23:59 AEST 19 October 2019 to be considered. (Extended from 12 October)

    Sky CroeserTeaching with, and about, the Internet

    In a recent talk at the AoIR 2019 conference, I suggested that it would be helpful to have some kind of collaborative guidelines, similar to the AoIR ethics guidelines, around teaching in Internet Studies and related fields. (For more on my reasoning, see the bite-sized Twitter version of the talk.)

    In the period after giving the talk, I realised that…maybe I should try to take on some of the labour involved in sparking (or at least checking if there’s broader interest in) the kinds of guidelines I was hoping for. (With a little prompt from Jeremy Hunsinger, thanks!) In some useful-but-hurried conversations over morning tea, I realised that it might be helpful to suggest some general parameters for what the guidelines could focus on.

    As someone pointed out, AoIR is the Internet Research Association, not the Internet teaching association. So why have guidelines about teaching at all?

    Not all teachers do research, and not all researchers teach, but teaching and research cross-fertilise and depend on each other in important ways in academia today. Drawing on research in Internet Studies, including around data privacy and platform capitalism, might help us to better understand and articulate concerns about how we teach about, and with, the Internet.

    There’s plenty of work available about how we teach with the Internet. There’s also some work about how we teach about the Internet (although, I think, a bit less). There are some other areas, like the Platform Pedagogies group, that I need to dig into more deeply. There seems to be room to bring together some of this work with other research being done around the impact of the Internet to provide guidelines or resources that could help us to understand how digital technologies, including learning management systems, extension management systems, Turnitin, and other platforms used in teaching, work. How do they use data? How do they make money? How do they structure and monitor our teaching, and students’ learning? And with a better understanding of these technologies, how might we draw on shared resources to resist or reshape universities’ use of them?

    Potentially, Internet Studies teaching guidelines might also do other things, like outline research on assessment or suggest ways of developing more inclusive curriculum. Such research and praxis already exists, but not necessarily with the authority that AoIR support might provide.

     

    ,

    Planet DebianJoey Hess: Project 62 Valencia Floor Lamp review

    From Target, this brass finish floor lamp evokes 60's modernism, updated for the mid-Anthropocene with a touch plate switch.

    The integrated microcontroller consumes a mere 2.2 watts while the lamp is turned off, in order to allow you to turn the lamp on with a stylish flick. With a 5 watt LED bulb (sold separately), the lamp will total a mere 7.2 watts while on, making it extremely energy efficient. While off, the lamp consumes a mere 19 kilowatt-hours per year.
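For the curious, the annual standby figure follows directly from the 2.2-watt idle draw:

```python
# 2.2 W standby draw, running 24x365: roughly the 19 kWh/year quoted above.
standby_watts = 2.2
hours_per_year = 24 * 365
kwh_per_year = standby_watts * hours_per_year / 1000
print(round(kwh_per_year, 1))  # 19.3
```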

    clamp multimeter reading 0.02 amps AC, connected to a small circuit board with a yellow capacitor, a coil, and a heat-sinked IC visible

    lamp from rear; a small round rocker switch has been added to the top of its half-globe shade

    Though the lamp shade at first appears perhaps flimsy, while you are drilling a hole in it to add a physical switch you will discover metal, though not brass, all the way through. Indeed, this lamp should last for generations, should the planet continue to support human life for that long.

    As an additional bonus, the small plastic project box that comes free in this lamp will delight any electrical enthusiast. As will the approximately 1-hour conversion process to delete the touch-switch phantom load. The two cubic feet of styrofoam packaging are less delightful.

    Two allen screws attach the pole to the base; one was missing in my lamp. Also, while the base is very heavily weighted, the lamp still rocks a bit when using the aftermarket switch. So I am forced to give it a mere 4 out of 5 stars.

    front view of lit lamp beside a bookcase

    Planet DebianMolly de Blanc: Free software activities (September 2019)

    September marked the end of summer and the end of my summer travel.  Paid and non-paid activities focused on catching up with things I fell behind on while traveling. Towards the middle of September, the world of FOSS blew up, and then blew up again, and then blew up again.

    A photo of a river with the New York skyline in the background.

    Free software activities: Personal

    • I caught up on some Debian Community Team emails I’ve been behind on. The CT is in search of new team members. If you think you might be interested in joining, please contact us.
    • After much deliberation, the OSI decided to appoint two directors to the board. We will decide who they will be in October, and are welcoming nominations.
    • On that note, the OSI had a board meeting.
    • Wrote a blog post on rights and freedoms to create a shared vocabulary for future writing concerning user rights. I also wrote a bit about leadership in free software.
    • I gave out a few pep talks. If you need a pep talk, hmu.

    Free software activities: Professional

    • Wrote and published the September Friends of GNOME Update.
    • Interviewed Sammy Fung for the GNOME Engagement Blog.
    • Did a lot of behind the scenes work for GNOME, that you will hopefully see more of soon!
    • I spent a lot of time fighting with CiviCRM.
    • I attended GitLab Commit on behalf of GNOME, to discuss how we implement GitLab.

     

    Planet DebianThorsten Alteholz: My Debian Activities in September 2019

    FTP master

    This month I accepted 246 packages and rejected 28. The overall number of packages that got accepted was 303.

    Debian LTS

    This was my sixty-third month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

    This month my overall workload was 23.75h. During that time I did LTS uploads of:

    • [DLA 1911-1] exim4 security update for one CVE
    • [DLA 1936-1] cups security update for one CVE
    • [DLA 1935-1] e2fsprogs security update for one CVE
    • [DLA 1934-1] cimg security update for 8 CVEs
    • [DLA 1939-1] poppler security update for 3 CVEs

    I also started to work on opendmarc and spip but did not finish testing yet.
    Last but not least I did some days of frontdesk duties.

    Debian ELTS

    This month was the sixteenth ELTS month.

    During my allocated time I uploaded:

    • ELA-160-1 of exim4
    • ELA-166-1 of libpng
    • ELA-167-1 of cups
    • ELA-169-1 of openldap
    • ELA-170-1 of e2fsprogs

    I also did some days of frontdesk duties.

    Other stuff

    This month I uploaded new packages of …

    I also uploaded new upstream versions of …

    I improved packaging of …

    On my Go challenge I uploaded golang-github-rivo-uniseg, golang-github-bruth-assert, golang-github-xlab-handysort, golang-github-paypal-gatt.

    I also sponsored the following packages: golang-gopkg-libgit2-git2go.v28.

    CryptogramMeasuring the Security of IoT Devices

    In August, CyberITL completed a large-scale survey of software security practices in the IoT environment, by looking at the compiled software.

    Data Collected:

    • 22 Vendors
    • 1,294 Products
    • 4,956 Firmware versions
    • 3,333,411 Binaries analyzed
    • Date range of data: 2003-03-24 to 2019-01-24 (varies by vendor, most up to 2018 releases)

    [...]

    This dataset contains products such as home routers, enterprise equipment, smart cameras, security devices, and more. It represents a wide range of devices found in home, enterprise, or government deployments.

    Vendors are Asus, Belkin, DLink, Linksys, Moxa, Tenda, Trendnet, and Ubiquiti.

    CyberITL's methodology is not source code analysis. They look at the actual firmware. And they don't look for vulnerabilities; they look for secure coding practices that indicate that the company is taking security seriously, and whose lack pretty much guarantees that there will be vulnerabilities. These include address space layout randomization and stack guards.
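In that spirit, a trivial indicator check might look like the following sketch. It inspects symbol-dump text (e.g. captured from readelf -sW) for the stack-protector symbol; obtaining the dump is left to the caller, and this is of course far shallower than CITL's actual analysis:

```python
# One cheap hardening indicator: a binary built with stack protection
# will reference __stack_chk_fail. This function operates on symbol-dump
# text rather than invoking the tools itself, so it stays self-contained.
def has_stack_protector(symbol_dump: str) -> bool:
    return "__stack_chk_fail" in symbol_dump

hardened = "12: 0000000000000000  FUNC  GLOBAL UND __stack_chk_fail@GLIBC_2.4"
unhardened = "12: 0000000000001135  FUNC  GLOBAL     main"
print(has_stack_protector(hardened), has_stack_protector(unhardened))  # True False
```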

    A summary of their results.

    CITL identified a number of important takeaways from this study:

    • On average, updates were more likely to remove hardening features than add them.
    • Within our 15 year data set, there have been no positive trends from any one vendor.
    • MIPS is both the most common CPU architecture and least hardened on average.
    • There are a large number of duplicate binaries across multiple vendors, indicating a common build system or toolchain.

    Their website contains the raw data.

    LongNowThe History of China’s Cold Chain

    Introducing wide-scale refrigeration to a nation’s food system brings about massive changes. Podcaster and journalist Nicola Twilley was able to witness those changes in real-time during a visit to China, where the amount of refrigerated space has grown more than 20x in the past ten years.

    This highlight comes from Twilley’s 02018 talk at The Interval at Long Now, “Exploring the Artificial Cryosphere.” You can watch the full talk here.

    Planet DebianBirger Schacht: Installing and running Signal on Tails

    Because the topic comes up every now and then, I thought I’d write down how to install and run Signal on Tails. These instructions are based on the 2nd Beta of Tails 4.0 - the 4.0 release is scheduled for October 22nd. I’m not sure if these steps also work on Tails 3.x, I seem to remember having some problems with installing flatpaks on Debian Stretch.

    The first thing to do is to enable the Additional Software feature of Tails persistence (the Personal Data feature is also required, but that one is enabled by default when configuring persistence). Don’t forget to reboot afterwards. When logging in after the reboot, please set an Administration Password.

    The approach I use to run Signal on Tails is using flatpak, so install flatpak either via Synaptic or via commandline:

    sudo apt install flatpak

    Tails then asks if you want to add flatpak to your additional software, and I recommend doing so. The list of additional software can be checked via Applications → System Tools → Additional Software. The next thing you need to do is set up the directories: flatpak installs software packages either system-wide in $prefix/var/lib/flatpak/[1] or per user in $HOME/.local/share/flatpak/ (the latter lets you manage your flatpaks without elevated permissions). User-specific app data goes into $HOME/.var/app. This means we have to create directories in our Persistent folder for those two locations and then link them to their targets in /home/amnesia.

    I recommend putting these commands into a script (i.e. /home/amnesia/Persistent/flatpak-setup.sh) and making it executable (chmod +x /home/amnesia/Persistent/flatpak-setup.sh):

    #!/bin/sh
    
    mkdir -p /home/amnesia/Persistent/flatpak
    mkdir -p /home/amnesia/.local/share
    ln -s /home/amnesia/Persistent/flatpak /home/amnesia/.local/share/flatpak
    mkdir -p /home/amnesia/Persistent/app
    mkdir -p /home/amnesia/.var
    ln -s /home/amnesia/Persistent/app /home/amnesia/.var/app

    Now you need to add a flatpak remote and install signal:

    amnesia@amnesia:~$ torify flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    amnesia@amnesia:~$ torify flatpak install flathub org.signal.Signal

    This will take a couple of minutes.

    To show Signal the way to the next whiskey bar through Tor, the HTTP_PROXY and HTTPS_PROXY environment variables have to be set. Again, I recommend putting this into a script (i.e. /home/amnesia/Persistent/signal.sh):

    #!/bin/sh
    
    export HTTP_PROXY=socks://127.0.0.1:9050
    export HTTPS_PROXY=socks://127.0.0.1:9050
    flatpak run org.signal.Signal

    Screenshot of Signal on Tails 4

    Yay, it works!

    To update Signal you have to run:

    amnesia@amnesia:~$ torify flatpak update

    To make the whole thing a bit more comfortable, the folder softlinks can be created automatically on login using a Gnome autostart script. For that to work you have to have the Dotfiles feature of Tails enabled. Then you can create a /live/persistence/TailsData_unlocked/dotfiles/.config/autostart/FlatpakSetup.desktop file:

    [Desktop Entry]
    Name=FlatpakSetup
    GenericName=Setup Flatpak on Tails
    Comment=This script runs the flatpak-setup.sh script on start of the user session
    Exec=/live/persistence/TailsData_unlocked/Persistent/flatpak-setup.sh
    Terminal=false
    Type=Application

    By adding a /live/persistence/TailsData_unlocked/dotfiles/.local/share/applications/Signal.desktop file to the dotfiles folder, Signal also shows up among the Gnome applications with a nice Signal icon:

    [Desktop Entry]
    Name=Signal
    GenericName=Signal Desktop Messenger
    Exec=/home/amnesia/Persistent/signal.sh
    Terminal=false
    Type=Application
    Icon=/home/amnesia/.local/share/flatpak/app/org.signal.Signal/current/active/files/share/icons/hicolor/128x128/apps/org.signal.Signal.png

    Screenshot of Signal Application Icon Tails 4


    1. It is also possible to configure additional system wide installation locations, details are documented in flatpak-installation(5) [return]

    Planet DebianMike Gabriel: Debian Edu FAI

    Over the past month I worked on re-scripting the installation process of a Debian Edu system (minimal installation profile and workstation installation profile for now) by utilizing FAI [1].

    My goal on this is to get the Debian Edu FAI config space into Debian bullseye (as package: debian-edu-fai) and provide an easy setup method for the FAI installation server on an existing Debian Edu site.

    Note: I do not intend to bootstrap a complete Debian Edu site via FAI. The use case is: get your Debian Edu main server up and running, add host faiserver.intern and install all your site's client systems via this FAI installation server.

    Debian Edu Installation Methods (until today)

    Currently, we only have a D-I based installation method (over PXE or ISO image) at hand with several disadvantages:

    • requires interaction
    • not really customizable
    • comparatively slow (now that I have seen FAI do these things)

    All of the above problems can be solved by installing Debian Edu via a FAI configuration.

    Debian Edu Installation via FAI (This rocks so much!!!)

    As you may guess (and I need to repeat the above, because I am so excited about it), here are the advantages of installing Debian Edu via FAI:

    • Debian Edu installation via FAI is incredibly fast
    • Customization: drop in some more files into the FAI config space and you have a customized setup. [2]
    • FAI supports zero-click installs, so no more interaction is required except from booting via PXE
    • FAI supports stuffing the FAI installation bootstrap system into a bootable ISO image

    Get it!

    The whole setup process of a FAI server on a Debian Edu network still requires some documentation and testing, but the config space for FAI, I have already provided on Debian's GitLab server:

         https://salsa.debian.org/debian-edu/debian-edu-fai/

    Have fun with this and provide feedback, if you try this out. Thanks!

    light+love
    Mike

    References and Footnotes

    • [1] https://fai-project.org
    • [2] For our local "IT-Zukunft Schule" project I added several FAI config extensions without having to touch the Debian Edu FAI configuration files.

    Worse Than FailureThe Windows Update

    Every change breaks someone's workflow.

    A few years ago, Ian started at one of the many investment banks based out of London. This particular bank was quite proud of how they integrated “the latest technology” into all their processes, “favoring the bleeding edge,” and “are always focusing on Agile methods, and cross-functional collaboration.”

    That last bit is why every software developer was on a tech support rotation. Every two weeks, they’d have to spend a day sitting with the end users, watching them work. Ostensibly, by seeing how the software was actually used, the developers would have a better sense of the users’ needs. In practice, they mostly showed people how to delete emails or recover files from the recycling bin.

    Unfortunately, these end users also directly or indirectly controlled the bank's budgeting process, so keeping them happy was a big part of ensuring continued employment. Not just service, but service with a smile, or else.

    Ian’s problem customer was Jacob. Jacob had been with the bank at least thirty years, and still longed for the days of lunchtime brandy and casual sexual harassment. He did not like computers. He did not like the people who serviced his computer. He did not like it when a web page displayed incorrectly, and he especially did not like it when you explained that you couldn’t edit the web page you didn’t own, and couldn’t tell Microsoft to change Internet Explorer to work with that particular website.

    “I understand you smart technical kids are just a cost of doing business,” Jacob would often say, “but your budget is out of control. Something must be done!”

    Various IT projects proceeded apace. Jacob continued to try and cut their budget. And then the Windows 7 rollout happened.

    This was a massive effort. They had been on Windows XP. A variety of intranet and proprietary applications didn’t work on Windows 7, and needed to be upgraded. Even with those upgrades, everyone knew that there would be more problems. These big changes never came without unexpected side effects.

    The day Jacob got Windows 7 imaged onto his computer also happened to be the day Ian was on helldesk duty. Ian got a frantic email:

    My screen is broken! Everything is wrong! COME TO MY DESK RIGHT NOW, YOUNG MAN

    Ian had already prepared, and went right ahead and changed Jacob’s desktop settings so that they as closely mimicked Windows XP as possible.

    “That’s all fine and good,” Jacob said, “but it’s still broken.”

    Ian looked at the computer. Nothing was broken. “What… what exactly is the problem?”

    “Internet Explorer is broken!”

    Ian double clicked the IE icon. The browser launched just fine, and pulled up the company home page.

    “No! Close that window, and look at the desktop!”

    Ian did so, waiting for Jacob to explain the problem. Jacob waited for Ian to see the problem. They both sat there, waiting, no one willing to move until the other had gone.

    Jacob broke first. “The icon is wrong!”

    Ah, yes, the big-blue-E of Windows XP had been replaced by the big-blue-E of Windows 7.

    “This is unacceptable!” Jacob said.

    Ian had already been here for most of the morning, so a few more minutes made no difference. He fired up image search, grabbed the first image, which was an XP-era IE icon, and then set that as the icon on the desktop.

    Jacob squinted. “Nope. No, I don't like that. It’s too smooth.”

    Of course. Ian had grabbed the first image, which was much higher resolution than the original icon file. “I… see. Give me a minute.”

    Ian went back to his desk, resized the image, threw it on a network share, went back to Jacob’s desk, and changed the icon.

    “There we are,” Jacob said. “At least someone on your team knows how to support their users. It’s not just about making changes willy-nilly, you know. Good work!”

    That was the first and only honest compliment Jacob ever gave Ian. Two years later, Ian moved on to a new job, leaving Jacob with his old IE icon, sitting at the same desk he’d been since before the Internet was even a “thing”.


    ,

    Planet DebianGunnar Wolf: Presenting a webinar: Privacy and anonymity: Requisites for individuals' security online

    I was invited by the Mexican Chapter of the Internet Society (ISOC MX) to present a webinar session addressing the topics that motivated the project I have been involved for the past two years — And presenting some results, what we are doing, where we are heading.

    ISOC's webinars are usually held via the Zoom platform. However, I felt it directly adversarial to what we are doing; we don't need to register with a videoconference provider if we can use Jitsi! So, the webinar will be held at https://meet.jit.si/WebinarISOC. Of course, I am aware that if we reach a given threshold, Jitsi will stop giving a quality service — So I will also mirror it to a "YouTube live" thingy. I am not sure if this will be the right URL, but I think it will be here.

    Of course, I will later download the video and publish it in a site that tracks users less than YouTube :-]

    So, if you are interested — See you there on 2019.10.16, 19:00 (GMT-5).


    CryptogramNew Research into Russian Malware

    There's some interesting new research about Russian APT malware:

    The Russian government has fostered competition among the three agencies, which operate independently from one another, and compete for funds. This, in turn, has resulted in each group developing and hoarding its tools, rather than sharing toolkits with their counterparts, a common sight among Chinese and North Korean state-sponsored hackers.

    "Every actor or organization under the Russian APT umbrella has its own dedicated malware development teams, working for years in parallel on similar malware toolkits and frameworks," researchers said.

    "While each actor does reuse its code in different operations and between different malware families, there is no single tool, library or framework that is shared between different actors."

    Researchers say these findings suggest that Russia's cyber-espionage apparatus is investing a lot of effort into its operational security.

    "By avoiding different organizations re-using the same tools on a wide range of targets, they overcome the risk that one compromised operation will expose other active operations," researchers said.

    This is no different from the US. The NSA malware released by the Shadow Brokers looked nothing like the CIA "Vault 7" malware released by WikiLeaks.

    The work was done by Check Point and Intezer Labs. They have a website with an interactive map.

    Planet DebianMike Gabriel: My Work on Debian LTS/ELTS (September 2019)

    In September 2019, I have worked on the Debian LTS project for 11 hours (of 12 hours planned) and on the Debian ELTS project for another 2 hours (of 12 hours planned) as a paid contributor. I have given back the 10 ELTS hours, but will keep the 1 LTS hour and move it over to October. As I will be gone on family vacation during two weeks of October, I have reduced my workload for the coming months accordingly (10 hours LTS, 5 hours ELTS).

    LTS Work

    • Patch review on qemu (regarding DLA-1927-1)
    • Perform regression tests on previous LTS uploads of 389-ds-base (see [1,2] for results/statements)
    • Upload netty 3.2.6.Final-2+deb8u1 to jessie-security (DLA-1941-1 [3]), fixing 1 CVE
    • Triage nghttp2, probably not affected by CVE-2019-9511 and CVE-2019-9513. The code base is really different around the passages where the fixing patches have been applied by upstream. I left a comment in dla-needed.txt plus asked for a second opinion. [4]
    • Go over all 2019 LTS announcements in the webwml.git repository and ping LTS team members (including myself) on missing webwml DLAs.
    • Upload phpbb3 3.0.12-5+deb8u4 to jessie-security (DLA-1942-1 [5]), fixing 1 (or 2) CVE(s). Regarding the phpbb3 upload, Sylvain Beucler and I are currently discussing [6] whether CVE-2019-13376 got actually fixed with this upload or not. There will be some sort of follow-up announcement on this matter soon.

    ELTS Work

    • Upload netty 3.2.6.Final-2+deb7u1 to wheezy-lts (ELA-168-1 [7]), fixing 1 CVE

    References

    Sam VargheseHigh time for Michael Cheika to stop whinging about referees

    When Australia loses a rugby match, it is generally put down to some external factor like refereeing. This is the response of both the so-called experts and the coach, Michael Cheika, whose middle name should be “whinging”.

    Thus when Wales beat Australia in a pool game in the Rugby World Cup last week, a match that is very likely to decide the winner of that pool and condemn Australia to meet England in the quarter-finals, the reaction was no different.

    Cheika is helped in his whinging by the former players who act as “experts” on telecasts of the game. The coach complained about a penalty awarded against centre Samu Kerevi by French referee Romain Poite when the Australian centre’s forearm slid up to touch the throat of Welsh standoff Dan Biggar.

    Complaining that he no longer knew what the rules were, Cheika kept quiet about the fact that all coaches had been advised before the start of the tournament that any hits to the head region, intentional or accidental, would be strongly penalised.

    The bodies that govern all contact sports are wary of lawsuits from concussion-related injuries after the NFL had to pay out billions to players. World rugby does not have that kind of money to dish out; hence, the caution is understandable.

    Perhaps Cheika was frustrated by his selections for the match; he brought in Bernard Foley and Nic White as standoff and scrum-half respectively, after a lack-lustre performance by Christian Lealiifano and Will Genia in those roles in the first pool game. Both Foley and Genia played poorly.

    Cheika should have, instead, played Matt Toomua in the role of playmaker after he had put on a strong showing against Fiji when he came on in the second half. Toomua added some much-needed sharpness to the attack against Wales too, but by then it was too late.

    Some commentators also mumbled about the intercept try that Welsh scrum-half Gareth Davies scored, but this complaint had no basis. A look at the match video shows Davies clearly onside at the moment when Genia flung the pass. In fact, Genia took so many steps before passing that only a blind man would have been unable to judge his intentions.

    In the first game, Cheika whinged about the penalty handed out to winger Reece Hodge, for a tackle on Fijian flanker Peceli Yato. It left the big islander concussed and he was unable to play any further part in the game. Until that point, Yato had been Fiji’s best player by a mile.

    Hodge escaped any penalty during the game but was cited and then banned for three games by the disciplinary panel.

    Australia fails to recognise that by constantly complaining about referees, they are annoying the officials no end. Cheika could learn a lesson or two from New Zealand coach Steve Hansen.

    In 2017, when the British and Irish Lions toured New Zealand, French referee Jérôme Garcès sent off centre Sonny Bill Williams during the second Test. The Kiwis had won the first Test, and as a result of playing with 14 men, New Zealand were beaten in the second game.

    In the third game, a tight one, where Poite was the referee, the scores were level 15-all with a few minutes left to play when a high ball bounced off a Lions player directly into the hands of replacement hooker Ken Owens.

    Owens caught the ball and then, realising he was offside, quickly threw it away.

    Poite awarded New Zealand a penalty in a position from which fly-half Beauden Barrett could have easily converted it. But then Garcès stepped up to him, and the two men conversed in French for a while.

    After that Poite said, “we will make a deal”, claimed it was an “accidental offside” and awarded New Zealand a scrum. This resulted in the game ending in a draw, which meant the series was drawn 1-1.

    Hansen did not complain a great deal about this, only saying: “Romain’s instinct was it was a penalty. If he had gone with his instincts he would have made the right decision. But he got caught up in over-thinking it. I bet he is not feeling good about that.

    “He is a good man, Romain. He hasn’t done it deliberately. You just have to accept it, as much as it can be frustrating and annoying. It is part of sport.”

    Cheika would do well to learn from that example.

    Worse Than FailureCodeSOD: An Updated Version

    Some folks were perplexed by the fact that Microsoft skipped Windows 9 and went straight to Windows 10. The urban legend is that so many old applications checked which version of Windows was running by doing something like version.startsWith("Windows 9") to see if they were on 95 or 98, that Microsoft risked breaking otherwise working code if they released Windows 9.
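That legendary check is easy to sketch; here is a hypothetical illustration in TypeScript (the function and version strings are illustrative, not taken from any real application):

```typescript
// Hypothetical sketch of the legendary version check: a prefix match
// meant to catch Windows 95/98 would also match a "Windows 9" release.
function isWin9x(osName: string): boolean {
    return osName.startsWith("Windows 9");
}

console.log(isWin9x("Windows 95")); // true, as intended
console.log(isWin9x("Windows 98")); // true, as intended
console.log(isWin9x("Windows 9"));  // also true: the false positive
```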

    But gone are those days of doing string munging to check which version of an OS we’re running on. We’ve got much better ways to check what features and functionality are available without having to parse strings out, right?

    John D found some TypeScript code in an Ionic app that needs to adapt to different versions of iOS:

    private iOS13Device(): boolean {
        // fix for ios 13 pan end issue
        if (
            this.isIOS13Device === undefined &&
            this.deviceService.isiOS &&
            this.deviceInfoService.deviceInfo &&
            this.deviceInfoService.deviceInfo.osVersion &&
            this.deviceInfoService.deviceInfo.osVersion.indexOf('_') &&
            this.deviceInfoService.deviceInfo.osVersion.split('_') &&
            this.deviceInfoService.deviceInfo.osVersion.split('_')[0] &&
            this.deviceInfoService.deviceInfo.osVersion.split('_')[0] === '11'
        ) {
            this.isIOS13Device = true;
            return this.isIOS13Device;
        } else {
            this.isIOS13Device = false;
            return this.isIOS13Device;
        }
    }

    Well, at least they’re caching the result.

    Also, I’m no expert on iOS device strings, but this seems to imply that an iOS13Device (an OS which just came out recently) reports its OS version number as a string starting with 11. Maybe that’s correct, but in either case, that seems like a bonus WTF.
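For contrast, a minimal sketch of how such a check could be written sanely: parse the major version once and compare it as a number. The DeviceInfo shape and the '13_1_2' version format are assumptions inferred from the code above, not the app's actual fix:

```typescript
// Hypothetical cleanup: extract the major iOS version once instead of
// re-splitting the same string in every condition. Assumes osVersion
// strings like "13_1_2", as the original code implies.
interface DeviceInfo {
    osVersion?: string;
}

function majorIOSVersion(info?: DeviceInfo): number | undefined {
    if (!info || !info.osVersion) {
        return undefined;
    }
    const major = parseInt(info.osVersion.split('_')[0], 10);
    return Number.isNaN(major) ? undefined : major;
}

console.log(majorIOSVersion({ osVersion: '13_1_2' })); // 13
console.log(majorIOSVersion({}));                      // undefined
```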



    TEDThe terrifying now of big data and surveillance: A conversation with Jennifer Granick

    Jennifer Granick speaking at TEDxStanford.

    Concerns are growing around privacy and government surveillance in today’s hyper-connected world. Technology is smarter and faster than ever — and so are government strategies for listening in. As a lawyer for the ACLU, Jennifer Granick (TED Talk: How the US government spies on people who protest — including you) works to demystify the murky legal landscape of privacy civil rights, protecting our freedom of privacy against government and private interests. We spoke with her about the battle against government surveillance, how you can keep your data safe and why legal transparency — and legal action — is vital. 

    In your talk at TEDxStanford, you detail some of the history and methods of government surveillance in the United States. Can you elaborate on how these methods have evolved as technology has advanced?

    As Supreme Court Justice John Roberts put it, it’s the difference between “a ride on horseback [and] a flight to the moon.” The amount of information that’s available about us is exponentially more; the ease of accessing it and analyzing it, because of big data tools, storage and machine searching, is categorically different. At the same time, the laws that are intended to protect our privacy have been downgraded repeatedly, most recently in the name of the War on Terror. Everything is bigger; there’s just so much more out there.

    In your talk, you mentioned that Section 702 of the FISA amendments (which allows US government agencies to surveil “foreign terrorist threats”) expired in 2017. What kind of impact will that have on the landscape of surveillance?

    There was a long political battle about 702 and trying to amend it. What ended up happening is that Congress just reauthorized it, and passed it as part of a larger bill with no real reform. The movement to try to do something about it utterly failed. What it means is that right now, with more confidence than ever before, the intelligence community and [its] agencies can gather information in the name of targeting foreigners and store all of that information. So, they can search through conversations we’re having with people overseas. The news that’s happened since then shows that there are still mistakes and problems with the way these intelligence agencies are handling the information, and that they’re regularly breaking the rules. There was a recent story about the FBI violating the 702 rules. There’s no accountability to comply with the law; weak as it is, it’s basically not a concern.

    What role do tech companies like Amazon and Facebook play in perpetuating these surveillance efforts?

    Companies don’t want to comply with a whole bunch of legal processes, but when they do, they want it to be clear what they’re supposed to do, and they don’t want any liability for it. The companies have had some comments about wanting to restrain government surveillance to legitimate purposes to reassure their non-American users, and they’ve pushed for some sort of clarity and regularity in how surveillance is going to happen. They came out in favor of a more controlled exercise of 702, but no real reform. They also supported the Cloud Act which is a recent law that basically enables foreign governments to access information stored here in the US without meeting the higher standard of US legal process. They’re not consistently civil libertarians or privacy advocates.

    —————

    If you care about any political issue — whether it’s tax reform or Black Lives Matter — we need to ensure these people can operate freely in the political world.

    —————

    Facial recognition technology like Amazon’s “Rekognition” is being used by law enforcement across the country. What are the concerns and possible consequences around the use of this technology? 

    Face identification connected to surveillance cameras is particularly dystopian, but the ACLU of Northern California’s test of Rekognition shows that even the more pedestrian uses of the technology are dangerous. In tests, the software incorrectly identified 28 members of Congress as people who have been arrested for a crime and disproportionately flagged members of the Congressional Black Caucus. The problem is both that the tool is inaccurate and discriminatory, and also that it gives unprecedented power to police.

    In an always-connected world with smart tech in our homes, cars and  pockets, how can we prepare for and avoid intrusive surveillance? 

    Number one: use encryption. Encrypting your data is getting easier and easier, and there are communications services out there that protect your communications. iMessage is one for iPhone users. There’s WhatsApp, too. I use Signal, which is a text messaging program. Encrypting your data is easier and easier. For many of us, one of the biggest challenges isn’t necessarily the government — it’s hackers, too, so always turn on multi-factor authentication. This is so that it’s not like somebody can bust into your account with a password; they will also need to have some other kind of hardware token. That’s a good thing to do, and it’s actually very little additional work.

    —————

    This idea that you can be manipulated into seeing, believing, buying and thinking things that aren’t what you normally would do — and nobody knows about it because nobody knows what I see is different from what you see — is scary.

    —————

    Don’t use technology that doesn’t need to be connected to the internet. If you don’t need that internet-connected baby thermometer, don’t buy it. It’s going to send your data to some company, and that company is going to sell it to marketers, and it’ll be a source of access for law enforcement. In particular, I don’t like those home assistants like the Alexa or Google Home because I think that eventually, those machines can be used to eavesdrop on people. Why would we invite a ready-made surveillance device into our home?

    Everybody likes new, fun stuff — I know lots of people who have those in-home assistants. I have a cell phone, I love the internet and I use Facebook. I think one of the things people really should do is push for better laws. That’s what the law is there for. It’s supposed to protect us and allow us to participate in the modern economy.

    At the end of your talk, you close by saying we need to demand transparency. What does transparency mean to you, and how we can reach it?

    There’s so much we don’t know about surveillance right now. In the criminal context, we don’t know how many particular surveillance orders are issued. We don’t know what kind of information they’re getting with them. We don’t know what they’re forcing companies to do. We don’t know if they’re potentially subverting security measures in order to facilitate spying on us. It’s much worse in the intelligence context where we have this FISA court that operates and issues opinions behind closed doors. They’re supposed to be publishing these opinions, but we very rarely see them. Any new and novel interpretations of law are meant to be published, but ever since that edict went into law, we haven’t had any FISA court opinions declassified. We find out way after the fact about things, like the FBI’s most recent violation of Section 702 rules, which meant agents had access to data and information they weren’t supposed to see. We find out about these problems years later. There’s just so much that we don’t know. 

    Transparency is the first step, but it’s not an end unto itself. There’s a Privacy and Civil Liberties Oversight Board, and that board has only recently confirmed members, and now there’s a quorum again. For a long time, that oversight board, which is expected to provide some narrow review of intelligence programs, wasn’t even in operation. We’re behind. Only a few senators and representatives care because the population isn’t coming forward and saying, “This is really important to us.” But they should be. 

    There’s no more obvious reason why you should care about surveillance than the Trump administration. In the past, people who have been blasé about surveillance had an assumption that if you weren’t doing anything wrong then you didn’t have anything to worry about — police would follow the rule of law, and everybody was operating with good faith. But today, you have the extremity of the immigration situation; today, you have the way that the Trump administration is punishing people who are coming to this country by kidnapping their children. There’s rampant sexism and anti-Semitism and racism, and this idea that people are “Black identity extremists” who should be surveilled — which just means the government is surveilling civil rights activists and communities of color. And so there’s this situation where this immense amount of technical power is in the hands of people who are operating in bad faith, based on the most base of motives.

    What does it mean that all this information has been gathered and can be accessed, manipulated and sold? And how do you speak to those who aren’t concerned and believe they have nothing to hide?

    There’s two things. One is that everybody has committed crimes. The amount of behavior that’s covered by criminal laws is huge — whether it’s smoking pot or lying on your taxes, there’s just so many ways that you can transgress the law. Nobody is 100 percent clean. If somebody wanted to go after you and they knew everything about you, there would be ample information to do that. It’s not just criminal stuff; it’s foolish things you’ve said in the past or people you were friends with who turned out to be crooked. There’s all kinds of things that can be used to tarnish your reputation with your employer or your friends or your spouse. 

    The second thing I tell people is that it’s not about you. You may be of no interest, but there are people out there who are challenging the status quo, and these people stick out in order to try to make change. And the powers that be don’t necessarily want change. They like the way things are because they’re the ones in control. So if you care about any political issue — whether it’s tax reform or Black Lives Matter — we need to ensure these people can operate freely in the political world. The ability to do that is greatly reduced if someone has to be afraid that the police are going to come after their undocumented relatives. People need to be concerned about information gathering on the private side because that’s one of the main avenues that information gets to law enforcement. There’s so much incentive on the private side to collect it. That incentive is based on the advertising model: the more that companies know about us, the more targeted the advertising can be and the more money they make. 

    —————

    The real thing to start worrying about is what we’re seeing in China, where they’re using face-surveillance to identify people, follow them out on the street and assign them a social score.

    —————

    Once you have that much information, people can be manipulated against their best interest. [Social media] sites are designed to be addictive, and in order to keep people clicking, they keep showing you more and more outrageous stuff. This totally skews your sense of the world and skews your facts so you don’t know what’s actually going on in the world. It makes you associate only with like-minded people and puts you into this filter bubble. This idea that you can be manipulated into seeing, believing, buying and thinking things that aren’t what you normally would do — and nobody knows about it because nobody knows that what I see is different from what you see — is scary.

    Once you have that data, there’s sociological or systemic problems, because there are certain decisions made based on that data about things, like who’s going to qualify for welfare benefits, what housing ads are shown to me based on my race, what job listings are shown to me based on my gender. These are other kinds of ways in which data can instantiate prejudice or discrimination. It’s not like there wasn’t prejudice or discrimination before big data — the fear is that it’s less obvious that it’s happening, and that makes it much more powerful.

    What does the future of surveillance and privacy look like? Is something like Google’s Smart City neighborhood in Toronto going to be the norm?

    I think that’s one possible outcome — that not just our communications data but data about our bodies, homes, relationships, shopping and more will be collected and will interact with each other far more than they are now. I think that’s definitely a trend. The real thing to start worrying about is what we’re seeing in China, where they’re using face-surveillance to identify people, follow them out on the street and assign them a social score, which is made up of factors like their law-abidingness, their job and their financials. This score that apparently dictates whether or not they’re good citizens follows them everywhere, enabling government and private entities to discriminate and make decisions about these people based on their rankings. That’s a really terrifying situation to have people be labeled and treated accordingly. That’s very Brave New World.

    Krebs on SecurityMariposa Botnet Author, Darkcode Crime Forum Admin Arrested in Germany

    A Slovenian man convicted of authoring the destructive and once-prolific Mariposa botnet and running the infamous Darkode cybercrime forum has been arrested in Germany on request from prosecutors in the United States, who’ve recently re-indicted him on related charges.

    NiceHash CTO Matjaž “Iserdo” Škorjanc, as pictured on the front page of a recent edition of the Slovenian daily Delo.si, is being held by German authorities on a US arrest warrant for operating the destructive “Mariposa” botnet and founding the infamous Darkode cybercrime forum.

    The Slovenian Press Agency reported today that German police arrested Matjaž “Iserdo” Škorjanc last week, in response to a U.S.-issued international arrest warrant for his extradition.

    In December 2013, a Slovenian court sentenced Škorjanc to four years and ten months in prison for creating the malware that powered the ‘Mariposa‘ botnet. Spanish for “Butterfly,” Mariposa was a potent crime machine first spotted in 2008. Very soon after its inception, Mariposa was estimated to have infected more than 1 million hacked computers — making it one of the largest botnets ever created.

    An advertisement for the ButterFly Bot.

    Škorjanc and his hacker handle Iserdo were initially named in a Justice Department indictment from 2011 (PDF) along with two other men who allegedly wrote and sold the Mariposa botnet code. But in June 2019, the DOJ unsealed an updated indictment (PDF) naming Škorjanc, the original two other defendants, and a fourth man (from the United States) in a conspiracy to make and market Mariposa and to run the Darkode crime forum.

    More recently, Škorjanc served as chief technology officer at NiceHash, a Slovenian company that lets users sell their computing power to help others mine virtual currencies like bitcoin. In December 2017, approximately USD $52 million worth of bitcoin mysteriously disappeared from the coffers of NiceHash. Slovenian police are reportedly still investigating that incident.

    The “sellers” page on the Darkode cybercrime forum, circa 2013.

    It will be interesting to see what happens with the fourth and sole U.S.-based defendant added in the latest DOJ charges — Thomas K. McCormick, a.k.a “fubar” — allegedly one of the last administrators of Darkode. Prosecutors say McCormick also was a reseller of the Mariposa botnet, the ZeuS banking trojan, and a bot malware he allegedly helped create called “Ngrbot.”

    Between 2010 and 2013, Fubar would randomly chat me up on instant messenger apropos of nothing to trade information about the latest goings-on in the malware and cybercrime forum scene.

    Fubar frequently knew before anyone else about upcoming improvements to or new features of ZeuS, and discussed at length his interactions with Iserdo/Škorjanc. Every so often, I would reach out to Fubar to see if he could convince one of his forum members to call off an attack against KrebsOnSecurity.com, an activity that had become something of a rite of passage for new Darkode members.

    On Dec. 5, 2013, federal investigators visited McCormick at his University of Massachusetts dorm room. According to a memo filed by FBI agents investigating the case, in that interview McCormick acknowledged using the “fubar” identity on Darkode, but said he’d quit the whole forum scene years ago, and that he’d even interned at Microsoft for several summers and at Cisco for one summer.

    A subsequent search warrant executed on his dorm room revealed multiple removable drives that held tens of thousands of stolen credit card records. For whatever reason, however, McCormick wasn’t arrested or charged until December 2018.

    According to the FBI, back in that December 2013 interview McCormick voluntarily told them a great deal about his various businesses and online personas. He also apparently told investigators he talked with KrebsOnSecurity quite a bit, and that he’d tipped me off to some important developments in the malware scene. For example:

    “TM had found the email address of the Spyeye author in an old fake antivirus affiliate program database and that TM was able to find the true name of the Spyeye author from searching online for an individual that used the email address,” the memo states. “TM passed this information on to Brian Krebs.”

    Read more of the FBI’s interview with McCormick here (PDF).

    News of Škorjanc’s arrest comes amid other cybercrime takedowns in Germany this past week. On Friday, German authorities announced they’d arrested seven people and were investigating six more in connection with the raid of a Dark Web hosting operation that allegedly supported multiple child porn, cybercrime and drug markets with hundreds of servers buried inside a heavily fortified military bunker.

    Planet DebianBen Hutchings: Debian LTS work, September 2019

    I was assigned 20 hours of work by Freexian's Debian LTS initiative and worked all those hours this month.

    I prepared and, after review, released Linux 3.16.74, including various security and other fixes. I then rebased the Debian package onto that. I uploaded that with a small number of other fixes and issued DLA-1930-1.

    I backported the latest security update for Linux 4.9 from stretch to jessie and issued DLA-1940-1 for that.

    CryptogramNSA on the Future of National Cybersecurity

    Glenn Gerstell, the General Counsel of the NSA, wrote a long and interesting op-ed for the New York Times where he outlined a long list of cyber risks facing the US.

    There are four key implications of this revolution that policymakers in the national security sector will need to address:

    The first is that the unprecedented scale and pace of technological change will outstrip our ability to effectively adapt to it. Second, we will be in a world of ceaseless and pervasive cyberinsecurity and cyberconflict against nation-states, businesses and individuals. Third, the flood of data about human and machine activity will put such extraordinary economic and political power in the hands of the private sector that it will transform the fundamental relationship, at least in the Western world, between government and the private sector. Finally, and perhaps most ominously, the digital revolution has the potential for a pernicious effect on the very legitimacy and thus stability of our governmental and societal structures.

    He then goes on to explain these four implications. It's all interesting, and it's the sort of stuff you don't generally hear from the NSA. He talks about technological changes causing social changes, and the need for people who understand that. (Hooray for public-interest technologists.) He talks about national security infrastructure in private hands, at least in the US. He talks about a massive geopolitical restructuring -- a fundamental change in the relationship between private tech corporations and government. He talks about recalibrating the Fourth Amendment (of course).

    The essay is more about the problems than the solutions, but there is a bit at the end:

    The first imperative is that our national security agencies must quickly accept this forthcoming reality and embrace the need for significant changes to address these challenges. This will have to be done in short order, since the digital revolution's pace will soon outstrip our ability to deal with it, and it will have to be done at a time when our national security agencies are confronted with complex new geopolitical threats.

    Much of what needs to be done is easy to see -- developing the requisite new technologies and attracting and retaining the expertise needed for that forthcoming reality. What is difficult is executing the solution to those challenges, most notably including whether our nation has the resources and political will to effect that solution. The roughly $60 billion our nation spends annually on the intelligence community might have to be significantly increased during a time of intense competition over the federal budget. Even if the amount is indeed so increased, spending additional vast sums to meet the challenges in an effective way will be a daunting undertaking. Fortunately, the same digital revolution that presents these novel challenges also sometimes provides the new tools (A.I., for example) to deal with them.

    The second imperative is we must adapt to the unavoidable conclusion that the fundamental relationship between government and the private sector will be greatly altered. The national security agencies must have a vital role in reshaping that balance if they are to succeed in their mission to protect our democracy and keep our citizens safe. While there will be good reasons to increase the resources devoted to the intelligence community, other factors will suggest that an increasing portion of the mission should be handled by the private sector. In short, addressing the challenges will not necessarily mean that the national security sector will become massively large, with the associated risks of inefficiency, insufficient coordination and excessively intrusive surveillance and data retention.

    A smarter approach would be to recognize that as the capabilities of the private sector increase, the scope of activities of the national security agencies could become significantly more focused, undertaking only those activities in which government either has a recognized advantage or must be the only actor. A greater burden would then be borne by the private sector.

    It's an extraordinary essay, less for its contents and more for the speaker. This is not the sort of thing the NSA publishes. The NSA doesn't opine on broad technological trends and their social implications. It doesn't publicly try to predict the future. It doesn't philosophize for 6000 unclassified words. And, given how hard it would be to get something like this approved for public release, I am left to wonder what the purpose of the essay is. Is the NSA trying to lay the groundwork for some policy initiative? Some legislation? A budget request? What?

    Charlie Warzel has a snarky response. His conclusion about the purpose:

    He argues that the piece "is not in the spirit of forecasting doom, but rather to sound an alarm." Translated: Congress, wake up. Pay attention. We've seen the future and it is a sweaty, pulsing cyber night terror. So please give us money (the word "money" doesn't appear in the text, but the word "resources" appears eight times and "investment" shows up 11 times).

    Susan Landau has a more considered response, which is well worth reading. She calls the essay a proposal for a moonshot (which is another way of saying "they want money"). And she has some important pushbacks on the specifics.

    I don't expect the general counsel and I will agree on what the answers to these questions should be. But I strongly concur on the importance of the questions and that the United States does not have time to waste in responding to them. And I thank him for raising these issues in so public a way.

    I agree with Landau.

    Slashdot thread.

    Worse Than FailureWhen Unique Isn't Unique


    Gather 'round, young'uns, for a tale from the Dark Ages of mobile programming: the days before the iPhone launched. Despite what Apple might have you believe, the iPhone wasn't the first portable computing device. Today's submitter, Jack, was working for a company that streamed music to these non-iPhone devices, such as the Palm Treo or the Samsung Blackjack. As launch day approached for the new client for Windows Mobile 6, our submitter realized that he'd yet to try the client on a non-phone device (called a PDA, for those of you too young to recall). So he tracked down an HP iPaq on eBay just so he could verify that it worked on a device without the phone API.

    The device arrived a few days out from launch, after QA had already approved the build on other devices. It should've been a quick test: sideload the app, stream a few tracks, log in, log out. But when Jack opened the app for the first time on the new device, it was already logged into someone else's account! He closed it and relaunched, only to land in a different account that also wasn't his. What on earth?!

    The only thing the accounts Jack landed in had in common was that their owners were running the same model of PDA. That turned out to be the key to the issue. To identify which device was calling the streaming service, Jack used a Windows Mobile call that returned a unique ID for each device. On most devices, the identifier was derived from the IMEI, ensuring uniqueness—but not on the HP iPaq, where every unit returned the same ID. As a result, any iPaq would be logged straight into the account of whichever iPaq user had most recently logged in, since the service kept a recent-user record keyed by that device ID.

    Jack had read the documentation many times, and it always stated that the ID was guaranteed to be unique. Either HP had a different definition of "unique" than anyone else, or they had a major security bug!

    Jack emailed HP, but they had no plans to fix the issue, so he had to whip up an alternate method of generating a UUID in the case that the user was on this device. The launch had to be pushed back to accommodate it, but the hole was plugged, and life went on as usual.
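Jack's workaround amounts to distrusting the platform's "unique" ID on the affected model and substituting a persisted random UUID. Here is a minimal Python sketch of that idea (the model blocklist, the `effective_device_id` name and the dict-backed store are hypothetical stand-ins, not the original Windows Mobile code):

```python
import uuid

# Device models known to return a non-unique "unique" ID.
# (Hypothetical blocklist; in the story, the culprit was the HP iPaq.)
UNTRUSTED_MODELS = {"HP iPaq"}

def effective_device_id(model, platform_id, store):
    """Return a usable per-device ID.

    `platform_id` is whatever the platform's "unique ID" call returned;
    `store` is a dict-like persistent store on the device.
    """
    if model not in UNTRUSTED_MODELS:
        return platform_id
    # The platform ID can't be trusted: generate a random UUID once
    # and persist it, so the same device keeps the same identity.
    if "fallback_id" not in store:
        store["fallback_id"] = str(uuid.uuid4())
    return store["fallback_id"]
```

The key property is that the fallback is generated once and persisted, so an untrusted device still gets a stable identity across sessions without ever colliding with another unit of the same model.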



    CryptogramSupply-Chain Security and Trust

    The United States government's continuing disagreement with the Chinese company Huawei underscores a much larger problem with computer technologies in general: We have no choice but to trust them completely, and it's impossible to verify that they're trustworthy. Solving this problem -- which is increasingly a national security issue -- will require us to both make major policy changes and invent new technologies.

    The Huawei problem is simple to explain. The company is based in China and subject to the rules and dictates of the Chinese government. The government could require Huawei to install back doors into the 5G routers it sells abroad, allowing the government to eavesdrop on communications or -- even worse -- take control of the routers during wartime. Since the United States will rely on those routers for all of its communications, we become vulnerable by building our 5G backbone on Huawei equipment.

    It's obvious that we can't trust computer equipment from a country we don't trust, but the problem is much more pervasive than that. The computers and smartphones you use are not built in the United States. Their chips aren't made in the United States. The engineers who design and program them come from over a hundred countries. Thousands of people have the opportunity, acting alone, to slip a back door into the final product.

    There's more. Open-source software packages are increasingly targeted by groups installing back doors. Fake apps in the Google Play store illustrate vulnerabilities in our software distribution systems. The NotPetya worm was distributed by a fraudulent update to a popular Ukrainian accounting package, illustrating vulnerabilities in our update systems. Hardware chips can be back-doored at the point of fabrication, even if the design is secure. The National Security Agency exploited the shipping process to subvert Cisco routers intended for the Syrian telephone company. The overall problem is that of supply-chain security, because every part of the supply chain can be attacked.

    And while nation-state threats like China and Huawei -- or Russia and the antivirus company Kaspersky a couple of years earlier -- make the news, many of the vulnerabilities I described above are being exploited by cybercriminals.

    Policy solutions involve forcing companies to open their technical details to inspection, including the source code of their products and the designs of their hardware. Huawei and Kaspersky have offered this sort of openness as a way to demonstrate that they are trustworthy. This is not a worthless gesture, and it helps, but it's not nearly enough. Too many back doors can evade this kind of inspection.

    Technical solutions fall into two basic categories, both currently beyond our reach. One is to improve the technical inspection processes for products whose designers provide source code and hardware design specifications, and for products that arrive without any transparency information at all. In both cases, we want to verify that the end product is secure and free of back doors. Sometimes we can do this for some classes of back doors: We can inspect source code -- this is how a Linux back door was discovered and removed in 2003 -- or the hardware design, which becomes a cleverness battle between attacker and defender.

    This is an area that needs more research. Today, the advantage goes to the attacker. It's hard to ensure that the hardware and software you examine is the same as what you get, and it's too easy to create back doors that slip past inspection. And while we can find and correct some of these supply-chain attacks, we won't find them all. It's a needle-in-a-haystack problem, except we don't know what a needle looks like. We need technologies, possibly based on artificial intelligence, that can inspect systems more thoroughly and faster than humans can do. We need them quickly.

    The other solution is to build a secure system, even though any of its parts can be subverted. This is what the former Deputy Director of National Intelligence Sue Gordon meant in April when she said about 5G, "You have to presume a dirty network." Or more precisely, can we solve this by building trustworthy systems out of untrustworthy parts?

    It sounds ridiculous on its face, but the Internet itself was a solution to a similar problem: a reliable network built out of unreliable parts. This was the result of decades of research. That research continues today, and it's how we can have highly resilient distributed systems like Google's network even though none of the individual components are particularly good. It's also the philosophy behind much of the cybersecurity industry today: systems watching one another, looking for vulnerabilities and signs of attack.
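One classic way to coax reliable behavior out of unreliable parts is redundancy with majority voting. A toy Python sketch, purely illustrative and not from the essay (the `majority_vote` helper and its replicas are invented for this example):

```python
from collections import Counter

def majority_vote(replicas, query):
    """Query several independent, individually unreliable replicas
    and return the answer most of them agree on.

    If only a minority of replicas is faulty or subverted, the
    majority answer still wins: the composite system is more
    trustworthy than any single part.
    """
    answers = [replica(query) for replica in replicas]
    answer, count = Counter(answers).most_common(1)[0]
    if count <= len(replicas) // 2:
        raise RuntimeError("no majority -- too many faulty replicas")
    return answer
```

For example, with two honest replicas and one that always returns garbage, the honest answer wins; the scheme fails loudly only when faulty replicas are no longer outnumbered. The caveat the essay raises still applies: this handles random failure far better than coordinated subversion.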

    Security is a lot harder than reliability. We don't even really know how to build secure systems out of secure parts, let alone out of parts and processes that we can't trust and that are almost certainly being subverted by governments and criminals around the world. Current security technologies are nowhere near good enough, though, to defend against these increasingly sophisticated attacks. So while this is an important part of the solution, and something we need to focus research on, it's not going to solve our near-term problems.

    At the same time, all of these problems are getting worse as computers and networks become more critical to personal and national security. The value of 5G isn't for you to watch videos faster; it's for things talking to things without bothering you. These things -- cars, appliances, power plants, smart cities -- increasingly affect the world in a direct physical manner. They're increasingly autonomous, using A.I. and other technologies to make decisions without human intervention. The risk from Chinese back doors into our networks and computers isn't that their government will listen in on our conversations; it's that they'll turn the power off or make all the cars crash into one another.

    All of this doesn't leave us with many options for today's supply-chain problems. We still have to presume a dirty network -- as well as back-doored computers and phones -- and we can clean up only a fraction of the vulnerabilities. Citing the lack of non-Chinese alternatives for some of the communications hardware, some are already calling to abandon attempts to secure 5G from Chinese back doors and instead work on having secure American or European alternatives for 6G networks. It's not nearly enough to solve the problem, but it's a start.


    Perhaps these half-solutions are the best we can do. Live with the problem today, and accelerate research to solve the problem for the future. These are research projects on a par with the Internet itself. They need government funding, like the Internet itself. And, also like the Internet, they're critical to national security.

    Critically, these systems must be as secure as we can make them. As former FCC Commissioner Tom Wheeler has explained, there's a lot more to securing 5G than keeping Chinese equipment out of the network. This means we have to give up the fantasy that law enforcement can have back doors to aid criminal investigations without also weakening these systems. The world uses one network, and there can only be one answer: Either everyone gets to spy, or no one gets to spy. And as these systems become more critical to national security, a network secure from all eavesdroppers becomes more important.

    This essay previously appeared in the New York Times.

    TEDUnlock: The talks of TED@BCG 2019

    Seema Bansal hosts Session 2 of TED@BCG: Unlock — a day of talks and performances exploring how we can reach our full potential — at the Grand Hyatt Mumbai on September 24, 2019 in Mumbai, India. (Photo: Amit Madheshiya / TED)

    To succeed in the next decade and beyond, we can’t just optimize what we know. We need to keep learning, imagining, inventing. In a day of talks and performances, 16 speakers and performers explored how we can unlock our full potential — human, technological and natural — to accomplish things we never thought possible.

    The event: TED@BCG, the eighth time TED and BCG have partnered to bring leaders, innovators and changemakers to the stage to share ideas for solving society’s biggest challenges. Hosted by TED’s Corey Hajim and BCG’s Seema Bansal.

    When and where: Tuesday, September 24, 2019, at the Grand Hyatt in Mumbai, India

    Music: Performances by Gingger Shankar and Dee MC

    Opening and closing remarks: Rich Lesser, CEO of BCG

    The talks in brief:

    “Look around and find the people that inspire you to co-conspire. I promise you that your empathy and your courage will change someone’s life and may even change the world,” says Ipsita Dasgupta. She speaks at TED@BCG at the Grand Hyatt Mumbai on September 24, 2019 in Mumbai, India. (Photo: Amit Madheshiya / TED)

    Ipsita Dasgupta, co-conspirator

    Big idea: The world needs “co-conspirators”: people willing to bend or break the rules and challenge the status quo and societal norms.

    Why? In the face of constant change and complexity, we need unconventional people making decisions at the table. These co-conspirators — whom Dasgupta introduces through three exemplary stories, including a mother insistent on forgoing some traditional gender roles — can help create new ways of thinking, acting and questioning why and how we do things.

    Quote of the talk: “To achieve great heights or change the world, no matter how smart we are, we all need people.”


    Jean-Manuel Izaret, pricing strategist

    Big idea: Because of their huge per-patient cost, medications that could drastically reduce rates of deadly diseases like hepatitis C are often reserved for only the sickest patients, while many others go untreated. Is there a way to pay for these drugs so that every patient can get them, and drug companies can still finance their development?

    How? The pricing model for pharmaceuticals is typically based on the cost per patient treated — and it’s a broken model, says Izaret. He explains that a subscription-like payment system (similar to the one pioneered by Netflix) could distribute costs over time and across an entire population of patient subscribers. By combining the savings of early treatment with the lower costs of a larger patient pool, healthcare providers could improve outcomes and remain profitable.

    Quote of the talk: “I think we don’t really have a price point problem — I think we have a pricing model problem. I think the problem is not the number, but the unit by which we price.”
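The arithmetic behind the two pricing models is easy to illustrate. A toy Python comparison with entirely made-up numbers (none of these figures come from the talk):

```python
# Hypothetical numbers for illustration only.
price_per_patient = 50_000   # per-course price under per-patient pricing
budget = 10_000_000          # payer's annual drug budget
patients_in_need = 5_000

# Per-patient model: the fixed budget covers only the sickest few.
treated_per_patient_model = budget // price_per_patient

# Subscription model: the same budget buys treatment for the whole
# population, so the effective per-patient cost collapses.
cost_per_patient_subscription = budget / patients_in_need

print(treated_per_patient_model)        # patients treated under the old model
print(cost_per_patient_subscription)    # effective cost per patient under subscription
```

Under these assumed numbers, the same budget treats 200 patients at list price, or all 5,000 at an effective 2,000 each: the point is exactly Izaret's, that changing the unit of pricing, not the total spend, is what unlocks universal treatment.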


    Sougwen Chung, artist and researcher

    Big idea: The future of creative collaboration between humans and machines is limitless — with beauty latent in our shared imperfections.

    Why? As the world strives towards precision and perfection, Chung creates collaborative art with robots that explores what automation means for the future of human creativity. Through machine learning, Chung “taught” her own artistic style to her nonhuman collaborator, a robot called Drawing Operations Unit: Generation (DOUG). DOUG’s initial goal was to mimic her line as she drew, but they made an unexpected discovery along the way: robots make mistakes too. “Our imperfections became what was beautiful about the interaction,” Chung says. “Maybe part of the beauty of human and machine systems is their inherent, shared fallibility.” Chung recently launched a lab called Scilicet, where artists and researchers are welcome to join her in contributing to the future of human and AI creativity.

    Quote of the talk: “By teaching machines to do the work traditionally done by humans, we can explore and evolve our criteria of what’s made possible by the human hand — and part of that journey is embracing the imperfections, recognizing the fallibility of both human and machine, in order to expand the potential of both.”


    Kavita Gupta thinks a global, decentralized currency would lead us to “true financial and economic inclusivity, where every citizen in this world has the same choice, same dignity and same opportunity.” She speaks at TED@BCG at the Grand Hyatt Mumbai on September 24, 2019 in Mumbai, India. (Photo: Amit Madheshiya / TED)

    Kavita Gupta, currency globalist

    Big idea: The world should share one stable, decentralized currency.

    How, and why? Blockchain and cryptocurrencies could provide better data privacy than anything we use today. They would be immune to global disruptions incited by local unrest or inefficient politicians while offering a global marketplace that “would not just be a way for the elite to diversify their portfolio, but also for the average person to increase sustainable wealth,” Gupta says. With real-world examples that root her perspective in the possible and achievable, she weaves a framework for a united future.

    Quote of the talk: “All of this inches us toward a more stable, secure place — to true financial and economic inclusivity, where every citizen in this world has the same choice, same dignity and same opportunity.”


    Markus Mutz, supply chain hacker

    Big idea: We need clarity on how consumer products are made and where they come from in order to make ethical and informed decisions before purchase.

    How? Over the past two years, Mutz and his team founded OpenSC (SC = supply chain) and partnered with the World Wide Fund for Nature to bring transparency and traceability to the supply chain process. Together, Mutz believes their efforts will help revolutionize the way we buy and create products. It’ll happen with three straightforward steps: by verifying production claims, tracing products throughout their supply chains and sharing information that will allow consumers to make decisions more aligned with their values — all with the aid of blockchain.

    Quote of the talk: “If we have reliable and trustworthy information, and the right systems that make use of it, consumers will support those who are doing the right thing by producing products in a sustainable and ethical way.”


    “I firmly believe that if there is any public system in any country that is in inertia, then you have to bring back the motivation. And a great way to trigger motivation is to increase transparency to the citizen,” says public sector strategist Abhishek Gopalka. He speaks at TED@BCG at the Grand Hyatt Mumbai on September 24, 2019 in Mumbai, India. (Photo: Amit Madheshiya / TED)

    Abhishek Gopalka, public sector strategist

    Big idea: How do we motivate people working in public sectors like healthcare to feel accountable for providing quality care? With transparency.

    Why? Internal, data-driven reviews aren’t enough to keep people accountable, says Gopalka. Instead, we need to move people to do better by sparking their competitive sides — making actions transparent so they either shine or fail in the public eye. In Rajasthan, a state in India that’s home to more than 80 million people, Gopalka has helped to significantly improve the public health system in just two years. How? Public health centers now publicly promise to provide citizens with free care, medicine and diagnosis, resulting in an increase in doctor availability, readily available drugs and, ultimately, patient visits. If applied elsewhere, transparency could benefit many broken systems. Because the first step to solving any complex issue is motivation.

    Quote of the talk: “Motivation is a tricky thing. If you’ve led a team, raised a child or tried to change a personal habit, you know that motivation doesn’t just appear. Something needs to change to make you care. And if there’s one thing that all of us humans care about, it’s an inherent desire to shine in front of society.”


    Gaby Barrios, marketing expert

    Big idea: By focusing less on gender when marketing products to consumers, we can build better brands — and a better world.

    How? Companies often advertise to consumers by appealing to gender stereotypes, but this kind of shortcut isn’t just bad for society — it’s bad for business, says Barrios. Research shows that gender doesn’t drive choice nearly as much as companies assume, yet many still rely on outdated, condescending stereotypes to reach consumers. By looking at variables outside of gender, like location and financial status, companies can develop more nuanced campaigns, grow their brands and reach the customers they want.

    Quote of the talk: “Growth is not going to come from using an outdated lens like gender. Let’s stop doing what’s easy and go for what’s right. At this point, it’s not just for your business — it’s for society.”


    Sylvain Duranton, AI bureaucracy buster

    Big idea: Artificial intelligence can streamline businesses, but it can also miss human nuances in disastrous ways. To avoid this, we need to use AI systems alongside humans, not instead of them. 

    How? For companies, deploying AI alongside human teams can be harder and more expensive than relying on AI alone. But this dynamic is necessary to ensure that business decisions take human needs and ethics into account, says Duranton. AI bases decisions on data sets and strict rules, but it can’t quite tell the difference between “right” and “wrong” — which means that AI mistakes can be severe, even fatal. By pairing AI with human teams, we can use AI’s efficiency and human knowledge to create business strategies that are successful, smart and ethical.

    Quote of the talk: “Winning organizations will invest in human knowledge, not just AI and data.”


    Akiko Busch, author

    Big idea: In a world where transparency and self-promotion are glorified, let’s not forget the power and beauty of invisibility.

    Why? Invisible cloaks, invisible ink, invisible friends — from the time we’re kids, invisibility gives us a sense of protection, knowledge and security. Akiko Busch thinks it’s time for us to reconsider the power of invisibility. When we disappear into nature, listen without responding, lose ourselves in the primal collectivity of concerts — in all cases, we become more creative and feel more connected to each other and ourselves. In an age where “visibility rules the day,” she says, there is beauty in stepping out of the spotlight, disappearing and existing — if only briefly — invisibly. 

    Quote of the talk: “Being unseen takes us from self-interest to a larger sense of inclusion in the human family.”


    Evolutionary biologist Toby Kiers shares what fungi networks and relationships reveal about human economies — and what they can tell us about how extreme inequalities grow. She speaks at TED@BCG at the Grand Hyatt Mumbai on September 24, 2019 in Mumbai, India. (Photo: Amit Madheshiya / TED)

    Toby Kiers, evolutionary biologist

    Big idea: By studying fungi networks and relationships, we can learn more about how human economies work and how extreme inequalities grow.

    How? Extreme inequality is one of humanity’s greatest challenges — but it’s not a uniquely human phenomenon. Like us, fungi can strategically trade, steal and withhold resources (though they do all this without cognitive thought, of course). Whereas human systems are built with an understanding of morals, fungi networks have evolved to be ruthless and solely opportunistic. The parallels are remarkable: for example, Kiers found that supply-and-demand economics still held true in fungi relationships. Examining these relationships gives us the chance to better diagnose problems within our own systems and even borrow solutions from the fungi. Kiers’s team is now studying the parallels between fungal network flow patterns and computer algorithms — and there’s even more ahead.

    Quote of the talk: “The [fungal] trade system provides us with a benchmark to study what an economy looks like when it’s been shaped by natural selection for hundreds of millions of years, in the absence of morality, when strategies are just based on the gathering and processing of information.”


    Chris Kutarna, writer and philosopher

    Big idea: Facebook, Twitter and their disruptive cousins have upended our notions of truth. Social media’s assault on simple veracity has led many to cry for its regulation — but philosopher Chris Kutarna believes that we should “let social media run wild, because the truths it breaks … need to be broken.”

    How? Kutarna argues that it was the age of mass media that birthed the notion that truth exists in concise, marketable chunks — and this idea does not mirror reality. Promoting a concept like “globalization” as an unassailable axiom rather than as a complex idea with many conflicting currents is reductive and dangerous. If we were to embrace social media’s multiplicity of voices and perspectives rather than enforce a single standard for truth, we could initiate a search for truths too complex for a single perspective to contain. 

    Quote of the talk: “What is truth? I don’t know. I can’t know because truth is supposed to be the reality that is bigger than ourselves. To find truth, we need to get together and go and search for it together. Without that search … we’re trapped in our own perspective.”


    “Leaders should not impose their will; leaders should act by shaping the context rather than control,” says management consultant Fang Ruan. She speaks at TED@BCG at the Grand Hyatt Mumbai on September 24, 2019 in Mumbai, India. (Photo: Amit Madheshiya / TED)

    Fang Ruan, management consultant

    Big idea: Influenced by ancient Chinese philosophy, Chinese businesses are shifting towards management techniques that foster more collaborative, spontaneous environments.

    How? Enjoying a delicious plate of dumplings one night, Fang Ruan was intrigued as she watched how the business was run. To her surprise, she found a “two hat” strategy: front-line managers were given new responsibilities beyond their current scope, and ideas were welcomed from people at all steps of the career ladder. This approach differs from China’s dominant, Confucianism-influenced business strategy, which values authority and seniority and has served as a time-tested formula for precise execution at a large scale. Now, as tech companies disrupt traditional industries and millennials make up a larger share of the workforce, new ways of managing have emerged, Ruan says. Unconventional management is on the rise — characterized by more collaborative, innovative strategies that resemble the philosophy of Taoism, which holds that things work best when their natural state is supported rather than controlled.

    Quote of the talk: “Leaders should not impose their will; leaders should act by shaping the context rather than control.”


    Amane Dannouni shares what digital marketplaces in the developing world can teach us about how to preserve jobs and local economies. He speaks at TED@BCG at the Grand Hyatt Mumbai on September 24, 2019 in Mumbai, India. (Photo: Amit Madheshiya / TED)

    Amane Dannouni, digital business strategist

    Big idea: Disruptive startups like Uber, Amazon and Airbnb have reinvented entire industries. Their digital disruption of existing services has provided game-changing benefits for their users and affiliates — but it’s also led to big losses for those whose livelihoods depended on the old, physical business models. Amane Dannouni believes that digital marketplaces in the developing world can teach us valuable lessons about how to preserve jobs and local economies.

    How? Companies like Gojek in Indonesia, Jumia in Nigeria and Grab in Singapore have reinvigorated the economic landscapes that spawned them, and in the process energized their surrounding communities. They did this not by ignoring their competitors but by integrating community businesses into their own platforms, and by giving their users support — like insurance and online education — that goes above and beyond simply linking providers to their patrons.

    Quote of the talk: “What all these [online marketplaces] have in common is that they transition this basic functionality of matching sellers and buyers from the physical world to the digital world and, by doing so, they can find better matches, do it faster, and ultimately unlock more value for everyone.”


    Lorna Davis, business leader

    Big idea: We need to break our obsession with heroes. Real change can only happen when we work together.

    How? “In a world as complex and interconnected as the one we live in, the idea that one person has the answer is ludicrous,” says Davis. What we really need is “radical interdependence,” shaped by leaders who set different goals and ask others to help them solve big problems. Here’s the difference: whereas “hero” leaders see everyone else as a competitor or a follower, interdependent leaders understand that they need others and genuinely want input. Likewise, heroes set goals that can be delivered through individual results, while interdependent leaders set goals that one person or organization cannot possibly achieve alone. At TED@BCG, Davis sets an “interdependent” goal of her own — calling on the world to help her in her work to end rhino poaching.

    Quote of the talk: “We don’t need heroes. We need radical interdependence — which is just another way of saying: we need each other.”

    LongNowHow to Avoid a Negative Climate Future for the World’s Oceans

    On September 25th, the UN-led Intergovernmental Panel on Climate Change (IPCC) released a landmark report on the impact of climate change on the world’s oceans. Over 100 authors from 36 countries analyzed the latest scientific findings on the ocean and cryosphere in a changing climate. The picture the report paints is dire, writes Robinson Meyer in The Atlantic:

    While the report covers how climate change is reshaping the oceans and ice sheets, its deeper focus is how water, in all its forms, is closely tied to human flourishing. If our water-related problems are relatively easy to manage, then the problem of self-government is also easier. But if we keep spewing carbon pollution into the air, then the resulting planetary upheaval would constitute “a major strike against the human endeavor,” says Michael Oppenheimer, a lead author of the report and a professor of geosciences and international affairs at Princeton.

    “We can adapt to this problem up to a point,” Oppenheimer told me. “But that point is determined by how strongly we mitigate greenhouse-gas emissions.”

    If humanity manages to quickly lower its carbon pollution in the next few decades, then sea-level rise by 2100 may never exceed about one foot, the report says. This will be tough but manageable, Oppenheimer said. But if carbon pollution continues rising through the middle of the century, then sea-level rise by 2100 could exceed 2 feet 9 inches. Then “the job will be too big,” he said. “It will be an unmanageable problem.”

    […]

    The headline finding of this report is that sea-level rise could be worse than we thought. The report’s projection of worst-case sea-level rise by 2100 is about 10 percent higher than the IPCC predicted five years ago. The IPCC has been steadily ratcheting up its sea-level-rise projections since its 2001 report, and it is likely to increase the numbers further in the 2021 report, when the IPCC runs a new round of global climate models.

    The cascade of consequences related to sea-level rise include a decline in seafood safety, extreme flooding for coastal areas, a decline in biodiversity in the oceans, and the melting of glaciers in the United States, including ones major cities rely upon for water.

    Unless policies are enacted to reduce carbon emissions now, many of the worst-case scenarios outlined in the report might come to pass.

    A new paper in Science details a “no-regrets to-do list” of ocean climate proposals that could be set in motion today. The proposals are based on another just-released report from the High Level Panel (HLP) for a Sustainable Ocean Economy that, the authors say, “provide hope and a path forward.”

    The paper focuses on five areas of action mentioned in the report: renewable energy; shipping and transport; protection and restoration of coastal and marine ecosystems; fisheries, aquaculture, and shifting diets; and carbon storage in the seabed.

    These five areas were identified, quantified, and evaluated relative to achieving the 2030 Agenda for Sustainable Development. The report concludes that these actions (in the right policy, investment, and technology environments) could reduce global GHG emissions by up to 4 billion tonnes of carbon dioxide equivalents in 2030 and by up to 11 billion tonnes in 2050. This could contribute as much as 21% of the emission reduction required in 2050 to limit warming to 1.5°C and 25% for a 2°C target. Reductions of this magnitude are larger than the annual emissions from all current coal-fired power plants worldwide.

    The paper offers short-term and long-term proposals around these five action areas, including setting “clear national targets for increasing the share of ocean-based renewable energy,” improving the fuel efficiency of ships, restoring coastal “blue carbon” ecosystems, introducing seaweed into the diets of sheep and cattle, encouraging diet shifts in humans toward more sources of sustainable low-carbon protein from the ocean, and more.

    “Make no mistake: These actions are ambitious,” the paper admits. “But we argue that they are necessary, could pay major dividends toward closing the emissions gap in coming decades, and achieve other co-benefits along the way.”

    Another path forward was put forth earlier this summer by Revive & Restore. Its 200-page report provides the first-of-its-kind assessment of genomic and biotech innovations to complement, enhance, and accelerate today’s marine conservation strategies.

    Revive & Restore’s mission is to enhance biodiversity through the genetic rescue of endangered and extinct species. In pursuit of this and in response to global threats to marine ecosystems, the organization conducted an Ocean Genomics Horizon Scan – interviewing almost 100 marine biologists, conservationists, and technologists representing over 60 institutions. Each was challenged to identify ways that rapid advances in genomics could be applied to address marine conservation needs. The resulting report is a first-of-its-kind assessment highlighting the opportunities to bring genomic insight and biotechnology innovations to complement current and future marine conservation.

    Our research has shown that we now have the opportunity to apply biotechnology tools to help solve some of the most intractable problems in ocean conservation resulting from: overfishing, invasive species, biodiversity loss, habitat destruction, and climate change. This report presents the most current genomic solutions to these threats and develops 10 “Big Ideas” – which, if funded, can help build transformative change and be catalytic for marine health.

    Learn More

    Worse Than FailureCodeSOD: Butting In

    Initech is a large, international corporation. Any time you're doing business at a global scale, you're going to need to contend with a language barrier sooner or later. This makes employees who are multilingual valuable.

    Dana recently joined Initech, and in the first week, was warned about Jerry. Jerry was the "chief" "architect" and team "lead", and was one of those special, valuable employees who spoke three languages. Correction, "spoke" needs scare quotes too, because Jerry was incomprehensible in every language he spoke, including his native tongue.

    Jerry's emails were the stuff of legend around the office. Punctuation was included, not to structure sentences, but as a kind of decoration, just to spice up his communiqués. Capitalization was applied at random. Sentences weren't there to communicate a single thought or idea, but to express fragments of half-considered dreams.

    Despite being the "chief architect", Jerry's code was about as clear as his emails. His class definitions were rambling stretches of unrelated functionality, piled together into a ball of mud. Splattered through it all were blocks of commented out functionality. And 99.9% of his commits to master had syntax errors.

    Why did his commits always have syntax errors? Jerry had never seen fit to install a C++ compiler on his machine, and instead pushed to master and let their CI system compile and find all his syntax errors. He'd then amend the commit to fix the errors, and woe betide anyone else working in the repo, because he'd next git push --force the amended commit. Then he'd fix the new round of syntax errors.

    Their organization did have an official code review standard, but since no one understood any of Jerry's code, and Jerry was the "chief", Jerry reviewed his own code.

    So, let's talk about enumerated types. A common practice in C++ enums is to include an extra value in the enum, just to make it easy to discover the size of the enum, like so:

    enum Color { COLOR_RED, COLOR_BLACK, COLOR_BLUE, COLOR_SIZE };

    COLOR_SIZE isn't actually a color value, but it tells you how many color values there are. This can be useful when working with a large team, as it serves as a form of documentation. It also allows patterns like `for (int i = 0; i < COLOR_SIZE; i++)…`. Of course, it only works when everyone follows the same convention.

    Jerry couldn't remember the convention. So, in his native language, he invented a new one: he'd end all his enums with a _END instead of _SIZE. But Jerry also couldn't remember what the English word for "end" was. So he went off to Google Translate, and got an English translation.

    Then he wrote code. Lots of code. No one got to review this code. Jerry touched everything, without worrying about what any other developer was doing.

    This meant that before long, every enum in the system looked like this:

    enum Color { COLOR_RED, COLOR_BLACK, COLOR_BLUE, COLOR_BUTT };

    Eventually, Jerry left Initech. He'd found a position where he could be a CTO of a well-funded startup. The very same day, Dana submitted her largest pull request ever, where she removed every single one of Jerry's butts.

    [Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

    ,

    TEDTrailblazers: A night of talks in partnership with The Macallan

    Curators David Biello and Chee Pearlman host TED Salon: Trailblazers, in partnership with The Macallan, at the TED World Theater in New York City on June 27, 2019. (Photo: Ryan Lash / TED)

    The event: TED Salon: Trailblazers, hosted by TED design and arts curator Chee Pearlman and TED science curator David Biello

    When and where: Thursday, June 27, 2019, at the TED World Theater in New York City

    The partner: The Macallan

    Music: Sammy Rae & The Friends

    The talks in brief:

    Marcus Bullock, entrepreneur and justice reform advocate

    • Big idea: Over his eight-year prison sentence, Marcus Bullock was sustained by his mother’s love — and her photos of cheeseburgers. Years later, as an entrepreneur, he asked himself, “How can I help make it easier for other families to deliver love to their own incarcerated loved ones?”
      Communicating with prisoners is notoriously difficult and dominated by often-predatory telecommunications companies. By creating Flikshop — an app that allows inmates’ friends and families to send physical picture postcards into prison with the ease of texting — Marcus Bullock is bypassing the billion-dollar prison telecommunications industry and allowing hundreds of thousands of prisoners access to the same love and motivation that his mother gave him.
    • Quote of the talk: “I stand today with a felony, and just like millions of others around the country who also have that ‘F’ on their chest, just as my mom promised me many years ago, I wanted to show them that there was still life after prison.”

    “It’s always better to collaborate with different communities rather than trying to speak for them,” says fashion designer Becca McCharen-Tran. She speaks at TED Salon: Trailblazers, in partnership with The Macallan, at the TED World Theater, June 27, 2019, New York, NY. (Photo: Ryan Lash / TED)

    Becca McCharen-Tran, founder and creative director of bodywear line CHROMAT

    • Big idea: Fashion designers have a responsibility to create inclusive designs suited for all gender expressions, ages, ability levels, ethnicities and races — and by doing so, they can shatter our limited definition of beauty.
      From day one in school, fashion designers are taught to create for a certain type of body, painting “thin, white, cisgender, able-bodied, young models as the ideal,” says fashion designer Becca McCharen-Tran. This has made body-shaming a norm for so many who strive to assimilate to the illusion of perfection in fashion imagery. McCharen-Tran believes creators are responsible for reimagining and expanding what a “bikini body” is. Her swimwear-focused clothing line CHROMAT celebrates beauty in all its forms, unapologetically countering the narrative through inclusive, explosive designs that welcome all of the uniqueness that comes with being human.
    • Quote of the talk: “Inclusivity means nothing if it’s only surface level … who is making the decisions behind the scenes is just as important. It’s imperative to include diverse decision-makers in the process, and it’s always better to collaborate with different communities rather than trying to speak for them.”

    Amy Padnani, editor at the New York Times (or, as some of her friends call her, the “Angel of Death”)

    • Big idea: No one deserves to be overlooked in life, even in death.
      Padnani created “Overlooked,” a New York Times series that recognizes the stories of dismissed and marginalized people. Since 1851, the newspaper has published thousands of obituaries for individuals like heads of state and celebrities, but only a small number of those obits chronicled the lives of women and people of color. With “Overlooked,” Padnani forged a path for the publication to right the wrongs of the past while refocusing society’s lens on who’s considered important. Powerful in its ability to shift perspectives and honor those once ignored, “Overlooked” is also on track to become a Netflix series.
    • Fun fact: Prior to Padnani’s breakout project, the New York Times had yet to publish obituaries on notable individuals in history such as Ida B. Wells, Sylvia Plath, Ada Lovelace and Alan Turing.

    Sam Van Aken shares the work behind the “Tree of 40 Fruit,” an ongoing series of hybridized fruit trees that grow multiple varieties of stone fruit. He speaks at TED Salon: Trailblazers, in partnership with The Macallan, at the TED World Theater, June 27, 2019, New York, NY. (Photo: Ryan Lash / TED)

    Sam Van Aken, multimedia contemporary artist, art professor at Syracuse University in New York and creator of the Tree of 40 Fruit

    • Big idea: Many of the fruits that have been grown in the US were originally brought there by immigrants. But due to industrialization, disease and climate change, American farmers produce just a fraction of the types available a century ago. Sam Van Aken has hand-grafted heirloom varieties of stone fruit — peaches, plums, apricots, nectarines and cherries — to make the “Tree of 40 Fruit.” What began as an art project to showcase their multi-hued blossoms has become a living archive of rare specimens and their histories; a hands-on (and delicious!) way to teach people about conservation and cultivation; and a vivid symbol of the need for biodiversity in order to ensure food security. Van Aken has created and planted his trees at museums and at people’s homes, and his largest project to date is the 50-tree Open Orchard — which, in total, will possess 200 varieties originated or historically grown in the region — on Governor’s Island in New York City.
    • Fun fact: One hundred years ago, there were over 2,000 varieties of peaches, nearly 2,000 varieties of plums, and nearly 800 named apple varieties grown in the United States.
    • Quote of the talk: “More than just food, embedded in these fruit is our culture. It’s the people who cared for and cultivated them, who valued them so much that they brought them here with them as a connection to their homes, and it’s the way they passed them on and shared them. In many ways, these fruit are our story.”

    Removing his primetime-ready makeup, Lee Thomas shares his personal story of living with vitiligo. He speaks at TED Salon: Trailblazers, in partnership with The Macallan, at the TED World Theater, June 27, 2019, New York, NY. (Photo: Ryan Lash / TED)

    Lee Thomas, broadcast journalist

    • Big idea: Despite having a disease that left him vulnerable to stares in public, Lee Thomas discovered he could respond to ignorance and fear with engagement and dialogue.
      As a news anchor, Lee Thomas used makeup to hide the effects of vitiligo, an autoimmune disorder that left large patches of his skin without pigmentation. But without makeup, he was vulnerable to derision — until he decided to counter misunderstanding with eye contact and conversation. Ultimately, an on-camera story on his condition led him to start a support group and join others in celebrating World Vitiligo Day.
    • Quote of the talk: “Positivity is something worth fighting for — and the fight is not with others, it’s internal. If you want to make positive changes in your life, you have to consistently be positive.”

    TEDWeaving Community: Notes from Session 1 of TEDSummit 2019

    Hosts Bruno Giussani and Helen Walters open Session 1: Weaving Community on July 21, 2019, Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    The stage is set for TEDSummit 2019: A Community Beyond Borders! During the opening session, speakers and performers explored themes of competition, political engagement and longing — and celebrated the TED communities (representing 84 countries) gathered in Edinburgh, Scotland to forge TED’s next chapter.

    The event: TEDSummit 2019, Session 1: Weaving Community, hosted by Bruno Giussani and Helen Walters

    When and where: Sunday, July 21, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

    Speakers: Pico Iyer, Jochen Wegner, Hajer Sharief, Mariana Lin, Carole Cadwalladr, Susan Cain with Min Kym

    Opening: A warm Scottish welcome from raconteur Mackenzie Dalrymple

    Music: Findlay Napier and Gillian Frame performing selections from The Ledger, a series of Scottish folk songs

    The talks in brief:

    “Seeming happiness can stand in the way of true joy even more than misery does,” says writer Pico Iyer. (Photo: Ryan Lash / TED)

    Pico Iyer, novelist and nonfiction author

    Big idea: The opposite of winning isn’t losing; it’s failing to see the larger picture.

    Why? As a child in England, Iyer believed the point of competition was to win, to vanquish one’s opponent. Now, some 50 years later and a resident of Japan, he’s realized that competition can be “more like an act of love.” A few times a week, he plays ping-pong at his local health club. Games are played as doubles, and partners are changed every five minutes. As a result, nobody ends up winning — or losing — for long. Iyer has found liberation and wisdom in this approach. Just as in a choir, he says, “Your only job is to play your small part perfectly, to hit your notes with feeling and by so doing help to create a beautiful harmony that’s much greater than the sum of its parts.”

    Quote of the talk: “Seeming happiness can stand in the way of true joy even more than misery does.”


    Jochen Wegner, journalist and editor of Zeit Online

    Big idea: The spectrum of belief is as multifaceted as humanity itself. As social media segments us according to our interests, and as algorithms deliver us increasingly homogenous content that reinforces our beliefs, we become resistant to any ideas — or even facts — that contradict our worldview. The more we sequester ourselves, the more divided we become. How can we learn to bridge our differences?

    How? Inspired by research showing that one-on-one conversations are a powerful tool for helping people learn to trust each other, Zeit Online built Germany Talks, a “Tinder for politics” that facilitates “political arguments” and face-to-face meetings between users in an attempt to bridge their points of view on issues ranging from immigration to same-sex marriage. With Germany Talks (and now My Country Talks and Europe Talks), Zeit has facilitated conversations between thousands of Europeans from 33 countries.

    Quote of the talk: “What matters here is not the numbers, obviously. What matters here is whenever two people meet to talk in person for hours, without anyone else listening, they change — and so do our societies. They change, little by little, discussion by discussion.”


    “The systems we have nowadays for political decision-making are not from the people for the people — they have been established by the few, for the few,” says activist Hajer Sharief. She speaks at TEDSummit: A Community Beyond Borders, July 21, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    Hajer Sharief, activist and cofounder of the Together We Build It Foundation

    Big Idea: People of all genders, ages, races, beliefs and socioeconomic statuses should participate in politics.

    Why? Hajer Sharief’s native Libya is recovering from 40 years of authoritarian rule and civil war. She sheds light on the way politics are involved in every aspect of life: “By not participating in it, you are literally allowing other people to decide what you can eat, wear, if you can have access to healthcare, free education, how much tax you pay, when can you retire, what is your pension,” she says. “Other people are also deciding whether your race is enough to consider you a criminal, or if your religion or nationality are enough to put you on a terrorist list.” When Sharief was growing up, her family held weekly meetings to discuss family issues, abiding by certain rules to ensure everyone was respectful and felt free to voice their thoughts. She recounts a meeting that went badly for her 10-year-old self, resulting in her boycotting them altogether for many years — until an issue came about that forced her to participate again. Rejoining the meetings was a political assertion, and it helped her realize an important lesson: you are never too young to use your voice — but you need to be present for it to work.

    Quote of talk: “Politics is not only activism — it’s awareness, it’s keeping ourselves informed, it’s caring for facts. When it’s possible, it is casting a vote. Politics is the tool through which we structure ourselves as groups and societies.”


    Mariana Lin, AI character designer and principal writer for Siri

    Big idea: Let’s inject AI personalities with the essence of life: creativity, weirdness, curiosity, fun.

    Why? Tech companies are going in two different directions when it comes to creating AI personas: they’re either building systems that are safe, flat, stripped of quirks and humor — or, worse, they’re building ones that are fully customizable, programmed to say just what you want to hear, just how you like to hear it. While this might sound nice at first, we’re losing part of what makes us human in the process: the friction and discomfort of relating with others, the hard work of building trusting relationships. Mariana Lin calls for tech companies to try harder to truly bring AI to life — in all its messy, complicated, uncomfortable glory. For starters, she says, companies can hire a diverse range of writers, creatives, artists and social thinkers to work on AI teams. If the people creating these personalities are as diverse as the people using it — from poets and philosophers to bankers and beekeepers — then the future of AI looks bright.

    Quote of the talk: “If we do away with the discomfort of relating with others not exactly like us, with views not exactly like ours — we do away with what makes us human.”


    In 2018, Carole Cadwalladr exposed Cambridge Analytica’s attempt to influence the UK Brexit vote and the 2016 US presidential election via personal data on Facebook. She’s still working to sound the alarm. She speaks at TEDSummit: A Community Beyond Borders, July 21, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    Carole Cadwalladr, investigative journalist, interviewed by TED curator Bruno Giussani

    Big idea: Companies that collect and hoard our information, like Facebook, have become unthinkably powerful global players — perhaps more powerful than governments. It’s time for the public to hold them accountable.

    How? Tech companies with offices in different countries must obey the laws of those nations. It’s up to leaders to make sure those laws are enforced — and it’s up to citizens to pressure lawmakers to further tighten protections. Despite legal and personal threats from her adversaries, Carole Cadwalladr continues to explore the ways in which corporations and politicians manipulate data to consolidate their power.

    Quote to remember: “In Britain, Brexit is this thing which is reported on as this British phenomenon, that’s all about what’s happening in Westminster. The fact that actually we are part of something which is happening globally — this rise of populism and authoritarianism — that’s just completely overlooked. These transatlantic links between what is going on in Trump’s America are very, very closely linked to what is going on in Britain.”


    Susan Cain meditates on how the feeling of longing can guide us to a deeper understanding of ourselves, accompanied by Min Kym on violin, at TEDSummit: A Community Beyond Borders. July 21, 2019, Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    Susan Cain, quiet revolutionary, with violinist Min Kym

    Big idea: Life is steeped in sublime magic that you can tap into, opening a whole world filled with passion and delight.

    How? By forgoing constant positivity for a state of mind more exquisite and fleeting — a place where light (joy) and darkness (sorrow) meet, known to us all as longing. Susan Cain weaves her journey in search for the sublime with the splendid sounds of Min Kym on violin, sharing how the feeling of yearning connects us to each other and helps us to better understand what moves us deep down.

    Quote of the talk: “Follow your longing where it’s telling you to go, and may it carry you straight to the beating heart of the perfect and beautiful world.”

    TEDStages of Life: Notes from Session 5 of TEDSummit 2019

    Yilian Cañizares rocks the TED stage with a jubilant performance of her signature blend of classic jazz and Cuban rhythms. She performs at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    The penultimate session of TEDSummit 2019 had a bit of everything — new thoughts on aging, loneliness and happiness as well as breakthrough science, music and even a bit of comedy.

    The event: TEDSummit 2019, Session 5: Stages of Life, hosted by Kelly Stoetzel and Alex Moura

    When and where: Wednesday, July 24, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

    Speakers: Nicola Sturgeon, Sonia Livingstone, Howard Taylor, Sara-Jane Dunn, Fay Bound Alberti, Carl Honoré

    Opening: Raconteur Mackenzie Dalrymple telling the story of the Goodman of Ballengeich

    Music: Yilian Cañizares and her band, rocking the TED stage with a jubilant performance that blends classic jazz and Cuban rhythms

    Comedy: Amidst a head-spinning program of big (and often heavy) ideas, a welcome break from comedian Omid Djalili, who lightens the session with a little self-deprecation and a few barbed cultural observations

    The talks in brief:

    “In the world we live in today, with growing divides and inequalities, with disaffection and alienation, it is more important than ever that we … promote a vision of society that has well-being, not just wealth, at its very heart,” says Nicola Sturgeon, First Minister of Scotland. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    Nicola Sturgeon, First Minister of Scotland

    Big idea: It’s time to challenge the monolithic importance of GDP as a quality-of-life metric — and paint a broader picture that also encompasses well-being.

    How? In 2018, Scotland, Iceland and New Zealand established the Wellbeing Economy Governments group to challenge the supremacy of GDP. The leaders of these countries — who are, incidentally, all women — believe policies that promote happiness (including equal pay, childcare and paternity rights) could help decrease alienation in its citizens and, in turn, build resolve to confront global challenges like inequality and climate change.

    Quote of the talk: “Growth in GDP should not be pursued at any and all cost … The goal of economic policy should be collective well-being: how happy and healthy a population is, not just how wealthy a population is.”


    Sonia Livingstone, social psychologist

    Big idea: Parents often view technology as either a beacon of hope or a developmental poison, but the biggest influence on their children’s life choices is how they help them navigate this unavoidable digital landscape. Society as a whole can positively impact these efforts.

    How? Sonia Livingstone’s own childhood was relatively analog, but her research has been focused on how families embrace new technology today. Changes abound in the past few decades — whether it’s intensified educational pressures, migration, or rising inequality — yet it’s the digital revolution that remains the focus of our collective apprehension. Livingstone’s research suggests that policing screen time isn’t the answer to raising a well-rounded child, especially at a time when parents are trying to live more democratically with their children by sharing decision-making around activities like gaming and exploring the internet. Leaders and institutions alike can support a positive digital future for children by partnering with parents to guide activities within and outside of the home. Instead of criticizing families for their digital activities, Livingstone thinks we should identify what real-world challenges they’re facing, what options are available to them and how we can support them better.

    Quote of the talk: “Screen time advice is causing conflict in the family, and there’s no solid evidence that more screen time increases childhood problems — especially compared with socio-economic or psychological factors. Restricting children breeds resistance, while guiding them builds judgment.”


    Howard Taylor, child safety advocate

    Big idea: Violence against children is an endemic issue worldwide, with rates of reported incidence increasing in some countries. We are at a historical moment that presents us with a unique opportunity to end the epidemic, and some countries are already leading the way.

    How? Howard Taylor draws attention to Sweden and Uganda, two very different countries that share an explicit commitment to ending violence against children. Through high-level political buy-in, data-driven strategy and tactical legislative initiatives, the two countries have already made progress. These solutions and others are all part of INSPIRE, a set of strategies created by an alliance of global organizations as a roadmap to eliminating the problem. If we put in the work, Taylor says, a new normal will emerge: generations whose paths in life will be shaped by what they do — not what was done to them.

    Quote of the talk: “What would it really mean if we actually end violence against children? Multiply the social, cultural and economic benefits of this change by every family, every community, village, town, city and country, and suddenly you have a new normal emerging. A generation would grow up without experiencing violence.”


    “The first half of this century is going to be transformed by a new software revolution: the living software revolution. Its impact will be so enormous that it will make the first software revolution pale in comparison,” says computational biologist Sara-Jane Dunn. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    Sara-Jane Dunn, computational biologist

    Big idea: In the 20th century, computer scientists inscribed machine-readable instructions on tiny silicon chips, completely revolutionizing our lives and workplaces. Today, a “living software” revolution centered around organisms built from programmable cells is poised to transform medicine, agriculture and energy in ways we can scarcely predict.

    How? By studying how embryonic stem cells “decide” to become neurons, lung cells, bone cells or anything else in the body, Sara-Jane Dunn seeks to uncover the biological code that dictates cellular behavior. Using mathematical models, Dunn and her team analyze the expected function of a cellular system to determine the “genetic program” that leads to that result. While they’re still a long way from compiling living software, they’ve taken a crucial early step.

    Quote of the talk: “We are at the beginning of a technological revolution. Understanding this ancient type of biological computation is the critical first step. And if we can realize this, we would enter into the era of an operating system that runs living software.”


    Fay Bound Alberti, cultural historian

    Big idea: We need to recognize the complexity of loneliness and its ever-transforming history. It’s not just an individual and psychological problem — it’s a social and physical one.

    Why? Loneliness is a modern-day epidemic, with a history that’s often recognized solely as a product of the mind. Fay Bound Alberti believes that interpretation is limiting. “We’ve neglected [loneliness’s] physical effects — and loneliness is physical,” she says. She points to how crucial touch, smell, sound, human interaction and even nostalgic memories of sensory experiences are to coping with loneliness, making people feel important, seen and helping to produce endorphins. By reframing our perspective on this feeling of isolation, we can better understand how to heal it.

    Quote of talk: “I am suggesting we need to turn to the physical body, we need to understand the physical and emotional experiences of loneliness to be able to tackle a modern epidemic. After all, it’s through our bodies, our sensory bodies, that we engage with the world.”

    Fun fact: “Before 1800 there was no word for loneliness in the English language. There was something called: ‘oneliness’ and there were ‘lonely places,’ but both simply meant the state of being alone. There was no corresponding emotional lack and no modern state of loneliness.”


    “Whatever age you are: own it — and then go out there and show the world what you can do!” says Carl Honoré. He speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    Carl Honoré, writer, thinker and activist

    Big idea: Stop the lazy thinking around age and the “cult of youth” — it’s not all downhill from 40.

    How? We need to debunk the myths and stereotypes surrounding age — beliefs like “older people can’t learn new things” and “creativity belongs to the young.” There are plenty of trailblazers and changemakers who came into their own later in life, from artists and musicians to physicists and business leaders. Studies show that people who fear and feel bad about aging are more likely to suffer physical effects, as if age were an actual affliction rather than just a number. The first step to getting past that is to create new, more positive societal narratives. Honoré offers a set of simple solutions — the two most important being: check your language and own your age. Embrace aging as an adventure, a process of opening rather than closing doors. We need to feel better about aging in order to age better.

    Quote of the talk: “Whatever age you are: own it — and then go out there and show the world what you can do!”

    TEDAnthropo Impact: Notes from Session 2 of TEDSummit 2019

    Radio Science Orchestra performs the musical odyssey “Prelude, Landing, Legacy” in celebration of the 50th anniversary of the Apollo 11 moon landing at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    Session 2 of TEDSummit 2019 is all about impact: the actions we can take to solve humanity’s toughest challenges. Speakers and performers explore the perils — from melting glaciers to air pollution — along with some potential fixes — like ocean-going seaweed farms and radical proposals for how we can build the future.

    The event: TEDSummit 2019, Session 2: Anthropo Impact, hosted by David Biello and Chee Pearlman

    When and where: Monday, July 22, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

    Speakers: Tshering Tobgay, María Neira, Tim Flannery, Kelly Wanser, Anthony Veneziale, Nicola Jones, Marwa Al-Sabouni, Ma Yansong

    Music: Radio Science Orchestra, performing the musical odyssey “Prelude, Landing, Legacy” in celebration of the 50th anniversary of the Apollo 11 moon landing (and the 100th anniversary of the theremin’s invention)

    … and something completely different: Improv maestro Anthony Veneziale, delivering a made-up-on-the-spot TED Talk based on a deck of slides he’d never seen and an audience-suggested topic: “the power of potatoes.” The result was … surprisingly profound.

    The talks in brief:

    Tshering Tobgay, politician, environmentalist and former Prime Minister of Bhutan

    Big idea: We must save the Hindu Kush Himalayan glaciers from melting — or else face dire, irreversible consequences for one-fifth of the global population.

    Why? The Hindu Kush Himalayan glaciers are the pulse of the planet: their rivers alone supply water to 1.6 billion people, and their melting would massively impact the 240 million people across eight countries within their reach. Think in extremes — more intense rains, flash floods and landslides, along with unimaginable destruction and millions of climate refugees. Tshering Tobgay telegraphs the future we’re headed towards unless we act fast, calling for a new intergovernmental agency: the Third Pole Council. This council would be tasked with monitoring the glaciers’ health, implementing policies to protect them and, by proxy, the billions who depend on them.

    Fun fact: The Hindu Kush Himalayan glaciers are the world’s third-largest repository of ice (after the North and South poles). They’re known as the “Third Pole” and the “Water Towers of Asia.”


    Air pollution isn’t just bad for the environment — it’s also bad for our brains, says María Neira. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    María Neira, public health leader

    Big idea: Air pollution isn’t just bad for our lungs — it’s bad for our brains, too.

    Why? Globally, poor air quality causes seven million premature deaths per year. And all this pollution isn’t just affecting our lungs, says María Neira. An emerging field of research is shedding a light on the link between air pollution and our central nervous systems. The fine particulate matter in air pollution travels through our bloodstreams to our major organs, including the brain — which can slow down neurological development in kids and speed up cognitive decline in adults. In short: air pollution is making us less intelligent. We all have a role to play in curbing air pollution — and we can start by reducing traffic in cities, investing in clean energy and changing the way we consume.

    Quote of the talk: “We need to exercise our rights and put pressure on politicians to make sure they will tackle the causes of air pollution. This is the first thing we need to do to protect our health and our beautiful brains.”


    Tim Flannery, environmentalist, explorer and professor

    Big idea: Seaweed could help us draw down atmospheric carbon and curb global warming.

    How? You know the story: the blanket of CO2 above our heads is driving adverse climate changes and will continue to do so until we get it out of the air (a process known as “drawdown”). Tim Flannery thinks seaweed could help: it grows fast, is made of productive, photosynthetic tissue and, when sunk more than a kilometer deep into the ocean, can lock up carbon long-term. If we cover nine percent of the ocean surface in seaweed farms, for instance, we could sequester the same amount of CO2 we currently put into the atmosphere. There’s still a lot to figure out, Flannery notes — like how growing seaweed at scale on the ocean surface will affect biodiversity down below — but the drawdown potential is too great to allow uncertainty to stymie progress.

    Fun fact: Seaweed is the most ancient multicellular life known, with more genetic diversity than all other multicellular life combined.


    Could cloud brightening help curb global warming? Kelly Wanser speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    Kelly Wanser, geoengineering expert and executive director of SilverLining

    Big idea: The practice of cloud brightening — seeding clouds with sea salt or other particulates to reflect sunshine back into space — could partially offset global warming, giving us crucial time while we figure out game-changing, long-term solutions.

    How? Starting in 2020, new global regulations will require ships to cut emissions by 85 percent. This is a good thing, right? Not entirely, says Kelly Wanser. It turns out that when particulate emissions (like those from ships) mix with clouds, they make the clouds brighter — enabling them to reflect sunshine into space and temporarily cool our climate. (Think of it as the ibuprofen for our fevered climate.) Wanser’s team and others are coming up with experiments to see if “cloud brightening” proves safe and effective; some scientists believe increasing the atmosphere’s reflectivity by one or two percent could offset the two degrees Celsius of warming forecast for Earth. As with other climate interventions, there’s much yet to learn, but the potential benefits make those efforts worth it.

    An encouraging fact: The global community has rallied to pull off this kind of atmospheric intervention in the past, with the Montreal Protocol, which took effect in 1989.


    Nicola Jones, science journalist

    Big idea: Noise in our oceans — from boat motors to seismic surveys — is an acute threat to underwater life. Unless we quiet down, we will irreparably damage marine ecosystems and may even drive some species to extinction.

    How? We usually think of noise pollution as a problem in big cities on dry land. But ocean noise may be the culprit behind marine disruptions like whale strandings, fish kills and drops in plankton populations. Fortunately, compared to other climate change solutions, it’s relatively quick and easy to dial down our noise levels and keep our oceans quiet. Better ship propeller design, speed limits near harbors and quieter methods for oil and gas prospecting will all help humans restore peace and quiet to our neighbors in the sea.

    Quote of the talk: “Sonar can be as loud as, or nearly as loud as, an underwater volcano. A supertanker can be as loud as the call of a blue whale.”


    TED curator Chee Pearlman (left) speaks with architect Marwa Al-Sabouni at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    Marwa Al-Sabouni, architect, interviewed by TED curator Chee Pearlman

    Big idea: Architecture can exacerbate the social disruptions that lead to armed conflict.

    How? Since the time of the French Mandate, officials in Syria have shrunk the communal spaces that traditionally united citizens of varying backgrounds. This contributed to a sense of alienation and rootlessness — a volatile cocktail that built conditions for unrest and, eventually, war. Marwa Al-Sabouni, a resident of Homs, Syria, saw firsthand how this unraveled social fabric helped reduce the city to rubble during the civil war. Now, she’s taking part in the city’s slow reconstruction — conducted by citizens with little or no government aid. As she explains in her book The Battle for Home, architects have the power (and the responsibility) to connect a city’s residents to a shared urban identity, rather than to opposing sectarian groups.

    Quote of the talk: “Syria had a very unfortunate destiny, but it should be a lesson for the rest of the world: to take notice of how our cities are making us very alienated from each other, and from the place we used to call home.”


    “Architecture is no longer a function or a machine for living. It also reflects the nature around us. It also reflects our soul and the spirit,” says Ma Yansong. He speaks at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    Ma Yansong, architect and artist

    Big Idea: By creating architecture that blends with nature, we can break free from the “matchbox” sameness of many city buildings.

    How? Ma Yansong paints a vivid image of what happens when nature collides with architecture — from a pair of curvy skyscrapers that “dance” with each other to buildings that burst out of a village’s mountains like contour lines. Ma embraces the shapes of nature — which never repeat themselves, he notes — and the randomness of hand-sketched designs, creating a kind of “emotional scenery.” When we think beyond the boxy geometry of modern cities, he says, the results can be breathtaking.

    Quote of the talk: “Architecture is no longer a function or a machine for living. It also reflects the nature around us. It also reflects our soul and the spirit.”

    TEDThe Big Rethink: Notes from Session 3 of TEDSummit 2019

    Marco Tempest and his quadcopters perform a mind-bending display that feels equal parts science and magic at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    In an incredible session, speakers and performers laid out the biggest problems facing the world — from political and economic catastrophe to rising violence and deepfakes — and some new thinking on solutions.

    The event: TEDSummit 2019, Session 3: The Big Rethink, hosted by Corey Hajim and Cyndi Stivers

    When and where: Tuesday, July 23, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

    Speakers: George Monbiot, Nick Hanauer, Raghuram Rajan, Marco Tempest, Rachel Kleinfeld, Danielle Citron, Patrick Chappatte

    Music: KT Tunstall sharing how she found her signature sound and playing her hits “Miniature Disasters,” “Black Horse and the Cherry Tree” and “Suddenly I See.”

    The talks in brief:

    “We are a society of altruists, but we are governed by psychopaths,” says George Monbiot. He speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    George Monbiot, investigative journalist and self-described “professional troublemaker”

    Big idea: To get out of the political mess we’re in, we need a new story that captures the minds of people across fault lines.

    Why? “Welcome to neoliberalism, the zombie doctrine that never seems to die,” says George Monbiot. We have been induced by politicians and economists into accepting an ideology of extreme competition and individualism, weakening the social bonds that make our lives worth living. And despite the 2008 financial crisis, which exposed the blatant shortcomings of neoliberalism, it still dominates our lives. Why? We haven’t yet produced a new story to replace it — a new narrative to help us make sense of the present and guide the future. So, Monbiot proposes his own: the “politics of belonging,” founded on the belief that most people are fundamentally altruistic, empathetic and socially minded. If we can tap into our fundamental urge to cooperate — namely, by building generous, inclusive communities around the shared sphere of the commons — we can build a better world. With a new story to light the way, we just might make it there.

    Quote of the talk: “We are a society of altruists, but we are governed by psychopaths.”


    Nick Hanauer, entrepreneur and venture capitalist

    Big idea: Economics has ceased to be a rational science in the service of the “greater good” of society. It’s time to ditch neoliberal economics and create tools that address inequality and injustice.

    How? Today, under the banner of unfettered growth through lower taxes, fewer regulations, and lower wages, economics has become a tool that enforces the growing gap between the rich and poor. Nick Hanauer thinks that we must recognize that our society functions not because it’s a ruthless competition between its economically fittest members but because cooperation between people and institutions produces innovation. Competition shouldn’t be between the powerful at the expense of everyone else but between ideas battling it out in a well-managed marketplace in which everyone can participate.

    Quote of the talk: “Successful economies are not jungles, they’re gardens — which is to say that markets, like gardens, must be tended … Unconstrained by social norms or democratic regulation, markets inevitably create more problems than they solve.”


    Raghuram Rajan shares his idea for “inclusive localism” — giving communities the tools to turn themselves around while establishing standards to prevent discrimination and corruption — at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    Raghuram Rajan, economist and former Governor of the Reserve Bank of India

    Big idea: As markets grow and governments focus on solving economic problems from the top-down, small communities and neighborhoods are losing their voices — and their livelihoods. But if nations lack the tools to address local problems, it’s time to turn to grass-roots communities for solutions.

    How? Raghuram Rajan believes that nations must exercise “inclusive localism”: giving communities the tools to turn themselves around while establishing standards to prevent discrimination and corruption. As local leaders step forward, citizens become active and communities receive needed resources from philanthropists and through economic incentives, neighborhoods will thrive and rebuild their social fabric.

    Quote of the talk: “What we really need [are] bottom-up policies devised by the community itself to repair the links between the local community and the national — as well as thriving international — economies.”


    Marco Tempest, cyber illusionist

    Big idea: Illusions that set our imaginations soaring are created when magic and science come together.

    Why? “Is it possible to create illusions in a world where technology makes anything possible?” asks techno-magician Marco Tempest, as he interacts with his group of small flying machines called quadcopters. The drones dance around him, reacting buoyantly to his gestures and making it easy to anthropomorphize them and attribute personality traits to them. Tempest’s buzzing buddies swerve, hover and pause, moving in formation as he orchestrates them. His mind-bending display will have you asking yourself: Was that science or magic? Maybe it’s both.

    Quote to remember: “Magicians are interesting, their illusions accomplish what technology cannot, but what happens when the technology of today seems almost magical?”


    Rachel Kleinfeld, democracy advisor and author

    Big idea: It’s possible to quell violence — in the wider world and in our own backyards — with democracy and a lot of political TLC.

    How? Compassion-concentrated action. We need to dispel the idea that some people deserve violence because of where they live, the communities they’re a part of or their socio-economic background. Kleinfeld calls this particular, inequality-based vein of violence “privilege violence,” explaining how it evolves in stages and the ways we can eradicate it. By deprogramming how we view violence and its origins and victims, we can move forward and build safer, more secure societies.

    Quote of the talk: “The most important thing we can do is abandon the notion that some lives are just worth less than others.”


    “Not only do we believe fakes, we are starting to doubt the truth,” says Danielle Citron, revealing the threat deepfakes pose to the truth and democracy. She speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    Danielle Citron, professor of law and deepfake scholar

    Big idea: Deepfakes — machine learning technology used to manipulate or fabricate audio and video content — can cause significant harm to individuals and society. We need a comprehensive legislative and educational approach to the problem.

    How? The use of deepfake technology to manipulate video and audio for malicious purposes — whether it’s to stoke violence against minorities or to defame politicians and journalists — is becoming ubiquitous. With tools being made more accessible and their products more realistic, what becomes of that key ingredient for democratic processes: the truth? As Danielle Citron points out, “Not only do we believe fakes, we are starting to doubt the truth.” The fix, she suggests, cannot be merely technological. Legislation worldwide must be tailored to fighting digital impersonations that invade privacy and ruin lives. Educational initiatives are needed to teach the media how to identify fakes, persuade law enforcement that the perpetrators are worth prosecuting and convince the public at large that the future of democracy really is at stake.

    Quote of the talk: “Technologists expect that advances in AI will soon make it impossible to distinguish a fake video and a real one. How can truths emerge in a deepfake ridden ‘marketplace of ideas?’ Will we take the path of least resistance and just believe what we want to believe, truth be damned?”


    “Freedom of expression is not incompatible with dialogue and listening to each other, but it is incompatible with intolerance,” says editorial cartoonist Patrick Chappatte. He speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    Patrick Chappatte, editorial cartoonist and graphic journalist

    Big idea: We need humor like we need the air we breathe. We shouldn’t risk compromising our freedom of speech by censoring ourselves in the name of political correctness.

    How? Our social media-saturated world is both a blessing and a curse for political cartoonists like Patrick Chappatte, whose satirical work can go viral while also making them, and the publications they work for, a target. Be it a prison sentence, firing or the outright dissolution of cartoon features in newspapers, editorial cartoonists worldwide are increasingly penalized for their art. Chappatte emphasizes the importance of the art form in political discourse by guiding us through 20 years of editorial cartoons that are equal parts humorous and caustic. In an age where social media platforms often provide places for fury instead of debate, he suggests that traditional media shouldn’t shy away from these online kingdoms, and neither should we. Now is the time to resist preventative self-censorship; if we don’t, we risk waking up in a sanitized world without freedom of expression.

    Quote of the talk: “Freedom of expression is not incompatible with dialogue and listening to each other, but it is incompatible with intolerance.”

    TEDBusiness Unusual: Notes from Session 4 of TEDSummit 2019

    ELEW and Marcus Miller blend jazz improvisation with rock in a musical cocktail of “rock-jazz.” They perform at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

    To keep pace with our ever-changing world, we need out-of-the-box ideas that are bigger and more imaginative than ever. The speakers and performers from this session explore these possibilities, challenging us to think harder about the notions we’ve come to accept.

    The event: TEDSummit 2019, Session 4: Business Unusual, hosted by Whitney Pennington Rodgers and Cloe Shasha

    When and where: Wednesday, July 24, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

    Speakers: Margaret Heffernan, Bob Langert, Rose Mutiso, Mariana Mazzucato, Diego Prilusky

    Music: A virtuosic violin performance by Min Kym, and a closing performance by ELEW featuring Marcus Miller, blending jazz improvisation with rock in a musical cocktail of “rock-jazz.”

    The talks in brief:

    “The more we let machines think for us, the less we can think for ourselves,” says Margaret Heffernan. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

    Margaret Heffernan, entrepreneur, former CEO and writer 

    Big idea: The more we rely on technology to make us efficient, the fewer skills we have to confront the unexpected. That’s why we must start practicing “just-in-case” management — anticipating the events (climate catastrophes, epidemics, financial crises) that will almost certainly happen but are ambiguous in timing, scale and specifics. 

    Why? In our complex, unpredictable world, changes can occur out of the blue and have outsize impacts. When governments, businesses and individuals prioritize efficiency above all else, it keeps them from responding quickly, effectively and creatively. That’s why we all need to focus on cultivating what Heffernan calls our “unpredictable, messy human skills.” These include exercising our social abilities to build strong relationships and coalitions; humility to admit we don’t have all the answers; imagination to dream up never-before-seen solutions; and bravery to keep experimenting.

    Quote of the talk: “The harder, deeper truth is that the future is uncharted, that we can’t map it until we get there. But that’s OK because we have so much capacity for imagination — if we use it. We have deep talents for inventiveness and exploration — if we apply them. We are brave enough to invent things we’ve never seen before. Lose these skills and we are adrift. But hone and develop them, and we can make any future we choose.”


    Bob Langert, sustainability expert and VP of sustainability at McDonald’s

    Big idea: Adversaries can be your best allies.

    How? Three simple steps: reach out, listen and learn. As a “corporate suit” (his words), Bob Langert collaborates with his company’s strongest critics to find business-friendly solutions for society. Instead of denying and pushing back, he tries to embrace their perspectives and suggestions. He encourages others in positions of power to do the same, driven by this mindset: assume the best intentions of your critics; focus on the truth, the science and facts; and be open and transparent in order to turn critics into allies. The worst-case scenario? You’ll become better, your organization will become better — and you might make some friends along the way.

    Fun fact: After working with NGOs in the 1990s, McDonald’s eliminated 300 million pounds of waste over 10 years.


    “When we talk about providing energy for growth, it is not just about innovating the technology: it’s the slow and hard work of improving governance, institutions and a broader macro-environment,” says Rose Mutiso. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

    Rose Mutiso, energy scientist

    Big Idea: In order to grow out of poverty, African countries need a steady supply of abundant and affordable electricity.

    Why? Energy poverty, or the lack of access to electricity and other basic energy services, affects nearly two-thirds of Sub-Saharan Africa. As the region’s population continues to grow, we have the opportunity to build a new energy system — from scratch — to grow with it, says Rose Mutiso. It starts with naming the systemic holes that current solutions (solar, LED and battery technology) overlook: we don’t have a clear consensus on what energy poverty is; there’s too much reliance on quick fixes; and we’re misdirecting our climate change concerns. What we need, Mutiso says, are nuanced, large-scale solutions with a diverse range of energy sources. For instance, the region has significant hydroelectric potential, yet less than 10 percent of it is currently being utilized. If we work hard to find new solutions to our energy deficits now, everybody benefits.

    Quote of the talk: “Countries cannot grow out of poverty without access to a steady supply of abundant, affordable and reliable energy to power these productive sectors — what I call energy for growth.”


    Mariana Mazzucato, economist and policy influencer

    Big idea: We’ve forgotten how to tell the difference between the value extractors in the C-suites and finance sectors and the value producers, the workers and taxpayers who actually fuel innovation and productivity. Lately, we’ve neglected even to question the difference between the two.

    How? Economists must redefine and recognize true value creators, envisioning a system that rewards them just as much as CEOs, investors and bankers. We need to rethink how we value education, childcare and other “free” services — which don’t have a price but clearly contribute to sustaining our economies. We need to make sure that our entire society not only shares risks but also rewards.

    Quote of the talk: “[During the bank bailouts] we didn’t hear the taxpayers bragging that they were value creators. But, obviously, having bailed out the biggest ‘value-creating’ productive companies, perhaps they should have.”


    Diego Prilusky demos his immersive storytelling technology, bringing Grease to the TED stage. He speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    Diego Prilusky, video pioneer

    Big idea: Get ready for the next revolution in visual storytelling: volumetric video, which aims to do nothing less than recreate reality as a cinematic experience.

    How? Movies have been around for more than 100 years, but we’re still making (and watching) them in basically the same way. Can movies exist beyond the flat screen? Yes, says Diego Prilusky, but we’ll first need to completely rethink how they’re made. With his team at Intel Studios, Prilusky is pioneering volumetric video, a data-intensive medium powered by hundreds of sensors that capture light and motion from every possible direction. The result is like being inside a movie, which you could explore from different perspectives (or even through a character’s own eyes). In a live tech demo, Prilusky takes us inside a reshoot of an iconic dance number from the 1978 hit Grease. As actors twirl and sing “You’re the One That I Want,” he positions and repositions his perspective on the scene — moving around, in front of and in between the performers. Film buffs can rest easy, though: the aim isn’t to replace traditional movies, he says, but to empower creators to tell stories in new ways, across multiple vantage points.

    Quote of the talk: “We’re opening the gates for new possibilities of immersive storytelling.”

    TEDNot All Is Broken: Notes from Session 6 of TEDSummit 2019

    Raconteur Mackenzie Dalrymple regales the TEDSummit audience with a classic Scottish story. He speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    In the final session of TEDSummit 2019, the themes from the week — our search for belonging and community, our digital future, our inextricable connection to the environment — ring out with clarity and insight. From the mysterious ways our emotions impact our biological hearts, to a tour-de-force talk on the languages we all speak, it’s a fitting close to a week of revelation, laughter, tears and wonder.

    The event: TEDSummit 2019, Session 6: Not All Is Broken, hosted by Chris Anderson and Bruno Giussani

    When and where: Thursday, July 25, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

    Speakers: Johann Hari, Sandeep Jauhar, Anna Piperal, Eli Pariser, Poet Ali

    Interlude: Mackenzie Dalrymple sharing the tale of an uncle and nephew competing to become Lord of the Isles

    Music: Djazia Satour, blending 1950s Chaabi (a genre of North African folk music) with modern grooves

    The talks in brief:

    Johann Hari, journalist

    Big idea: The cultural narrative and definitions of depression and anxiety need to change.

    Why? We need to talk less about chemical imbalances and more about imbalances in the way we live. Johann Hari met with experts around the world, boiling down his research into a surprisingly simple thesis: all humans have physical needs (food, shelter, water) as well as psychological needs (feeling that you belong, that your life has meaning and purpose). Though antidepressant drugs work for some, biology isn’t the whole picture, and any treatment must be paired with a social approach. Our best bet is to listen to the signals of our bodies, instead of dismissing them as signs of weakness or madness. If we take time to investigate our red flags of depression and anxiety — and take the time to reevaluate how we build meaning and purpose, especially through social connections — we can start to heal in a society deemed the loneliest in human history.

    Quote of the talk: “If you’re depressed, if you’re anxious — you’re not weak. You’re not crazy. You’re not a machine with broken parts. You’re a human being with unmet needs.”


    “Even if emotions are not contained inside our hearts, the emotional heart overlaps its biological counterpart in surprising and mysterious ways,” says cardiologist Sandeep Jauhar. He speaks at TEDSummit: A Community Beyond Borders, July 21-25, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    Sandeep Jauhar, cardiologist

    Big Idea: Emotional stress can be a matter of life and death. Let’s factor that into how we care for our hearts.

    How? “The heart may not originate our feelings, but it is highly responsive to them,” says Sandeep Jauhar. In his practice as a cardiologist, he has seen extensive evidence of this: grief and fear can cause profound cardiac injury. “Takotsubo cardiomyopathy,” or broken heart syndrome, has been found to occur when the heart weakens after the death of a loved one or the stress of a large-scale natural disaster. It comes with none of the other usual symptoms of heart disease, and it can resolve in just a few weeks. But it can also prove fatal. In response, Jauhar says that we need a new paradigm of care, one that considers the heart as more than “a machine that can be manipulated and controlled” — and recognizes that emotional stress is as important as cholesterol.

    Quote of the talk: “Even if emotions are not contained inside our hearts, the emotional heart overlaps its biological counterpart in surprising and mysterious ways.”


    “In most countries, people don’t trust their governments, and the governments don’t trust them back. All the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated,” says e-governance expert Anna Piperal. She speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

    Anna Piperal, e-governance expert 

    Big idea: Bureaucracy can be eradicated by going digital — but we’ll need to build in commitment and trust.

    How? Estonia is one of the most digital societies on earth. After gaining independence 30 years ago, and subsequently building itself up from scratch, the country decided not only to digitize existing bureaucracy but also to create an entirely new system. Now citizens can conduct everything online, from running a business to voting and managing their healthcare records, and only need to show up in person for literally three things: to claim their identity card, marry or divorce, or sell a property. Anna Piperal explains how, using a form of blockchain technology, e-Estonia builds trust through the “once-only” principle, through which the state cannot ask for information more than once nor store it in more than one place. The country is working to redefine bureaucracy by making it more efficient, granting citizens full ownership of their data — and serving as a model for the rest of the world to do the same.

    Quote of the talk: “In most countries, people don’t trust their governments, and the governments don’t trust them back. All the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated.”


    Eli Pariser, CEO of Upworthy

    Big idea: We can find ways to make our online spaces civil and safe, much like our best cities.

    How? Social media is a chaotic and sometimes dangerous place. With its trolls, criminals and segregated spaces, it’s a lot like New York City in the 1970s. But like New York City, it’s also a vibrant space in which people can innovate and find new ideas. So Eli Pariser asks: What if we design social media like we design cities, taking cues from social scientists and urban planners like Jane Jacobs? Built around empowered communities, one-on-one interactions and public censure for those who act out, platforms could encourage trust and discourse, discourage antisocial behavior and diminish the sense of chaos that leads some to embrace authoritarianism.

    Quote of the talk: “If online digital spaces are going to be our new home, let’s make them a comfortable, beautiful place to live — a place we all feel not just included, but actually some ownership of. A place we get to know each other. A place you’d actually want not just to visit, but to bring your kids.”


    “Every language we learn is a portal by which we can access another language. The more you know, the more you can speak. … That’s why languages are so important, because they give us access to new worlds,” says Poet Ali. He speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

    Poet Ali, architect of human connection

    Big idea: You speak far more languages than you realize, with each language representing a gateway to understanding different societies, cultures and experiences.

    How? Whether it’s the recognized tongue of your country or profession, or the social norms of your community, every “language” you speak is more than a lexicon of words: it also encompasses feelings like laughter, solidarity, even a sense of being left out. These latter languages are universal, and the more we embrace their commonality — and acknowledge our fluency in them — the more we can empathize with our fellow humans, regardless of our differences.

    Quote of the talk: “Every language we learn is a portal by which we can access another language. The more you know, the more you can speak. … That’s why languages are so important, because they give us access to new worlds.”

    TEDBorder Stories: A night of talks on immigration, justice and freedom

    Hosts Anne Milgram and Juan Enriquez kick off the evening at TEDSalon: Border Stories at the TED World Theater in New York City on September 10, 2019. (Photo: Ryan Lash / TED)

    Immigration can be a deeply polarizing topic. But at heart, immigration policies and practices reflect no less than our attitude towards humanity. At TEDSalon: Border Stories, we explored the reality of life at the US-Mexico border, the history of the US immigration policy and possible solutions for reform — and investigated what’s truly at stake.

    The event: TEDSalon: Border Stories, hosted by criminal justice reformer Anne Milgram and author and academic Juan Enriquez

    When and where: Tuesday, September 10, 2019, at the TED World Theater in New York City

    Speakers: Paul A. Kramer, Luis H. Zayas, Erika Pinheiro, David J. Bier and Will Hurd

    Music: From Morley and Martha Redbone

    A special performance: Poet and thinker Maria Popova, reading an excerpt from her book Figuring. A stunning meditation on “the illusion of separateness, of otherness” — and on “the infinitely many kinds of beautiful lives” that inhabit this universe — accompanied by cellist Dave Eggar and guitarist Chris Bruce.

    “There are infinitely many kinds of beautiful lives,” says Maria Popova, reading a selection of her work at TEDSalon: Border Stories. (Photo: Ryan Lash / TED)

    The talks in brief:

    Paul A. Kramer, historian, writer, professor of history

    • Big idea: It’s time we made the immigration conversation reflect how the world really works.
    • How? We must rid ourselves of the outdated questions, born from nativist and nationalist sentiments, that have permeated the immigration debate for centuries: interrogations of usefulness and assimilation, of parasitic rhetoric aimed at dismantling any positive discussions around immigration. What gives these damaging queries traction and power, Kramer says, is how they tap into a seemingly harmless sense of national belonging — and ultimately activate, heighten and inflame it. Kramer maps out a way for us to redraw those mental, societal and political borders and give immigrants access to the rights and resources that their work, activism and home countries have already played a fundamental role in creating.
    • Quote of the talk: “[We need] to redraw the boundaries of who counts — whose life, whose rights and whose thriving matters. We need to redraw … the borders of us.”

    Luis H. Zayas, social worker, psychologist, researcher

    • Big idea: Asylum seekers — especially children — face traumatizing conditions at the US-Mexico border. We need compassionate, humane practices that give them the care they need during arduous times.
    • Why? Under prolonged and intense stress, the young developing brain is harmed — plain and simple, says Luis H. Zayas. He details the distressing conditions immigrant families face on their way to the US, which have only escalated since children started being separated from their parents and held in detention centers. He urges the US to reframe its practices, replacing hostility and fear with safety and compassion. For instance: the US could open processing centers, where immigrants can find the support they need to start a new life. These facilities would be community-oriented, offering medical care, social support and the fundamental human right to respectful and dignified treatment.
    • Quote of the talk: “I hope we can agree on one thing: that none of us wants to look back at this moment in our history when we knew we were inflicting lifelong trauma on children, and that we sat back and did nothing. That would be the greatest tragedy of all.”

    Immigration lawyer Erika Pinheiro discusses the hidden realities of the US immigration system. “Seeing these horrors day in and day out has changed me,” she says. (Photo: Ryan Lash / TED)

    Erika Pinheiro, nonprofit litigation and policy director

    • Big idea: The current US administration’s mass separations of asylum-seeking families at the Mexican border shocked the conscience of the world — and the cruel realities of the immigration system have only gotten worse. We need a legal and social reckoning.
    • How? US immigration laws are broken, says Erika Pinheiro. Since 2017, US attorneys general have made sweeping changes to asylum law to ensure fewer people qualify for protection in the US. This includes all types of people fleeing persecution: Venezuelan activists, Russian dissidents, Chinese Muslims, climate change refugees — the list goes on. The US has simultaneously created a parallel legal system where migrants are detained indefinitely, often without access to legal help. Pinheiro issues a call to action: if you are against the cruel and inhumane treatment of migrants, then you need to get involved. You need to demand that your lawmakers expand the definition of refugees and amend laws to ensure immigrants have access to counsel and independent courts. Failing to act now threatens the inherent dignity of all humans.
    • Quote of the talk: “History shows us that the first population to be vilified and stripped of their rights is rarely the last.”

    David J. Bier, immigration policy analyst

    • Big idea: We can solve the border crisis in a humane fashion. In fact, we’ve done so before.
    • How? Most migrants who travel illegally from Central America to the US do so because they have no way to enter the US legally. When these immigrants are caught, they find themselves in the grips of a cruel system of incarceration and dehumanization — but is inhumane treatment really necessary to protect our borders? Bier points us to the example of Mexican guest worker programs, which allow immigrants to cross borders and work the jobs they need to support their families. As legal opportunities to cross the border have increased, the number of illegal Mexican immigrants seized at the border has plummeted 98 percent. If we were to extend guest worker programs to Central Americans as well, Bier says, we could see a similar drop in the numbers of illegal immigrants.
    • Quote of the talk: “This belief that the only way to maintain order is with inhumane means is inaccurate — and, in fact, the opposite is true. Only a humane system will create order at the border.”

    “Building a 30-foot-high concrete structure from sea to shining sea is the most expensive and least effective way to do border security,” says Congressman Will Hurd in a video interview with Anne Milgram at TEDSalon: Border Stories. (Photo: Ryan Lash / TED)

    Will Hurd, US Representative for Texas’s 23rd congressional district

    • Big idea: Walls won’t solve our problems.
    • Why? Representing a massive district that encompasses 29 counties and two time zones and shares an 820-mile border with Mexico, Republican Congressman Will Hurd has a frontline perspective on illegal immigration in Texas. Legal immigration options and modernizing the Border Patrol (which still measures its response times to border incidents in hours and days) will be what ultimately stems the tide of illegal border crossings, Hurd says. Instead of investing in walls and separating families, the US should invest in its own defense forces — and, on the other side of the border, work to alleviate poverty and violence in Central American countries.
    • Quote of the talk: “When you’re debating your strategy, if somebody comes up with the idea of snatching a child out of their mother’s arms, you need to go back to the drawing board. This is not what the United States of America stands for. This is not a Republican or a Democrat or an Independent thing. This is a human decency thing.”

    Juan Enriquez, author and academic

    • Big idea: If the US continues to divide groups of people into “us” and “them,” we open the door to inhumanity and atrocity — and not just at our borders.
    • How? Countries that survive and grow as the years go by are compassionate, kind, smart and brave; countries that don’t survive govern by cruelty and fear, says Juan Enriquez. In a personal talk, he calls on us to realize that deportation, imprisonment and dehumanization aren’t isolated phenomena directed at people crossing the border illegally; they are happening to people who live and work by our sides, in our communities. Now is the time to stand up and do something to stop our country’s slide into fear and division — whether it’s engaging in small acts of humanity, loud protests in the streets or activism directed at enacting legislative or policy changes.
    • Quote of the talk: “This is how you wipe out an economy. This isn’t about kids and borders, it’s about us. This is about who we are, who we the people are, as a nation and as individuals. This is not an abstract debate.”

    TEDTransform: The talks of TED@DuPont

    Hosts Briar Goldberg and David Biello open TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

    Transformation starts with the spark of something new. In a day of talks and performances about transformation, 16 speakers and performers explored exciting developments in science, technology and beyond — from the chemistry of everyday life to innovations in food, “smart” clothing, enzyme research and much more.

    The event: TED@DuPont: Transform, hosted by TED’s David Biello and Briar Goldberg

    When and where: Thursday, September 12, 2019, at The Fillmore in Philadelphia, PA

    Music: Performances by Elliah Heifetz and Jane Bruce and Jeff Taylor, Matt Johnson and Jesske Hume

    The talks in brief:

    “The next time you send a text or take a selfie, think about all those atoms that are hard at work and the innovation that came before them,” says chemist Cathy Mulzer. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

    Cathy Mulzer, chemist and tech shrinker

    Big idea: You owe a big thank you to chemistry for all that technology in your pocket.

    Why? Almost every component that goes into creating a superpowered device like a smartphone or tablet exists because of a chemist — not the Silicon Valley entrepreneurs that come to most people’s minds. Chemistry is the real hero in our technological lives, Mulzer says — building up and shrinking down everything from vivid display screens and sleek bodies to nano-sized circuitries and long-lasting batteries.

    Quote of the talk: “The next time you send a text or take a selfie, think about all those atoms that are hard at work and the innovation that came before them.”


    Adam Garske, enzyme engineer

    Big Idea: We can harness the power of new, scientifically modified enzymes to solve urgent problems across the world.

    How? Enzymes are proteins that catalyze chemical reactions — turning milk into cheese, for example. Through a process called “directed evolution,” scientists can carefully edit and design the building blocks of enzymes for specific functions — to help treat diseases like diabetes, reduce CO2 in our laundry, break down plastics in the ocean and more. Enzyme evolution is already changing how we tackle health and environmental issues, Garske says, and there’s so much more ahead.

    Quote of the talk: “With enzymes, we can edit what nature wrote — or write our own stories.”


    Henna-Maria Uusitupa, bioscientist

    Big idea: Our bodies host an entire ecosystem of microorganisms that we’ve been cultivating since we were babies. And as it turns out, the bacteria we acquire as infants help keep us healthier as adults. Henna-Maria Uusitupa wants to ensure that every baby grows a healthy microbiome.

    How? Babies must acquire the right balance of microbes in their bodies, but they must also receive them at the correct stages of their lives. C-sections and disruptions in breastfeeding can throw a baby’s microbiome out of balance. With a carefully curated blend of probiotics and other chemicals, scientists are devising ways to restore harmony — and beneficial microbes — to young bodies.

    Quote of the talk: “I want to contribute to the unfolding of a future in which each baby has an equal starting point to be programmed for life-long health.”


    Leon Marchal, innovation director 

    Big Idea: Animals account for 50 to 80 percent of antibiotic consumption worldwide — a major contributing factor to the growing threat of antimicrobial resistance. To combat this, farmers can adopt a number of practices — like balanced, antibiotic-free nutrition for animals — on their farms.

    Why? The UN predicts that antimicrobial resistance will become our biggest killer by 2050. To prevent that from happening, Marchal is working to transform a massive global industry: animal feed. Antibiotics are used in animal feed to keep animals healthy and to grow them faster and bigger. They can be found in the most unlikely places — like the treats we give our pets. This constant, low-dose exposure could lead some animals to develop antibiotic-resistant bugs, which could cause wide-ranging health problems for animals and humans alike. The solution? Antibiotic-free production — and it all starts with better hygiene. This means taking care of animals’ good bacteria with balanced nutrition and alterations to the food they eat, to keep their microbiomes more resilient.

    Quote of the talk: “We have the knowledge on how to produce meat, eggs and milk without or with very low amounts of antibiotics. This is a small price to pay to avoid a future in which bacterial infections again become our biggest killer.”


    Physical organic chemist Tina Arrowood shares a simple, eco-friendly proposal to protect our freshwater resources from future pollution. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

    Tina Arrowood, physical organic chemist

    Big idea: Human activity is a threat to freshwater rivers. We can transform that risk into an environmental and economic reward.

    How? A simple, eco-friendly proposal to protect our precious freshwater resources from future pollution. We’ve had technology that purifies industrial wastewaters for the last 50 years. Arrowood suggests that we go a step further: as we clean our rivers, we can sell the salt byproduct as a primary resource — to de-ice roads and for other chemical processing — rather than using the tons of salt we currently mine from the earth.

    Fun fact: If you were to compare the relative volume of ocean water to fresh river water on our planet, the former would be an Olympic-sized swimming pool — and the latter would be a one-gallon jug.


    “Why not transform clothing and make it a part of our digitized world, in a manner that shines continuous light into our health and well-being?” asks designer Janani Bhaskar. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

    Janani Bhaskar, smart clothing designer

    Big Idea: By designing “smart” clothing with durable technologies, we can better keep track of health and well-being.

    How? Using screen-printing technology, we can design and attach biometric “smart stickers” to any piece of clothing. These stickers are super durable, Bhaskar says: they can withstand anything our clothing can, including workouts and laundry. They’re customizable, too — athletes can use them to track blood pressure and heart rate, healthcare providers can use them to remotely monitor vital signs, and expecting parents can use them to receive information about their baby’s growth. By making sure this technology is affordable and accessible, our clothing — the “original wearables” — can help all of us better understand our bodies and our health.

    Quote of the talk: “Why not transform clothing and make it a part of our digitized world, in a manner that shines continuous light into our health and well-being?”


    Camilla Andersen, neuroscientist and food scientist

    Big idea: We can create tastier, healthier foods with insights from people’s brain activity.

    How? Our conscious experience of food — how much we enjoy a cup of coffee or how sweet we find a cookie to be, for example — is heavily influenced by hidden biases. Andersen provides an example: after her husband started buying a fancy coffee brand, she conducted a blind taste test with two cups of coffee. Her husband described the first cup as cheap and bitter, and raved about the second — only to find out that the two were actually the same kind of coffee. The taste difference was the result of his bias for the new, fancy coffee — the very kind of bias that can leave food scientists in the dark when testing out new products. But there’s a workaround: brain scans that can access the raw, unfiltered, unconscious taste information that’s often lost in people’s conscious assessments. With this kind of information, Andersen says, we can create healthier foods without sacrificing taste — like creating a zero-calorie milkshake that tastes just like the original.

    Fun fact: The five basic tastes are universally accepted: sweet, salty, sour, bitter and umami. But Andersen’s EEG experiments suggest there may be a sixth basic taste: fat, which we may sense beyond its smell and texture.


    “Science is an integral part of our everyday lives, and I think we’re only at the tip of the iceberg in terms of harnessing all of the knowledge we have to create a better world,” says enzyme scientist Vicky Huang. She speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

    Vicky Huang, enzyme scientist

    Big idea: Enzymes are unfamiliar to many of us, but they’re far more important in our day-to-day lives than we realize — and they might help us unlock eco-friendly solutions to everything from food spoilage to household cleaning problems. 

    How? We were all taught in high school that enzymes are a critical part of digestion; that same ability to break things down also makes them ideal for household cleaning. But enzymes can do much more than remove stains from our clothes, break down burnt-on food in our dishwashers and keep our baguettes soft. As scientists engineer better enzymes, we’ll be able to cook and clean with less energy, less waste and fewer costs to our environment.

    Quote of the talk: “Everywhere in your homes, items you use every day have had a host of engineers and scientists like me working on them and improving them. Just one part of this everyday science is using enzymes to make things more effective, convenient and environmentally sustainable.”


    Geert van der Kraan, microbe detective

    Big Idea: We can use microbial life in oil fields to make oil production safer and cleaner.

    How? Microbial life is often a problem in oil fields, corroding steel pipes and tanks and producing toxic chemicals like hydrogen sulfide. We can transform this challenge into a solution by studying the clues these microbes leave behind. By tracking their presence and activity, we can see deep within these underground fields, helping us create safer and smoother production processes.

    Quote of the talk: “There are things we can learn from the microorganisms that call oil fields their homes, making oil field operations just a little cleaner. Who knows what other secrets they may hold for us?”


    Lori Gottlieb, psychotherapist and author

    Big idea: The stories we tell about our lives shape who we become. By editing our stories, we can transform our lives for the better.

    How? When the stories we tell ourselves are incomplete, misleading or just plain wrong, we can get stuck. Think of a story you’re telling about your life that’s not serving you — maybe that everyone’s life is better than yours, that you’re an impostor, that you can’t trust people, that life would be better if only a certain someone would change. Try exploring this story from another point of view, or asking a friend if there’s an aspect of the story you might be leaving out. Rather than clinging to an old story that isn’t doing us any good, Gottlieb says, we can work to write the most beautiful story we can imagine, full of hard truths that lead to compassion and redemption — our own “personal Pulitzer Prize.” We get to choose what goes on the page in our minds that shapes our realities. So get out there and write your masterpiece.

    Quote of the talk: “We talk a lot in our culture about ‘getting to know ourselves,’ but part of getting to know yourself is to unknow yourself: to let go of the one version of the story you’ve told yourself about who you are — so you can live your life, and not the story you’ve been telling yourself about your life.”


    “I’m standing here before you because I have a vision for the future: one where technology keeps my daughter safe,” says tech evangelist Andrew Ho. He speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

    Andrew Ho, tech evangelist

    Big idea: As technological devices become smaller, faster and cheaper, they make daily tasks more convenient. But they can also save lives.

    How? For epilepsy patients like Andrew Ho’s daughter Hilarie, a typical day can bring dangerous — or even fatal — challenges. Medical devices currently under development could reduce the risk of seizures, but they’re bulky and fraught with risk. The more quickly developers can improve the speed and portability of these devices (and other medical technologies), the sooner we can help people with previously unmanageable diseases live normal lives.

    Quote of the talk: “Advances in technology are making it possible for people with different kinds of challenges and problems to lead normal lives. No longer will they feel isolated and marginalized. No longer will they live in the shadows, afraid, ashamed, humiliated and excluded. And when that happens, our world will be a much more diverse and inclusive place, a better place for all of us to live.”


    “Learning from our mistakes is essential to improvement in many areas of our lives, so why not be intentional about it in our most risk-filled activity?” asks engineer Ed Paxton. He speaks at TED@DuPont at The Fillmore, September 12, 2019, in Philadelphia, Pennsylvania. (Photo: Ryan Lash / TED)

    Ed Paxton, aircraft engineer and safety expert

    Big idea: Many people fear flying but think nothing of driving their cars every day. Statistically, driving is far more dangerous than flying — in part because of common-sense principles pilots use to govern their behavior. Could these principles help us be safer on the road?

    How? There’s a lot of talk about how autonomous vehicles will make traffic safer in the future. Ed Paxton shares three principles that can reduce accidents right now: “positive paranoia” (anticipating possible hazards or mishaps without anxiety), allowing feedback from passengers who might see things you don’t and learning from your mistakes (near-misses caused by driving while tired, for example).

    Quote of the talk:  “Driving your car is probably the most dangerous activity that most of you do … it’s almost certain you know someone who’s been seriously injured or lost their life out on the road … Over the last ten years, seven billion people have boarded domestic airline flights, and there’s been just one fatality.”


    Jennifer Vail, tribologist

    Big idea: Complex systems lose much of their energy to friction; the more energy they lose, the more power we consume to keep them running. Tribology — or the study of friction and things that rub together — could unlock massive energy savings by reducing wear and alleviating friction in cars, wind turbines, motors and engines.

    How? By studying the different ways surfaces rub together, and engineering those surfaces to create more or less friction, tribologists can tweak a surprising range of physical products, from dog food that cleans your pet’s teeth to cars that use less gas; from food that feels more appetizing in our mouth to fossil fuel turbines that waste less power. Some of these changes could have significant impacts on how much energy we consume.

    Quote of the talk: “I have to admit that it’s a lot of fun when people ask me what I do for my job, because I tell them: ‘I literally rub things together.'”

    Sam VargheseRWC commentators need to be lined up and shot

    While many people have raised questions about the quality of refereeing at the ongoing Rugby World Cup, nobody, surprisingly, has questioned the quality of the commentary. If one were to compare the two, the commentators would lose by a mile.

    There is a strange kind of logic that has prevailed in management circles for quite a while now, namely that a person who is good in one sector of an industry would also be equally good in another. It is this kind of logic (?) that leads managers to appoint rank and file employees to positions of leadership. It flies in the face of logic to argue that someone who is good at following orders would be equally good as a leader, but that’s the conventional wisdom that has prevailed and will never go away.

    Some years ago, there was a class of person known as a professional commentator. This was not necessarily someone who had played the game on which he/she was commentating; the two are not connected. No, the commentator had a tremendous understanding of the sport in question, an incredibly good vocabulary and a turn of phrase guaranteed to keep even the most fidgety of individuals glued to their seats. John Arlott and Brian Johnston are two good examples of this class of person; neither had played Test cricket, but find me someone who was better at the art of commentating on the game.

    Alas, nowadays there is no vetting of commentators, and all seem to be appointed in you-scratch-my-back-and-I’ll-scratch-yours deals. Some ex-players write well, and a few, a very select few, have sufficient vocal skills to be good commentators. But the majority are mundane: idiots of the first order, with limited vocabularies, prone to malapropisms, and generally inclined to think that screaming out loud and displaying the behaviour of a baby in a bassinet is the best way to commentate.

    As a result, good players often earn the ire of the public and lose whatever goodwill they accumulated during their playing days. Take the case of Joel Stransky, fly-half in the victorious South African team of 1995. Before he became a commentator, Stransky was known as the man who won the Springboks their first Webb Ellis Trophy through a drop-goal in extra-time. But now he is known as an incompetent, biased commentator, who has an incredibly poor knowledge of English, is unable to speak three sentences without tripping over his tongue, and one who is close to the head of the queue vying for the title of Mr Malaprop.

    Stransky also seems unaware that the job of an expert commentator is to provide something extra, over and above what the main commentator says: some analysis of what is going on on the field. He merely parrots the main commentator and often leaves his sentences incomplete.

    But it is not only the ex-players who lack any competence in the art of commentary. There is one Sean Maloney, part of the commentary team for the ongoing Rugby World Cup, who often does not know the names of players in a match where he is the commentator. The other day, he said, “the ball goes to the number 15 from Tonga…”, completely forgetting that this gentleman has a name. Remembering names and faces is one of the basics for commentators, so how Maloney got the gig is puzzling.

    The television and radio networks that appoint incompetents to this job benefit too. For one, the people who are appointed are aware that they have received a favour and thus avoid criticising the network or the organisers. Ex-players try to promote their own favourites. A commentator is meant to function as a journalist, but the current crop act as toadies.

    They may have learned to do so from the case of Murray Mexted. The former All Black, who was an expert commentator on Fox Sports some years ago, was suddenly thrown out. All it took for Mexted to be punted was some mild criticism of the New Zealand Rugby Football Union, the organisation that runs the game in that country. The NZRFU complained to Fox, and Mexted was shown the door. But Mexted was good at his job and what he did was right: someone in a position that demands he/she function as a journalist should have no fear of criticising something that deserves to be criticised.

    There are a few Australians, too, who have no business being on the commentary panel. Phil Kearns, Drew Mitchell and George Gregan were all good players in their time. But they are totally out of their depth when it comes to providing something incisive. They haul out all the old cliches and repeat them ad infinitum.

    And this is supposed to be the World Cup! When will Fox Sports ensure that professional commentators take over and do a decent job? Grant Nisbett, one of the better commentators and a man who has some 300 Tests under his belt, is nowhere to be seen. But then perhaps that’s because he’s a pro who does what a commentator should.

    ,

    Krebs on SecurityGerman Cops Raid “Cyberbunker 2.0,” Arrest 7 in Child Porn, Dark Web Market Sting

    German authorities said Friday they’d arrested seven people and were investigating six more in connection with the raid of a Dark Web hosting operation that allegedly supported multiple child porn, cybercrime and drug markets with hundreds of servers buried inside a heavily fortified military bunker. Incredibly, for at least two of the men accused in the scheme, this was their second bunker-based hosting business that was raided by cops and shut down for courting and supporting illegal activity online.

    The latest busted cybercrime bunker is in Traben-Trarbach, a town on the Mosel River in western Germany. The Associated Press says investigators believe the 13-acre former military facility — dubbed the “CyberBunker” by its owners and occupants — served a number of dark web sites, including: the “Wall Street Market,” a sprawling, online bazaar for drugs, hacking tools and financial-theft wares before it was taken down earlier this year; the drug portal “Cannabis Road;” and the synthetic drug market “Orange Chemicals.”

    German police reportedly seized $41 million worth of funds allegedly tied to these markets, and more than 200 servers that were operating throughout the underground temperature-controlled, ventilated and closely guarded facility.

    The former military bunker in Germany that housed CyberBunker 2.0 and, according to authorities, plenty of very bad web sites.

    The authorities in Germany haven’t named any of the people arrested or under investigation in connection with CyberBunker’s alleged activities, but said those arrested were apprehended outside of the bunker. Still, there are clues in the details released so far, and those clues have been corroborated by sources who know two of the key men allegedly involved.

    We know the owner of the bunker hosting business has been described in media reports as a 59-year-old Dutchman who allegedly set it up as a “bulletproof” hosting provider that would provide Web site hosting to any business, no matter how illegal or unsavory.

    We also know the German authorities seized at least two Web site domains in the raid, including the domain for ZYZTM Research in The Netherlands (zyztm[.]com), and cb3rob[.]org.

    A “seizure” placeholder page left behind by German law enforcement agents after they seized cb3rob.org, an affiliate of the CyberBunker bulletproof hosting facility owned by convicted Dutch cybercriminal Sven Kamphuis.

    According to historic whois records maintained by Domaintools.com, Zyztm[.]com was originally registered to a Herman Johan Xennt in the Netherlands. Cb3rob[.]org was an organization hosted at CyberBunker registered to Sven Kamphuis, a self-described anarchist who was convicted several years ago for participating in a large-scale attack that briefly impaired the global Internet in some places.

    Both 59-year-old Xennt and Mr. Kamphuis worked together on a previous bunker-based project — a bulletproof hosting business they sold as CyberBunker and ran out of a five-story military bunker in The Netherlands.

    That’s according to Guido Blaauw, director of Disaster-Proof Solutions, a company that renovates and resells old military bunkers and underground shelters. Blaauw’s company bought the 1,800 square-meter Netherlands bunker from Mr. Xennt in 2011 for $700,000.

    Guido Blaauw, in front of the original CyberBunker facility in the Netherlands, which he bought from Mr. Xennt in 2011. Image: Blaauw.

    Media reports indicate that in 2002 a fire inside the CyberBunker 1.0 facility in The Netherlands summoned emergency responders, who discovered a lab hidden inside the bunker that was being used to produce the drug ecstasy/XTC.

    Blaauw said nobody was ever charged for the drug lab, which was blamed on another tenant in the building. Blaauw said Xennt and others were then denied a business license in 2003 to continue operating in the bunker, and they were forced to resell servers from a different location — even though they bragged to clients for years to come about hosting their operations from an ultra-secure underground bunker.

    “After the fire in 2002, there was never any data or servers stored in the bunker,” in The Netherlands, Blaauw recalled. “For 11 years they told everyone [the hosting servers were] in this ultra-secure bunker, but it was all in Amsterdam, and for 11 years they scammed all their clients.”

    Firefighters investigating the source of a 2002 fire at the CyberBunker’s first military bunker in The Netherlands discovered a drug lab amid the Web servers. Image: Blaauw.

    Blaauw said sometime between 2012 and 2013, Xennt purchased the bunker in Traben-Trarbach, Germany — a much more modern structure that was built in 1997. CyberBunker was reborn, and it began offering many of the same amenities and courted the same customers as CyberBunker 1.0 in The Netherlands.

    “They’re known for hosting scammers, fraudsters, pedophiles, phishers, everyone,” Blaauw said. “That’s something they’ve done for ages and they’re known for it.”

    The former Facebook profile picture of Sven Olaf Kamphuis, shown here standing in front of Cyberbunker 1.0 in The Netherlands.

    About the time Xennt and company were settling into their new bunker in Germany, he and Kamphuis were engaged in a fairly lengthy and large series of distributed denial-of-service (DDoS) attacks aimed at sidelining a number of Web sites — particularly anti-spam organization Spamhaus. A chat record of that assault, detailed in my 2016 piece, Inside the Attack that Almost Broke the Internet, includes references to and quotes from both Xennt and Kamphuis.

    Kamphuis was later arrested in Spain on the DDoS attack charges. He was convicted in The Netherlands and sentenced to time served, which was approximately 55 days of detention prior to his extradition to the Netherlands.

    Some of the 200 servers seized from CyberBunker 2.0, a “bulletproof” web hosting facility buried inside a German military bunker. Image: swr.de.

    The AP story mentioned above quoted German prosecutor Juergen Bauer saying the 59-year-old main suspect in the case was believed to have links to organized crime.

    A 2015 exposé (PDF) by the Irish newspaper The Sunday World compared Mr. Xennt (pictured below) to a villain from a James Bond movie, and said he has been seen frequently associating with another man: an Irish mobster named George “the Penguin” Mitchell, listed by Europol as one of the top-20 drug traffickers in Europe and thought to be involved in smuggling heroin, cocaine and ecstasy.

    Cyberbunkers 1.0 and 2.0 owner and operator Mr. Xennt, top left, has been compared to a “Bond villain.” Image: The Sunday World, July 26, 2015.

    Blaauw said he doesn’t know whether Kamphuis was arrested or named in the investigation, but added that people who know him and can usually reach him have not heard from Kamphuis in several days.

    Here’s what the CyberBunker in The Netherlands looked like back in the early aughts when Xennt still ran it:

    Here’s what it looks like now after being renovated by Blaauw’s company and designed as a security operations center (SOC):

    The former CyberBunker in the Netherlands, since redesigned as a security operations center by its current owner. Image: Blaauw.

    I’m glad when truly bad guys doing bad stuff like facilitating child porn are taken down. The truth is, almost anyone trafficking in the kinds of commerce these guys courted is also building networks of money laundering businesses that become very tempting to use or lease out for other nefarious purposes, including human trafficking and drug trafficking.

    Harald WelteSometimes software development is a struggle

    I'm currently working on the firmware for a new project, an 8-slot smart card reader. I will share more about the architecture and design ideas behind this project soon, but today I'll simply write about how hard it sometimes is to actually get software development done. Seemingly trivial things suddenly take ages. I guess everyone writing code knows this, but today I felt like I had to share this story.

    Chapter 1 - Introduction

    As I'm quite convinced of test-driven development these days, I don't want to simply write firmware code that can only execute in the target, but I'm actually working on a USB CCID (USB Class for Smart Card readers) stack which is hardware-independent, and which can also run entirely in userspace on a Linux device with USB gadget (device) controller. This way it's much easier to instrument, trace, introspect and test the code base, and tests with actual target board hardware are limited to those functions provided by the board.

    So the current architecture for development of the CCID implementation looks like this:

    • Implement the USB CCID device using FunctionFS (I did this some months ago; in fact, developing it was similarly much more time-consuming than expected, so maybe I'll find time to expand on that)
    • Attach this USB gadget to a virtual USB bus + host controller using the Linux kernel dummy_hcd module
    • Talk to a dumb phoenix style serial SIM card reader attached to a USB UART, which is connected to an actual SIM card (or any smart card, for that matter)

    By using a "stupid" UART based smart card reader, I am very close to the target environment on a Cortex-M microcontroller, where I also have to talk to a UART and hence implement all the beauty of ISO 7816-3. Hence, the test / mock / development environment is as close as possible to the target environment.

    So I implemented the various bits and pieces and ended up at a point where I wanted to test. And I'm not getting any response from the UART / SIM card at all. I check all my code, add lots of debugging, play around with various RTS / DTR / ... handshake settings (which sometimes control power) - to no avail.

    In the end, after many hours of trial + error I actually inserted a different SIM card and finally, I got an ATR from the card. In more than 20 years of working with smart cards and SIM cards, this is the first time I've actually seen a SIM card die in front of me, with no response whatsoever from the card.

    Chapter 2 - Linux is broken

    Anyway, the next step was to get the T=0 protocol of ISO 7816-3 going. Since there is only one I/O line between SIM card and reader for both directions, the protocol is a half-duplex protocol. This is unlike "normal" RS232-style UART communication, where you have a separate Rx and Tx line.

    On the hardware side, this is most often implemented by simply connecting both the Rx and Tx line of the UART to the SIM I/O pin. This in turn means that you're always getting an echo back for every byte you write.

    One could discard such bytes, but I'm targeting a microcontroller which should run eight cards in parallel, preferably at baud rates up to ~1 megabit, so having to read and discard all those bytes seems like a big waste of resources.

    The obvious solution around that is to disable the receiver inside the UART before you start transmitting, and re-enable it after you're done transmitting. This is typically done rather easily, as most UART registers in hardware provide some way to selectively enable transmitter and/or receiver independently.

    But since I'm working in Linux userspace in my development environment: How do I approximate this kind of behavior? At least the older readers of this blog will remember something called the CREAD flag of termios. Clearing that flag disables the receiver. Back in the 1990s, I did tons of work with serial ports, and I remembered there was such a flag.
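    In code, the idea looks roughly like this (a minimal sketch with error handling trimmed; the helper name is mine, not from the actual project):

    ```c
    #include <termios.h>

    /* Toggle the UART receiver via the CREAD flag: clear it before
     * transmitting on the shared I/O line, set it again afterwards. */
    static int uart_set_rx(int fd, int enable)
    {
            struct termios tio;

            if (tcgetattr(fd, &tio) < 0)
                    return -1;
            if (enable)
                    tio.c_cflag |= CREAD;
            else
                    tio.c_cflag &= ~CREAD;
            return tcsetattr(fd, TCSANOW, &tio);
    }
    ```

    Note that tcsetattr() happily stores the flag in the tty's termios struct either way; whether clearing it actually has any effect is a different question, as described below.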

    So I implement my userspace UART backend and somehow it simply doesn't want to work. Again of course I assume I must be doing something wrong. I'm using strace, I'm single-stepping through code - to no avail.

    In the end, it turns out that I've just found a bug in the Linux kernel, one that appears to be there at least ever since the git history of linux-2.6.git started. Almost all USB serial device drivers do not implement CREAD, and there is no software fall-back implemented in the core serial (or usb-serial) handling that would discard any received bytes inside the kernel if CREAD is cleared. Interestingly, the non-USB serial drivers for classic UARTs attached to local bus, PCI, ... seem to support it.

    The problem would be only half as bad if the syscall to clear CREAD actually failed with an error. But no, it simply returns success while bytes continue to be received from the UART/tty :/

    So that's the second big surprise of this weekend...

    Chapter 3 - Again a broken card?

    So I settle for implementing the 'receive as many characters as you wrote' work-around. Once that is done, I continue to test the code. And what happens? My state machine (implemented using osmo-fsm, of course) for reading the ATR (code found here) never wants to complete. The last byte of the ATR is always missing. How can that be?
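    The work-around itself is straightforward: after every transmit, read back and throw away exactly as many bytes as were written. A hypothetical helper (my sketch, not the actual firmware code) might look like:

    ```c
    #include <errno.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Write 'len' bytes to the card, then consume the 'len' bytes the
     * shared Rx/Tx wiring echoes back, so subsequent reads only see
     * data actually sent by the card. */
    static int uart_tx_discard_echo(int fd, const uint8_t *buf, size_t len)
    {
            uint8_t echo[256];
            size_t done = 0;

            if (write(fd, buf, len) != (ssize_t)len)
                    return -1;

            while (done < len) {
                    size_t chunk = len - done;
                    ssize_t rc;

                    if (chunk > sizeof(echo))
                            chunk = sizeof(echo);
                    rc = read(fd, echo, chunk);
                    if (rc < 0) {
                            if (errno == EINTR)
                                    continue;
                            return -1;
                    }
                    done += rc;
            }
            return 0;
    }
    ```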

    Well, guess what, the second SIM card I used is sending a broken, non-spec compliant ATR where the header indicates 9 historical bytes are present, but then in reality only 8 bytes are sent by the card.
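    For reference, the count the card got wrong is announced in the ATR's T0 format byte: the low nibble gives the number of historical bytes, while each set bit in the high nibble (Y1) announces one of the interface bytes TA1..TD1. Hypothetical decoding helpers (not the actual osmo-fsm code):

    ```c
    #include <stdint.h>

    /* ISO 7816-3, T0 byte: low nibble = number of historical bytes. */
    static unsigned int atr_num_hist_bytes(uint8_t t0)
    {
            return t0 & 0x0f;
    }

    /* High nibble Y1: one interface byte (TA1/TB1/TC1/TD1) per set bit. */
    static unsigned int atr_num_iface_bytes(uint8_t t0)
    {
            unsigned int y1 = t0 >> 4, n = 0;

            while (y1) {
                    n += y1 & 1;
                    y1 >>= 1;
            }
            return n;
    }
    ```

    A card whose T0 announces 9 historical bytes but only ever sends 8 will stall any reader that trusts the header, which is exactly what happened here.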

    Of course every reader has a timeout at that point, but that timeout was not yet implemented in my code, and I also wasn't expecting to hit that timeout.

    So after using yet another SIM card (now a sysmoUSIM-SJS1, not sure why I didn't even start with that one), it suddenly works.

    After a weekend of detours, each of which I would not have assumed at all before, I finally have code that can obtain the ATR and exchange T=0 TPDUs with cards. Of course I could have had that very easily if I wanted (we do have code in pySim for this, e.g.) but not in the architecture that is as close as it gets to the firmware environment of the microcontroller of my target board.

    ,

    CryptogramFriday Squid Blogging: Did Super-Intelligent Giant Squid Steal an Underwater Research Station?

    There's no proof they did, but there's no proof they didn't.

    As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

    Read my blog posting guidelines here.

    Sociological ImagesWhat Makes a Mashup Work?

    From music to movies and restaurants, genres are a core part of popular culture. The rules we use to classify different scenes and styles help to shape our tastes and our social identities, and so we often see people sticking to clear boundaries between what they like and what they don’t like (for example: “I’ll listen to anything but metal.”). 

    But bending the rules of genre can be the quickest way to shake up expectations. Mashups were huge a few years ago. This past summer we saw “Old Town Road” push boundaries in the country music world on its way to becoming a mega-hit. Zeal & Ardor’s mix of black metal and gospel, country blues, and funk is breaking new ground in heavier music.

    Blending genres can also backfire. A new fusion concept could be a hit, or it could just be confusing. Sociological research on Netflix ratings and Yelp reviews finds that people with a high preference for variety, who like to consume many different things, are not necessarily interested in atypical work that blends genres in a new or strange way.

    One of the more interesting recent examples is this new gameshow concept from Hillsong—a media channel tied to the charismatic megachurch organization:

    What is this show? Is it preaching? Is it a game show? Do millennials even watch prime time game shows? Don’t get me wrong, I’ll hate-watch The Masked Singer every once in a while, but the mix seems a little out of place here. Gerardo Martí makes a good point in the tweet above. This show may be a way to repackage religious messaging in a new style. Given what we know about cultural consumption, however, I wonder if this is just too risky to pull anyone in.

    It is easy to chase atypicality today, both for media organizations and religious groups trying to retain a younger viewership and find the next big thing. For all the pressure to innovate, this trailer for SOUTHPAW shows us just how much we still rely on genre rules to figure out what to consume.

    Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

    (View original at https://thesocietypages.org/socimages)

    Cory DoctorowShort documentary on the quest to re-decentralize the internet

    I sat down for an interview for Reason’s short feature, The Decentralized Web Is Coming, which documents the surging Decentralized Web movement, whose goal is to restore the internet’s early, decentralized era, before it turned into five giant services filled with screenshots from the other four.

    CryptogramSuperhero Movies and Security Lessons

    A paper I co-wrote was just published in Security Journal: "Superheroes on screen: real life lessons for security debates":

    Abstract: Superhero films and episodic shows have existed since the early days of those media, but since 9/11, they have become one of the most popular and most lucrative forms of popular culture. These fantastic tales are not simple amusements but nuanced explorations of fundamental security questions. Their treatment of social issues of power, security and control are here interrogated using the Film Studies approach of close reading to showcase this relevance to the real-life considerations of the legitimacy of security approaches. By scrutinizing three specific pieces -- Daredevil Season 2, Captain America: Civil War, and Batman v Superman: Dawn of Justice -- superhero tales are framed (by the authors) as narratives which significantly influence the general public's understanding of security, often encouraging them to view expansive power critically -- to luxuriate within omnipotence while also recognizing the possibility as well as the need for limits, be they ethical or legal.

    This was my first collaboration with Fareed Ben-Youssef, a film studies scholar. (And with Andrew Adams and Kiyoshi Murata.) It was fun to think about and write.

    Krebs on SecurityMyPayrollHR CEO Arrested, Admits to $70M Fraud

    Earlier this month, employees at more than 1,000 companies saw one or two paychecks’ worth of funds deducted from their bank accounts after the CEO of their cloud payroll provider absconded with $35 million in payroll and tax deposits from customers. On Monday, the CEO was arrested and allegedly confessed that the diversion was the last desperate gasp of a financial shell game that earned him $70 million over several years.

    Michael T. Mann, the 49-year-old CEO of Clifton Park, NY-based MyPayrollHR, was arrested this week and charged with bank fraud. In court filings, FBI investigators said Mann admitted under questioning that in early September — on the eve of a big payroll day — he diverted to his own bank account some $35 million in funds sent by his clients to cover their employee payroll deposits and tax withholdings.

    After that stunt, two different banks that work with Mann’s various companies froze those corporate accounts to keep the funds from being moved or withdrawn. That action set off a chain of events that led another financial institution that helps MyPayrollHR process payments to briefly pull almost $26 million out of checking accounts belonging to employees at more than 1,000 companies that use MyPayrollHR.

    At the same time, MyPayrollHR sent a message (see screenshot above) to clients saying it was shutting down and that customers should find alternative methods for paying employees and for processing payroll going forward.

    In the criminal complaint against Mann (PDF), a New York FBI agent said the CEO admitted that starting in 2010 or 2011 he began borrowing large sums of money from banks and financing companies under false pretenses.

    “While stating that MyPayroll was legitimate, he admitted to creating other companies that had no purpose other than to be used in the fraud; fraudulently representing to banks and financing companies that his fake businesses had certain receivables that they did not have; and obtaining loans and lines of credit by borrowing against these non-existent receivables.”

    “Mann estimated that he fraudulently obtained about $70 million that he has not paid back. He claimed that he committed the fraud in response to business and financial pressures, and that he used almost all of the fraudulently obtained funds to sustain certain businesses, and purchase and start new ones. He also admitted to kiting checks between Bank of America and Pioneer [Savings Bank], as part of the fraudulent scheme.”

    Check-kiting is the illegal act of writing a check from a bank account without sufficient funds and depositing it into another bank account, explains MagnifyMoney.com. “Then, you withdraw the money from that second account before the original check has been cleared.”

    Kiting also is known as taking advantage of the “float,” which is the amount of time between when an individual submits a check as payment and when the individual’s bank is instructed to move the funds from the account.

    Magnify Money explains more:

    “Say, for example, that you write yourself a check for $500 from checking account A, and deposit that check into checking account B — but the balance in checking account A is only $75. Then, you promptly withdraw the $500 from checking account B. This is check-kiting, a form of check fraud that uses non-existent funds in a checking account or other type of bank account. Some check-kiting schemes use multiple accounts at a single bank, and more complicated schemes involve multiple financial institutions.”

    “In a more complex scenario, a person could open checking accounts at bank A and bank B, at first depositing $500 into bank A and nothing in bank B. Then, they could write a check for $10,000 with account A and deposit it into account B. Bank B immediately credits the account, and in the time it might take for bank B to clear the check (generally about three business days), the scammer writes a $10,000 check with bank B, which gets deposited into bank A to cover the first check. This could keep going, with someone writing checks between banks where there’s no actual funds, yet the bank believes the money is real and continues to credit the accounts.”

    The government alleges Mann was kiting millions of dollars in checks between his accounts at Bank of America and Pioneer from Aug. 1, 2019 to Aug. 30, 2019.

    For more than a decade, MyPayrollHR worked with California-based Cachet Financial Services to process payroll deposits for MyPayrollHR client employees. Every other week, MyPayrollHR’s customers would deposit their payroll funds into a holding account run by Cachet, which would then disburse the payments into MyPayrollHR client employee bank accounts.

    But when Mann diverted $26 million in client payroll deposits from Cachet to his account at Pioneer Bank, Cachet’s emptied holding account was debited for the payroll payments. Cachet quickly reversed those deposits, causing one or two pay periods worth of salary to be deducted from bank accounts for employees of companies that used MyPayrollHR.

    That action caused so much uproar from affected companies and their employees that Cachet ultimately decided to cancel all of those reversals and absorb that $26 million hit, which it is now trying to recover through the courts.

    According to prosecutors in New York, Pioneer was Mann’s largest creditor.

    “Mann stated that the payroll issue was precipitated by his decision to route MyPayroll’s clients’ payroll payments to an account at Pioneer instead of directly to Cachet,” wrote FBI Special Agent Matthew J. Wabby. “He did this in order to temporarily reduce the amount of money he owed to Pioneer. When Pioneer froze Mann’s accounts, it also (inadvertently) stopped movement of MyPayroll’s clients’ payroll payments to Cachet.”

    Approximately $9 million of the $35 million diverted by Mann was supposed to go to accounts at the National Payment Corporation (NatPay) — the Florida-based firm which handles tax withholdings for MyPayrollHR clients. NatPay said its insurance should help cover the losses it incurred when MyPayrollHR’s banks froze the company’s accounts.

    Court records indicate Mann hasn’t yet entered a plea, but that he was ordered to be released today under a $200,000 bond secured by a family home and two vehicles. His passport also was seized.

    LongNowThe Art of World-Building in Science Fiction

    The process of world-building in science fiction isn’t just about coming to grips with the consequences of your narrative arc and making it believable. It’s also about imagining a better world.

    Stanford anthropologist James Holland Jones spoke about “The Science of Climate Fiction: Can Stories Lead to Social Action?” in 02019 at The Interval. Watch his talk in full here.

    Worse Than FailureError'd: Modern Customer Support

    "It's interesting to consider that First Great Western's train personnel track on-time but meanwhile, their seats measure uptime," writes Roger G.

     

    Peter G. writes, "At $214.90 for two years I was perfectly happy, but this latest price increase? You've simply gone TOO FAR and I will be cancelling ASAP!"

     

    "SharePoint does a lot of normal things, but in the case of this upgrade, it truly went above and beyond," Adam S. wrote.

     

    "Sure, I guess you can email a question, but just don't get your hopes up for a reply," writes Samuel N.

     

    Al H. writes, "When I signed up for a trial evaluation of Toad and got an e-mail with the activation license key, this was not quite what I was expecting."

     

    "The cover story, in case anybody starts asking too many questions, is that Dustin is the name of the male squirrel outside the window. He and Sylvia the squirrel are married. Nobody was testing in Production," writes Sam P.

     

    [Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

    ,

    Cory DoctorowCome see me in Portland, Maine next Monday with James Patrick Kelly

    I’m coming to Maine to keynote the Maine Library Association conference in Newry next Monday; later that day, I’m appearing with James Patrick Kelly at the Portland, Maine Main Library, from 6:30PM-8PM (it’s free and open to the public). This is the first time I’ve been to Maine, and I can’t wait!

    CryptogramOn Chinese "Spy Trains"

    The trade war with China has reached a new industry: subway cars. Congress is considering legislation that would prevent the world's largest train maker, the Chinese-owned CRRC Corporation, from competing on new contracts in the United States.

    Part of the reasoning behind this legislation is economic, and stems from worries about Chinese industries undercutting the competition and dominating key global industries. But another part involves fears about national security. News articles talk about "spy trains," and the possibility that the train cars might surreptitiously monitor their passengers' faces, movements, conversations or phone calls.

    This is a complicated topic. There is definitely a national security risk in buying computer infrastructure from a country you don't trust. That's why there is so much worry about Chinese-made equipment for the new 5G wireless networks.

    It's also why the United States has blocked the cybersecurity company Kaspersky from selling its Russian-made antivirus products to US government agencies. Meanwhile, the chairman of China's technology giant Huawei has pointed to NSA spying disclosed by Edward Snowden as a reason to mistrust US technology companies.

    The reason these threats are so real is that it's not difficult to hide surveillance or control infrastructure in computer components, and if they're not turned on, they're very difficult to find.

    Like every other piece of modern machinery, modern train cars are filled with computers, and while it's certainly possible to produce a subway car with enough surveillance apparatus to turn it into a "spy train," in practice it doesn't make much sense. The risk of discovery is too great, and the payoff would be too low. Like the United States, China is more likely to try to get data from the US communications infrastructure, or from the large Internet companies that already collect data on our every move as part of their business model.

    While it's unlikely that China would bother spying on commuters using subway cars, it would be much less surprising if a tech company offered free Internet on subways in exchange for surveillance and data collection. Or if the NSA used those corporate systems for their own surveillance purposes (just as the agency has spied on in-flight cell phone calls, according to an investigation by the Intercept and Le Monde, citing documents provided by Edward Snowden). That's an easier, and more fruitful, attack path.

    We have credible reports that the Chinese hacked Gmail around 2010, and there are ongoing concerns about both censorship and surveillance by the Chinese social-networking company TikTok. (TikTok's parent company has told the Washington Post that the app doesn't send American users' info back to Beijing, and that the Chinese government does not influence the app's use in the United States.)

    Even so, these examples illustrate an important point: there's no escaping the technology of inevitable surveillance. You have little choice but to rely on the companies that build your computers and write your software, whether in your smartphones, your 5G wireless infrastructure, or your subway cars. And those systems are so complicated that they can be secretly programmed to operate against your interests.

    Last year, Le Monde reported that the Chinese government bugged the computer network of the headquarters of the African Union in Addis Ababa. China had built and outfitted the organization's new headquarters as a foreign aid gift, reportedly secretly configuring the network to send copies of confidential data to Shanghai every night between 2012 and 2017. China denied having done so, of course.

    If there's any lesson from all of this, it's that everybody spies using the Internet. The United States does it. Our allies do it. Our enemies do it. Many countries do it to each other, with their success largely dependent on how sophisticated their tech industries are.

    China dominates the subway car manufacturing industry because of its low prices -- the same reason it dominates the 5G hardware industry. Whether these low prices are because the companies are more efficient than their competitors or because they're being unfairly subsidized by the Chinese government is a matter to be determined at trade negotiations.

    Finally, Americans must understand that higher prices are an inevitable result of banning cheaper tech products from China.

    We might willingly pay the higher prices because we want domestic control of our telecommunications infrastructure. We might willingly pay more because of some protectionist belief that global trade is somehow bad. But we need to make these decisions to protect ourselves deliberately and rationally, recognizing both the risks and the costs. And while I'm worried about our 5G infrastructure built using Chinese hardware, I'm not worried about our subway cars.

    This essay originally appeared on CNN.com.

    EDITED TO ADD: I had a lot of trouble with CNN's legal department with this essay. They were very reluctant to call out the US and its allies for similar behavior, and spent a lot more time adding caveats to statements that I didn't think needed them. They wouldn't let me link to this Intercept article talking about US, French, and German infiltration of supply chains, or even the NSA document from the Snowden archives that proved the statements.

    Worse Than FailureCodeSOD: Trim Off a Few Miles

    I don’t know the length of Russell F’s commute. Presumably, the distance is measured in miles. Miles and miles. I say that, because of this block, which is written… with care.

      string Miles_w_Care = InvItem.MilesGuaranteeFlag == true && InvItem.Miles_w_Care.HasValue ? (((int)InvItem.Miles_w_Care / 1000).ToString().Length > 2 ? ((int)InvItem.Miles_w_Care / 1000).ToString().Trim().Substring(0, 2) : ((int)InvItem.Miles_w_Care / 1000).ToString().Trim()) : "  ";
      string Miles_wo_Care = InvItem.MilesGuaranteeFlag == true && InvItem.Miles_wo_Care.HasValue ? (((int)InvItem.Miles_wo_Care / 1000).ToString().Length > 2 ? ((int)InvItem.Miles_wo_Care / 1000).ToString().Trim().Substring(0, 2) : ((int)InvItem.Miles_wo_Care / 1000).ToString().Trim()) : "  ";

    Two lines, so many nested ternaries. Need to truncate to thousands? Just divide and then ToString the result, selecting the substring as needed. Be sure to Trim the string which couldn’t possibly contain whitespace; you never know.

    Ironically, the only expression in this block which isn’t a WTF is InvItem.MilesGuaranteeFlag == true, because while we’re comparing against true, MilesGuaranteeFlag is a Nullable<bool>, so this confirms that it has a value and that the value is true.
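For contrast, the whole dance fits in a small helper. Here is a sketch in JavaScript rather than the article's C#, with an illustrative name (`milesLabel` is not from the original code):

```javascript
// Same intent as the ternary pile: truncate miles to thousands, keep at
// most the first two digits, and emit a blank label when no guarantee or
// value is present. slice() tolerates short strings, so no length check.
function milesLabel(guaranteeFlag, miles) {
  if (guaranteeFlag !== true || miles == null) return "  ";
  const thousands = String(Math.trunc(miles / 1000));
  return thousands.slice(0, 2);
}

console.log(milesLabel(true, 123456)); // "12"
console.log(milesLabel(true, 5000));   // "5"
console.log(milesLabel(false, 5000));  // "  "
```

And no Trim anywhere, because a stringified number never grows whitespace.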

    So many miles.

    And I would write five hundred lines
    and I would write five hundred more
    just to be the man who wrote a thousand lines
    Uncaught Exception at line 24

    [Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

    Krebs on SecurityInterview With the Guy Who Tried to Frame Me for Heroin Possession

    In April 2013, I received via U.S. mail more than a gram of pure heroin as part of a scheme to get me arrested for drug possession. But the plan failed and the Ukrainian mastermind behind it soon after was imprisoned for unrelated cybercrime offenses. That individual recently gave his first interview since finishing his jail time here in the states, and he’s shared some select (if often abrasive and coarse) details on how he got into cybercrime and why. Below are a few translated excerpts.

    When I first encountered now-31-year-old Sergei “Fly,” “Flycracker,” “MUXACC” Vovnenko in 2013, he was the administrator of the fraud forum “thecc[dot]bz,” an exclusive and closely guarded Russian language board dedicated to financial fraud and identity theft.

    Many of the heavy-hitters from other fraud forums had a presence on Fly’s forum, and collectively the group financed and ran a soup-to-nuts network for turning hacked credit card data into mounds of cash.

    Vovnenko first came onto my radar after his alter ego Fly published a blog entry that led with an image of my bloodied, severed head and included my credit report, copies of identification documents, pictures of our front door, information about family members, and so on. Fly had invited all of his cybercriminal friends to ruin my financial identity and that of my family.

    Somewhat curious about what might have precipitated this outburst, I was secretly given access to Fly’s cybercrime forum and learned he’d freshly hatched a plot to have heroin sent to my home. The plan was to have one of his forum lackeys spoof a call from one of my neighbors to the police when the drugs arrived, complaining that drugs were being delivered to our house and being sold out of our home by Yours Truly.

    Thankfully, someone on Fly’s forum also posted a link to the tracking number for the drug shipment. Before the smack arrived, I had a police officer come out and take a report. After the heroin showed up, I gave the drugs to the local police and wrote about the experience in Mail From the Velvet Cybercrime Underground.

    Angry that I’d foiled the plan to have me arrested for being a smack dealer, Fly or someone on his forum had a local florist send a gaudy floral arrangement in the shape of a giant cross to my home, complete with a menacing message that addressed my wife and was signed, “Velvet Crabs.”

    The floral arrangement that Fly or one of his forum lackeys had delivered to my home in Virginia.

    Vovnenko was arrested in Italy in the summer of 2014 on identity theft and botnet charges, and spent some 15 months in arguably Italy’s worst prison contesting his extradition to the United States. Those efforts failed, and he soon pleaded guilty to aggravated identity theft and wire fraud, and spent several years bouncing around America’s prison system.

    Although Vovnenko sent me a total of three letters from prison in Naples (a hand-written apology letter and two friendly postcards), he never responded to my requests to meet him following his trial and conviction on cybercrime charges in the United States. I suppose that is fair: To my everlasting dismay, I never responded to his Italian dispatches (the first I asked to be professionally analyzed and translated before I would touch it).

    Seasons greetings from my pen pal, Flycracker.

    After serving his 41 month sentence in the U.S., Vovnenko was deported, although it’s unclear where he currently resides (the interview excerpted here suggests he’s back in Italy, but Fly doesn’t exactly confirm that). 

    In an interview published on the Russian-language security blog Krober.biz, Vovnenko said he began stealing early in life, and by 13 was already getting picked up for petty robberies and thefts.

    A translated English version of the interview was produced and shared with KrebsOnSecurity by analysts at New York City-based cyber intelligence firm Flashpoint.

    Sometime in the mid-aughts, Vovnenko settled with his mother in Naples, Italy, but he had trouble keeping a job for more than a few days. Until a chance encounter led to a front job at a den of thieves.

    “When I came to my Mom in Naples, I could not find a permanent job. Having settled down somewhere at a new job, I would either get kicked out or leave in the first two days. I somehow didn’t succeed with employment until I was invited to work in a wine shop in the historical center of Naples, where I kinda had to wipe the dust from the bottles. But in fact, the wine shop turned out to be a real den and a sales outlet of hashish and crack. So my job was to be on the lookout and whenever the cops showed up, take a bag of goods and leave under the guise of a tourist.”

    Cocaine and hash were plentiful at his employer’s place of work, and Vovnenko said he availed himself of both abundantly. After he’d saved enough to buy a computer, Fly started teaching himself how to write programs and hack stuff. He quickly became enthralled with the romanticized side of cybercrime — the allure of instant cash — and decided this was his true vocation.

    “After watching movies and reading books about hackers, I really wanted to become a sort of virtual bandit who robs banks without leaving home,” Vovnenko recalled. “Once, out of curiosity, I wrote an SMS bomber that used a registration form on a dating site, bypassing the captcha through some kind of rookie mistake in the shitty code. The bomber would launch from the terminal and was written in Perl, and upon completion of its work, it gave out my phone number and email. I shared the bomber somewhere on one of my many awkward sites.”

    “And a couple of weeks later they called me. Nah, not the cops, but some guy who comes from Sri Lanka who called himself Enrico. He told me that he used my program and earned a lot of money, and now he wants to share some of it with me and hire me. By a happy coincidence, the guy also lived in Naples.”

    “When we met in person, he told me that he used my bomber to fuck with a telephone company called Wind. This telephone company had such a bonus service: for each incoming SMS you received two cents on the balance. Well, of course, this guy bought a bunch of SIM cards and began to bomb them, getting credits and loading them into his paid lines, similar to how phone sex works.”

    But his job soon interfered with his drug habit, and he was let go.

    “At the meeting, Enrico gave me 2K euros, and this was the first money I’ve earned, as it is fashionable to say these days, on ‘cybercrime’. I left my previous job and began to work closely with Enrico. But always stoned out of my mind, I didn’t do a good job and struggled with drug addiction at that time. I was addicted to cocaine, as a result, I was pulling a lot more money out of Enrico than my work brought him. And he kicked me out.”

    After striking out on his own, Vovnenko says he began getting into carding big time, and was introduced to several other big players on the scene. One of those was a cigarette smuggler who used the nickname Ponchik (“Doughnut”).

    I wonder if this is the same Ponchik who was arrested in 2013 as being the mastermind behind the Blackhole exploit kit, a crimeware package that fueled an overnight explosion in malware attacks via Web browser vulnerabilities.

    In any case, Vovnenko had settled on some schemes that were generating reliably large amounts of cash.

    “I’ve never stood still and was not focusing on carding only, with the money I earned, I started buying dumps and testing them at friends’ stores,” Vovnenko said. “Mules, to whom I signed the hotlines, were also signed up for cashing out the loads, giving them a mere 10 percent for their work. Things seemed to be going well.”

    FAN MAIL

    There is a large chronological gap in Vovnenko’s account of his cybercrime life story from that point on until the time he and his forum friends started sending heroin, large bags of feces and other nasty stuff to our Northern Virginia home in 2013.

    Vovnenko claims he never sent anything and that it was all done by members of his forum.

    -Tell me about the packages to Krebs.
    “That ain’t me. Suitcase filled with sketchy money, dildoes, and a bouquet of coffin wildflowers. They sent all sorts of crazy shit. Forty or so guys would send. When I was already doing time, one of the dudes sent it. By the way, Krebs wanted to see me. But the lawyer suggested this was a bad idea. Maybe he wanted to look into my eyes.”

    In one part of the interview, Fly is asked about but only briefly touches on how he was caught. I wanted to add some context here because this part of the story is richly ironic, and perhaps a tad cathartic.

    Around the same time Fly was taking bitcoin donations for a fund to purchase heroin on my behalf, he was also engaged to be married to a nice young woman. But Fly apparently did not fully trust his bride-to-be, so he had malware installed on her system that forwarded him copies of all email that she sent and received.

    Fly/Flycracker discussing the purchase of a gram of heroin from Silk Road seller “10toes.”

    But Fly would make at least two big operational security mistakes in this spying effort: First, he had his fiancée’s messages forwarded to an email account he’d used for plenty of cybercriminal stuff related to his various “Fly” identities.

    Mistake number two was the password for his email account was the same as one of his cybercrime forum admin accounts. And unbeknownst to him at the time, that forum was hacked, with all email addresses and hashed passwords exposed.

    Soon enough, investigators were reading Fly’s email, including the messages forwarded from his wife’s account that had details about their upcoming nuptials, such as shipping addresses for their wedding-related items and the full name of Fly’s fiancée. It didn’t take long to zero in on Fly’s location in Naples.

    While it may sound unlikely that a guy so enmeshed in the cybercrime space could make such rookie security mistakes, I have found that a great many cybercriminals actually have worse operational security than the average Internet user.

    I suspect this may be because the nature of their activities requires them to create vast numbers of single- or brief-use accounts, and in general they tend to re-use credentials across multiple sites, or else pick very poor passwords — even for critical resources.

    In addition to elaborating on his hacking career, Fly talks a great deal about his time in various prisons (including their culinary habits), and an apparent longing or at least lingering fondness for the whole carding scene in general.

    Towards the end, Fly says he’s considering going back to school, and that he may even take up information security as a field of study. I wish him luck in that endeavor, whatever it is, as long as he can also avoid stealing from people.

    I don’t know what I would have written many years ago to Fly had I not been already so traumatized by receiving postal mail from him. Perhaps it would go something like this:

    “Dear Fly: Thank you for your letters. I am very sorry to hear about the delays in your travel plans. I wish you luck in all your endeavors — and I sincerely wish the next hopeful opportunity you alight upon does not turn out to be a pile of shit.”

    The entire translated interview is here (PDF). Fair warning: Many readers may find some of the language and topics discussed in the interview disturbing or offensive.

    ,

    Cory DoctorowMy appearance on Futurithmic

    I was delighted to sit down with my old friend Michael Hainsworth for his new TV show Futurithmic, where we talked about science fiction, technological self-determination, internet freedom. They’ve just posted the episode and it’s fabulous!

    CryptogramIneffective Package Tracking Facilitates Fraud

    This article discusses an e-commerce fraud technique in the UK. Because the Royal Mail only tracks packages to the postcode -- and not to the address -- it's possible to commit a variety of frauds. Tracking systems that rely on a signature are not similarly vulnerable.

    Worse Than FailureCodeSOD: And it was Uphill Both Ways

    Today’s submission is a little bit different. Kevin sends us some code where the real WTF is simply that… it still is in use somewhere. By the standards of its era, I’d actually say that the code is almost good. This is more of a little trip down memory lane, about the way web development used to work.

    Let’s start with the HTML snippet:

    <frameset  border="0" frameborder="0" framespacing="0" cols="*,770,*"  onLoad="MaximizeWindow()">
    	<!-- SNIPPED... -->
    </frameset>

    In 2019, if you want to have a sidebar full of links which allow users to click, and have a portion of the page update while not refreshing the whole page, you probably write a component in the UI framework of your choice. In 1999, you used frames. Honestly, by 1999, frames were already on the way out (he says, despite maintaining a number of frames-based applications well into the early 2010s), but for a brief period in web development history, they were absolutely all the rage.

    In fact, shortly after I made my own personal home page, full of <marquee> tags, creative abuse of the <font> tag, and a color scheme which was hot pink and neon green, I showed it to a friend, who condescendingly said, “What, you didn’t even use frames?” He made me mad enough that I almost deleted my Geocities account.

    Frames are dead, but now we have <iframe>s, which do the same thing but are almost entirely used for embedding ads or YouTube videos. Some things will never truly die.

      IE4 = (document.all) ? true : false;
      NS4 = (document.layers) ? true : false;
      ver4 = (IE4||NS4);
    
      if (ver4!=true){  
        function MaximizeWindow(){
            alert('Please install a browser with support for Javascript 1.2. This website works for example with Microsofts Internet Explorer or Netscapes Navigator in versions 4.x or newer!')
            self.history.back();
            }
        }
      
      if (ver4==true){
        function MaximizeWindow(){
        window.focus();
    	window.moveTo(0,0)
    	window.resizeTo(screen.availWidth,screen.availHeight)
          }
    }

    Even today, in the era of web standards, we still constantly need to use shims and compatibility checks. The reasons are largely the same as they were back then: standards (or conventions) evolve quickly, vendors don’t care about standards, and browsers represent fiendishly complicated blocks of software. Today, we have better ways of doing those checks, but here we do our check with the first two lines of code.

    And this, by the way, is why I said this code was “almost good”. In the era of “a browser with support for Javascript 1.2”, the standard way of checking browser versions was mining the user-agent string. And because of that we have situations where browsers report insanity like Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36.

    Even in the late 90s though, the “right” way to check if your site was compatible with a given browser was to check for the features you planned to use. Which this code does- specifically, it’s looking for document.all or document.layers, which were two different approaches to exploring the DOM before we had actual tools for exploring the DOM. In this era, we’d call stuff like this “DHTML” (the D is for “dynamic”), and we traversed the DOM as a chain of properties, doing things like document.forms[0].inputs[0] to access fields on the form.
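That principle survives unchanged today: probe for the capability itself. A minimal sketch, using plain objects as stand-ins for `document` so it runs outside a browser:

```javascript
// Feature detection in miniature: ask whether an object exposes the
// capability you need, instead of sniffing a user-agent string.
// These objects are stand-ins for the real `document` of each era.
function supports(env, feature) {
  return typeof env[feature] !== "undefined";
}

const ie4Document = { all: [] };                      // what IE4 exposed
const ns4Document = { layers: [] };                   // what Netscape 4 exposed
const modernDocument = { querySelector: () => null }; // what we'd probe today

console.log(supports(ie4Document, "all"));              // true
console.log(supports(modernDocument, "querySelector")); // true
console.log(supports(modernDocument, "layers"));        // false
```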

    This is only almost good, though, because it doesn’t gracefully degrade. If you don’t have a browser which reports these properties (document.all or document.layers), we just pop up an alert and forcibly hit the back button on your browser. Then again, if you do have a browser that supports those properties, it’s just going to go and forcibly hit the “Maximize” button on you, which is also not great, but I’m sure would make the site look quite impressive on an 800x600 resolution screen. I’m honestly kind of surprised that this doesn’t also check your resolution, and provide some warning about looking best at a certain resolution, which was also pretty standard stuff for this era.

    Again, the real WTF is that this code still exists out in the wild somewhere. Kevin found it when he encountered a site that kept kicking him back to the previous page. But there’s a deeper WTF: web development is bad. It’s always been bad. It possibly always will be bad. It’s complicated, and hard, and for some reason we’ve decided that we need to build all our UIs using a platform where a paragraph is considered a first-class UI element comparable to a button. But the next time you struggle to “grok” the new hot JavaScript framework, just remember that you’re part of a long history of people who have wrestled with accomplishing basic tasks on the web, and that it’s always been a hack, whether it’s a hack in the UA-string, a hack of using frames to essentially embed browser windows inside of browser windows, or a hack to navigate the unending efforts of browser vendors to hamstring and befuddle the competition.

    [Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

    ,

    CryptogramCrown Sterling Claims to Factor RSA Keylengths First Factored Twenty Years Ago

    Earlier this month, I made fun of a company called Crown Sterling, for...for...for being a company that deserves being made fun of.

    This morning, the company announced that they "decrypted two 256-bit asymmetric public keys in approximately 50 seconds from a standard laptop computer." Really. They did. This keylength is so small it has never been considered secure. It was too small to be part of the RSA Factoring Challenge when it was introduced in 1991. In 1977, when Ron Rivest, Adi Shamir, and Len Adleman first described RSA, they included a challenge with a 426-bit key. (It was factored in 1994.)

    The press release goes on: "Crown Sterling also announced the consistent decryption of 512-bit asymmetric public key in as little as five hours also using standard computing." They didn't demonstrate it, but if they're right they've matched a factoring record set in 1999. Five hours is significantly less than the 5.2 months it took in 1999, but slower than would be expected if Crown Sterling just used the 1999 techniques with modern CPUs and networks.
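For a sense of how little resistance tiny keys offer, here is a sketch of Pollard's rho, a 1970s-era algorithm, factoring a deliberately minuscule toy modulus. This is not the method behind the 1999 records (those used far stronger techniques on far larger numbers); it only illustrates that key size is everything:

```javascript
// Pollard's rho factoring: finds a nontrivial factor of a composite n.
// The toy modulus below is ~27 bits; it falls instantly, just as 256-bit
// moduli fall in seconds to stronger, standard algorithms.
function gcd(a, b) { while (b) { [a, b] = [b, a % b]; } return a; }

function pollardRho(n, c = 1n) {
  if (n % 2n === 0n) return 2n;
  let x = 2n, y = 2n, d = 1n;
  const f = (v) => (v * v + c) % n;
  while (d === 1n) {
    x = f(x);      // tortoise: one step
    y = f(f(y));   // hare: two steps
    d = gcd(x > y ? x - y : y - x, n);
  }
  return d === n ? pollardRho(n, c + 1n) : d; // cycle without a factor: retry
}

const n = 100160063n;  // 10007 * 10009, a toy "RSA" modulus
const p = pollardRho(n);
console.log(p, n / p); // the two prime factors
```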

    Is anyone taking this company seriously anymore? I honestly wouldn't be surprised if this was a hoax press release. It's not currently on the company's website. (And, if it is a hoax, I apologize to Crown Sterling. I'll post a retraction as soon as I hear from you.)

    EDITED TO ADD: First, the press release is real. And second, I forgot to include the quote from CEO Robert Grant: "Today's decryptions demonstrate the vulnerabilities associated with the current encryption paradigm. We have clearly demonstrated the problem which also extends to larger keys."

    People, this isn't hard. Find an RSA Factoring Challenge number that hasn't been factored yet and factor it. Once you do, the entire world will take you seriously. Until you do, no one will. And, bonus, you won't have to reveal your super-secret world-destabilizing cryptanalytic techniques.

    EDITED TO ADD (9/21): Others are laughing at this, too.

    EDITED TO ADD (9/24): More commentary.

    CryptogramRussians Hack FBI Comms System

    Yahoo News reported that the Russians have successfully targeted an FBI communications system:

    American officials discovered that the Russians had dramatically improved their ability to decrypt certain types of secure communications and had successfully tracked devices used by elite FBI surveillance teams. Officials also feared that the Russians may have devised other ways to monitor U.S. intelligence communications, including hacking into computers not connected to the internet. Senior FBI and CIA officials briefed congressional leaders on these issues as part of a wide-ranging examination on Capitol Hill of U.S. counterintelligence vulnerabilities.

    These compromises, the full gravity of which became clear to U.S. officials in 2012, gave Russian spies in American cities including Washington, New York and San Francisco key insights into the location of undercover FBI surveillance teams, and likely the actual substance of FBI communications, according to former officials. They provided the Russians opportunities to potentially shake off FBI surveillance and communicate with sensitive human sources, check on remote recording devices and even gather intelligence on their FBI pursuers, the former officials said.

    It's unclear whether the Russians were able to recover encrypted data or just perform traffic analysis. The Yahoo story implies the former; the NBC News story says otherwise. It's hard to tell if the reporters truly understand the difference. We do know, from research Matt Blaze and others did almost ten years ago, that at least one FBI radio system was horribly insecure in practice -- but not in a way that breaks the encryption. Its poor design just encourages users to turn off the encryption.

    Worse Than FailureCodeSOD: Do You Need this

    I’ve written an unfortunate amount of “useless” code in my career. In my personal experience, that’s code where I write it for a good reason at the time- like it’s a user request for a feature- but it turns out nobody actually needed or wanted that feature. Or, perhaps, if I’m being naughty, it’s a feature I want to implement just for the sake of doing it, not because anybody asked for it.

    The code’s useless because it never actually gets used.

    Claude R found some code which got used a lot, but was useless from the moment it was coded. Scattered throughout the codebase were calls to getInstance(), as in, Task myTask = aTask.getInstance().

    At first glance, Claude didn’t think much of it. At second glance, Claude worried that there was some weird case of deep indirection where aTask wasn’t actually a concrete Task object and instead was a wrapper around some factory-instantiated concrete class or something. It didn’t seem likely, but this was Java, and a lot of Java code will follow patterns like that.

    So Claude took a third glance, and found some code that’s about as useful as a football bat.

    public Task getInstance(){
        return this;
    }

    To invoke getInstance you need a variable that references the object, which means you have a variable referencing the same thing as this. That is to say, this is unnecessary.
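The same no-op, sketched in JavaScript to make the circularity visible (this class is a stand-in, not Claude's original):

```javascript
// A method that returns `this` hands back the very reference you needed
// in order to call it, so the call can never yield anything new.
class Task {
  getInstance() { return this; }
}

const aTask = new Task();
const myTask = aTask.getInstance();
console.log(myTask === aTask); // true: same object either way
```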

    [Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

    Sam VargheseSaudis want US to fight another war for them

    On 3 August 1990, the morning after Iraq invaded Kuwait, the Saudi Arabian government was more than a bit jittery, fearing that the Iraqi dictator Saddam Hussein would make Riyadh his next target. The Saudis had been some of the bigger buyers of American and British arms, but they found that they had a big problem.

    And that was the fact that all the princes who were pilots of F-16 jets, considered one of the glamour jobs, had gone missing. Empty jets were of no use. How would the Saudis defend their country if Baghdad decided to march into the country’s Eastern Region? If Hussein decided to do so, he would be in control of a sizeable portion of the world’s oil resources and many countries would be royally screwed.

    Then the Americans came calling, ready with doctored satellite imagery to scare the hell out of King Fahd and his colleagues. Finally, the king gave in to Dick Cheney’s arguments and asked the Americans to come into Saudi Arabia to defend the country.

    The situation appears to be repeating itself after missiles hit Saudi Arabian oil installations two weeks ago, though this time the Americans seem reluctant to get into a fight with Iran, which has been blamed for the attack.

    There is not a shred of proof to implicate Teheran apart from American and Saudi claims, but then, when has the Western press ever needed anything more than claims to point the finger at Iran?

    The Saudis have been using foreign labour for a long time to do all the work in the country, right from cleaning the toilets to managing their companies. And they would, no doubt, be looking to the Americans to fight Iran too if it becomes necessary.

    The fact is, the Saudis have more than enough military equipment to protect their country. But either they are incompetent to the point where they are unable to use it as it should be used, or they are lazy and want others to do the work for them. After all, these are royals, right?

    The Americans made a profit on the war which was waged in 1991 to eject Iraq from Kuwait; they spent US$51 billion and raked in US$60 billion, with contributions being made by numerous countries, all worried that oil prices would put their economies into negative territory.

    But Iran will not be a pushover as Iraq was. And there is unlikely to be any kind of coalition like the one assembled in 1990. Nobody has the appetite for a fight. The world economy is looking decidedly shaky. And after the US pulled out of a deal to prevent Iran from developing nuclear weapons, countries in Europe are not exactly enthusiastic about joining the Americans in any more crazy adventures.

    ,

    TEDIs geoengineering a good idea? A brief Q&A with Kelly Wanser and Tim Flannery

    This satellite image shows marine clouds off the Pacific West Coast of the United States. The streaks in the clouds are created by the exhaust from ships, which include both greenhouse gases and particulates like sulfates that mix with clouds and temporarily make them brighter. Brighter clouds reflect more sunlight back to space, cooling the climate.

    As we recklessly warm the planet by pumping greenhouse gases into the atmosphere, some industrial emissions also produce particles that reflect sunshine back into space, putting a check on global warming that we’re only starting to understand. In her talk at TEDSummit 2019, “Emergency medicine for our climate fever,” climate activist Kelly Wanser asked: Can we engineer ways to harness this effect and reduce the effects of global warming?

    This idea, known as “cloud brightening,” is seen as controversial. After her talk, Wanser was joined onstage by environmentalist Tim Flannery — who gave a talk just moments earlier about the epic carbon-capturing abilities of seaweed — to discuss cloud brightening and how it could help restore our climate to health. Check out their exchange below.

    CryptogramI'm Looking to Hire a Strategist to Help Figure Out Public-Interest Tech

    I am in search of a strategic thought partner: a person who can work closely with me over the next 9 to 12 months in assessing what's needed to advance the practice, integration, and adoption of public-interest technology.

    All of the details are in the RFP. The selected strategist will work closely with me on a number of clear deliverables. This is a contract position that could possibly become a salaried position in a subsequent phase, and under a different agreement.

    I'm working with the team at Yancey Consulting, who will follow up with all proposers and manage the process. Please email Lisa Yancey at lisa@yanceyconsulting.com.

    Google AdsenseAdSense now understands Marathi

    Today, we’re excited to announce the addition of Marathi, a language spoken by over 80 million people in Maharashtra, India, and many other countries around the world, to the family of AdSense supported languages.

    Interest in Marathi-language content has been growing steadily over the last few years. With this launch, AdSense provides an easy way for publishers to monetize the content they create in Marathi, and advertisers can connect with a Marathi-speaking audience through relevant ads.

    To start monetizing your Marathi content website with Google AdSense:

    1. Check the AdSense Program policies and make sure your site is compliant.
    2. Sign up for an AdSense account.
    3. Add the AdSense code to start displaying relevant ads to your users.
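    Step 3 is a copy-and-paste job: AdSense generates a small snippet to place in your page’s HTML. As a rough sketch of the standard asynchronous ad unit (the `ca-pub-0000000000000000` client ID and `data-ad-slot` value below are placeholders — use the values AdSense generates for your own account), it looks something like this:

    ```html
    <!-- Load the AdSense library once per page -->
    <script async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-0000000000000000"
            crossorigin="anonymous"></script>

    <!-- One ad unit; AdSense fills the <ins> element with a relevant ad -->
    <ins class="adsbygoogle"
         style="display:block"
         data-ad-client="ca-pub-0000000000000000"
         data-ad-slot="0000000000"
         data-ad-format="auto"></ins>
    <script>
      (adsbygoogle = window.adsbygoogle || []).push({});
    </script>
    ```

    No language-specific configuration should be needed in the snippet itself; once a language is supported, AdSense matches ads to the page’s content automatically.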

    Welcome to AdSense! Sign Up now!

    Posted by:
    AdSense Internationalization Team

    CryptogramFrance Outlines Its Approach to Cyberwar

    In a document published earlier this month (in French), France described the legal framework in which it will conduct cyberwar operations. Lukasz Olejnik explains what it means, and it's worth reading.

    Worse Than FailureAccounting for Changes

    Sara works as a product manager for a piece of accounting software at a large, international company. As a product manager, Sara interacts with their internal customers- the accounting team- and Bradley is the one she always butts heads with.

    Bradley's idea of a change request is to send a screenshot, with no context, and a short message, like "please fix", "please advise", or "this is wrong". It would take weeks of emails and, if they were lucky, a single phone call, for Sara's team to figure out what needs to be fixed, because Bradley is "too busy" to provide any more information.

    One day, Bradley sent a screenshot of their value added taxation subsystem, saying, "This is wrong. Please fix." The email was much longer, of course, but the rest of the email was Bradley's signature block, which included a long list of titles, certifications, a few "inspirational" quotes, and his full name.

    Sara replied. "Hi Brad," her email began- she had once called him "Bradley" which triggered his longest email to date, a screed about proper forms of address. "Thanks for notifying us about a possible issue. Can you help me figure out what's wrong? In your screen shot, I see SKU numbers, tax information, and shipping details."

    Bradley's reply was brief. "Yes."

    Sara sighed and picked up her phone. She called Bradley's firm, which landed her with an assistant, who tracked down another person, who asked another, who finally got Bradley to confirm the issue: in some cases, the Value Added Tax calculation wasn't using the right rate, because some situations required multiple rates to be applied at the same time.

    It was a big update to their VAT rules. Sara managed to talk to some SMEs at her company to refine the requirements, contacted development, and got the modifications built in the next sprint.

    "Hi, Bradley," Sara started her next email. "Thank you for bringing the VAT issue to our attention. Based on your description, we have implemented an update. We've pushed it to the User Acceptance Testing environment. After you sign off that the changes are correct, we will deploy it into production. Let me know if there are any issues with the update." The email included links to the UAT process document, the UAT test plan template, and all the other details that they always provided to guide the UAT process.

    A week later, Bradley sent an email. "It works." That was weird, as Bradley almost never signed off until he had pushed in a few unrelated changes. Still, she had the sign off. She attached the email to the ticket and once the changes were pushed to production, she closed the ticket.

    A few days later, the entire accounting team went into a meltdown and started filing support request after support request. One user submitted ten by himself- and that user was the CFO. This turned into a tense meeting between the CFO, Bradley, Sara, and Sara's boss.

    "How did this change get released to production?"

    Sara pulled up the ticket. She showed the screenshots, referenced the specs, showed the development and QA test plans, and finally, the email from Bradley, declaring the software ready to go.

    The CFO turned to Bradley.

    "Oh," Bradley said, "we weren't able to actually test it. We didn't have access to our test environment at all last week."

    "What?" Sara asked. "Why did you sign off on the change if you weren't able to test it!?"

    "Well, we needed it to go live on Monday."

    After that, a new round of requirements gathering happened, and Sara's team was able to implement them. Bradley wasn't involved, and while he still works at the same company, he's been shifting around from position to position, trying to find the best fit…



    Sam VargheseWas Garcès the right choice to officiate SA-NZ game?

    The authorities who select referees for matches at the Rugby World Cup do not seem to think very deeply about the choices they make. This is, perhaps, what resulted in the French referee Jérôme Garcès being put in charge of the game between New Zealand and South Africa on 21 September.

    Some background is necessary to understand why Garcès’ appointment was questionable. He had officiated in the game between Australia and New Zealand earlier this year and handed out a red card to Kiwi lock Scott Barrett for a charge on Australian skipper Michael Hooper. This was a decision that was questioned in many quarters; that Scott Barrett deserved a yellow card was not in question, but a red card was deemed to be a gross over-reaction.

    Scott Barrett was banned for two matches after that and was making his return in Saturday’s game. Thus there were a fair few people observing how Garcès would officiate, especially when it came to Scott Barrett.

    An additional factor that made Garcès unsuitable for this game is the regular claim that referees go easy on New Zealand because of their influence in world rugby; apart from those who come to watch a game because they are fans of this team or that, there is a huge contingent of people who come to watch the All Blacks because they have some sort of mystique around them.

    This claim is made by officials of teams which have been getting hammered by the Kiwis for years, so one can put it down to that variety of fruit which is common these days: sour grapes. The fact is that all teams take advantage of the rules to the extent possible.

    Garcès, thus, had to avoid being seen as going easy on New Zealand. And he made some very elementary errors.

    The most glaring mistake he made was when he failed to send off South Africa winger Makazole Mapimpi for not releasing the New Zealand standoff Richie Mo’unga, after the latter had booted a ball downfield, collected it five metres from the goalline and, though somewhat off-balance, was set to stumble over the line and score. Mapimpi tackled him but did not release Mo’unga as the rules require, as there were no other South African players around to lend support.

    Given that South Africa indulges in cynical tactics like this quite often — who can forget the professional fouls committed by the likes of Bakkies Botha, Victor Matfield and Bryan Habana in years gone by? — a hardline referee may well have awarded the All Blacks a penalty try.

    But Garcès did not go beyond a regulation penalty. He earned bitter criticism from the New Zealand captain Kieran Read who described him as “gutless” right there on the field.

    Garcès also overlooked a number of neck rolls by South Africa’s Pieter-Steph du Toit on the All Blacks flanker Ardie Savea. Springboks giant lock Eben Etzebeth also grabbed the neck of a Kiwi player here and there, but Garcès had no eyes for these tactics. All this in a year when there have been repeated reports that rugby referees have been ordered to crack down on tackles that come anywhere near the head.

    The French official also missed a number of questionable tackles by the New Zealand players. He was put in a tricky situation by whoever selected him to officiate in the game and came out smelling of anything but roses.

    But then Garcès was not responsible for the most shocking refereeing decision of the opening weekend of the tournament. This honour was claimed by British referee Rowan Kitt who was officiating as the television match official in the game between Australia and Fiji.

    Kitt had nothing to offer on a tackle by Australian winger Reece Hodge on Fiji’s Peceli Yato, the team’s best player up to that point of the game: a shoulder-led, no-arms challenge to the head that resulted in Yato having to leave the field with concussion. He played no further part in the game.

    On-field official Ben O’Keeffe missed the tackle too, but he was somewhat unsighted as the tackle took place close to the sideline. Former referee Jonathan Kaplan was scathing in his criticism of Kitt.

    “On this occasion Kitt ruled that the challenge was legal and I find that extremely surprising,” said the 70-Test referee, a highly respected official in his day, in a column for the UK’s Daily Telegraph. “To let it pass without any sanction whatsoever was clearly the wrong call.”

    He added: “Going into this tournament World Rugby have been very clear about contact with the head and what constitutes a red card under their new High Tackle Sanction framework.

    “With that in mind I have absolutely no idea why Reece Hodge was not sent off for his tackle on Fiji’s Peceli Yato. To me it was completely clear and an almost textbook example of the type of challenge they are trying to outlaw.”

    Exactly what it will take for referees to rule equally on all infringements remains to be seen. Perhaps someone needs to die on the field in real-time before rugby officials sit up and take notice.

    CryptogramA Feminist Take on Information Privacy

    Maria Farrell has a really interesting framing of information/device privacy:

    What our smartphones and relationship abusers share is that they both exert power over us in a world shaped to tip the balance in their favour, and they both work really, really hard to obscure this fact and keep us confused and blaming ourselves. Here are some of the ways our unequal relationship with our smartphones is like an abusive relationship:

    • They isolate us from deeper, competing relationships in favour of superficial contact -- 'user engagement' -- that keeps their hold on us strong. Working with social media, they insidiously curate our social lives, manipulating us emotionally with dark patterns to keep us scrolling.

    • They tell us the onus is on us to manage their behavior. It's our job to tiptoe around them and limit their harms. Spending too much time on a literally-designed-to-be-behaviorally-addictive phone? They send company-approved messages about our online time, but ban from their stores the apps that would really cut our use. We just need to use willpower. We just need to be good enough to deserve them.

    • They betray us, leaking data / spreading secrets. What we shared privately with them is suddenly public. Sometimes this destroys lives, but hey, we only have ourselves to blame. They fight nasty and under-handed, and are so, so sorry when they get caught that we're meant to feel bad for them. But they never truly change, and each time we take them back, we grow weaker.

    • They love-bomb us when we try to break away, piling on the free data or device upgrades, making us click through page after page of dark pattern, telling us no one understands us like they do, no one else sees everything we really are, no one else will want us.

    • It's impossible to just cut them off. They've wormed themselves into every part of our lives, making life without them unimaginable. And anyway, the relationship is complicated. There is love in it, or there once was. Surely we can get back to that if we just manage them the way they want us to?

    Nope. Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true.

    EDITED TO ADD (9/22) Cindy Cohn echoed a similar sentiment in her essay about John Barlow and his legacy.


    Cory DoctorowWhy do people believe the Earth is flat?

    I have an op-ed in today’s Globe and Mail, “Why do people believe the Earth is flat?” wherein I connect the rise of conspiratorial thinking to the rise in actual conspiracies, in which increasingly concentrated industries are able to come up with collective lobbying positions that result in everything from crashing 737s to toxic baby-bottle liners to the opioid epidemic.

    In a world where official processes are understood to be corruptible and thus increasingly unreliable, we don’t just have a difference in what we believe to be true, but in how we believe we know whether something is true or not. Without an official, neutral, legitimate procedure for rooting out truth — the rule of law — we’re left just trusting experts who “sound right to us.”

    Big Tech has a role to play here, but it’s not in automated brainwashing through machine learning: rather, it’s in the ability for conspiracy peddlers to find people who are ripe for their version of the truth, and in the ability of converts to find one another and create communities that make them resilient against social pressure to abandon their conspiracies.

    Fighting conspiracies, then, is ultimately about fighting the corruption that makes them plausible — not merely correcting the beliefs of people who have come under their sway.

    They say that ad-driven companies such as Google and Facebook threw so much R&D at using data-mining to persuade people to buy refrigerators, subprime loans and fidget-spinners that they accidentally figured out how to rob us of our free will. These systems put our online history through a battery of psychological tests, automatically pick an approach that will convince us, then bombard us with an increasingly extreme, increasingly tailored series of pitches until we’re convinced that creeping sharia and George Soros are coming for our children.

    This belief is rooted in a deep and completely justified mistrust of the Big Tech companies, which have proven themselves liars time and again on matters of taxation, labour policy, complicity in state surveillance and oppression, and privacy practices.

    But this well-founded skepticism is switched off when it comes to evaluating Big Tech’s self-serving claims about the efficacy of its products. Exhibit A for the Mind-Control Ray theory of conspiratorial thinking is the companies’ own sales literature, wherein they boast to potential customers about the devastating impact of their products, which, they say, are every bit as terrific as the critics fear they are.

    Why do people believe the Earth is flat? [Cory Doctorow/The Globe and Mail]

    LongNowHow to Practice Long-term Thinking in a Distracted World

    WIRED’s Editor-in-Chief Nicholas Thompson recently interviewed Bina Venkataraman about her new book, The Optimist’s Telescope: Thinking Ahead in a Reckless Age. Venkataraman’s book focuses on the need for more long-term thinking in the world, and explores issues that have long been a focus for us at Long Now, including the nuclear waste storage problem (discussed in the interview).

    Nicholas Thompson: So what I want to do in this conversation with Bina is start out with some personal stuff, move to some organizational stuff, and then try to get to some complicated stuff. So let’s begin with the personal: Why did you write this?

    Bina Venkataraman: Well, there’s two answers to that question. The first is that I think we are part of a generation of humanity who have never faced higher stakes for thinking ahead. We’re living longer than our grandparents or their grandparents, and we’re going to need to think about our own futures and how we plan for them. If you look at problems like climate change, our knowledge of how we impact the future is far greater than previous generations of humanity. But we are in a culture that’s encouraging instant gratification. And so I started to wonder: Is it actually possible to think ahead?

    The personal part of the answer is that I was working in the White House, and part of my job was to meet with executives of major corporations—like food corporations, for example—and talk about the threat of drought and heat waves to their supply chain. So how farms are going to be affected, the potential for crop failure and a warming climate. One time I sat across from an executive, and he looked at me and said, “You know, I really care about this problem. I have children. I have grandchildren, but I just can’t think ahead. You know, my board and my shareholders have me focused on the quarter. I just can’t think ahead.”