Planet Russell

Cryptogram: Subway Elevators and Movie-Plot Threats

Local residents are opposing adding an elevator to a subway station because terrorists might use it to detonate a bomb. No, really. There's no actual threat analysis, only fear:

"The idea that people can then ride in on the subway with a bomb or whatever and come straight up in an elevator is awful to me," said Claudia Ward, who lives in 15 Broad Street and was among a group of neighbors who denounced the plan at a recent meeting of the local community board. "It's too easy for someone to slip through. And I just don't want my family and my neighbors to be the collateral on that."

[...]

Local residents plan to continue to fight, said Ms. Gerstman, noting that her building's board decided against putting decorative planters at the building's entrance over fears that shards could injure people in the event of a blast.

"Knowing that, and then seeing the proposal for giant glass structures in front of my building ­- ding ding ding! -- what does a giant glass structure become in the event of an explosion?" she said.

In 2005, I coined the term "movie-plot threat" to denote a threat scenario that caused undue fear solely because of its specificity. Longtime readers of this blog will remember my annual Movie-Plot Threat Contests. I ended the contest in 2015 because I thought the meme had played itself out. Clearly there's more work to be done.

Worse Than Failure: Representative Line: As the Clock Terns

“Hydranix” can’t provide much detail about today’s code, because they’re under a “strict NDA”. All they could tell us was that it’s C++, and it’s part of a “mission critical” front end package. Honestly, I think this line speaks for itself:

(mil == 999 ? (!(mil = 0) && (sec == 59 ? 
  (!(sec = 0) && (min == 59 ? 
    (!(min = 0) && (++hou)) : ++min)) : ++sec)) : ++mil);

“Hydranix” suspects that this powers some sort of stopwatch, but they’re not really certain what its role is.
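For readers who don’t want to untangle the ternaries by hand, here is a sketch of what the line appears to do, written conventionally. The variable names come from the original; the function wrapper and the assumption that it is called once per millisecond are ours.

// A readable sketch of the apparent intent: a cascading
// millisecond/second/minute/hour counter (assumed to be
// called once per millisecond tick).
void tick(int &mil, int &sec, int &min, int &hou) {
    if (++mil > 999) {        // 1000 ms per second
        mil = 0;
        if (++sec > 59) {     // 60 s per minute
            sec = 0;
            if (++min > 59) { // 60 min per hour
                min = 0;
                ++hou;        // hours just keep counting
            }
        }
    }
}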

Planet Debian: Wouter Verhelst: Day one of the pre-FOSDEM Debconf Videoteam sprint

I'm at the Linux Belgium training center, where this last week before FOSDEM the DebConf video team is holding a sprint. The nice folks of Linux Belgium made us feel pretty welcome:

Linux Belgium message

Yesterday was the first day of that sprint. I had planned to blog about it then, but I forgot, so here goes (first thing this morning).

Nattie and Tzafrir

Nattie and Tzafrir have been spending much of their time proofreading our documentation and giving us feedback to improve its readability and accuracy.

Stefano

Stefano spent some time working on his YouTube uploader. He didn't get it to a committable state yet, but more is to be expected today.

He also worked on landing a gstreamer pipeline change that was suggested at LCA last week (which he also attended), and did some work on setting up the debconf18 dev website.

Finally, he fixed the irker config on salsa so that it would actually work and send commit messages to IRC after a push.

Kyle

Kyle wrote a lot of extra documentation on the Opsis that we use and on various other subjects, and also fixed some of the documentation templates so that things look better and link correctly.

Wouter

I spent much of my time working on the FOSDEM SReview instance, which will be used next weekend; that also allowed me to improve the code quality of some of the newer stuff that I wrote over the past few months. In between things, being the local guy here, I also drove around getting a bit of stuff that we needed.

Pollo

Pollo isn't here, but he's sprinting remotely from home. He spent some time setting up gitlab-ci so that it would build our documentation after pushing to salsa.

Planet Debian: Russ Allbery: Review: Reap the Wild Wind

Review: Reap the Wild Wind, by Julie E. Czerneda

Series: Stratification #1
Publisher: DAW
Copyright: 2007
Printing: September 2008
ISBN: 0-7564-0487-8
Format: Mass market
Pages: 459

Reap the Wild Wind is the first book in the Stratification series. This is set in the same universe as the Trade Pact series (which starts with A Thousand Words for Stranger), but goes back in time, telling the story of the Om'ray before they left Cersi to become the Clan. You may have more interest in this series if you read and enjoyed the Trade Pact trilogy, but it's not a prerequisite. It's been over ten years since I read that series, I've forgotten nearly everything about it except the weird gender roles, and I didn't have any trouble following the story.

Aryl Sarc is a member of the Yena Clan, who live a precarious existence in the trees above a vast swamp filled with swarms of carnivorous creatures. They are one of several isolated clans of Om'ray on the planet Cersi. Everything about the clans is tightly constrained by an agreement between the Om'ray, the Tiktik, and the Oud to maintain a wary peace. The agreement calls for nothing about the nature of the world or its three species to ever change.

Reap the Wild Wind opens with the annual dresel harvest: every fall, a great, dry wind called the M'hir flows down the mountains and across the forest in which the Yena live, blowing free the ripe dresel for collection at the treetops. Dresel is so deeply a part of Aryl's world that the book never explains it, but the reader can intuit that it contains some essential nutrient without which all the Yena would die. But disaster strikes while Aryl is watching the dresel harvest, disaster in the form of a strange flying vehicle no one has seen before and an explosion that kills many of the Yena and ruins the harvest essential to life.

The early part of the book is the emotional and political fallout of this disaster. Aryl discovers an unknown new talent, saving the man she's in love with (although they're too young to psychically join in the way of the Om'ray) at the cost of her brother. There's a lot of angst, a lot of clichéd descriptions of internal psychic chaos (the M'hir that will be familiar to readers of the Trade Pact books), and a lot of her mother being nasty and abusive in ways that Aryl doesn't recognize as abuse. I struggled to get into the story; Aryl was an aimless mess, and none of the other characters were appealing. The saving grace for me in the early going was the interludes with Enris, an Om'ray from a far different clan, a metalworker whose primary dealings are with the Oud instead of the Tiktik.

This stage of the story thankfully doesn't last. Aryl eventually ends up among the Tiktik, struggling to understand their far different perspective on the world, and then meets the visitors who caused the disaster. They're not only from outside of Aryl's limited experience; they shouldn't even exist by the rules of Aryl's world. As Aryl slowly tries to understand what they're doing, the scope of the story expands, with hints that Aryl's world is far more complicated than she realized.

Czerneda sticks with a tight viewpoint focus on Aryl and Enris. That's frustrating when Aryl is uninterested in, or cannot understand, key pieces of the larger picture that the reader wants to know. But it creates a sense of slow discovery from an alien viewpoint that occasionally reminded me of Rosemary Kirstein's Steerswoman series. Steerswoman is much better, but it's much better than almost everything, and Aryl's growing understanding of her world is still fun. I particularly liked how Aryl's psychic species defines the world by the sensed locations of the Om'ray clans, making it extremely hard for her to understand geography in the traditional sense.

I was also happy to see Czerneda undermine the strict sexual dimorphism of Clan society a tiny bit with an Om'ray who doesn't want to participate in the pair-bonding of Choosing. She painted herself into a corner with the extreme gender roles in the Trade Pact series and there's still a lot of that here, but at least a few questions are raised about that structure.

Reap the Wild Wind is all setup with little payoff. By the end of the book, we still just have hints of the history of Cersi, the goals of the Oud or Tiktik, or the true shape of what the visitors are investigating. But it had grabbed my interest, mostly because of Aryl's consistent, thoughtful curiosity. I wish this first book had gotten into the interesting meat of the story faster and had gotten farther, but this is good enough that I'll probably keep reading.

Followed by Riders of the Storm.

Rating: 7 out of 10

Planet Debian: Erich Schubert: Homepage reboot

I haven’t blogged in a long time, and that probably won’t change.

Yet, I wanted to reboot my website on a different technology underneath.

I just didn’t want to have to touch the old XSLT scripts powering the old website anymore. I now converted all my XML input to Markdown instead.

If you notice anything broken, let me know by my usual email addresses.

Planet Debian: Jeremy Bicha: GNOME Tweaks 3.28 Progress Report 1

A few days ago, I released GNOME Tweaks 3.27.4, a development snapshot on the way to the next stable version 3.28 which will be released alongside GNOME 3.28 in March. Here are some highlights of what’s changed since 3.26.

New Name (Part 2)

For 3.26, we renamed GNOME Tweak Tool to GNOME Tweaks. It was only a partial rename since many underlying parts still used the gnome-tweak-tool name. For 3.28, we have completed the rename. We have renamed the binary, the source tarball releases, the git repository, the .desktop file, and the app icons. For upgrade compatibility, the autostart file and the helper script for the Suspend on Lid Close inhibitor keep the old name.

New Home

GNOME Tweaks has moved from the classic GNOME Git and Bugzilla to the new GNOME-hosted gitlab.gnome.org. The new hosting includes git hosting, a bug tracker and merge requests. Much of GNOME Core has moved this cycle, and I expect many more projects will move for the 3.30 cycle later this year.

Dark Theme Switch Removed

As promised, the Global Dark Theme switch has been removed. Read my previous post for more explanation of why it’s removed and a brief mention of how theme developers should adapt (provide a separate Dark theme!).

Improved Theme Handling

The theme chooser has been improved in several small ways. Now that it's quite possible to have a GNOME desktop without any gtk2 apps, it doesn't make sense to require that a theme provide a gtk2 version to show up in the theme chooser, so that requirement has been dropped.

The theme chooser will no longer show the same theme name multiple times if you have a system-wide installed theme and a theme in your user theme directory with the same name. Additionally, GNOME Tweaks does better at supporting the XDG_DATA_DIRS standard in case you use custom locations to store your themes or gsettings overrides.

GNOME Tweaks 3.27.4 with the HighContrastInverse theme

Finally, gtk3 still offers a HighContrastInverse theme but most people probably weren’t aware of that since it didn’t show up in Tweaks. It does now! It is much darker than Adwaita Dark.

Several of these theme improvements (including HighContrastInverse) have also been included in 3.26.4.

For more details about what’s changed and who’s done the changing, see the project NEWS file.

Cryptogram: Locating Secret Military Bases via Fitness Data

In November, the company Strava released an anonymous data-visualization map showing all the fitness activity by everyone using the app.

Over this weekend, someone realized that it could be used to locate secret military bases: just look for repeated fitness activity in the middle of nowhere.

News article.

Planet Debian: Vincent Bernat: L3 routing to the hypervisor with BGP

On layer 2 networks, high availability can be achieved by:

Layer 2 networks need very little configuration but come with a major drawback in highly available scenarios: an incident is likely to bring the whole network down.2 Therefore, it is safer to limit the scope of a single layer 2 network by, for example, using one distinct network in each rack and connecting them together with layer 3 routing. Incidents are unlikely to impact a whole IP network.

In the illustration below, top of the rack switches provide a default gateway for hosts. To provide redundancy, they use an MC-LAG implementation. Layer 2 fault domains are scoped to a rack. Each IP subnet is bound to a specific rack and routing information is shared between top of the rack switches and core routers using a routing protocol like OSPF.

Legacy L2 design

There are two main issues with this design:

  1. The L2 domains are still large. A rack could host several dozen hypervisors and several thousand virtual guests. Therefore, a network incident will have a large impact.

  2. IP subnets are pinned to each rack. A virtual guest cannot move to another rack and unused IP addresses in a rack cannot be used in another one.

To solve both these problems, it is possible to push L3 routing further south, turning each hypervisor into an L3 router. However, we need to ensure customer virtual guests are blind to this change: they should keep getting their configuration from DHCP (IP, subnet and gateway).

Hypervisor as a router

In a nutshell, for a guest with an IPv4 address:

  • the hosting hypervisor configures a /32 route with the virtual interface as next-hop, and
  • this route is distributed to other hypervisors (and routers) using BGP.

Our design also handles two routing domains: a public one (hosting virtual guests from multiple tenants with direct Internet access) and a private one (used by our own infrastructure, hypervisors included). Each hypervisor uses two routing tables for this purpose.

The following illustration shows the configuration of a hypervisor with 5 guests. No bridge is needed.

L3 routing inside an hypervisor

The complete configuration described below is also available on GitHub. In real life, a piece of software is needed to update the hypervisor configuration when an instance is added or removed.
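As a rough illustration of what that piece of software might do (our assumption, not part of the published setup), it could be as small as a shell helper invoked whenever a guest starts. The interface and address below are placeholders, and the commands mirror the ones described in the sections that follow:

#!/bin/sh
# Hypothetical helper: plumb routing for a freshly started guest.
# Usage: add-guest-route <tap-interface> <guest-ipv4>
IFACE="$1"      # e.g. vnet6
GUEST_IP="$2"   # e.g. 192.0.2.15

# Publish a host route for the guest in the public table;
# the BGP daemon then picks it up and announces it.
ip route add "${GUEST_IP}/32" dev "${IFACE}" table public

# Answer ARP on behalf of the gateway and of other guests.
sysctl -qw "net.ipv4.conf.${IFACE}.proxy_arp=1"
sysctl -qw "net.ipv4.neigh.${IFACE}.proxy_delay=0"

A matching helper would delete the route when the guest is stopped.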

Calico is a project fulfilling the same objective (L3 routing to the hypervisor) with mostly the same ideas (except it heavily relies on Netfilter to ensure separation between administrative domains). It provides an agent (Felix) to serve as an interface with orchestrators like OpenStack or Kubernetes. Check it out if you want a turnkey solution.

Routing setup

Using IP rules, each interface is “attached” to a routing table:

$ ip rule show
0:      from all lookup local
20:     from all iif lo lookup main
21:     from all iif lo lookup local-out
30:     from all iif eth0.private lookup private
30:     from all iif eth1.private lookup private
30:     from all iif vnet8 lookup private
30:     from all iif vnet9 lookup private
40:     from all lookup public

The most important rules are those with priority 30 and 40: any traffic coming from a "private" interface uses the private table. Any remaining traffic uses the public table.

The two iif lo rules manage routing for packets originating from the hypervisor itself. The local-out table is a mix of the private and public tables. The hypervisor mostly needs the routes from the private table but also needs to contact local virtual guests (for example, to answer a ping request) using the public table. Both tables contain a default route (no chaining possible), so we build a third table, local-out, by copying all routes from the private table and the directly connected routes from the public table.
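Here is a rough sketch (our illustration, not from the original article) of how local-out could be populated by hand; in practice a small script or the routing daemon keeps it in sync, and multipath routes would need extra handling. The table names assume matching entries in /etc/iproute2/rt_tables:

# Copy every route from the private table...
ip route show table private | while read -r route; do
    ip route add ${route} table local-out
done
# ...plus only the directly connected routes from the public table.
ip route show table public scope link | while read -r route; do
    ip route add ${route} table local-out
done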

To avoid an accidental leak of traffic, the public, private and local-out routing tables contain a default last-resort route with a large metric.3 In normal operation, these routes should be shadowed by a regular default route:

ip route add blackhole default metric 4294967294 table public
ip route add blackhole default metric 4294967294 table private
ip route add blackhole default metric 4294967294 table local-out

IPv6 is far simpler as we only have one routing domain. While we keep a public table, there is no need for a local-out table:

$ ip -6 rule show
0:      from all lookup local
20:     from all lookup main
40:     from all lookup public

As a last step, forwarding is enabled and the maximum number of routes for IPv6 is increased (the default is only 4096):

sysctl -qw net.ipv4.conf.all.forwarding=1
sysctl -qw net.ipv6.conf.all.forwarding=1
sysctl -qw net.ipv6.route.max_size=4194304

Guest routes

The second step is to configure routes to each guest. For IPv6, we use the link-local address, derived from the remote MAC address, as next-hop:

ip -6 route add 2001:db8:cb00:7100:5254:33ff:fe00:f/128 \
    via fe80::5254:33ff:fe00:f dev vnet6 \
    table public

Assigning several IP addresses (or subnets) to each guest can be done by adding more routes:

ip -6 route add 2001:db8:cb00:7107::/64 \
    via fe80::5254:33ff:fe00:f dev vnet6 \
    table public

For IPv4, the route uses the guest interface as a next-hop. Linux will issue an ARP request before being able to forward the packet:4

ip route add 192.0.2.15/32 dev vnet6 \
  table public

Additional IP addresses and subnets can be configured the same way, but each IP address would then have to answer ARP requests. To avoid this, it is possible to route additional subnets through the first IP address:5

ip route add 203.0.113.128/28 \
  via 192.0.2.15 dev vnet6 onlink \
  table public

BGP setup

The third step is to share routes between hypervisors, through BGP. This part depends on how hypervisors are connected to each other.

Fabric design

Several designs are possible to connect hypervisors. The most obvious one is to use a full L3 leaf-spine fabric:

Full L3 fabric design

Each hypervisor establishes an eBGP session to each of the leaf top-of-the-rack routers. These routers establish an iBGP session with their neighbor and an eBGP session with each spine router. This solution can be expensive because the spine routers need to handle all routes. With the current generation of switches/routers, this puts a limit on the number of routes, and therefore on the achievable density.6 IP and BGP configuration can also be tedious unless some uncommon autoconfiguration mechanisms are used. On the other hand, leaf routers (and hypervisors) may optionally learn fewer routes as they can push non-local traffic north.

Another potential design is to use an L2 fabric. This may sound surprising after bad-mouthing L2 networks for their unreliability, but we don't need them to be highly available. They can provide a very scalable and cost-efficient design:7

L2 fabric design

Each hypervisor is connected to 4 distinct L2 networks, limiting the scope of a single failure to a quarter of the available bandwidth. In this design, only iBGP is used. To avoid a full-mesh topology between all hypervisors, route reflectors are used. Each hypervisor has an iBGP session with one or several route reflectors from each of the L2 networks. Route reflectors on the same L2 network share their routes using iBGP.

This is the solution described below. Public and private domains share the same infrastructure but use distinct VLANs.

Route reflectors

Route reflectors are BGP-speaking boxes acting as a hub for all routes on a given L2 network but not routing any traffic. We need at least one of them on each L2 network. We can use more for redundancy.

Here is an example of configuration for JunOS:8

protocols {
    bgp {
        group public-v4 {
            family inet {
                unicast {
                    no-install; # ❶
                }
            }
            type internal;
            cluster 198.51.100.126; # ❷
            allow 198.51.100.0/25; # ❸
            neighbor 198.51.100.127;
        }
        group public-v6 {
            family inet6 {
                unicast {
                    no-install;
                }
            }
            type internal;
            cluster 198.51.100.126;
            allow 2001:db8:c633:6401::/64;
            neighbor 2001:db8:c633:6401::198.51.100.127;
        }
        ttl 255;
        bfd-liveness-detection { # ❹
            minimum-interval 100;
            multiplier 5;
        }
    }
}
routing-options {
    router-id 198.51.100.126;
    autonomous-system 65000;
}

This route reflector accepts and redistributes IPv4 and IPv6 routes. In ❶, we ensure received routes are not installed in the FIB: route reflectors are not routers.

Each route reflector needs to be assigned a cluster identifier, which is used in loop detection. In our case, we use the IPv4 address for this purpose (in ❷). Having a different cluster identifier for each route reflector on the same network ensures they share the routes they receive, increasing resiliency.

Instead of explicitly declaring all hypervisors allowed to connect to this route reflector, a whole subnet is authorized in ❸.9 We also declare the second route reflector for the same network as a neighbor to ensure they connect to each other.

Another important point of this setup is how quickly it reacts to unavailable paths. With directly connected BGP sessions, a faulty link may be detected immediately and the associated BGP sessions will be brought down. This may not always be reliable. Moreover, in our case, BGP sessions are established over several switches: a link down along the path may be left undetected until the hold timer expires. Therefore, in ❹, we enable BFD, a protocol to quickly detect faults in the path between two BGP peers (RFC 5880).

A last point to consider is whether you want to allow anycast on your network: if an IP is advertised from more than one hypervisor, you may want to:

  • send all flows to only one hypervisor, or
  • load-balance flows between hypervisors.

The second choice provides a scalable L3 load-balancer. With the above configuration, for each prefix, route reflectors choose one path and distribute it. Therefore, only one hypervisor will receive packets. To get load-balancing, you need to enable advertisement of multiple paths in BGP (RFC 7911):10

set protocols bgp group public-v4 family inet  unicast add-path send path-count 4
set protocols bgp group public-v6 family inet6 unicast add-path send path-count 4

Here is an excerpt of show route exhibiting “simple” routes as well as an anycast route:

> show route protocol bgp
inet.0: 6 destinations, 7 routes (7 active, 1 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[BGP/170] 00:09:01, localpref 100
                      AS path: I, validation-state: unverified
                    > to 198.51.100.1 via em1.90
192.0.2.15/32      *[BGP/170] 00:09:00, localpref 100
                      AS path: I, validation-state: unverified
                    > to 198.51.100.101 via em1.90
203.0.113.1/32     *[BGP/170] 00:09:00, localpref 100
                      AS path: I, validation-state: unverified
                    > to 198.51.100.101 via em1.90
203.0.113.6/32     *[BGP/170] 00:09:00, localpref 100
                      AS path: I, validation-state: unverified
                    > to 198.51.100.102 via em1.90
203.0.113.18/32    *[BGP/170] 00:09:00, localpref 100
                      AS path: I, validation-state: unverified
                    > to 198.51.100.103 via em1.90
203.0.113.10/32    *[BGP/170] 00:09:00, localpref 100
                      AS path: I, validation-state: unverified
                    > to 198.51.100.101 via em1.90
                    [BGP/170] 00:09:00, localpref 100
                      AS path: I, validation-state: unverified
                    > to 198.51.100.102 via em1.90

Complete configuration is available on GitHub. Configurations for GoBGP, BIRD or FRR are also available.11 The configuration for the private routing domain is similar. To avoid dedicated boxes, a solution is to run the route reflectors on some of the top of the rack switches.

Hypervisor configuration

So, let’s tackle the last step: the hypervisor configuration. We use BIRD (1.6.x) as a BGP daemon. It maintains three internal routing tables (public, private and local-out). We use a template with the common properties to connect to a route reflector:

template bgp rr_client {
  local as 65000;   # Local ASN matches route reflector ASN
  import all;       # Accept all received routes
  export all;       # Send all routes to remote peer
  next hop self;    # Modify next-hop with the IP used for this BGP session
  bfd yes;          # Enable BFD
  direct;           # Not a multi-hop BGP session
  ttl security yes; # GTSM is enabled
  add paths rx;     # Enable ADD-PATH reception (for anycast)

  # Low timers to establish sessions faster
  connect delay time 1;
  connect retry time 5;
  error wait time 1,5;
  error forget time 10;
}

table public;
protocol bgp RR1_public from rr_client {
  neighbor 198.51.100.126 as 65000;
  table public;
}
# […]

With the above configuration, all routes in BIRD’s public table are sent to the route reflector 198.51.100.126. Any route from the route reflector is accepted. We also need to connect BIRD’s public table to the kernel’s one:12

protocol kernel kernel_public {
  persist;
  scan time 10;
  import filter {
    # Take any route from kernel,
    # except our last-resort default route
    if krt_metric < 4294967294 then accept;
    reject;
  };
  export all;      # Put all routes into kernel
  learn;           # Learn routes not added by BIRD
  merge paths yes; # Build ECMP routes if possible
  table public;    # BIRD's table name
  kernel table 90; # Kernel table number
}

We also need to enable BFD on all interfaces:

protocol bfd {
  interface "*" {
    interval 100ms;
    multiplier 5;
  };
}

To avoid losing BFD packets when the conntrack table is full, it is safer to disable connection tracking for these datagrams:

ip46tables -t raw -A PREROUTING -p udp --dport 3784 \
  -m addrtype --dst-type LOCAL -j CT --notrack
ip46tables -t raw -A OUTPUT -p udp --dport 3784 \
  -m addrtype --src-type LOCAL -j CT --notrack

Some missing bits are:

Once the BGP sessions have been established, we can query the kernel for the installed routes:

$ ip route show table public proto bird
default
        nexthop via 198.51.100.1 dev eth0.public weight 1
        nexthop via 198.51.100.254 dev eth1.public weight 1
203.0.113.6
        nexthop via 198.51.100.102 dev eth0.public weight 1
        nexthop via 198.51.100.202 dev eth1.public weight 1
203.0.113.18
        nexthop via 198.51.100.103 dev eth0.public weight 1
        nexthop via 198.51.100.203 dev eth1.public weight 1

Performance

You may be worried about how much memory Linux uses when handling many routes. Well, don't be:

  • 128 MiB can fit 1 million IPv4 routes, and
  • 512 MiB can fit 1 million IPv6 routes.

BIRD uses about the same amount of memory for its own usage. As for lookup times, performance is also excellent with IPv4 and still quite good with IPv6:

  • 30 ns per lookup with 1 million IPv4 routes, and
  • 1.25 µs per lookup with 1 million IPv6 routes.

Therefore, the impact of letting Linux handle many routes is very low. For more details, see "IPv4 route lookup on Linux" and "IPv6 route lookup on Linux."

Reverse path filtering

To avoid spoofing, reverse path filtering is enabled on virtual guest interfaces: Linux will verify that the source address is legitimate by checking whether the answer would use the incoming interface as the outgoing interface. This effectively prevents any spoofing from guests.

For IPv4, reverse path filtering can be enabled either through a per-interface sysctl14 or through the rpfilter match of Netfilter. For IPv6, only the second method is available.

# For IPv6, use NetFilter
ip6tables -t raw -N RPFILTER
ip6tables -t raw -A RPFILTER -m rpfilter -j RETURN
ip6tables -t raw -A RPFILTER -m rpfilter --accept-local \
  -m addrtype --dst-type MULTICAST -j DROP
ip6tables -t raw -A RPFILTER -m limit --limit 5/s --limit-burst 5 \
  -j LOG --log-prefix "NF: rpfilter: " --log-level warning
ip6tables -t raw -A RPFILTER -j DROP
ip6tables -t raw -A PREROUTING -i vnet+ -j RPFILTER

# For IPv4, use sysctls
sysctl -qw net.ipv4.conf.all.rp_filter=0
for iface in /sys/class/net/vnet*; do
    sysctl -qw net.ipv4.conf.${iface##*/}.rp_filter=1
done

There is no need to prevent L2 spoofing as there is no gain for the attacker.

Keeping guests in the dark

An important aspect of the solution is to ensure guests believe they are attached to a classic L2 network (with an IP in a subnet).

The first step is to provide them with a working default gateway. On the hypervisor, this can be done by assigning the default gateway IP directly to the guest interface:

ip addr add 203.0.113.254/32 dev vnet5 scope link

Our main goal is to ensure Linux will answer ARP requests for the gateway IP. Configuring a /32 is enough for this, and we do not want to configure a larger subnet as, by default, Linux would install a route for the subnet to this interface, which would be incorrect.15

For IPv6, this is not needed as we rely on link-local addresses instead.

A guest may also try to speak with other guests on the same subnet. The hypervisor will answer ARP requests on their behalf. Once it starts receiving IP traffic, it will route it to the appropriate interface. This can be done by enabling ARP proxying on the guest interface:

sysctl -qw net.ipv4.conf.vnet5.proxy_arp=1
sysctl -qw net.ipv4.neigh.vnet5.proxy_delay=0

For IPv6, Linux NDP proxying is far less convenient. Instead, ndppd can handle this task. For each interface, we use the following configuration snippet:

proxy vnet5 {
  rule 2001:db8:cb00:7100::/64 {
    static
  }
}

For DHCP, some daemons may have difficulty handling this odd configuration (with the /32 IP address on the interface), but Dnsmasq accepts such an oddity. For IPv6, assuming the assigned IP address is an EUI-64 one, radvd works with the following configuration on each interface:

interface vnet5 {
  AdvSendAdvert on;
  prefix 2001:db8:cb00:7100::/64 {
    AdvOnLink on;
    AdvAutonomous on;
    AdvRouterAddr on;
  };
};

Conclusion and future work

This setup should work with BIRD 1.6.3 and a Linux 3.15+ kernel. Compared to legacy L2 networks, it brings flexibility and resiliency while keeping guests unaware of the change. By handing over routing to Linux, this design is also cheap, as existing equipment can be reused. Still, operating such a solution is simple enough once the basic concepts are understood: IP rules, routing tables and BGP sessions.

There are several potential improvements:

using VRF
Starting from Linux 4.3, L3 VRF domains enable binding interfaces to routing tables. We could have three VRFs: public, private and local-out. This would improve performance by removing most IP rules (but until Linux 4.8, performance is crippled because offloading features are not enabled; see commit 7889681f4a6c). For more information, have a look at the kernel documentation. A rough sketch of such a VRF setup is shown after the last item below.
full L3 routing
The BGP setup can be enhanced and simplified by using an L3 fabric and using some autoconfiguration features. Cumulus published a nice book, "BGP in the datacenter," on this topic. However, this requires all BGP speakers to support these features. On the hypervisors, this would mean using FRR, while the various network equipment would need to run Cumulus Linux.
BGP resiliency with BGP LLGR
Using short BFD timers makes our network react quickly to any disruption by invalidating faulty paths without relying on link status. However, under load or congestion, BFD packets may be lost, making the whole hypervisor unreachable until BGP sessions can be brought up again. Some BGP implementations support Long-Lived BGP Graceful Restart, an extension allowing stale routes to be retained with a lower priority (see draft-uttaro-idr-bgp-persistence-03). This is an ideal solution to our problem: these stale routes are used only as a last resort, after all links have failed. Currently, no open source implementation supports this draft.
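Returning to the VRF idea above, here is a rough sketch (our assumption, not the deployed configuration) of how the public and private domains could be mapped to VRF devices with iproute2. Table 90 matches the kernel table used for public in the BIRD configuration; table 91 for private is invented for the example:

# Create one VRF device per routing domain, bound to a kernel table.
ip link add vrf-public type vrf table 90
ip link set vrf-public up
ip link add vrf-private type vrf table 91
ip link set vrf-private up

# Enslave the physical (VLAN) interfaces to the matching VRF.
ip link set eth0.public master vrf-public
ip link set eth1.public master vrf-public
ip link set eth0.private master vrf-private
ip link set eth1.private master vrf-private

# Guest interfaces (vnetX) would be enslaved to the appropriate VRF
# when the guest is created, replacing most of the IP rules.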

  1. MC-LAG has been standardized in IEEE 802.1AX-2014. However, most vendors are likely to stick with their implementations. With MC-LAG, control planes remain independent. 

  2. An incident can stem from an operator mistake, but also from software bugs, which are more likely to happen in complex implementations during infrequent operations, like a software upgrade. 

  3. These routes should not be distributed over BGP. Hypervisors should receive a default route with a lower metric from edge routers. 

  4. A static ARP entry can also be added with the remote MAC address. 

  5. For example, a Juniper QFX5100 supports about 200k IPv4 routes (about $10,000, with Broadcom Trident II chipset). On the other hand, an Arista 7208SR supports 1.2M IPv4 routes (about $20,000, with Broadcom Jericho chipset), through the use of an external TCAM. A Juniper MX240 would support more than 2M IPv4 routes (about $30,000 for an empty chassis with two routing engines, with Juniper Trio chipset) with a lower density. 

  6. From a scalability point of view, with switches able to handle 32k MAC addresses, the fabric can host more than 8,000 hypervisors (more than 5 million virtual guests). Therefore, cost-effective switches can be used as both leaves and spines. Each hypervisor has to handle all routes, an easy task for Linux. 

  7. Use of routing instances would enable hosting several route reflectors on the same box. This is not used in this example but should be considered to reduce costs. 

  8. This prevents the use of any authentication mechanism: BGP usually relies on TCP MD5 signatures (RFC 2385) to authenticate BGP sessions. On most OSes, this requires knowing the allowed peers in advance. To tighten security a bit in the absence of authentication, we use the Generalized TTL Security Mechanism (RFC 5082). For JunOS, the configuration presented here (with ttl 255) is incomplete; a firewall filter is also needed. 

  9. Unfortunately, for no good reason, JunOS doesn’t support the BGP add-path extension in a routing instance. Such a configuration is possible with Cumulus Linux. 

  10. Only BIRD comes with BFD support out of the box but it does not support implicit peers. FRR needs Cumulus’ PTMD. If you don’t care about BFD, GoBGP is really nice as a route reflector. 

  11. Kernel tables are numbered. ip can use names declared in /etc/iproute2/rt_tables

  12. It should be noted that ECMP with IPv6 only works from BIRD 1.6.1. Moreover, when using Linux 4.11 or more recent, you need to apply commit 98bb80a243b5

  13. For a given interface, Linux uses the maximum value between the sysctl for all and the one for the interface. 

  14. It is possible to prevent Linux from installing a connected route by using the noprefixroute flag. However, this flag is only available since Linux 4.4 for IPv4. Only use this flag if your DHCP server is giving you a hard time, as it may trigger other issues (related to the promotion of secondary addresses). 

Planet Debian: Shirish Agarwal: Webmail and whole class of problems.

Yesterday I was reading Daniel Pocock's 'Do the little things matter' and while I agree with parts of his assessment, I feel it is incomplete unless taken from the perspective of a user with limited resources, knowledge, etc. I am a Gmail user, so I am trying to add a bit of perspective here. I usually wait a day or more when I feel myself getting heated; the post seemed to me a bit arrogant in perspective, implying that Gmail users don't have any sense of privacy. While he is perfectly entitled to his opinion, I *think* just blaming Gmail is an easy way out; the problems are multi-faceted. Allow me to explain what I mean.

The problems he describes are, I think, not Gmail's alone but those of all webmail providers, whether they provide the service free of cost or for a fee. Regardless of what you think, the same questions arise whether you use one provider or the other. Almost all webmail providers give you a mailbox, an e-mail id and a web interface to interact with the mails you get.

The first problem which Daniel essentially tries to convey is the deficit of trust. I *think* that applies to all webmail providers. Until and unless you can audit and inspect the code you are just ‘trusting’ somebody else to provide you a service. What pained me while reading his blog post is that he could have gone much further but chose not to. What happens when webmail providers break your trust was not explored at all.

Most of the webmail providers I know are outside my geographical jurisdiction. While in one way it is good that the government of the day cannot directly order them to check my mails, it also means that I have no means to file a suit or prosecute the company in case breaches do occur. I am talking here as an everyday user, a student, and not a corporation which can negotiate, make iron-clad agreements and have some sort of liability claim for unforeseen circumstances. So no matter how you skin it, most users (or, to put it more bluntly, almost all non-corporate users) are at a disadvantage when negotiating the terms of a contract with their mail provider.

So whether the user uses one webmail provider or another, it's the same thing. Even startups like riseup, who updated/shared their canary, show that even they are vulnerable. It is also probably easier for webmail services to have backdoors, as they can be pressured by one government or another.

So the only way to really solve it is having your own mail server, which truthfully is no solution, as it's a full-time job. The reason is that you are responsible for everything. For each new vulnerability you come to know of, you are supposed to either patch it or get it patched, or have some sort of workaround. In the last 4-5 years alone, it has become more complex as more and more security features have been added, as each new vulnerability or class of vulnerabilities has revealed itself. Add to that that a mail server should at the very least have something like RAID 1 to lessen data corruption. While I have seen friends who have the space and the money to set up and maintain a mail server, most people won't have the time, energy and space to do the needful. I don't see that changing in the near future at least.

Add to that that over the years, when I worked for companies, I often found I needed to have more than one e-mail client, as emails in a professional setting need to be responded to quickly, and most GUI-based mail clients have subtle bugs which you come to know only when you are using them.

A couple of years back I was working with Hamaralinux. They have their own mail server. Without going into the technical details of the features needed and wanted by both parties, I started out using Thunderbird. I was using stable releases of Thunderbird. Even then, I used to see subtle bugs which would sometimes corrupt the mail database or do one thing or the other. I had to resort to using Evolution, which provided comparable features, and even there I found bugs, so most of the time I ended up hopping between the two mail clients.

Now if you look at the history of the two clients you would assume that most of the bugs should not be there, but surprisingly they were. At least for Thunderbird, I remember Gecko used to create a lot of problems, among other things. I did report the bugs I encountered, and while some of them were worked upon, the fixes often took days and sometimes even weeks. Somewhat similar was the case with Evolution also. At times I also witnessed broken formatting and things like that, but that is outside the purview of this topic.

Crudely, AFAIK these are the basic functions an email client absolutely needs to perform:

a. Authenticate the user to the mail server.
b. If the user is genuine, go ahead to the next step, or reject the user at this stage itself.
c. If the user is genuine, let them go to their mailbox.
d. Once you enter the mailbox (mbox), it probably looks at the time-stamp of when the last mail was delivered and checks whether any new mail has arrived, looking at the diff between times (either using GMT or using epoch+GMT).
e. If any new mail has come, it starts transferring those mails to your box.
f. If there are any new mails which need to be sent, it would transfer them at this point.
g. If there are any automatic acknowledgments of mails received and that feature is available, it would do that as well.
h. Ideally you should be able to view and compose replies offline at will.

In reality, at times I used to see transfers not complete, meaning that the mail server still had mails but for some reason the connection got broken (maybe due to some path in-between, or something else entirely).

At times even notification of new mails used to not come.

Sometimes, offline, Thunderbird used to lock mails or the mbox at my end, and I had to either use Evolution or some third-party tool to read the mails and rely on webmail to give my reply.

Notice in all this I haven’t mentioned ssh or any sort of encryption or anything like that.

It took me a long time to figure out https://wiki.mozilla.org/MailNews:Logging but as you can see, it takes you away from the work you wanted to do in the first place.

I am sure some people would suggest either Emacs or alpine or some other tool which works, and I'm sure it worked right off the bat for them; for me, I wanted something which had a GUI and which I didn't have to think too much about. It also points to why Thunderbird was eventually moved out of Mozilla, in a sense, so that the community could do feature work and bug-fixing faster than Mozilla did or had the resources or the will to do.

From a user perspective I find webmail more compelling, even with the leakages Daniel described, because even though it's 'free' it also has built-in redundancy. AFAIK they have enough redundant copies of the mail database so that even if the node where my mails are stored dies, it will simply resurrect them from the other copies and give them to me in a timely fashion.

While I do hope that in the long run we get better tools, in the short-to-medium term, at least from my perspective, it's more about which compromises you are able to live with.

While I'm too small and too common a citizen for the government to take notice of me, I think it's too easy to blame 'X' or 'Y'. I believe the real revolution will only begin when there are universal data protection laws for all citizens irrespective of country, and companies and governments are made answerable and liable for any sort of interactive digital service provided. Unless we raise people's consciousness about security in general and have some sort of multi-stakeholder meetings and understanding in real life, including people from security, e-mail providers, general users, free software hackers, regulators and, if possible, even people from the legislature, I believe we would just be running around in circles.

TED: Lee Cronin’s ongoing quest for print-your-own medicine, and more news from TED speakers

Behold, your recap of TED-related news:

Print your own pharmaceutical factory. As part of an ongoing quest to make pharmaceuticals easier to manufacture, chemist Lee Cronin and his team at the University of Glasgow have designed a way to 3D-print a portable “factory” for the complicated and multi-step chemical reactions needed to create useful drugs. It’s a grouping of vessels about the size of water bottles; each vessel houses a different type of chemical reaction. Pharmacists or doctors could create specific drugs by adding the right ingredients to each vessel from pre-measured cartridges, following a simple step-by-step recipe. The process could help replace or supplement large chemical factories, and bring helpful drugs to new markets. (Watch Cronin’s TED Talk)

How Amit Sood’s TED Talk spawned the Google Art selfie craze. While Amit Sood was preparing for his 2016 TED Talk about Google’s Cultural Institute and Art Project, his co-presenter Cyril Diagne, a digital interaction artist, suggested that he include a prototype of a project they’d been playing with, one that matched selfies to famous pieces of art. Amit added the prototype to his talk, in which he matched live video of Cyril’s face to classic artworks — and when the TED audience saw it, they broke out in spontaneous applause. Inspired, Amit decided to develop the feature and add it to the Google Arts & Culture app. The new feature launched in December 2017, and it went viral in January 2018. Just like the live TED audience before it, online users loved it so much that the app shot to the number one spot in both the Android and iOS app stores. (Watch Sood’s TED Talk)

A lyrical film about the very real danger of climate change. Funded by a Kickstarter campaign by director and producer Matthieu Rytz, the documentary Anote’s Ark focuses on the clear and present danger of global warming to the Pacific Island nation of Kiribati (population: 100,000). As sea levels rise, the small, low-lying islands that make up Kiribati will soon be entirely covered by the ocean, displacing the population and their culture. Former president Anote Tong, who’s long been fighting global warming to save his country and his constituents, provides one of two central stories within the documentary. The film (here’s the trailer) premiered at the 2018 Sundance Festival in late January, and will be available more widely soon; follow on Facebook for news. (Watch Tong’s TED Talk)

An animated series about global challenges. Sometimes the best way to understand a big idea is on a whiteboard. Throughout 2018, Rabbi Jonathan Sacks and his team are producing a six-part whiteboard-animation series that explains key themes in his theology and philosophy around contemporary global issues. The first video, called “The Politics of Hope,” examines political strife in the West, and ways to change the culture from the politics of anger to the politics of hope. Future whiteboard videos will delve into integrated diversity, the relationship between religion and science, the dignity of difference, confronting religious violence, and the ethics of responsibility. (Watch Rabbi Sacks’ TED Talk)

Nobody wins the Google Lunar X Prize competition :( Launched in 2007, the Google Lunar X Prize competition challenged entrepreneurs and engineers to design low-cost ways to explore space. The premise, if not the work itself, was simple — the first privately funded team to get a robotic spacecraft to the moon, send high-resolution photos and video back to Earth, and move the spacecraft 500 meters would win a $20 million prize, from a total prize fund of $30 million. The deadline was set for 2012, and was pushed back four times; the latest deadline was set to be March 31, 2018. On January 23, X Prize founder and executive chair Peter Diamandis and CEO Marcus Shingles announced that the competition was over: It was clear none of the five remaining teams stood a chance of launching by March 31. Of course, the teams may continue to compete without the incentive of this cash prize, and some plan to. (Watch Diamandis’ TED Talk)

15 photos of America’s journey towards inclusivity. Art historian Sarah Lewis took control of the New Yorker photo team’s Instagram last week, sharing pictures that answered the timely question: “What are 15 images that chronicle America’s journey toward a more inclusive level of citizenship?” Among the iconic images from Gordon Parks and Carrie Mae Weems, Lewis also includes an image of her grandfather, who was expelled from the eleventh grade in 1926 for asking why history books ignored his own history. In the caption, she tells how he became a jazz musician and an artist, “inserting images of African Americans in scenes where he thought they should—and knew they did—exist.” All the photos are ones she uses in her “Vision and Justice” course at Harvard, which focuses on art, race and justice. (Watch Lewis’ TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.

Krebs on Security: File Your Taxes Before Scammers Do It For You

Today, Jan. 29, is officially the first day of the 2018 tax-filing season, also known as the day fraudsters start requesting phony tax refunds in the names of identity theft victims. Want to minimize the chances of getting hit by tax refund fraud this year? File your taxes before the bad guys can!

Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

According to the IRS, consumer complaints over tax refund fraud have been declining steadily over the years as the IRS and states enact more stringent measures for screening potentially fraudulent applications.

If you file your taxes electronically and the return is rejected, and if you were the victim of identity theft (e.g., if your Social Security number and other information was leaked in the Equifax breach last year), you should submit an Identity Theft Affidavit (Form 14039). The IRS advises that if you suspect you are a victim of identity theft, continue to pay your taxes and file your tax return, even if you must do so by paper.

If the IRS believes you were likely the victim of tax refund fraud in the previous tax year they will likely send you a special filing PIN that needs to be entered along with this year’s return before the filing will be accepted by the IRS electronically. This year marks the third out of the last five that I’ve received one of these PINs from the IRS.

Of course, filing your taxes early to beat the fraudsters requires one to have all of the tax forms needed to do so. For a sole proprietor like me, this is a great challenge because many companies take their sweet time sending out 1099 forms and such (even though they’re required to do so by Jan. 31).

A great many companies are now turning to online services to deliver tax forms to contractors, employees and others. For example, I have received several notices via email regarding the availability of 1099 forms online; most say they are sending the forms in snail mail, but that if I need them sooner I can get them online if I just create an account or enter some personal information at some third-party site.

Having seen how so many of these sites handle personal information, I’m not terribly interested in volunteering more of it. According to Bankrate, taxpayers can still file their returns even if they don’t yet have all of their 1099s — as long as you have the correct information about how much you earned.

“Unlike a W-2, you generally don’t have to attach 1099s to your tax return,” Bankrate explains. “They are just issued so you’ll know how much to report, with copies going to the IRS so return processors can double-check your entries. As long as you have the correct information, you can put it on your tax form without having the statement in hand.”

In past tax years, identity thieves have used data gleaned from a variety of third-party and government Web sites to file phony tax refund requests — including from the IRS itself! One of their perennial favorites was the IRS’s Get Transcript service, which previously had fairly lax authentication measures.

After hundreds of thousands of taxpayers had their tax data accessed through the online tool, the IRS took it offline for a bit and then brought it back online, now requiring a host of new data elements.

But many of those elements — such as your personal account number from a credit card, mortgage, home equity loan, home equity line of credit or car loan — can be gathered from multiple locations online with almost no authentication. For example, earlier this week I heard from Jason, a longtime reader who was shocked at how little information was required to get a copy of his 2017 mortgage interest statement from his former lender.

“I called our old mortgage company (Chase) to retrieve our 1098 from an old loan today,” Jason wrote. “After I provided the last four digits of the social security # to their IVR [interactive voice response system] that was enough to validate me to request a fax of the tax form, which would have included sensitive information. I asked for a supervisor who explained to me that it was sufficient to check the SSN last 4 + the caller id phone number to validate the account.”

If you’ve taken my advice and placed a security freeze on your credit file with the major credit bureaus, you don’t have to worry about thieves somehow bypassing the security on the IRS’s Get Transcript site. That’s because the IRS uses Experian to ask a series of knowledge-based authentication questions before an online account can even be created at the IRS’s site to access the transcript.

Now, anyone who reads this site regularly should know I’ve been highly critical of these KBA questions as a means of authentication. But the upshot here is that if you have a freeze in place at Experian (and I sincerely hope you do), Experian won’t even be able to ask those questions. Thus, thieves should not be able to create an account in your name at the IRS’s site (unless of course thieves manage to successfully request your freeze PIN from Experian’s site, in which case all bets are off).

While you’re getting your taxes in order this filing season, be on guard against fake emails or Web sites that may try to phish your personal or tax data. The IRS stresses that it will never initiate contact with taxpayers about a bill or refund. If you receive a phishing email that spoofs the IRS, consider forwarding it to phishing@irs.gov.

Finally, tax season also is when the phone-based tax scams kick into high gear, with fraudsters threatening taxpayers with arrest, deportation and other penalties if they don’t make an immediate payment over the phone. If you care for older parents or relatives, this may be a good time to remind them about these and other phone-based scams.

Cryptogram: Estimating the Cost of Internet Insecurity

It's really hard to estimate the cost of an insecure Internet. Studies are all over the map. A methodical study by RAND is the best work I've seen at trying to put a number on this. The results are, well, all over the map:

"Estimating the Global Cost of Cyber Risk: Methodology and Examples":

Abstract: There is marked variability from study to study in the estimated direct and systemic costs of cyber incidents, which is further complicated by the considerable variation in cyber risk in different countries and industry sectors. This report shares a transparent and adaptable methodology for estimating present and future global costs of cyber risk that acknowledges the considerable uncertainty in the frequencies and costs of cyber incidents. Specifically, this methodology (1) identifies the value at risk by country and industry sector; (2) computes direct costs by considering multiple financial exposures for each industry sector and the fraction of each exposure that is potentially at risk to cyber incidents; and (3) computes the systemic costs of cyber risk between industry sectors using Organisation for Economic Co-operation and Development input, output, and value-added data across sectors in more than 60 countries. The report has a companion Excel-based modeling and simulation platform that allows users to alter assumptions and investigate a wide variety of research questions. The authors used a literature review and data to create multiple sample sets of parameters. They then ran a set of case studies to show the model's functionality and to compare the results against those in the existing literature. The resulting values are highly sensitive to input parameters; for instance, the global cost of cyber crime has direct gross domestic product (GDP) costs of $275 billion to $6.6 trillion and total GDP costs (direct plus systemic) of $799 billion to $22.5 trillion (1.1 to 32.4 percent of GDP).

Here's Rand's risk calculator, if you want to play with the parameters yourself.

Note: I was an advisor to the project.

Separately, Symantec has published a new cybercrime report with their own statistics.

Worse Than FailureRepresentative Line: The Mystery of the SmallInt

PT didn’t provide very much information about today’s Representative Line.

Clearly bits and bytes was not something studied in this SQL stored procedure author. Additionally, Source control versions are managed with comments. OVER 90 Thousand!

        --Declare @PrevNumber smallint
                --2015/11/18 - SMALLINT causes overflow error when it goes over 32000 something 
                -- - this sp errors but can only see that when
                -- code is run in SQL Query analyzer 
                -- - we should also check if it goes higher than 99999
        DECLARE @PrevNumber int         --2015/11/18

Fortunately, I am Remy Poirot, the world’s greatest code detective. To your untrained eyes, you simply see the kind of comment which would annoy you. But I, an expert, with experience of the worst sorts of code man may imagine, can piece together the entire lineage of this code.

Let us begin with the facts: no source control is in use. Version history is managed in the comments. From this, we can deduce a number of things: the database where this code runs is also where it is stored. Changes are almost certainly made directly in production.

A statue of Hercule Poirot in Belgium

And when those changes fail, they may only be detected when the “code is run in SQL Query Analyzer”. This ties in with the “changes in production/no source control”, but it also tells us that it is possible to run this code, have it fail, and have no one notice. This means this code must be part of an unattended process, a batch job of some kind. Even an overflow error vanishes into the ether.

This code also, according to the comments, should “also check if [@PrevNumber] goes higher than 99999”. This is our most vital clue, for it tells us that the content of the value has a maximum width- more than 5 characters to represent it is a problem. This obviously means that the target system is a mainframe with a flat-file storage model.

Already, from one line and a handful of comments, we’ve learned a great deal about this code, but one need not be the world’s greatest code detective to figure out this much. Let’s see what else we can tease out.

@PrevNumber must tie to some ID in the database, likely the “last processed ID” from the previous run of the batch job. The confusion over smallint and need to enforce a five-digit limit implies that this database isn’t actually in control of its data. Either the data comes from a front-end with no validation- certainly possible- or it comes from an external system. But a value greater than 99999 isn’t invalid in the database- otherwise they could enforce that restriction via a constraint. This means the database holds data coming from and going to different places- it’s a “business integration” database.
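
As an aside, the two limits being conflated in that comment are easy to demonstrate (a quick illustration in Python, not part of the original procedure):

# The two different limits the comment mixes up (illustration only, not the original code).
SMALLINT_MAX = 2**15 - 1    # 32767: a SQL smallint is a 16-bit signed integer
FLAT_FILE_MAX = 99999       # the largest value a five-character numeric field can hold

def check_prev_number(value):
    """Report which limit a hypothetical @PrevNumber value would break."""
    if value > SMALLINT_MAX:
        print(f"{value} overflows smallint (max {SMALLINT_MAX}) -- the 2015/11/18 bug")
    if value > FLAT_FILE_MAX:
        print(f"{value} no longer fits in five digits (max {FLAT_FILE_MAX}) -- the unhandled case")

check_prev_number(32768)    # breaks only the smallint limit
check_prev_number(100000)   # breaks both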

With these clues, we can assemble the final picture of the crime scene.

In a dark corner of a datacenter are the components of a mainframe system. The original install was likely in the 70s, and while it saw updates and maintenance for the next twenty years, starting in the 90s it was put on life support. “It’s going away, any day now…” Unfortunately, huge swathes of the business depended on it and no one is entirely certain what it does. They can’t simply port business processes to a new system, because no one knows what the business processes are. They’re encoded as millions of lines of code in a dialect of COBOL no one understands.

A conga-line of managers have passed through the organization, trying to kill the mainframe. Over time, they’ve brought in ERP systems from Oracle or SAP. Maybe they’ve taken some home-grown ERP written by users in Access and tried to extend them to an enterprise solution. Worse, over the years, acquisitions come along, bringing new systems from other vendors under the umbrella of one corporate IT group.

All these silos need to talk to each other. They also have to be integrated into the suite of home-grown reporting tools, automation, and Excel spreadsheets that run the business. Shipping can only pull from the mainframe, while the P/L dashboard the executives use can only pull from SAP. At first, it’s a zoo of ETL jobs, until someone gets the bright idea to centralize it.

They launch a project, and give it a slick three-letter-acronym, like “QPR” or “LMP”. Or, since there are only so many TLAs one can invent, they probably reuse an existing one, like “GPS” or “FBI”. The goal: have a central datastore that integrates data from every silo in the organization, and then the silos can pull back out the data they want for their processes. This project has a multi-million dollar budget, and has exceeded that budget twice over, and is not yet finished.

The code PT supplied for us is but one slice of that architecture. It’s pulling data from one of those centralized tables in the business integration database, and massaging it into a format it can pass off to the mainframe. Like a murder reconstructed from just a bit of fingernail, we’ve laid bare the entire crime with our powers of observation, our experience, and our knowledge of bad code.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianSean Whitton: Debian Policy call for participation -- January 2018

Looks like there won’t be a release of Policy this month, but please consider contributing so we can get one out next month. Here’s a selection of bugs:

Consensus has been reached and help is needed to write a patch

#685746 debian-policy Consider clarifying the use of recommends

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#759316 Document the use of /etc/default for cron jobs

#761219 document versioned Provides

#767839 Linking documentation of arch:any package to arch:all

#770440 policy should mention systemd timers

#793499 The Installed-Size algorithm is out-of-date

#823256 Update maintscript arguments with dpkg >= 1.18.5

#835451 Building as root should be discouraged

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#582109 document triggers where appropriate

#610083 Remove requirement to document upstream source location in debian/c…

#649530 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#742364 Document debian/missing-sources

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

Merged for the next release

#299007 Transitioning perms of /usr/local

#515856 remove get-orig-source

#886890 Fix for found typos

#888437 Several example scripts are not valid.

Planet DebianRuss Allbery: Review: Roads and Bridges

Review: Roads and Bridges, by Nadia Eghbal

Publisher: Ford Foundation
Copyright: July 2016
Format: Epub
Pages: 143

Subtitled The Unseen Labor Behind Our Digital Infrastructure, Roads and Bridges isn't a book. It's a long report for the Ford Foundation, available for free from their web site. But I read it like a book, so you get a review anyway.

If, like me, you've spent years in the free software community, you'll know much of this already. Eghbal starts with a survey of the history of free software and open source (skewed towards the practical and economic open source analysis, and essentially silent on ethics), and then a survey of how projects are currently organized. The emphasis, consistent with the rest of the report, is on how these free software building blocks underlie nearly all the consumer software and services used today. Eghbal singles out OpenSSL as her example of lack of infrastructure support due to Heartbleed and the subsequent discussion of how vital OpenSSL is but how little funding it received.

Eghbal hit her stride for me at the end of the third section, which tries to understand why people contribute to open source without being paid. I'm a bit dubious that many people contribute to build their personal reputation, since that's not a commonly stated reason in my areas of the free software community, but Eghbal's analysis of the risk of this motive from the infrastructure perspective seemed on point if this is becoming common. Better was her second motive: "the project became unexpectedly popular, and the maintainer feels obligated to support it." Yes. There is so much of this in free software, and it's a real threat to the sustainability of projects because it's a description of the trajectory of burnout. It's also a place where a volunteer culture and the unfairness of unpaid labor come into uncomfortable tension. Eghbal very correctly portrays her third reason, "a labor of love," as not that obviously distinct from that feeling of obligation.

The following discussion of challenges rightfully focuses on the projects that are larger than a weekend hobby but smaller than Linux:

However, many projects are trapped somewhere in the middle: large enough to require significant maintenance, but not quite so large that corporations are clamoring to offer support. These are the stories that go unnoticed and untold. From both sides, these maintainers are told they are the problem: Small project maintainers think mid-sized maintainers should just learn to cope, and large project maintainers think if the project were "good enough," institutional support would have already come to them.

Eghbal includes a thoughtful analysis of the problems posed by the vast increase in the number of programmers and the new economic focus on software development. This should be a good thing for free software, and in some ways it is, but the nature of software and human psychology tends towards fragmentation. It's more fun to write a new thing than do hard maintenance work on an old code base. The money is also almost entirely in building new things while spending as little time as possible on existing components. Industry perception is that open source accelerates new business models by allowing someone to build their new work on top of a solid foundation, but this use is mostly parasitic in practice, and the solid foundation doesn't stay solid if no one contributes to it.

Eghbal closes with a survey of current funding models for open source software, from corporate sponsorship to crowdfunding to foundations, and some tentative suggestions for principles of successful funding. This is primarily a problem report, though; don't expect much in the way of solutions. Putting together an effective funding model is difficult, community-specific, and requires thoughtful understanding of what resources are most needed and who can answer that need. It's also socially fraught. A lot of people work on these projects because they're not part of the capitalist system of money-seeking, and don't want to deal with the conflicts and overhead that funding brings.

I was hoping Eghbal would propose more solutions than she did, but I'm not surprised. I've been through several of these funding conversations in various communities. The problem is very hard, on both economic and social levels. But despite the lack of solutions, the whole report was more interesting than I was expecting given how familiar I already am with this problem. Eghbal's background is in venture capital, so she looks at infrastructure primarily through the company-building lens, but she's not blind to the infrastructure gaps those companies leave behind or even make worse. It's a different and clarifying angle on the problem than mine.

I realized, reading this, that while I think of myself as working on infrastructure, nearly all of my contributions have been in the small project. Only INN (back in the heyday of Usenet), OpenAFS (which I'm no longer involved in), and Debian rise to the level of significant infrastructure projects that might benefit from funding. Debian is large enough that, while it has resource challenges, it's partly transitioned into the lower echelons of institutional support. And INN is back in weekend project territory, since Usenet isn't what it was.

This report made me want to get involved in some more significant infrastructure project in need of this kind of support, but simultaneously made it clear how difficult it is to do this on a hobby basis. And it's remarkably hard to find corporate sponsorship for this sort of work that doesn't involve so much complexity and uncertainty that it's hard to justify leaving a stable and well-understood job. Which, of course, is much of Eghbal's point.

Eghbal also surfaces the significant tension between the volunteer, interest-based allocation of resources native to free software, and the money-based allocation of resources of a surrounding capitalist society. Projects are usually the healthiest and the happiest when they function as volunteer communities: they spontaneously develop governance structures that work for people as volunteers, they tend towards governance where investment of effort translates into influence (not without its problems, but generally better than other models), and each contributor has a volunteer's freedom to stop doing things they aren't enjoying (although one shouldn't underestimate the obligation factor of working on a project used by other people). But since nearly everyone has to spend the majority of their time on paying work, it's very difficult to find sustained and significant resources for volunteer projects. You need funding, so that people can be paid, but once people are paid they're no longer volunteers, and that fundamentally alters the social structure of the project. Those changes are rarely for the better, since the motives of those paying are both external to the project (not part of the collaborative decision-making process) and potentially overriding given how vital they are to the project.

It's a hard problem. I avoided it for years by living in the academic world, which is much better at reconciling these elements than for-profit companies, but the academic world doesn't have enough total resources, or the right incentives, to maintain this type of infrastructure.

The largest oversight I saw in this report was the lack of discussion of the international nature of open source development coupled with the huge discrepancy in cost of living in different parts of the world. This poses strange and significant fairness issues for project funding that I'm quite surprised Eghbal didn't notice: for the same money required to support full-time work by a current maintainer who lives in New York or San Francisco, one could fund two or three (or even more) developers in, say, some eastern European or southeast Asian countries with much lower costs of living and average incomes. Eghbal doesn't say a word about the social challenges this creates.

Other than that, though, this is a thoughtful and well-written survey of the resource problems facing the foundations of nearly all of our digital world. Free software developers will be annoyed but unsurprised by the near-total disregard of ethical considerations, but here the economic and ethical case arrive at roughly the same conclusion: nearly all the resources are going to companies and projects that are parasitical on a free software foundation, that foundation is nowhere near as healthy as people think it is, and the charity-based funding and occasional corporate sponsorship is inadequate and concentrated on the most visible large projects. For every Linux, with full-time paid developers, heavy corporate sponsorship, and sustained development with a wide variety of funding models, there are dozens of key libraries or programs developed by a single person in their scant free time, and dozens of core frameworks that are barely maintained and draw more vitriol than useful assistance.

Worth a read if you have an interest in free software governance or funding models, particularly since it's free.

Rating: 7 out of 10

TEDTalks from TEDNYC Idea Search 2018

Cloe Shasha and Kelly Stoetzel hosted the fast-paced TED Idea Search 2018 program on January 24, 2018 at TED HQ in New York, NY. (Photo: Ryan Lash / TED)

TED is always looking for new voices with fresh ideas — and earlier this winter, we opened a challenge to the world: make a one-minute audition video that makes the case for your TED Talk. More than 1,200 people applied to be a part of the Idea Search program this year, and on Wednesday night at our New York headquarters, 13 audition finalists shared their ideas in a fast-paced program. Here are voices you may not have heard before — but that you’ll want to hear more from soon.

Ruth Morgan shares her work preventing the misinterpretation of forensic evidence. (Photo: Ryan Lash / TED)

Forensic evidence isn’t as clear-cut as you think. For years, forensic science research has focused on making it easier and more accurate to figure out what a trace — such as DNA or a jacket fiber — is and who it came from, but that doesn’t help us interpret what the evidence means. “What we need to know if we find your DNA on a weapon or gunshot residue on you is how did it get there and when did it get there,” says forensic scientist Ruth Morgan. These gaps in understanding have real consequences: forensic evidence is often misinterpreted and used to convict people of crimes they didn’t commit. Morgan and her team are committed to finding ways to answer the why and how, such as determining whether it’s possible to get trace evidence on you during normal daily activities (it is) and how trace DNA can be transferred. “We need to dramatically reduce the chance of forensic evidence being misinterpreted,” she says. “We need that to happen so that you never have to be that innocent person in the dock.”

The intersection of our senses. An experienced composer and filmmaker, Philip Clemo has been on a quest to determine if people can experience imagery with the same depth that they experience music. Research has shown that sound can impact how we perceive visuals, but can visuals have a similarly profound impact? In his live performances, Clemo and his band use abstract imagery in addition to abstract music to create a visual experience for the audience. He hopes that people can have these same experiences in their everyday lives by quieting their minds to fully experience the “visual music” of our surrounding environment — and improve our connection to our world.

Reading the Bible … without omission. At a time when he was a recovering fundamentalist and longtime atheist, David Ellis Dickerson received a job offer as a quiz question writer and Bible fact-checker for the game show The American Bible Challenge. Among his responsibilities: coming up with questions that conveniently ignored the sections of the Bible that mention slavery, concubines and incest. The omission expectations he was faced with made him realize that evangelicals read the Bible in the same way they watch reality television: “with a willing, even loving, suspension of disbelief.” Now, he invites devout Christians to read the Bible in its most unedited form, to recognize its internal contradictions and to grapple with its imperfections.

Danielle Bilot suggests three simple but productive actions we can take to help bees: plant flowers that bloom year-round, leave bare areas of soil for bees to nest in, and plant flower patches so that bees can more easily locate food. (Photo: Ryan Lash / TED)

To bee or not to bee? The most famous bee species of recent memory has overwhelmingly been the honey bee. For years, their concerning disappearance has made international news and been the center of research. Environmental designer Danielle Bilot believes that the honey bee should share the spotlight with the 3,600 other species that pollinate much of the food we eat every day in the US, such as blueberries, tomatoes and eggplants. Honey bees, she says, aren’t even native to North America (they were originally brought over from Europe) and therefore have a tougher time successfully pollinating these and many other indigenous crops. Regardless of species, human activity is harming them, and Bilot suggests three simple but productive actions we can take to make their lives easier and revive their populations: plant flowers that bloom year-round, leave bare areas of soil for bees to nest in, and plant flower patches so that bees can more easily locate food.

What if technology protected your hearing? Computers that talk to us and voice-enabled technology like Siri, Alexa and Google Home are changing the importance of hearing, says ear surgeon Matthew Bromwich. And with more than 500 million people suffering from disabling hearing loss globally, the importance of democratizing hearing health care is more relevant than ever. “How do we use our technology to improve the clarity of our communication?” Bromwich asks. He and his team have created a hearing testing technology called “SHOEBOX,” which gives hearing health care access to more than 70,000 people in 32 countries. He proposes using technology to help prevent this disability, amplify sound clarity, and paint a new future for hearing.

Welcome to the 2025 Technology Design Awards, with your host, Tom Wujec. Rocking a glittery dinner jacket, design guru Tom Wujec presents a science-fiction-y awards show from eight years into the future, honoring the designs that made the biggest impact in technology, consumer goods and transportation — capped off by a grand impact award chosen live onstage by an AI. While the designs seem fictional — a hacked auto, a self-rising house, a cutting-edge prosthetic — the kicker to Tom’s future award show is that everything he shows is in development right now.

In collaboration with the MIT Media Lab, Nilay Kulkarni used his skills as a self-taught programmer to build a simple tech solution to prevent human stampedes during the Kumbh Mela, one of the world’s largest crowd gatherings, in India. (Photo: Ryan Lash / TED)

A 15-year-old solves a deadly problem with a low-cost device. Every four years, more than 30 million Hindus gather for the Kumbh Mela, the world’s largest religious gathering, in order to wash themselves of their sins. Once every 12 years, it takes place in Nashik, a city on the western coast of India that ordinarily contains 1.5 million residents. With such a massive crowd in a small space, stampedes inevitably happen, and in 2003, 39 people were killed during the festival in Nashik. In 2014, then-15-year-old Nilay Kulkarni decided he wanted to find a solution. He recalls: “It seemed like a mission impossible, a dream too big.” After much trial and error, he and collaborators at the MIT Media Lab came up with a cheap, portable, effective stampede stopper called “Ashioto” (meaning footstep in Japanese): a pressure-sensor-equipped mat which counts the number of people stepping on it and sends the data over the internet to authorities so they can monitor the flow of people in real time. Five mats were deployed at the 2015 Nashik Kumbh Mela, and thanks to their use and other innovations, no stampedes occurred for the first time ever there. Much of the code is now available to the public to use for free, and Kulkarni is trying to improve the device. His dream: for Ashiotos to be used at all large gatherings, like the other Kumbh Melas, the Hajj and even at major concerts and sports events.

A new way to think about health care. Though doctors and nurses dominate people's minds when it comes to health care, Tamekia MizLadi Smith is more interested in the roles of non-clinical staff in creating effective and positive patient experiences. As an educator and spoken word poet, Smith uses the acronym "G.R.A.C.E.D." to empower non-clinical staff to be accountable for data collection and to provide compassionate care to patients. Under the belief that compassionate care doesn't begin and end with just clinicians, Smith asks that desk specialists, parking attendants and other non-clinical staff be trained and treated as integral parts of well-functioning health care systems.

Mad? Try some humor. “The world seems humor-impaired,” says comedian Judy Carter. “It just seems like everyone is going around angry: going, ‘Politics is making me so mad; my boss is just making me so mad.'” In a sharp, zippy talk, Carter makes the case that no one can actually make you mad — you always have a choice of how to respond — and that anger actually might be the wrong way to react. Instead, she suggests trying humor. “Comedy rips negativity to shreds,” she says.

Want a happier, healthier life? Look to your friends. In the relationship pantheon, friends tend to place third in importance (after spouses and blood relatives). But we should make them a priority in our lives, argues science writer Lydia Denworth. “The science of friendship suggests we should invest in relationships that deliver strong bonds. We should value people who are positive, stable, cooperative forces,” Denworth says. While friendship was long ignored by academics, researchers are now studying it and finding it provides us with strong physical and emotional benefits. “In this time when we’re struggling with an epidemic of loneliness and bitter political divisions, [friendships] remind us what joy and connection look like and why they matter,” Denworth says.

An accessible musical wonderland. With quick but impressive musical interludes, Matan Berkowitz introduces the “Airstrument” — a new type of instrument that allows anyone to create music in a matter of minutes by translating movement into sound. This technology is part of a series of devices Berkowitz has developed to enable musicians with disabilities (and anyone who wants to make music) to express themselves in non-traditional ways. “Creation with intention,” he raps, accompanied by a song created on the Airstrument. “Now it’s up to us to wake up and act.”

Divya Chander shares her work studying what human brains look like when they lose and regain consciousness. (Photo: Ryan Lash / TED)

Where does consciousness go when you're under anesthesia? Divya Chander is an anesthesiologist, delivering specialized doses of drugs that put people in an altered state before surgery. She often wonders: where do people's brains go while they're under? What do they perceive? The question has led her into a deeper exploration of perception, awareness and consciousness itself. In a thoughtful talk, she suggests that we have a lot to learn about consciousness … that we could learn by studying unconsciousness.

The art of creation without preparation. To close out the night, vocalist and improviser Lex Empress creates coherent lyrics from words written by audience members on paper airplanes thrown onto the stage. Empress is accompanied by virtuoso pianist and producer Gilian Baracs, who also improvises everything he plays. Their music reminds us to enjoy the great improvisation that is life.

,

Planet DebianDaniel Pocock: Let's talk about Hacking (EPFL, Lausanne, 20 February 2018)

I've been very fortunate to have the support from several free software organizations to travel to events around the world and share what I do with other people. It's an important mission in a world where technology is having an increasing impact on our lives. With that in mind, I'm always looking for ways to improve my presentations and my presentation skills. As part of mentoring programs like GSoC and Outreachy, I'm also looking for ways to help newcomers in our industry to maximize their skills in communicating about the great work they do when they attend their first event.

With that in mind, one of the initiatives I've taken this year is participating in the Toastmasters organization. I've attended several meetings of the Toastmasters group at EPFL and on 20 February 2018, I'll give my first talk there on the subject of Hacking.

If you live in the area, please come along. Entrance is free, there is plenty of parking available in the evening and it is close to the metro too. Please try to arrive early so as not to disrupt the first speaker. Location map, Add to your calendar.

The Toastmasters system encourages participants to deliver a series of ten short (5-7 minute) speeches, practicing a new skill each time.

The first of these, The Ice Breaker, encourages speakers to begin using their existing skills and experience. When I read that in the guide, I couldn't help wondering if that is a cue to unleash some gadget on the audience.

Every group provides a system for positive feedback, support and mentoring for speakers at every level. It is really wonderful to see the impact that this positive environment has for everybody. At the EPFL meetings, I've met a range of people, some with far more speaking experience than me but most of them are working their way through the first ten speeches.

One of the longest running threads on the FSFE discussion list in 2017 saw several people arguing that it is impossible to share ideas without social media. If you have an important topic you want to share with the world, could public speaking be one way to go about it, and does this possibility refute the argument that we "need" social media to share ideas? Is it more valuable to learn how to engage with a small audience for five minutes than to have an audience of hundreds on Twitter who scroll past you in half a second as they search for cat photos? If you are not in Lausanne, you can easily find a Toastmasters club near you anywhere in the world.

Planet DebianDirk Eddelbuettel: digest 0.6.15

And yet another small maintenance release, now at version 0.6.15, of the digest package arrived on CRAN and in Debian today.

digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'sha-512', 'crc32', 'xxhash32', 'xxhash64' and 'murmur32' algorithms) permitting easy comparison of R language objects.
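
As a rough illustration of the idea (in Python rather than R, and not the digest package itself): serialize an object deterministically, then hash the bytes, so two objects can be compared by their digests.

# Illustration only, not the R digest package: hash a serialized object so that
# equal objects produce equal digests.
import hashlib
import json

def digest(obj, algo="sha256"):
    """Hash a JSON-serializable object; equal objects give equal digests."""
    payload = json.dumps(obj, sort_keys=True).encode("utf-8")
    return hashlib.new(algo, payload).hexdigest()

print(digest({"a": 1, "b": [2, 3]}))          # sha256 by default
print(digest({"a": 1, "b": [2, 3]}, "md5"))   # other algorithms selected by name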

Just like release 0.6.13 in December, and release 0.6.14 two weeks ago, this release accommodates a request by R Core. This time it was Kurt who improved POSIXlt extraction yesterday, which required a one-line change to sha1() summaries---which he kindly sent along. We also already had a change by Thierry, who had generalized sha1() to accept a new argument allowing sha256 and sha512 summaries to be created.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: RVowpalWabbit 0.0.11

Another boring little RVowpalWabbit package update to version 0.0.11 came in response to another CRAN request: We were writing temporary output (a cache file for the fit/prediction, to be precise) to a non-temporary directory, which is now being caught by new tests added by Kurt. And as this is frowned upon, we made the requested change.

No new code or features were added.

We should mention once more that there is parallel work ongoing in a higher-level package interfacing the vw binary -- rvw -- as well as a plan to redo this package via the external libraries. If that sounds interesting to you, please get in touch. I am also thinking that rvw extensions / work may make for a good GSoC 2018 project. Again, if interested, please get in touch.

More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMichael Stapelberg: pristine-tar considered harmful

If you want to follow along at home, clone this repository:

% GBP_CONF_FILES=:debian/gbp.conf gbp clone https://anonscm.debian.org/git/pkg-go/packages/golang-github-go-macaron-inject.git

Now, in the golang-github-go-macaron-inject directory, I’m aware of three ways to obtain an orig tarball (please correct me if there are more):

  1. Run gbp buildpackage, creating an orig tarball from git (upstream/0.0_git20160627.0.d8a0b86)
    The resulting sha1sum is d085a04b7b35856be24f8cc4a9a6d9799cdb59b4.
  2. Run pristine-tar checkout
    The resulting sha1sum is d51575c0b00db5fe2bbf8eea65bc7c4f767ee954.
  3. Run origtargz
    The resulting sha1sum is d51575c0b00db5fe2bbf8eea65bc7c4f767ee954.

Have a look at the archive’s golang-github-go-macaron-inject_0.0~git20160627.0.d8a0b86-2.dsc, however: the file entry for the orig tarball reads:

f5d5941c7b77e8941498910b64542f3db6daa3c2 7688 golang-github-go-macaron-inject_0.0~git20160627.0.d8a0b86.orig.tar.xz

So, why did we get a different tarball? Let’s go through the methods:

  1. The uploader must not have used gbp buildpackage to create their tarball. Perhaps they imported from a tarball created by dh-make-golang, or created manually, and then left that tarball in place (which is a perfectly fine, normal workflow).
  2. I’m not entirely sure why pristine-tar resulted in a different tarball than what’s in the archive. I think the most likely theory is that the uploader had to go back and modify the tarball, but forgot to update (or made a mistake while updating) the pristine-tar branch.
  3. origtargz, when it detects pristine-tar data, uses pristine-tar, hence the same tarball as ②.

Had we not used pristine-tar for this repository at all, origtargz would have pulled the correct tarball from the archive.
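
Should you want to repeat the comparison, a few lines suffice to hash a candidate tarball and check it against the .dsc entry (a stand-alone helper sketch; it is not part of gbp, pristine-tar or origtargz):

#!/usr/bin/env python3
# Compare the sha1 of a locally obtained orig tarball against the value in the .dsc.
# usage: ./check-orig.py ../some_package.orig.tar.xz f5d5941c7b77e8941498910b64542f3db6daa3c2
import hashlib
import sys

def sha1_of(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

tarball, expected = sys.argv[1], sys.argv[2]
actual = sha1_of(tarball)
print("match" if actual == expected else f"MISMATCH: {actual} != {expected}")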

The above anecdote illustrates the fragility of the pristine-tar approach. In my experience from the pkg-go team, when the pristine-tar branch doesn’t contain outright incorrect data, it is often outdated. Even when everything is working correctly, a number of packagers are disgruntled about the extra work/mental complexity.

In the pkg-go team, we have (independently of this specific anecdote) collectively decided to have the upstream branch track the upstream remote’s master (or similar) branch directly, and get rid of pristine-tar in our repositories. This should result in method ① and ③ working correctly.

In conclusion, my recommendation for any repository is: don’t bother with pristine-tar. Instead, configure origtargz as a git-buildpackage postclone hook in your ~/.gbp.conf to always work with archive orig tarballs:

[clone]
# Ensure the correct orig tarball is present.
postclone=origtargz

[buildpackage]
# Pick up the orig tarballs created by the origtargz postclone hook.
tarball-dir = ..

Planet DebianShirish Agarwal: Economic Migration, Unemployment, Retirement benefits in advanced countries etc.

After my South African Debconf experience and especially the Doha, Qatar layover experience soon after my return back, a friend from Kerala had sent me a link of a movie called Pathemari . For various reasons I could not see the movie till I had come from the Hospital few months back. I would recommend everybody to see that movie if they want to see issues from a blue-collar migrant worker’s views.

Before I venture further, I think a lot of people confuse between economic migration and immigration. As can be seen in the movie, the idea of economic migrants is to do work and come back to his/her own country while immigration is more about political asylum, freedom of expression those kind of ideas. The difference between the two can be starkly seen in one of my favorite movies of all times ‘Moscow on Hudson’.

I have had quite a few discussions with some friends from Kerala last year and in the years before seeing this movie, and had been sort of flabbergasted by the answers they shared with me at the time. Most of them were along the lines of 'we don't want/need any development. I/We would go to X (any oil-producing country) or the West to make money and then come back home. Then why should we have industry?' While this is personal anecdotal experience from while I was in the hospital, I also saw similar observations online. For example, Northwestern did an article which explains some of the complexity years ago. More recently there has been an IIM Bangalore working paper which corroborates the importance of nursing to Kerala, the state, as well as to the Indian economy as a whole. It's a pretty interesting paper, specifically for those wishing to understand aspects of outward Indian migration (nursing) and the expectations placed on such migrants who want to join the labor markets in the Netherlands and Denmark (local language, culture, adaptations etc., all of which is good).

In hindsight, I now agree with parts of the reasons shared by my colleagues and peers from Kerala in context to what has occurred in Goa in recent past and how that affected tourism of the state. While it has been few years since I last stayed in Goa for 2 weeks or more, I have always found it to be a little piece of Paradise tucked in the corner.

Also, similarly, in the context of the rising median age of Americans which was shared in the previous article, I don't see them replenishing their own ranks with young blood. The baby boom years for America seem to be over (for now and a bit into the future). On the medicine side, since we have been talking about nursing, another observation is that it seems the American Government will cut off a whole lot of Americans from medical care, which Mr. Trump did a few days back. The statements shared therein seem like much of a spin story, as no numbers were shared or anything. There was a report I read last year which tells how an urban middle-class American family might suffer depending on how much Medicare is cut.

I have seen something very similar happening in Pune, India, with quite a few insurance companies, medical practitioners, staff etc. prescribing medicines or ordering tests where they aren't needed, more so if you have insurance. Of course, after you have availed yourself of it, your individual premium will rise as the 'risk' has increased, but this is veering off the main story. There were quite a few patients who shared their horror stories and lessons with me during my stay in the hospital.

On the labor front, I don't see a way out for Americans to work. For example, Patels (a caste and a community) went to the States and found that most Americans do not, or did not, like maintaining motels, and they provided/took over that service, partly as it's a risky business and partly as most motels are run-down etc. Apart from the spin being put on both legal and illegal immigration in the States, it seems, at least to me, that there would be more undocumented people living in the States illegally than those coming legally, and America would suffer economically due to that.

You can see Qatar doing it already, as well as Saudi Arabia trying to be more open, while the States seems to be dancing to another beat altogether.

Coming to the India perspective –

Note – Mrs. Sushma Swaraj, Minister of External Affairs, India, has been particularly active and robust in seeking welfare for Indian brethren trying to find work abroad.

One of the few good things that the present Government has done is to have a pro-active foreign policy minister who has been given a free hand to operate; she also seems to trust herself and others to do the right thing. Although she hasn't done much apart from picking the lowest apples, which had been ripe for taking for years, it also reminds us of the apathy most Governments had towards foreign policy, partly due to the socialist structure and culture in education, culture and even affairs of the State.

While I was reading on the subject I came across I, Daniel Blake. I saw the movie and shared it with my mother. We were both shocked as we saw the trials the gentleman had to go through and, eventually, his passing away. We thought that only bureaucracy in India was bad; now we come to know it's the same, at least as far as the UK is concerned.

https://www.theguardian.com/society/2018/jan/19/esther-mcvey-makes-disability-benefits-u-turn-over-payments

https://www.theguardian.com/society/2017/nov/17/benefit-claimants-underpaid-employment-support-allowance

https://www.theguardian.com/society/2016/mar/29/employment-and-support-allowance-the-disability-benefit-cuts-you-have-not-heard-about

https://www.theguardian.com/commentisfree/2018/jan/16/government-policy-poor-people-debt-benefits-universal-credit

http://www.dailymail.co.uk/news/article-3138853/Britain-s-mid-life-crisis-UK-average-age-hits-40-time-population-jumps-500-000-64-6-million.html

https://www.theguardian.com/business/2018/jan/28/freedom-great-deal-of-that-inside-the-eu-brexit

After seeing the movie, the above articles do give some understanding of why the UK opted for Brexit and of what the expected fallout will probably be.

Before I end, I want to give a shout-out and kudos to Daniel Echeverry for porting guake to gtk3 with a dark theme. I really like the theme and do hope more themes follow in the upcoming days, weeks and months.

Guake, dark theme and gtk3

Also, another shout-out to Timo Aaltonen for getting a newer snapshot of xserver-xorg-video-intel out for testing.

I do hope to explore a bit more of the new system and see what the new CPU and GPU can do in the coming days and weeks. I did some explorations about libsdl1.2 recently http://lists.alioth.debian.org/pipermail/pkg-sdl-maintainers/2018-January/002711.html and do hope to at least get some know-how about where the newer integrated graphics and power options would become more useful in the short and medium term.

I also was thinking about the impending python3 transition and it seems that 90% of the big libraries are ready to make the transition. The biggest laggards seem to be mozilla, which I guess is still trying to deal with the fallout from firefox 57.0, the whole web-extensions bit etc.

At the moment it seems a huge setback for Mozilla; whether they will be able to survive depends entirely on the third-party add-on developers. If that ecosystem doesn't get enriched to the status it had before the transition, we could see Firefox losing a lot of users, at least in the short and medium term.

Lastly, I did try to add a new usb device in the usb-database at https://usb-ids.gowdy.us/read/UD/1ecb/02e2 but there doesn’t seem to be a way to know whether that entry got accepted or not 😦

Planet DebianRenata D'Avila: The right to be included in the conversation

"To be GOVERNED is to be watched, inspected, spied upon, directed, law-driven, numbered, regulated, enrolled, indoctrinated, preached at, controlled, checked, estimated, valued, censured, commanded, by creatures who have neither the right nor the wisdom nor the virtue to do so. To be GOVERNED is to be at every operation, at every transaction noted, registered, counted, taxed, stamped, measured, numbered, assessed, licensed, authorized, admonished, prevented, forbidden, reformed, corrected, punished. It is, under pretext of public utility, and in the name of the general interest, to be placed under contribution, drilled, fleeced, exploited, monopolized, extorted from, squeezed, hoaxed, robbed; then, at the slightest resistance, the first word of complaint, to be repressed, fined, vilified, harassed, hunted down, abused, clubbed, disarmed, bound, choked, imprisoned, judged, condemned, shot, deported, sacrificed, sold, betrayed; and to crown all, mocked, ridiculed, derided, outraged, dishonored. That is government; that is its justice; that is its morality."

Pierre-Joseph Proudhon

Lately, to be on the internet is to be governed. A lot of those actions Proudhon mentions might as well refer to our interactions with the companies that 'rule' the web.

A while back, I applied to a tech event made for women, but I didn't get selected to attend it. I really, really wanted to attend and, because it wasn't clear to me why I hadn't been selected (or what I should do next time in order to attend), I sent an email asking. Their answer? Even though the registration was put up on a public website (for the event) and anyone could sign up there, the spots for this event had been reserved for girls who had filled the form they had put up on the group's Facebook page. I didn't use Facebook and I wasn't one of these people, so I had been excluded. Okay.

When I began PyLadies Porto Alegre, I insisted with Liliane that we didn't make the same mistake. That any information about our meetings should be open to anyone who wanted to join. That is why we set up a website which hopefully any person with internet access could visit and get the next meeting date. We also created a mailing list (indeed, with the third-party company MailChimp) which you only need an e-mail address to subscribe to and get the next meeting date delivered to your inbox. We publish event dates on Quitter.se (an instance of GNU Social), which then replicates to Twitter.

But the feedback we got on Python Brasil after presenting the work we've been doing with PyLadies Porto Alegre for the past year was the following:

"I didn't even know PyLadies Porto Alegre was so active, that it had organized this much stuff. You people don't divulge much."

"What do you mean?"

"Well, I always check PyLadies Brazil Facebook and there is always stuff about other groups there, but there isn't anything about PyLadies Porto Alegre".

"Yes, well, all our stuff is public in our website, that anyone can access without needing a Facebook account. We publish to GNU Social and Twitter. Our website link is on the PyLadies Brazil site. But if your only source of news is Facebook..."

If your only source of news is Facebook you will miss tons of all the amazing things that are happening out there and that are not on Facebook. The internet and the web itself are way more than Facebook (even though, depending on your access, it might be a bit complicated to get past this. This is not the case for the privileged people that I usually talk to and that simply chose to have Facebook as their 'internet portal/newsfeed').

The point is: regardless of which service the people looking to engage with PyLadies Porto Alegre had agreed to provide data for (MailChimp, Twitter or their email provider) - or even if they chose not to provide data at all, simply visiting our site using proxies, VPNs, Tor or script blockers that allowed them to anonymize their location - we tried to ensure that anyone could get the information about the meetings.

Because Facebook actively excludes people that don't have an account there (requiring login to even look at some posts or events), we didn't bother with a profile there at first (even though we do have a Facebook page now -.-).

Even with all that, sometimes it feels like a one-woman crusade to ensure that the information and participation stay open and free. Because it's a sad reality that if I don't take part and if I don't stir this conversation in this group, all these efforts are quickly forgotten and people easily fall back into the closed options "that everyone uses" (namely Facebook, WhatsApp, Google Hangouts).

It's important to say that it's not just companies that exclude people. We exclude people from the conversation, from the ability to interact with us, when we choose proprietary (and closed) services to communicate with each other.

In the name of using 'easy' and 'free' (as in beer) tools, people simply overlook that all these closed options require that you (and everyone else who wants to contact you and has to use those platforms) sign a contract with a company to use these services. And it's not even a contract whose terms you can negotiate: it's a contract whose terms the company dictates, terms that can be changed at any point in time, always to benefit these companies.

A Google Map with several spots marked, linked by red lines. Those are places I had visited in my city during a whole month.

This is a picture of all the data Google collected about my movements during one month in Porto Alegre, simply because I owned an Android smartphone. If a person, a company or a government knowing and cataloging every movement you make doesn't freak you out, I honestly don't know what would.

People overlook that all those 'free' services aren't really free. They have a cost. A cost that is being paid by our actual freedom as human beings. They take away our freedom of having a really free internet and we are complicit. We are allowing them to do that. By giving away our privacy (and the privacy of other fellow human beings), we are allowing these companies (and, a lot of times, governments) to watch and know every step we take, to choose what we read and which websites we access, how we think, to limit our freedom of expression, our freedom of choosing not to have our data in their database and even our freedom of being. A lot of times, they act without us even knowing what they are doing.

If there is one wish I have for this Data Privacy Day, it is for people to start considering the services they are using and how this affects everyone else around them. I do not choose to have my phone number indexed by Google; you do that for me when you add me to your contacts. I do not choose to have my face identified and indexed by Facebook; it's you who do that every time you upload a picture with me to your timeline. But, most of all, it's not me who chooses to be 'out of reach', 'not to participate in your community or your meeting', 'to isolate myself from communicating on the internet' (even though I am constantly online). It's you who choose to hide behind proprietary services with terms I cannot consciously agree to.

Planet DebianAntoine Beaupré: 4TB+ large disk price review

For my personal backups, I am now looking at 4TB+ single-disk long-term storage. I currently have 3.5TB of offline storage, split across two disks: this is rather inconvenient as I need to plug both into a toaster-like SATA enclosure which gathers dust and performs like crap. Now I'm looking at hosting offline backups at a friend's place, so I need to store everything in a single drive to save space.

This means I need at least 4TB of storage, and those needs are going to continuously expand in the future. Since this is going to be offsite, swapping the drive isn't really convenient (especially because syncing all that data takes a long time), so I figured I would also look at more than 4 TB.

So I built those neat little tables. I took the prices from Newegg.ca, or Newegg.com as a fallback when the item wasn't available in Canada. I used to order from NCIX because it was "more" local, but they unfortunately went bankrupt, and in the worst possible way: the website is still up and you can order stuff, but those orders never ship. Sad to see a 20-year old institution go out like that; I blame Jeff Bezos.

I also used failure rate figures from the latest Backblaze review, although those should always be taken with a grain of salt. For example, the apparently stellar 0.00% failure rates are all on sample sizes too small to be statistically significant (<100 drives).
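
To see why small samples say so little, here is a quick back-of-the-envelope sketch (using the standard rule-of-three approximation, not Backblaze's own analysis): with zero observed failures among n drives, the 95% upper bound on the true failure rate is still about 3/n.

# Rule-of-three sketch: with zero failures observed among n drives, the 95% upper
# bound on the true failure rate is roughly 3/n -- still large for small fleets.
for n in (45, 100, 1000, 10000):
    print(f"{n:6d} drives, 0 failures -> true rate could still be up to ~{3 / n:.1%}")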

All prices are in CAD, sometimes after conversion from USD for items that are not on newegg.ca, as of today.

8TB

Brand Model Price $/TB fail% Notes
HGST 0S04012 280$ 35$ N/A
Seagate ST8000NM0055 320$ 40$ 1.04%
WD WD80EZFX 364$ 46$ N/A
Seagate ST8000DM002 380$ 48$ 0.72%
HGST HUH728080ALE600 791$ 99$ 0.00%

6TB

Brand Model Price $/TB fail% Notes
HGST 0S04007 220$ 37$ N/A
Seagate ST6000DX000 ~222$ 56$ 0.42% not on .ca, refurbished
Seagate ST6000AS0002 230$ 38$ N/A
WD WD60EFRX 280$ 47$ 1.80%
Seagate STBD6000100 343$ 58$ N/A

4TB

Brand Model Price $/TB fail% Notes
Seagate ST4000DM004 125$ 31$ N/A
Seagate ST4000DM000 150$ 38$ 3.28%
WD WD40EFRX 155$ 39$ 0.00%
HGST HMS5C4040BLE640 ~242$ 61$ 0.36% not on .ca
Toshiba MB04ABA400V ~300$ 74$ 0.00% not on .ca
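
The $/TB column is straightforward to recompute and rank; here is a minimal sketch using a handful of rows copied from the tables above (prices in CAD):

# Recompute and rank cost per TB for a few of the drives listed above (CAD).
drives = [
    # (brand, model, capacity in TB, price in CAD)
    ("HGST",    "0S04012",      8, 280),
    ("Seagate", "ST8000NM0055", 8, 320),
    ("HGST",    "0S04007",      6, 220),
    ("Seagate", "ST4000DM004",  4, 125),
    ("WD",      "WD40EFRX",     4, 155),
]

for brand, model, tb, price in sorted(drives, key=lambda d: d[3] / d[2]):
    print(f"{brand:8} {model:14} {tb}TB  {price}$  {price / tb:.0f}$/TB")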

Conclusion

Cheapest per-TB costs seem to be in the 4TB range, but the 8TB HGST comes really close. Reliability for this drive could be an issue, however - I can't explain why it is so cheap compared to other devices... But I guess we'll see where it goes as I'll just order the darn thing and try it out.

,

Krebs on SecurityFirst ‘Jackpotting’ Attacks Hit U.S. ATMs

ATM “jackpotting” — a sophisticated crime in which thieves install malicious software and/or hardware at ATMs that forces the machines to spit out huge volumes of cash on demand — has long been a threat for banks in Europe and Asia, yet these attacks somehow have eluded U.S. ATM operators. But all that changed this week after the U.S. Secret Service quietly began warning financial institutions that jackpotting attacks have now been spotted targeting cash machines here in the United States.

To carry out a jackpotting attack, thieves first must gain physical access to the cash machine. From there they can use malware or specialized electronics — often a combination of both — to control the operations of the ATM.

A keyboard attached to the ATM port. Image: FireEye

On Jan. 21, 2018, KrebsOnSecurity began hearing rumblings about jackpotting attacks, also known as “logical attacks,” hitting U.S. ATM operators. I quickly reached out to ATM giant NCR Corp. to see if they’d heard anything. NCR said at the time it had received unconfirmed reports, but nothing solid yet.

On Jan. 26, NCR sent an advisory to its customers saying it had received reports from the Secret Service and other sources about jackpotting attacks against ATMs in the United States.

“While at present these appear focused on non-NCR ATMs, logical attacks are an industry-wide issue,” the NCR alert reads. “This represents the first confirmed cases of losses due to logical attacks in the US. This should be treated as a call to action to take appropriate steps to protect their ATMs against these forms of attack and mitigate any consequences.”

The NCR memo does not mention the type of jackpotting malware used against U.S. ATMs. But a source close to the matter said the Secret Service is warning that organized criminal gangs have been attacking stand-alone ATMs in the United States using “Ploutus.D,” an advanced strain of jackpotting malware first spotted in 2013.

According to that source — who asked to remain anonymous because he was not authorized to speak on the record — the Secret Service has received credible information that crooks are activating so-called “cash out crews” to attack front-loading ATMs manufactured by ATM vendor Diebold Nixdorf.

The source said the Secret Service is warning that thieves appear to be targeting Opteva 500 and 700 series Diebold ATMs using the Ploutus.D malware in a series of coordinated attacks over the past 10 days, and that there is evidence that further attacks are being planned across the country.

“The targeted stand-alone ATMs are routinely located in pharmacies, big box retailers, and drive-thru ATMs,” reads a confidential Secret Service alert sent to multiple financial institutions and obtained by KrebsOnSecurity. “During previous attacks, fraudsters dressed as ATM technicians and attached a laptop computer with a mirror image of the ATMs operating system along with a mobile device to the targeted ATM.

Reached for comment, Diebold shared an alert it sent to customers Friday warning of potential jackpotting attacks in the United States. Diebold’s alert confirms the attacks so far appear to be targeting front-loaded Opteva cash machines.

“As in Mexico last year, the attack mode involves a series of different steps to overcome security mechanism and the authorization process for setting the communication with the [cash] dispenser,” the Diebold security alert reads. A copy of the entire Diebold alert, complete with advice on how to mitigate these attacks, is available here (PDF).

The Secret Service alert explains that the attackers typically use an endoscope — a slender, flexible instrument traditionally used in medicine to give physicians a look inside the human body — to locate the internal portion of the cash machine where they can attach a cord that allows them to sync their laptop with the ATM’s computer.

An endoscope made to work in tandem with a mobile device. Source: gadgetsforgeeks.com.au

“Once this is complete, the ATM is controlled by the fraudsters and the ATM will appear Out of Service to potential customers,” reads the confidential Secret Service alert.

At this point, the crook(s) installing the malware will contact co-conspirators who can remotely control the ATMs and force the machines to dispense cash.

“In previous Ploutus.D attacks, the ATM continuously dispensed at a rate of 40 bills every 23 seconds,” the alert continues. Once the dispense cycle starts, the only way to stop it is to press cancel on the keypad. Otherwise, the machine is completely emptied of cash, according to the alert.

A 2017 analysis of Ploutus.D by security firm FireEye called it “one of the most advanced ATM malware families we’ve seen in the last few years.”

“Discovered for the first time in Mexico back in 2013, Ploutus enabled criminals to empty ATMs using either an external keyboard attached to the machine or via SMS message, a technique that had never been seen before,” FireEye’s Daniel Regalado wrote.

According to FireEye, the Ploutus attacks seen so far require thieves to somehow gain physical access to an ATM — either by picking its locks, using a stolen master key or otherwise removing or destroying part of the machine.

Regalado says the crime gangs typically responsible for these attacks deploy “money mules” to conduct the attacks and siphon cash from ATMs. The term refers to low-level operators within a criminal organization who are assigned high-risk jobs, such as installing ATM skimmers and otherwise physically tampering with cash machines.

“From there, the attackers can attach a physical keyboard to connect to the machine, and [use] an activation code provided by the boss in charge of the operation in order to dispense money from the ATM,” he wrote. “Once deployed to an ATM, Ploutus makes it possible for criminals to obtain thousands of dollars in minutes. While there are some risks of the money mule being caught by cameras, the speed in which the operation is carried out minimizes the mule’s risk.”

Indeed, the Secret Service memo shared by my source says the cash out crew/money mules typically take the dispensed cash and place it in a large bag. After the cash is taken from the ATM and the mule leaves, the phony technician(s) return to the site and remove their equipment from the compromised ATM.

“The last thing the fraudsters do before leaving the site is to plug the Ethernet cable back in,” the alert notes.

FireEye said all of the samples of Ploutus.D it examined targeted Diebold ATMs, but it warned that small changes to the malware’s code could enable it to be used against 40 different ATM vendors in 80 countries.

The Secret Service alert says ATMs still running on Windows XP are particularly vulnerable, and it urged ATM operators to update to a version of Windows 7 to defeat this specific type of attack.

This is a quickly developing story and may be updated multiple times over the next few days as more information becomes available.

Planet DebianAntoine Beaupré: A summary of my 2017 work

New years are strange things: for most arbitrary reasons, around January 1st we reset a bunch of stuff, change calendars and forget about work for a while. This is also when I forget to do my monthly report and then procrastinate until I figure out I might as well do a year report while I'm at it, and then do nothing at all for a while.

So this is my humble attempt at fixing this, about a month late. I'll try to cover December as well, but since not much has happened then, I figured I could also review the last year and think back on the trends there. Oh, and you'll get chocolate cookies of course. Hang on to your eyeballs, this won't hurt a bit.

Debian Long Term Support (LTS)

Those of you used to reading those reports might be tempted to skip this part, but wait! I actually don't have much to report here and instead you will find an incredibly insightful and relevant rant.

So I didn't actually do any LTS work in December. I actually reduced my available hours to focus on writing (more on that later). Overall, I ended up working about 11 hours per month on LTS in 2017. That is less than the 16-20 hours I was available during that time. Part of that is me regularly procrastinating, but another part is that finding work to do is sometimes difficult. The "easy" tasks often get picked and dispatched quickly, so the stuff that remains, when you're not constantly looking, is often very difficult packages.

I especially remember the pain of working on libreoffice, the KRACK update, more tiff, GraphicsMagick and ImageMagick vulnerabilities than I care to remember, and, ugh, Ruby... Masochists (also known as "security researchers") can find the details of those excruciating experiments in debian-lts for the monthly reports.

I don't want to sound like an old idiot, but I must admit, after working on LTS for two years, that working on patching old software for security bugs is hard work, and not particularly pleasant on top of it. You're basically always dealing with other people's garbage: badly written code that hasn't been touched in years, sometimes decades, that no one wants to take care of.

Yet someone needs to take care of it. A large part of the technical community considers Linux distributions in general, and LTS releases in particular, as "too old to care for". As if our elders, once they passed a certain age, should just be rolled out to the nearest dumpster or just left rotting on the curb. I suspect most people don't realize that Debian "stable" (stretch) was released less than a year ago, and "oldstable" (jessie) is a little over two years old. LTS (wheezy), our oldest supported release, is only four years old now, and will become unsupported this summer, on its fifth year anniversary. Five years may seem like a long time in computing but really, there's a whole universe out there and five years is absolutely nothing in the range of changes I'm interested in: politics, society and the environment range much beyond that shortsightedness.

To put things in perspective, some people I know still run their office on an Apple II, which celebrated its 40th anniversary this year. That is "old". And the fact that the damn thing still works should command respect and admiration, more than contempt. In comparison, the phone I have, an LG G3, is running an unpatched, vulnerable version of Android because it cannot be updated, because it's locked out of the telcos networks, because it was found in a taxi and reported "lost or stolen" (same thing, right?). And DRM protections in the bootloader keep me from doing the right thing and unbricking this device.

We should build devices that last decades. Instead we fill junkyards with tons and tons of precious computing devices that have more precious metals than most people carry as jewelry. We are wasting generations of programmers, hardware engineers, human robots and precious, rare metals on speculative, useless devices that are destroying our society. Working on supporting LTS is a small part in trying to fix the problem, but right now I can't help but think we have a problem upstream, in the way we build those tools in the first place. It's just depressing to be at the receiving end of the billions of lines of code that get created every year. Hopefully, the death of Moore's law could change that, but I'm afraid it's going to take another generation before programmers figure out how far away from their roots they have strayed. Maybe too long to keep ourselves from a civilization collapse.

LWN publications

With that gloomy conclusion, let's switch gears and talk about something happier. So as I mentioned, in December, I reduced my LTS hours and focused instead on finishing my coverage of KubeCon Austin for LWN.net. Three articles have already been published on the blog here:

... and two more articles, about Prometheus, are currently published as exclusives by LWN:

I was surprised to see that the container runtimes article got such traction. It wasn't the most important debate in the whole conference, but there were some amazingly juicy bits, some of which we didn't even cover. Those were... uh... rather controversial, and we want the community to stay sane. Or saner, if that word can be applied at all to the container community at this point.

I ended up publishing 16 articles at LWN this year. I'm really happy about that: I just love writing and even if it's in English (my native language is French), it's still better than rambling on my own like I do here. My editors allow me to publish well polished articles, and I am hugely grateful for the privilege. Each article takes about 13 hours to write, on average. I'm less happy about that: I wish delivery was more streamlined and I spare you the miserable story of last minute major changes I sent in some recent articles, to which I again apologize profusely to my editors.

I'm often at a loss when I need to explain to friends and family what I write about. I often give the example of the password series: I wrote a whole article about just how to pick a passphrase, then a review of two geeky password managers, and then a review of something that's not quite a password manager and that you shouldn't be using. And on top of that, I even wrote a history of those, but by that time my editors were sick and tired of passwords and understandably made me go away. At this point, neophytes are just scratching their heads and I remind them of the TL;DR:

  1. choose a proper password with a bunch of words picked at random (really random, check out Diceware!); a minimal sketch of this follows the list

  2. use a password manager so you have to remember only one good password

  3. watch out where you type those damn things
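To make the first point concrete, here is a minimal sketch (my addition, not something from the original articles) of building a Diceware-style passphrase with Python's secrets module; the word list path is only an illustrative assumption, and a real Diceware list is better:

# Hypothetical sketch: pick a Diceware-style passphrase with a CSPRNG.
import secrets

def passphrase(wordlist="/usr/share/dict/words", n_words=6):
    with open(wordlist) as f:
        words = [w.strip() for w in f if w.strip().isalpha()]
    # secrets.choice draws from the OS CSPRNG, unlike random.choice
    return " ".join(secrets.choice(words) for _ in range(n_words))

print(passphrase())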

I covered two other conferences this year as well: one was the NetDev conference, for which I wrote 4 articles (1, 2, 3, 4). It turned out I couldn't cover NetDev in Korea even though I wanted to, but hopefully that is just "partie remise" (merely postponed), as we say in French... I also covered DebConf in Montreal, but that ended up being much harder than I thought: I got involved in networking and volunteered all over the place. By the time the conference started, I was too exhausted to actually write anything, even though I took notes like crazy and ran around trying to attend everything. I found it's harder to write about topics that are close to home: nothing is new, so you don't get excited as much. I still enjoyed writing about the supposed decline of copyleft, which was based on a talk by FSF executive director John Sullivan, and I ended up writing about offline PGP key storage strategies and cryptographic keycards, after buying a token from friendly gniibe at DebConf.

I also wrote about Alioth moving to Pagure, unknowingly joining up with a long tradition of failed predictions at LWN: a few months later, the tide turned and Debian launched the Alioth replacement as a beta running... GitLab. Go figure - maybe this is a version of the quantum observer effect applied to journalism?

Two articles seemed to have been less successful. The GitHub TOS update was less controversial than I expected it would be and didn't seem to have a significant impact, although GitHub did rephrase some bits of their TOS eventually. The ROCA review didn't seem to bring excited crowds either, maybe because no one actually understood anything I was saying (including myself).

Still, 2017 has been a great ride in LWN-land: I'm hoping to publish even more during the next year and encourage people to subscribe to the magazine, as it helps us publish new articles, if you like what you're reading here of course.

Free software work

Last but not least is my free software work. This was just nuts.

New programs

I have written a bunch of completely new programs:

  • Stressant - a small wrapper script to stress-test new machines. no idea if anyone's actually using the darn thing, but I have found it useful from time to time.

  • Wallabako - a bridge between Wallabag and my e-reader. This is probably one of my most popular programs ever. I get random strangers asking me about it in random places, which is quite nice. Also my first Golang program, something I am quite excited about and wish I was doing more of.

  • Ecdysis - a pile of (Python) code snippets, documentation and standard community practices I reuse across projects. Ended up being really useful when bootstrapping new projects, but probably just for me.

  • numpy-stats - a dumb commandline tool to extract stats from streams. didn't really reuse it so maybe not so useful. interestingly, found similar tools called multitime and hyperfine that will be useful for future benchmarks

  • feed2exec - a new feed reader (just that) which I have been using ever since for many different purposes. I have now replaced feed2imap and feed2tweet with that simple tool, and have added support for storing my articles on https://archive.org/, checking for dead links with linkchecker (below) and pushing to the growing Mastodon federation

  • undertime - a simple tool to show possible meeting times across different timezones. a must if you are working with people internationally!

If I count this right (and I'm omitting a bunch of smaller, less general purpose programs), that is six new software projects, just this year. This seems crazy, but that's what the numbers say. I guess I like programming too, which is arguably a form of writing. Talk about contributing to the pile of lines of code...

New maintainerships

I also got more or less deeply involved in various communities:

And those are just the major ones... I have about 100 repositories active on GitHub, most of which are forks of existing repositories, so actual contributions to existing free software projects. Hard numbers for this are annoyingly hard to come by as well, especially in terms of issues vs commits and so on. GitHub says I have made about 600 contributions in the last year, which is an interesting figure as well.

Debian contributions

I also did a bunch of things in the Debian project, apart from my LTS involvement:

  • Gave up on debmans, a tool I had written to rebuild https://manpages.debian.org, in the face of the overwhelming superiority of the Golang alternative. This is one of the things which lead me to finally try the language and write Wallabako. So: net win.

  • Proposed standard procedures for third-party repositories, which didn't seem to have caught on significantly in the real world. Hopefully just a matter of time...

  • Co-hosted a bug squashing party for the Debian stretch release, also as a way to have the DebConf team meet up.

  • That led to a two-hour workshop at the Montreal DebConf, which was packed and really appreciated. I'm thinking of organizing this at every DebConf I attend, in a (so far) secret plot to standardize packaging practices by evangelizing new package maintainers to my peculiar ways. I hope to teach again in Taiwan this year, but I'm not sure I'll make it that far across the globe...

  • And of course, I did a lot of regular package maintenance. I don't have good numbers on the exact activity stats here (any way to pull that out easily?) but I now directly maintain 34 Debian packages, a somewhat manageable number.

What's next?

This year, I'll need to figure out what to do with legacy projects. Gameclock and Monkeysign both need to be ported away from GTK2, which is deprecated. I will probably abandon the GUI in Monkeysign, but gameclock will probably need a rewrite of its GUI. This raises the question of how we can maintain software in the long term if even the graphical interface keeps shifting under our feet all the time (even Xorg is going away!). Without this change, both programs could have kept going for another decade without trouble. But now, I need to spend time just to keep those tools from failing to build at all.

Wallabako seems to be doing well on its own, but I'd like to fix the refresh issues that make the reader sometimes unstable: maybe I can write directly to the SQLite database? I tried statically linking SQLite to run some tests on that, but that apparently isn't possible, so the attempt failed.

Feed2exec just works for me. I'm not very proud of the design, but it does its job well. I'll fix bugs and maybe push out a 1.0 release when a long enough delay goes by without any critical issues coming up. So try it out and report back!

As for the other projects, I'm not sure how it's going to go. It's possible that my involvement in paid work means I cannot commit as much to general free software work, but I can't help but just doing those drive-by contributions all the time. There's just too much stuff broken out there to sit by and watch the dumpster fire burn down the whole city.

I'll try to keep doing those reports, of which you can find an archive in monthly-report. Your comments, encouragements, and support make this worth it, so keep those coming!

Happy new year everyone: may it be better than the last, shouldn't be too hard...

PS: Here is the promised chocolate cookie: 🍪 Well, technically, that is a plain cookie, but the only chocolate-related symbol was 🍫 (chocolate bar): modernity is to be expected with technology...

Cory DoctorowI’m speaking at UCSD on Feb 9!

I’m appearing at UCSD on February 9, with a talk called “Scarcity, Abundance and the Finite Planet: Nothing Exceeds Like Excess,” in which I’ll discuss the potentials for scarcity and abundance — and bright-green vs austere-green futurism — drawing on my novels Walkaway, Makers and Down and Out in the Magic Kingdom.


The talk is free and open to the public; the organizers would appreciate an RSVP to galleryinfo@calit2.net.


The lecture will take place in the Calit2 Auditorium in the Qualcomm Institute’s Atkinson Hall headquarters. The talk will begin at 5:00 p.m., and it will be moderated by Visual Arts professor Jordan Crandall, who chairs the gallery@calit2’s 2017-2018 faculty committee. Following the talk and Q&A session, attendees are invited to stay for a public reception.


Doctorow will discuss the economics, material science, psychology and politics of scarcity and abundance as described in his novels WALKAWAY, MAKERS and DOWN AND OUT IN THE MAGIC KINGDOM. Together, they represent what he calls “a 15-year literature of technology, fabrication and fairness.”

Among the questions he’ll pose: How can everyone in the world live like an American when we need six planets’ worth of materials to realize that dream? Doctorow also asks, “Can fully automated luxury communism get us there, or will our futures be miserable austerity-ecology hairshirts where we all make do with less?”


Author Cory Doctorow to Speak at UC San Diego on Scarcity, Abundance and the Finite Planet [Doug Ramsey/UCSD News]

Planet DebianSteinar H. Gunderson: Exploring minimax polynomials with Sollya

Following Fabian Giesen's advice, I took a look at Sollya—I'm not really that much into numerics (and Sollya, like the other stuff that comes out of the same group, is really written by hardcore numerics nerds), but approximation is often useful.

A simple example: When converting linear light values to sRGB, you need to be able to compute the formula f(x) = ((x + ɑ - 1) / ɑ)^ɣ for a given (non-integer) ɑ and ɣ. (Movit frequently needs this. For the specific case of sRGB, GPUs often have hard-coded lookup tables, but they are not always applicable, for instance if the data comes from Y'CbCr.) However, even after simplifications, the exponentiation is rather expensive to run for every pixel, so we'd like some sort of approximation.

If you've done any calculus, you may have heard of Taylor series, which looks at the derivatives in a certain point and creates a polynomial from that. One of the perhaps most famous is arctan(x) = x - 1/3 x³ + 1/5 x⁵ - 1/7 x⁷ + ..., which gives rise to a simple formula for approximating pi if you set x=1 (since arctan(1) = pi/4). However, for practical approximation, Taylor series are fairly useless; they're accurate near the origin point of the expansion, but don't care at all about what happens far from it. Minimax polynomials are better; they minimize the maximum error over the range of interest.
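As a rough illustration of that difference, here is a sketch of my own, using numpy rather than Sollya and a plain least-squares fit as a stand-in for a true minimax fit: compare the degree-7 Taylor polynomial of arctan with a degree-7 polynomial fitted over the whole interval [0, 1].

import numpy as np

x = np.linspace(0.0, 1.0, 10001)
target = np.arctan(x)

taylor = x - x**3/3 + x**5/5 - x**7/7                    # expansion around 0
fitted = np.polynomial.Polynomial.fit(x, target, deg=7)  # fit over the whole interval

print("max error, Taylor:", np.max(np.abs(taylor - target)))     # about 6e-2, worst at x = 1
print("max error, fitted:", np.max(np.abs(fitted(x) - target)))  # orders of magnitude smaller

A true minimax fit, which is what Sollya computes, does even better still, by equalizing the error peaks across the interval instead of minimizing an average.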

In the past, I've been using Maple for this (I never liked Mathematica much); it's non-free, but not particularly expensive for a personal license, and it can do pretty much everything I expect from a computer algebra system. However, it would be interesting to see if Sollya could do better. After toying around a bit, it seems there are pros and cons:

  • Sollya appears to be faster. I haven't made any formal benchmarks, but I just feel like I have to wait a lot less for it.
  • I find Sollya's syntax maybe a bit more obscure (e.g., [| to start a list), although this is probably partially personal preference. Its syntax error handling is also a lot less friendly.
  • Sollya appears to be a lot more robust towards actually terminating with a working result. E.g., Maple just fails on optimizing sqrt(x) over 0..1 (a surprisingly hard case), whereas I haven't really been able to make Sollya fail yet except in the case of malformed problems (e.g. asking for optimizing for relative error of an error which is zero at certain points). Granted, I haven't pushed it that hard.
  • Maple supports a much wider range of functions. This is a killer for me; I frequently need something as simple as piecewise functions, and Sollya simply doesn't appear to support them.
  • Maple supports rational expansions, i.e. two polynomials divided by each other (which can often increase performance dramatically—although the execution cost also balloons, of course). Sollya doesn't. On the other hand, Sollya supports expansion over given base functions, e.g. if you happen to have sin(x) computed for whatever obscure reason, you can get an expansion of the type f(x) = a + b·sin(x) + c·x + d·sin(x)² + e·x².
  • Maple supports arbitrary weighing of the error (e.g. if you care more about errors at the endpoints)—I find this super-useful, especially if you are dealing with transformed variables or piecewise approximations. Sollya only supports relative and absolute errors, which is more limiting.
  • Sollya can seemingly be embedded as a library. Useful for some, not really relevant for me.
  • And finally, Sollya doesn't optimize coefficients over arbitrary precision; you tell it what accuracy you have to deal with (number of bits in floating or fixed point) and it optimizes the coefficients with that round-off error in mind. (I don't know if it also deals with intermediate roundoff errors when evaluating the polynomial.) Fabian makes a big deal of this, but for fp32, it doesn't really seem to matter much; I did some tests relative to what I had already gotten out of Maple, and the difference in maximum error was microscopic.

So, the verdict? Sollya is certainly good, and I can see myself using it in the future, but for me, it's more of an augmentation than replacing Maple for this use.

Planet Linux AustraliaDonna Benjamin: Turning stories into software at LCA2018

Donna speaking in front of a large screen showing a survey and colourful graph. Photo Credit: Josh Simmons
I love free software, but sometimes, I feel, that free software does not love me.
 
Why is it so hard to use? Why is it still so buggy? Why do the things I can do simply with other tools, take so much effort? Why is the documentation so inscrutable?  Why have all the config settings been removed from the GUI? Why does this HowTo assume I can find a config file, and edit it with VI? Do I have to learn to use VI before I can stop my window manager getting in the way of the application I’m trying to use?
 
Tis a mystery. Or is it?
 
It’s fair to say that the Free Software community is still largely made up of blokes who are software developers. The idea that “user centered design” is a “Good Thing” is not evenly distributed. In fact, some seem to think it’s not a good thing at all: “patches welcome”, they say, “go fix it yourself”.
 
The web community, on the other hand, has discovered that the key to their success is understanding and meeting the needs of the people who use their software. Ideological purity is great, but enabling people to meet their objectives is better.
 
As technologists, we get excited by technology. Of course we do! Technology is modern magic. And we are wizards. It’s wonderful. But the people who use our software are not necessarily interested in the tech itself, they probably just want to use it to get something done. They probably don’t even care what language it’s written in.
 
Let’s say a customer walks into a hardware store and says they want a drill.  Or perhaps they walk in and stand in front of a shelf simply contemplating a dizzying array of drills, drill bits and other accessories. Which one is right for the job they wonder. Should I get a cordless one? Will I really need diamond tipped drill bits? 
 
There's a technique called the 5 Why's that's useful to get under the surface of a requirement. The idea is, you keep asking why until you uncover the real reason for a request, need, feature or widget. For example, we could ask this customer...
 
Why do you want this drill? To drill a hole. 
Why? To hang a picture on my wall.  
Why? To be able to share and enjoy this amazing photo from my recent holiday.
 
So we discover our customer did not, in fact, want a drill. Our customer wanted to express something about their identity by decorating their home.  So telling them all about the voltage of the drill, and the huge range of drill bits available, may have helped them choose the right drill for the job, but if we stop to understand the job in the first place, we’re more likely to be able to help that person get what they need to get their job done.
 
User stories are one way we can explore the “Why” behind the software we build. Check out my talk from the Developers Developers miniconf at linux.conf.au on Monday “Turning stories, into software.”
 

 

References

 

Photo by Josh Simmons

,

CryptogramFriday Squid Blogging: Squid that Mate, Die, and Then Sink

The mating and death characteristics of some squid are fascinating.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityRegistered at SSA.GOV? Good for You, But Keep Your Guard Up

KrebsOnSecurity has long warned readers to plant your own flag at the my Social Security online portal of the U.S. Social Security Administration (SSA) — even if you are not yet drawing benefits from the agency — because identity thieves have been registering accounts in peoples’ names and siphoning retirement and/or disability funds. This is the story of a Midwest couple that took all the right precautions and still got hit by ID thieves who impersonated them to the SSA directly over the phone.

In mid-December 2017 this author heard from Ed Eckenstein, a longtime reader in Oklahoma whose wife Ruth had just received a snail mail letter from the SSA about successfully applying to withdraw benefits. The letter confirmed she’d requested a one-time transfer of more than $11,000 from her SSA account. The couple said they were perplexed because both previously had taken my advice and registered accounts with MySocialSecurity, even though Ruth had not yet chosen to start receiving SSA benefits.

The fraudulent one-time payment that scammers tried to siphon from Ruth Eckenstein’s Social Security account.

Sure enough, when Ruth logged into her MySocialSecurity account online, there was a pending $11,665 withdrawal destined to be deposited into a Green Dot prepaid debit card account (funds deposited onto a Green Dot card can be spent like cash at any store that accepts credit or debit cards). The $11,665 amount was available for a one-time transfer because it was intended to retroactively cover monthly retirement payments back to her 65th birthday.

The letter the Eckensteins received from the SSA indicated that the benefits had been requested over the phone, meaning the crook(s) had called the SSA pretending to be Ruth and supplied them with enough information about her to enroll her to begin receiving benefits. Ed said he and his wife immediately called the SSA to notify them of fraudulent enrollment and pending withdrawal, and they were instructed to appear in person at an SSA office in Oklahoma City.

The SSA ultimately put a hold on the fraudulent $11,665 transfer, but Ed said it took more than four hours at the SSA office to sort it all out. Mr. Eckenstein said the agency also informed them that the thieves had signed his wife up for disability payments. In addition, her profile at the SSA had been changed to include a phone number in the 786 area code (Miami, Fla.).

“They didn’t change the physical address perhaps thinking that would trigger a letter to be sent to us,” Ed explained.

Thankfully, the SSA sent a letter anyway. Ed said many additional hours spent researching the matter with SSA personnel revealed that in order to open the claim on Ruth’s retirement benefits, the thieves had to supply the SSA with a short list of static identifiers about her, including her birthday, place of birth, mother’s maiden name, current address and phone number.

Unfortunately, most (if not all) of this data is available on a broad swath of the American populace for free online (think Zillow, Ancestry.com, Facebook, etc.) or else for sale in the cybercrime underground for about the cost of a latte at Starbucks.

The Eckensteins thought the matter had been resolved until Jan. 14, when Ruth received a 1099 form from the SSA indicating they’d reported to the IRS that she had in fact received an $11,665 payment.

“We’ve emailed our tax guy for guidance on how to deal with this on our taxes,” Mr. Eckenstein wrote in an email to KrebsOnSecurity. “My wife logged into SSA portal and there was a note indicating that corrected/updated 1099s would be available at the end of the month. She’s not sure whether that message was specific to her or whether everyone’s seeing that.”

NOT SMALL IF IT HAPPENS TO YOU

Identity thieves have been exploiting authentication weaknesses to divert retirement account funds almost since the SSA launched its portal eight years ago. But the crime really picked up in 2013, around the same time KrebsOnSecurity first began warning readers to register their own accounts at the MySSA portal. That uptick coincided with a move by the U.S. Treasury to start requiring that all beneficiaries receive payments through direct deposit (though the SSA says paper checks are still available to some beneficiaries under limited circumstances).

More than 34 million Americans now conduct business with the Social Security Administration (SSA) online. A story this week from Reuters says the SSA doesn’t track data on the prevalence of identity theft. Nevertheless, the agency assured the news outlet that its anti-fraud efforts have made the problem “very rare.”

But Reuters notes that a 2015 investigation by the SSA’s Office of Inspector General identified more than 30,000 suspicious MySSA registrations, and more than 58,000 allegations of fraud related to MySSA accounts from February 2013 to February 2016.

“Those figures are small in the context of overall MySSA activity – but it will not seem small if it happens to you,” writes Mark Miller for Reuters.

The SSA has not yet responded to a request for comment.

Ed and Ruth’s experience notwithstanding, it’s still a good idea to set up a MySSA account — particularly if you or your spouse will be eligible to withdraw benefits soon. The agency has been trying to beef up online authentication for citizens logging into its MySSA portal. Last summer the SSA began requiring all users to enter a username and password in addition to a one-time security code sent to their email or phone, although as previously reported here that authentication process could be far more robust.

The Reuters story reminds readers to periodically use the MySSA portal to make sure that your personal information – such as date of birth and mailing address – are correct. “For current beneficiaries, if you notice that a monthly payment has not arrived, you should notify the SSA immediately via the agency’s toll-free line (1-800-772-1213) or at your local field office,” Miller advised. “In most cases, the SSA will make you whole if the theft is reported quickly.”

Another option is to use the SSA’s “Block Electronic Access” feature, which blocks any automatic telephone or online access to your Social Security record – including by you (although it’s unclear if blocking access this way would have stopped ID thieves who manage to speak with a live SSA representative). To restore electronic access, you’ll need to contact the Social Security Administration and provide proof of your identity.

Planet DebianEddy Petrișor: Detecting binary files in the history of a git repository

Git, VCSes and binary files

Git is famous and has become popular even in the enterprise/commercial environments. But Git is also infamous regarding storage of large and/or binary files that change often, in spite of the fact they can be efficiently stored. For large files there have been several attempts to fix the issue, with varying degree of success, the most successful being git-lfs and git-annex.

My personal view is that, contrary to common practice, it is a bad idea to store binaries in any VCS. Still, this practice has been and still is in use in many projects, especially closed source ones. I won't go into the reasons and how legitimate they are; let's just say that we might finally convince people that binaries should be removed from the VCS, git in particular.

Since the purpose of a VCS is to make sure all versions of the stored objects are never lost, Linus designed git in such a way that knowing the exact hash of the tip/head of your git branch, it is guaranteed the whole history of that branch hasn't changed even if the repository was stored in a non-trusted location (I will ignore hash collisions, for practical reasons).

The consequence of this is that if the history is changed one bit, all commit hashes and history after that change will change also. This is what people refer to when they say they rewrite the (git) history, most often, in the context of a rebase.
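To see why, here is a toy sketch of my own (nothing like git's actual object format, just the chaining idea) showing how each hash depends on its parent's hash, so a single changed bit ripples through everything that follows:

import hashlib

def commit_hash(parent, content):
    # each "commit" hashes its content together with its parent's hash
    return hashlib.sha1(parent.encode() + content.encode()).hexdigest()

h1 = commit_hash("", "first commit")
h2 = commit_hash(h1, "second commit")
h3 = commit_hash(h2, "third commit")

h1b = commit_hash("", "first commit!")   # change one early commit...
h2b = commit_hash(h1b, "second commit")
h3b = commit_hash(h2b, "third commit")
print(h3 == h3b)                         # ...and every later hash differs: False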

But did you know that you could use git rebase to traverse the history of a branch and do all sorts of operations such as detecting all binary files that were ever stored in the branch?

Detecting any binary files, only in the current commit

As with everything on *nix, we start with some building blocks, and construct our solution on top of them. Let's first find all files, except the ones in .git:

find . -type f -print | grep -v '^\.\/\.git\/'
Then we can use the 'file' utility to list for non-text files:
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text'
And if there are any such files, then it means the current git commit is one that needs our attention; otherwise, we're fine.
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
Of course, we assume here that the work tree is clean.
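For illustration only, here is a rough Python sketch of the same per-commit check (my addition); instead of file(1) it uses a crude NUL-byte heuristic, so treat it as a sketch rather than a drop-in replacement:

import os

def looks_binary(path, probe=8192):
    # crude heuristic: text files normally contain no NUL bytes
    with open(path, "rb") as f:
        return b"\0" in f.read(probe)

for root, dirs, files in os.walk("."):
    if ".git" in dirs:
        dirs.remove(".git")   # never descend into the repository metadata
    for name in files:
        path = os.path.join(root, name)
        if looks_binary(path):
            print("binary candidate:", path)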

Checking all commits in a branch

Since we want to make this an efficient process and we only care if the history contains binaries, and branches are cheap in git, we can use a temporary branch that can be thrown away after our processing is finalized.
Making a new branch for some experiments is also a good idea to avoid losing the history, in case we make some stupid mistakes during our experiment.

Hence, we first create a new branch which points to the exact same tip the branch to be checked points to, and move to it:
git checkout -b test_bins
Git has many commands that facilitate automation, and in my case I want to basically run the chain of commands on all commits. For this we can put our chain of commands in a script:

cat > ../check_file_text.sh << 'EOF'
#!/bin/sh

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
EOF
then (ab)use 'git rebase' to execute that for us for all commits:
git rebase --exec="sh ../check_file_text.sh" -i $startcommit
After we execute this, the editor window will pop up, just save and exit. Assuming $startcommit is the hash of the first commit we know to be clean or beyond which we don't care to search for binaries, this will look in all commits since then.

Here is an example output when checking the newest 5 commits:

$ git rebase --exec="sh ../check_file_text.sh" -i HEAD~5
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Successfully rebased and updated refs/heads/test_bins.

Please note this process can change the history on the test_bins branch, but that is why we used a throw-away branch anyway, right? After we're done, we can go back to another branch and delete the test branch.

$ git co master
Switched to branch 'master'

Your branch is up-to-date with 'origin/master'
$ git branch -D test_bins
Deleted branch test_bins (was 6358b91).
Enjoy!

CryptogramThe Effects of the Spectre and Meltdown Vulnerabilities

On January 3, the world learned about a series of major security vulnerabilities in modern microprocessors. Called Spectre and Meltdown, these vulnerabilities were discovered by several different researchers last summer, disclosed to the microprocessors' manufacturers, and patched -- at least to the extent possible.

This news isn't really any different from the usual endless stream of security vulnerabilities and patches, but it's also a harbinger of the sorts of security problems we're going to be seeing in the coming years. These are vulnerabilities in computer hardware, not software. They affect virtually all high-end microprocessors produced in the last 20 years. Patching them requires large-scale coordination across the industry, and in some cases drastically affects the performance of the computers. And sometimes patching isn't possible; the vulnerability will remain until the computer is discarded.

Spectre and Meltdown aren't anomalies. They represent a new area to look for vulnerabilities and a new avenue of attack. They're the future of security -- and it doesn't look good for the defenders.

Modern computers do lots of things at the same time. Your computer and your phone simultaneously run several applications -- or apps. Your browser has several windows open. A cloud computer runs applications for many different computers. All of those applications need to be isolated from each other. For security, one application isn't supposed to be able to peek at what another one is doing, except in very controlled circumstances. Otherwise, a malicious advertisement on a website you're visiting could eavesdrop on your banking details, or the cloud service purchased by some foreign intelligence organization could eavesdrop on every other cloud customer, and so on. The companies that write browsers, operating systems, and cloud infrastructure spend a lot of time making sure this isolation works.

Both Spectre and Meltdown break that isolation, deep down at the microprocessor level, by exploiting performance optimizations that have been implemented for the past decade or so. Basically, microprocessors have become so fast that they spend a lot of time waiting for data to move in and out of memory. To increase performance, these processors guess what data they're going to receive and execute instructions based on that. If the guess turns out to be correct, it's a performance win. If it's wrong, the microprocessors throw away what they've done without losing any time. This feature is called speculative execution.

Spectre and Meltdown attack speculative execution in different ways. Meltdown is more of a conventional vulnerability; the designers of the speculative-execution process made a mistake, so they just needed to fix it. Spectre is worse; it's a flaw in the very concept of speculative execution. There's no way to patch that vulnerability; the chips need to be redesigned in such a way as to eliminate it.

Since the announcement, manufacturers have been rolling out patches to these vulnerabilities to the extent possible. Operating systems have been patched so that attackers can't make use of the vulnerabilities. Web browsers have been patched. Chips have been patched. From the user's perspective, these are routine fixes. But several aspects of these vulnerabilities illustrate the sorts of security problems we're only going to be seeing more of.

First, attacks against hardware, as opposed to software, will become more common. Last fall, vulnerabilities were discovered in Intel's Management Engine, a remote-administration feature on its microprocessors. Like Spectre and Meltdown, they affected how the chips operate. Looking for vulnerabilities on computer chips is new. Now that researchers know this is a fruitful area to explore, security researchers, foreign intelligence agencies, and criminals will be on the hunt.

Second, because microprocessors are fundamental parts of computers, patching requires coordination between many companies. Even when manufacturers like Intel and AMD can write a patch for a vulnerability, computer makers and application vendors still have to customize and push the patch out to the users. This makes it much harder to keep vulnerabilities secret while patches are being written. Spectre and Meltdown were announced prematurely because details were leaking and rumors were swirling. Situations like this give malicious actors more opportunity to attack systems before they're guarded.

Third, these vulnerabilities will affect computers' functionality. In some cases, the patches for Spectre and Meltdown result in significant reductions in speed. The press initially reported 30%, but that only seems true for certain servers running in the cloud. For your personal computer or phone, the performance hit from the patch is minimal. But as more vulnerabilities are discovered in hardware, patches will affect performance in noticeable ways.

And then there are the unpatchable vulnerabilities. For decades, the computer industry has kept things secure by finding vulnerabilities in fielded products and quickly patching them. Now there are cases where that doesn't work. Sometimes it's because computers are in cheap products that don't have a patch mechanism, like many of the DVRs and webcams that are vulnerable to the Mirai (and other) botnets -- groups of Internet-connected devices sabotaged for coordinated digital attacks. Sometimes it's because a computer chip's functionality is so core to a computer's design that patching it effectively means turning the computer off. This, too, is becoming more common.

Increasingly, everything is a computer: not just your laptop and phone, but your car, your appliances, your medical devices, and global infrastructure. These computers are and always will be vulnerable, but Spectre and Meltdown represent a new class of vulnerability. Unpatchable vulnerabilities in the deepest recesses of the world's computer hardware is the new normal. It's going to leave us all much more vulnerable in the future.

This essay previously appeared on TheAtlantic.com.

Planet DebianDirk Eddelbuettel: prrd 0.0.2: Many improvements

The prrd package was introduced recently, and made it to CRAN shortly thereafter. The idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development and are easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is little or no interdependency between them (besides maybe shared build dependencies). See the following screenshot (running six parallel workers, arranged in a split byobu session).

This note announces the second, and much improved, release. The package now runs on all operating systems supported by R and no longer has external system requirements. Several functions were improved, two new helper functions were added in a so-far still preliminary form, and everything is more robust now.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.2 (2018-01-24)

  • The package no longer requires wget.

  • Enhanced sanity checker function.

  • Expanded and improved dequeue function.

  • No longer use $HOME in xvfb-run-safe (#2).

  • The use of xvfb-run is now conditional on the OS (#3).

  • The set of available packages is no longer constrained to CRAN, but could be via the local setup script (#4).

  • The dequeue() function now uses system2().

  • The enqueue() function checks if no reverse dependencies are found and stops (#6).

  • The enqueue() function checks for repository information being set (#5).

CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureError'd: #TITLE_OF_ERRORD2#

Joe P. wrote, "When I tried to buy a coffee at the airport with my contactless VISA card, it apparently thought my name was '%s'."

 

"Instead of outsourcing to Eastern Europe or the Asian subcontinent, companies should be hiring from Malta. Just look at these people! They speak fluent base64!" writes Michael J.

 

Raffael wrote, "While I can proudly say that I am working on bugs, the Salesforce Chatter site should probably consider doing the same."

 

"Wow! Thanks! Happy Null Year to you too!" Alexander K. writes.

 

Joel B. wrote, "Yesterday was the first time I've ever seen a phone with a 'License Violation'. Phone still works, so I guess there's that."

 

"They missed me so much, they decided to give me...nothing," writes Timothy.

 

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Light Talks and Close

Lightning Talk

  • Usability Fails
  • Etching
  • Diverse Events
  • Kids Space – fairly unstructured and self organising
  • Opening up LandSat imagery – NBAR-T available on NCI
  • Project Nacho – HTML -> VPN/RDP gateway. Apache Guacamole
  • Vocaloids
  • Blockchain
  • Using j2 to create C++ code
  • Memory model code update
  • CLIs are user interface too
  • Complicated git things
  • Mollygive – matching donations
  • Abusing Docker

Closing

  • LCA 2019 will be in Christchurch, New Zealand – http://lca2019.linux.org.au
  • 700 Attendees at 2018
  • 400 talk and 36 Miniconf submissions

 

 

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Session 2

QUIC: Replacing TCP for the Web Jana Iyengar

  • History
    • Protocol for http transport
    • Deployed Inside Google 2014 and Chrome / mobile apps
    • Improved performance: YouTube rebuffers down 15-18%, Google search latency down 3.6-8%
    • 35% of Google’s egress traffic (7% of Internet)
    • Working group started in 2016 to standardize QUIC
    • Turned off at the start of 2016 due to a security problem
    • Doubled in Sept 2016 after being turned on for the YouTube app
  • Technology
    • Previously – IP -> TCP -> TLS -> HTTP/2
    • With QUIC – IP -> UDP -> QUIC -> HTTP over QUIC
    • Includes crypto and tcp handshake
    • congestion control
    • loss recovery
    • TLS 1.3 has some of the same features that QUIC pioneered, being updated to take account
  • HTTP/1
    • 1 trip for TCP
    • 2 trips for TLS
    • Single connection – Head Of Line blocking
    • Multiple TCP connections workaround.
  • HTTP/2
    • Streams within a single transport connection
    • Packet loss will stall the TCP layer
    • Unresolved problems
      • Connection setup latency
      • Middlebox interference with TCP – makes it hard to change TCP
      • Head of line blocking within TCP
  • QUIC
    • Connection setup
      • 0 round trips, handshake packet followed directly by data packet
      • 1 round-trips if crypto keys are not new
      • 2 round trips if QUIC version needs renegotiation
    • Streams
      • http/2 streams are sent as quic streams
  • Aspirations of protocol
    • Deployable and evolveable
    • Low latency connection establishment
    • Stream multiplexing
    • Better loss recovery and flexible congestion control
      • richer signalling (unique packet number)
      • better RTT estimates
    • Resilience to NAT-rebinding ( UDP Nat-mapping changes often, maybe every few seconds)
  • UDP is not a transport, you put something on top of UDP to build a transport
  • Why not a new protocol instead of UDP? Almost impossible to get a new protocol through middle boxes around the Internet.
  • Metrics
    • Search Latency (see paper for other metrics)
    • Enter search term > entire page is loaded
    • Mean: desktop improved 8%, mobile 3.6%
    • Low latency: Desktop 1% , Mobile none
    • Highest Latency 90-99% of users: Desktop & mobile 15-16%
    • Video similar
    • Big gain is from the 0-RTT handshake (see the back-of-the-envelope sketch after these notes)
  • QUIC – Search Latency Improvements by Country
    • South Korea – 38ms RTT – 1% improvement
    • USA – 50ms – 2 – 3.5 %
    • India – 188ms – 5 – 13%
  • Middlebox ossification
    • Vendor ossified first byte of QUIC packet – flags byte
    • since it seemed to be the same on all QUIC packets
    • broke QUIC deployment when a flag was fixed
    • Encryption is the only way to protect against network ossification
    • “Greasing” by randomly changing options is also an option.
  • Other Protocols over QUIC?
    • Concentrating on http/2
    • Looking at Web RPC
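A back-of-the-envelope sketch of my own (assuming the handshake cost dominates and there is no loss), using the per-country RTTs from the notes above, of why the gains grow with RTT:

# rough model: (setup round trips + 1 request/response round trip) * RTT
def first_response_ms(rtt_ms, setup_round_trips):
    return (setup_round_trips + 1) * rtt_ms

for country, rtt in [("South Korea", 38), ("USA", 50), ("India", 188)]:
    tcp_tls = first_response_ms(rtt, 3)    # TCP (1 RTT) + TLS 1.2 (2 RTTs)
    quic_0rtt = first_response_ms(rtt, 0)  # QUIC 0-RTT resumption
    print(country, "TCP+TLS ~", tcp_tls, "ms; QUIC 0-RTT ~", quic_0rtt, "ms")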

Remote Work: My first decade working from the far end of the earth John Dalton

  • “Remote work has given me a fulfilling technical career while still being able to raise my family in Tasmania”
  • First son born in 2015; wanted to stay in Tasmania with family to raise them, rather than moving to a tech hub.
  • 2017 working with High Performance Computing at the University of Tasmania
  • If everything is going to be outsourced, I want to be the one they outsourced to.
  • Wanted to do big web stuff, nobody in Tasmania doing that.
  • Was a user at LibraryThing
    • They were searching for Sysadmin/DBA in Portland, Maine
    • Knew he could do the job even though was on other side of the world
    • Negotiated into it over a couple of months
    • Knew could do the work, but not sure how the position would work out

Challenges

  • Discipline
    • Feels he is not organised. Doesn’t keep planner up to date or todo lists etc
    • “You can spend a lot of time reading about time management without actually doing it”
    • Do you need to have the minimum level
  • Isolation
    • Lives 20 minutes out of Hobart
    • In semi-rural area for days at a time, doesn’t leave house all week except to ferry kids on weekends.
    • “Never considered myself an extrovert, but I do enjoy talking to people at least weekly”
    • Need to work to hook in with Hobart tech community, Goes to meetups. Plays D&D with friends.
    • Considering going to coworking space. sometimes goes to Cafes etc
  • Setting Boundries
    • Hard to Leave work.
    • Have a dedicated work space.
  • Internet Access
    • Prioritise Coverage over cost these days for mobile.
    • Sometimes fixed provider go down, need to have a backup
  • Communication
    • Less random communication with other employees
    • Cannot assume any particular knowledge when talking with other people
    • Aware of particular cultural differences
    • Multiple chance of a miscommunication

Opportunities

  • Access to companies and jobs and technologies that could get locally
  • Access to people with a wider range of experiences and backgrounds

Finding remote work

  • Talk your way into it
  • Networking
  • Job Bof
  • stackoverflow.com/jobs can filter
  • weworkremotely.com

Making it work

  • Be visible
  • Go home at the end of the day
  • Remember real people are at the end of the email

 

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Session 1

Self-Documenting Coders: Writing Workshop for Devs Heidi Waterhouse

History of Technical documentation

  • Linear Writing
    • On Paper, usually books
    • Emphasis on understanding and doing
  • Task-based writing
    • Early 90s
    • DITA
    • Concept, Procedure, Reference
  • Object-orientated writing
    • High art form for tech writers
    • Content as code
    • Only works when compiled
    • Favoured by tech writers, translated. Up to $2000 per seat
  • Guerilla Writing
    • Stack Overflow
    • Wikis
    • YouTube
    • frustrated non-writers trying to help peers
  • Search-first writing
    • Every page is page one
    • Search-index driven

Writing Words

  • 5 W’s of journalism.
  • Documentation needs to be tested
  • Audiences
    • eg Users, future-self, Sysadmins, experts, End users, installers
  • Writing Basics
    • Sentences short
    • Graphics for concepts
    • Avoid screencaps (too easily outdated)
    • Use style guides and linters
    • Accessibility is a real thing
  • Words with pictures
    • Never include settings only in an image ( “set your screen to look like this” is bad)
    • Use images for concepts not instructions
  • Not all your users are readers
    • Can’t see well
    • Can’t parse easily
    • Some have terrible equipment
    • Some of the “some people” is us
    • Accessibility is not a checklist, although that helps, it is us
  • Using templates to write
    • Organising your thoughts and avoid forgetting parts
    • Add a standard look at low mental cost
  • Search-first writing – page one
    • If you didn’t answer the question or point to the answer you failed
    • answer “How do I?”
  • Indexing and search
    • All the words present are indexed
    • No false pointers
    • Use words people use and search for, Don’t use just your internal names for things
  • Semantic tagging and reuse
    • Semantic text splits form and content
    • Semantic tagging allows reuse
    • Reuse saves duplication
    • Reuse requires compiling
  • Sorting topics into buckets
    • Even with search you need some organisation
    • Group items by how they get used, not by how they get programmed
    • Grouping similar items allows serendipity
  • Links, menus and flow
    • give people a next step
    • Provide related info on same page
    • show location
    • offer a chance to see the document structure

Distributing Words

  • Static Sites
  • Hosted Sites
  • Baked into the product
    • Only available to customers
    • only updates with the product
    • Hard to encourage average user to input
  • Knowledge based / CMS
    • Useful to a community that knows what it wants
    • Prone to aging and rot
    • Sometimes diverges from published docs or company message
  • Professional Writing Tools
    • Shiny and powerful
    • Learning Cliff
    • IDE
    • Super features
    • Not going to happen again
  • Paper-ish things
    • Essential for some topics
    • Reassuring to many people
    • touch is a sense we can bond with
    • Need to understand if people using docs will be online or offline when they want them.
  • Using templates to publish
    • Unified look and feel
    • Consistency and not missing things
    • Built-in checklist

Collaborating on Words

  • One weird trick, write it up as your best guess and let them correct it
  • Have a hack day
    • Set a goal of things to delete
    • Set a goal of things to fix
    • Keep track of debt you can’t handle today
    • team-building doesn’t have to be about activities

Deleting Words

  • What needs to go
    • Old stuff that is wrong and terrible
    • Wrong stuff that hides right stuff
  • What to delete
    • Anything wrong
    • Anything dangerous
    • Anything not used or updated in a year
  • How
    • Delete temporarily (put aside for a while)
    • Based on analytics
    • Ruthlessly
    • Delete or update

Documentation Must be

  • True
  • Timely
  • Testable
  • Tuned

Documentation Components

  • Who is reading and why
    • Assuming no one likes reading docs
    • What is driving them to be here
  • Prerequisites
    • What does a user need to succeed
    • Can I change the product to reduce documentation
    • Is there any hazard in this process
  • How do I do this task
    • Steps
    • Results
    • Next steps
  • Test – How do I know that it worked
    • If you can’t test it, it is not a procedure
    • What will the system do, how does the state change
  • Reference
    • What other stuff affects this
    • What are the optional settings
    • What are the related things
  • Code and code samples
    • Best: code you can modify and run in the docs
    • 2nd Best: Code you can copy easily
    • Worst: retyping code
  • Option
    • Why did we build it this way
    • What else might you want to know
    • Have other people done this
    • Lifecycle

Documentation Types

  • Instructions
  • Ideas (arch, problem space, discarded options, process)
  • Action required (release notes, updates, deprecation)
  • Historical (roads maps, projects plans, retrospective documents)
  • Invisible docs (user experience, microinteractions, error messages)
    • Error messages – unique ID, what caused it, what mitigation to take, and optionally a link to report it
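
As a minimal sketch of that guidance (the error ID, wording and URL here are invented for illustration, not taken from any real product):

    # Hypothetical example of an error message following the guidance above:
    # a unique ID, what caused it, what to do about it, and a link to report it.
    class ConfigError(Exception):
        pass

    def load_config(path):
        try:
            with open(path) as f:
                return f.read()
        except FileNotFoundError:
            raise ConfigError(
                "ERR-1042: configuration file '%s' was not found. "
                "Create the file or pass --config with its location. "
                "If you think this is a bug, report it at https://example.org/bugs" % path)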

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Keynote – Jess Frazelle

Keynote: Containers aka crazy user space fun

  • Work at Microsoft on Open Source and containers, specifically on kubernetes
  • Containers vs Zones vs Jails vs VMs
  • Containers are not a first class concept in the kernel.
    • Namespaces
    • Cgroups
    • AppArmor in LSM (prevents mounting, writing to /proc etc.) (or SELinux)
    • Seccomp (syscall filters: which are allowed or denied) – prevents 150 other syscalls which are uncommon or dangerous.
      • Got list from testing all of dockerhub
      • eg CLONE, UNSHARE
      • NoNewPrivs (exposed as “allowPrivilegeEscalation” in K8s)
      • rkt and systemd-nspawn don’t 100% follow
  • Intel Clear containers are really VMs

History of Containers

  • OpenVZ – released 2005
  • Linux-Vserver (2008)
  • LXC ( 2008)
  • Docker ( 2013)
    • Initially used LXC as a backend
    • Switched to libcontainer in v0.7
  • lmctfy (2013)
    • By Google
  • rkt (2014)
  • runc (2015)
    • Part of Open container Initiative
  • Container runtimes are like the new Javascript frameworks

Are Containers Secure

  • Yes
  • and I can prove it
  • VMs / Zones and Jails are like all the Lego pieces are already glued together
  • With containers you have the parts separate
    • You can turn on and off certain namespaces
    • You can share namespaces between containers
    • Every container in k8s shares PID and NET namespaces
    • Docker has sane defaults
    • You can sandbox apps even further though
  • https://contained.af/
    • No one has managed to break out of the container
    • Has a very strict seccomp profile applied
    • You'd be better off attacking the app, but you are still up against the container's default seccomp filters

Containerizing the Desktop

  • Switched to runc from docker (had to convert stuff)
  • rootless containers
  • Runc hook “netns” to do networking
  • Sandboxed desktop apps, running in containers
  • Switch from Debian to CoreOS Container Linux as base OS
    • Verify the integrity of the OS
    • Just had to add graphics drivers
    • Based on gentoo, emerge all the way down

What if we applied the same defaults to programming languages?

  • Generate seccomp filters at build-time
    • Previously tried at run time, doesn’t work that well, something always missed
    • At build time we can ensure all code is included in the filter
    • The Go compiler writes the assembly for all the syscalls; you can hijack that, grab the list of these and create a seccomp filter (see the sketch after this list)
    • Not quite that simple:
      • plugins
      • exec external stuff
      • can directly exec a syscall in Go code, with the name passed in via arguments at runtime
  • metaparticle.io
    • Library for cloud-native applications
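
As a rough illustration of the build-time idea (this is not the tooling described in the talk; the allow-list below is a placeholder, and the output follows the Docker-style seccomp JSON profile format):

    import json

    def seccomp_profile(allowed_syscalls):
        # Deny everything by default and allow only the syscalls the build
        # step discovered the binary can actually make.
        return {
            "defaultAction": "SCMP_ACT_ERRNO",
            "architectures": ["SCMP_ARCH_X86_64"],
            "syscalls": [
                {"names": sorted(allowed_syscalls), "action": "SCMP_ACT_ALLOW"},
            ],
        }

    if __name__ == "__main__":
        # Placeholder list; in the build-time approach this would come from
        # the compiler's knowledge of which syscalls the program can reach.
        allowed = ["read", "write", "openat", "close", "mmap", "futex", "exit_group"]
        with open("profile.json", "w") as f:
            json.dump(seccomp_profile(allowed), f, indent=2)
        # The profile could then be applied with something like:
        #   docker run --security-opt seccomp=profile.json ...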

Linux Containers in secure enclaves (SCONE)

  • Currently Slow
  • Lots of tradeoffs on what executes where (trusted area or untrusted area)

Soft multi-tenancy

  • Reduced threat model, users not actively malicious
  • Hard Multi-tenancy would have potentially malicious containers running next to others
  • Host OS – eg CoreOs
  • Container Runtime – Look at glasshouse VMs
  • Network – Lots to do, default deny in k8s is a good start
  • DNS – Needs to be namespaced properly or turned off. option: kube-dns as a sidecar
  • Authentication and Authorisation – rbac
  • Isolation of master and System nodes from nodes running containers
  • Restricting access to host resources (k8s hostpath for volumes, pod security policy)
  • Making sure everything else is “very dumb” to its surroundings

 


,

Planet DebianDaniel Pocock: Do the little things matter?

In a widely shared video, US Admiral McRaven addressing University of Texas at Austin's Class of 2014 chooses to deliver a simple message: make your bed every day.

A highlight of this talk is the quote: "The little things in life matter. If you can't do the little things right, you'll never be able to do the big things right."

In the world of free software engineering, we have lofty goals: the FSF's High Priority Project list identifies goals like private real-time communication, security and diversity in our communities. Those deploying free software in industry have equally high ambitions, ranging from self-driving cars to beating the stock market.

Yet over and over again, we can see people taking little shortcuts and compromises. If Admiral McRaven is right, our failure to take care of little decisions, like how we choose an email provider, may be the reason those big projects, like privacy or diversity, appear to be no more than a pie-in-the-sky.

The IT industry has relatively few regulations compared to other fields such as aviation, medicine or even hospitality. Consider a doctor who re-uses a syringe - how many laws would he be breaking? Would he be violating conditions of his insurance? Yet if an IT worker overlooks the contempt for the privacy of Gmail users and their correspondents that is dripping off the pages of the so-called "privacy" policy, nobody questions them. Many people will applaud their IT staff for choices or recommendations like this, because, of course, "it works". A used syringe "just works" too, but who would want one of those?

Google's CEO Eric Schmidt tells us that if you don't have anything to hide, you don't need to worry.

Compare this to the advice of Sun Tzu, author of the indispensable book on strategy, The Art of War. The very first chapter is dedicated to estimating, calculating and planning: what we might call data science today. Tzu unambiguously advises to deceive your opponent, not to let him know the truth about your strengths and weaknesses.

In the third chapter, Offense, Tzu starts out that The best policy is to take a state intact ... to subdue the enemy without fighting is the supreme excellence. Surely this is only possible in theory and not in the real world? Yet when I speak to a group of people new to free software and they tell me "everybody uses Windows in our country", Tzu's words take on meaning he never could have imagined 2,500 years ago.

In many tech startups and even some teams in larger organizations, the oft-repeated mantra is "take the shortcut". But the shortcuts and the things you get without paying anything, without satisfying the conditions of genuinely free software, compromises such as Gmail, frequently involve giving up a little bit too much information about yourself: otherwise, why would they leave the bait out for you? As Mr Tzu puts it, you have just been subdued without fighting.

In one community that has taken a prominent role in addressing the challenges of diversity, one of the leaders recently expressed serious concern that their efforts had been subdued in another way: Gmail's Promotions Tab. Essential emails dispatched to people who had committed to their program were routinely being shunted into the Promotions Tab along with all that marketing nonsense that most people never asked for and the recipients never saw them.

I pointed out many people have concerns about Gmail and that I had been having thoughts about simply blocking it at my mail server. It is quite easy to configure a mail server to send an official bounce message, for example, in Postfix, it is just one line in the /etc/postfix/access file:

gmail.com   REJECT  The person you are trying to contact hasn't accepted Gmail's privacy policy.  Please try sending the email from a regular email provider.

Some communities could go further, refusing to accept Gmail addresses on mailing lists or registration forms: the lesser evil compared to a miserable fate in Promotions Tab limbo.

I was quite astounded at the response: several people complained that this was too much for participants to comply with (the vast majority register with a Gmail address) or that it was even showing all Gmail users contempt (can't they smell the contempt for users in the aforementioned Gmail "privacy" policy?). Nobody seemed to think participants could cope with that and if we hope these people are going to be the future of diversity, that is really, really scary.

Personally, I have far higher hopes for them: just as Admiral McRaven's Navy SEALS are conditioned to make their bed every day at boot camp, people entering IT, especially those from under-represented groups, need to take pride in small victories for privacy and security, like saying "No" each and every time they have the choice to give up some privacy and get something "free", before they will ever hope to accomplish big projects and change the world.

If they don't learn these lessons at the outset, like the survival and success habits drilled into soldiers during boot-camp, will they ever? If programs just concentrate on some "job skills" and gloss over the questions of privacy and survival in the information age, how can they ever deliver the power shift that is necessary for diversity to mean something?

Come and share your thoughts on the FSFE discussion list (join, thread and reply).

Sociological ImagesChildren Learn Rules for Romance in Preschool

Originally Posted at TSP Discoveries

Photo by oddharmonic, Flickr CC

In the United States we tend to think children develop sexuality in adolescence, but new research by Heidi Gansen shows that children learn the rules and beliefs associated with romantic relationships and sexuality much earlier.

Gansen spent over 400 hours in nine different classrooms in three Michigan preschools. She observed behavior from teachers and students during daytime classroom hours and concluded that children learn — via teachers’ practices — that heterosexual relationships are normal and that boys and girls have very different roles to play in them.

In some classrooms, teachers actively encouraged “crushes” and kissing between boys and girls. Teachers assumed that any form of affection between opposite gender children was romantically-motivated and these teachers talked about the children as if they were in a romantic relationship, calling them “boyfriend/girlfriend.” On the other hand, the same teachers interpreted affection between children of the same gender as friendly, but not romantic. Children reproduced these beliefs when they played “house” in these classrooms. Rarely did children ever suggest that girls played the role of “dad” or boys played the role of “mom.” If they did, other children would propose a character they deemed more gender-appropriate like a sibling or a cousin.

Preschoolers also learned that boys have power over girls’ bodies in the classroom. In one case, teachers witnessed a boy kiss a girl on the cheek without permission. While teachers in some schools enforced what the author calls “kissing consent” rules, the teachers in this school interpreted the kiss as “sweet” and as the result of a harmless crush. Teachers also did not police boys’ sexual behaviors as actively as girls’ behaviors. For instance, when girls pulled their pants down teachers disciplined them, while teachers often ignored the same behavior from boys. Thus, children learned that rules for romance also differ by gender.

Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.

(View original at https://thesocietypages.org/socimages)

CryptogramWhatsApp Vulnerability

A new vulnerability in WhatsApp has been discovered:

...the researchers unearthed far more significant gaps in WhatsApp's security: They say that anyone who controls WhatsApp's servers could effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation.

Matthew Green has a good description:

If all you want is the TL;DR, here's the headline finding: due to flaws in both Signal and WhatsApp (which I single out because I use them), it's theoretically possible for strangers to add themselves to an encrypted group chat. However, the caveat is that these attacks are extremely difficult to pull off in practice, so nobody needs to panic. But both issues are very avoidable, and tend to undermine the logic of having an end-to-end encryption protocol in the first place.

Here's the research paper.

Worse Than FailureThe More Things Change: Fortran Edition

Technology improves over time. Storage capacity increases. Spinning platters are replaced with memory chips. CPUs and memory get faster. Moore's Law. Compilers and languages get better. More language features become available. But do these changes actually improve things? Fifty years ago, meteorologists used the best mainframes of the time, and got the weather wrong more than they got it right. Today, they have a global network of satellites and supercomputers, yet they're wrong more than they're right (we just had a snowstorm in NJ that was forecast as 2-4", but got 16" before drifting).

As with most other languages, FORTRAN also added structure, better flow control and so forth. The problem with languages undergoing such a massive improvement is that occasionally, coding styles live for a very long time.

Imagine a programmer who learned to code using FORTRAN IV (variable names up to 6 characters, integers implicitly start with "I" through "N" and reals start with any other letter - unless explicitly declared, flow control via GOTO, etc) writing a program in 2000 (using a then-current compiler but with FORTRAN IV style). Now imagine some PhD candidate coming along in 2017 to maintain and enhance this FORTRAN IV-style code with the latest FORTRAN compiler.

A.B. was working at a university with just such a scientific software project as part of earning a PhD. These are just a couple of the things that caused a few head-desk moments.

Include statements. The first variant only allows code to be included. The second allows preprocessor directives (like #define).

    INCLUDE  'path/file'

    #include 'path/file'

Variables. Since the only data types/structures originally available were character, logical, integer, real*4, real*8 and arrays, you had to shoehorn your data into the closest fit. This led to declarations sections that included hundreds of basic declarations. This hasn't improved today as people still use one data type to hold something that really should be implemented as something else. Also, while the compilers of today support encapsulation/modules, back then, everything was pretty much global.

Data structures. The only thing supported back then was multidimensional arrays. If you needed something like a map, you needed to roll your own. This looks odd to someone who cut their teeth on a version of the language where these features are built-in.

Inlining. FORTRAN subroutines support local subroutines and functions which are inlined, which is useful to provide implied visibility scoping. Prudent use allows you to DRY your code. This feature isn't even used, so the same code is duplicated over and over again inline. Any of you folks see that pattern in your new-fangled modern systems?

Joel Spolsky commented about the value of keeping old code around. While there is much truth in his words, the main problem is that the original programmers invariably move on, and as he points out, it is much harder to read (someone else's) code than to write your own; maintenance of ancient code is a real world issue. When code lives across too many language version improvements, it becomes inherently more difficult to maintain as its original form becomes more obsolete.

To give you an idea, take a look at the just the declaration section of one module that A.B. inherited (except for a 2 line comment at the top of the file, there were no comments). FWIW, when I did FORTRAN at the start of my career, I used to document the meaning of every. single. abbreviated. variable. name.

      subroutine thesub(xfac,casign,
     &     update,xret,typret,
     &     meop1,meop2,meop12,meoptp,
     &     traop1, traop2, tra12,
     &     iblk1,iblk2,iblk12,iblktp,
     &     idoff1,idoff2,idof12,
     &     cntinf,reoinf,
     &     strinf,mapinf,orbinf)
      implicit none
      include 'routes.h'
      include 'contr_times.h'
      include 'opdim.h'
      include 'stdunit.h'
      include 'ioparam.h'
      include 'multd2h.h'
      include 'def_operator.h'
      include 'def_me_list.h'
      include 'def_orbinf.h'
      include 'def_graph.h'
      include 'def_strinf.h'
      include 'def_filinf.h'
      include 'def_strmapinf.h'
      include 'def_reorder_info.h'
      include 'def_contraction_info.h'
      include 'ifc_memman.h'
      include 'ifc_operators.h'
      include 'hpvxseq.h'

      integer, parameter ::
     &     ntest = 000
      logical, intent(in) ::
     &     update
      real(8), intent(in) ::
     &     xfac, casign
      real(8), intent(inout), target ::
     &     xret(1)
      type(coninf), target ::
     &     cntinf
      integer, intent(in) ::
     &     typret,
     &     iblk1, iblk2, iblk12, iblktp,
     &     idoff1,idoff2,idof12
      logical, intent(in) ::
     &     traop1, traop2, tra12
      type(me_list), intent(in) ::
     &     meop1, meop2, meop12, meoptp
      type(strinf), intent(in) ::
     &     strinf
      type(mapinf), intent(inout) ::
     &     mapinf
      type(orbinf), intent(in) ::
     &     orbinf
      type(reoinf), intent(in), target ::
     &     reoinf

      logical ::
     &     bufop1, bufop2, buf12, 
     &     first1, first2, first3, first4, first5,
     &     msfix1, msfix2, msfx12, msfxtp,
     &     reject, fixsc1, fixsc2, fxsc12,
     &     reo12, non1, useher,
     &     traop1, traop2
      integer ::
     &     mstop1,mstop2,mst12,
     &     igmtp1,igmtp2,igmt12,
     &     nc_op1, na_op1, nc_op2, na_op2,
     &     nc_ex1, na_ex1, nc_ex2, na_ex2, 
     &     ncop12, naop12,
     &     nc12tp, na12tp,
     &     nc_cnt, na_cnt, idxres,
     &     nsym, isym, ifree, lenscr, lenblk, lenbuf,
     &     buftp1, buftp1, bftp12,
     &     idxst1, idxst2, idxt12,
     &     ioff1, ioff2, ioff12,
     &     idxop1, idxop2, idop12,
     &     lenop1, lenop2, len12,
     &     idxm12, ig12ls,
     &     mscmxa, mscmxc, msc_ac, msc_a, msc_c,
     &     msex1a, msex1c, msex2a, msex2c,
     &     igmcac, igamca, igamcc,
     &     igx1a, igx1c, igx2a, igx2c,
     &     idxms, idxdis, lenmap, lbuf12, lb12tp,
     &     idxd12, idx, ii, maxidx
      integer ::
     &     ncblk1, nablk1, ncbka1, ncbkx1, 
     &     ncblk2, nablk2, ncbka2, ncbkx2, 
     &     ncbk12, nabk12, ncb12t, nab12t, 
     &     ncblkc, nablkc,
     &     ncbk12, nabk12,
     &     ncro12, naro12,
     &     iblkof
      type(filinf), pointer ::
     &     ffop1,ffop2,ff12
      type(operator), pointer ::
     &     op1, op2, op1op2, op12tp
      integer, pointer ::
     &     cinf1c(:,:),cinf1a(:,:),
     &     cinf2c(:,:),cinf2a(:,:),
     &     cif12c(:,:),
     &     cif12a(:,:),
     &     cf12tc(:,:),
     &     cf12ta(:,:),
     &     cfx1c(:,:),cfx1a(:,:),
     &     cfx2c(:,:),cfx2a(:,:),
     &     cfcntc(:,:),cfcnta(:,:),
     &     inf1c(:),
     &     inf1a(:),
     &     inf2c(:),
     &     inf2a(:),
     &     inf12c(:),
     &     inf12a(:),
     &     dmap1c(:),dmap1a(:),
     &     dmap2c(:),dmap2a(:),
     &     dm12tc(:),dm12ta(:)

      real(8) ::
     &     xnrm, facscl, fcscl0, facab, xretls
      real(8) ::
     &     cpu, sys, cpu0, sys0, cpu00, sys00
      real(8), pointer ::
     &     xop1(:), xop2(:), xop12(:), xscr(:)
      real(8), pointer ::
     &     xbf1(:), xbf2(:), xbf12(:), xbf12(:), x12blk(:)

      integer ::
     &     msbnd(2,3), igabnd(2,3),
     &     ms12ia(3), ms12ic(3), ig12ia(3), ig12ic(3),
     &     ig12rw(3)

      integer, pointer ::
     &     gm1dc(:), gm1da(:),
     &     gm2dc(:), gm2da(:),
     &     gmx1dc(:), gmx1da(:),
     &     gmx2dc(:), gmx2da(:),
     &     gmcdsc (:), gmcdsa (:),
     &     gmidsc (:), gmidsa (:),
     &     ms1dsc(:), ms1dsa(:),
     &     ms2dsc(:), ms2dsa(:),
     &     msx1dc(:), msx1da(:),
     &     msx2dc(:), msx2da(:),
     &     mscdsc (:), mscdsa (:),
     &     msidsc (:), msidsa (:),
     &     idm1ds(:), idxm1a(:),
     &     idm2ds(:), idxm2a(:),
     &     idx1ds(:), ixms1a(:),
     &     idx2ds(:), ixms2d(:),
     &     idxdc (:), idxdsa (:),
     &     idxmdc (:),idxmda (:),
     &     lstrx1(:),lstrx2(:),lstcnt(:),
     &     lstr1(:), lstr2(:), lst12t(:)

      integer, pointer ::
     &     mex12a(:), m12c(:),
     &     mex1ca(:), mx1cc(:),
     &     mex2ca(:), mx2cc(:)

      integer, pointer ::
     &     ndis1(:,:), dgms1(:,:,:), gms1(:,:),
     &     lngms1(:,:),
     &     ndis2(:,:), dgms2(:,:,:), gms2(:,:),
     &     lngms2(:,:),
     &     nds12t(:,:), dgms12(:,:,:),
     &     gms12(:,:),
     &     lgms12(:,:),
     &     lg12tp(:,:), lm12tp(:,:,:)

      integer, pointer ::
     &     cir12c(:,:), cir12a(:,:),
     &     ci12c(:,:),  ci12a(:,:),
     &     mire1c(:),   mire1a(:),
     &     mire2c(:),   mire2a(:),
     &     mca12(:), didx12(:), dca12(:),
     &     mca1(:),  didx1(:),  dca1(:),
     &     mca2(:),  didx2(:),  dca2(:),
     &     lenstr_array(:,:,:)

c dbg
      integer, pointer ::
     &     dum1c(:), dum1a(:), hpvx1c(:), hpvx1a(:),
     &     dum2c(:), dum2a(:), hpvx2c(:), hpvx2a(:)
      integer ::
     &     msd1(ngastp,2,meop1%op%njoined),
     &     msd2(ngastp,2,meop2%op%njoined),
     &     jdx, totc, tota
c dbg

      type(graph), pointer ::
     &     graphs(:)

      integer, external ::
     &     ielsum, ielprd, imdst2, glnmp, idxlst,
     &     mxdsbk, m2ims4
      logical, external ::
     &     nxtdis, nxtds2, lstcmp,
     &     nndibk, nndids
      real(8), external ::
     &     ddot
[Advertisement] Universal Package Manager - ProGet easily integrates with your favorite Continuous Integration and Build Tools, acting as the central hub to all your essential components. Learn more today!

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 3

Insights – solving every problem for good Paul Wayper

Sysadmins

  • Too much to check, too little time
  • What does this message mean again
  • Too reactive

How Sysadmins fix problems

  • Read text files and command output
  • Look at them for information
  • Check this information against their knowledge
  • Decide on an appropriate solution

Insights

  • Reads text files and outputs
  • Process them into information
  • Use information in rules
  • Rules provide information about the solution

Examples

  • Simple rule – check “localhost” is in /etc/hosts (see the sketch after this list)
  • Rule 2 – chronyd refuses to fix the server's time since it is out by more than 1000s
    • Checks /var/log/messages for the error message from chrony
  • Insights rolls up all the checks against messages, so it is only done once
  • Rule 3 – rsyslog dropping messages
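
The real Insights rules use their own parsers and rule API; purely as a self-contained sketch of what the first example rule checks, something like this would do:

    def localhost_missing(hosts_path="/etc/hosts"):
        """Return True if no entry in the hosts file names 'localhost'."""
        with open(hosts_path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()   # drop comments and blank lines
                if not line:
                    continue
                if "localhost" in line.split()[1:]:    # host names follow the address
                    return False
        return True

    if __name__ == "__main__":
        if localhost_missing():
            print("Rule fired: 'localhost' is not present in /etc/hosts")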

Website

http://red.ht/demo_rules

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 2

Personalisation at Scale: A “Cookie Cutter” Approach Jim O’Halloran

  • Impact of site performance on conversion is huge
  • Magento
    • LAMP stack + Redis or memcached
    • Generally the app is CPU bound
    • Routing / Rendering still time consuming
  • Varnish full page caching (FPC)
  • But what about personalised content?
  • Edge Side Includes (ESIs)
    • But ESIs run in series, so it is slow when you have many
    • Content is not cacheable, expensive to calculate, significant render time
    • ESI therefore undermines much advantage of FPC
  • Ajax
    • Make ajax request and fetch personalised content
    • Still load on backend
    • ESI limitations plus added network latency
  • Cookie Cutter
    • When an event occurs that modifies personalisation state, send a cookie containing the required data with the response.
    • In the browser, use the content of that cookie to update the page

Example

  • Goto www.example.com
    • Probably cached in varnish
    • I don’t have a cookie
    • If I login, uncachable request, I am changing login state
    • Response includes a Set-Cookie header creating a personalised cookie (see the sketch after this example)
  • Advantages
    • No backend requests
    • Page data served is cached always
  • How big can cookies be?
    • RFC 6265 has limits but in reality
    • Actual limit ~4096 bytes per cookie
    • Some older browsers also limit to ~4096 bytes total per domain
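
A minimal sketch of the server-side half of this pattern (Flask is used here only for illustration; the cookie name and fields are made up): the state-changing request carries the personalisation data down in a cookie, and client-side JavaScript reads that cookie to update otherwise fully cached pages.

    import json
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login", methods=["POST"])
    def login():
        # Uncachable, state-changing request: authenticate the user (omitted),
        # then send the personalisation state down as a cookie. The cached
        # pages stay generic; a script in the page reads this cookie.
        resp = make_response("ok")
        personalisation = {"name": "Alice", "cart_items": 2}  # illustrative only
        resp.set_cookie("personalisation", json.dumps(personalisation), max_age=3600)
        return resp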

Potential issues

  • Request Size
    • Keep cookies small
      • Store small values only, No pre-rendered markup, No larger data structures
    • Serve static assets via CDN
    • Lot of stuff in cart can get huge
  • Information leakage
    • Final URLs leaked to users who are not logged in
  • Large Scale changes
    • Page needs to look completely different to different users
    • Vary headers might be an option
  • Formkeys
    • XSRF protection workarounds
  • What about cache misses
    • Magento assembles all its pages from a series of blocks
    • Most parts of page are relatively static (block cache)
    • Aligent_CacheObserver – Magento extension that adds cache tags to blocks that should be cached but were not picked up as cacheable by default
    • Aoe_TemplateHints – Visibility into Block cache
    • Caching != Performance Optimisation – Aoe_Profiler

Availability

  • Plugin available for Magento 1
    • Varnish CookieCutter
  • Magento 2 has native Varnish support
    • But has limitations
    • Maybe some of the CookieCutter stuff could improve it

Future

  • localStorage instead of cookies


 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 1

Panel: Meltdown, Spectre, and the free-software community Jonathan Corbet, Andrew ‘bunnie’ Huang, Benno Rice, Jess Frazelle, Katie McLaughlin, Kees Cook

  • FreeBSD only heard 11 days beforehand. Would have liked more notice
  • Got people involved from the Kernel Summit in Oct
  • Hosting company only heard once it went official, been busy patching since
  • Likely to be class-action lawsuit for $billions. That might make chip makers more paranoid about documentation and disclosure.
  • Thoughts in embargo
    • People noticed strange patches going in beforehand.
    • Only broke 6 days early, had been going for 6 months
    • “Linus is happy with this, something is terribly wrong”
    • Sad that the 2nd-tier cloud providers didn’t know. Exclusive club and lines as to who got informed were not clear
    • Projects that don’t have explicit relationship with Intel didn’t get informed
  • Thoughts on other vendors
    • This class of bugs could affect anybody, open hardware would probably not fix
    • More open hardware could enable people to review the processors and find these from the design rather than poking around
    • Hard to guarantee the shipped hardware matches the design
    • Software people can build everything at home and check. FABs don’t work at home.
  • Speculative execution warned about years ago. Danger ignored. How to make sure the next one isn’t ignored?
    • We always have to do some risky stuff
    • The research on this built up slowly over the years
    • Even if you have only found impractical attacks against something, that doesn't mean a practical one doesn't exist.
  • What criteria do we use to decide who is in?
    • Mechanisms do exist, they were mainly not used. Perhaps because they were for software vulnerabilities
  • Did people move providers?
    • No but Containers made things easier to reboot stuff and shuffle
  • Are there similar vulnerabilities ( similar or general hardware ) coming along?
    • The Kernel page-table patches were fairly general, should cover many similar ones
    • All these performance-optimising bits of your CPU are now attack surfaces
    • What are people going to do if this slows down hardware too much?
  • How do we explain problems like these to politicians etc
    • Legos
    • We still have kernel devs getting their laptops
  • Can be use CPUs that don’t have speculative execution?
    • Not really. Back to 486s
  • Who are we protecting against with the embargo?
    • Everybody
    • The longer period let better fixes get in
    • The meltdown fix could be done in semi-public so had better quality

What is the most common street name in Australia? Rachel Bunder

  • Why?
    • Saw a map with the most common street name by US state
  • Just looking at name, not end bit “park” , “road”
  • Data
    • PSMA Geocoded national address file – Great but came out after project
    • Use Open Street Maps
  • Started with Common Name in Sydney
    • Used Metro Extracts – site closing down soon
    • Format is geojson
    • Road files separately provided
  • Procedure
    • Used Python; R also has good features and libraries
    • geopandas
    • Had some paths with no names
    • What is a road? – “Something with a name I can drive a car on” (a rough sketch of the counting step appears after these notes)
  • Sydney
    • Full street name
      • Victoria Road
      • Pacific Highway
      • oops, looks like names are being counted twice
    • Tried merging them together
    • Roads don't 100% match at the ends. Added a function to fuzzy-merge roads whose ends are within 100m
    • Still some weird ones but probably won’t affect top
    • Second attempt
      • Short st, George st, William st, John st, Church st
  • Now with just the “name bit”
    • Tried taking out just the last name. ended up with “the” as most common.
    • Started with “The” = whole name
    • Single word = whole name
    • name – descriptor – suffix
    • lots of weird names
    • name list – Park, Victoria, Railway, William, Short
  • Wouldn’t work in many other counties
  • Now for all over Australia
    • overpass data
    • Downloaded in 50kmx50x squares
  • Lessons
    • Start small
    • Choose something familiar
    • Check your bias (different naming conventions)
    • Constant vigilance
    • Know your problem
  • Common plant names
    • Wattle – 15th – 385
  • Other name
    • “The Esplanade” more common than “The Avenue”
  • Top names
    • 5th – Victoria
    • 4th – Church – 497
    • 3rd – George –  551
    • 2nd – Railway
    • 1st – Park – 693
  • By State
    • WA – Forest
    • SA – Railway
    • Vic – Park
    • Tas – Esplanade
    • NT – Smith/Stuart
    • NSW – Park
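
As a rough sketch of the counting step described under “Procedure” above (the file name and the OSM “name” column are assumptions about the extract being used, and the cleaning and merging steps from the talk are skipped):

    from collections import Counter
    import geopandas as gpd

    # Placeholder path: a GeoJSON roads extract, e.g. from Metro Extracts or Overpass.
    roads = gpd.read_file("sydney_roads.geojson")

    # Drop unnamed paths, then count full street names.
    named = roads[roads["name"].notna()]
    print(Counter(named["name"]).most_common(5))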

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Keynote – Hugh Blemings

Wandering through the Commons

Reflections on Free and Open Source Software/Hardware in Australia, New Zealand and beyond

  • Past Linux.conf.au’s reviewed
  • FOSS in Aus and NZ
    • Above per capita
  • List of Aus / NZ people and their contributions
    • John Lions , Lions book on Unix
    • Pia Andrews/Waugh/Smith – Open Government, GovHack, Linux Australia, Open Data
    • Vik Oliver – 3D Printing
    • Clare Curran – Open Government in NZ
    • plus a bunch of others

Working in Free Software and Open Hardware

  • The basics
    • Be visible in projects of relevance
      • You will be typed into Google, looked at in GitHub
    • Be yourself
      • But be business Friendly
    • Linkedin is a thing, really
    • Need an accurate basic presence
  • Finding a new job
    • Networks
    • Local user groups
    • Conferences
    • The projects you work on
  • Application and negotiation
    • Be professional, courteous
    • Do homework about company and culture
    • Talk to people that work there
    • Spend time on interview prep
      • Know your stuff, if you don’t know, say so
    • Think about Salary expectations and stick to them
      • Val Aurora’s page on this is excellent
    • Ask to keep copyright on your code
      • Should be a no-brainer for a FOSS.OH company
  • In the Job
    • Takes time to get into groove, don’t sweat it
    • Get out every now and then, particularly if working from home
    • Work/life balance
    • Know when to jump
      • Poisonous workplaces
    • An aside to People’s managers
      • Bring your best or don’t be a people manager
      • Take your reports' welfare seriously

Looking after You

  • Ours is in the main a sedentary and solitary pursuit
    • exercise
  • Sitting and standing in front of a desk all day is bad
    • take breaks
  • Depression is a real thing
  • Eat more vegetables
  • Find friends/colleagues to exercise with

Working if FOSS / OH – Staying Current

  • Look over a colleagues shoulder
  • Do something that is not part of your regular job
    • low level programming
    • Larger systems, OpenStack
  • Stay up to date with security blogs and the like
    • Many of the attack vectors have generic relevance
  • Take the lid off, tinker with hardware
    • Lots of videos online to help or just watch

Make Hay while the Sun Shines

  • Save some money for rainy day
  • Keep networks Open
  • Even when you have a job

You’re fired … Now What? – In a moment

  • Don’t panic
    • Going out in a twitter storm won’t help anyone
  • It’s not personal
    • It is the position that is no longer needed, not you
  • If you think it an unfair dismissal, seek legal advice before signing anything
  • It is normal to feel rubbish
  • Beware of imposter syndrome
  • Try to keep 2-3 opportunities in the pipeline
  • Don’t assume people will remember you
    • It’s not personal, everyone gets busy
    • It’s okay to (politely naturally) follow up periodically
  • Keep search a little narrow for the first week or two
    • Then expand widely
  • Balance taking “something/everything” against waiting for your dream job

Dream Job

  • Power 9 CPU
    • 14nm process
    • 4GHz, 24 cores
    • 25km of wires
    • 8 billion transistors
    • 3900 official chip pins
    • ~19,000 connections from die to the pin

Conclusions

  • Part of a vibrant FOSS/OH community both here and abroad
  • We have accomplished much
  • The most exciting (in both senses) things lie before us
  • We need all of you to be a part of it at every level of the stack
  • Look forward to working with you…


,

Planet DebianSteinar H. Gunderson: Movit 1.6.0 released

I just released version 1.6.0 of Movit, my GPU-based video filter library.

The full changelog is below, but what's more interesting is maybe what isn't in it, namely the compute shader version of the high-quality resampling filter I blogged about earlier. It turned out that my benchmark setup was wrong in a sort-of subtle way, and unfortunately biased towards the compute shader. Fixing that negated the speed difference—it was actually usually a few percent slower than the fragment shader version, despite a fair amount of earlier tweaks. (It did use less CPU when setting up new parameters, which was nice for things like continuous zooms, but probably not enough to justify the GPU slowdown.)

Which means that after a month or so of testing and performance tuning, I had to scrap it—it's sad to notice so late (I only realized that something was wrong as I started writing up the final documentation, and figured I couldn't actually justify why I would let one of them chain with other effects and the other one not), but it's a sunk cost, and keeping it in based on known-bad benchmarks would have helped nobody. I've left it in a git branch in case the world should change.

I still believe there are useful gains from compute shaders—in particular, the deinterlacer shines—but it's increasingly clear to me that fragment shaders should remain the default go-to tool for graphics on the GPU. (I guess the next natural target would be the convolution/FFT operators, but they're not all that much used.)

The full changelog reads:

Movit 1.6.0, January 24th, 2018

  - Support for effects that work as compute shaders. Compute shaders are
    generally slower than fragment shaders for the same algorithm,
    but allow some forms of communication between shader invocations
    and have more flexible output, which can enable more efficient algorithms.
    See effect.h for more details. Note that the fastest rendering API on
    EffectChain is now to a texture if possible, not to an FBO. This will
    only matter if the last effect is a compute shader.

  - Movit now includes a compute shader implementation of DeinterlaceEffect,
    which is automatically used instead of the fragment shader implementation
    if your GPU and OpenGL driver supports it (in practice, this means on
    all platforms except on macOS). The compute shader version is typically
    20–80% faster than the fragment shader version, depending on your GPU
    and other factors.

    A compute shader implementation of ResampleEffect was written but
    ultimately failed to be faster, and so is not included.

  - Support for microbenchmarks of effects through the Google microbenchmarking
    framework (optional). Currently, DeinterlaceEffect and ResampleEffect has
    benchmarks; enable them by running the unit test with --benchmark (also try
    --benchmark --help).

  - Effects can now explicitly request _not_ to have mipmaps, which means they
    can do so without needing to request bounce and fiddling with the sampler
    state. Note that this is an API change for effects.

  - Movit now requires C++11, both to build and to #include the header files.
    Support for SDL1 has been dropped; unit tests and the demo program now need
    SDL2.

  - Various smaller bugfixes and optimizations.

Debian packages are on their way up through the NEW queue (there's a soname bump).

Planet DebianShirish Agarwal: The Pune Metro 1st anniversary celebrations

Pune Metro Facebook and Twitter friends,

This would be long. First and foremost, couple of days ago, I got the following direct message on my twitter handle –

Hi Shirish,

We are glad to inform you that we are celebrating the 1st anniversary of Pune Metro Rail Project & the incorporation of both Nagpur Metro and Pune Metro into Maharashtra Metro Rail Corporation Limited(MahaMetro) on 23rd January at 13:00 hrs followed by the lunch.

On this occasion we would like to invite you to accept a small token of appreciation for your immense support & continued valuable interaction on our social media channels for the project at the hands of Dr. Brijesh Dixit, Managing Director, MahaMetro.

Venue: Hotel Citrus, Opposite PCMC Building, Pimpri-Chinchwad.
Time: 13:00 Hrs
Lunch: 14:00 hrs

Kindly confirm your attendance. Looking forward to meet you.

Regards & Thanks, Pune Metro Team

I went and had an interaction with Mr. Dixit and was gifted a gift card which can be redeemed.

I shared it on facebook. Some people have asked me privately as to what I did.

First of all, let me be very clear. I did not enter into any competition or put up any queries with getting any sort of monetary benefit at all. I have been a user of public transport both out of necessity and choice and do feel the need for a fast, secure, reasonable mode of transport. I am also immensely passionate and curious about public transport as a whole.

Just to share couple of facts and I’m sure most of you will agree with me, it takes more than twice the time if you are taking public transport. at least that’s true in India. Part of it is due to the drivers not keeping the GPS on, nor people/users asking/stressing for using GPS and using that location-based info. to derive when the next bus is gonna come. So, for instance for my journey to PCMC roughly took 20 kms. and about 45 minutes, but waiting took almost twice the time and this was not the rush-hour time where it could easily have taken double the time. Hence people opting for private vehicles even though they know it’s harmful for the environment as well as for themselves and their loved ones.

There was/has been a plan of PMC (Pune Municipal Corporation) for over a decade to use GPS to make aware the traveling public tentatively when the next bus would arrive but due to number of reasons (corruption, lack of training, awareness, discipline) all of which has been hampering citizen’s productivity due to which people are forced to get private vehicles and it becomes a zero-sum game. There is much more but don’t want to go in there.

Now people have asked me what sort of suggestions I gave or am giving –

From yesterday's interaction, and from seeing MahaMetro's interaction with the press, it seems the press or media has a very poor understanding of the dynamics and is not really interested in enriching citizens' understanding of either the Pune Metro or the idea of the Integrated Transport Initiative, which has been in the making for some time now. Part of the issue also seems to lie with Pune Metro not sharing knowledge as much as it can, given the opportunities that digital media/space provides at very low cost.

Suggestions and Queries –

1. One of the first things that Pune Metro could make is an animation of how a DPR (Detailed Project Report) is made. I don’t think any of the people from the press, especially English language press has seen the DPR or otherwise many of the questions would have been answered.

http://www.punemetrorail.org/download/PuneMetro-DPR.pdf

The simplest analogy I can give is: let's say you want to build a hospital, but the land on which you have to build it belongs to 2-3 parties, so how will you build it? Also, you don't have money. The DPR is different only in the scale of things, and construction of the routes is done not by a single contractor but by multiple contractors. A route, say A – B, is divided into 5 parts, and companies/contractors are asked to submit tenders for the packet they are interested in.

The way I see it, the DPR has to figure out the right of way, where construction of the spans has to be, where the stations have to be built, where electricity and water will come from, where the maintenance depot will be (usually the depot is at the end), the casting yard for the spans/pillars, etc.

There is a pre-qualification round so that only eligible bidders with a history of doing similar-scale work take part, and then bidding determines who can do it at the lowest cost against a set reserve price. If no bidder comes forward, for instance because the reserve price is too high from a contractor's point of view, then the reserve price is lowered. The idea is simply to have price discovery by a method that can be seen as just and fair.

The press seemed to be more interested in making a tiff between the Pune Metro/MahaMetro chief and Guardian Minister Shri Girish Bapat over something which to my mind is a non-issue at this juncture.

Mr. Dixit was absolutely correct in saying that he can’t comment on when the extension to Nigdi will happen unless the DPR for extension to Nigdi is made, land is found and the various cost heads, expenses and funding is approved in the State and Central Government and funding from multiple places is done.

The other question which was being raised by the press was the razing of the BRTS in Pune. The press knew it is neither Mr. Dixit's place nor his responsibility to comment upon whatever happens to BRTS; he can't comment, as that would come under the Pune Urban Transport Ministry.

As far as I understand Mr. Dixit's obligations, they are to build Pune Metro safely and quickly, using good materials, with good signage, and to give an efficient public transit service that we Puneties can be proud of.

2. The site http://www.punemetrorail.org/ really needs an update and upgrade. You should use something like wordpress or something where you are able to change themes every now and then. Every 3-6 months the themes should be tweaked so the content remains or at least looks fresh.

3. There are no time-stamps of any of the videos. At the very least should have time-stamps so some sort of contextual information is available.

4. There is no way to know if there is news. News should be highlighted and more information be shared. For instance, there should have been more info. e.g. about this particular item –

MoU signed between Dr. Brijesh Dixit, MD MahaMetro, Mrs. Prerna Deshbhratar, Addl Municipal Commissioner(Spl), PMC and Mr Kong Wy Mun, CEO, Singapore Cooperation Enterprise for the Urban Management(Multi-Modal Transport) during a program at Yashada, Pune.

from http://www.punemetrorail.org/projectupdate.aspx

It would have been interesting to know what it means and how would the Singapore Government help us in achieving a unified multi-modal transport model.

There were/are tons of questions that the press could have asked but didn’t as above and more below.

5. The DPR was made in November 2015 and now it is 2018. The prices probably need to be adjusted to take into consideration changes over the 3 years since, and they will probably change again till 2021.

6. Similarly, there are time-gaps between plans and execution of the plan and for Puneties we don’t even know what the plan is.

I would urge Pune Metro to have a dynamic plan which shows, with blinking lights, the areas in which work is progressing, so people know which areas are active and which are not. They could be a source of inspiration and a trail-blazer on this.

7. Similarly, another idea which could be done or might even be done is to have a single photograph taken everyday at say 1200 hrs. in afternoon at all the areas at 640×480 resolution which can be uploaded to the site and in turn could be published onto a separate web-page which in-turn over days and weeks could be turn into a time-lapse video similar to what was achieved for the first viaduct video shot over a day or two –

If you want to make it more interesting and challenging, you could invite students from Symbiosis to build it on something like a Raspberry Pi 2/3 or some other SBC (Single Board Computer), with a camera lens, a solar cell and a modem, with instructions to stagger the image uploads to the Pune Metro rail portal in case some web traffic is already there. A specific port (not port 80) could be used.

Later on, making a time-lapse video would be as simple as stitching all those photographs together and adding some nice music as filler. Something like this has already been done once for the viaduct video, as seen above.
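
A rough sketch of what the daily capture-and-upload could look like on a Raspberry Pi (the upload URL is a placeholder, authentication is omitted, and the script is assumed to be run once a day at 12:00 by cron):

    import datetime
    import subprocess
    import requests

    # Capture a 640x480 photo with the Pi camera (raspistill ships with Raspbian).
    filename = datetime.date.today().isoformat() + ".jpg"
    subprocess.run(["raspistill", "-w", "640", "-h", "480", "-o", filename], check=True)

    # Upload to a (hypothetical) endpoint on the Pune Metro portal.
    with open(filename, "rb") as f:
        requests.post("https://example.org/metro/daily-photo",
                      files={"photo": f}, timeout=60)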

8. Tracking planned versus real-time progress – While Mr. Dixit has time and again assured us that things are progressing well, it would be far easier to trust if there were a web service which tells whether things are going according to schedule or are a bit off. It overlaps a bit with my earlier suggestion, but there are many development projects around the world which show tentative and actual progress.

9. Apart from traffic diversion news in major newspapers, it would be nice to also have a section about traffic diversion with blinkers or something about road diversions which are in effect.

10. Another would be to have an RSS feed of all news found by various search-engine crawlers, with duplicate links removed, sharing the news and views for people to click through and read for themselves.

11. Statistics of jobs (both direct and indirect created) due to Pune Metro works displayed prominently.

12. Have a glossary of terms which can easily be garnered by having a 10th-12th average student going through say the DPR and see which terms he has problems with.

The simplest example is the word ‘Reach’ which has been used in a different context in Pune Metro than what is usually understood.

13. Are there, and if there are, how many Indian SMEs have been entrusted, either via joint venture or some other way, with knowledge transfer for making and maintaining the rakes, cars/bogies, track, etc.?

14. Has any performance and load guarantee been asked of the various packet holders? If yes, what are the guarantees and for what duration?

These are all low-hanging fruit. Also, I'm no web developer, although I am a bit of a content producer (as can be seen) and a voracious consumer of the web. I do have a few friends, though, should there be a need, who understand the medium in a far better and more intimate way than the crude manner I shared above.

A student who believes democracy needs work, and that it takes effort to make democracy work. If citizens themselves do not ask these questions, who will?

Krebs on SecurityChronicle: A Meteor Aimed At Planet Threat Intel?

Alphabet Inc., the parent company of Google, said today it is in the process of rolling out a new service designed to help companies more quickly make sense of and act on the mountains of threat data produced each day by cybersecurity tools.

Countless organizations rely on a hodgepodge of security software, hardware and services to find and detect cybersecurity intrusions before an incursion by malicious software or hackers has the chance to metastasize into a full-blown data breach.

The problem is that the sheer volume of data produced by these tools is staggering and increasing each day, meaning already-stretched IT staff often miss key signs of an intrusion until it’s too late.

Enter “Chronicle,” a nascent platform that graduated from the tech giant’s “X” division, which is a separate entity tasked with tackling hard-to-solve problems with an eye toward leveraging the company’s core strengths: Massive data analytics and storage capabilities, machine learning and custom search capabilities.

“We want to 10x the speed and impact of security teams’ work by making it much easier, faster and more cost-effective for them to capture and analyze security signals that have previously been too difficult and expensive to find,” wrote Stephen Gillett, CEO of the new venture.

Few details have been released yet about how exactly Chronicle will work, although the company did say it would draw in part on data from VirusTotal, a free service acquired by Google in 2012 that allows users to scan suspicious files against dozens of commercial antivirus tools simultaneously.

Gillett said his division is already trialing the service with several Fortune 500 firms to test the preview release of Chronicle, but the company declined to name any of those participating.

ANALYSIS

It’s not terribly clear from Gillett’s post or another blog post from Alphabet’s X division by Astro Teller how exactly Chronicle will differentiate itself in such a crowded market for cybersecurity offerings. But it’s worth considering the impact that VirusTotal has had over the years.

Currently, VirusTotal handles approximately one million submissions each day. The results of each submission get shared back with the entire community of antivirus vendors who lend their tools to the service — which allows each vendor to benefit by adding malware signatures for new variants that their tools missed but that a preponderance of other tools flagged as malicious.

Naturally, cybercriminals have responded by creating their own criminal versions of VirusTotal: So-called “no distribute” scanners. These services cater to malware authors, and use the same stable of antivirus tools, except they prevent these tools from phoning home to the antivirus companies about new, unknown variants.

On balance, it’s difficult to know whether the benefit that antivirus companies — and by extension their customers — gain by partnering with VirusTotal outweighs the mayhem enabled by these no-distribute scanners. But it seems clear that VirusTotal has helped antivirus companies and their customers do a better job focusing on threats that really matter, as opposed to chasing after (or cleaning up after) so-called “false positives,” — benign files that erroneously get flagged as malicious.

And this is precisely the signal-to-noise challenge created by the proliferation of security tools used in a typical organization today: How to spend more of your scarce cybersecurity workforce, budget and time identifying and stopping the threats that matter and less time sifting through noisy but otherwise time-wasting alerts triggered by non-threats.

I’m not a big listener of podcasts, but I do find myself increasingly making time to listen to Risky Business, a podcast produced by Australian cybersecurity journalist Patrick Gray. Responding to today’s announcement on Chronicle, Gray said he likewise had few details about it but was looking forward to learning more.

“Google has so much data and so many amazing internal resources that my gut reaction is to think this new company could be a meteor aimed at planet Threat Intel™️,” Gray quipped on Twitter, referring to the burgeoning industry of companies competing to help companies trying to identify new threats and attack trends. “Imagine if other companies spin out their tools…Netflix, Amazon, Facebook etc. That could be a fundamentally reshaped industry.”

Well said. I also look forward to hearing more about how Chronicle works and, more importantly, if it works.

Full disclosure: Since September 2016, KrebsOnSecurity has received protection against massive online attacks from Project Shield, a free anti-distributed denial-of-service (DDoS) offering provided by Jigsaw — another subsidiary of Google’s parent company. Project Shield provides DDoS protection for news, human rights, and elections monitoring Web sites.

Rondam RamblingsA Multilogue on Free Will

[Inspired by this comment thread.] The Tortoise is standing next to a railroad track when Achilles, an ancient Greek warrior, happens by.  In the distance, a train whistle sounds. Tortoise: Greetings, friend Achilles.  You have impeccable timing.  I could use your assistance. Achilles: Hello, Mr. T.  Always happy to help.  What seems to be the trouble? Tortoise: Look there. Achilles: Why, it

Planet DebianRenata D'Avila: Ideas for the project architecture and short term goals

There has been many discussions about planning for the FOSS calendar. On this post, I report about some of the ideas.

How I first thought the Foss Events calendar

Back in december, when I was just making sense of my surroundings and trying to find a way to start the internship, I drawed this diagram to picture in my head how everything would work:

A diagram showing the schema that will be described below. Each item is connected to the next using arrows, except for the relationship between user interface and API, where data flows both ways.

  1. There would be a "crawler.py" module, which would access each site on a determined list (It could be Facebook, Meetup or any other site such as another calendar) that have events information. This module would pull the event data from those sites.

  2. A validator.py would check if the data was good and if there was data. Once this module verified this, it would dump all info into a dirty_events database.

  3. The dirty_events database would be accessed by the module parser.py, which would clean and organize the data to be properly stored in the events database.

  4. An API.py module would query the events database and return the proper data, formatted into JSON, ical and/or some other formats.

  5. There would be a user interface to get data from API.py and to display this data. It should also be possible to add (properly formatted) events to the database using this interface. [If we were talking about a plugin to merely display the events in MoinMoin or Wordpress or some other system, this plugin would fall into this category.] A rough sketch of how these modules could fit together follows this list.
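
A very small sketch of that pipeline (the function bodies are stubs; the names follow the modules above and the data fields are only illustrative):

    def crawl(sources):
        """crawler.py: pull raw event data from each site on the list (stubbed here)."""
        return [{"title": "  Example event ", "date": "2018-02-03", "source": url}
                for url in sources]

    def validate(raw_events):
        """validator.py: keep only records that actually carry data."""
        return [e for e in raw_events if e.get("title") and e.get("date")]

    def parse(dirty_events):
        """parser.py: clean and normalise the data before it goes into the events database."""
        return [{"title": e["title"].strip(), "date": e["date"], "source": e["source"]}
                for e in dirty_events]

    if __name__ == "__main__":
        events = parse(validate(crawl(["https://example.org/events"])))
        print(events)   # API.py would serve these as JSON, ical and/or other formats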

The ideas that Daniel put on paper

Then, when I shared with my mentors, Daniel came up with this:

Another diagram. On the left, the plugins boxes, they connect to an aggregator, with input towards the storage. The storage then outputs to reports and data dump, represented on the right.

Daniel proposed that modules or plugins could be developed or improved (there are some of them already, but they might not support iCalendar URLs) for MoinMoin, Drupal and Wordpress that would allow the event data each of these systems holds to be aggregated. Information from the Meetup and the Facebook APIs could be converted to ical to be aggregated. This aggregation process could happen through a cron job - and I believe daily is enough, because people don't usually publish an event to happen the very next day (they need time for people to acknowledge it). If the time frame ends up not being ideal, this can be reviewed and changed later.

Once all this data is gathered, it would then be stored, inserting it or updating it in what could be a PostgreSQL or NoSQL solution.

Using the database with events information, it should be possible to do a data dump with all the information or to give "reports" of the event data, whether the user wants to access the data in iCalendar format (for Thunderbird or GNOME Evolution) or just HTML for viewing in the browser.

Short term goals

Creating a FOSS events calendar is a big project that will most certainly continue beyond my Outreachy internship.

Therefore, along with my mentors, we have established that my short term goal will be to contribute a bit to it by working on the MoinMoin EventCalendar so the events can be exported to the iCalendar format.

I have been studying and playing around with the EventCalendar code and, so far, I've concluded that the best way to do this might be to add a new function to it. Just as the plugin already has functions to change how the calendar is displayed, it could have a function that serializes the event data to the iCalendar format and allows the resulting file to be downloaded.
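
As a rough illustration of what such a function could produce, here is a sketch that serializes a list of events into a minimal iCalendar file. The event fields (title, start, end) and the UID scheme are assumptions; the real EventCalendar data structures will differ.

from datetime import datetime

def to_icalendar(events):
    # Build a minimal VCALENDAR by hand from hypothetical event dicts.
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0",
             "PRODID:-//MoinMoin EventCalendar export//EN"]
    for i, event in enumerate(events):
        lines += [
            "BEGIN:VEVENT",
            "UID:%d@eventcalendar.example" % i,
            "DTSTAMP:%s" % datetime.utcnow().strftime("%Y%m%dT%H%M%SZ"),
            "SUMMARY:%s" % event["title"],
            "DTSTART:%s" % event["start"].strftime("%Y%m%dT%H%M%SZ"),
            "DTEND:%s" % event["end"].strftime("%Y%m%dT%H%M%SZ"),
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines) + "\r\n"

# Example with made-up data:
print(to_icalendar([{"title": "MiniDebConf",
                     "start": datetime(2018, 4, 10, 9, 0),
                     "end": datetime(2018, 4, 10, 18, 0)}]))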

Krebs on SecurityExpert: IoT Botnets the Work of a ‘Vast Minority’

In December 2017, the U.S. Department of Justice announced indictments and guilty pleas by three men in the United States responsible for creating and using Mirai, a malware strain that enslaves poorly-secured “Internet of Things” or IoT devices like security cameras and digital video recorders for use in large-scale cyberattacks.

The FBI and the DOJ had help in their investigation from many security experts, but this post focuses on one expert whose research into the Dark Web and its various malefactors was especially useful in that case. Allison Nixon is director of security research at Flashpoint, a cyber intelligence firm based in New York City. Nixon spoke with KrebsOnSecurity at length about her perspectives on IoT security and the vital role of law enforcement in this fight.

Brian Krebs (BK): Where are we today with respect to IoT security? Are we better off than we were a year ago, or is the problem only worse?

Allison Nixon (AN): In some aspects we’re better off. The arrests that happened over the last year in the DDoS space, I would call that a good start, but we’re not out of the woods yet and we’re nowhere near the end of anything.

BK: Why not?

AN: Ultimately, what’s going on with these IoT botnets is crime. People are talking about these cybersecurity problems — problems with the devices, etc. — but at the end of the day it’s crime and private citizens don’t have the power to make these bad actors stop.

BK: Certainly security professionals like yourself and others can be diligent about tracking the worst actors and the crime machines they’re using, and in reporting those systems when it’s advantageous to do so?

AN: That’s a fair argument. I can send abuse complaints to servers being used maliciously. And people can write articles that name individuals. However, it’s still a limited kind of impact. I’ve seen people get named in public and instead of stopping, what they do is improve their opsec [operational security measures] and keep doing the same thing but just sneakier. In the private sector, we can frustrate things, but we can’t actually stop them in the permanent, sanctioned way that law enforcement can. We don’t really have that kind of control.

BK: How are we not better off?

AN: I would say that as time progresses, the community that practices DDoS and malicious hacking and these pointless destructive attacks gets more technically proficient when executing attacks, and they just become a more difficult adversary.

BK: A more difficult adversary?

AN: Well, if you look at the individuals that were the subject of the announcement this month, and you look in their past, you can see they’ve been active in the hacking community a long time. Litespeed [the nickname used by Josiah White, one of the men who pleaded guilty to authoring Mirai] has been credited with lots of code.  He’s had years to develop and as far as I could tell he didn’t stop doing criminal activity until he got picked up by law enforcement.

BK: It seems to me that the Mirai authors probably would not have been caught had they never released the source code for their malware. They said they were doing so because multiple law enforcement agencies and security researchers were hot on their trail and they didn’t want to be the only ones holding the source code when the cops showed up at their door. But if that was really their goal in releasing it, doing so seems to have had the exact opposite effect. What’s your take on that?

AN: You are absolutely, 100 million percent correct. If they just shut everything down and left, they’d be fine now. The fact that they dumped the source was a tipping point of sorts. The damages they caused at that time were massive, but when they dumped the source code the amount of damage their actions contributed to ballooned [due to the proliferation of copycat Mirai botnets]. The charges against them specified their actions in infecting the machines they controlled, but when it comes to what interested researchers in the private sector, the moment they dumped the source code — that’s the most harmful act they did out of the entire thing.

BK: Do you believe their claimed reason for releasing the code?

AN: I believe it. They claimed they released it because they wanted to hamper investigative efforts to find them. The problem is that not only is it incorrect, it also doesn’t take into account the researchers on the other end of the spectrum who have to pick from many targets to spend their time looking at. Releasing the source code changed that dramatically. It was like catnip to researchers, and was just a new thing for researchers to look at and play with and wonder who wrote it.

If they really wanted to stay off law enforcement’s radar, they would be as low profile as they could and not be interesting. But they did everything wrong: They dumped the source code and attacked a security researcher using tools that are interesting to security researchers. That’s like attacking a dog with a steak. I’m going to wave this big juicy steak at a dog and that will teach him. They made every single mistake in the book.

BK: What do you think it is about these guys that leads them to this kind of behavior? Is it just a kind of inertia that inexorably leads them down a slippery slope if they don’t have some kind of intervention?

AN: These people go down a life path that does not lead them to a legitimate livelihood. They keep doing this and get better at it and they start to do these things that really can threaten the Internet as a whole. In the case of these DDoS botnets, it’s worrying that these individuals are allowed to go this deep before law enforcement catches them.

BK: There was a narrative that got a lot of play recently, and it was spun by a self-described Internet vigilante who calls himself “the Janitor.” He claimed to have been finding zero-day exploits in IoT devices so that he could shut down insecure IoT things that can’t really be secured before or maybe even after they have been compromised by IoT threats like Mirai. The Janitor says he released a bunch of his code because he’s tired of being the unrecognized superhero that he is, and many in the media seem to have eaten this up and taken his manifesto as gospel. What’s your take on the Janitor, and his so-called “bricker bot” project?

AN: I have to think about how to choose my words, because I don’t want to give anyone bad ideas. But one thing to keep in mind is that his method of bricking IoT devices doesn’t work, and it potentially makes the problem worse.

BK: What do you mean exactly?

AN: The reason is sometimes IoT malware like Mirai will try to close the door behind it, by crashing the telnet process that was used to infect the device [after the malware is successfully installed]. This can block other telnet-based malware from getting on the machine. And there’s a lot of this type of King of the Hill stuff going on in the IoT ecosystem right now.

But what [this bricker bot] malware does is a lot of times it reboots a machine, and when the device is in that state the vulnerable telnet service goes back up. It used to be that a lot of devices were infected with the very first Mirai, and when the [control center] for that botnet went down they were orphaned. We had a bunch of Mirai infections phoning home to nowhere. So there’s a real risk of taking a machine that was in this weird state and making it vulnerable again.

BK: Hrm. That’s a very different story from the one told by the Bricker bot author. According to him, he spent several years of his life saving the world from certain doom at the hands of IoT devices. He even took credit for foiling the Mirai attacks on Deutsche Telekom. Could this just be a case of researcher exaggerating his accomplishments? Do you think his Bricker bot code ever really spread that far?

AN: I don’t have any evidence that there was mass exploitation by Bricker bot. I know his code was published. But when I talk to anyone running an IoT honeypot [a collection of virtual or vulnerable IoT devices designed to attract and record novel attacks against the devices] they have never seen it. The consensus is that regardless of peoples’ opinion on it we haven’t seen it in our honeypots. And considering the diversity of IoT honeypots out there today, if it was out there in real life we would have seen it by now.

BK: A lot of people believe that we’re focusing on the wrong solutions to IoT security — that having consumers lock down IoT devices security-wise, or expecting law enforcement agencies to fix this problem for us, are pollyannish ideas that in any case don’t address the root cause: which is that there are a lot of companies producing crap IoT products that have virtually no security. What’s your take?

AN: The way I approach this problem is I see law enforcement as the ultimate end goal for all of these efforts. When I look at the IoT DDoS activity and the actual human beings doing this, the vast majority of Mirai attacks, attack infrastructure, malware variants and new exploits are coming from a vast minority of people doing this. That said, the way I perceive the underground ecosystem is probably different than the way most people perceive it.

BK: What’s the popular perception, do you think?

AN: It’s that, “Oh hey, one guy got arrested, great, but another guy will just take his place.” People compare it to a drug dealer on the street corner, but I don’t think that’s accurate in this case. The difference is when you’re looking at advanced criminal hacking campaigns, there’s not usually a replacement person waiting in the wings. These are incredibly deep skills developed over years. The people doing innovations in DDoS attacks and those who are driving the field forward are actually very few. So when you can ID them and attach behavior to the perpetrator, you realize there’s only a dozen people I need to care about and the world suddenly becomes a lot smaller.

BK: So do you think the efforts to force manufacturers to harden their products are a waste of time?

AN: I want to make it clear that all these different ways to tackle the problem…I don’t want to say one is more important than the other. I just happened to be working on one component of it. There’s definitely a lot of disagreement on this. I totally recognize this as a legitimate approach. A lot of people think the way forward is to focus on making sure the devices are secure. And there are efforts ongoing to help device manufacturers create more secure devices that are more resistant to these efforts.

And a lot is changing, although slowly. Do you remember way back when you bought a Wi-Fi router and it was open by default? Because the end user was obligated to change the default password, we had open Wi-Fi networks everywhere. As years passed, many manufacturers started making them more secure. For example, many of these devices now have customers refer to a sticker on the machine that has a unique Wi-Fi password. That type of shift may be an example of what we can see in the future of IoT security.

BK: In the wake of the huge attacks from Mirai in 2016 and 2017, several lawmakers have proposed solutions. What do you think of the idea that it doesn’t matter what laws we pass in the United States that might require more security by IoT makers, that those makers are just going to keep on ignoring best practices when it comes to security?

AN: It’s easy to get cynical about this and a lot of people definitely feel like these companies don’t sell directly to the U.S. and therefore don’t care about such efforts. Maybe in the short term that might be true, but in the long term I think it ends up biting them if they continue to not care.

Ultimately, these things just catch up with you if you have a reputation for making a poor product. What if you had a reputation for making a device that if you put it on the Internet it would reboot every five minutes because it’s getting attacked? Even if we did enact security requirements for IoT that manufacturers not in the U.S. wouldn’t have to follow, it would still be in their best interests to care, because they are going to care sooner or later.

BK: I was on a Justice Department conference call with other journalists on the day they announced the Mirai author arrests and guilty pleas, and someone asked why this case was prosecuted out of Alaska. The answer that came back was that a great many of the machines infected with Mirai were in Alaska. But it seems more likely that it was because there was an FBI agent there who decided this was an important case but who actually had a very difficult time finding enough infected systems to reach the threshold needed to prosecute the case. What’s your read on that?

AN: I think that this case is probably going to set precedent in terms of the procedures and processes used to go after cybercrime. I’m sure you finished reading the Wired article about the Alaska investigation into Mirai: it goes into detail about some of the difficult things that the Alaska FBI field office had to do to satisfy the legal requirements to take the case. Just to prove they had jurisdiction, they had to find a certain number of infected machines in Alaska.

Those were not easy to find, and in fact the FBI traveled far and wide in order to find these machines in Alaska. There are all kinds of barriers big and small that slow down the legal process for prosecuting cases like this, some of which are legitimate and some that I think are going to end up being streamlined after a case like this. And every time a successful case like this goes through [to a guilty plea], it makes it more possible for future cases to succeed.

This one group [that was the subject of the Mirai investigation] was the worst of the worst in this problem area. And right now it’s a huge victory for law enforcement to take down one group that is the worst of the worst in one problem area. Hopefully, it will lead to the takedown of many groups causing damage and harming people.

But the concept that in order for cybercriminals to get law enforcement attention they need to make international headlines and cause massive damage needs to change. Most cybercriminals probably think that what they’re doing nobody is going to notice, and in a sense they’re correct because there is so much obvious criminal activity blatantly connected to specific individuals. And that needs to change.

BK: Is there anything we didn’t talk about related to IoT security, the law enforcement investigations into Mirai, or anything else you’d like to add?

AN: I want to extend my gratitude to the people in the security industry and network operator community who recognized the gravity of this threat early on. There are a lot of people who were not named [in the stories and law enforcement press releases about the Mirai arrests], and I want to say thank you for all the help. This couldn’t have happened without you.

Worse Than FailureSponsor Post: Make Your Apps Better with Raygun

I once inherited an application which had a bug in it. Okay, I’ve inherited a lot of applications like that. In this case, though, I didn’t know that there was a bug, until months later, when I sat next to a user and was shocked to discover that they had evolved a complex work-around to bypass the bug which took about twice as long, but actually worked.

“Why didn’t you open a ticket? This shouldn’t be like this.”

“Enh… it’s fine. And I hate dealing with tickets.”

In their defense, our ticketing system at that office was a godawful nightmare, and nobody liked dealing with it.

The fact is, awful ticket tracking aside, 99% of users don’t report problems in software. Adding logging can only help so much: eventually you have a giant haystack filled with needles you don’t even know are there. You have no way to see what your users are experiencing out in the wild.

But what if you could? What if you could build, test and deploy software with a real-time feedback loop on any problems the users were experiencing?

Our new sponsor, Raygun, gives you a window into the real user-experience for your software. With a few minutes of setup, all the errors, crashes, and performance issues will be identified for you, all in one tool.

You're probably using software and services today that rely on Raygun to identify when users have a poor experience: Domino's Pizza, Coca-Cola, Microsoft and Unity all use it, along with many others.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integration, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] Otter allows you to easily create and configure 1,000's of servers, all while maintaining ease-of-use, and granular visibility down to a single server. Find out more and download today!

Worse Than FailureAll Saints' Day

Cathedral Antwerp July 2015-1

Oh, PHP. It's the butt of any number of jokes in the programming community. Those who do PHP often lie and pretend they don't, just to avoid the social stigma. Today's submitter not only works in PHP, but they also freelance: the bottom of the bottom of the development hierarchy.

Last year, Ilya was working on a Joomla upgrade as well as adjusting several components on a big, obscure website. As he was poking around in the custom code, he found today's submission. You see, the website is in Italian. At the top of the page, it shows not only the date, but also the saint of the day. This is a Catholic thing: every day of the year has a patron saint, and in certain cultures, you might even be named for the saint whose day you were born on. A full list can be found on this Italian Wikipedia page.

Every day, the website was supposed to display text like "18 luglio: santi Sinforosa e sette compagni" (July 18: Sinforosa and the Seven Companions). But the code that generated this string had broken. It wasn't Ilya's task to fix it, but he chose to do so anyway, because why not?

His first suspect for where this text came from was this mess of Javascript embedded in the head:

     function getDataGiorno(){
     data = new Date();
     ora =data.getHours();
     minuti=data.getMinutes();
     secondi=data.getSeconds();
     giorno = data.getDay();
     mese = data.getMonth();
     date= data.getDate();
     year= data.getYear();
     if(minuti< 10)minuti="0"+minuti;
     if(secondi< 10)secondi="0"+secondi;
     if(year<1900)year=year+1900;
     if(ora<10)ora="0"+ora;
     if(giorno == 0) giorno = " Domenica ";
     if(giorno == 1) giorno = " Lunedì ";
     if(giorno == 2) giorno = " Martedì ";
     if(giorno == 3) giorno = " Mercoledì ";
     if(giorno == 4) giorno = " Giovedì ";
     if(giorno == 5) giorno = " Venerdì ";
     if(giorno == 6) giorno = " Sabato ";
     if(mese == 0) mese = "gennaio ";
     if(mese ==1) mese = "febbraio ";
     if(mese ==2) mese = "marzo ";
     if(mese ==3) mese = "aprile ";
     if(mese ==4) mese = "maggio ";
     if(mese ==5) mese = "giugno ";
     if(mese ==6) mese = "luglio ";
     if(mese ==7) mese = "agosto ";
     if(mese ==8) mese = "settembre ";
     if(mese ==9) mese = "ottobre ";
     if(mese ==10) mese = "novembre ";
     if(mese ==11) mese = "dicembre";
     var dt=date+" "+mese+" "+year;
     var gm =date+"_"+mese;

     return gm.replace(/^\s+|\s+$/g,""); ;
     }

     function getXMLHttp() {
     var xmlhttp = null;
     if (window.ActiveXObject) {
       if (navigator.userAgent.toLowerCase().indexOf("msie 5") != -1) {
         xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
       } else {
           xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
       }
     }
     if (!xmlhttp && typeof(XMLHttpRequest) != 'undefined') {
       xmlhttp = new XMLHttpRequest()
     }
     return xmlhttp
     }

     function elaboraRisposta() {
      var dt=getDataGiorno();
      var data = dt.replace('_',' ');
      if (dt.indexOf('1_')==0){
          dt.replace('1_','%C2%BA');
      }
       // alert("*"+dt+"*");
     var temp = new Array();
     temp = objHTTP.responseText.split(dt);
     //alert(temp[1]);

     var temp1=new Array();
     temp1=temp[1].split(":");
     temp=temp1[1].split("");


      if (objHTTP.readyState == 4) {
      santi=temp[0].split(",");
      //var app = new Array();
      //app=santi[0].split(";");
      santo=santi[0];
      //alert(santo);

        // document.write(data+" - "+santo.replace(/\/wiki\//g,"http://it.wikipedia.org/wiki/"));
        document.write(data+" - "+santo);
      }else {

      }

     }
     function loadDati() {
      objHTTP = getXMLHttp();
      objHTTP.open("GET", "calendario.html" , false);

      objHTTP.onreadystatechange = function() {elaboraRisposta()}


     objHTTP.send(null)
     }

If you've never seen Joomla before, do note that most templates use jQuery. There's no need to use ActiveXObject here.

"calendario.html" contained very familiar text: a saved copy of the Wikipedia page linked above. This ought to be splitting the text with the date, avoiding parsing HTML with regex by using String.prototype.split(), and then parsing the HTML to get the saint for that date to inject into the HTML.

But what if a new saint gets canonized, and the calendar changes? By caching a local copy, you ensure that the calendar will get out of date unless meticulously maintained. Therefore, the code to call this Javascript was commented out entirely in the including page:

<div class="santoForm">
<!--  <?php echo "<script>loadDati()</script>" ;  ?> -->
<?php ...

Instead, it had been replaced with this:

       setlocale(LC_TIME, 'ita', 'it_IT.utf8');
       $gg = ltrim(strftime("%d"), '0');

       if ($gg=='1'){
        $dt=ltrim(strftime("1º %B"), '0');
       }else{
         $dt=ltrim(strftime("%d %B"), '0');
       }
       //$dt='4 febbraio';
     $html = file_get_contents('http://it.wikipedia.org/wiki/Calendario_dei_santi');
     $doc = new DOMDocument();
     $doc->loadHTML($html);
     $elements = $doc->getElementsByTagName('li');
     foreach ($elements as $element) {
        if (strpos($element->nodeValue,utf8_encode($dt))!== false){
         list($santo,$after)= split(';',$element->nodeValue);
         break ;
        }else {}
     }
       list($santo,$after)= split(',',utf8_decode($santo));
       if (strlen ( $santo) > 55) {
         $santo=substr($santo, 0, 55)."...";
       }

This migrates the logic to the backend—the One True Place for all such logic—and uses standard library routines, just as it should. Of course, this being PHP, it breaks horribly if you look at it cross-eyed, or—like Ilya—don't have the Italian locale installed on your development machine. And of course, it'll also break if the live Wikipedia page about the saints gets reformatted. But what's the likelihood of that? Plus, it's not cached in the least, letting every visitor see updates in real time. After all, next time they canonize a saint, everyone will rush right to this site to verify that the day changed. That's how the Internet works, right?

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!

CryptogramDetecting Drone Surveillance with Traffic Analysis

This is clever:

Researchers at Ben Gurion University in Beer Sheva, Israel have built a proof-of-concept system for counter-surveillance against spy drones that demonstrates a clever, if not exactly simple, way to determine whether a certain person or object is under aerial surveillance. They first generate a recognizable pattern on whatever subject -- a window, say -- someone might want to guard from potential surveillance. Then they remotely intercept a drone's radio signals to look for that pattern in the streaming video the drone sends back to its operator. If they spot it, they can determine that the drone is looking at their subject.

In other words, they can see what the drone sees, pulling out their recognizable pattern from the radio signal, even without breaking the drone's encrypted video.

The details have to do with the way drone video is compressed:

The researchers' technique takes advantage of an efficiency feature streaming video has used for years, known as "delta frames." Instead of encoding video as a series of raw images, it's compressed into a series of changes from the previous image in the video. That means when a streaming video shows a still object, it transmits fewer bytes of data than when it shows one that moves or changes color.

That compression feature can reveal key information about the content of the video to someone who's intercepting the streaming data, security researchers have shown in recent research, even when the data is encrypted.
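
To make the idea concrete, here is a toy sketch of the correlation step, not the researchers' actual tooling. It assumes you already have per-interval byte counts captured from the drone's encrypted downlink, plus the on/off schedule of the flicker you injected onto the subject.

def correlates(byte_counts, flicker_pattern):
    # Pearson correlation between stream size and the injected pattern.
    n = min(len(byte_counts), len(flicker_pattern))
    xs = byte_counts[:n]
    ys = [1.0 if on else 0.0 for on in flicker_pattern[:n]]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    if sd_x == 0 or sd_y == 0:
        return 0.0
    return cov / (sd_x * sd_y)

# If the drone is watching the flickering subject, frames captured while the
# pattern is "on" change more, so delta frames are bigger and correlation is high.
observed = [8200, 8100, 15400, 15900, 8300, 16100]  # bytes per interval (made up)
pattern = [False, False, True, True, False, True]   # injected flicker schedule
print("possibly being watched" if correlates(observed, pattern) > 0.7 else "no correlation")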

Research paper and video.

Planet DebianDaniel Pocock: apt-get install more contributors

Every year I participate in a number of initiatives introducing people to free software and helping them make a first contribution. After all, making the first contribution to free software is a very significant milestone on the way to becoming a leader in the world of software engineering. Anything we can do to improve this experience and make it accessible to more people would appear to be vital to the continuation of our communities and the solutions we produce.

During the time I've been involved in mentoring, I've observed that there are many technical steps in helping people make their first contribution that could be automated. While it may seem like creating SSH and PGP keys is not that hard to explain, wouldn't it be nice if we could whisk new contributors through this process in much the same way that we help people become users with the Debian Installer and Synaptic?

Paving the path to a first contribution

Imagine the following series of steps:

  1. Install Debian
  2. apt install new-contributor-wizard
  3. Run the new-contributor-wizard (sets up domain name, SSH, PGP, calls apt to install necessary tools, procmail or similar filters, join IRC channels, creates static blog with Jekyll, ...)
  4. write a patch, git push
  5. write a blog about the patch, git push

Steps 2 and 3 can eliminate a lot of "where do I start?" head-scratching for new contributors and it can eliminate a lot of repetitive communication for mentors. In programs like GSoC and Outreachy, where there is a huge burst of enthusiasm during the application process (February/March), will a tool like this help a higher percentage of the applicants make a first contribution to free software? For example, if 50% of applicants made a contribution last March, could this tool raise that to 70% in March 2019? Is it likely more will become repeat contributors if their first contribution is achieved more quickly after using a tool like this? Is this an important pattern for the success of our communities? Could this also be a useful stepping stone in the progression from being a user to making a first upload to mentors.debian.net?
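
As a thought experiment, step 3 could start with something as simple as the sketch below: a hypothetical snippet of such a wizard that creates SSH and PGP keys if they are missing. None of this exists as a package today; the paths and the gpg invocation are assumptions and would need adjusting for the GnuPG version in use.

import os
import subprocess

def ensure_ssh_key(path=os.path.expanduser("~/.ssh/id_ed25519"), comment="new-contributor"):
    # Create an SSH key pair if one does not already exist.
    if os.path.exists(path):
        return path
    os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
    subprocess.run(["ssh-keygen", "-t", "ed25519", "-f", path, "-N", "", "-C", comment],
                   check=True)
    return path

def ensure_pgp_key(name, email):
    # Create a PGP key for the contributor (gpg >= 2.1 syntax, hedged).
    uid = "%s <%s>" % (name, email)
    subprocess.run(["gpg", "--batch", "--passphrase", "",
                    "--quick-generate-key", uid, "default", "default", "never"],
                   check=True)

if __name__ == "__main__":
    ensure_ssh_key()
    ensure_pgp_key("New Contributor", "new@example.org")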

Could this wizard be generic enough to help multiple communities, helping people share a plugin for Mozilla, contribute their first theme for Drupal or a package for Fedora?

Not just for developers

Notice I've deliberately used the word contributor and not developer. It takes many different people with different skills to build a successful community and this wizard will also be useful for people who are not writing code.

What would you include in this wizard?

Please feel free to add ideas to the wiki page.

All projects really need a couple of mentors to support them through the summer and if you are able to be a co-mentor for this or any of the other projects (or even proposing your own topic) now is a great time to join the debian-outreach list and contact us. You don't need to be a Debian Developer either and several of these projects are widely useful outside Debian.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 3 – Booting

Securing the Linux boot process Matthew Garrett

  • Without boot security there is no other security
  • MBR Attacks – previously common, still work sometimes
  • Bootloader attacks – Seen in the wild
  • Malicious initrd attacks
    • RAM disk, does stuff like decrypt hard drive
    • Attack captures disk passphrase when typed in
  • How do we fix these?
    • UEFI Secure boot
    • Microsoft required in machines shipped after mid-2012
    • sign objects, firmware trusts some certs, boots things correctly signed
    • Problem solved! Nope
    • initrds are not signed
  • initrds
    • contain local changes
    • do a lot of security stuff
  • TPMs
    • devices on system motherboards
    • slow but inexpensive
    • Not under control of the CPU
    • Set of registers “platform configuration registers”, list of hashes of objects booted in boot process. Measurements
    • PCR can enforce things, stop boots if stuff doesn’t match
    • But stuff changes all the time, eg update firmware . Can brick machine
  • Microsoft to the rescue
    • Tie Secure boot into measured boot
    • Measure signing keys rather than the actual files themselves
    • But initrds are not signed
  • Systemd to the rescue
    • systemd boot stub (not the systemd boot loader)
    • Embed initrd and the kernel into a single image with a single signature
    • But initrds contain local information
    • End users should not be signing stuff
  • Kernel can be handed multiple initramfs images (via cpio)
    • each unpacked in turn
    • Each will over-write the previous one
    • configuration can be over-written by the signed image, perhaps safely, so that if config is changed, stuff fails
    • unpack config first, code second
  • Kernel command line is also security sensitive
    • eg turn off iommu and dump RAM to extract keys
    • Have a secure command line turning on all security features, and append what the user sends after that
  • Proof of device state
    • Can show you a number after boot based on the TPM. Can compare to a 2FA device to make sure it is securely booted. Safe to type in passwords
  • Secure Provision of secrets
    • Know a remote machine is booted safely and has not been subverted before sending it secret stuff.


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 2

Dealing with Contributor Overload Holden Karau

  • Developer Advocate at Google
  • Apache Spark, Contributor to BEAM

Some people from big projects, some from projects hoping to get big

  • Remember it’s okay to not fix it all
  • The fun of a small project
    • Simple communication
    • Aligned incentives
    • Easy to tell who knows what
    • Tight community
  • The fun of a large project
    • More people to do the work
    • More impact and people thanking you
    • Lots of ideas and experiences
    • If there's money, then fun conferences
    • Get paid to work on it.
  • Is my project on Fire? or just lots of people on it.
    • Measurable
      • User questions spike
      • issue spike
    • Less measurable
      • Non-explicit stuff not being passed on
  • Classic Pipeline
    • Users -> contributors -> committers -> PMC
    • Each stage takes times
    • Very leaky pipeline, perhaps it leaks too much
  • With hyper-growth project can quickly go south
    • Committer:user ratio can’t get too far out.
  • Even without hyper-growth: sadness
    • Same thing happens, but slower
  • Overload – Mitigation
    • You don’t have to answer everyone, this can be hard
    • Stackoverflow
    • Are your answers easily searchable
    • Knowledge base – “do you mean”
    • Take time and look for patterns in questions
    • Find people who like writing and get to to write a book
      • Don’t go for core committers, they will have no time for anything else
  • Issue overload
    • Try and get rid of duplicate tickets
    • Autoclose tickets – mixed results
  • How to deal with a spike
    • Raise the bar
    • Make it easier
    • Get Perl to solve the problem
  • Raising the bar
    • Reject trivial changes – reduces the onramp
    • Add weird system – more posts on how to contribute
  • What can Perl solve
    • Style guide
    • bot bot bots
    • make it faster to merge
    • Improve PR + reviewer notice
    • Can increase productivity
  • Add more committers
    • Takes time and effort
    • People can be shy
    • Make a guide for new folks to follow
    • Have a safe space for people to ask questions
  • Reduce overhead for contributing well
    • Have doc on how to contribute next to the code, not elsewhere that people have to search for.

The Open Sourcing of Infrastructure Elizabeth K. Joseph

The recent history of infrastructure

  • 1998
    • To make a server, use Solaris or NT. Buy off the shelf
    • Linux seen as Cheap Unix
    • Lots of FUD

Got a Junior Sysadmin Job

  • 2004
    • Had to tell people the basics: “What is free software?”, “Using Open Source Web Applications to Produce Business Results”
    • Turning point LAMP stack
    • Flood of changes in how customers interacted with software in the years since
      • Reluctance to be locked-in by a vendor
      • Concerns of security
      • Ability to fix bugs ourselves
      • Innovation stifled when software developed in isolation

Last 10 years

  • Changes in how people interacted with software
    • Downtime un-acceptable
    • Reliance of scaling and automation
    • Servers as Pets -> cattle
    • Large focus on data

Open Source is now Ubiquitous

  • Even Microsoft is using it a lot and interacting with the community

Operations tools were not as Open Sourced

  • Configuration Management
    • puppet modules, chef playbooks
  • Open application definitions – juju charms, DC/OS Universe Catalog
  • Full disk images
    • Dockerhub

The Cloud

  • Cloud is the new proprietary
  • EC2-only infrastructure
  • Questions you should ask beforehand
    • Is your service adhering to open standards or am I locked in?
    • Recourse if the company goes out of business
    • Does vendor have a history of communicating about downtime and security problems?
    • Does the vendor respond to bugs and feature requests?
    • Will the vendor use data in a way I’m not comfortable with?
    • Initial costs may be low, but do you have a plan to handle long term, growing costs
  • Alternatives
    • Openstack, Kubernetes, Docker Swarm, DC/OS with Apache Mesos

Hybrid Cloud

  • Tooling can be platform agnostic
  • Hard but can be done


Planet DebianFrançois Marier: LXC setup on Debian stretch

Here's how to set up LXC-based "chroots" on Debian stretch. While I wrote about this on Debian jessie, I had to make some networking changes for stretch, so here are the full steps that should work on stretch.

Start by installing (as root) the necessary packages:

apt install lxc libvirt-clients debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx

That configuration requires that the veth kernel module be loaded. If you have any kinds of module-loading restrictions enabled, you probably need to add the following to /etc/modules and reboot:

veth

Next, I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward=1
    
  2. and then applying it using:

    sysctl -p
    
  3. Restart the network bridge:

    systemctl restart lxc-net.service
    
  4. and ensure that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -s 10.0.3.0/24 -j ACCEPT
    -A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT
    
  5. and applying the rules using:

    iptables-apply
    

Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR=http://httpredir.debian.org/debian lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64 -F

Since the root password is randomly generated, you'll need to reset it before you can login as root:

sudo lxc-attach -n sid64 passwd

Then login as root and install a text editor inside the container because the root image doesn't have one by default:

apt install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy

Mounting your home directory inside a container

In order to have my home directory available within the container, I created a user account for myself inside the container and then added the following to the container config file (/var/lib/lxc/sid64/config):

lxc.mount.entry=/home/francois home/francois none bind 0 0

before restarting the container:

lxc-stop -n sid64
lxc-start -n sid64

Fixing locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.

If you see these errors while reconfiguring the locales package:

Generating locales (this might take a while)...
  en_US.UTF-8...cannot change mode of new locale archive: No such file or directory
 done
  fr_CA.UTF-8...cannot change mode of new locale archive: No such file or directory
 done
Generation complete.

and see the following dmesg output on the host:

[235350.947808] audit: type=1400 audit(1441664940.224:225): apparmor="DENIED" operation="chmod" info="Failed name lookup - deleted entry" error=-2 profile="/usr/bin/lxc-start" name="/usr/lib/locale/locale-archive.WVNevc" pid=21651 comm="localedef" requested_mask="w" denied_mask="w" fsuid=0 ouid=0

then AppArmor is interfering with the locale-gen binary and the work-around I found is to temporarily shutdown AppArmor on the host:

lxc-stop -n sid64
systemctl stop apparmor
lxc-start -n sid64

and then start up it later once the locales have been updated:

lxc-stop -n sid64
systemctl start apparmor
lxc-start -n sid64

AppArmor support

If you are running AppArmor, your container probably won't start until you add the following to the container config (/var/lib/lxc/sid64/config):

lxc.aa_allow_incomplete = 1

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 1 – k8s @ home and bad buses

How to run Kubernetes on your spare hardware at home, and save the world Angus Lees

  • Mainframe ->
  • PC ->
  • Rackmount PC
    • But the rackmount PC, even with built-in redundancy, will still fail. Or the location will go offline, or your data spreads across multiple machines
  • Since you need to have distributed/redundancy anyway. New model (2005). Grid computing. Clever software, dumb hardware. Loosely coupled servers
    • Libraries -> RPC / Microservices
    • Threadpool -> hadoop
    • SQL -> key/store
    • NFS -> Object store
    • In-place upgrades -> “Immutable” image-based build from scratch
  • Computers in clouds
    • No cases. No redundant Power, journaling on filesystems turned off, etc
  • Everything is in clouds – Secondary effects
    • Corporate driven
    • Apache license over GPL
    • Centralised services rather than federated protocols
    • Profit-driven rather than scratching itches
  • Summary
    • Problem
      • Distributed Systems hard to configure
      • Solutions scale down poorly
      • Most homes don’t have racks of servers
    • Implication
      • Home Free Software “stuck” at single-machine architecture
  • Kubernetes (lots of stuff, but I use it already so just doing unique bits)
    • “Unix Process as a service”
    • Inverts the stack. Data is most important, then the app. Kernel and hardware unimportant.
    • Easy upgrades, everything is an upgrade
    • Declarative API , command line interface
  • “We’ve conducted this experiment for decades now, and I have news for you, Hardware fails”

Hardware at Home

  • Raid used to be “enterprise” now normal for home
  • Elastic compute for home too
  • Kubernetes for Home
    • Budget $100
      • ARM master nodes
      • Mixed architecture
    • Assume single layer-2 home ethernet
    • Worker nodes – old $500 laptops
      • x86-64
      • CoreOS
      • Broken screens, dead batteries
    • 3 * $30 Banana pis
      • Raspberry Pi2
      • armv7a
      • containOS
    • Persistentvolumes
      • NFS mount from RAID server
    • Service – keepalived-vip
    • Ingress
      • keepalived and nginx-ingress , letsEncrypt
      • Wildcard DNS
    • Status
      • Works!
      • Printing works
      • Install: PXE boot and run coreos-install
    • Status – ungood
      • Banana PIs a bit too slow.
    • github.com/anguslees/k8s-home

Is the 370 the worst bus route in Sydney? Katie Bell

  • The 370 bus
    • Goes UNSW and Sydney University. Goes around the city
  • If bus runs every 15 minutes, you should not be able to see 3 at once
  • Newspaper articles and Facebook group about how bad it is.
  • Two Questions
    • Bus privitisation better or worse
    • Is the 370 really the worst
  • Data provided
    • Lots of stuff but nothing on reliability
    • But they do have realtime data eg for the Tripetime app (done via a 3rd party)
    • They have a API and Key with standard format via GTFS
  • But they only publish “realtime” data, not the old data
    • So collected the realtime data, once a minute for 4 months
    • 557 GB
  • Format
    • zipfile of csv files
    • IDs sometimes ephemeral
    • Had to match timetable data and realtime data
    • Data had to be tidied up – lots
  • Processing realtime data
    • Download 1 minute
    • Parse
    • Match each of around ~7000 trips in timetable (across all of NSW)
    • Write ~20000 realtime updates to the DB
    • Running 5 EC2 instances at peak
    • Writing up to 40MB/s to the DB
  • Is the 370 the worst?
    • Define “worst”
    • Found NSW definition of what an on-time bus is.
    • No more than 5:59 late or 1:59 early. Measured at start/middle/end
    • Victoria's definition is stricter
    • She defined (see the sketch after this list):
      • Early: more than 2min early
      • On time: 2m early – 5 min late
      • Late: more than 5m late
      • Very late: more than 20m late
    • Across all trips
      • 3.7 million trips
      • On time 31%
      • More than 20m late 2.86%
    • Best routes
      • Nightime buses
      • Outside of Sydney
      • Shorter routes
      • 86% – 97% or better
    • Worst
      • Less than 5% on time
      • Longer routes
      • 370 is the 22nd worst
        • 8.79% on time
    • Worst routes ( percent > 20 min late)
      • 23% of 370 trips (6th worst)
      • Lots of Wollongong
    • Worst agencies
      • No obvious difference between agencies and private companies
    • Conclusion
      • Privatisation could go either way
      • 370 is close to the worst (277 could be worse) in Sydney
    • bus-shaming.com
    • github.com/katharosada/bus-shaming
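
A sketch of those on-time buckets as code (the thresholds follow the definitions above; the function name, the use of seconds and the exclusive "very late" bucket are my own choices):

def classify_delay(delay_seconds):
    # Negative values mean the bus arrived early.
    if delay_seconds < -120:      # more than 2 minutes early
        return "early"
    if delay_seconds <= 300:      # between 2 minutes early and 5 minutes late
        return "on time"
    if delay_seconds <= 1200:     # more than 5 but at most 20 minutes late
        return "late"
    return "very late"            # more than 20 minutes late

print(classify_delay(-30))   # on time
print(classify_delay(700))   # late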

Questions

  • Used Spot instances to keep cost down
  • $200 month on AWS
  • Buses better/worse according to time? Not checked yet
  • Wanted to calculate the “wait time” , not done yet.
  • Another feed of bus locations and some other data out there too.
  • Lots of other questions


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Keynote – Karen Sandler

Executive director of Software Freedom Conservancy

Previously spoke that LCA 2012 about closed-source software on her heart implant. Since then has pivoted career to more open-source advocacy in career.

  • DMCA exemption for medical device research
  • When you ask your doctor about safety of devices you sound like a conspiracy theorist
  • Various problems have been highlighted, some progress
  • Some companies addressing them

Initially published paper highlighting problem without saying she had the device

  • Got pushback from groups who thought she was scaremongering
  • Companies thinking about liability issues
  • After she told her story in 2012 things improved

Had to get new device recently.

  • Needed this disabled since her job pisses off hackers sometimes
  • All manufacturers said they could not disable wireless access
  • Finally found a single model, made by a European manufacturer, that could be disabled

 

Note: This is a quick summary; lots more was covered but hard to capture. Video should be good. Her slides were broken through much of the talk but she still delivered a great talk.


,

Google AdsenseLet machine learning create your In-feed ads


Last year we launched AdSense Native ads, a new family of ads created to match the look and feel of your site. If your site has an editorial feed (a list of articles or news) or a listings feed (a list of products or services), then Native In-feed ads are a great option to give your users a better experience.

Now we've brought the power of machine learning to In-feed ads, saving you time. If you're not quite sure what fonts, colors, and styles will work best for your site, you can let Google's machine learning help you decide. 

How it works: 

  1. Create a new Native In-Feed ad and select "Let Google suggest a style." 
  2. Enter the URL of a page with a feed you’d like to monetize. AdSense will scan your page to find the best placement. 
  3. Select which feed element you’d like your In-feed ad to match.
  4. Your ad is automatically created – simply place the piece of code into your feed, and you’re done! 

By the way, this method is optional, so if you prefer, you can create your ads manually. 

Posted by: 

Faris Zerdoudi, AdSense Tech Lead 
Violetta Kalathaki, AdSense Product Manager 


Planet DebianJonathan McDowell: Going to FOSDEM 2018

Laura comments that she has no idea who is going to FOSDEM. I’m slightly embarrassed to admit I’ve only been once before, way back in 2005. A mixture of good excuses and disorganisation about arranging to go has meant I haven’t been back since. So a few months ago I made the decision to attend and sorted out the appropriate travel and hotel bookings and I’m pleased to say I’m attending FOSDEM 2018. I get in late Friday evening and fly out on Sunday evening, so I’ll miss the Friday beering but otherwise be around for the whole event. Hope to catch up with a bunch of people there!

Sociological ImagesPod Panic & Social Problems

My gut reaction was that nobody is actually eating the freaking Tide Pods.

Despite the explosion of jokes—yes, mostly just jokes—about eating detergent packets, sociologists have long known about media-fueled cultural panics about problems that aren’t actually problems. Joel Best’s groundbreaking research on these cases is a classic example. Check out these short video interviews with Best on kids not really being poisoned by Halloween candy and the panic over “sex bracelets.”


In a tainted Halloween candy study, Best and Horiuchi followed up on media accounts to confirm cases of actual poisoning or serious injury, and they found many cases couldn’t be confirmed or were greatly exaggerated. So, I followed the data on detergent digestion.

It turns out, there is a small trend. According to a report from the American Association of Poison Control Centers,

…in 2016 and 2017, poison control centers handled thirty-nine and fifty-three cases of intentional exposures, respectively, among thirteen to nineteen year olds. In the first fifteen days of 2018 alone, centers have already handled thirty-nine such intentional cases among the same age demographic.

That said, this trend is only relative to previous years and cannot predict future incidents. The life cycle of internet jokes is fairly short, rotating quickly with an eye toward the attention economy. It wouldn’t be too surprising if people moved on from the pods long before the panic dies out.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianJulien Danjou: Scaling a polling Python application with parallelism

A few weeks ago, Alon contacted me and asked me the following:

It so happened that I'm currently working on scaling some Python app. Specifically, now I'm trying to figure out the best way to scale SSH connections - when one server has to connect to thousands (or even tens of thousands) of remote machines in a short period of time (say, several minutes).

How would you write an application that does that in a scalable way?

Alon is using such an application to gather information on the hosts it connects to, though that's not important in this case.

In a series of blog posts, I'd like to help Alon solve this problem! We're gonna write an application that can manage millions of hosts.

Well, if you have enough hardware, obviously.

The job

Writing a Python application that connects to a host by ssh can be done using, for example, Paramiko. That will not be the focus of this blog post since it is pretty straightforward to do.

To keep this exercise simple, we'll just use a ping function that looks like this:

import subprocess


def ping(hostname):
    p = subprocess.Popen(["ping", "-c", "3", "-w", "1", hostname],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
    return p.wait() == 0


The function ping returns True if the host is reachable and alive, or False if an error occurs (bad hostname, network unreachable, ping timeout, etc.). We're also not trying to make ping fast by specifying a lower timeout or a smaller number of packets. The goal is to scale this task while knowing it takes time to execute.

So ping is going to be the job to be executed by our application. It'll replace ssh in this example, but you'll see it'll be easy to replace it with any other job you might have.

We're going to use this job to accomplish a bigger mission: determine which hosts in my home network are up:

for i in range(255):
    ip = "192.168.2.%d" % i
    if ping(ip):
        print("%s is alive" % ip)


Running this program alone and pinging all 255 IP addresses takes more than 10 minutes.

It is pretty slow because each time we ping a host, we wait for the ping to succeed or time out before starting the next one. So if you need 3 seconds on average to ping each host, then to ping 255 nodes you'll need 3 seconds × 255 = 765 seconds, and that's more than 12 minutes.

The solution

If 255 hosts need 12 minutes to be pinged, you can imagine how long it's going to be when we're going to test which hosts are alive on the IPv4 Internet – 4 294 967 296 addresses to ping!

Since those ping (or ssh) jobs are not CPU intensive, we can consider that one multi-processor host is going to be powerful enough – at least for a beginning.

The real issue here currently is that those tasks are I/O intensive and executing them serially is very long.

So let's run them in parallel!

To do this, we're going to use threads. Threads are not efficient in Python when your tasks are CPU intensive, but in case of blocking I/O, they are good enough.

Using concurrent.futures

With concurrent.futures, it's easy to manage a pool of threads and schedule the execution of tasks. Here's how we're going to do it:

import functools
from concurrent import futures
import subprocess


def ping(hostname):
    p = subprocess.Popen(["ping", "-q", "-c", "3", "-W", "1",
                          hostname],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
    return p.wait() == 0


with futures.ThreadPoolExecutor(max_workers=4) as executor:
    futs = [
        (host, executor.submit(functools.partial(ping, host)))
        for host in ("192.168.2.%d" % i for i in range(255))
    ]

    for ip, f in futs:
        if f.result():
            print("%s is alive" % ip)


The ThreadPoolExecutor is an engine, called executor, that allows us to submit tasks to it. Each task submitted is put into an internal queue using the executor.submit method. This method takes a function to execute as argument.

Then, the executor pulls jobs out of its queue and executes them. In order to execute them, it starts a thread that is going to be responsible for the execution. The maximum number of threads to start is controlled by the max_workers parameter.

executor.submit returns a Future object that holds the future result of the submitted task. Future objects expose methods to know whether the task is finished or not; here we just use Future.result() to get the result. This method will block until the result is ready.
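
If you do not care about getting the results back in submission order, the same module also provides futures.as_completed(), which yields each future as soon as it finishes. A small variation on the example above:

from concurrent import futures

# Process results as they finish instead of in submit order.
# Assumes the ping() function defined earlier in this post.
with futures.ThreadPoolExecutor(max_workers=4) as executor:
    futs = {
        executor.submit(ping, host): host
        for host in ("192.168.2.%d" % i for i in range(255))
    }
    for f in futures.as_completed(futs):
        if f.result():
            print("%s is alive" % futs[f])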

There's no magic recipe to find how many max workers you should use. It really depends on the nature of the tasks that are submitted. In this case, using a value of 4 brings down the execution time to 3 minutes – roughly 12 minutes divided by 4, which makes sense. Setting the max_workers to 255 (i.e. the number of tasks submitted) will make all the pings started at the same time, producing a CPU usage spike, but bringing down the total execution time to less than 5 seconds!

Obviously, you wouldn't be able to start 4 billion threads in parallel, but if your system is big and fast enough, and your task using more I/O than CPU, you can use a pretty high value in this case. The memory should also be taken into account – in this case, it's very low since the ping task is not using a lot of memory.

Just a first step

As already said, this ping job does not use a lot of CPU time or I/O bandwidth, neither would the original ssh case by Alon. However, if that would be the case, this method would be limited pretty quickly. Threads are not always the best option to maximize your throughput, especially with Python.

These are just the first steps of the distribution and scalability mechanism that you can implement using Python. There are a few other options available on top of this mechanism – I've covered those in my book Scaling Python, if you're interested in learning more!

Until then, stay tuned for the next article of this series!

Planet DebianJonathan Dowland: Imaging DVD-Rs: Overview and Step 1

Example of a degraded DVD-R

Moiré-like degradation on a commercial CD-ROM

From the late 1990s until relatively recently, I used to archive digital material onto CD-Rs, and later DVD-Rs. I've got about half a dozen spindles of hundreds of home-made discs with all sorts of stuff on them. A project to import them to a more future-proof storage location is long overdue. If you are in a similar position, consider a project like this as soon as possible: based on my experiences it might already be too late for some of them. The adjacent pictures were created with the ddrescueview software and show both a degraded home-made DVD-R and a degraded commercially pressed CD-ROM. I came across both as I embarked on this project.

The process can be divided into roughly five stages:

  1. Gather the discs & preparation
  2. Initial import of the discs
  3. Figuring out disc contents
  4. Retrying damaged/degraded discs
  5. Organising the extracted files

If you are importing a lot of discs, the stages can run like a pipeline, where you perform stages 3 onwards for some images whilst you are beginning stage 2 for others.

I'll be writing in more depth about each step separately. To start, here's the preparatory step.

Preparation (gather the discs)

Fetch all the discs you want to read together into one place.

I had some in jewel cases and others on the spindles that the blank discs come on. I had some at my house, some in boxes at my parents' house and more at work. I decided to consolidate most of the discs down onto the spindles and throw away most of the jewel cases. If you suspect a particular disc of having particularly valuable data on it, you may wish to leave it in a jewel case. You might also want to hang onto one or two empty jewel cases, if there's a chance you'll happen upon a disc you want to give to someone else.

a pile of CDs and DVDs

I dedicated an initially-empty spindle as the "done" spindle, opting to keep the imported discs, at least until the import process was complete. I also kept a second "needs attention" spindle for discs that couldn't be read successfully straight away. I labelled both using a label-maker.

Don't trust disc labels: If in doubt, put the disc in your "to image" pile. Don't throw a disc out on the basis of the label alone. I had a bad habit of topping up a disc that was mostly dedicated to one thing with unrelated data if there was space left over. Mistakes can also be made, and I had plenty of unlabelled discs anyway.

You're going to need a computer with sufficient storage space upon which to store the disc images, metadata and/or the data within them, once you start organising it. You're also going to need an optical drive to read them. If you haven't yet got a system in place for reliably storing your data and managing backups, it would be worth sorting that out first before embarking on a project like this.

I attached a USB drive to my NAS and did the importing and storing directly onto it.

Finally, this is going to take time. In the best case, discs read quickly and reliably, and the time is spent simply inserting them and ejecting them. In the worst case, you might have troublesome discs that you really want to attempt to read everything from, which can take a great deal of (unattended) time.


Next up is the initial import stage. I'll try to get my notes on that transformed into the next blog post soon.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main February 2018 Meeting: Linux.conf.au report

Feb 6 2018 18:30
Feb 6 2018 20:30
Location: 
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

PLEASE NOTE NEW LOCATION

Tuesday, February 6, 2018
6:30 PM to 8:30 PM
Mail Exchange Hotel
688 Bourke St, Melbourne VIC 3000

Speakers:

  • Russell Coker and others, LCA conference report

Russell Coker has done lots of Linux development over the years, mostly involved with Debian.

Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.


CryptogramNew Malware Hijacks Cryptocurrency Mining

This is a clever attack.

After gaining control of the coin-mining software, the malware replaces the wallet address the computer owner uses to collect newly minted currency with an address controlled by the attacker. From then on, the attacker receives all coins generated, and owners are none the wiser unless they take time to manually inspect their software configuration.

So far it hasn't been very profitable, but it -- or some later version -- eventually will be.

Worse Than FailureCoded Smorgasbord: Archive This

Michael W came into the office to a hair-on-fire freakout: the midnight jobs failed. The entire company ran on batch processes to integrate data across a dozen ERPs, mainframes, CRMs, PDQs, OMGWTFBBQs, etc.: each business unit ran its own choice of enterprise software, but then needed to share data. If they couldn’t share data, business ground to a halt.

Business had ground to a halt, and it was because the archiver job had failed to copy some files. Michael owned the archiver program, not by choice, but because he got the short end of that particular stick.

The original developer liked logging. Pretty much every method looked something like this:

public int execute(Map arg0, PrintWriter arg1) throws Exception {
    Logger=new Logger(Properties.getString("LOGGER_NAME"));
    Log=new Logger(arg1);
    .
    .
    .
catch (Exception e) {
    e.printStackTrace();
    Logger.error("Monitor: Incorrect arguments");
    Log.printError("Monitor: Incorrect arguments");
    arg1.write("In Correct Argument Passed to Method.Please Check the Arguments passed \n \r");
    System.out.println("Monitor: Incorrect arguments");
}

Sometimes, to make the logging even more thorough, the catch block might look more like this:

catch(Exception e){
    e.printStackTrace();
    Logger.error("An exception happened during SFTP movement/import. " + (String)e.getMessage());
}

Java added Generics in 2004. This code was written in 2014. Does it use generics? Of course not. Every Hashtable is stringly-typed:

Hashtable attributes;
.
.
.
if (((String) attributes.get(key)).compareTo("1") == 0 | ((String) attributes.get(key)).compareTo("0") == 0) { … }

And since everything is stringly-typed, you have to worry about case-sensitive comparisons, but don’t worry, the previous developer makes sure everything’s case-insensitive, even when comparing numbers:

if (flag.equalsIgnoreCase("1") ) { … }

And don’t forget to handle Booleans…

public boolean convertToBoolean(String data) {
    if (data.compareToIgnoreCase("1") == 0)
        return true;
    else
        return false;
}

And empty strings…

if(!TO.equalsIgnoreCase("") && TO !=null) { … }

Actually, since types are so confusing, let's make sure we're casting to known-safe types.

catch (Exception e) {
    Logger.error((Object)this, e.getStackTraceAsString(), null, null);
}

Yes, they really are casting this to Object.

Since everything is stringly typed, we need this code, which checks to see if a String parameter is really sure that it’s a string…

protected void moveFile(String strSourceFolder, Object strSourceObject,
                     String strDestFolder) {
    if (strSourceObject.getClass().getName().compareToIgnoreCase("java.lang.String") == 0) { … }
    …
}

Now, that all was enough to get Michael's blood pressure up, but none of that had anything to do with his actual problem. Why did the copy fail? The logs were useless, as they were spammed with messages with no particular organization. The code was bad, sure, so it wasn't surprising that it crashed. For a little while, Michael thought it might be the getFiles method, which was supposed to identify which files needed to be copied. It did a recursive directory search (with no depth checking, so one symlink could send it into an infinite loop), and it didn't actually filter the files it didn't care about. It just made an ArrayList of every file in the directory structure and then decided which ones to copy.

He spent some time really investigating the copy method, to see if that would help him understand what went wrong:

sourceFileLength = sourceFile.length();
newPath = sourceFile.getCanonicalPath();
newPath = newPath.replace(".lock", "");
newFile = new File(newPath);
sourceFile.renameTo(newFile);                    
destFileLength = newFile.length();
while(sourceFileLength!=destFileLength)
{
    //Copy In Progress
}
//Remy: I didn't elide any code from the inside of that while loop- that is exactly how it's written, as an empty loop.

Hey, out of curiosity, what does the JavaDoc have to say about renameTo?

Many aspects of the behavior of this method are inherently platform-dependent: The rename operation might not be able to move a file from one filesystem to another, it might not be atomic, and it might not succeed if a file with the destination abstract pathname already exists. The return value should always be checked to make sure that the rename operation was successful.

It only throws exceptions if you don’t supply a destination, or if you don’t have permissions to the files. Otherwise, it just returns false on a failure.

So… if the renameTo operation fails, the archiver program will drop into an infinite loop. Unlogged. Undetected. That might seem like the root cause of the failure, but it wasn’t.

As it turned out, the root cause was that someone in ops hit “Ok” on a security update, which triggered a reboot, disrupting all the scheduled jobs.

Michael still wanted to fix the archiver program, but there was another problem with that. He owned the InventoryArchiver.jar. There was also OrdersArchiver.jar, and HRArchiver.jar, and so on. They had all been “written” by the same developer. They all did basically the same job. So they were all mostly copy-and-paste jobs with different hard-coded strings to specify where they ran. But they weren’t exactly copy-and-paste jobs, so each one had to be analyzed, line by line, to see where the logic differences might possibly crop up.

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Planet DebianLouis-Philippe Véronneau: Long Live the Memory of Ursula K. Le Guin

Today, one of my favorite authors passed away.

I stumbled upon Le Guin's work about 10 years ago when my uncle gave me a copy of The Left Hand of Darkness, and since then I've managed to lose and buy this novel three or four times. I'm normally very careful with how I manage my book collection, and the only way I can explain how I lost this book so many times is that I must have been too eager to share it with my entourage.

My weathered copy of The Dispossessed

Very tasteful eulogies have sprung up all day long and express far better than I can how much of a genius Ursula K. Le Guin was.

So here is my humble homage to her. In 1987, the CBC radio show Vanishing Point adapted what is to me Le Guin's best novel, The Dispossessed, in a series of 30 minute episodes.

The result is far less interesting than the actual novel, but if you are the kind of person who enjoys audiobooks, it might just be what convinces you to pick up the book.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #143

Here's what happened in the Reproducible Builds effort between Sunday January 14 and Saturday January 20 2018:

Upcoming events

Packages reviewed and fixed, and bugs filed

During reproducibility testing, 83 FTBFS bugs have been detected and reported by Adrian Bunk.

Reviews of unreproducible packages

56 package reviews have been added, 44 have been updated and 19 have been removed in this week, adding to our knowledge about identified issues.

diffoscope development

Furthermore, Juliana Oliveira has been working on a separate branch on parallelizing diffoscope.

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianLaura Arjona Reina: It’s 2018, where’s my traditional New Year Plans post?

I closed my eyes, opened them again, a new year began, and we’re even almost finishing January. Time flies.

In this article I’ll post some updates about my life with computer, software and free software communities. It’s more a “what I’ve been doing” than a “new year plans” post… it seems that I’m learning to not to make so much plans (life comes to break them anyways!).

At home

My home server is still running Debian Jessie. I’m happy that it just works and my services are up, but I’m sad that I couldn’t find time for an upgrade to Debian stable (which is now Debian 9 Stretch) and maybe reinstall it with another config. I have lots of photos and videos to upload in my GNU MediaGoblin instances, but also couldn’t find time to do it (nor to print some of them, which was a plan for 2017, and the files still sleep in external harddrives or DVDs). So, this is a TODO item that crossed the year (yay! now I have almost 12 months ahead to try to complete it!). I’ll try to get this done before summer. I am considering installing my own pump.io instance but I’m not sure it’s good to place it in the same machine as the other services. We’ll see.

I bought a new laptop (well, second hand, but in a very good condition), a Lenovo X230, and this is now my main computer. It’s an i5 with 8 GB RAM. Wow, modern computer at home!
I’m very very happy with it, with its screen, keyboard, and everything. It’s running a clean install of Debian 9 stable with KDE Plasma Desktop and works great. It is not heavy at all so I carry it to work and use it in the public transport (when I can sit) for my contributions to free software.

My phone (Galaxy S III with Lineage OS 14, which is Android 7) fell down and the touchscreen broke (I can see the image but it is unresponsive to touch). When booted normally, the phone is recognized by the PC as storage, and thus I could recover most of the data on it, but it's not recognized by adb (as when USB debugging is disabled). It is recognized by adb when booted into Recovery (TWRP), though. I tried to enable USB debugging in several ways from adb while in Recovery, but couldn't. I could switch off the wifi, though, so when I boot the phone it doesn't receive new messages, etc. I bought an OTG cable but I have no wireless mouse at home and couldn't make it work with a normal USB mouse. I've given up for now until I find a wireless mouse or have more time, and I have temporarily returned to my old Galaxy Ace (with CyanogenMod 7, which is Android 2.3.7). I've looked at new phones but I don't like that all of them have integrated batteries, the screens are too big, all of them are very expensive (I know they are hi-tech machines, but I don't want to carry such valuable stuff in my pocket all the time) and other things. I still need to find time to go shopping with the list of phones where I can install Lineage OS (I already visited some stores but wasn't convinced by the price, or they had no suitable models).

My glasses broke (in a different incident than the phone) and I used old ones for two weeks, because in the middle of the new ones preparation I had some family issues to care about. So putting time in reading or writing in front of the computer has been a bit uncomfortable and I tried to avoid it in the last weeks. Now I have new glasses and I can see very well 🙂 so I’m returning to my computer TODO.

I’ve given up the battle against iThings at home (I lost). I don’t touch them but other members of the family use them. I’m considering contributing to Debian info about testing things or maintaining some wiki pages about accessing iThings from Debian etc, but will leave that for summer, maybe later. Now I just try not to get depressed about this.

At work

We still have servers running Debian Wheezy which is in LTS support until May. I’m confident that we’ll upgrade before Wheezy reaches end of life, but frankly looking at my work plan, I’m not sure when. Every month seems packed with other stuff. I’ve taken some weeks leave to attend my family and I have no clear mind about when and how do things. We’ll see.

I gave a course about free software (in Spanish) for University staff last October. It was 20 hours, and 20 attendants, mostly administrative staff, librarians, and some IT assistants. It went pretty well, we talked about the definition of free software, history, free culture, licenses, free software tools for the office, for Android, and free software as a service (“cloud” stuff). They liked it very much. Many of them didn’t know that our Uni uses free software for our webmail (RoundCube), Cloud services (OwnCloud), and other important areas. I requested promotional material from the FSFE and I gave away many stickers. I also gave away all the Debian stickers that I had, and some other free software stickers. I’m not sure when and how I will get new Debian stickers, not sure if somebody from Madrid is going to FOSDEM. I’m considering printing them myself but I don’t know a good printer (for stickers) here. I’ll ask and try with a small investment, and see how it works out.

Debian

I think I have too many things in my plate and would like to close some stuff and focus on other, or maybe do other things.

I feel comfortable doing publicity work, but I would be happier if the team were bigger and we had more contributors. I'm happy that we managed to publish a Debian Project News issue during DebConf17, a new one in September, and a new one in November, but since then I couldn't find time to put into it. I'll try to make a new issue happen before February ends, though. Meanwhile, the team has managed to handle the different announcements (point releases and others) and we try to keep the community informed via micronews (mostly) and the blog bits.debian.org.

I’m keeping an eye on DebConf18 organization and I hope I can engage with publicity work about it, but I feel that we will need a local team member that leads the what-to-publish/when-to-publish and probably translations too.

About Spanish translations, I’m very happy that the translations for the Debian website have new contributors and reviewers that are making a really good work. In the last months I’m a bit behind, just trying to review and keep my files up to date, but I hope I can setup a routine in the following weeks to get more involved again, and also try to translate new files too.

Since some time, the Debian website work is the one that keeps my motivation in Debian up. It’s like a paradox because the Debian website is too big, complicated, old in some sense, and we have so much stuff that needs to be done, and so many people complaining or giving ideas (without patches) that one would get overwhelmed, depressed and sometimes would like just to resign from this team. But after all these years, it is now when I feel comfortable with the codebase and experienced enough to try things, review bugs, and try to help with the things needed. So I’m happy to put time in the website team, updating or improving the website, even when I do mistakes, or triage bugs. Also, working in the website is very rewarding because there is always some small thing that I can do to fix something, and thus, “get something done” even when my time is limited. The bad news is that there are also some big tasks that require a lot of time and motivation, and I get them postponed and postponed… 😦 At least, I try to file bugs for all the stuff that I would like to put time on, and maybe slowly, but thanks to all the team members and other contributors, we are advancing: we have a more updated /partners section (still needs work), a new /derivatives section, and we are working on the migration from CVS to Git, the reorganization of the download pages, and other stuff.

Some times I’d like to do other/new things in Debian. Learn to package (and thus, package spigot and gnusrss, used in Publicity, or weewx, that we use it at work, and also help maintaining or adopting some small things), or join the Documentation Team, or put more work in the Outreach Team (relaunch the Welcome Team), or put more work in Internationalization Team. Or maybe other stuff. But before that, I feel that I would need to finish some pending tasks in my current teams, and also find more people for them, too.

Other free software communities

I am still active in the pump.io community, although I don't post very often from my social network account. I'll try to open Dianara more often, and use Puma on my new phone (maybe I should adopt/fork Puma…). I am present in the IRC channel (#pump.io on Freenode) and try to organize and attend the meetings. I have a big TODO, which is to advance our application to join Software Freedom Conservancy (another item that crossed over from the 2017 TODO to 2018), but I'll really try to get this done before January ends.

I keep on testing F-Droid and free software apps for Android (now again in Android 2.x, I get F-Droid crashes all the time “OutofMemory” :D). I keep on reading the IRC channels and mailing list (also the mailing list for Replicant. If I get the broken phone to work with the OTG I will install Replicant on it and will keep it for tests). I keep on translating Android apps when I have some time to kill.

I have no idea who is going to FOSDEM and if I should talk to them prior to their travel (e.g. ask to bring Debian stickers for me if somebody from Madrid goes, or promote if there is any F-Droid or Pump.io or GNU MediaGoblin IRC meeting or talk or whatever) but I really got busy in December-January with life and family stuff, so I just left FOSDEM apart in my mind and will try to join and see the streaming the weekend that the conference is happening, or maybe later.

I think that’s all, or at least this blogpost became very long and I don’t find anything else to write, for now, to make it longer. In any case, it’s hard for me these days to make plans more than one-two weeks ahead. Hopefully I’ll write in my blog more often during this year.

Comments?

You can comment on this post using this pump.io thread.

Planet DebianBenjamin Mako Hill: Introducing Computational Methods to Social Media Scientists

The ubiquity of large-scale data and improvements in computational hardware and algorithms have enabled researchers to apply computational approaches to the study of human behavior. One of the richest contexts for this kind of work is social media datasets like Facebook, Twitter, and Reddit.

We were invited by Jean Burgess, Alice Marwick, and Thomas Poell to write a chapter about computational methods for the Sage Handbook of Social Media. Rather than simply listing what sorts of computational research have been done with social media data, we decided to use the chapter both to introduce a few computational methods and to use those methods to analyze the field of social media research.

A “hairball” diagram from the chapter illustrating how research on social media clusters into distinct citation network neighborhoods.

Explanations and Examples

In the chapter, we start by describing the process of obtaining data from web APIs and use as a case study our process for obtaining bibliographic data about social media publications from Elsevier’s Scopus API.  We follow this same strategy in discussing social network analysis, topic modeling, and prediction. For each, we discuss some of the benefits and drawbacks of the approach and then provide an example analysis using the bibliographic data.

We think that our analyses provide some interesting insight into the emerging field of social media research. For example, we found that social network analysis and computer science drove much of the early research, while recently consumer analysis and health research have become more prominent.

More importantly though, we hope that the chapter provides an accessible introduction to computational social science and encourages more social scientists to incorporate computational methods in their work, either by gaining computational skills themselves or by partnering with more technical colleagues. While there are dangers and downsides (some of which we discuss in the chapter), we see the use of computational tools as one of the most important and exciting developments in the social sciences.

Steal this paper!

One of the great benefits of computational methods is their transparency and their reproducibility. The entire process—from data collection to data processing to data analysis—can often be made accessible to others. This has both scientific benefits and pedagogical benefits.

To aid in the training of new computational social scientists, and as an example of the benefits of transparency, we worked to make our chapter pedagogically reproducible. We have created a permanent website for the chapter at https://communitydata.cc/social-media-chapter/ and uploaded all the code, data, and material we used to produce the paper itself to an archive in the Harvard Dataverse.

Through our website, you can download all of the raw data that we used to create the paper, together with code and instructions for how to obtain, clean, process, and analyze the data. Our website walks through what we have found to be an efficient and useful workflow for doing computational research on large datasets. This workflow even includes the paper itself, which is written using LaTeX + knitr. These tools let changes to data or code propagate through the entire workflow and be reflected automatically in the paper itself.

If you  use our chapter for teaching about computational methods—or if you find bugs or errors in our work—please let us know! We want this chapter to be a useful resource, will happily consider any changes, and have even created a git repository to help with managing these changes!


The book chapter and this blog post were written with Jeremy Foote and Aaron Shaw. You can read the book chapter here. This blog post was originally published on the Community Data Science Collective blog.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 2 – Keynote – Matthew Todd

Collaborating with Everybody: Open Source Drug Discovery

  • Term used is a bit undefined. Open Source, Free Drugs?
  • First Open Source Project – Praziquantel
    • Molecule has 2 mirror image forms. One does the job, other tastes awful. Pills were previously a mix
    • Project to just have pill with the single form
      • Created discussion
      • Online Lab Notebook
      • 75% of contributions were from private sector (especially Syncom)
      • Ended up finding an approach that worked, different from what was originally proposed, based on feedback.
      • Similar method found by private company that was also doing the work
  • Conventional Drug discovery
    • Find drug that kills something bad – Hit
    • Test it and see if it is suitable – Lead
    • 13,500 molecules in the public domain that kill the malaria parasite
  • 6 Laws of Open Science
    • All data is open and all ideas are shared
    • Anyone can take part at any level of the project
  • Openness increasingly seen as a key
  • Open Source Malaria
    • 4 campaigns
    • Work on a molecule, park it when doesn’t seem promising
    • But all data is still public
  • What it actually is
    • Electronic lab book (80% of scientists still use paper)
    • Using Labtrove, changing to labarchives
    • Everything you do goes up every day
    • Todo list
      • Tried stuff, ended up using issue list on github
      • Not using most other github stuff
    • Data on a Google Sheet
    • Light Website, twitter feed
  • Lab vs Code
  • Have a promising molecule – works well in mice
    • Would probably be a patentable state
    • Not sure yet exactly how it works
  • Competition – Predictive model
    • Lots of solutions submitted, not good enough to use
    • Hopeful a model will be created
  • Tried a known-working molecule from elsewhere, but couldn’t get it to work
    • This is out in the open. Lots of discussion
  • School group able to recreate Daraprim, a high-priced US drug
  • Public Domain science is now accepted for publications
  • Need to make computers understand molecule diagrams and convert them to a representative format which can then be searched.
  • Missing
    • Automated links to databases in tickets
    • Basic web page stuff, auto-porting of data, newsletter, become non-profit, stickers
    • Stuff is not folded back into the Wiki
  • OS Mycetoma – New Project
    • Fungus with no treatment
    • Working on possible molecule to treat
  • Some ideas on how to get products created this way to market – eg “data exclusivity”

 

Share

,

Planet DebianBits from Debian: Mentors and co-mentors for Debian's Google Summer of Code 2018

GSoC logo

Debian is applying as a mentoring organization for the Google Summer of Code 2018, an internship program open to university students aged 18 and up.

Debian already has a wide range of projects listed but it is not too late to add more or to improve the existing proposals. Google will start reviewing the ideas page over the next two weeks and students will start looking at it in mid-February.

Please join us and help extending Debian! You can consider listing a potential project for interns or listing your name as a possible co-mentor for one of the existing projects on Debian's Google Summer of Code wiki page.

At this stage, mentors are not obliged to commit to accepting an intern but it is important for potential mentors to be listed to get the process started. You will have the opportunity to review student applications in March and April and give the administrators a definite decision if you wish to proceed in early April.

Mentors, co-mentors and other volunteers can follow an intern through the entire process or simply volunteer for one phase of the program, such as helping recruit students in a local university or helping test the work completed by a student at the end of the summer.

Participating in GSoC has many benefits for Debian and the wider free software community. If you have questions, please come and ask us on IRC #debian-outreach or the debian-outreachy mailing list.

Planet DebianLars Wirzenius: Ick: a continuous integration system

TL;DR: Ick is a continuous integration or CI system. See http://ick.liw.fi/ for more information.

More verbose version follows.

First public version released

The world may not need yet another continuous integration system (CI), but I do. I've been unsatisfied with the ones I've tried or looked at. More importantly, I am interested in a few things that are more powerful than what I've ever even heard of. So I've started writing my own.

My new personal hobby project is called ick. It is a CI system, which means it can run automated steps for building and testing software. The home page is at http://ick.liw.fi/, and the download page has links to the source code and .deb packages and an Ansible playbook for installing it.

I have now made the first publicly advertised release, dubbed ALPHA-1, version number 0.23. It is of alpha quality, and that means it doesn't have all the intended features and if any of the features it does have work, you should consider yourself lucky.

Invitation to contribute

Ick has so far been my personal project. I am hoping to make it more than that, and invite contributions. See the governance page for the constitution, the getting started page for tips on how to start contributing, and the contact page for how to get in touch.

Architecture

Ick has an architecture consisting of several components that communicate over HTTPS using RESTful APIs and JSON for structured data. See the architecture page for details.

Manifesto

Continuous integration (CI) is a powerful tool for software development. It should not be tedious, fragile, or annoying. It should be quick and simple to set up, and work quietly in the background unless there's a problem in the code being built and tested.

A CI system should be simple, easy, clear, clean, scalable, fast, comprehensible, transparent, reliable, and boost your productivity to get things done. It should not be a lot of effort to set up, require a lot of hardware just for the CI, need frequent attention for it to keep working, and developers should never have to wonder why something isn't working.

A CI system should be flexible to suit your build and test needs. It should support multiple types of workers, as far as CPU architecture and operating system version are concerned.

Also, like all software, CI should be fully and completely free software and your instance should be under your control.

(Ick is little of this yet, but it will try to become all of it. In the best possible taste.)

Dreams of the future

In the long run, I would like ick to have features like the ones described below. It may take a while to get all of them implemented.

  • A build may be triggered by a variety of events. Time is an obvious event, as is source code repository for the project changing. More powerfully, any build dependency changing, regardless of whether the dependency comes from another project built by ick, or a package from, say, Debian: ick should keep track of all the packages that get installed into the build environment of a project, and if any of their versions change, it should trigger the project build and tests again.

  • Ick should support building in (or against) any reasonable target, including any Linux distribution, any free operating system, and any non-free operating system that isn't brain-dead.

  • Ick should manage the build environment itself, and be able to do builds that are isolated from the build host or the network. This partially works: one can ask ick to build a container and run a build in the container. The container is implemented using systemd-nspawn. This can be improved upon, however. (If you think Docker is the only way to go, please contribute support for that.)

  • Ick should support any workers that it can control over ssh or a serial port or other such neutral communication channel, without having to install an agent of any kind on them. Ick won't assume that it can have, say, a full Java run time, so that the worker can be, say, a micro controller.

  • Ick should be able to effortlessly handle very large numbers of projects. I'm thinking here that it should be able to keep up with building everything in Debian, whenever a new Debian source package is uploaded. (Obviously whether that is feasible depends on whether there are enough resources to actually build things, but ick itself should not be the bottleneck.)

  • Ick should optionally provision workers as needed. If all workers of a certain type are busy, and ick's been configured to allow using more resources, it should do so. This seems like it would be easy to do with virtual machines, containers, cloud providers, etc.

  • Ick should be flexible in how it can notify interested parties, particularly about failures. It should allow an interested party to ask to be notified over IRC, Matrix, Mastodon, Twitter, email, SMS, or even by a phone call and speech synthesiser. "Hello, interested party. It is 04:00 and you wanted to be told when the hello package has been built for RISC-V."

Please give feedback

If you try ick, or even if you've just read this far, please share your thoughts on it. See the contact page for where to send it. Public feedback is preferred over private, but if you prefer private, that's OK too.

CryptogramSkygofree: New Government Malware for Android

Kaspersky Labs is reporting on a new piece of sophisticated malware:

We observed many web landing pages that mimic the sites of mobile operators and which are used to spread the Android implants. These domains have been registered by the attackers since 2015. According to our telemetry, that was the year the distribution campaign was at its most active. The activities continue: the most recently observed domain was registered on October 31, 2017. Based on our KSN statistics, there are several infected individuals, exclusively in Italy.

Moreover, as we dived deeper into the investigation, we discovered several spyware tools for Windows that form an implant for exfiltrating sensitive data on a targeted machine. The version we found was built at the beginning of 2017, and at the moment we are not sure whether this implant has been used in the wild.

It seems to be Italian. Ars Technica speculates that it is related to Hacking Team:

That's not to say the malware is perfect. The various versions examined by Kaspersky Lab contained several artifacts that provide valuable clues about the people who may have developed and maintained the code. Traces include the domain name h3g.co, which was registered by Italian IT firm Negg International. Negg officials didn't respond to an email requesting comment for this post. The malware may be filling a void left after the epic hack in 2015 of Hacking Team, another Italy-based developer of spyware.

BoingBoing post.

Cory DoctorowMy keynote from ConveyUX 2017: “I Can’t Let You Do That, Dave.”

“The Internet’s broken and that’s bad news, because everything we do today involves the Internet and everything we’ll do tomorrow will require it. But governments and corporations see the net, variously, as a perfect surveillance tool, a perfect pornography distribution tool, or a perfect video on demand tool—not as the nervous system of the 21st century. Time’s running out. Architecture is politics. The changes we’re making to the net today will prefigure the future our children and their children will thrive in—or suffer under.”

—Cory Doctorow

ConveyUX is pleased to feature author and activist Cory Doctorow to close out our 2017 event. Cory’s body of work includes fascinating science fiction and engaging non-fiction about the relationship between society and technology. His most recent book is Information Doesn’t Want to be Free: Laws for the Internet Age. Cory will delve into some of the issues expressed in that book and talk about issues that affect all of us now and in the future. Cory will be on hand for Q&A and a post-session book signing.

Planet DebianRenata D'Avila: Improving communication

After my last post, a lot of things happened, but what I'm going to talk about now is the thing that I believe had the most impact in improving my experience with the Outreachy internship: the changes that were made in communication, specially between me and my mentors.

When I struggled with the tasks, with moving forward, it was somewhat a wish of mine to change the ways I communicate with my mentors. (Alright, Renata, so why didn't you start by just doing that? Well, I wasn't sure where to begin.)

I didn't know how to propose something like that to my mentors, I mean... maybe that was how Outreachy was supposed to be and I just might have set different expectations? The first step to figure this out I took by reaching Anna, an Outreachy intern with Wikimedia who I'd been talking to since the interns announcement had been made.

I asked her about how she interacted with her mentors and how often, so I knew what I could ask for. She told me about her weekly meetings with her mentors and how she could chat directly with them when she ran into some issues. And, indeed, things like that were what I wanted to happen.

Before I could reach out and discuss this with my mentors, though, Daniel himself read last week's post and brought up the idea of us speaking on the phone for the first time. That was indeed a good experience and I told him I would like to repeat or establish some sort of schedule to communicate with each other.

Yes, well, a schedule would be the best improvement, I think. It's not just about the means (phone call or IRC, for instance) that we communicate, but to know that, at some point, either one per week or bi-weekly, there would be someone to talk to at a determined time so I could untie any knots that were created during my internship (if that makes sense). I know I could just send an email at any time to my mentors (and sometimes I do) and they would reply, but that's not quite the point.

So, to make this short: I started to talk to one of my mentors daily and it's been really helpful. We are working on a schedule for bi-weekly calls. And we always have e-mails. I'm glad to say that now I talk not just with mentors, but also with fellow brazilian Outreachers and former participants and everyone is willing to help out.

For all the ways to reach me, you can look up my Debian wiki profile.

Planet DebianThomas Lange: FAI.me build service now supports backports

The FAI.me build service now supports packages from the backports repository. When selecting the stable distribution, you can also enable backports packages. The customized installation image will then use the kernel from backports (currently 4.14) and you can add additional packages by appending /stretch-backports to the package name, e.g. notmuch/stretch-backports.

Currently, the FAIme service offers images built with Debian stable, stable with backports and Debian testing.

If you have any ideas for extensions or any feedback, send an email to FAI.me =at= fai-project.org

FAI.me

Planet DebianDirk Eddelbuettel: Rblpapi 0.3.8: Strictly maintenance

Another Rblpapi release, now at version 0.3.8, arrived on CRAN yesterday. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the eighth release since the package first appeared on CRAN in 2016. This release wraps up a few smaller documentation and setup changes, but also includes an improvement to the (less frequently-used) subscription mode which Whit cooked up on the weekend. Details below:

Changes in Rblpapi version 0.3.8 (2018-01-20)

  • The 140 day limit for intra-day data histories is now mentioned in the getTicks help (Dirk in #226 addressing #215 and #225).

  • The Travis CI script was updated to use run.sh (Dirk in #226).

  • The install_name_tool invocation under macOS was corrected (@spennihana in #232)

  • The blpAuthenticate help page has additional examples (@randomee in #252).

  • The blpAuthenticate code was updated and improved (Whit in #258 addressing #257)

  • The jump in version number was an oversight; this should have been 0.3.7.

And only while typing up these notes do I realize that I fat-fingered the version number. This should have been 0.3.7. Oh well.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramDark Caracal: Global Espionage Malware from Lebanon

The EFF and Lookout are reporting on a new piece of spyware operating out of Lebanon. It primarily targets mobile devices compromised by fake secure messaging clients like Signal and WhatsApp.

From the Lookout announcement:

Dark Caracal has operated a series of multi-platform campaigns starting from at least January 2012, according to our research. The campaigns span across 21+ countries and thousands of victims. Types of data stolen include documents, call records, audio recordings, secure messaging client content, contact information, text messages, photos, and account data. We believe this actor is operating their campaigns from a building belonging to the Lebanese General Security Directorate (GDGS) in Beirut.

It looks like a complex infrastructure that's been well-developed, and continually upgraded and maintained. It appears that a cyberweapons arms manufacturer is selling this tool to different countries. From the full report:

Dark Caracal is using the same infrastructure as was previously seen in the Operation Manul campaign, which targeted journalists, lawyers, and dissidents critical of the government of Kazakhstan.

There's a lot in the full report. It's worth reading.

Three news articles.

Worse Than FailureAlien Code Reuse

“Probably the best thing to do is try and reorganize the project some,” Tim, “Alien”’s new boss said. “It’s a bit of a mess, so a little refactoring will help you understand how the code all fits together.”

“Alien” grabbed the code from git, and started walking through the code. As promised, it was a bit of a mess, but partially that mess came from their business needs. There was a bunch of common functionality in a Common module, but for each region they did business in- Asia, North America, Europe, etc.- there was a region specific deployable, each in its own module. Each region had its own build target that would include the Common module as part of the build process.

The region-specific modules were vaguely organized into sub-modules, and that’s where “Alien” settled in to start reorganizing. Since Asia was the largest, most complicated module, they started there, on a sub-module called InventoryManagement. They moved some files around, set up the module and sub-modules in Maven, and then rebuilt.

The Common library failed to build. This gave “Alien” some pause, as they hadn’t touched anything pertaining to the Common project. Specifically, Common failed to build because it was looking for some files in the Asia.InventoryManagement sub-module. Cue the dive into the error trace and the vagaries of the build process. Was there a dependency between Common and Asia that had gone unnoticed? No. Was there a build-order issue? No. Was Maven just being… well, Maven? Yes, but that wasn’t the problem.

After hunting around through all the obvious places, “Alien” eventually ran an ls -al.

~/messy-app/base/Common/src/com/mycompany > ls -al
lrwxrwxrwx 1 alien  alien    39 Jan  4 19:10 InventoryManagement -> ../../../../../Asia/InventoryManagement/src/com/mycompany/IM/
drwxr-x--- 3 alien  alien  4096 Jan  4 19:10 core/

Yes, that is a symbolic link. A long-ago predecessor discovered that the Asia.InventoryManagement sub-module contained some code that was useful across all modules. Actually moving that code into Common would have involved refactoring Asia, which was the largest, most complicated module. Presumably, that sounded like work, so instead they just added a sym-link. The files actually lived in Asia, but were compiled into Common.

“Alien” writes, “This is the first time in my over–20-year working life I see people reuse source code like this.”

They fixed this, and then went hunting, only to find a dozen more examples of this kind of code “reuse”.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaJames Morris: LCA 2018 Kernel Miniconf – SELinux Namespacing Slides

I gave a short talk on SELinux namespacing today at the Linux.conf.au Kernel Miniconf in Sydney — the slides from the talk are here: http://namei.org/presentations/selinux_namespacing_lca2018.pdf

This is a work in progress to which I’ve been contributing, following on from initial discussions at Linux Plumbers 2017.

In brief, there’s a growing need to be able to provide SELinux confinement within containers: typically, SELinux appears disabled within a container on Fedora-based systems, as a workaround for a lack of container support.  Underlying this is a requirement to provide per-namespace SELinux instances,  where each container has its own SELinux policy and private kernel SELinux APIs.

A prototype for SELinux namespacing was developed by Stephen Smalley, who released the code via https://github.com/stephensmalley/selinux-kernel/tree/selinuxns.  There were and still are many TODO items.  I’ve since been working on providing namespacing support to on-disk inode labels, which are represented by security xattrs.  See the v0.2 patch post for more details.

Much of this work will be of interest to other LSMs such as Smack, and many architectural and technical issues remain to be solved.  For those interested in this work, please see the slides, which include a couple of overflow pages detailing some known but as yet unsolved issues (supplied by Stephen Smalley).

I anticipate discussions on this and related topics (LSM stacking, core namespaces) later in the year at Plumbers and the Linux Security Summit(s), at least.

The session was live streamed — I gather a standalone video will be available soon!

ETA: the video is up! See:

Planet DebianDaniel Pocock: Keeping an Irish home warm and free in winter

The Irish Government's Better Energy Homes Scheme gives people grants from public funds to replace their boiler and install a zoned heating control system.

Having grown up in Australia, I think it is always cold in Ireland and would be satisfied with a simple control switch with a key to make sure nobody ever turns it off but that isn't what they had in mind for these energy efficiency grants.

Having recently stripped everything out of the house, right down to the brickwork and floorboards in some places, I'm cautious about letting any technologies back in without checking whether they are free and trustworthy.

bare home

This issue would also appear to fall under the scope of FSFE's Public Money Public Code campaign.

Looking at the last set of heating controls in the house, they have been there for decades. Therefore, I can't help wondering, if I buy some proprietary black box today, will the company behind it still be around when it needs a software upgrade in future? How many of these black boxes have wireless transceivers inside them that will be compromised by security flaws within the next 5-10 years, making another replacement essential?

With free and open technologies, anybody who is using it can potentially make improvements whenever they want. Every time a better algorithm is developed, if all the homes in the country start using it immediately, we will always be at the cutting edge of energy efficiency.

Are you aware of free and open solutions that qualify for this grant funding? Can a solution built with devices like Raspberry Pi and Arduino qualify for the grant?

Please come and share any feedback you have on the FSFE discussion list (join, reply to the thread).

Planet DebianNorbert Preining: Continuous integration testing of TeX Live sources

The TeX Live sources consist in total of around 15000 files and 8.7M lines (see git stats). They integrate several upstream projects, including big libraries like FreeType, Cairo, and Poppler. Changes come in from a variety of sources: external libraries, TeX-specific projects (LuaTeX, pdfTeX etc), as well as our own adaptations and changes/patches to upstream sources. For quite some time I have wanted to have continuous integration (CI) testing, but since our main repository is based on Subversion, the usual (easy, or the one I know) route via Github and one of the CI testing providers didn't come to my mind – until last week.

Over the weekend I have set up CI testing for our TeX Live sources by using the following ingredients: git-svn for checkout, Github for hosting, Travis-CI for testing, and a cron job that does the connection. To be more specific:

  • git-svn I use git-svn to check out only the source part of the (otherwise far too big) subversion repository onto my server. This is similar to the git-svn checkout of the whole of TeX Live as I reported here, but contains only the source part.
  • Github The git-svn checkout is pushed to the project TeX-Live/texlive-source on Github.
  • Travis-CI The CI testing is done in the TeX-Live/texlive-source project on Travis-CI (who are offering free services for open source projects, thanks!)

Although this sounds easy, there are a few stumbling blocks. First of all, the .travis.yml file is not contained in the main subversion repository, so adding it to the master tree that is managed via git-svn does not work, because the history is rewritten (git svn rebase). My solution was to create a separate branch travis-ci which adds only the .travis.yml file and merges master.

Travis-CI by default tests all branches, and does not test those not containing a .travis.yml, but to be sure I added an except clause stating that the master branch should not be tested. This way other developers can try different branches, too. The full .travis.yml can be checked on Github, here is the current status:

# .travis.yml for texlive-source CI building
# Norbert Preining
# Public Domain

language: c

branches:
  except:
  - master

before_script:
  - find . -name \*.info -exec touch '{}' \;

before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y libfontconfig-dev libx11-dev libxmu-dev libxaw7-dev

script: ./Build

What remains is stitching these things together by adding a cron job that regularly does git svn rebase on the master branch, merges the master branch into travis-ci branch, and pushes everything to Github. The current cron job is here:

#!/bin/bash
# cron job for updating texlive-source and pushing it to github for ci
set -e

TLSOURCE=/home/norbert/texlive-source.git
GIT="git --no-pager"

quiet_git() {
    stdout=$(tempfile)
    stderr=$(tempfile)

    if ! $GIT "$@" >$stdout 2>$stderr; then
	echo "STDOUT of git command:"
	cat $stdout
	echo "************"
        cat $stderr >&2
        rm -f $stdout $stderr
        exit 1
    fi

    rm -f $stdout $stderr
}

cd $TLSOURCE
quiet_git checkout master
quiet_git svn rebase
quiet_git checkout travis-ci
# don't use [skip ci] here because we only built the 
# last commit, which would stop building
quiet_git merge master -m "merging master"
quiet_git push --all

With this setup we can do CI testing of our changes to the TeX Live sources, and in the future maybe some developers will use separate branches to get testing there, too.

Enjoy.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 3 – Developers, Developers Miniconf

Beyond Web 2.0 Russell Keith-Magee

  • Django guy
  • Back in 2005 when Django first came out
    • Web was fairly simple, click something and something happened
    • model, views, templates, forms, url routing
  • The web c 2016
    • Rich client
    • API
    • mobile clients, native apps
    • realtime channels
  • Rich client frameworks
    • response to the increased complexity that is required
    • Complex client-side and complex server-side code
  • Isomorphic Javascript development
    • Same code on both client and server
    • Only works with javascript really
    • hacks to work with other languages but not great
  • Isomorphic javascript development
    • Requirements
    • Need something in-between server and browser
    • Was once done with Java based web clients
    • model, view, controller
  • API-first development
  • How does it work with high-latency or no-connection?
  • Part of the controller and some of the model needed in the client
    • If you have python on the server you need python on the client
    • brython, skulp, pypy.js
    • <script type="text/python">
    • Note: Not python being compiled into javascript. Python is run in the browser
    • Need to download full python interpreter though (500k-15M)
    • Fairly fast
  • Do we need a full python interpreter?
    • Maybe something just to run the bytecode
    • Batavia
    • Javascript implementation of python virtual machine
    • 10KB
    • Downside – slower than cpython on the same machine
  • WASM
    • Like assembly but for the web
    • Benefits from 70y of experience with assembly languages
    • Close to Cpython speed
    • But
      • Not quite on browsers
      • No garbage collection
      • Cannot manipulate DOM
      • But both coming soon
  • Example: http://bit.ly/covered-in-bees
  • But “possible isn’t enough”
  • pybee.org
  • pybee.org/bee/join

Using “old skool” Free tools to easily publish API documentation – Alec Clew

  • https://github.com/alecthegeek/doc-api-old-skool
  • Your API is successful if people are using it
  • High Quality and easy to use
  • Provide great docs (might cut down on support tickets)
  • Who are you writing for?
    • Might not have english as first language
    • New to the API
    • Might have different tech expertise (different languages)
    • Different tooling
  • Can be hard work
  • Make better docs
    • Use diagrams
    • Show real code (complete and working)
  • Keep your sentences simple
  • Keep the docs current
  • Treat documentation like code
    • Fix bugs
    • add features
    • refactor
    • automatic builds
    • Cross platform support
    • “Everything” is text and under version control
  • Demo using pandoc
  • Tools
  • pandoc, plantuml, Graphviz, M4, make, base/sed/python/etc

 

Lightning Talks

  • Nic – Alt attribute
    • need to be added to images
    • Don’t have alts when images as links
    • http://bit.ly/Nic-slides
  • Vaibhav Sager – Travis-CI
    • Builds codes
    • Can build websites
    • Uses to build Resume
    • Build presentations
  • Steve Ellis
    • Openshift Origin Demo
  • Alec Clews
    • Python vs C vs PHP vs Java vs Go for small case study
    • Implemented simple xmlrpc client in 5 languages
    • Python and Go were straightforward, each had one simple trick (40-50 lines)
    • C was 100 lines. A lot harder. Conversions, etc all manual
    • PHP wasn’t too hard. Easier in modern vs older PHP
  • Daurn
    • Lua
    • Fengari.io – Lua in the browser
  • Alistair
    • How not to docker (don’t trust the Internet; a hedged example follows this list)
    • Don’t run privileged
    • Don’t expose your docker socket
    • Don’t use host network mode
    • Know where your code is FROM
    • Make sure your kernel on your host is secure
  • Daniel
    • Put proxy in front of the docker socket
    • You can use it to limit what no-priv users with socket access to docker port can do
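Purely as a hedged illustration of the docker advice above (image name and user id are invented, and the exact flags you need will vary):

# run as a non-root user, drop all capabilities, keep the root fs read-only,
# use ordinary bridge networking, and never pass --privileged or the docker socket
docker run --rm --user 1000:1000 --cap-drop ALL --read-only \
    --network bridge example/app:1.0

And, following Daniel’s point, never bind-mount /var/run/docker.sock into a container unless a restricting proxy sits in front of it.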

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 2

Manage all your tasks with TaskWarrior Paul ‘@pjf’ Fenwick

  • Lots of task management software out there
    • Tried lots
    • Doesn’t like proprietary ones, but unable to add features he wants
    • Likes command line
  • Disclaimer: “Most systems do not work for most people”
  • TaskWarrior
    • Lots of features
    • Learning cliff

Intro to TaskWarrior

  • Command line
  • Simple level can be just a todo list
  • Can add tags
    • unstructured many to many
    • Added by just putting “+whatever” on the command
    • Great for searching
    • Can put all people or all types of jobs together
  • Meta Tags
    • Automatic date related (eg due this week or today)
  • Project
    • A bunch of tasks
    • Can be strung together
    • eg Travel project, projects for each trip inside them
  • Contexts (show only some projects and tasks)
    • Work tasks
    • Tasks for just a client
    • Home stuff
  • Annotation (Taking notes)
    • $ task 31 annotate “extra stuff”
    • has an auto timestamp
    • show by default, or just show a count of them
  • Tasks associated with dates
    • “wait”
    • Don’t show task until a date (approx)
    • Hide a task for an amount of time
    • Scheduled tasks’ urgency is boosted at a specific date
  • Until
    • delete a task after a certain date
  • Relative to other tasks
    • eg book flights 30 days before a conference
    • good for scripting, create a whole bunch of related tasks for a project
  • due dates
    • All sorts of things (see above) give tasks higher priority
    • Tasks can be manually changed
  • Tools and plugins
    • Taskopen – Opens resources in annotations (eg website, editor)
  • Working with others
    • Bugwarrior – interfaces with github, trello, gmail, jira, trac, bugzilla and lots of things
    • Lots of settings
    • Keeps all in sync
  • Lots of extra stuff
    • Paul updates his shell prompt to remind him things are busy
  • Also has
    • Graphical reports: burndown, calendar
    • Hooks: Eg hooks to run all sort of stuff
    • Online Sync
    • Android client
    • Web client
  • Reminder: it has a steep learning curve. (A few starter commands are sketched below.)
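A quick, hedged taste of the syntax from the session (project, tag, dates and annotation text are all invented examples):

task add "Book flights" project:Travel.LCA2019 +travel due:2018-12-01 wait:2018-10-01
task +travel list                       # everything tagged travel
task context define work +work          # define a context of only work-tagged tasks
task context work                       # ...and switch to it
task 31 annotate "booked, ref ABC123"   # timestamped note on task 31

These are standard TaskWarrior commands as I understand them; check task help or the man pages for the exact behaviour on your version.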

Love thy future self: making your systems ops-friendly Matt Palmer

  • Instrumentation
  • Instrumenting incoming requests
    • Count of the total number of requests (broken down by requestor)
    • Count of responses (broken down by request/error)
    • How long it took (broken down by success/errors)
    • How many right now
  • Get number of in-progress requests, average time etc
  • Instrumenting outgoing requests
    • For each downstream component
    • Number of request sent
    • how many responses we’ve received (broken down by success/error)
    • How long it took to get the response (broken down by request/error)
    • How many right now
  • Gives you
    • incoming/outgoing ratio
    • error rate = problem is downstream
  • Logs
    • Logging costs tend to be more than instrumentation
  • Three Log priorities
    • Error
      • Need a full stack trace
      • Add info don’t replace it
      • Capture all the relevant variables
      • Structure
    • Information
      • Startup messages
      • Basic request info
      • Sampling
    • Debug
      • printf debugging at web scale
      • tag with module/method
      • unique id for each request
      • late-bind log data if possible.
      • Allow selective activation at runtime (feature flag, special url, signals)
    • Summary
      • Visibility required
      • Fault isolation

 


Planet DebianShirish Agarwal: PrimeZ270-p, Intel i7400 review and Debian – 1

This is going to be a biggish one as well.

This is a continuation from my last blog post .

Before diving into installation, I had been reading Matthew Garrett’s work for quite a while. Thankfully most of his blog posts do get mirrored on planet.debian.org hence it is easy to get some idea as to what needs to be done, although I have told him (I think even shared here) that he should somehow make his site more easily navigable. Trying to find posts on either ‘GPT’ or ‘UEFI’ and to have those posts in an ascending or descending way date-wise is not possible, at least I couldn’t find a way to do it as he doesn’t do it date-wise or something.

The closest I could come to is using ‘$keyword’ site:https://mjg59.dreamwidth.org/ via a search-engine and going through the entries shared therein. This doesn’t mean I don’t value his contribution. It is in fact, the opposite. AFAIK he was one of the first people who drew the community’s attention when UEFI came in and only Microsoft Windows could be booted on them, nothing else.

I may be wrong but AFAIK he was the first one to talk about having a shim and was part of getting people to be part of the shim process.

While I’m sure Matthew’s understanding may have evolved significantly from what he had shared before, it was two specific blog posts that I had to re-read before trying to install MS-Windows and then a Debian GNU/Linux system on it.

I went over to a friend’s house who had Windows 7 running at his end, used diskpart and did the change to GPT using a Windows TechNet article.

I had to use/go the GPT way as I understood that MS-Windows takes all the four primary partitions for itself, leaving nothing for any other operating system to do/use .

I did the conversion to GPT and went with MS-Windows 10, as my current motherboard and all future motherboards from Intel Gen7/Gen8 onwards do not support anything less than Windows 10. I did see an unofficial patch floating on github somewhere but have now lost the reference to it. I had read some of the bug-reports of the repo. which seemed to suggest it was still a work in progress.

Now this is where it starts becoming a bit… let’s say interesting.

Now a friend/client of mine offered me a job to review MS-Windows 10, with his product keys of course. I was a bit hesitant as it had been a long time since I had worked with MS-Windows and didn’t know if I could do it or not, the other was a suspicion that I might like it too much. While I did review it, I found –

a. It is one heck of a bloatware – I had thought MS-Windows would have learned it by now but no, they still have to learn that adware and bloatware aren’t solutions. I still can’t get my head wrapped around as to how 4.1 GB of an MS-Windows ISO gets extracted to 20 GB and you still have to install shit-loads of third-party tools to actually get anything done. Just amazed (and not in a good way).

Just to share as an example I still had to get something like Revo Uninstaller as MS-Windows even till date hasn’t learned to uninstall programs cleanly and needs a tool like that to clean the registry and other places to remove the titbits left along the way.

Edit/Update – It still doesn’t have Fall Creators Update which is still supposed to be another 4 GB+ iso which god only knows how much space that will take.

b. It’s still not gold – With all the hoopla around MS-Windows 10 that I had been hearing and the ads I had been seeing, I was under the impression that MS-Windows had turned gold, i.e. it had a release the way Debian will have ‘buster’, probably around or after the 2019 Debconf is held. The next Windows 10 release from Microsoft would be around July 2018, so it’s still a few months off.

c. I had read an insightful article a few years ago by a junior Microsoft employee sharing/emphasizing why MS cannot do GNU/Linux volunteer/bazaar type of development. To put it in not so many words, it came down to the cultural differences in the way the two communities operate. While in GNU/Linux one more patch, one more pull request will be encouraged, and it may be integrated in that point release, or if it can’t be, it would be in the next point release (unless it changes something much more core/fundamental which needs more in-depth review), MS-Windows on the other hand actively discourages that sort of behavior as it means more time for integration and testing, and from the sound of it MS still doesn’t do Continuous Integration (CI), regression testing etc. as is becoming more and more common in many GNU/Linux projects.

I wish I could have shared the article but don’t have the link anymore. @Lazyweb, if you would be so kind as to help find that article. The developer had shared some sort of ssh credentials or something to prove who he was, which he later removed (probably) because the consequences to him for sharing that insight were not worth it, although the writings seemed to be valid.

There were many more quibbles but shared the above ones. For e.g. copying files from hdd to usb disks doesn’t tell how much time it takes, while in Debian I’ve come to see time taken for any operation as guaranteed.

Before starting on to the main issue, some info. before-hand although I don’t know how relevant or not that info. might be –

Prime Z270-P uses EFI 2.60 by American Megatrends –

/home/shirish> sudo dmesg | grep -i efi
[sudo] password for shirish:
[ 0.000000] efi: EFI v2.60 by American Megatrends

I can share more info. if needed later.

Now as I understood/interpreted info. found on the web and by experience, Microsoft makes quite a few more partitions than necessary to get MS-Windows installed.

This is how it stacks up/shows up –

> sudo fdisk -l
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: xxxxxxxxxxxxxxxxxxxxxxxxxxx

Device Start End Sectors Size Type
/dev/sda1 34 262177 262144 128M Microsoft reserved
/dev/sda2 264192 1185791 921600 450M Windows recovery environment
/dev/sda3 1185792 1390591 204800 100M EFI System
/dev/sda4 1390592 3718037503 3716646912 1.7T Microsoft basic data
/dev/sda5 3718037504 3718232063 194560 95M Linux filesystem
/dev/sda6 3718232064 5280731135 1562499072 745.1G Linux filesystem
/dev/sda7 5280731136 7761199103 2480467968 1.2T Linux filesystem
/dev/sda8 7761199104 7814035455 52836352 25.2G Linux swap

I had made 2 GB for /boot in MS-Windows installer as I had thought it would take only some space and leave the rest for Debian GNU/Linux’s /boot to put its kernel entries, tools to check memory and whatever else I wanted to have on /boot/debian but for some reason I have not yet understood, that didn’t work out as I expected it to be.

Device Start End Sectors Size Type
/dev/sda1 34 262177 262144 128M Microsoft reserved
/dev/sda2 264192 1185791 921600 450M Windows recovery environment
/dev/sda3 1185792 1390591 204800 100M EFI System
/dev/sda4 1390592 3718037503 3716646912 1.7T Microsoft basic data

As seen in the above, the first four primary partitions are taken by MS-Windows themselves. I just wish I had understood how to use GPT disklabels properly so I could figure out things better, but it seems (for reasons not fully understood) that the EFI partition is a lowly 100 MB, which I suspect is where /boot ended up even though I asked for it to be 2 GB. Is that UEFI’s doing, Microsoft’s doing or some default, dunno. Having the EFI partition smaller hampers the way I want to do things, as will be clear in a short while from now.

After I installed MS-Windows, I installed Debian GNU/Linux using the net install method.

The following is what I had put on piece of paper as what partitions would be for GNU/Linux –

/boot – 512 MB (should be enough to accommodate a couple of kernel versions, memory checking and any other tools I might need in the future).

/ – 700 GB – well admittedly that looks insane a bit but I do like to play with new programs/binaries as and when possible and don’t want to run out of space as and when I forget to clean it up.

[off-topic, wishlist] One tool I would like to have (and dunno if it’s there) is an ability to know when I installed a package, how many times I have used it, how frequently, and the ability to add small notes or a description to the package. Many a time I have seen that the package description is either too vague or doesn’t focus on the practical usefulness of a package to me.

An easy example to share what I mean would be the apt package –

aptitude show apt
Package: apt
Version: 1.6~alpha6
Essential: yes
State: installed
Automatically installed: no
Priority: required
Section: admin
Maintainer: APT Development Team
Architecture: amd64
Uncompressed Size: 3,840 k
Depends: adduser, gpgv | gpgv2 | gpgv1, debian-archive-keyring, libapt-pkg5.0 (>= 1.6~alpha6), libc6 (>= 2.15), libgcc1 (>= 1:3.0), libgnutls30 (>= 3.5.6), libseccomp2 (>=1.0.1), libstdc++6 (>= 5.2)
Recommends: ca-certificates
Suggests: apt-doc, aptitude | synaptic | wajig, dpkg-dev (>= 1.17.2), gnupg | gnupg2 | gnupg1, powermgmt-base, python-apt
Breaks: apt-transport-https (< 1.5~alpha4~), apt-utils (< 1.3~exp2~), aptitude (< 0.8.10)
Replaces: apt-transport-https (< 1.5~alpha4~), apt-utils (< 1.3~exp2~)
Provides: apt-transport-https (= 1.6~alpha6)
Description: commandline package manager
This package provides commandline tools for searching and managing as well as querying information about packages as a low-level access to all features of the libapt-pkg library.

These include:
* apt-get for retrieval of packages and information about them from authenticated sources and for installation, upgrade and removal of packages together with their dependencies
* apt-cache for querying available information about installed as well as installable packages
* apt-cdrom to use removable media as a source for packages
* apt-config as an interface to the configuration settings
* apt-key as an interface to manage authentication keys

Now while I love all the various tools that the apt package has, I do have special fondness for $apt-cache rdepends $package

as it gives another overview of a package or library or shared library that I may be interested in and which other packages are in its orbit.
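For instance (package chosen just as an example, output left out), the pair of commands I keep reaching for looks like:

apt-cache rdepends apt      # which packages depend on apt
apt-cache depends apt       # what apt itself pulls in

which together give a quick picture of where a package sits in the dependency graph.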

Over a period of time it becomes easy/easier to forget packages that you don’t use day-to-day, hence having such a tool would be a god-send where you can put personal notes about packages. Another could be reminders of tickets posted upstream or something on those lines. I don’t know of any tool/package which does something on those lines. [/off-topic, wishlist]

/home – 1.2 TB

swap – 25.2 GB

I admit I went a bit overboard on swap space, but as and when I get more memory I should at least have swap 1:1, right? I am not sure if the old rules still apply or not.

Then I used Debian buster alpha 2 netinstall iso

https://cdimage.debian.org/cdimage/buster_di_alpha2/amd64/iso-cd/debian-buster-DI-alpha2-amd64-netinst.iso and put it on the usb stick. I did use the sha1sum to ensure that the netinstall iso was the same as the original one https://cdimage.debian.org/cdimage/buster_di_alpha2/amd64/iso-cd/SHA1SUMS

After that, simply doing a dd if=… of=… was enough to copy the netinstall image to the usb stick.
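For completeness, a hedged sketch of what that looked like; /dev/sdX is a placeholder for the USB stick’s device name (double-check it with lsblk first, since dd will overwrite the target completely):

sha1sum debian-buster-DI-alpha2-amd64-netinst.iso    # compare against the SHA1SUMS file
sudo dd if=debian-buster-DI-alpha2-amd64-netinst.iso of=/dev/sdX bs=4M status=progress
sync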

I did have some issues with the installation which I’ll share in the next post, but the most critical issue was that I had to again make a /boot, and even though I made /boot as a separate partition and gave 1 GB to it during the partitioning step, I got only 100 MB and I have no idea why it is like that.

/dev/sda5 3718037504 3718232063 194560 95M Linux filesystem

> df -h /boot
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 88M 68M 14M 84% /boot

home/shirish> ls -lh /boot
total 55M
-rw-r--r-- 1 root root 193K Dec 22 19:42 config-4.14.0-2-amd64
-rw-r--r-- 1 root root 193K Jan 15 01:15 config-4.14.0-3-amd64
drwx------ 3 root root 1.0K Jan 1 1970 efi
drwxr-xr-x 5 root root 1.0K Jan 20 10:40 grub
-rw-r--r-- 1 root root 19M Jan 17 10:40 initrd.img-4.14.0-2-amd64
-rw-r--r-- 1 root root 21M Jan 20 10:40 initrd.img-4.14.0-3-amd64
drwx------ 2 root root 12K Jan 1 17:49 lost+found
-rw-r--r-- 1 root root 2.9M Dec 22 19:42 System.map-4.14.0-2-amd64
-rw-r--r-- 1 root root 2.9M Jan 15 01:15 System.map-4.14.0-3-amd64
-rw-r--r-- 1 root root 4.4M Dec 22 19:42 vmlinuz-4.14.0-2-amd64
-rw-r--r-- 1 root root 4.7M Jan 15 01:15 vmlinuz-4.14.0-3-amd64

root@debian:/boot/efi/EFI# ls -lh
total 3.0K
drwx------ 2 root root 1.0K Dec 31 21:38 Boot
drwx------ 2 root root 1.0K Dec 31 19:23 debian
drwx------ 4 root root 1.0K Dec 31 21:32 Microsoft

I would be the first to say I don’t really understand this EFI business.

The only thing I do understand is that it’s good that, even without an OS, it becomes easier to see which components (if you change or add any) would or would not work; in BIOS, getting info on components was iffy at best.

There have been other issues with EFI which I may take in another blog post but for now I would be happy if somebody can share –

how to have a big /boot/ so it’s not a small partition for debian boot. I don’t see any value in having a bigger /boot for MS-Windows unless there is a way to also get grub2 pointer/header added in MS-Windows bootloader. Will share the reasons for it in the next blog post.

I am open to reinstalling both MS-Windows and Debian from scratch although that would happen when debian-buster-alpha3 arrives. Any answer to the above would give me something to try the solution and share if I get the desired result.

Looking forward for answers.

Planet DebianLouis-Philippe Véronneau: French Gender-Neutral Translation for Roundcube

Here's a quick blog post to tell the world I'm now doing a French gender-neutral translation for Roundcube.

A while ago, someone wrote on the Riseup translation list to complain about the current fr_FR translation. French is indeed a very gendered language and it is commonplace in radical spaces to use gender-neutral terminologies.

So yeah, here it is: https://github.com/baldurmen/roundcube_fr_FEM

I haven't tested the UI integration yet, but I'll do that once the Riseup folks integrate it to their Roundcube instance.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 1 – Kernel Miniconf

Look out for what’s in the security pipeline – Casey Schaufler

Old Protocols

  • SELinux
    • Not much changing
  • Smack
    • Network configuration improvements and catchup with how the netlabel code wants things to be done.
  • AppArmor
    • Labeled objects
    • Networking
    • Policy stacking

New Security Modules

  • Some people think existing security modules don’t work well with what they are doing
  • Landlock
    • eBPF extension to SECMARK
    • Kills processes when it goes outside of what it should be doing
  • PTAGS
    • General purpose process tags
    • For application use (app can decide what it wants based on tags, not something external to the process enforcing things)
  • HardChroot
    • Limits on chroot jail
    • mount restrictions
  • Safename
    • Prevents creation of unsafe file names
    • start, middle or end characters
  • SimpleFlow
    • Tracks tainted data

Security Module Stacking

  • Problems with incompatibility of module labeling
  • People want different security policy and mechanism in containers than from the base OS
  • Netfilter problems between smack and Apparmor

Container

  • Containers are a little bit undefined right now. Not a kernel construct
  • But while not kernel constructs, need to work with and support them

Hardening

  • Printing pointers (eg in syslog)
  • Usercopy

 


,

Planet DebianDirk Eddelbuettel: #15: Tidyverse and data.table, sitting side by side ... (Part 1)

Welcome to the fifteenth post in the rarely rational R rambling series, or R4 for short. There are two posts I have been meaning to get out for a bit, and hope to get to shortly---but in the meantime we are going to start something else.

Another longer-running idea I had was to present some simple application cases with (one or more) side-by-side code comparisons. Why? Well at times it feels like R, and the R community, are being split. You're either with one (increasingly "religious" in their defense of their deemed-superior approach) side, or the other. And that is of course utter nonsense. It's all R after all.

Programming, just like other fields using engineering methods and thinking, is about making choices, and trading off between certain aspects. A simple example is the fairly well-known trade-off between memory use and speed: think e.g. of a hash map allowing for faster lookup at the cost of some more memory. Generally speaking, solutions are rarely limited to just one way, or just one approach. So it pays off to know your tools, and choose wisely among all available options. Having choices is having options, and those tend to have non-negative premiums to take advantage of. Locking yourself into one and just one paradigm can never be better.

In that spirit, I want to (eventually) show a few simple comparisons of code being done two distinct ways.

One obvious first candidate for this is the gunsales repository with some R code which backs an earlier NY Times article. I got involved for a similar reason, and updated the code from its initial form. Then again, this project also helped motivate what we did later with the x13binary package which permits automated installation of the X13-ARIMA-SEATS binary to support Christoph's excellent seasonal CRAN package (and website) for which we now have a forthcoming JSS paper. But the actual code example is not that interesting / a bit further off the mainstream because of the more specialised seasonal ARIMA modeling.

But then this week I found a much simpler and shorter example, and quickly converted its code. The code comes from the inaugural datascience 1 lesson at the Crosstab, a fabulous site by G. Elliot Morris (who may be the highest-energy undergrad I have come across lately) focussed on political polling, forecasts, and election outcomes. Lesson 1 is a simple introduction, and averages some polls of the 2016 US Presidential Election.

Complete Code using Approach "TV"

Elliot does a fine job walking the reader through his code so I will be brief and simply quote it in one piece:


## Getting the polls

library(readr)
polls_2016 <- read_tsv(url("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"))

## Wrangling the polls

library(dplyr)
polls_2016 <- polls_2016 %>%
    filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
library(lubridate)
polls_2016 <- polls_2016 %>%
    mutate(end_date = ymd(end_date))
polls_2016 <- polls_2016 %>%
    right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date),
                                              max(polls_2016$end_date), by="days")))

## Average the polls

polls_2016 <- polls_2016 %>%
    group_by(end_date) %>%
    summarise(Clinton = mean(Clinton),
              Trump = mean(Trump))

library(zoo)
rolling_average <- polls_2016 %>%
    mutate(Clinton.Margin = Clinton-Trump,
           Clinton.Avg =  rollapply(Clinton.Margin,width=14,
                                    FUN=function(x){mean(x, na.rm=TRUE)},
                                    by=1, partial=TRUE, fill=NA, align="right"))

library(ggplot2)
ggplot(rolling_average)+
  geom_line(aes(x=end_date,y=Clinton.Avg),col="blue") +
  geom_point(aes(x=end_date,y=Clinton.Margin))

It uses five packages to i) read some data off them interwebs, ii) then filter / subset / modify it, iii) leading to a right (outer) join with itself, iv) averaging per-day polls first and then creating rolling averages over 14 days, before v) plotting. Several standard verbs are used: filter(), mutate(), right_join(), group_by(), and summarise(). One non-verse function is rollapply() which comes from zoo, a popular package for time-series data.

Complete Code using Approach "DT"

As I will show below, we can do the same with fewer packages as data.table covers the reading, slicing/dicing and time conversion. We still need zoo for its rollapply() and of course the same plotting code:


## Getting the polls

library(data.table)
pollsDT <- fread("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv")

## Wrangling the polls

pollsDT <- pollsDT[sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"), ]
pollsDT[, end_date := as.IDate(end_date)]
pollsDT <- pollsDT[ data.table(end_date = seq(min(pollsDT[,end_date]),
                                              max(pollsDT[,end_date]), by="days")), on="end_date"]

## Average the polls

library(zoo)
pollsDT <- pollsDT[, .(Clinton=mean(Clinton), Trump=mean(Trump)), by=end_date]
pollsDT[, Clinton.Margin := Clinton-Trump]
pollsDT[, Clinton.Avg := rollapply(Clinton.Margin, width=14,
                                   FUN=function(x){mean(x, na.rm=TRUE)},
                                   by=1, partial=TRUE, fill=NA, align="right")]

library(ggplot2)
ggplot(pollsDT) +
    geom_line(aes(x=end_date,y=Clinton.Avg),col="blue") +
    geom_point(aes(x=end_date,y=Clinton.Margin))

This uses several of the components of data.table which are often called [i, j, by=...]. Rows are selected (i), columns are either modified (via := assignment) or summarised (via =), and grouping is undertaken by by=.... The outer join is done by having a data.table object indexed by another, and is pretty standard too. That allows us to do all transformations in three lines. We then create the per-day average by grouping by day, compute the margin and construct its rolling average as before. The resulting chart is, unsurprisingly, the same.

Benchmark Reading

We can look at how the two approaches do on getting data read into our session. For simplicity, we will read a local file to keep the (fixed) download aspect out of it:

R> library(microbenchmark)
R> url <- "http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"
R> file <- "/tmp/poll-responses-clean.tsv"
R> download.file(url, destfile=file, quiet=TRUE)
R> res <- microbenchmark(tidy=suppressMessages(readr::read_tsv(file)),
+                       dt=data.table::fread(file, showProgress=FALSE))
R> res
Unit: milliseconds
 expr     min      lq    mean  median      uq      max neval
 tidy 6.67777 6.83458 7.13434 6.98484 7.25831  9.27452   100
   dt 1.98890 2.04457 2.37916 2.08261 2.14040 28.86885   100
R> 

That is a clear relative difference, though the absolute amount of time is not that relevant for such a small (demo) dataset.

Benchmark Processing

We can also look at the processing part:

R> rdin <- suppressMessages(readr::read_tsv(file))
R> dtin <- data.table::fread(file, showProgress=FALSE)
R> 
R> library(dplyr)
R> library(lubridate)
R> library(zoo)
R> 
R> transformTV <- function(polls_2016=rdin) {
+     polls_2016 <- polls_2016 %>%
+         filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
+     polls_2016 <- polls_2016 %>%
+         mutate(end_date = ymd(end_date))
+     polls_2016 <- polls_2016 %>%
+         right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date), 
+                                                   max(polls_2016$end_date), by="days")))
+     polls_2016 <- polls_2016 %>%
+         group_by(end_date) %>%
+         summarise(Clinton = mean(Clinton),
+                   Trump = mean(Trump))
+ 
+     rolling_average <- polls_2016 %>%
+         mutate(Clinton.Margin = Clinton-Trump,
+                Clinton.Avg =  rollapply(Clinton.Margin,width=14,
+                                         FUN=function(x){mean(x, na.rm=TRUE)}, 
+                                         by=1, partial=TRUE, fill=NA, align="right"))
+ }
R> 
R> transformDT <- function(dtin) {
+     pollsDT <- copy(dtin) ## extra work to protect from reference semantics for benchmark
+     pollsDT <- pollsDT[sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"), ]
+     pollsDT[, end_date := as.IDate(end_date)]
+     pollsDT <- pollsDT[ data.table(end_date = seq(min(pollsDT[,end_date]), 
+                                                   max(pollsDT[,end_date]), by="days")), on="end_date"]
+     pollsDT <- pollsDT[, .(Clinton=mean(Clinton), Trump=mean(Trump)), 
+                        by=end_date][, Clinton.Margin := Clinton-Trump]
+     pollsDT[, Clinton.Avg := rollapply(Clinton.Margin, width=14,
+                                        FUN=function(x){mean(x, na.rm=TRUE)}, 
+                                        by=1, partial=TRUE, fill=NA, align="right")]
+ }
R> 
R> res <- microbenchmark(tidy=suppressMessages(transformTV(rdin)),
+                       dt=transformDT(dtin))
R> res
Unit: milliseconds
 expr      min       lq     mean   median       uq      max neval
 tidy 12.54723 13.18643 15.29676 13.73418 14.71008 104.5754   100
   dt  7.66842  8.02404  8.60915  8.29984  8.72071  17.7818   100
R> 

Not quite a factor of two on the small data set, but again a clear advantage. data.table has a reputation for doing really well for large datasets; here we see that it is also faster for small datasets.

Side-by-side

Stripping out the reading, as well as the plotting, both of which are about the same, we can compare the essential data operations side by side.

Summary

We found a simple task solved using code and packages from an increasingly popular sub-culture within R, and contrasted it with a second approach. We find the second approach to i) have fewer dependencies, ii) use less code, and iii) run faster.

Now, undoubtedly the former approach will have its staunch defenders (and that is all good and well, after all choice is good and even thirty years later some still debate vi versus emacs endlessly) but I thought it to be instructive to at least to be able to make an informed comparison.

Acknowledgements

My thanks to G. Elliot Morris for a fine example, and of course a fine blog and (if somewhat hyperactive) Twitter account.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaBen Martin: 4cm thick wood cnc project: shelf

The lighter wood is about 4cm thick. Both of the sides are cut from a single plank of timber which left the feet with a slight weak point at the back. Given a larger bit of timber I would have tapered the legs outward from the back more gradually. But the design is restricted by the timber at hand.


The shelves are plywood which turned out fairly well after a few coats of poly. I knocked the extremely sharp edges off the ply so it hurts a little rather than a lot if you accidentally poke the edge. This is a mixed machine and human build; the back of the plywood that meets the uprights was knocked off using a bandsaw.

Being able to CNC thick timber like this opens up more bold designs. Currently I have to use a 1/2 inch bit to get this reach. Stay tuned for more CNC timber fun!


,

Planet DebianRuss Allbery: New year haul

Some new acquired books. This is a pretty wide variety of impulse purchases, filled with the optimism of a new year with more reading time.

Libba Bray — Beauty Queens (sff)
Sarah Gailey — River of Teeth (sff)
Seanan McGuire — Down Among the Sticks and Bones (sff)
Alexandra Pierce & Mimi Mondal (ed.) — Luminescent Threads (nonfiction anthology)
Karen Marie Moning — Darkfever (sff)
Nnedi Okorafor — Binti (sff)
Malka Older — Infomocracy (sff)
Brett Slatkin — Effective Python (nonfiction)
Zeynep Tufekci — Twitter and Tear Gas (nonfiction)
Martha Wells — All Systems Red (sff)
Helen S. Wright — A Matter of Oaths (sff)
J.Y. Yang — Waiting on a Bright Moon (sff)

Several of these are novellas that were on sale over the holidays; the rest came from a combination of reviews and random on-line book discussions.

The year hasn't been great for reading time so far, but I do have a couple of things ready to review and a third that I'm nearly done with, which is not a horrible start.

Planet DebianShirish Agarwal: PC desktop build, Intel, spectre issues etc.

This is and would be a longish one.

I have been using desktop computers for around a couple of decades now. My first two systems were an Intel Pentium III and then a Pentium Dual-core, the first one on a kobian/mercury motherboard. The motherboards were actually called Mercury, a brand which was later sold to Kobian which kept the brand-name. The motherboards and the CPU/processor used to be cheap. One could set up a decentish low-end system with display for around INR 40k/-, which seemed reasonable as a country we had just come out of the non-alignment movement and also chose to come out of isolationist tendencies (technology and otherwise as well). Most middle-class income families got their first taste of computers after y2k. There were quite a few y2k incomes which prompted the Government to reduce duties further.

One of the highlights during 1991 when satellite TV came was shown by CNN (probably CNN International) was the coming down of the Berlin Wall. There were many of us who were completely ignorant of world politics or what is/was happening in other parts of the world.

Computer systems at those times were considered a luxury item and duties were sky-high ( between 1992-2001). The launch of Mars Pathfinder, its subsequent successful landing on the Martian surface also catapulted people’s imagination about PCs and micro-processors.

I can still recall the excitement that was among young people of my age first seeing the liftoff from Cape Canaveral and then later the processed images of Spirits cameras showing images of a desolate desert-type land. We also witnessed the beginnings of ‘International Space Station‘ (ISS) .

Me and few of my friends had drunk lot of Carl Sagan and many other sci-fi coolaids/stories. Star Trek, the movies and the universal values held/shared by them was a major influence to all our lives.

People came to know about citizen-based science or distributed science projects, and the y2k fear appeared to be unfounded; all these factors and probably a few more prompted the Government of India to reduce duties on motherboards, processors and components, as well as taking computers off the restricted list, which led to competition and finally the common man being able to dream of a system sooner rather than later. Y2K also kick-started the beginnings of the Indian software industry, which is the bread and butter of many a middle-class man and woman who are in the service industry using technology directly or indirectly.

In 2002 I bought my first system, an Intel Pentium III, i810 chipset (integrated graphics) with 256 MB of SDRAM, which was supposed to be sufficient for the tasks it was being used for (some light gaming, some web-mail, watching movies, etc.), running on a Mercury board. I don’t remember the code-name partly because the code-names are/were really weird and partly because it is just too long ago. I remember using Windows ’98 and trying to install one of the early GNU/Linux variants on that machine. If memory serves right, you had to flick a jumper (like a switch) to use the extended memory.

I do not know/remember what happened but I think somewhere in a year or two in that time-frame Mercury India filed for bankruptcy and the name, manufacturing was sold to Kobian. After Kobian took over the ownership, it said it would neither honor the 3/5 year warranty or even repairs on the motherboards Mercury had sold, it created a lot of bad will against the company and relegated itself to the bottom of the pile for both experienced and new system-builders. Also mercury motherboards weren’t reputed/known to have a long life although the one I had gave me quite a decent life.

The next machine I purchased was a Pentium Dual-core (around 2009/2010), an LGA Willamette which had out-of-order execution; the bug meltdown which is making news nowadays has history this far back. I think I bought it in 45nm which was a huge jump from the previous version although still secure in the mATX package. Again the board was from Mercury. (Intel 845 chipset, DDR2 2 GB RAM and SATA came to stay).

So meltdown has been in existence for 10-12 odd years and is in everything which either uses Intel or ARM processors.

As you can probably make-out most systems came stretched out 2-3 years later than when they were launched in American or/and European markets. Also business or tourism travel was neither so easy, smooth or transparent as is today. All of which added to delay in getting new products in India.

Sadly, the Indian market is similar to other countries where Intel is used in more than 90% of machines. I know of a few institutions (though pretty much rare) who insisted on and got AMD solutions.

That was the time when Gigabyte came onto the scene, which formed the basis of the Wolfdale-3M 45nm system which was in the same price range as the earlier models, and offered a weeny tiny bit of additional graphics performance. To the best of my knowledge, it was perhaps the first budget motherboard which had solid state capacitors. The mobo-processor bundle used to be in the range of INR 7/8k excluding RAM, cabinet etc. I had a Philips 17″ CRT display which ran a good decade or so, so I just had to get the new cabinet, motherboard, CPU, RAM and was good to go.

Few months later at a hardware exhibition held in the city I was invited to an Asus party which was just putting a toe-hold in the Indian market. I went to the do, enjoyed myself. They had a small competition where they asked some questions and asked if people had queries. To my surprise, I found that most people who were there were hardware vendors and for one reason or the other they chose to remain silent. Hence I got an AMD Asus board. This is different from winning another Gigabyte motherboard which I also won in the same year in another competition as well in the same time-frame. Both were mid-range motherboards (ATX build).

As I had just bought a Gigabyte (mATX) motherboard and had made the build, I had to give both the motherboards away, one to a friend and one to my uncle and both were pleased with the AMD-based mobos which they somehow paired with AMD processors. At that time AMD had one-upped Intel in both graphics and even bare computing especially at the middle level and they were striving to push into new markets.

Apart from the initial system bought, most of my systems when being changed were in the INR 20-25k/- budget including all and any accessories I bought later.

The only really expensive parts I purchased have been an external hdd (1 TB WD Passport) and then a Viewsonic 17″ LCD, which together set me back by around INR 10k/-, but both seem to give me adequate performance (both have outlived their warranty years) with the monitor being used almost 24×7 over 6 years or so, of course over GNU/Linux, specifically Debian. Both have been extremely good value for the money.

As I had been exposed to both the motherboards I had been following those and other motherboards as well. What was and has been interesting to observe is that Asus later chose to focus more on the high-end gaming market, while Gigabyte continued to dilute its energy across both the mid and high-end motherboards.

Cut to 2017 and had seen quite a few reports –

http://www.pcstats.com/NewsView.cfm?NewsID=131618

http://www.digitimes.com/news/a20170904PD207.html

http://www.guru3d.com/news-story/asus-has-the-largest-high-end-intel-motherboard-share.html

All of which points to the fact that Asus had cornered a large percentage of the market, and specifically the gaming market. There are no formal numbers as both Asus and Gigabyte choose to release only APAC numbers rather than a country-wide split, which would have made for some interesting reading.

Just so that people do not presume anything, there are about 4-5 motherboard vendors in the Indian market. There is Asus at the top (I believe), followed by Gigabyte, with Intel at a distant 3rd place (because it’s too expensive). There are also pockets of Asrock and MSI and I know of people who follow them religiously, although their mobos are supposed to be somewhat more expensive than the two above. Asus and Gigabyte do try to fight it out with each other but each has its core competency, I believe, with Asus being used by heavy gamers and overclockers more than Gigabyte.

Anyway come October 2017 and my main desktop died and am left as they say up the creek without the paddle. I didn’t even have Net access for about 3 weeks due to BSNL or PMC’s foolishness and then later small riots breaking out due to Koregaon Bhima conflict.

This led to a situation where I had to buy/build a system with oldish/half knowledge. I was open to having an AMD system, but both Datacare and Rashi Peripherals, Pune, both of whom used to deal in AMD systems, shared that they had stopped dealing in AMD stuff some time back. While Datacare had AMD mobos, getting processors was an issue. Both the vendors are near to my home, so if I buy from them getting support becomes a non-issue. I could have gone out of my way to get an AMD processor but getting support could have been an issue as I would have had to travel and I do not know the vendors enough. Hence I fell back to the Intel platform.

I asked around quite a few PC retailers and distributors around and found the Asus Prime Z270-P was the only mid-range motherboard available at that time. I did come to know a bit later of other motherboards in the z270 series but most vendors didn’t/don’t stock them as there is capital, interest and stock cost.

History – Historically, there has also been a huge time lag in getting motherboards, processors etc. between worldwide announcements, announcements of sale in India, and actually getting hands on the newest motherboards and processors, as seen above. This has led to quite a bit of frustration for many a user. I have known of many a soul visiting Lamington Road, Mumbai to get the latest motherboard or processor. Even to date this system flourishes as Mumbai has an international airport and there is always a demand and people willing to pay a premium for the newest processor/motherboard even before any reviews are in.

I was highly surprised to know recently that Prime Z370-P motherboards are already selling (just 3 months late) with the Intel 8th generation processors, although these are still arriving as a trickle of samples rather than the torrent some of the other motherboard combos might be.

At the end I bought an Intel i7400 chip and an Asus Prime Z270-P motherboard with 2400 MHz Corsair 8 GB and a 4 TB WD Green (5400 rpm) HDD, with a Circle 545 cabinet and (with the almost criminal 400 Watt SMPS). Later I came to know that it’s not really even 400 Watts, but around 20-25% less. The whole package cost me north of INR 50k/-, and I still need to spend on a better SMPS (probably a Corsair or Cooler Master 80+ 600/650W SMPS) along with a few accessories to complete the system.

I will be changing the PSU most probably next week.

Circle SMPS Picture

Asus motherboard, i7400 and RAM

Disclosure – The neatness you see is not me. I was unsure if I would be able to put the heatsink on the CPU properly, as that is the most sensitive part while building a system. A bent pin on the CPU could play havoc as well as void the warranty on the CPU or motherboard or both. The knobs that can be seen on the heatsink fan are something I hadn’t seen before. The vendor did the fixing of the processor on the mobo for me as well as tying up the remaining power cables without being asked, for which I am and was grateful, and I would definitely provide him with more business as and when I need components.

Future – While it’s ok for now, I’m still using a pretty old 2-speaker setup which I hope to upgrade to either a 2.1/3.1 speaker setup, have a full 64 GB of 2400 MHz Kingston Razor/G.Skill/Corsair memory, and an M.2 512 GB SSD.

If I do get the Taiwan Debconf bursary I do hope to buy some or all of the above plus a Samsung or some other Android/Replicant/Librem smartphone. I have been also looking for a vastly simplified smartphone for my mum with big letters and everything but that has been a failure to find in the Indian market. Of course this all depends if I do get the bursary and even after the bursary if Global warranty and currency exchange works out in my favor vis-a-vis what I would have to pay in India.

Apart from above, Taiwan is supposed to be a pretty good source to get graphic novels, manga comics, lots of RPG games for very cheap prices with covers and hand-drawn material etc. All of this is based upon few friend’s anecdotal experiences so dunno if all of that would still hold true if I manage to be there.

There are also quite a few chip foundries there and maybe during DebConf I could visit one of them if possible. It would be rewarding if the visit was to a 45nm or lower chip foundry, as India is still stuck at the 65nm range to date.

I would be sharing about my experience about the board, the CPU, the expectations I had from the Intel chip and the somewhat disappointing experience of using Debian on the new board in the next post, not necessarily Debian’s fault but the free software ecosystem being at fault here.

Feel free to point out any mistakes you find, grammatical or otherwise. The blog post has been in the works for over a couple of weeks so it’s possible for mistakes to creep in.

Planet DebianDirk Eddelbuettel: Rcpp 0.12.15: Numerous tweaks and enhancements

The fifteenth release in the 0.12.* series of Rcpp landed on CRAN today after just a few days of gestation in incoming/.

This release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, and the 0.12.14 release in November 2017, making it the nineteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1288 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a pretty large number of pull requests by a wide variety of authors. Most of these pull requests are very focused on a particular issue at hand. One was larger and ambitious with some forward-looking code for R 3.5.0; however this backfired a little on Windows and is currently "parked" behind a #define. Full details are below.

Changes in Rcpp version 0.12.15 (2018-01-16)

  • Changes in Rcpp API:

    • Calls from exception handling to Rf_warning() now correctly set an initial format string (Dirk in #777 fixing #776).

    • The 'new' Date and Datetime vectors now have is_na methods too. (Dirk in #783 fixing #781).

    • Protect more temporary SEXP objects produced by wrap (Kevin in #784).

    • Use public R APIs for new_env (Kevin in #785).

    • Evaluation of R code is now safer when compiled against R 3.5 (you also need to explicitly define RCPP_PROTECTED_EVAL before including Rcpp.h). Longjumps of all kinds (condition catching, returns, restarts, debugger exit) are appropriately detected and handled, e.g. the C++ stack unwinds correctly (Lionel in #789). [ Committed but subsequently disabled in release 0.12.15 ]

    • The new function Rcpp_fast_eval() can be used for performance-sensitive evaluation of R code. Unlike Rcpp_eval(), it does not try to catch errors with tryEval in order to avoid the catching overhead. While this is safe thanks to the stack unwinding protection, this also means that R errors are not transformed to an Rcpp::exception. If you are relying on error rethrowing, you have to use the slower Rcpp_eval(). On old R versions Rcpp_fast_eval() falls back to Rcpp_eval() so it is safe to use against any versions of R (Lionel in #789). [ Committed but subsequently disabled in release 0.12.15 ]

    • Overly-clever checks for NA have been removed (Kevin in #790).

    • The included tinyformat has been updated to the current version, Rcpp-specific changes are now more isolated (Kirill in #791).

    • Overly picky fall-through warnings by gcc-7 regarding switch statements are now pre-empted (Kirill in #792).

    • Permit compilation on ANDROID (Kenny Bell in #796).

    • Improve support for NVCC, the CUDA compiler (Iñaki Ucar in #798 addressing #797).

    • Speed up tests for NA and NaN (Kirill and Dirk in #799 and #800).

    • Rearrange stack unwind test code, keep test disabled for now (Lionel in #801).

    • Further condition away protect unwind behind #define (Dirk in #802).

  • Changes in Rcpp Attributes:

    • Addressed a missing Rcpp namespace prefix when generating a C++ interface (James Balamuta in #779).
  • Changes in Rcpp Documentation:

    • The Rcpp FAQ now shows Rcpp::Rcpp.plugin.maker() and not the outdated ::: use applicable to non-exported functions.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianNorbert Preining: TLCockpit v0.8

Today I released v0.8 of TLCockpit, the GUI front-end for the TeX Live Manager tlmgr. I spent the winter holidays updating and polishing it, but also helping to debug problems that users have reported. Hopefully the new version works better for all.

If you are looking for a general introduction to TLCockpit, please see the blog introducing it. Here I only want to introduce the changes made since the last release:

  • add debug facility: It is now possible to pass -d for debugging to tlcockpit, activating debugging. There is also -dd for more verbose debugging.
  • select mirror facility: The edit screen for the repository setting now allows selecting from the current list of mirrors, see the following screenshot:
  • initial loading speedup: Till now we used to parse the json output of tlmgr, which included everything the whole database contains. We now load the initial minimal information via info --data and load additional data when details for a package is shown on demand. This should especially make a difference on systems without a compiled json Perl library available.
  • fixed self update: In the previous version, updating the TeX Live Manager itself was not properly working – it was updated but the application itself became unresponsive afterwards. This is hopefully fixed (although this is really tricky).
  • status indicator: The status indicator has moved from the menu bar (where it was somehow a stranger) to below the package listing, and now also includes the currently running command, see screenshot after the next item.
  • nice spinner: Only an eye-candy, but I added a rotating spinner while loading the database, updates, backups, or doing postactions. See the attached screenshot, which also shows the new location of the status indicator and the additional information provided.

I hope that this version is more reliable, stable, and easier to use. As usual, please use the issue page of the github project to report problems.

TeX Live should contain the new version starting from tomorrow.

Enjoy.

Don MartiMore brand safety bullshit

There's enough bullshit on the Internet already, but I'm afraid I'm going to quote some more. This time from Ilyse Liffreing at IBM.

The reality is none of us can say with certainty that anywhere in the world, we are [brand] safe. Look what just happened with YouTube. They are working on fixing it, but even Facebook and Google themselves have said there’s not much they can do about it. I mean, it’s hard. It’s not black and white. We are putting a lot of money in it, and pull back on channels where we have concerns. We’ve had good talks with the YouTube teams.

Bullshit.

One important part of this decision is black and white.

Either you give money to Nazis.

Or you don't give money to Nazis.

If Nazis are better at "programmatic" than the resting-and-vesting chill bros at the programmatic ad firms (and, face it, Nazis kick ass at programmatic), then the choice to spend ad money in a we're-kind-of-not-sure-if-this-goes-to-Nazis-or-not way is a choice that puts your brand on the wrong side of a black and white line.

There are plenty of Nazi-free places for brands to run ads. They might not be the cheapest. But I know which side of the line I buy from.

,

CryptogramFriday Squid Blogging: Te Papa Colossal Squid Exhibition Is Being Renovated

The New Zealand home of the colossal squid exhibit is being renovated.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSecurity Breaches Don't Affect Stock Price

Interesting research: "Long-term market implications of data breaches, not," by Russell Lange and Eric W. Burger.

Abstract: This report assesses the impact disclosure of data breaches has on the total returns and volatility of the affected companies' stock, with a focus on the results relative to the performance of the firms' peer industries, as represented through selected indices rather than the market as a whole. Financial performance is considered over a range of dates from 3 days post-breach through 6 months post-breach, in order to provide a longer-term perspective on the impact of the breach announcement.

Key findings:

  • While the difference in stock price between the sampled breached companies and their peers was negative (1.13%) in the first 3 days following announcement of a breach, by the 14th day the return difference had rebounded to + 0.05%, and on average remained positive through the period assessed.

  • For the differences in the breached companies' betas and the beta of their peer sets, the differences in the means of 8 months pre-breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.

  • For the differences in the breached companies' beta correlations against the peer indices pre- and post-breach, the difference in the means of the rolling 60 day correlation 8 months pre- breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.

  • In regression analysis, use of the number of accessed records, date, data sensitivity, and malicious versus accidental leak as variables failed to yield an R2 greater than 16.15% for response variables of 3, 14, 60, and 90 day return differential, excess beta differential, and rolling beta correlation differential, indicating that the financial impact on breached companies was highly idiosyncratic.

  • Based on returns, the most impacted industries at the 3 day post-breach date were U.S. Financial Services, Transportation, and Global Telecom. At the 90 day post-breach date, the three most impacted industries were U.S. Financial Services, U.S. Healthcare, and Global Telecom.

The market isn't going to fix this. If we want better security, we need to regulate the market.

Note: The article is behind a paywall. An older version is here. A similar article is here.

Worse Than FailureError'd: Alphabetical Soup

"I appreciate that TIAA doesn't want to fully recognize that the country once known as Burma now calls itself Myanmar, but I don't think that this is the way to handle it," Bruce R. writes.

 

"MSI Installed an update - but I wonder what else it decided to update in the process? The status bar just kept going and going..." writes Jon T.

 

Paul J. wrote, "Apparently my occupation could be 'All Other Persons' on this credit card application!"

 

Geoff wrote, "So I need to commit the changes I didn't make, and my options are 'don't commit' or 'don't commit'?"

 

David writes, "This was after a 15 minute period where I watched a timer spin frantically."

 

"It's as if DealeXtreme says 'three stars, I think you meant to say FIVE stars'," writes Henry N.

 

[Advertisement] Universal Package Manager – store all your Maven, NuGet, Chocolatey, npm, Bower, TFS, TeamCity, Jenkins packages in one central location. Learn more today!

Planet DebianEddy Petrișor: Suppressing color output of the Google Repo tool

On Windows, in the cmd shell, the color control characters generated by the Google Repo tool (or its Windows port made by ESRLabs) or git appear as garbage. Unfortunately, the Google Repo tool, besides having a non-google-able name, lacks documentation regarding its options, so sometimes the only way to find the option I want is to look in the code.
To avoid having to dig through the code for this again, future self, here is how you disable color output in the repo tool with the info subcommand:
repo --color=never info
Other options are 'auto' and 'always', but for some reason 'auto' does not do the right thing (tm) on Windows, and garbage is still shown.
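
Since git produces the same kind of garbled color output in cmd, it may also help to switch off git's own colors globally (this is plain git configuration, not a repo option):

git config --global color.ui false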

,

Sociological ImagesBros and Beer Snobs

The rise of craft beer in the United States gives us more options than ever at happy hour. Choices in beer are closely tied to social class, and the market often veers into the world of pointlessly gendered products. Classic work in sociology has long studied how people use different cultural tastes to signal social status, but where once very particular tastes showed membership in the upper class—like a preference for fine wine and classical music—a world with more options offers status to people who consume a little bit of everything.

Photo Credit: Brian Gonzalez (Flickr CC)

But who gets to be an omnivore in the beer world? New research published in Social Currents by Helana Darwin shows how the new culture of craft beer still leans on old assumptions about gender and social status. In 2014, Darwin collected posts using gendered language from fifty beer blogs. She then visited four craft beer bars around New York City, surveying 93 patrons about the kinds of beer they would expect men and women to consume. Together, the results confirmed that customers tend to define “feminine” beer as light and fruity and “masculine” beer as strong, heavy, and darker.

Two interesting findings about what people do with these assumptions stand out. First, patrons admired women who drank masculine beer, but looked down on those who stuck to the feminine choices. Men, however, could have it both ways. Patrons described their choice to drink feminine beer as open-mindedness—the mark of a beer geek who could enjoy everything. Gender determined who got “credit” for having a broad range of taste.

Second, just like other exclusive markers of social status, the India Pale Ale held a hallowed place in craft brew culture to signify a select group of drinkers. Just like fancy wine, Darwin writes,

IPA constitutes an elite preference precisely because it is an acquired taste…inaccessible to those who lack the time, money, and desire to cultivate an appreciation for the taste.

Sociology can get a bad rap for being a buzzkill, and, if you’re going to partake, you should drink whatever you like. But this research provides an important look at how we build big assumptions about people into judgments about the smallest choices.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianMike Gabriel: Building packages with Meson and Debhelper version level 11 for Debian stretch-backports

More a reminder for myself, than a blog post...

If you want to backport a project from unstable based on the meson build system and your package uses debhelper to invoke the meson build process, then you need to modify the backported package's debian/control file slightly:

diff --git a/debian/control b/debian/control
index 43e24a2..d33e76b 100644
--- a/debian/control
+++ b/debian/control
@@ -14,7 +14,7 @@ Build-Depends: debhelper (>= 11~),
                libmate-menu-dev (>= 1.16.0),
                libmate-panel-applet-dev (>= 1.16.0),
                libnotify-dev,
-               meson,
+               meson (>= 0.40.0),
                ninja-build,
                pkg-config,
 Standards-Version: 4.1.3

This forces the build to pull in meson from stretch-backports, i.e. a meson version that is at least 0.40.0.

Reasoning: if you build your package against debhelper (>= 11~) from stretch-backports, it will use the --wrap-mode option when invoking meson. However, that option was only added in meson 0.40.0. So you need to make sure that the meson version from stretch-backports gets pulled in for your build, too. The build will fail with the meson version found in Debian stretch.
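
For context, a dh-based package of this kind typically has a debian/rules along these lines (a minimal sketch, not the actual rules file of any particular package), and it is this dh invocation that ends up passing --wrap-mode to meson:

#!/usr/bin/make -f

%:
	dh $@ --buildsystem=meson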

TEDNew clues about the most mysterious star in the universe, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

New clues about the most mysterious star in the universe. KIC 8462852 (often called “Tabby’s star,” after the astronomer Tabetha Boyajian, who led the first study of the star) intermittently dims as much as 22% and then brightens again, for a reason no one has yet quite figured out. This bizarre occurrence led astronomers to propose over a dozen theories for why the star might be dimming, including the fringe theory that it was caused by an alien civilization using the star’s energy. Now, new data shows that the dimming isn’t fully opaque; certain colors of light are blocked more than others. This suggests that what’s causing the star to dim is dust. After all, if an opaque object — like a planet or alien megastructure — were passing in front of the star, all of the light would be blocked equally. Tabby’s star is due to become visible again in late February or early March of 2018. (Watch Boyajian’s TED Talk)

TED’s new video series celebrates the genius design of everyday objects. What do the hoodie, the London Tube Map, the hyperlink, and the button have in common? They’re everyday objects, often overlooked, that have profoundly influenced the world around us. Each 3- to 4- minute episode of TED’s original video series Small Thing Big Idea celebrates one of these objects, with a well-known name in design explaining what exactly makes it so great. First up is Michael Bierut on the London Tube Map. (Watch the first episode here and tune in weekly on Tuesday for more.)

The science of black holes. In the new PBS special Black Hole Apocalypse, astrophysicist Janna Levin explores the science of black holes, what they are, why they are so powerful and destructive, and what they might tell us about the very origin of our existence. Dubbing them the world’s greatest mystery, Levin and her fellow scientists, including astronomer Andrea Ghez and experimental physicist Rainer Weiss, embark on a journey to portray the magnitude and importance of these voids that were long left unexplored and unexplained. (Watch Levin’s TED Talk, Ghez’s TED Talk, and read Weiss’ Ideas piece.)

An organized crime thriller with non-fiction roots. McMafia, a television show starring James Norton, premiered in the UK in early January. The show is a fictionalized account of Misha Glenny’s 2008 non-fiction book of the same name. The show focuses on Alex Goldman, the son of an exiled Mafia boss who wants to put his family’s history behind him. Unfortunately, a murder foils his plans and to protect his family, he must face up to various international crime syndicates. (Watch Glenny’s TED Talk)

Inside the African-American anti-abortion movement. In her new documentary for PBS’ Frontline, Yoruba Richen examines the complexities of the abortion debate as it relates to the US’s racial history. Richen speaks with African-American members of both the abortion-rights and the anti-abortion movements, as her short doc follows a group of anti-abortion activists as they work in the black community. (Watch Richen’s TED Talk.)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.

Worse Than FailureCodeSOD: The Least of the Max

Adding assertions and sanity checks to your code is important, especially when you’re working in a loosely-typed language like JavaScript. Never assume the input parameters are correct, assert what they must be. Done correctly, they not only make your code safer, but also easier to understand.

Matthias’s co-worker… doesn’t exactly do that.

      function checkPriceRangeTo(x, min, max) {
        if (max == 0) {
          max = valuesPriceRange.max
        }
        min = Math.min(min, max);
        max = Math.max(min, max);
        x = parseInt(x)
        if (x == 0) {
          x = 50000
        }

        //console.log(x, 'min:', min, 'max:', max);
        return x >= min && x <= max
      }

This code isn’t bad, per se. I knew a kid, Marcus, in middle school who wore the same green sweatshirt every day, and had a musty 19th-century science textbook that discussed phlogiston in his backpack. Over lunch, he was happy to strike up a conversation with you about the superiority of phlogiston theory over Relativity. He wasn’t bad, but he was annoying and not half as smart as he thought he was.

This code is the same. Sure, x might not be a numeric value, so let’s parseInt first… which might return NaN. But we don’t check for NaN, we check for 0. If x is 0, then make it 50,000. Why? No idea.

The real treat, though, is the flipping of min/max. If the calling code got this wrong (min=6, max=1), then instead of swapping them, which is obviously the intent, it makes them both equal to the lower of the two.
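
For comparison, a corrected version of that intent, swapping the bounds only when they arrive reversed, might look like this (a sketch, not code from the original application):

      if (min > max) {
        // swap the bounds instead of collapsing both to the lower value
        var tmp = min;
        min = max;
        max = tmp;
      }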

In the end, Matthias has one advantage in dealing with this pest, that I didn’t have in dealing with Marcus. He could actually make it go away. I just had to wait until the next year, when we didn’t have lunch at the same time.

[Advertisement] Otter enables DevOps best practices by providing a visual, dynamic, and intuitive UI that shows, at-a-glance, the configuration state of all your servers. Find out more and download today!

Planet DebianJoey Hess: cubietruck temperature sensor

I wanted to use 1-wire temperature sensors (DS18B20) with my Cubietruck board, running Debian. The only page I could find documenting this is for the sunxi kernel, not the mainline kernel Debian uses. After a couple of hours of research I got it working, so here goes.

wiring

First you need to pick a GPIO pin to use for the 1-wire signal. The Cubietruck's GPIO pins are documented here, and I chose to use pin PG8. Other pins should work as well, although I originally tried to use PB17 and could not get it to work for an unknown reason. I also tried to use PB18 but there was a conflict with something else trying to use that same pin. To find a free pin, cat /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins and look for a line like: "pin 200 (PG8): (MUX UNCLAIMED) (GPIO UNCLAIMED)"

Now wire the DS18B20 sensor up. With its flat side facing you, the left pin goes to ground, the center pin to PG8 (or whatever GPIO pin you selected), and the right pin goes to 3.3V. Don't forget to connect the necessary 4.7K ohm resistor between the center and right pins.

You can find plenty of videos showing how to wire up the DS18B20 on youtube, which typically also involve a quick config change to a Raspberry Pi running Raspbian to get it to see the sensor. With Debian it's unfortunately quite a lot more complicated, and so this blog post got kind of long.

configuration

We need to get the kernel to enable the GPIO pin. This seems like a really easy thing, but this is where it gets really annoying and painful.

You have to edit the Cubietruck's device tree. So apt-get source linux and in there edit arch/arm/boot/dts/sun7i-a20-cubietruck.dts

In the root section ('/'), near the top, add this:

    onewire_device {
       compatible = "w1-gpio";
       gpios = <&pio 6 8 GPIO_ACTIVE_HIGH>; /* PG8 */
       pinctrl-names = "default";
       pinctrl-0 = <&my_w1_pin>;
    };

In the `&pio` section, add this:

    my_w1_pin: my_w1_pin@0 {
         allwinner,pins = "PG8";
         allwinner,function = "gpio_in";
    };

Note that if you used a different pin than PG8 you'll need to change that. The "pio 6 8" means letter G, pin 8. The 6 is because G is the 7th letter of the alphabet. I don't know where this is documented; I reverse engineered it from another example. Why this can't be hex, or octal, or symbolic names or anything sane, I don't know.

Now you'll need to compile the dts file into a dtb file. One way is to configure the kernel and use its Makefile; I avoided that by first sudo apt-get install device-tree-compiler and then running, in the top of the linux source tree:

cpp -nostdinc -I include -undef -x assembler-with-cpp \
    ./arch/arm/boot/dts/sun7i-a20-cubietruck.dts | \
    dtc -O dtb -b 0 -o sun7i-a20-cubietruck.dtb -

You'll need to install that into /etc/flash-kernel/dtbs/sun7i-a20-cubietruck.dtb on the cubietruck. Then run flash-kernel to finish installing it.

use

Now reboot, and if all went well, it'll come up and the GPIO pin will finally be turned on:

# grep PG8 /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins
pin 200 (PG8): onewire_device 1c20800.pinctrl:200 function gpio_in group PG8

And if you picked a GPIO pin that works and got the sensor wired up correctly, in /sys/bus/w1/devices/ there should be a subdirectory for the sensor, using its unique ID. Here I have two sensors connected, which 1-wire makes easy to do, just hang them all off the same wire.. er wires.

root@honeybee:/sys/bus/w1/devices> ls
28-000008290227@  28-000008645973@  w1_bus_master1@
root@honeybee:/sys/bus/w1/devices> cat *-*/w1_slave
f6 00 4b 46 7f ff 0a 10 d6 : crc=d6 YES
f6 00 4b 46 7f ff 0a 10 d6 t=15375
f6 00 4b 46 7f ff 0a 10 d6 : crc=d6 YES
f6 00 4b 46 7f ff 0a 10 d6 t=15375

So, it's 15.37 Celsius in my house. I need to go feed the fire, this took too long to get set up.
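
If you'd rather read the value from a script than cat it by hand, here is a minimal Python sketch (it assumes the first sensor ID from the listing above; substitute your own):

#!/usr/bin/python3
# Read a DS18B20 via the kernel's w1 sysfs interface and print degrees Celsius.
sensor = "/sys/bus/w1/devices/28-000008290227/w1_slave"

with open(sensor) as f:
    crc_line, temp_line = f.read().splitlines()

if crc_line.strip().endswith("YES"):        # CRC check passed
    millidegrees = int(temp_line.rsplit("t=", 1)[1])
    print("%.3f C" % (millidegrees / 1000.0))
else:
    print("bad CRC, read again")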

future work

Are you done at this point? I fear not entirely, because what happens when there's a kernel upgrade? If the device tree has changed in some way in the new kernel, you might need to update the modified device tree file. Or it might not boot properly or not work in some way.

With Raspbian, you don't need to modify the device tree. Instead it has support for device tree overlay files, which add some entries to the main device tree. The distribution includes a bunch of useful overlays, including one that enables GPIO pins. The Raspberry Pi's bootloader takes care of merging the main device tree and the selected overlays.

There are u-boot patches to do such merging, or the merging could be done before reboot (by flash-kernel perhaps), but apparently Debian's device tree files are built without phandle based referencing needed for that to work. (See http://elektranox.org/2017/05/0020-dt-overlays/)

There's also a kernel patch to let overlays be loaded on the fly using configfs. It seems to have been around for several years without being merged, for whatever reason, but would avoid this problem nicely if it ever did get merged.

,

Valerie AuroraGetting free of toxic tech culture

This post was co-authored by Valerie Aurora and Susan Wu, and cross-posted on both our blogs.

Marginalized people leave tech jobs in droves, yet we rarely write or talk publicly about the emotional and mental process of deciding to leave tech. It feels almost traitorous to publicly discuss leaving tech when you’re a member of a marginalized group – much less actually go through with it.

There are many reasons we feel this way, but a major reason is that the “diversity problem in tech” is often framed as being caused by marginalized people not “wanting” to be in tech enough: not taking the right classes as teenagers, not working hard enough in university, not “leaning in” hard enough at our tech jobs. In this model, it is the moral responsibility of marginalized people to tolerate unfair pay, underpromotion, harassment, and assault in order to serve as role models and mentors to the next generation of marginalized people entering tech. With this framing, if marginalized people end up leaving tech to protect ourselves, it’s our duty to at least keep quiet about it, and not scare off other marginalized people by sharing our bad experiences.

A printer converted to a planter (photo CC BY-SA Ben Stanfield, https://flic.kr/p/2CjHL)

Under that model, this post is doubly taboo: it’s a description of how we (Susan and Valerie) went through the process of leaving toxic tech culture, as a guide to other marginalized people looking for a way out. We say “toxic tech culture” because we want to distinguish between leaving tech entirely, and leaving areas of tech which are abusive and harmful. Toxic tech culture comes in many forms: the part of Silicon Valley VC hypergrowth culture that deifies founders as “white, male, nerds who’ve dropped out of Harvard or Stanford,” the open source software ecosystem that so often exploits and drives away its best contributors, and the scam-riddled cryptocurrency community, to name just three.

What is toxic tech culture? Toxic tech cultures are those that demean and devalue you as holistic, multifaceted human beings. Toxic tech cultures are those that prioritize profits and growth over human and societal well being. Toxic tech cultures are those that treat you as replaceable cogs within a system of constant churn and burnout.

But within tech there are exceptions to the rule: technology teams, organizations, and communities where marginalized people can feel a degree of safety, belonging, and purpose. You may be thinking about leaving all of tech, or leaving a particular toxic tech culture for a different, better tech culture; either way, we hope this post will be useful to you.

A little about us: Valerie spent more than ten years working as a software engineer, specializing in file systems, Linux, and operating systems. Susan grew up on the Internet, and spent 25 years as a software developer, a community builder, an investor, and a VC-backed Silicon Valley founder. We were both overachievers who advanced quickly in our fields – until we could not longer tolerate the way we were treated, or be complicit in a system that did not match our values. Valerie quit her job as a programmer to co-found a tech-related non-profit for women, and now teaches ally skills to tech workers. Susan relocated to France and Australia, co-founded Project Include, a nonprofit dedicated to improving diversity and inclusion in tech, and is now launching a new education system. We are both still involved in tech to various degrees, but on our own terms, and we are much happier now.

We disagree that marginalized people should stay silent about how and why they left toxic tech culture. When, for example, more than 50% of women in tech leave after 12 years, there is an undeniable need for sharing experience and hard-learned lessons. Staying silent about the unfairness that 37% of underrepresented people of color cite as a reason they left tech helps no one.

We reject the idea that it is the “responsibility” of marginalized people to stay in toxic tech culture despite abuse and discrimination, solely to improve the diversity of tech. Marginalized people have already had to overcompensate for systemic sexist, ableist, and racist biases in order to earn their roles in tech. We believe people with power and privilege are responsible for changing toxic tech culture to be more inclusive and fair to marginalized people. If you want more diversity in tech, don’t ask marginalized people to be silent, to endure often grievous discrimination, or to take on additional unpaid, unrecognized labor – ask the privileged to take action.

For many marginalized people, our experience of being in tech includes traumatic experience(s) which we may not yet have fully come to terms with and which influenced our decisions to leave. Sometimes we don’t make a direct connection between the traumatic experiences and our decision to leave. We just find that we are “bored” and are no longer excited about our work, or start avoiding situations that used to be rewarding, like conferences, speaking, and social events. Often we don’t realize traumatic events are even traumatic until months or years later. If you’ve experienced trauma, processing the trauma is necessary, whether or not you decide to leave toxic tech culture.

This post doesn’t assume that you are sure that you want to leave your current area of tech, or tech as a whole. We ourselves aren’t “sure” we want to permanently leave the toxic tech cultures we were part of even now – maybe things will get better enough that we will be willing to return. You can take the steps described in this post and stay in your current area of tech for as long as you want – you’ll just be more centered, grounded, and happy.

The steps we took are described in roughly the order we took them, but they all overlapped and intermixed with each other. Don’t feel like you need to do things in a particular order or way; this is just to give you some ideas on what you could do to work through your feelings about leaving tech and any related trauma.

Step 1: Deprogram yourself from the cult of tech

The first step is to start deprogramming yourself from the cult of tech. Being part of toxic tech culture has a lot in common with being part of a cult. How often have you heard a Silicon Valley CEO talk about how his (it’s almost always a he) startup is going to change the world? The refrain of how a startup CEO is going to save humanity is so common that it’s actually uncommon for a CEO to not use saviour language when describing their startup. Cult leaders do the same thing: they create a unique philosophy, imbued with some sort of special message that they alone can see or hear, convince people that only they have the answers for what ails humanity, and use that influence to control the people around them.

Consider this list of how to identify a cult, and how closely this list mirrors patterns we can observe in Silicon Valley tech:

  • “Be wary of any leader who proclaims him or herself as having special powers or special insight.” How often have you heard a Silicon Valley founder or CEO proclaimed as some sort of genius, and they alone can figure out how to invent XYZ? Nearly every day, there’s some deific tribute to Elon Musk or Mark Zuckerberg in the media.
  • “The group is closed, so in other words, although there may be outside followers, there’s usually an inner circle that follows the leader without question, and that maintains a tremendous amount of secrecy.” The Information just published a database summarizing how secretive, how protective, how insular the boards are for the top 30 private companies in tech. Here’s what they report: “Despite their enormous size and influence, the biggest privately held technology companies eschew some basic corporate governance standards, blocking outside voices, limiting decision making to small groups of mostly white men and holding back on public disclosures, an in-depth analysis by The Information shows.”
  • “A very important aspect of cult is the idea that if you leave the cult, horrible things will happen to you.” There’s an insidious reason why your unicorn startup provides you with a free cafeteria, gym, yoga rooms, and all night snack bars: they never want you to leave. And if you do leave the building, you can stay engaged with Slack, IM, SMS, and every other possible communications tool so that you can never disconnect. They then layer over this with purported positive cultural messaging around how lucky, how fortunate you are to have landed this job — you were the special one selected out of thousands of candidates. Nobody else has it as good as we do here. Nobody else is as smart, as capable, as special as our team. Nobody else is building the best, most impactful solutions to solve humanity’s problems. If you fall off this treadmill, you will become irrelevant, you’ll be an outsider, a consumer instead of a builder, you’ll never be first on the list for the Singularity, when it happens. You’ll be at the shit end of the income inequality distribution funnel.

Given how similar toxic tech culture (and especially Silicon Valley tech culture) is to cult culture, leaving tech often requires something like cult-deprogramming techniques. We found the following steps especially useful for deprogramming ourselves from the cult of tech: recognizing our unconscious beliefs, experimenting with our identity, avoiding people who don’t support us, and making friendships that aren’t dependent on tech.

Recognize your unconscious beliefs

One cult-like aspect of toxic tech culture is a strong moral us-vs-them dichotomy: either you’re “in tech,” and you’re important and smart and hardworking and valuable, or you are not “in tech” because you are ignorant and untalented and lazy and irrelevant. (What are the boundaries of “in tech?” Well, the more privileged you are, the more likely people will define you as “in tech” – so be generous to yourself if you are part of a marginalized group. Or read more about the fractal nature of the gender binary and how it shows up in tech.)

We didn’t realize how strongly we’d unconsciously adopted this belief that people in tech were better than those who weren’t until we started to imagine ourselves leaving tech and felt a wave of self-judgment and fear. Early on, Valerie realized that she unconsciously thought of literally every single job other than software engineer as “for people who weren’t good enough to be a software engineer” – and that she thought this because other software engineers had been telling her that for her entire career. Even now, as Susan is launching a new education startup in Australia, she’s trying to be careful not to assume that just because people are doing things in a non-“Silicon Valley, lean startup, agile” way, they are automatically doing it wrong. In reality, the best way to do things is probably not based on any particular dogma, but is one that reflects a healthy balance of diverse perspectives and styles.

The first step to ridding yourself of the harmful belief that only people who are “in tech” or doing things in a “startup style” are good or smart or valuable is surfacing the unconscious belief to the conscious level, so you can respond to it. Recognize and name that belief when it comes up: when you think about leaving your job and feel fear, when you meet a new person and immediately lose interest when you learn their job is not “technical,” when you notice yourself trying to decide if someone is “technical enough.” Say to yourself, “I am experiencing the belief that only people I consider technical are valuable. This isn’t true. I believe everyone is valuable regardless of their job or level of technical knowledge.”

Experiment with your self-identity

The next step is to experiment with your own self-identity. Begin thinking of yourself as having different non-tech jobs or self-descriptions, and see what thoughts come up. React to those thoughts as though you were reacting to a friend you care about who was saying those things about them. Try to find positive things to think and say about your theoretical new job and new life. Think about people you know with that job and ask yourself if you would say negative things about their job to them. Some painful thoughts and experiences will come up during this time; aim to recognize them consciously and process them, rather than trying to stuff them down or make them go away.

When you live in Silicon Valley, it’s easy for your work life to consume 95% of your waking hours — this is how startups are designed, after all, with their endless perks and pressures to socialize within the tribe. Often times, promotions go hand in hand with socializing successfully within the startup scene. What can you do to carve out several hours a week just for yourself, and an alternate identity that isn’t defined by success within toxic tech culture? How do you make space for self care? For example, Susan began to take online writing courses, and found that the outlet of interacting with poets and fiction writers helped ground her.

If necessary, change the branding of your personal life. Stop wearing tech t-shirts and get shirts that reflect some other part of your self. Get a different print for your office wall. Move the tech books into one out-of-the-way shelf and donate any you don’t use right now (especially the ones that you have been planning to read but never got around to). Donate most of your conference schwag and stop accepting new schwag. Pack away the shelf of tech-themed tchotchkes or even (gasp) throw them away. Valerie went to a “burn party” on Ocean Beach, where everyone brought symbols of old jobs that they were happy to be free of and symbolically burned them in a beach bonfire. You might consider a similar ritual.

De-emphasize tech in your self-presentation. Change any usernames that reference your tech interests. Rewrite any online bios or descriptions to emphasize non-tech parts of your life. Start introducing yourself by talking about your non-tech hobbies and interests rather than your job. You might even try introducing yourself to new people as someone whose primary job isn’t tech. Valerie, who had been writing professionally for several years, started introducing herself as a writer at tech events in San Francisco. People who would have talked to her had she introduced herself as a Linux kernel developer would immediately turn away without a second word. Counterintuitively, this made her more determined to leave her job, when she saw how inconsiderate her colleagues were when she did not make use of her technical privilege.

Avoid unsupportive people

Identify any people in your life who are consistently unsupportive of you, or only supportive when you perform to their satisfaction, and reduce your emotional and financial dependence on them. If you have friends or idols who are unhelpfully critical or judgemental, take steps to see or hear from them less often. Don’t seek out their opinion and don’t stoke your admiration for them. This will be difficult the closer and more dependent you are on the person; if your spouse or manager is one of these people, you have our sympathy. For more on this dynamic and how to end it, see this series of posts about narcissism, co-narcissism, and tech.

Depressingly often, we especially seek the approval of people who give approval sparingly (think about the popularity of Dr. House, who is a total jerk). If you find yourself yearning for the approval of someone in tech who has been described as an “asshole,” this is a great time to stop. Some helpful tips to stop seeking the approval of an asshole: make a list of cruel things they’ve done, make a list of times they were wrong, stop reading their writing or listening to their talks, filter them out of your daily reading, talk to people who don’t know who that person is or care what they think, listen to people who have been hurt by them, and spend more time with people who are kind and nurturing.

At the same time, seek out and spend more time with people who are generally supportive of you, especially people who encourage experimentation and personal change. You may already have many of these people in your life, but don’t spend much time thinking about them because you can depend on their friendship and support. Reach out to them and renew your relationship.

Make friendships that don’t depend on tech

If your current social circle consists entirely of people who are fully bought into toxic tech culture, you may not have anyone in your life willing to support a career change. To help solve this, make friendships that aren’t dependent on your identity as a person in tech. The goal is to have a lot of friendships that aren’t dependent on your being in tech, so that if you decide to leave, you won’t lose all your friends at the same time as your job. Being friends with people who aren’t in tech will help you get an outside perspective on the kind of tech culture you are part of. It also helps you envision a future for yourself that doesn’t depend on being in toxic tech culture. You can still have lots of friends in tech, you are just aiming for diversity in your friendships.

One way to make this easier is to focus on your existing friendships that are “near tech,” such as people working in adjacent fields that sometimes attend tech conferences, but aren’t “in tech” themselves. Try also getting a new hobby, being more open to invitations to social events, and contacting old friends you’ve fallen out of touch with. Spend less time attending tech-related events, especially if you currently travel to a lot of tech conferences. It’s hard to start and maintain new local friendships when you’re constantly out of town or working overtime to prepare a talk for a conference. If you have a set of conferences you attend every year, it will feel scary the first time you miss one of them, but you’ll notice how much more time you have to spend with your local social circle.

Making friends outside of your familiar context (tech co-workers, tech conferences, online tech forums) is challenging for most people. If you learned how to socialize entirely within tech culture, you may also need to learn new norms and conventions (such as how to have a conversation that isn’t about competing to show who knows more about a subject). Both Valerie and Susan experienced this when we started trying to make friends outside of toxic tech culture: all we knew how to talk about was startups, technology, video games, science fiction, scientific research, and (ugh) libertarian economic philosophy. We discovered people outside toxic tech culture wanted to talk about a wider range of topics, and often in a less confrontational way. And after a lifetime of socialization to distrust and discount everyone who wasn’t a man, we learned to seek out and value friendships with women and non-binary people.

If making new friends sounds intimidating, we recommend checking out Captain Awkward’s practical advice on making friends. Making new friends takes work and willingness to be rejected, but you’ll thank yourself for it later on.

Step 2: Make room for a career change

If you are already in a place where you have the freedom to make a big career change, congratulations! But if changing careers seems impossibly hard right now, that’s okay too. You can make room for a career change while still working in tech. Even if you end up deciding to stay in your current job, you will likely appreciate the freedom and flexibility that you’ve opened up for yourself.

Find a career counselor

The most useful action you can take is to find a career counselor who is right for you, and be honest with them about your fears, goals, and desires. Finding a career counselor is a lot like finding a dentist or a therapist: ask your friends for recommendations, read online reviews, look for directories or lists, and make an appointment for a free first meeting. If your first meeting doesn’t click, go ahead and try another career counselor until you find someone you can work with. A good career counselor will get a comprehensive view of your entire life (including family and friends) and your goals (not just job-related goals), and give you concrete steps to take to bring you closer to your goals.

Sometimes a career counselor’s job is explaining to you how the job you want but thought was impossible to get is actually possible. Valerie started seeing a career counselor about two years before she quit her last job as a software engineer and co-founded a non-profit. It took her about five years to get everything she listed as part of what she thought was an unattainable dream job (except for the “view of the water from her office,” which she is still working on). All the rest of this section is a high-level generic version of the advice a good career counselor will give you.

Improve your financial situation

Many tech jobs pay relatively well, but many people in tech would still have a hard time switching careers tomorrow because they don’t have enough money saved or couldn’t take a pay cut (hello, overheated rental markets and supporting your extended family). Don’t assume you’ll have to take a pay cut if you leave tech or your particular part of toxic tech culture, but it gives you more flexibility if you don’t have to immediately start making the same amount of money in a different job.

Look for ways to change your lifestyle or your expectations in ways that let you save money or lower your bills. Status symbols and class markers will probably loom large here and it’s worth thinking about which things are most valuable to you and which ones you can let go. You might find it is a relief to no longer have an expensive car with all its attendant maintenance and worries and fear, but that you really value the weekly exercise class that makes you feel happier and more energetic the rest of the week. Making these changes will often be painful in the short term but pay off in the long term. Valerie ended up temporarily moving out of the San Francisco Bay Area to a cheaper area near her family, which let her save up money and spend less while she was planning a career change. She moved back to the Bay Area when she was established in her new career, into a smaller, cheaper apartment she could afford on her new salary. Today she is making more money than she ever did as a programmer.

Take stock of your transferrable skills

Figure out what you actually like to do and how much of that is transferrable to other fields or jobs. One way to do this is to look back at, say, the top seven projects you most enjoyed doing in your life, either for your job or as a volunteer. What skills were useful to you in getting those projects done? What parts of doing that project did you enjoy the most? For example, being able to quickly read and understand a lot of information is a transferrable skill that many people enjoy using. The ability to persuade people is another such skill, useful for selling gym memberships, convincing people to recycle more, teaching, getting funding, and many other jobs. Once you have an idea of what it is that you enjoy doing and that is transferrable to other jobs, you can figure out what jobs you might enjoy and would be reasonably good at from the beginning.

Think carefully before signing up for new education

This is not necessarily the time to start taking career-related classes or going back to university in a serious way! If you start taking classes without first figuring out what you enjoy, what your skills are, and what your goals are, you are likely to be wasting your time and money and making it more difficult to find your new career. We highly recommend working with a career counselor before spending serious money or time on new training or classes. However, it makes sense to take low-cost, low-time commitment classes to explore what you enjoy doing, open your mind to new possibilities, or meet new people. This might look like a pottery class at the local community college, learning to 3D print objects at the local hackerspace, or taking an online course in African history.

Recognise there are many different paths in tech

The good news about software finally eating the world is that there are now many ways in which you can work in and around technology, without having to be part of toxic tech culture. Every industry needs tech expertise, and nearly every country around the world is trying to cultivate its own startup ecosystem. Many of these are much saner, kinder places to work than the toxic tech culture you may currently be part of, and a few of these involve industries that are more inclusive and welcoming of marginalized groups. Some of our friends have left the tech industry to work in innovation or technology related jobs in government, education, advocacy, policy, and arts. Though there are no great industries, and no ideal safe places for marginalized groups nearly anywhere in the world, there are varying degrees of toxicity and you can seek out areas with less toxicity. Try not to be swayed by the narrative that the only tech worth doing is the tech that’s written about in the media or receiving significant VC funding.

Step 3: Take care of yourself

Since being part of toxic tech culture is harmful to you as a person, simply focusing on taking care of yourself will help you put tech culture in its proper perspective, leaving you the freedom to be part of tech or not as you choose.

Prioritize self-care

Self-care means doing things that are kind or nurturing for yourself, whatever that looks like for you. Being in toxic tech culture means that many things take priority over self-care: fixing that last bug instead of taking a walk, going to an evening work-related meetup instead of staying home and getting to sleep on time, flying to yet another tech conference instead of spending time with family and friends. For Susan, prioritizing self-care looked like taking a road trip up the Pacific Coast Highway for the weekend instead of going to an industry fundraiser, or eating lunch by herself with a book instead of meeting up with another VC. One of the few constants in life is that you will always be stuck with your own self – so take care of it!

Learn to say no and enforce boundaries

We found that we were saying yes to too many things. The tech industry depends on extracting free or low-cost labor from many people in different ways: everything from salaried employees working 60-hour weeks to writing and giving talks in your “free time” – all of which are considered required for your career to advance. Marginalized people in tech are often expected to work an additional second (third?) shift of diversity-related work for free: giving recruiting advice, mentoring other marginalized people, or providing free counseling to more privileged people.

FOMO (fear of missing out) plays an important role too. It’s hard to cut down on free work when you are wondering, what if this is the conference where you’ll meet the person who will get you that venture capital job you’ve always wanted? What if serving on this conference program committee will get you that promotion? What if going to lunch with this powerful person so they can “pick your brain” for free will get you a new job? Early in your tech career, these kinds of investments often pay off but later on they have diminishing returns. The first time you attend a conference in your field, you will probably meet dozens of people who are helpful to your career. The twentieth conference – not so much.

For Valerie, switching from a salaried job to hourly consulting taught her the value of her time and just how many hours she was spending on unpaid work for the Linux and file systems communities. She taped a note reading “JUST SAY NO” to the wall behind her computer, and then sent a bunch of emails quitting various unpaid responsibilities she had accumulated. A few months later, she found she had made too many commitments again, and had to send another round of emails backing out of commitments. It was painful and embarrassing, but not being constantly frazzled and stressed out was worth it.

When you start saying no to unpaid work, some people will be upset and push back. After all, they are used to getting free work from you which gives them some personal advantage, and many people won’t be happy with this. They may try to make you feel guilty, shame you, or threaten you. Learning to enforce boundaries in the face of opposition is an important part of this step. If this is hard for you, try reading books, practicing with a friend, or working with a therapist. If you are worried about making mistakes when going against external pressure, keep in mind that simply exercising some control over your life choices and career path will often increase your personal happiness, regardless of the outcome.

Care for your mental health

Let’s be brutally honest: toxic tech culture is highly abusive, and there’s an excellent chance you are suffering from depression, trauma, chronic stress, or other serious psychological difficulties. The solution that works for many people is to work with a good therapist or counselor. A good licensed therapist is literally an expert in helping people work through these problems. Even if you don’t think your issues reach the level of seriousness that requires a therapist, a good therapist can help you with processing guilt, fear, anxiety, or other emotions that come up around the idea of leaving toxic tech culture.

Whether or not you work with a therapist, you can make use of many other forms of mental health care: meditation, support groups, mindfulness apps, walking, self-help books, spending time in nature, various spiritual practices, doing exercises in workbooks, doing something creative, getting alone time, and many more. Try a bunch of different things and pick what works for you – everyone is different. For Susan, practicing yoga four times a week, meditating, and working in her vegetable garden instead of reading Hacker News gave her much needed perspective and space.

Finding a therapist can be intimidating for many people, which is why Valerie wrote “HOWTO therapy: what psychotherapy is, how to find a therapist, and when to fire your therapist.” It has some tips on getting low-cost or free therapy if that’s what you need. You can also read Tiffany Howard‘s list of free and low-cost mental health resources which covers a wide range of different options, including apps, peer support groups, and low-cost therapy.

Process your grief

Even if you are certain you want to leave toxic tech culture, actually leaving is a loss – if nothing else, a loss of what you thought your career and future would look like. Grief is an appropriate response to any major life change, even if it is for the better. Give yourself permission to grieve and be sad, for whatever it is that you are sad about. A few of the things we grieved for: the meritocracy we thought we were participating in, our vision for where our careers would be in five years, the good times we had with friends at conferences, a sense of being part of something exciting and world-changing, all the good people who left before us, our relationships with people we thought would support us but didn’t, and the people we were leaving behind to suffer without us.

Step 4: Give yourself time

If you do decide to leave toxic tech culture, give yourself a few years to do it, and many more years to process your feelings about it. Valerie decided to stop being a programmer two years before she actually quit her programming job, and then she worked as a file systems consultant on and off for five years after that. Seven years later, she finally feels mostly at peace about being driven out of her chosen career (though she still occasionally has nightmares about being at a Linux conference). Susan’s process of extricating herself from the most toxic parts of tech culture and reinvesting in her own identity and well being has taken many years as well. Her partner (who knows nothing about technology) and her two kids help her feel much more balanced. Because Susan grew up on the Internet and has been building in tech for 25 years, she feels like she’ll probably always be doing something in tech, or tech-related, but wants to use her knowledge and skills to do this on her own terms, and to use her hard won know-how to benefit other marginalized folks to successfully reshape the industry.

An invitation to share your story

We hope this post was helpful to other people thinking about leaving toxic tech culture. There is so much more to say on this topic, and so many more points of view we want to hear about. If you feel safe doing so, we would love to read your story of leaving toxic tech culture. And wherever you are in your journey, we see you and support you, even if you don’t feel safe sharing your story or thoughts.

Krebs on SecuritySome Basic Rules for Securing Your IoT Stuff

Most readers here have likely heard or read various prognostications about the impending doom from the proliferation of poorly-secured “Internet of Things” or IoT devices. Loosely defined as any gadget or gizmo that connects to the Internet but which most consumers probably wouldn’t begin to know how to secure, IoT encompasses everything from security cameras, routers and digital video recorders to printers, wearable devices and “smart” lightbulbs.

Throughout 2016 and 2017, attacks from massive botnets made up entirely of hacked IoT devices had many experts warning of a dire outlook for Internet security. But the future of IoT doesn’t have to be so bleak. Here’s a primer on minimizing the chances that your IoT things become a security liability for you or for the Internet at large.

-Rule #1: Avoid connecting your devices directly to the Internet — either without a firewall or in front of it, by poking holes in your firewall so you can access them remotely. Putting your devices in front of your firewall is generally a bad idea because many IoT products were simply not designed with security in mind and making these things accessible over the public Internet could invite attackers into your network. If you have a router, chances are it also comes with a built-in firewall. Keep your IoT devices behind the firewall as best you can.

-Rule #2: If you can, change the thing’s default credentials to a complex password that only you will know and can remember. And if you do happen to forget the password, it’s not the end of the world: Most devices have a recessed reset switch that can be used to restore to the thing to its factory-default settings (and credentials). Here’s some advice on picking better ones.

I say “if you can,” at the beginning of Rule #2 because very often IoT devices — particularly security cameras and DVRs — are so poorly designed from a security perspective that even changing the default password to the thing’s built-in Web interface does nothing to prevent the things from being reachable and vulnerable once connected to the Internet.

Also, many of these devices are found to have hidden, undocumented “backdoor” accounts that attackers can use to remotely control the devices. That’s why Rule #1 is so important.

-Rule #3: Update the firmware. Hardware vendors sometimes make available security updates for the software that powers their consumer devices (known as “firmware”). It’s a good idea to visit the vendor’s Web site and check for any firmware updates before putting your IoT things to use, and to check back periodically for any new updates.

-Rule #4: Check the defaults, and make sure features you may not want or need, like UPnP (Universal Plug and Play, which can easily poke holes in your firewall without you knowing it), are disabled.

Want to know if something has poked a hole in your router’s firewall? Censys has a decent scanner that may give you clues about any cracks in your firewall. Browse to whatismyipaddress.com, then cut and paste the resulting address into the text box at Censys.io, select “IPv4 hosts” from the drop-down menu, and hit “search.”

If that sounds too complicated (or if your ISP’s addresses are on Censys’s blacklist) check out Steve Gibson‘s Shield’s Up page, which features a point-and-click tool that can give you information about which network doorways or “ports” may be open or exposed on your network. A quick Internet search on exposed port number(s) can often yield useful results indicating which of your devices may have poked a hole.
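
If you are comfortable at a command line and have access to a machine outside your own network (a friend's connection or a cheap cloud host, say), a port scanner such as nmap (not mentioned above, but a common alternative to these web tools) can give you a similar view. A rough sketch:

# Run this from a host OUTSIDE the network you want to check.
# Replace 203.0.113.42 with the public address whatismyipaddress.com reported.
nmap -Pn -p 1-1024 203.0.113.42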

If you run antivirus software on your computer, consider upgrading to a “network security” or “Internet security” version of these products, which ship with more full-featured software firewalls that can make it easier to block traffic going into and out of specific ports.

Alternatively, Glasswire is a useful tool that offers a full-featured firewall as well as the ability to tell which of your applications and devices are using the most bandwidth on your network. Glasswire recently came in handy to help me determine which application was using gigabytes worth of bandwidth each day (it turned out to be a version of Amazon Music’s software client that had a glitchy updater).

-Rule #5: Avoid IoT devices that advertise Peer-to-Peer (P2P) capabilities built-in. P2P IoT devices are notoriously difficult to secure, and research has repeatedly shown that they can be reachable even through a firewall remotely over the Internet because they’re configured to continuously find ways to connect to a global, shared network so that people can access them remotely. For examples of this, see previous stories here, including This is Why People Fear the Internet of Things, and Researchers Find Fresh Fodder for IoT Attack Cannons.

-Rule #6: Consider the cost. Bear in mind that when it comes to IoT devices, cheaper usually is not better. There is no direct correlation between price and security, but history has shown the devices that tend to be toward the lower end of the price ranges for their class tend to have the most vulnerabilities and backdoors, with the least amount of vendor upkeep or support.

In the wake of last month’s guilty pleas by several individuals who created Mirai — one of the biggest IoT malware threats ever — the U.S. Justice Department released a series of tips on securing IoT devices.

One final note: I realize that the people who probably need to be reading these tips the most likely won’t ever know they need to care enough to act on them. But at least by taking proactive steps, you can reduce the likelihood that your IoT things will contribute to the global IoT security problem.

CryptogramArticle from a Former Chinese PLA General on Cyber Sovereignty

Interesting article by Major General Hao Yeli, Chinese People's Liberation Army (ret.), a senior advisor at the China International Institute for Strategic Society, Vice President of China Institute for Innovation and Development Strategy, and the Chair of the Guanchao Cyber Forum.

Against the background of globalization and the internet era, the emerging cyber sovereignty concept calls for breaking through the limitations of physical space and avoiding misunderstandings based on perceptions of binary opposition. Reinforcing a cyberspace community with a common destiny, it reconciles the tension between exclusivity and transferability, leading to a comprehensive perspective. China insists on its cyber sovereignty, meanwhile, it transfers segments of its cyber sovereignty reasonably. China rightly attaches importance to its national security, meanwhile, it promotes international cooperation and open development.

China has never been opposed to multi-party governance when appropriate, but rejects the denial of government's proper role and responsibilities with respect to major issues. The multilateral and multiparty models are complementary rather than exclusive. Governments and multi-stakeholders can play different leading roles at the different levels of cyberspace.

In the internet era, the law of the jungle should give way to solidarity and shared responsibilities. Restricted connections should give way to openness and sharing. Intolerance should be replaced by understanding. And unilateral values should yield to respect for differences while recognizing the importance of diversity.

Worse Than FailureIn $BANK We Trust

During the few months after getting my BS and before starting my MS, I worked for a bank that held lots of securities - and gold - in trust for others. There was a massive vault with multiple layers of steel doors, iron door grates, security access cards, armed guards, and signature comparisons (live vs pre-registered). It was a bit unnerving to get in there, so deep below ground, but once in, it looked very much like the Fort Knox vault scene in Goldfinger.

Someone planning things on a whiteboard

At that point, PCs weren't yet available to the masses and I had very little exposure to mainframes. I had been hired as an assistant to one of their drones who had been assigned to find all of the paper-driven-changes that had gone awry and get their books up to date.

To this end, I spent about a month talking to everyone involved in taking a customer order to take or transfer ownership of something, and processing the ledger entries to reflect the transaction. From this, I drew a simple flow chart, listing each task, the person(s) responsible, and the possible decision tree at each point.

Then I went back to each person and asked them to list all the things that could and did go wrong with transaction processing at their junction in the flow.

What had been essentially straight-line processing with a few small decision branches turned out to be enough to fill a 30-foot-long by 8-foot-high wall of undesirable branches. This became absolutely unmanageable on physical paper, and I didn't know of any charting programs on the mainframe at that time, so I wrote the whole thing up with an index card at each junction. The "good" path was in green marker, and everything else was yellow (one level of "wrong") or red (wtf-level of "wrong").

By the time it was fully documented, the wall-o-index-cards had become a running joke. I invited the people (who had given me all of the information) in to view their problems in the larger context, and verify that the problems were accurately documented.

Then management was called in to view the true scope of their problems. The reason that the books were so snafu'd was that there were simply too many manual tasks that were being done incorrectly, cascading to deeply nested levels of errors.

Once we knew where to look, it became much easier to track transactions backward through the diagram to the last known valid junction and push them forward until they were both correct and current. A rather large contingent of analysts were then put onto this task to fix all of the transactions for all of the customers of the bank.

It was about the time that I was to leave and go back to school that they were talking about taking the sub-processes off the mainframe and distributing detailed step-by-step instructions for people to follow manually at each junction to ensure that the work flow proceeded properly. Obviously, more manual steps would reduce the chance for errors to creep in!

A few years later when I got my MS, I ran into one of the people that was still working there and discovered that the more-manual procedures had not only not cured the problem, but that entirely new avenues of problems had cropped up as a result.

[Advertisement] Easily create complex server configurations and orchestrations using both the intuitive, drag-and-drop editor and the text/script editor.  Find out more and download today!

Google AdsenseReceiving your payment via EFT (Electronic Funds Transfer)


Electronic Funds Transfer (EFT) is our fastest, most secure, and environmentally friendly payment method. It is available in most countries, and you can check whether this payment method is available to you here.

To use this payment method we first need to verify your bank account to ensure that you will receive your payment. This involves entering specific bank account information and receiving a small test deposit.

Some of our publishers found this process confusing and we want to guide you through it. Our latest video will guide you through adding EFT as a payment method, from start to finish.
If you didn’t receive your test deposit, you can watch this video to understand why. If you have more questions, visit our Help Center.
Posted by: The AdSense Support Team

,

LongNowStewart Brand Gives In-Depth and Personal Interview to Tim Ferriss

Tim Ferriss, who wrote The Four Hour Work Week and gave a Long Now talk on accelerated learning in 02011, recently interviewed Long Now co-founder Stewart Brand on his podcast, “The Tim Ferriss Show”. The interview is wide-ranging, in-depth, and among the most personal Brand has given to date. Over the course of nearly three hours, Brand touches on everything from the Whole Earth Catalog, why he gave up skydiving, how he deals with depression, his early experiences with psychedelics, the influence of Marshall McLuhan and Buckminster Fuller on his thinking, his recent CrossFit regimen, and the ongoing debate between artificial intelligence and intelligence augmentation. He also discusses the ideas and projects of The Long Now Foundation.

Brand frames The Long Now Foundation as a way to augment social intelligence:

The idea of the Long Now Foundation is to give encouragement and permission to society that is rewarded for thinking very, very rapidly, in business terms and, indeed, in scientific terms, of rapid turnaround, and getting inside the adversaries’ loop, move fast and break things, [to think long term]. Long term thinking might be proposing that some things you don’t want to break. They might involve moving slow, and steadily.

The Pace Layer diagram.

He introduces the pace layer diagram as a tool to approach global scale challenges:

What we’re proposing is there are a lot of problems, a lot of issues and a lot of quite wonderful things in that category of being big and slow moving and so I wound up with Brian Eno developing a pace layer diagram of civilization where there’s the fast moving parts like fashion and commerce, and then it goes slower when you get to infrastructure and then things move really slow in how governance changes, and then you go down to culture and language and religion move really slowly and then nature, the tectonic forces in climate change and so on move really big and slow. And what’s interesting about that is that the fast parts get all the attention, but the slow parts have all the power. And if you want to really deal with the powerful forces in the world, bear relation to seeing what can be done with appreciating and maybe helping adjust the big slow things.

Stewart Brand and ecosystem ecologist Elena Bennett during the Q&A of her November 02017 SALT Talk. Photo: Gary Wilson.

Ferriss admits that in the last few months he’s been pulled out of the current of long-term thinking by the “rip tide of noise,” and asks Brand for a “homework list” of SALT talks that can help provide him with perspective. Brand recommends Jared Diamond’s 02005 talk on How Societies Fail (And Sometimes Succeed), Matt Ridley’s 02011 talk on Deep Optimism, and Ian Morris’ 02011 talk on Why The West Rules (For Now).

Brand also discusses Revive & Restore’s efforts to bring back the Woolly Mammoth, and addresses the fear many have of meddling with complex systems through de-extinction.

Long-term thinking has figured prominently in Tim Ferriss’ podcast in recent months. In addition to his interview with Brand, Ferriss has also interviewed Long Now board member Kevin Kelly and Long Now speaker Tim O’Reilly.

Listen to the podcast in full here.

TEDTED debuts “Small Thing Big Idea” original video series on Facebook Watch

Today we’re debuting a new original video series on Facebook Watch called Small Thing Big Idea: Designs That Changed the World.

Each 3- to 4-minute weekly episode takes a brief but delightful look at the lasting genius of one everyday object – a pencil, for example, or a hoodie – and explains how it is so perfectly designed that it’s actually changed the world around it.

The series features some of design’s biggest names, including fashion designer Isaac Mizrahi, museum curator Paola Antonelli, and graphic designer Michael Bierut sharing their infectious obsession with good design.

To watch the first episode of Small Thing Big Idea (about the little-celebrated brilliance of subway maps!), tune in here, and check back every Tuesday for new episodes.

Cory DoctorowThe Man Who Sold the Moon, Part 02


Here’s part two of my reading (MP3) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.

MP3

CryptogramJim Risen Writes about Reporting Government Secrets

Jim Risen writes a long and interesting article about his battles with the US government and the New York Times to report government secrets.

Worse Than FailureWhy Medical Insurance Is So Expensive

VA One AE Preliminary Project Timeline 2001-02

At the end of 2016, Ian S. accepted a contract position at a large medical conglomerate. He was joining a team of 6 developers on a project to automate what was normally a 10,000-hour manual process of cross-checking spreadsheets and data files. The end result would be a Django server offering a RESTful API and MySQL backend.

"You probably won't be doing anything much for the first week, maybe even the first month," Ian's interviewer informed him.

Ian ignored the red flag and accepted the offer. He needed the experience, and the job seemed reasonable enough. Besides, there were only 2 layers of management to deal with: his boss Daniel, who led the team, and his boss' boss Jim.

The office was in a lavish downtown location. The first thing Ian learned was that nobody had assigned desks. Each day, everyone had to clean out their desks and return their computers and peripherals to lockers. Because team members needed to work closely together, everyone claimed the same desk every day anyway. This policy only resulted in frustration and lost time.

As if that weren't bad enough, the computers were also heavily locked down. Ian had to go through the company's own "app store" to install anything. This was followed by an approval process that could take a few days based on how often Jim went through his pending approvals. The one exception was VMWare Workstation. Because this app cost money, it involved a 2-week approval process. In the middle of December, everyone was off on holiday, making it impossible for Ian's team to get approvals or talk to anyone helpful. Thus Ian's only contributions that month were a couple of Visio diagrams and a Django "hello world" that Daniel had requested. (It wasn't as if Daniel could check his work, though. He didn't know anything about Python, Django, REST, MySQL, MVC, or any other technology relevant to the project.)

The company provided Ian a copy of Agile for Dummies, which seemed ironic in retrospect, as the team was forced to spend the entire first week of January breaking the next 6 months into 2-week sprints. They weren't allowed to leave sprints empty, and had to allocate 36-40 hours each week. They could only make stories for features, so no time was penciled in for bug fixes or paying off technical debt. These stories were then chopped into meaningless pieces ("Part 1", "Part 2", etc.) so they'd fit into their arbitrary timelines.

"This is why medical insurance is so expensive", Daniel remarked at one point, either trying to lighten the mood or stave off his pending insanity.

Later in January, Ian arrived one morning to find the rest of his team standing around confused. Their project was now dead at the hands of a VP who'd had it in for Jim. The company had a tenure process, so the VP couldn't just fire Jim, but he could make his life miserable. He reassigned all of Jim's teams that he didn't outright terminate, exiled Jim to New Jersey, and gave him nothing to do but approve timesheets. Meanwhile, Daniel was told not to bother coming in again.

"Don't worry," the powers-that-be said. "We don't usually terminate people here."

Ian's gapingly empty schedule was filled with a completely different task: "shadowing" someone in another state by screen-sharing and watching them work. The main problem with this arrangement was that the person Ian was shadowing was a systems analyst, not a programmer.

Come February, Ian's new team was also terminated.

"We don't have a culture of layoffs," the powers-that-be assured him.

They were still intent on shoving Ian into a systems analyst position despite his lack of relevant experience. It was at that point that he gave up and moved on. He later heard that within a few months, the entire division had been fired.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Don MartiRemove all the tracking widgets? Maybe not.

Good one from Mark Pilipczuk: Publisher Advice From a Buyer.

Remove all the tracking widgets from your site. That Facebook “Like” button only serves to exfiltrate your valuable data to an entity that doesn’t have your best interests at heart. If you’ve got a valuable audience, why would you want to help the ad tech industry which promises “I can find the same and bigger audience over here for $2 CPM, so don’t buy from the publisher?” Sticking your own head in the noose is never a good idea.

That advice makes sense for the Facebook "like button." That button is just a data shoplifter. The others, though? All those extra trackers come in as side effects of ad deals, and they're likely to be contractually required to make ads on the site saleable.

Yes, those trackers feed bots and data leakage, and yes, they're even terrible at fighting adfraud. Augustine Fou points out that Fraud filters don't work. "In some cases it's worse when filter is on."

So in an ideal world you would be able to pull all the third-party trackers, but as far as day-to-day operations go, user tracking is a Chesterton's Fence problem. What happens if a legit site unilaterally takes down the third-party trackers? All the targeted ad impressions that would have given that site a (small) payment end up going to bots.

So what can a site do? Understand that the real fix has to happen on the browser end, and nudge the users to either make their browsers less data-leaky, or switch to browsers that are leakage-resistant out of the box.

Start A/B testing some notifications to remind users to turn on tracking protection.

  • Can you get users who are already choosing "Do Not Track" to turn on real protection if you inform them that sites ignore their DNT choice?

  • If a user is running an ad blocker with a paid whitelisting scheme, can you inform them about it to get them to switch to a better tool, or at least add a second layer of protection that limits the damage that paid whitelisting can do?

  • When users visit privacy pages or opt-out of a marketing program, are they also willing to check their browser privacy settings?

Every site's audience is different. It's hard to know in advance how users will respond to different calls to action to turn up their privacy and create a win-win for legit sites and legit brands. We do know that users are concerned and confused about web advertising, and the good news is that the JavaScript needed to collect data and administer nudges is as easy to add as yet another tracker.
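
To make the mechanics concrete, here is a minimal sketch in TypeScript of the kind of nudge described above. It is illustrative only: the helper names, the banner copy, and the choice to key the nudge off the Do Not Track signal alone are my assumptions, not anything prescribed in the post.

    // Sketch: nudge visitors who already send "Do Not Track" to enable real
    // tracking protection, since many sites ignore the DNT header anyway.
    function dntEnabled(): boolean {
      // Browsers report "1" (or, historically, "yes") when DNT is switched on.
      const dnt = (navigator as any).doNotTrack || (window as any).doNotTrack;
      return dnt === "1" || dnt === "yes";
    }

    function showProtectionNudge(): void {
      // Placeholder banner; a real site would A/B test copy and placement.
      const banner = document.createElement("div");
      banner.setAttribute("role", "note");
      banner.textContent =
        "You've asked sites not to track you, but most ignore that signal. " +
        "Consider turning on your browser's built-in tracking protection.";
      document.body.appendChild(banner);
    }

    if (dntEnabled()) {
      showProtectionNudge();
    }

A real deployment would also measure whether users who see the nudge actually turn protection on, which is exactly the kind of A/B test suggested above.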

More on what sites can do, that might be more effective than just removing trackers: What The Verge can do to help save web advertising

Krebs on SecuritySerial SWATter Tyler “SWAuTistic” Barriss Charged with Involuntary Manslaughter

Tyler Raj Barriss, a 25-year-old serial “swatter” whose phony emergency call to Kansas police last month triggered a fatal shooting, has been charged with involuntary manslaughter and faces up to eleven years in prison.

Tyler Raj Barriss, in an undated selfie.

Barriss’s online alias — “SWAuTistic” — is a nod to a dangerous hoax known as “swatting,” in which the perpetrator spoofs a call about a hostage situation or other violent crime in progress in the hopes of tricking police into responding at a particular address with potentially deadly force.

Barriss was arrested in Los Angeles this month for alerting authorities in Kansas to a fake hostage situation at an address in Wichita, Kansas on Dec. 28, 2017.

Police responding to the alert surrounded the home at the address Barriss provided and shot 28-year old Andrew Finch as he emerged from the doorway of his mother’s home. Finch, a father of two, was unarmed, and died shortly after being shot by police.

The officer who fired the shot that killed Finch has been identified as a seven-year veteran with the Wichita department. He has been placed on administrative leave pending an internal investigation.

Following his arrest, Barriss was extradited to a Wichita jail, where he had his first court appearance via video on Friday. The Los Angeles Times reports that Barriss was charged with involuntary manslaughter and could face up to 11 years and three months in prison if convicted.

The moment that police in Kansas fired a single shot that killed Andrew Finch (in doorway of his mother’s home).

Barriss also was charged with making a false alarm — a felony offense in Kansas. His bond was set at $500,000.

Sedgwick County District Attorney Marc Bennett told The LA Times that Barriss made the fake emergency call at the urging of several other individuals, and that authorities have identified other “potential suspects” who may also face criminal charges.

Barriss sought an interview with KrebsOnSecurity on Dec. 29, just hours after his hoax turned tragic. In that interview, Barriss said he routinely called in bomb threats and fake hostage situations across the country in exchange for money, and that he began doing it after his own home was swatted.

Barriss told KrebsOnSecurity that he felt bad about the incident, but that it wasn’t he who pulled the trigger. He also enthused about the rush that he got from evading police.

“Bomb threats are more fun and cooler than swats in my opinion and I should have just stuck to that,” he wrote in an instant message conversation with this author.

In a jailhouse interview Friday with local Wichita news station KWCH, Barriss said he feels “a little remorse for what happened.”

“I never intended for anyone to get shot and killed,” he reportedly told the news station. “I don’t think during any attempted swatting anyone’s intentions are for someone to get shot and killed.”

The Wichita Eagle reports that Barriss also has been charged in Calgary, Canada with public mischief, fraud and mischief for allegedly making a similar swatting call to authorities there. However, no one was hurt or killed in that incident.

Barriss was convicted in 2016 for calling in a bomb threat to an ABC affiliate in Los Angeles. He was sentenced to two years in prison for that stunt, but was released in January 2017.

Using his SWAuTistic alias, Barriss claimed credit for more than a hundred fake calls to authorities across the nation. In an exclusive story published here on Jan. 2, KrebsOnSecurity dissected several months’ worth of tweets from SWAuTistic’s account before those messages were deleted. In those tweets, SWAuTistic claimed responsibility for calling in bogus hostage situations and bomb threats at roughly 100 schools and at least 10 residences.

In his public tweets, SWAuTistic claimed credit for bomb threats against a convention center in Dallas and a high school in Florida, as well as an incident that disrupted a much-watched meeting at the U.S. Federal Communications Commission (FCC) in November.

But in private online messages shared by his online friends and acquaintances, SWAuTistic can be seen bragging about his escapades, claiming to have called in fake emergencies at approximately 100 schools and 10 homes.

The serial swatter known as “SWAuTistic” claimed in private conversations to have carried out swattings or bomb threats against 100 schools and 10 homes.

,

Krebs on SecurityCanadian Police Charge Operator of Hacked Password Service Leakedsource.com

Canadian authorities have arrested and charged a 27-year-old Ontario man for allegedly selling billions of stolen passwords online through the now-defunct service Leakedsource.com.

The now-defunct Leakedsource service.

On Dec. 22, 2017, the Royal Canadian Mounted Police (RCMP) charged Jordan Evan Bloom of Thornhill, Ontario for trafficking in identity information, unauthorized use of a computer, mischief to data, and possession of property obtained by crime. Bloom is expected to make his first court appearance today.

According to a statement from the RCMP, “Project Adoration” began in 2016 when the RCMP learned that LeakedSource.com was being hosted by servers located in Quebec.

“This investigation is related to claims about a website operator alleged to have made hundreds of thousands of dollars selling personal information,” said Rafael Alvarado, the officer in charge of the RCMP Cybercrime Investigative Team. “The RCMP will continue to work diligently with our domestic and international law enforcement partners to prosecute online criminality.”

In January 2017, multiple news outlets reported that unspecified law enforcement officials had seized the servers for Leakedsource.com, perhaps the largest online collection of usernames and passwords leaked or stolen in some of the worst data breaches — including three billion credentials for accounts at top sites like LinkedIn and Myspace.

Jordan Evan Bloom. Photo: RCMP.

LeakedSource in October 2015 began selling access to passwords stolen in high-profile breaches. Enter any email address on the site’s search page and it would tell you if it had a password corresponding to that address. However, users had to select a payment plan before viewing any passwords.

The RCMP alleges that Jordan Evan Bloom was responsible for administering the LeakedSource.com website, and earned approximately $247,000 from trafficking identity information.

A February 2017 story here at KrebsOnSecurity examined clues that LeakedSource was administered by an individual in the United States.  Multiple sources suggested that one of the administrators of LeakedSource also was the admin of abusewith[dot]us, a site unabashedly dedicated to helping people hack email and online gaming accounts.

That story traced those clues back to a Michigan man who ultimately admitted to running Abusewith[dot]us, but who denied being the owner of LeakedSource.

The RCMP said it had help in the investigation from The Dutch National Police and the FBI. The FBI could not be immediately reached for comment.

LeakedSource was a curiosity to many, and for some journalists a potential source of news about new breaches. But unlike services such as BreachAlarm and HaveIBeenPwned.com, LeakedSource did nothing to validate users.

This fact, critics charged, showed that the proprietors of LeakedSource were purely interested in making money and helping others pillage accounts.

Since the demise of LeakedSource.com, multiple, competing new services have moved in to fill the void. These services — which are primarily useful because they expose when people re-use passwords across multiple accounts — are popular among those involved in a variety of cybercriminal activities, particularly account takeovers and email hacking.

CryptogramFighting Ransomware

No More Ransom is a central repository of keys and applications for ransomware, so people can recover their data without paying. It's not complete, of course, but is pretty good against older strains of ransomware. The site is a joint effort by Europol, the Dutch police, Kaspersky, and McAfee.

Worse Than FailureRepresentative Line: Tern Back

In the process of resolving a ticket, Pedro C found this representative line, which has nothing to do with the bug he was fixing, but was just something he couldn’t leave un-fixed:

$categories = (isset($categoryMap[$product['department']]) ?
                            (isset($categoryMap[$product['department']][$product['classification']])
                                        ?
                                    $categoryMap[$product['department']][$product['classification']]
                                        : NULL) : NULL);

Yes, the venerable ternary expression, used once again to obfuscate and confuse.

It took Pedro a few readings before he even understood what it did, and then it took him a few more readings to wonder about why anyone would solve the problem this way. Then, he fixed it.

$department = $product['department'];
$classification = $product['classification'];
$categories = NULL;
//ED: isset never triggers an error with an undefined expression, but simply returns false, because PHP
if( isset($categoryMap[$department][$classification]) ) { 
    $categories = $categoryMap[$department][$classification];
}

He submitted the change for code-review, but it was kicked back. You see, Pedro had fixed the bug, which had a ticket associated with it. There were to be no code changes without a ticket from a business user, and since this change wasn’t strictly related to the bug, he couldn’t submit this change.

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

,

Don MartiEasy question with too many wrong answers

Content warning: Godwin's Law.

Here's a marketing question that should be easy.

How much of my brand's ad budget goes to Nazis?

Here's the right answer.

Zero.

And here's a guy who still seems to be having some trouble answering it: Dear Google (GOOG): Please stop using my advertising dollars to monetize hate speech.

If you're responsible for a brand and somewhere in the mysterious tubes of adtech your money is finding its way to Nazis, what is the right course of action?

One wrong answer is to write a "please help me" letter to a company that will just ignore it. That's just admitting to knowingly sending money to Nazis, which is clearly wrong.

Here's another wrong idea, from the upcoming IAB Annual Leadership Meeting session on "brand safety" (which is the nice, sanitary professional-sounding term for "trying not to sponsor Nazis, but not too hard.")

Threats to brand safety arise internally and externally, in your control and out of your control—and the stakes have never been higher. Learn how to minimize brand safety risks and maximize odds of survival when your brand takes a hit (spoiler alert: overreacting is as bad as underreacting). Best Buy and Starcom share best practices based on real-world encounters with brand safety issues.

Really, people? Overreacting is as bad as underreacting? The IAB wants you to come to a deluxe conference about how it's fine to send a few bucks to Nazis here and there as long as it keeps their whole adtech/adfraud gravy train running on time.

I disagree. If Best Buy is fine with (indirectly of course) paying the occasional Nazi so that the IAB companies can keep sending them valuable eyeballs from the cheapest possible sites, then I can shop elsewhere.

Any nationalist extremist movement has its obvious supporters, who wear the outfits and get the tattoos and go march in the streets and all that stuff, and also the quiet supporters, who come up with the money and make nice with the powers that be. The supporters who can keep it deniable.

Can I, as a potential customer from the outside, tell the difference between quiet Nazi supporters and people who are just bad at online advertising and end up supporting Nazis by mistake? Of course not. Do I care? Of course not. If you're not willing to put the basic "don't pay Nazis to do Nazi stuff" rule ahead of a few ad clicks, I don't want your brand anyway. And I'll make sure to install and use the tracking protection tools that help keep my good data away from bad sites.

,

CryptogramFriday Squid Blogging: Japanese "Dude Food" Includes Squid

This seems to be a trend.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesScreen Capping the News Shows Different Stories for Different Folks

During a year marked by social and political turmoil, the media has found itself under scrutiny from politicians, academics, the general public, and increasingly self-reflexive journalists and editors. Fake news has entered our lexicon both as a form of political meddling from foreign powers and a dismissive insult directed towards any less-than-complimentary news coverage of the current administration.

Paying attention to where people are getting their news and what that news is telling them is an important step to understanding our increasingly polarized society and our seeming inability to talk across political divides. The insight can also help us get at those important and oh-too common questions of “how could they think that?!?” or “how could they support that politician?!?”

My interest in this topic was sparked a few months ago when I began paying attention to the top four stories and single video that magically appear whenever I swipe left on my iPhone. The stories compiled by the Apple News App provide a snapshot of what the dominant media sources consider the newsworthy happenings of the day. After paying an almost obsessive attention to my newsfeed for a few weeks—and increasingly annoying my friends and colleagues by telling them about the compelling patterns I was seeing—I started to take screenshots of the suggested news stories on a daily or twice daily basis. The images below were gathered over the past two months.

It is worth noting that the Apple News App adapts to a user’s interests to ensure that it provides “the stories you really care about.” To minimize this complicating factor I avoided clicking on any of the suggested stories and would occasionally verify that my news feed had remained neutral through comparing the stories with other iPhone users whenever possible.

Some of the differences were to be expected—People simply cannot get enough of celebrity pregnancies and royal weddings. The Washington Post, The New York Times, and CNN frequently feature stories that are critical of the current administration, and Fox News is generally supportive of President Trump and antagonistic towards enemies of the Republican Party.


However, there are two trends that I would like to highlight:

1) A significant number of Fox News headlines offer direct critiques of other media sites and their coverage of key news stories. Rather than offering an alternative reading of an event or counter-coverage, the feature story undercuts the journalistic work of other news sources by highlighting errors and making accusations of partisan motivation. In some cases, this even takes the form of attacking left-leaning celebrities as a proxy for a larger movement or idea. Neither of these tactics was employed by any of the other news sources during my observation period.


2) Fox News often featured coverage of vile, treacherous, or criminal acts committed by individuals as well as horrifying accidents. This type of story stood out both due to the high frequency and the juxtaposition to coverage of important political events of the time—murderous pigs next to Senate resignations and sexually predatory high school teachers next to massively destructive California wildfires. In a sense, Fox News is effectively cultivating an “asociological” imagination by shifting attention to the individual rather than larger political processes and structural changes. In addition, the repetitious coverage of the evil and devious certainly contributes to a fear-based society and confirms the general loss of morality and decline of conservative values.


It is worth noting that this move away from the big stories of the day also occurs through a surprising amount of celebrity coverage.


From the screen captures I have gathered over the past two months, it seems apparent that we are not just consuming different interpretations of the same event, but rather we are hearing different stories altogether. This effectively makes the conversation across political affiliation (or more importantly, news source affiliation) that much more difficult if not impossible.

I recommend taking time to look through the images that I have provided on your own. There are a number of patterns I did not discuss in this piece for the sake of brevity and even more to be discovered. And, for those of us who spend our time in the front of the classroom, the screenshot approach could provide the basis for a great teaching activity where the class collectively takes part in both the gathering of data and conducting the analysis. 

Kyle Green is an Assistant Professor of Sociology at Utica College. He is a proud TSP alumnus and the co-author /co-host of Give Methods a Chance.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureError'd: Hamilton, Hamilton, Hamilton, Hamilton

"Good news! I can get my order shipped anywhere I want...So long as the city is named Hamilton," Daniel wrote.

 

"I might have forgotten my username, but at least I didn't forget to change the email template code in Production," writes Paul T.

 

Jamie M. wrote, "Using Lee Hecht Harrison's job search functionality is very meta."

 

"When I decided to go to Cineworld, wasn't sure what I wanted to watch," writes Andy P., "The trailer for 'System Restore' looks good, but it's got a bad rating on Rotten Tomatoes."

 

Mattias writes, "I get the feeling that Visual Studio really doesn't like this error."

 

"While traveling in Philadelphia's airport, I was pleased to see Macs competing in the dumb error category too," Ken L. writes.

 

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

,

TEDExploring the boundaries of legacy at TED@Westpac

Cyndi Stivers and Adam Spencer host TED@Westpac — a day of talks and performances themed around “The Future Legacy” — in Sydney, Australia, on Monday, December 11th. (Photo: Jean-Jacques Halans / TED)

Legacy is a delightfully complex concept, and it’s one that the TED@Westpac curators took on with gusto for the daylong event held in Sydney, Australia, on Monday December 11th. Themed around the idea of “The Future Legacy,” the day was packed with 15 speakers and two performers and hosted by TED’s Cyndi Stivers and TED speaker and monster prime number aficionado Adam Spencer. Topics ranged from education to work-health balance to designer babies to the importance of smart conversations around death.

For Westpac managing director and CEO Brian Hartzer, the day was both an opportunity to think back over the bank’s own 200-year legacy and a chance for all gathered to imagine a bold new future that might suit everyone. He welcomed talks that explored ideas and stories that may shape a more positive global future. “We are so excited to see the ripple effect of your ideas from today,” he told the collected speakers before introducing Aboriginal elder Uncle Ray Davison to offer the audience a traditional “welcome to country.”

And with that, the speakers were up and running.

“Being an entrepreneur is about creating change,” says Linda Zhang. She suggests we need to encourage the entrepreneurial mindset in high-schoolers. (Photo: Jean-Jacques Halans / TED)

Ask questions, challenge the status quo, build solutions. Who do you think of when you hear the word “entrepreneur?” Steve Jobs, Mark Zuckerberg, Elon Musk and Bill Gates might come to mind. What about a high school student? Linda Zhang might just have graduated herself but she’s been taking entrepreneurial cues from her parents, who started New Zealand’s second-largest thread company. Zhang now runs a program to pair students with industry mentors and get them to work for 48 hours on problems they actually want to solve. The results: a change in mindset that could help prepare them for a tumultuous but opportunity-filled job market. “Being an entrepreneur is about creating change,” Zhang says. “This is what high school should be about … finding things you care about, having the curiosity to learn about those things and having the drive to take that knowledge and implement it into problems you care about solving.”

Should we bribe kids to study math? In this sparky talk, Mohamad Jebara shares a favorite quote from fellow mathematician Francis Su: “We study mathematics for play, for beauty, for truth, for justice, and for love.” Only problem: kids today, he says, often don’t tend to agree, instead finding math “difficult and boring.” Jebara has a counterintuitive potential solution: he wants to bribe kids to study math. His financial incentive plan works like this: his company charges parents a monthly subscription fee; if students complete their weekly math goal then the program refunds that amount of the fee directly into the student’s bank account; if not, the company pockets the profit. Ultimately, Jebara wants kids to discover math’s intrinsic worth and beauty, but until they get there, he’s happy to pay them. And this isn’t just about his own business model. “Unless we find a way to improve student engagement with mathematics, we’ll have not only a huge skills shortage crisis, but a fickle population easily manipulated by whoever can get the most airtime,” he says.

You, cancer and the workplace. When lawyer Sarah Donnelly was diagnosed with breast cancer, she turned to her friends and family for support — but she also sought refuge in her work. “My job and my coworkers would make me feel valuable and human at times when I would have otherwise felt like a statistic,” she says. “Work gave me focus and stability when I was dealing with so many unknowns and difficult personal decisions.” But, she says, not all employers realize that work can be a sanctuary for the sick, and often — believing themselves polite and thoughtful — cast out their employees. Now, Donnelly is striving to change the experiences of individuals coping with serious illness — and the perceptions others might have of them. Together with a colleague, she created a “Working with Cancer” toolkit that provides a framework and guidance for all those professionally involved in an employee’s life, and she is traveling to different companies around Australia to implement it.

Digital strategist Will Jenkins asks that we need to think about what we really want from life, not just our day-to-day. (Photo: Jean-Jacques Halans / TED)

The connection between time and money. We all need more time, says digital strategist Will Jenkins, and historically we’ve developed systems and technologies to save time for ourselves and others by reducing waste and inefficiency. But there’s a problem: even after spending centuries trying to perfect time-saving techniques, it too often still doesn’t feel like we’re getting anywhere. “As individuals, we’re busier than ever,” Jenkins points out, before calling for us to look beyond specialized techniques to think about what we actually really want from life itself, not just our day-to-day. In taking a holistic approach to time, we might, he says, channel John Maynard Keynes to figure out new ways that will allow all of us “to live wisely, agreeably, and well.”

Creating a digital future for Australia’s First People. Aboriginal Australian David Unaipon (1862-1967) was called his country’s Leonardo da Vinci — he was responsible for at least 19 inventions, including a tool that led to modern sheep shears. But according to Westpac business analyst Michael Mieni, we need to find better ways to encourage future Unaipons. Right now, he says, too many Indigenous Australians are on the far side of the digital divide, lacking access to computers and the Internet as well as basic schooling in technology. Mieni was the first Indigenous IT honors student at the University of Technology Sydney, and he makes the case that tech-savvy Indigenous Australians are badly needed to serve as role models and teachers, as inventors of ways to record and promote their culture and as guardians of their people’s digital rights. “What if the next ground-breaking idea is already in the mind of a young Aboriginal student but will never surface because they face digital disadvantage or exclusion?” he asks. Everyone in Australia — not just the First Peoples — gains when every citizen has the opportunity and resources to become digitally literate.

Shade Zahrai and Aric Yegudkin perform a gorgeous, sensual dance at TED@Westpac. (Photo: Jean-Jacques Halans / TED)

The beauty of a dance duet. “Partner dance embodies the coming together of two people,” Shade Zahrai‘s voice whispers to a dark auditorium as she and her partner take the TED stage. In the middle of session one, the pair perform a gorgeous and sensual modern dance, complete with Zahrai’s recorded voiceover explaining the coordination and unity that partner dance requires of its participants.

The power of inclusiveness. Inclusion strategist Hayley Yeates shares how her identity as a proud Australian was dimmed by prejudice shown towards her by those who saw her as Asian. When in school, she says, fellow students didn’t want to associate with her in classrooms, while she didn’t add a picture to her LinkedIn profile for fear her race would deem her less worthy of a job. But Yeates focuses on more than the personal stories of those who’ve been dubbed an outsider, and makes the case that diversity leads to innovation and greater profitability for companies. She calls for us all to sponsor safe spaces where authentic, unrestrained conversations about the barriers faced by cultural minorities can be held freely. And she invites leaders to think about creating environments where people’s whole selves can work, and where an organization can thrive because of, not in spite of, its employees’ differences.

Olivia Tyler tracks the complexity of global supply chains, looking to develop smart technology that can allow both corporations and consumers to understand buying decisions. (Photo: Jean-Jacques Halans / TED)

How to do yourself out of a job. As a sustainability practitioner, Olivia Tyler is trying hard to develop systems that will put her out of work. Why? For the good of us all, of course. And how? By encouraging all of us to ask questions about where what we buy, wear or eat comes from. Tyler tracks the fiendish complexity of today’s global supply chains, and she is attempting to develop smart technology that can allow both corporations and consumers to have the visibility they need to understand the buying decisions they make. When something as ostensibly simple as a baked good can include hundreds of data points about the ingredients it contains — a cake can be a minefield, she jokes — it’s time to open up the cupboard and use tech such as the blockchain to crack open the sustainability code. “We can adopt new and exciting ways to change the game on how we conduct ourselves as corporates and consumers across our increasingly smaller world,” she promises.

Can machine intelligence liberate human purpose? Much has been made of the threat robots pose to the very existence of certain jobs, with some estimates reckoning that as much as 80% of low-skill jobs have already been automated. Self-styled “datapreneur” Tomer Garzberg shares how he researched 11,000 of the world’s most widely held jobs to create the “Short-Term Automation Susceptibility Index” to identify the types of role that might be up for automation next. Perhaps unsurprisingly, highly specialized roles held by those such as neurosurgeons, chemical engineers and, well, acrobats face the least risk of being automated, while even senior blue collar positions or standard white collar roles such as pharmacists, accountants and health inspectors can expect a 25% shrinkage over the next 10 years. But Garzberg believes that we can — must — embrace this cybernated future. “Prepare your family to be okay with change, as uncomfortable as it may be,” he says. “We’ll likely be switching careers far more frequently in the near future.”

Everything’s gonna be alright. After a quick break and a breather, Westpac’s own Rowan Fitzpatrick and his band Heart of Mind played in session two with a sweet, uplifting rock ballad about better days and leaning on one another with love and hope. “Keep looking forward / Don’t lose your grip / One step at a time,” the trained jazz singer croons.

Alastair O’Neill shares the ethical wrangling his family undertook as they figured out how they felt about potentially eradicating a debilitating disease with gene editing. (Photo: Jean-Jacques Halans / TED)

You have the ability to end a hereditary disease. Do you take it? “Recently I had to sign a form promising that I wouldn’t have sex with my wife,” says a deadpan Alastair O’Neill as he kicks off the session’s talks. “Why? Because we decided to have a baby.” He waits a beat. “Let me rewind.” As the audience settles in for a rollercoaster talk of emotional highs and lows, he explains his family’s journey through the ethical minefield of embryonic genetic testing, also known as preimplantation genetic diagnosis or PGD. It was a journey prompted by a hereditary condition in his wife’s family — his father-in-law Phil had inherited the gene for retinal dystrophy and was declared legally blind at 30 years old. The odds that his own young family would have a baby either carrying or inheriting the disease were as low as one in two. In this searingly personal talk, O’Neill shares the ups and downs of both the testing process and the ethical wrangling that their entire family undertook as they tried to figure out how they felt about potentially eradicating a debilitating disease. Spoiler alert: O’Neill is in favor. “PGD gives couples the ability to choose to end a hereditary disease,” he says. “I think we should give every potential parent that choice.”

A game developer’s solution to the housing crisis. When Sarah Murray wanted to buy her first house, she discovered that home prices far exceeded her budget — and building a new house would be prohibitively costly and time-consuming. Frustrated by her lack of self-determination, Murray decided to create a computer game to give control back to buyers. The program allows you to design all aspects of your future home (even down to attention to price and environmental impact) and then delivers the final product directly to you in modular components that can be assembled onsite. Murray’s innovative idea both cuts costs and makes more sustainable dwellings; the first physical houses should be ready by 2018. But the digital housing developer isn’t done yet. Now she is working on adapting the program and investing in construction techniques such as 3D printing so that when a player designs and builds a home, they can also contribute to a home for someone in need. As she says, “I want to put every person who wants one in a home of their own design.”

Tough guys need mental-health help, too. In 2013 in Castlemaine, Victoria, painter and decorator Jeremy Forbes was shaken when a friend and fellow tradie (or tradesman) committed suicide. But what truly shocked him were the murmurs he overheard at the man’s wake — people asking, “Who’s next?” Tradies deal with the same struggles faced by many — depression, alcohol and drug dependency, gambling, financial hardship — but they often don’t feel comfortable opening up about them. “You’re expected to be silent in the face of adversity,” says Forbes. So he and artist Catherine Pilgrim founded HALT (Hope Assistance Local Tradies), a mental health awareness organization for tradie men and women, apprentices, builders, farmers, and their partners. HALT meets people where they are, hosting gatherings at hardware stores, football and sports clubs, and vocational training facilities. There, people learn about the warning signs of depression and anxiety and the available services. According to Forbes, who received a Westpac Social Change Fellowship in 2016, HALT has now held around 150 events, and he describes the process as both empowering and cathartic. We need to know how to respond if people are not OK, he says.

The conversation about death you need to have. “Most of us don’t want to acknowledge death, we don’t want to plan for it, and we don’t want to discuss it with the most important people in our lives,” says mortal realist and portfolio manager Michelle Knox. She’s got stats to prove it: 45% of people in Australia over the age of 18 don’t have a legal will. But dying without one is complicated and expensive for those left behind, and just one reason Knox believes it’s time we take ownership of our own deaths. Others include that talking about death before it happens can help us experience a good death, reduce stress on our loved ones, and also help us support others who are grieving. Knox experienced firsthand the power of talking about death ahead of time when her father passed away earlier this year. “I discovered this year it’s actually a privilege to help someone exit this life and although my heart is heavy with loss and sadness, it is not heavy with regret,” she says, “I knew what Dad wanted and I feel at peace knowing I could support his wishes.”

“What would water do?” asks Raymond Tang. “This simple and powerful question has changed my life for the better.” (Photo: Jean-Jacques Halans / TED)

The philosophy of water. How do we find fulfillment in a world that’s constantly changing? IT strategy manager and “agent of flow” Raymond Tang struggled mightily with this question — until he came across the ancient Chinese philosophy of the Tao Te Ching. In it, he found a passage comparing goodness to water and, inspired, he’s now applying the concepts to his everyday life. In this charming talk, he shares three lessons he’s learned so far from the “philosophy of water.” First, humility: in the same way water helps plants and animals grow without seeking reward, Tang finds fulfillment and meaning in helping others overcome their challenges. Next, harmony: just as water is able to navigate its way around obstacles without force or conflict, Tang believes we can find a greater sense of fulfillment in our endeavors by shifting our focus away from achieving success and towards achieving harmony. Finally, openness: water can be a liquid, solid or gas, and it adapts to the shape in which it’s contained. Tang finds in his professional life that the teams most open to learning (and un-learning) do the best work. “What would water do?” Tang asks. “This simple and powerful question has changed my life for the better.”

With great data comes great responsibility. Remember the hacks on companies such as Equifax and JP Morgan? Well, you ain’t seen nothing yet. As computer technology becomes more powerful (think quantum) the systems we use to protect our wells of data become ever more vulnerable. However, there is still time to plan countermeasures against the impending data apocalypse, reassures encryption expert Vikram Sharma. He and his team are designing security devices and programs that also rely on quantum physics to power a defense against the most sophisticated attacks. “The race is on to build systems that will remain secure in the face of rapid technological advance,” he says.

Rach Ranton brings the leadership lessons she learned in the military to corporations, suggesting that leaders succeed when everyone knows the final goal they’re working toward. (Photo: Jean-Jacques Halans / TED)

Leadership lessons from the front line. How does a leader give their people a sense of purpose and direction? Rach Ranton spent more than a decade in the Australian Army, including tours of Afghanistan and East Timor. Now, she brings the lessons she learned in the military to companies, blending organizational psychology aimed at corporations with the planning and best practices of a well-oiled military unit. Even in a situation of extreme uncertainty, she says, military units function best if everyone understands the leader’s objective exactly as well as they understand their own role, not just their individual part to play but also the whole. She suggests leaders spend time thinking about how to communicate “commander’s intent,” the final goal that everyone is working toward. As a test, she asks: If you as a leader were absent from the scene, would your team still know what to do … and why they were doing it?

CryptogramFingerprinting Digital Documents

In this era of electronic leakers, remember that zero-width spaces and homoglyph substitution can fingerprint individual instances of files.
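
As a rough illustration of the zero-width-space trick (my own sketch in TypeScript, not anything from the linked piece), the snippet below hides a 16-bit recipient ID in otherwise identical-looking text; the specific characters and bit width are arbitrary choices.

    // Encode a recipient ID as invisible characters so that each copy of a
    // document can be traced back to the person it was given to.
    const ZERO = "\u200b"; // zero-width space      -> bit 0
    const ONE  = "\u200c"; // zero-width non-joiner -> bit 1

    function embedFingerprint(text: string, recipientId: number): string {
      // Assumes the ID fits in 16 bits; insert the marker after the first word.
      const bits = recipientId.toString(2).padStart(16, "0");
      const marker = bits.split("").map(b => (b === "0" ? ZERO : ONE)).join("");
      const firstSpace = text.indexOf(" ");
      return firstSpace === -1
        ? text + marker
        : text.slice(0, firstSpace) + marker + text.slice(firstSpace);
    }

    function extractFingerprint(text: string): number | null {
      const hidden = Array.from(text).filter(c => c === ZERO || c === ONE);
      if (hidden.length < 16) return null;
      const bits = hidden.slice(0, 16).map(c => (c === ZERO ? "0" : "1")).join("");
      return parseInt(bits, 2);
    }

Each recipient gets a copy that renders the same on screen, but extractFingerprint() recovers which copy was leaked. Homoglyph substitution works the same way, swapping visually identical Unicode characters instead of inserting invisible ones.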

Krebs on SecurityBitcoin Blackmail by Snail Mail Preys on Those with Guilty Conscience

KrebsOnSecurity heard from a reader whose friend recently received a remarkably customized extortion letter via snail mail that threatened to tell the recipient’s wife about his supposed extramarital affairs unless he paid $3,600 in bitcoin. The friend said he had nothing to hide and suspects this is part of a random but well-crafted campaign to prey on men who may have a guilty conscience.

The letter addressed the recipient by his first name and hometown throughout, and claimed to have evidence of the supposed dalliances.

“You don’t know me personally and nobody hired me to look into you,” the letter begins. “Nor did I go out looking to burn you. It is just your bad luck that I stumbled across your misadventures while working on a job around Bellevue.”

The missive continues:

“I then put in more time than I probably should have looking into your life. Frankly, I am ready to forget all about you and let you get on with your life. And I am going to give you two options that will accomplish that very thing. These two options are to either ignore this letter, or simply pay me $3,600. Let’s examine those two options in more detail.”

The letter goes on to say that option 1 (ignoring the threat) means the author will send copies of his alleged evidence to the man’s wife and to her friends and family if he does not receive payment within 12 days of the letter’s postmarked date.

“So [name omitted], even if you decide to come clean with your wife, it won’t protect her from the humiliation she will feel when her friends and family find out your sordid details from me,” the extortionist wrote.

Option 2, of course, involves sending $3,600 in Bitcoin to an address specified in the letter. That bitcoin address does not appear to have received any payments. Attached to the two-sided extortion note is a primer on different ways to quickly and easily obtain bitcoin.

“If I don’t receive the bitcoin by that date, I will go ahead and release the evidence to everyone,” the letter concludes. “If you go that route, then the least you could do is tell your wife so she can come up with an excuse to prepare her friends and family before they find out. The clock is ticking, [name omitted].”

Of course, sending extortion letters via postal mail is mail fraud, a crime which carries severe penalties (fines of up to $1 million and up to 30 years in jail). However, as the extortionist rightly notes in his letter, the likelihood that authorities would ever be able to catch him is probably low.

The last time I heard of or saw this type of targeted extortion by mail was in the wake of the 2015 breach at online cheating site AshleyMadison.com. But those attempts made more sense to me since obviously many AshleyMadison users quite clearly did have an affair to hide.

In any case, I’d wager that this scheme — assuming that the extortionist is lying and has indeed sent these letters to targets without actual knowledge of extramarital affairs on the part of the recipients — has a decent chance of being received by someone who really does have a current or former fling that he is hiding from his spouse. Whether that person follows through and pays the extortion, though, is another matter.

I searched online for snippets of text from the extortion letter and found just one other mention of what appears to be the same letter: It was targeting people in Wellesley, Mass, according to a local news report from December 2017.

According to that report, the local police had a couple of residents drop off letters or call to report receiving them, “but to our knowledge no residents have fallen prey to the scam. The envelopes have no return address and are postmarked out of state, but from different states. The people who have notified us suspected it was a scam and just wanted to let us know.”

In the Massachusetts incidents, the extortionist was asking for $8,500 in bitcoin. Assuming it is the same person responsible for sending this letter, perhaps the extortionist wasn’t getting many people to bite and thus lowered his “fee.”

I opted not to publish a scan of the letter here because it was double-sided and redacting names, etc. gets dicey thanks to photo and image manipulation tools. Here’s a transcription of it instead (PDF).

CryptogramYet Another FBI Proposal for Insecure Communications

Deputy Attorney General Rosenstein has given talks where he proposes that tech companies decrease their communications and device security for the benefit of the FBI. In a recent talk, his idea is that tech companies just save a copy of the plaintext:

Law enforcement can also partner with private industry to address a problem we call "Going Dark." Technology increasingly frustrates traditional law enforcement efforts to collect evidence needed to protect public safety and solve crime. For example, many instant-messaging services now encrypt messages by default. They prevent the police from reading those messages, even if an impartial judge approves their interception.

The problem is especially critical because electronic evidence is necessary for both the investigation of a cyber incident and the prosecution of the perpetrator. If we cannot access data even with lawful process, we are unable to do our job. Our ability to secure systems and prosecute criminals depends on our ability to gather evidence.

I encourage you to carefully consider your company's interests and how you can work cooperatively with us. Although encryption can help secure your data, it may also prevent law enforcement agencies from protecting your data.

Encryption serves a valuable purpose. It is a foundational element of data security and essential to safeguarding data against cyber-attacks. It is critical to the growth and flourishing of the digital economy, and we support it. I support strong and responsible encryption.

I simply maintain that companies should retain the capability to provide the government unencrypted copies of communications and data stored on devices, when a court orders them to do so.

Responsible encryption is effective secure encryption, coupled with access capabilities. We know encryption can include safeguards. For example, there are systems that include central management of security keys and operating system updates; scanning of content, like your e-mails, for advertising purposes; simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop. No one calls any of those functions a "backdoor." In fact, those very capabilities are marketed and sought out.

I do not believe that the government should mandate a specific means of ensuring access. The government does not need to micromanage the engineering.

The question is whether to require a particular goal: When a court issues a search warrant or wiretap order to collect evidence of crime, the company should be able to help. The government does not need to hold the key.

Rosenstein is right that many services like Gmail naturally keep plaintext in the cloud. This is something we pointed out in our 2016 paper: "Don't Panic." But forcing companies to build an alternate means to access the plaintext that the user can't control is an enormous vulnerability.

Worse Than FailureCodeSOD: Dictionary Definition

Guy’s eight-person team does a bunch of computer vision (CV) stuff. Guy is the “framework Guy”: he doesn’t handle the CV stuff so much as provide an application framework to make the CV folks’ lives easy. It’s a solid division of labor, with one notable exception: Richard.

Richard is a Computer Vision Researcher, head of the CV team. Guy is a mere “code monkey”, in Richard’s terms. Thus, everything Richard does is correct, and everything Guy does is “cute” and “a nice attempt”. That’s why, for example, Richard needed to take a method called readFile() and turn it into readFileHandle(), “for clarity”.

The code is a mix of C++ and Python, and much of the Python was written before Guy’s time. While the style in use doesn’t fit PEP 8 (the official Python style), Guy has opted to follow the standards already in use, for consistency. This means some odd things, like putting a space before the colons:

    def readFile() :
      # do stuff

Which Richard felt the need to comment on in his code:

    def readFileHandle() : # I like the spaced out :'s, these are cute =]

There’s no “tone of voice” in code, but the use of “=]” instead of a more conventional smile emoticon is a clear sign that Richard is truly a monster. The other key sign is that Richard has taken an… unusual approach to object-oriented programming. When tasked with writing up an object, he takes this approach:

class WidgetSource:
    """
    Enumeration of various sources available for getting the data needed to construct a Widget object.
    """

    LOCAL_CACHE    = 0
    DB             = 1
    REMOTE_STORAGE = 2
    #PROCESSED_DATA  = 3

    NUM_OF_SOURCES = 3

    @staticmethod
    def toString(widget_source):
        try:
            return {
                WidgetSource.LOCAL_CACHE:     "LOCAL_CACHE",
                WidgetSource.DB:              "DB",
                #WidgetSource.PROCESSED_DATA:   "PROCESSED_DATA", # @DEPRECATED - Currently not to be used
                WidgetSource.REMOTE_STORAGE:  "REMOTE_STORAGE"
            }[widget_source]
        except KeyError:
            return "UNKNOWN_SOURCE"

def deserialize_widget(id, curr_src) :
     # SNIP
     widget = {
         WidgetSource.LOCAL_CACHE: _deserialize_from_cache,
         WidgetSource.DB: _deserialize_from_db,
         WidgetSource.REMOTE_STORAGE: _deserialize_from_remote
         #WidgetSource.PROCESSED_DATA: widgetFactory.fromProcessedData,
     }[curr_src](id)

For those not up on Python, there are a few notable elements here. First, by convention, anything in ALL_CAPS is a constant. A dictionary/map literal takes the form {aKey: aValue, anotherKey: anotherValue}.

So, the first thing to note is that both the deserialize_widget and toString methods create a dictionary. The keys are drawn from constants… which have the values 0, 1, 2, and 3. So… it’s an array, represented as a map, but without the ability to iterate across it in order.

But the dictionary isn’t what gets returned. It’s being used as a lookup table. This is actually quite common, as Python doesn’t have a switch construct, but it does leave one scratching one’s head wondering why.
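
For contrast, here is a minimal sketch (not from the original codebase; the names simply mirror the snippet above) of how Python’s standard enum module handles the same job, giving you the name lookup and the member count for free:

    from enum import Enum

    class WidgetSource(Enum):
        """Sources available for constructing a Widget (hypothetical, mirroring the original)."""
        LOCAL_CACHE = 0
        DB = 1
        REMOTE_STORAGE = 2

    print(WidgetSource.DB.name)   # "DB" -- no hand-rolled toString table
    print(len(WidgetSource))      # 3   -- no NUM_OF_SOURCES constant to keep in sync
    print(WidgetSource(2))        # WidgetSource.REMOTE_STORAGE -- lookup by value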

The real thing that makes one wonder “why” is this, though: Why is newly written code already marked as @DEPRECATED? This code was not yet released, and nothing outside of Richard’s newly written feature depended on it. I suspect Richard recently learned what deprecated means, and just wanted to use it in a sentence.

It’s okay, though. I like the @deprecated, those are cute =]

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

CryptogramSusan Landau's New Book: Listening In

Susan Landau has written a terrific book on cybersecurity threats and why we need strong crypto. Listening In: Cybersecurity in an Insecure Age. It's based in part on her 2016 Congressional testimony in the Apple/FBI case; it examines how the Digital Revolution has transformed society, and how law enforcement needs to -- and can -- adjust to the new realities. The book is accessible to techies and non-techies alike, and is strongly recommended.

And if you've already read it, give it a review on Amazon. Reviews sell books, and this one needs more of them.

CryptogramSpectre and Meltdown Attacks Against Microprocessors

The security of pretty much every computer on the planet has just gotten a lot worse, and the only real solution -- which of course is not a solution -- is to throw them all away and buy new ones.

On Wednesday, researchers just announced a series of major security vulnerabilities in the microprocessors at the heart of the world's computers for the past 15-20 years. They've been named Spectre and Meltdown, and they have to do with manipulating different ways processors optimize performance by rearranging the order of instructions or performing different instructions in parallel. An attacker who controls one process on a system can use the vulnerabilities to steal secrets elsewhere on the computer. (The research papers are here and here.)

This means that a malicious app on your phone could steal data from your other apps. Or a malicious program on your computer -- maybe one running in a browser window from that sketchy site you're visiting, or as a result of a phishing attack -- can steal data elsewhere on your machine. Cloud services, which often share machines amongst several customers, are especially vulnerable. This affects corporate applications running on cloud infrastructure, and end-user cloud applications like Google Drive. Someone can run a process in the cloud and steal data from every other user on the same hardware.

Information about these flaws has been secretly circulating amongst the major IT companies for months as they researched the ramifications and coordinated updates. The details were supposed to be released next week, but the story broke early and everyone is scrambling. By now all the major cloud vendors have patched their systems against the vulnerabilities that can be patched against.

"Throw it away and buy a new one" is ridiculous security advice, but it's what US-CERT recommends. It is also unworkable. The problem is that there isn't anything to buy that isn't vulnerable. Pretty much every major processor made in the past 20 years is vulnerable to some flavor of these vulnerabilities. Patching against Meltdown can degrade performance by almost a third. And there's no patch for Spectre; the microprocessors have to be redesigned to prevent the attack, and that will take years. (Here's a running list of who's patched what.)

This is bad, but expect it more and more. Several trends are converging in a way that makes our current system of patching security vulnerabilities harder to implement.

The first is that these vulnerabilities affect embedded computers in consumer devices. Unlike our computers and phones, these systems are designed and produced at a lower profit margin with less engineering expertise. There aren't security teams on call to write patches, and there often aren't mechanisms to push patches onto the devices. We're already seeing this with home routers, digital video recorders, and webcams. The vulnerability that allowed them to be taken over by the Mirai botnet last August simply can't be fixed.

The second is that some of the patches require updating the computer's firmware. This is much harder to walk consumers through, and is more likely to permanently brick the device if something goes wrong. It also requires more coordination. In November, Intel released a firmware update to fix a vulnerability in its Management Engine (ME): another flaw in its microprocessors. But it couldn't get that update directly to users; it had to work with the individual hardware companies, and some of them just weren't capable of getting the update to their customers.

We're already seeing this. Some patches require users to disable the computer's password, which means organizations can't automate the patch. Some antivirus software blocks the patch, or -- worse -- crashes the computer. This results in a three-step process: patch your antivirus software, patch your operating system, and then patch the computer's firmware.

The final reason is the nature of these vulnerabilities themselves. These aren't normal software vulnerabilities, where a patch fixes the problem and everyone can move on. These vulnerabilities are in the fundamentals of how the microprocessor operates.

It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security. They didn't have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

This isn't to say you should immediately turn your computers and phones off and not use them for a few years. For the average user, this is just another attack method amongst many. All the major vendors are working on patches and workarounds for the attacks they can mitigate. All the normal security advice still applies: watch for phishing attacks, don't click on strange e-mail attachments, don't visit sketchy websites that might run malware on your browser, patch your systems regularly, and generally be careful on the Internet.

You probably won't notice that performance hit once Meltdown is patched, except maybe in backup programs and networking applications. Embedded systems that do only one task, like your programmable thermostat or the computer in your refrigerator, are unaffected. Small microprocessors that don't do all of the vulnerable fancy performance tricks are unaffected. Browsers will figure out how to mitigate this in software. Overall, the security of the average Internet-of-Things device is so bad that this attack is in the noise compared to the previously known risks.

It's a much bigger problem for cloud vendors; the performance hit will be expensive, but I expect that they'll figure out some clever way of detecting and blocking the attacks. All in all, as bad as Spectre and Meltdown are, I think we got lucky.

But more are coming, and they'll be worse. 2018 will be the year of microprocessor vulnerabilities, and it's going to be a wild ride.


Note: A shorter version of this essay previously appeared on CNN.com. My previous blog post on this topic contains additional links.

,

TEDMeet the 2018 class of TED Fellows and Senior Fellows

The TED Fellows program is excited to announce the new group of TED2018 Fellows and Senior Fellows.

Representing a wide range of disciplines and countries — including, for the first time in the program, Syria, Thailand and Ukraine — this year’s TED Fellows are rising stars in their fields, each with a bold, original approach to addressing today’s most complex challenges and capturing the truth of our humanity. Members of the new Fellows class include a journalist fighting fake news in her native Ukraine; a Thai landscape architect designing public spaces to protect vulnerable communities from climate change; an American attorney using legal assistance and policy advocacy to bring justice to survivors of campus sexual violence; a regenerative tissue engineer harnessing the body’s immune system to more quickly heal wounds; a multidisciplinary artist probing the legacy of slavery in the US; and many more.

The TED Fellows program supports extraordinary, iconoclastic individuals at work on world-changing projects, providing them with access to the global TED platform and community, as well as new tools and resources to amplify their remarkable vision. The TED Fellows program now includes 453 Fellows who work across 96 countries, forming a powerful, far-reaching network of artists, scientists, doctors, activists, entrepreneurs, inventors, journalists and beyond, each dedicated to making our world better and more equitable. Read more about their visionary work on the TED Fellows blog.

Below, meet the group of Fellows and Senior Fellows who will join us at TED2018, April 10–14, in Vancouver, BC, Canada.

Antionette Carroll
Antionette Carroll (USA)
Social entrepreneur + designer
Designer and founder of Creative Reaction Lab, a nonprofit using design to foster racially equitable communities through education and training programs, community engagement consulting and open-source tools and resources.


Psychiatrist Essam Daod comforts a Syrian refugee as she arrives ashore at the Greek island of Lesvos. His organization Humanity Crew provides psychological aid to refugees and recently displaced populations. (Photo: Laurence Geai)

Essam Daod
Essam Daod (Palestine | Israel)
Mental health specialist
Psychiatrist and co-founder of Humanity Crew, an NGO providing psychological aid and first-response mental health interventions to refugees and displaced populations.


Laura L. Dunn
Laura L. Dunn (USA)
Victims’ rights attorney
Attorney and Founder of SurvJustice, a national nonprofit increasing the prospect of justice for survivors of campus sexual violence through legal assistance, policy advocacy and institutional training.


Rola Hallam
Rola Hallam (Syria | UK)
Humanitarian aid entrepreneur 
Medical doctor and founder of CanDo, a social enterprise and crowdfunding platform that enables local humanitarians to provide healthcare to their own war-devastated communities.


Olga Iurkova
Olga Iurkova (Ukraine)
Journalist + editor
Journalist and co-founder of StopFake.org, an independent Ukrainian organization that trains an international cohort of fact-checkers in an effort to curb propaganda and misinformation in the media.


Glaciologist M Jackson studies glaciers like this one — the glacier Svínafellsjökull in southeastern Iceland. The high-water mark visible on the mountainside indicates how thick the glacier once was, before climate change caused its rapid recession. (Photo: M Jackson)

M Jackson
M Jackson (USA)
Geographer + glaciologist
Glaciologist researching the cultural and social impacts of climate change on communities across all eight circumpolar nations, and an advocate for more inclusive practices in the field of glaciology.


Romain Lacombe
Romain Lacombe (France)
Environmental entrepreneur
Founder of Plume Labs, a company dedicated to raising awareness about global air pollution by creating a personal electronic pollution tracker that forecasts air quality levels in real time.


Saran Kaba Jones
Saran Kaba Jones (Liberia | USA)
Clean water advocate
Founder and CEO of FACE Africa, an NGO that strengthens clean water and sanitation infrastructure in Sub-Saharan Africa through innovative community support services.


Yasin Kakande
Yasin Kakande (Uganda)
Investigative journalist + author
Journalist working undercover in the Middle East to expose the human rights abuses of migrant workers there.


In one of her long-term projects, “The Three: Senior Love Triangle,” documentary photographer Isadora Kosofsky shadowed a three-way relationship between aged individuals in Los Angeles, CA – Jeanie (81), Will (84), and Adina (90). Here, Jeanie and Will kiss one day after a fight.

Isadora Kosofsky
Isadora Kosofsky (USA)
Photojournalist + filmmaker
Photojournalist exploring underrepresented communities in America with an immersive approach, documenting senior citizen communities, developmentally disabled populations, incarcerated youth, and beyond.


Adam Kucharski
Adam Kucharski (UK)
Infectious disease scientist
Infectious disease scientist creating new mathematical and computational approaches to understand how epidemics like Zika and Ebola spread, and how they can be controlled.


Lucy Marcil
Lucy Marcil (USA)
Pediatrician + social entrepreneur
Pediatrician and co-founder of StreetCred, a nonprofit addressing the health impact of financial stress by providing fiscal services to low-income families in the doctor’s waiting room.


Burçin Mutlu-Pakdil
Burçin Mutlu-Pakdil (Turkey | USA)
Astrophysicist
Astrophysicist studying the structure and dynamics of galaxies — including a rare double-ringed elliptical galaxy she discovered — to help us understand how they form and evolve.


Faith Osier
Faith Osier (Kenya | Germany)
Infectious disease doctor
Scientist studying how humans acquire immunity to malaria, translating her research into new, highly effective malaria vaccines.


In “Birth of a Nation” (2015), artist Paul Rucker recast Ku Klux Klan robes in vibrant, contemporary fabrics like spandex, Kente cloth, camouflage and white satin – a reminder that the horrors of slavery and the Jim Crow South still define the contours of American life today. (Photo: Ryan Stevenson)

Paul Rucker
Paul Rucker (USA)
Visual artist + cellist
Multidisciplinary artist exploring issues related to mass incarceration, racially motivated violence, police brutality and the continuing impact of slavery in the US.


Kaitlyn Sadtler
Kaitlyn Sadtler (USA)
Regenerative tissue engineer
Tissue engineer harnessing the body’s natural immune system to create new regenerative medicines that mend muscle and more quickly heal wounds.


DeAndrea Salvador
DeAndrea Salvador (USA)
Environmental justice advocate
Sustainability expert and founder of RETI, a nonprofit that advocates for inclusive clean-energy policies that help low-income families access cutting-edge technology to reduce their energy costs.


Harbor seal patient Bogey gets a checkup at the Marine Mammal Center in California. Veterinarian Claire Simeone studies marine mammals like harbor seals to understand how the health of animals, humans and our oceans are interrelated. (Photo: Ingrid Overgard / The Marine Mammal Center)

Claire Simeone
Claire Simeone (USA)
Marine mammal veterinarian
Veterinarian and conservationist studying how the health of marine mammals, such as sea lions and dolphins, informs and influences both human and ocean health.


Kotchakorn Voraakhom
Kotchakorn Voraakhom (Thailand)
Urban landscape architect
Landscape architect and founder of Landprocess, a Bangkok-based design firm building public green spaces and green infrastructure to increase urban resilience and protect vulnerable communities from climate change.


Mikhail Zygar
Mikhail Zygar (Russia)
Journalist + historian
Journalist covering contemporary and historical Russia and founder of Project1917, a digital documentary project that narrates the 1917 Russian Revolution in an effort to contextualize modern-day Russian issues.


TED2018 Senior Fellows

Senior Fellows embody the spirit of the TED Fellows program. They attend four additional TED events, mentor new Fellows and continue to share their remarkable work with the TED community.

Prosanta Chakrabarty
Prosanta Chakrabarty (USA)
Ichthyologist
Evolutionary biologist and natural historian researching and discovering fish around the world in an effort to understand fundamental aspects of biological diversity.


Aziza Chaouni
Aziza Chaouni (Morocco)
Architect
Civil engineer and architect creating sustainable built environments in the developing world, particularly in the deserts of the Middle East.


Shohini Ghose
Shohini Ghose (Canada)
Quantum physicist + educator
Theoretical physicist developing quantum computers and novel protocols like teleportation, and an advocate for equity, diversity and inclusion in science.


A pair of shrimpfish collected in Tanzanian mangroves by ichthyologist Prosanta Chakrabarty and his colleagues this past year. They may represent an unknown population or even a new species of these unusual fishes, which swim head down among aquatic plants.

Zena el Khalil
Zena el Khalil (Lebanon)
Artist + cultural activist
Artist and cultural activist using visual art, site-specific installation, performance and ritual to explore and heal the war-torn history of Lebanon and other global sites of trauma.


Bektour Iskender
Bektour Iskender (Kyrgyzstan)
Independent news publisher
Co-founder of Kloop, an NGO and leading news publication in Kyrgyzstan, committed to freedom of speech and training young journalists to cover politics and investigate corruption.


Mitchell Jackson
Mitchell Jackson (USA)
Writer + filmmaker
Writer exploring race, masculinity, the criminal justice system, and family relationships through fiction, essays and documentary film.


Jessica Ladd
Jessica Ladd (USA)
Sexual health technologist
Founder and CEO of Callisto, a nonprofit organization developing technology to combat sexual assault and harassment on campus and beyond.


Jorge Mañes Rubio
Jorge Mañes Rubio (Spain)
Artist
Artist investigating overlooked places on our planet and beyond, creating artworks that reimagine and revive these sites through photography, site-specific installation and sculpture.


An asteroid impact is the only natural disaster we have the technology to prevent, but since prevention takes time, we must search for near-Earth asteroids now. Astronomer Carrie Nugent does just that, discovering and studying asteroids like this one. (Illustration: Tim Pyle and Robert Hurt / NASA/JPL-Caltech)

Carrie Nugent
Carrie Nugent (USA)
Asteroid hunter
Astronomer using machine learning to discover and study near-Earth asteroids, our smallest and most numerous cosmic neighbors.


David Sengeh
David Sengeh (Sierra Leone + South Africa)
Biomechatronics engineer
Research scientist designing and deploying new healthcare technologies, including artificial intelligence, to cure and fight disease in Africa.

TEDWhy Oprah’s talk works: Insight from a TED speaker coach

By Abigail Tenembaum and Michael Weitz of Virtuozo

When Oprah Winfrey spoke at the Golden Globes last Sunday night, her speech lit up social media within minutes. It was powerful, memorable and somehow exactly what the world wanted to hear. It inspired multiple standing O’s — and even a semi-serious Twitter campaign to elect her president #oprah2020

All this in 9 short minutes.

What made this short talk so impactful? My colleagues and I were curious. We are professional speaker coaches who’ve worked with many, many TED speakers, analyzing their scripts and their presentation styles to help each person make the greatest impact with their idea. And when we sat down and looked at Oprah’s talk, we saw a lot of commonality with great TED Talks.

Among the elements that made this talk so effective:

A strong opening that transports us. Oprah got on stage to give a “thank you” speech for a lifetime achievement award. But she chose not to start with the “thank you.” Instead, she starts with a story. Her first words? “In 1964, I was a little girl sitting on the linoleum floor of my mother’s house in Milwaukee.” Just like a great story should, this first sentence transports us to a different time and place, and introduces the protagonist. As TED speaker Uri Hasson says: our brain loves stories. Oprah’s opening signals to the audience that it’s story time, in the manner of any fairy tale: “Once upon a time” (In 1964), “There was a princess” (I was a little girl), “In a land far far away” (…my mother’s house in Milwaukee).

Alternating between ideas and anecdotes. A great TED Talk illustrates an idea. And, just like Oprah does in her talk, the idea is illustrated through a mix of stories, examples and facts. Oprah tells a few anecdotes, none longer than a minute. But they are masterfully crafted, to give us, the audience, just enough detail to invite us to imagine it. When TED speaker Stefan Larsson tells us an anecdote about his time at medical school, he says: “I wore the white coat” — one concrete detail that allows us, the audience, to imagine a whole scene. Oprah describes Sidney Poitier with similar specificity – down to the detail that “his tie was white.” Recy Taylor was “walking home from a church service.” Oprah the child wasn’t sitting on the floor but on the “linoleum floor.” Like a great sketch artist, a great storyteller draws a few defined lines and lets the audience’s imagination fill in the rest to create the full story.

A real conversation with the audience. At TED, we all know it’s called a TED talk — not “speech,” not “lecture.” We feel it when Sir Ken Robinson looks at the audience and waits for their reaction. But it’s mostly not in the words. It’s in the tone, in the fact that the speaker’s attention is on the audience, focusing on one person at a time, and having a mini conversation with us. Oprah is no different. She speaks to the people in the room, and this intimacy translates beautifully on camera.

It’s Oprah’s talk — and only Oprah’s. A great TED talk, just like any great talk or speech, is deeply connected to the person delivering it. We like to ask speakers, “What makes this a talk that only you can give?” Esther Perel shares anecdotes from her unique experience as a couples therapist, intimate stories that helped her develop a personal perspective on love and fidelity. Only Ray Dalio could tell the story of personal failure and rebuilding that lies behind the radical transparency he’s created in his company. Uri Hasson connects his research on the brain and stories to his own love of film. Oprah starts with the clearest personal angle – her personal story. And throughout the speech she brings in her own career as an example, and her own way of articulating her message.

A great TED Talk invites the audience to think and to feel. Oprah’s ending is a big invitation to the audience to act. And it’s done not by telling us what to do, but by offering an optimistic vision of the future and inviting us all to be part of it.

Here’s a link to the full speech.

TEDGet ready for TED Talks India: Nayi Soch, premiering Dec. 10 on Star Plus

This billboard is showing up in streets around India, and it’s made out of pollution fumes that have been collected and made into ink — ink that’s, in turn, made into an image of TED Talks India: Nayi Soch host Shah Rukh Khan. Tune in on Sunday night, Dec. 10, at 7pm on Star Plus to see what it’s all about.

TED is a global organization with a broad global audience. With our TED Translators program working in more than 100 languages, TEDx events happening every day around the world and so much more, we work hard to present the latest ideas for everyone, regardless of language, location or platform.

Now we’ve embarked on a journey with one of the largest TV networks in the world — and one of the biggest movie stars in the world — to create a Hindi-language TV series and digital series that’s focused on a country at the peak of innovation and technology: India.

Hosted and curated by Shah Rukh Khan, the TV series TED Talks India: Nayi Soch will premiere in India on Star Plus on December 10.

The name of the show, Nayi Soch, literally means ‘new thinking’ — and this kick-off episode seeks to inspire the nation to embrace and cultivate ideas and curiosity. Watch it and discover a program of speakers from India and the world whose ideas might inspire you to some new thinking of your own! For instance — the image on this billboard above is made from the fumes of your car … a very new and surprising idea!

If you’re in India, tune in at 7pm IST on Sunday night, Dec. 10, to watch the premiere episode on Star Plus and five other channels. Then tune in to Star Plus on the next seven Sundays, at the same time, to hear even more great talks on ideas, grouped into themes that will certainly inspire conversations. You can also explore the show on the HotStar app.

On TED.com/india and for TED mobile app users in India, each episode will be conveniently turned into five to seven individual TED Talks, one talk for each speaker on the program. You can watch and share them on their own, or download them as playlists to watch one after another. The talks are given in Hindi, with professional subtitles in Hindi and in English. Almost every talk will feature a short Q&A between the speaker and the host, Shah Rukh Khan, that dives deeper into the ideas shared onstage.

Want to learn more about TED Talks? Check out this playlist that SRK curated just for you.

Google AdsenseOur continued investment in AdSense Experiments

Experimentation is at the heart of everything we do at Google — so much so that many of our products, including Analytics and AdSense, allow you to run your own experiments.

The AdSense Experiments page has allowed you to experiment with ad unit settings, and with allowing and blocking ad categories, to see how these changes affect your earnings. As of today, some new updates let you run more experiment types and give you a better understanding of how they impact your earnings and users.

Understand user impact with session metrics

Curious to know how the settings you experiment with impact your user experience? You can now see how long users spend on your site with a new “Ad session length” metric that has been added to the Experiments results page. Longer ad session lengths are usually a good indicator of a healthy user experience.

Ad balance experiments

Ad balance is a tool that allows you to reduce the number of ads shown by displaying only those ads that perform the best. You can now run experiments to see how different ad fill rates impact revenue and ad session lengths. Try it out and let us know what you think in the comments below!

Service announcement: We're auto-completing some experiments, and deleting experiments that are more than a year old.

To ensure you can focus your time efficiently on experiments, we'll soon be auto-completing the experiments for which no winner has been chosen after 30 days of being marked “Ready to complete”. You can manually choose a winner during those 30 days, or (if you’re happy for us to close the experiment) you don't need to do anything. Learn more about the status of experiments.

We’ll also be deleting experiments that were completed more than one year ago. Old experiments are rarely useful in the fast-moving world of the Internet and clutter the Experiments page with outdated information. If you wish to keep old experiments, you can download all existing data by using the “Download Data” button on the Experiments page.

We look forward to hearing your thoughts on these new features.

Posted by: Amir Hosseini Rad, AdSense Product Manager

TEDA photograph by Paul Nicklen shows the tragedy of extinction, and more news from TED speakers

The past few weeks have brimmed over with TED-related news. Here, some highlights:

This is what extinction looks like. Photographer Paul Nicklen shocked the world with footage of a starving polar bear that he and members of his conservation group SeaLegacy captured in the Canadian Arctic Archipelago. “It rips your heart out of your chest,” Nicklen told The New York Times. Published in National Geographic, on Nicklen’s Instagram channel, and via SeaLegacy in early December, the footage and a photograph taken by Cristina Mittermeier spread rapidly across the Internet, to horrified reaction. Polar bears are hugely threatened by climate change, in part because of their dependence on ice cover, and their numbers are projected to drop precipitously in coming years. By publishing the photos, Nicklen said to the Times, he hoped to make a scientific data point feel real to people. (Watch Nicklen’s TED Talk)

Faster 3D printing with liquids. Attendees at Design Miami witnessed the first public demonstration of MIT’s 3D liquid printing process. In a matter of minutes, a robotic arm printed lamps and handbags inside a glass tank filled with gel, showing that 3D printing doesn’t have to be painfully slow. The technique upends the size constraints and poor material quality that have plagued 3D printing, say the creators, and could be used down the line to print larger objects like furniture, reports Dezeen. Steelcase and the Self-Assembly lab at MIT, co-directed by TED Fellow Skylar Tibbits and Jared Laucks, developed the revolutionary technique. (Watch Tibbits’ TED Talk)

The crazy mathematics of swarming and synchronization. Studies on swarming often focus on animal movement (think schools of fish) but ignore their internal framework, while studies on synchronization tend to focus solely on internal dynamics (think coupled lasers). The two phenomena, however, have rarely been studied together. In new research published in Nature Communications, mathematician Steven Strogatz and his former postdoctoral student Kevin O’Keefe studied systems where both synchronization and swarming occur simultaneously. Male tree frogs were one source of inspiration for the research by virtue of the patterns that they form in both space and time, mainly related to reproduction. The findings open the door to future research of unexplored behaviors and systems that may also exhibit these two behaviors concurrently. (Watch Strogatz’s TED Talk)

A filmmaker’s quest to understand white nationalism. Documentary filmmaker and human rights activist Deeyah Khan’s new documentary, White Right: Meeting the Enemy, seeks to understand neo-Nazis and white nationalists beyond their sociopolitical beliefs. All too familiar with racism and hate related threats in her own life, her goal is not to sympathize or rationalize their beliefs or behaviors. She instead intends to discover the evolution of their ideology as individuals, which can provide insights into how they became attracted to and involved in these movements. Deeyah uses this film to answer the question: “Is it possible for me to sit with my enemy and for them to sit with theirs?” (Watch Khan’s TED Talk)

The end of an era at the San Francisco Symphony. Conductor Michael Tilson Thomas announced that he will be stepping down from his role as music director of the San Francisco Symphony in 2020. In that year, he will be celebrating his 75th birthday and his 25th anniversary at the symphony, and although his forthcoming departure will be the end of an era, Thomas will continue to work as the artistic director for the New World Symphony at the training academy he co-founded in Miami. Thus, 2020 won’t be the last time we hear from the musical great, given that he intends to pick up compositions, stories, and poems that he’s previously worked on. (Watch Tilson Thomas’ TED Talk)

A better way to weigh yourself. The Shapa Smart Scale is all words, no numbers. Behavioral economist Dan Ariely helped redesign the scale in the hope that eliminating the tyranny of the number would help people make better decisions about their health (something we’re notoriously bad at). The smart scale sends a small electrical current through the person’s body and gathers information, such as muscle mass, bone density, and water percentage. Then, it compares it to personal data collected over time. Instead of spitting out a single number, it simply tells you whether you’re doing a little better, a little worse, much better, much worse, or essentially the same. (Watch Ariely’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.

Krebs on SecurityMicrosoft’s Jan. 2018 Patch Tuesday Lowdown

Microsoft on Tuesday released 14 security updates, including fixes for the Spectre and Meltdown flaws detailed last week, as well as a zero-day vulnerability in Microsoft Office that is being exploited in the wild. Separately, Adobe pushed a security update to its Flash Player software.

Last week’s story, Scary Chip Flaws Raise Spectre of Meltdown, sought to explain the gravity of these two security flaws present in most modern computers, smartphones, tablets and mobile devices. The bugs are thought to be mainly exploitable in chips made by Intel and ARM, but researchers said it was possible they also could be leveraged to steal data from computers with chips made by AMD.

By the time that story had been published, Microsoft had already begun shipping an emergency update to address the flaws, but many readers complained that their PCs experienced the dreaded “blue screen of death” (BSOD) after applying the update. Microsoft warned that the BSOD problems were attributable to antivirus programs that had not yet been updated to play nice with the security updates.

On Tuesday, Microsoft said it was suspending the patches for computers running AMD chipsets.

“After investigating, Microsoft determined that some AMD chipsets do not conform to the documentation previously provided to Microsoft to develop the Windows operating system mitigations to protect against the chipset vulnerabilities known as Spectre and Meltdown,” the company said in a notice posted to its support site.

“To prevent AMD customers from getting into an unbootable state, Microsoft has temporarily paused sending the following Windows operating system updates to devices that have impacted AMD processors,” the company continued. “Microsoft is working with AMD to resolve this issue and resume Windows OS security updates to the affected AMD devices via Windows Update and WSUS as soon as possible.”

In short, if you’re running Windows on a computer powered by an AMD processor, you’re not going to be offered the Spectre/Meltdown fixes for now. Not sure whether your computer has an Intel or AMD chip? Most modern computers display this information (albeit very briefly) when the computer first starts up, before the Windows logo appears on the screen.

Here’s another way. From within Windows, users can find this information by pressing the Windows key on the keyboard and the “Pause” key at the same time, which should open the System Properties feature. The chip maker will be displayed next to the “Processor:” listing on that page.
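
If you’d rather check programmatically, here’s a quick sketch that should work on most Windows machines with Python installed (the exact string returned varies by system and may be empty, in which case fall back to the System Properties method above):

    import platform

    # On Windows, platform.processor() usually ends with the vendor ID,
    # e.g. "... GenuineIntel" or "... AuthenticAMD".
    cpu = platform.processor()
    if "GenuineIntel" in cpu:
        print("Intel processor:", cpu)
    elif "AuthenticAMD" in cpu:
        print("AMD processor:", cpu)
    else:
        print("Vendor unclear, check System Properties:", repr(cpu))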

Microsoft also on Tuesday provided more information about the potential performance impact on Windows computers after installing the Spectre/Meltdown updates. To summarize, Microsoft said Windows 7, 8.1 and 10 users on older chips (circa 2015 or older), as well as Windows server users on any silicon, are likely to notice a slowdown of their computer after applying this update.

Any readers who experience a BSOD after applying January’s batch of updates may be able to get help from Microsoft’s site: Here are the corresponding help pages for Windows 7, Windows 8.1 and Windows 10 users.

As evidenced by this debacle, it’s a good idea to get in the habit of backing up your system on a regular basis. I typically do this at least once a month — but especially right before installing any updates from Microsoft. 

Attackers could exploit a zero-day vulnerability in Office (CVE-2018-0802) just by getting a user to open a booby-trapped Office document or visit a malicious/hacked Web site. Microsoft also patched a flaw (CVE-2018-0819) in Office for Mac that was publicly disclosed prior to the patch being released, potentially giving attackers a heads up on how to exploit the bug.

Of the 56 vulnerabilities addressed in the January Patch Tuesday batch, at least 16 earned Microsoft’s critical rating, meaning attackers could exploit them to gain full access to Windows systems with little help from users. For more on Tuesday’s updates from Microsoft, check out blogs from Ivanti and Qualys.

As per usual, Adobe issued an update for Flash Player yesterday. The update brings Flash to version 28.0.0.137 on Windows, Mac, and Linux systems. Windows users who browse the Web with anything other than Internet Explorer may need to apply the Flash patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart, though users may need to manually check for updates and/or restart the browser before the new version takes effect.

When in doubt, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”: If there is an update available, Chrome should install it then. Chrome will replace that three dot icon with an up-arrow inside of a circle when updates are waiting to be installed.

Standard disclaimer: Because Flash remains such a security risk, I continue to encourage readers to remove or hobble Flash Player unless and until it is needed for a specific site or purpose. More on that approach (as well as slightly less radical solutions ) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the Flash cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

Another, perhaps less elegant, solution is to keep Flash installed in a browser that you don’t normally use, and then to only use that browser on sites that require it.

CryptogramDetecting Adblocker Blockers

Interesting research on the prevalence of adblock blockers: "Measuring and Disrupting Anti-Adblockers Using Differential Execution Analysis":

Abstract: Millions of people use adblockers to remove intrusive and malicious ads as well as protect themselves against tracking and pervasive surveillance. Online publishers consider adblockers a major threat to the ad-powered "free" Web. They have started to retaliate against adblockers by employing anti-adblockers which can detect and stop adblock users. To counter this retaliation, adblockers in turn try to detect and filter anti-adblocking scripts. This back and forth has prompted an escalating arms race between adblockers and anti-adblockers.

We want to develop a comprehensive understanding of anti-adblockers, with the ultimate aim of enabling adblockers to bypass state-of-the-art anti-adblockers. In this paper, we present a differential execution analysis to automatically detect and analyze anti-adblockers. At a high level, we collect execution traces by visiting a website with and without adblockers. Through differential execution analysis, we are able to pinpoint the conditions that lead to the differences caused by anti-adblocking code. Using our system, we detect anti-adblockers on 30.5% of the Alexa top-10K websites which is 5-52 times more than reported in prior literature. Unlike prior work which is limited to detecting visible reactions (e.g., warning messages) by anti-adblockers, our system can discover attempts to detect adblockers even when there is no visible reaction. From manually checking one third of the detected websites, we find that the websites that have no visible reactions constitute over 90% of the cases, completely dominating the ones that have visible warning messages. Finally, based on our findings, we further develop JavaScript rewriting and API hooking based solutions (the latter implemented as a Chrome extension) to help adblockers bypass state-of-the-art anti-adblockers.

News article.

Worse Than FailureCodeSOD: Warp Me To Halifax

Greenwich must think they’re so smart, being on the prime meridian. Starting in the 1840s, the observatory was the international standard for time (and thus vital for navigation). And even when the world switched to UTC, GMT differs from it by at most 0.9 seconds. If you want to convert times between time zones, you do it by comparing against UTC, and you know what?

I’m sick of it. Boy, I wish somebody would take them down a notch. Why is a tiny little strip of London so darn important?

Evan’s co-worker clearly agrees that Greenwich’s superiority is unearned, and picks a different town to make the center of the world: Halifax.

function time_zone_time($datetime, $time_zone, $savings, $return_format="Y-m-d g:i a"){
        date_default_timezone_set('America/Halifax');
        $time = strtotime(date('Y-m-d g:i a', strtotime($datetime)));
        $halifax_gmt = -4;
        $altered_tdf_gmt = $time_zone;
        if ($savings && date('I', $time) == 1) {
                $altered_tdf_gmt++;
        } // end if
        if(date('I') == 1){
                $halifax_gmt++;
        }
        $altered_tdf_gmt -= $halifax_gmt;
        $new_time = mktime(date("H", $time), date("i", $time), date("s", $time),date("m", $time)  ,date("d", $time), date("Y", $time)) + ($altered_tdf_gmt*3600);
        $new_datetime = date($return_format, $new_time);
        return $new_datetime;
}
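
For the record, the sane approach is to let a timezone database handle the offsets and daylight saving rules, converting through UTC rather than hard-coding Halifax’s offset. The article’s code is PHP, but the idea is the same in any language; here’s a minimal Python sketch (the date, times, and zone names are purely illustrative):

    from datetime import datetime
    from zoneinfo import ZoneInfo  # standard library as of Python 3.9

    # Attach the source zone, then convert; offsets and DST are handled for you.
    halifax = datetime(2018, 1, 24, 9, 30, tzinfo=ZoneInfo("America/Halifax"))
    in_london = halifax.astimezone(ZoneInfo("Europe/London"))
    print(in_london)  # 2018-01-24 13:30:00+00:00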
[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaJonathan Adamczewski: Priorities for my team

(unthreaded from here)

During the day, I’m a Lead of a group of programmers. We’re responsible for a range of tools and tech used by others at the company for making games.

I have a list of my priorities (and some related questions), covering things that I think are important for us to be able to do well as individuals, and as a team:

  1. Treat people with respect. Value their time, place high value on their well-being, and start with the assumption that they have good intentions
    (“People” includes yourself: respect yourself, value your own time and well-being, and have confidence in your good intentions.)
  2. When solving a problem, know the user and understand their needs.
    • Do you understand the problem(s) that need to be solved? (it’s easy to make assumptions)
    • Have you spoken to the user and listened to their perspective? (it’s easy to solve the wrong problem)
    • Have you explored the specific constraints of the problem by asking questions like:
      • Is this part needed? (it’s easy to over-reach)
      • Is there a satisfactory simpler alternative? (actively pursue simplicity)
      • What else will be needed? (it’s easy to overlook details)
    • Have you discussed your proposed solution with users, and do they understand what you intend to do? (verify, and pursue buy-in)
    • Do you continue to meet regularly with users? Do they know you? Do they believe that you’re working for their benefit? (don’t under-estimate the value of trust)
  3. Have a clear understanding of what you are doing.
    • Do you understand the system you’re working in? (it’s easy to make assumptions)
    • Have you read the documentation and/or code? (set yourself up to succeed with whatever is available)
    • For code:
      • Have you tried to modify the code? (pull a thread; see what breaks)
      • Can you explain how the code works to another programmer in a convincing way? (test your confidence)
      • Can you explain how the code works to a non-programmer?
  4. When trying to solve a problem, debug aggressively and efficiently.
    • Does the bug need to be fixed? (see 1)
    • Do you understand how the system works? (see 2)
    • Is there a faster way to debug the problem? Can you change code or data to cause the problem to occur more quickly and reliably? (iterate as quickly as you can, fix the bug, and move on)
    • Do you trust your own judgement? (debug boldly, have confidence in what you have observed, make hypotheses and test them)
  5. Pursue excellence in your work.
    • How are you working to be better understood? (good communication takes time and effort)
    • How are you working to better understand others? (don’t assume that others will pursue you with insights)
    • Are you responding to feedback with enthusiasm to improve your work? (pursue professionalism)
    • Are you writing high quality, easy to understand, easy to maintain code? How do you know? (continue to develop your technical skills)
    • How are you working to become an expert and industry leader with the technologies and techniques you use every day? (pursue excellence in your field)
    • Are you eager to improve (and fix) systems you have worked on previously? (take responsibility for your work)

The list was created for discussion with the group, and as an effort to articulate my own expectations in a way that will help my team understand me.

Composing this has been a useful exercise for me as a lead, and definitely worthwhile for the group. If you’ve never tried writing down your own priorities, values, and/or assumptions, I encourage you to try it :)

,

CryptogramDaniel Miessler on My Writings about IoT Security

Daniel Miessler criticizes my writings about IoT security:

I know it's super cool to scream about how IoT is insecure, how it's dumb to hook up everyday objects like houses and cars and locks to the internet, how bad things can get, and I know it's fun to be invited to talk about how everything is doom and gloom.

I absolutely respect Bruce Schneier a lot for what he's contributed to InfoSec, which makes me that much more disappointed with this kind of position from him.

InfoSec is full of those people, and it's beneath people like Bruce to add their voices to theirs. Everyone paying attention already knows it's going to be a soup sandwich -- a carnival of horrors -- a tragedy of mistakes and abuses of trust.

It's obvious. Not interesting. Not novel. Obvious. But obvious or not, all these things are still going to happen.

I actually agree with everything in his essay. "We should obviously try to minimize the risks, but we don't do that by trying to shout down the entire enterprise." Yes, definitely.

I don't think the IoT must be stopped. I do think that the risks are considerable, and will increase as these systems become more pervasive and susceptible to class breaks. And I'm trying to write a book that will help navigate this. I don't think I'm the prophet of doom, and don't want to come across that way. I'll give the manuscript another read with that in mind.

Cory DoctorowWith repetition, most of us will become inured to all the dirty tricks of Facebook attention-manipulation

In my latest Locus column, “Persuasion, Adaptation, and the Arms Race for Your Attention,” I suggest that we might be too worried about the seemingly unstoppable power of opinion-manipulators and their new social media superweapons.


Not because these techniques don’t work (though when someone who wants to sell you persuasion tools tells you that they’re amazing and unstoppable, some skepticism is warranted), but because a large slice of any population will eventually adapt to any stimulus, which is why most of us aren’t addicted to slot machines, Farmville and Pokemon Go.


When a new attentional soft spot is discovered, the world can change overnight. One day, every­one you know is signal boosting, retweeting, and posting Upworthy headlines like “This video might hurt to watch. Luckily, it might also explain why,” or “Most Of These People Do The Right Thing, But The Guys At The End? I Wish I Could Yell At Them.” The style was compelling at first, then reductive and simplistic, then annoying. Now it’s ironic (at best). Some people are definitely still susceptible to “This Is The Most Inspiring Yet Depressing Yet Hilarious Yet Horrifying Yet Heartwarming Grad Speech,” but the rest of us have adapted, and these headlines bounce off of our attention like pre-penicillin bacteria being batted aside by our 21st century immune systems.

There is a war for your attention, and like all adversarial scenarios, the sides develop new countermeasures and then new tactics to overcome those countermeasures. The predator carves the prey, the prey carves the preda­tor. To get a sense of just how far the state of the art has advanced since Farmville, fire up Universal Paperclips, the free browser game from game designer Frank Lantz, which challenges you to balance resource acquisi­tion, timing, and resource allocation to create paperclips, progressing by purchasing upgraded paperclip-production and paperclip-marketing tools, until, eventually, you produce a sentient AI that turns the entire universe into paperclips, exterminating all life.

Universal Paperclips makes Farmville seem about as addictive as Candy­land. Literally from the first click, it is weaving an attentional net around your limbic system, carefully reeling in and releasing your dopamine with the skill of a master fisherman. Universal Paperclips doesn’t just suck you in, it harpoons you.

Persuasion, Adaptation, and the Arms Race for Your Attention [Cory Doctorow/Locus]