Planet Russell


Planet Linux Australia: Michael Still: So you want to set up a Ceph dev environment using OSA

Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I need a Ceph development environment it seemed logical to build it as an openstack-ansible Ocata AIO (all-in-one). There were a few gotchas along the way, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I've never seen before called a "Scenario". Basically this means that you need to export an environment variable called "SCENARIO" before running the AIO install. Something like this will do the trick:

    export SCENARIO=ceph
    


Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

    --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:55:07.803635173 +1000
    +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:58:30.417019878 +1000
    @@ -338,7 +338,9 @@
     #     foo: 1234
     #     bar: 5678
     #
    -ceph_conf_overrides: {}
    +ceph_conf_overrides:
    +  global:
    +    osd_pool_default_pg_num: 8
     
     
     #############
    @@ -373,4 +375,4 @@
     # Set this to true to enable File access via NFS.  Requires an MDS role.
     nfs_file_gw: true
     # Set this to true to enable Object access via NFS. Requires an RGW role.
    -nfs_obj_gw: false
    \ No newline at end of file
    +nfs_obj_gw: false
    


That of course needs to be done after the Ceph role has been fetched, but before it is executed, so in other words after the AIO bootstrap, but before the install.
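For reference, the end-to-end ordering looks something like this. This is only a sketch, assuming the stock scripts in an openstack-ansible checkout; the patch step is the pg_num change shown above:

    export SCENARIO=ceph
    ./scripts/bootstrap-ansible.sh
    ./scripts/bootstrap-aio.sh
    # apply the pg_num patch to the fetched ceph.ceph-common role here
    ./scripts/run-playbooks.sh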

And that was about it (although of course it took a fair while to work out). I have this automated in my little install helper thing, so I'll never need to think about it again, which is nice.

Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere (see the sketch after the example session below).

    root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
        cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
         health HEALTH_OK
         monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
         osdmap e20: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                102156 kB used, 3070 GB / 3070 GB avail
                      40 active+clean
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
    -1 2.99817 root default                                      
    -2 2.99817     host labosa                                   
     0 0.99939         osd.0        up  1.00000          1.00000 
     1 0.99939         osd.1        up  1.00000          1.00000 
     2 0.99939         osd.2        up  1.00000          1.00000 
    
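As noted above, the monitor container isn't special apart from holding the config and keyring, so you could copy those out and run the client somewhere else. A rough sketch, using the monitor address from the output above (the paths and the use of scp here are illustrative, not part of the OSA install):

    apt install ceph-common
    scp root@172.29.239.114:/etc/ceph/ceph.conf /etc/ceph/
    scp root@172.29.239.114:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
    ceph -s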


Tags for this post: openstack osa ceph openstack-ansible



Planet Debian: Russ Allbery: On time management

Last December, the Guardian published a long essay by Oliver Burkeman entitled "Why time management is ruining our lives". Those who follow my book reviews know I read a lot of time management books, so of course I couldn't resist this. And, possibly surprisingly, not to disagree with it. It's an excellent essay, and well worth your time.

Burkeman starts by talking about Inbox Zero:

If all this fervour seems extreme – Inbox Zero was just a set of technical instructions for handling email, after all – this was because email had become far more than a technical problem. It functioned as a kind of infinite to-do list, to which anyone on the planet could add anything at will.

This is, as Burkeman develops in the essay, an important critique of time management techniques in general, not just Inbox Zero: perhaps you can become moderately more efficient, but what are you becoming more efficient at doing, and why does it matter? If there were a finite amount of things that you had to accomplish, with leisure the reward at the end of the fixed task list, doing those things more efficiently makes perfect sense. But this is not the case in most modern life. Instead, we live in a world governed by Parkinson's Law: "Work expands to fill the time available for its completion."

Worse, we live in a world where the typical employer takes Parkinson's Law, not as a statement on the nature of ever-expanding to-do lists, but a challenge to compress the time made available for a task to try to force the work to happen faster. Burkeman goes farther into the politics, pointing out that a cui bono analysis of time management suggests that we're all being played by capitalist employers. I wholeheartedly agree, but that's worth a separate discussion; for those who want to explore that angle, David Graeber's Debt and John Kenneth Galbraith's The Affluent Society are worth your time.

What I want to write about here is why I still read (and recommend) time management literature, and how my thinking on it has changed.

I started in the same place that most people probably do: I had a bunch of work to juggle, I felt I was making insufficient forward progress on it, and I felt my day contained a lot of slack that could be put to better use. The alluring promise of time management is that these problems can be resolved with more organization and some focus techniques. And there is a huge surge of energy that comes with adopting a new system and watching it work, since the good ones build psychological payoff into the tracking mechanism. Starting a new time management system is fun! Finishing things is fun!

I then ran into the same problem that I think most people do: after that initial surge of enthusiasm, I had lists, systems, techniques, data on where my time was going, and a far more organized intake process. But I didn't feel more comfortable with how I was spending my time, I didn't have more leisure time, and I didn't feel happier. Often the opposite: time management systems will often force you to notice all the things you want to do and how slow your progress is towards accomplishing any of them.

This is my fundamental disagreement with Getting Things Done (GTD): David Allen firmly believes that the act of recording everything that is nagging at you to be done relieves the brain of draining background processing loops and frees you to be more productive. He argues for this quite persuasively; as you can see from my review, I liked his book a great deal, and used his system for some time. But, at least for me, this does not work. Instead, having a complete list of goals towards which I am making slow or no progress is profoundly discouraging and depressing. The process of maintaining and dwelling on that list while watching it constantly grow was awful, quite a bit worse psychologically than having no time management system at all.

Mark Forster is the time management author who speaks the best to me, and one of the points he makes is that time management is the wrong framing. You're not going to somehow generate more time, and you're usually not managing minutes and seconds. A better framing is task management, or commitment management: the goal of the system is to manage what you mentally commit to accomplishing, usually by restricting that list to something far shorter than you would come up with otherwise. How, in other words, to limit your focus to a small enough set of goals that you can make meaningful progress instead of thrashing.

That, for me, is now the merit and appeal of time (or task) management systems: how do I sort through all the incoming noise, distractions, requests, desires, and compelling ideas that life throws at me and figure out which of them are worth investing time in? I also benefit from structuring that process for my peculiar psychology, in which backlogs I have to look at regularly are actively dangerous for my mental well-being. Left unchecked, I can turn even the most enjoyable hobby into an obligation and then into a source of guilt for not meeting the (entirely artificial) terms of the obligation I created, without even intending to.

And here I think it has a purpose, but it's not the purpose that the time management industry is selling. If you think of time management as a way to get more things done and get more out of each moment, you're going to be disappointed (and you're probably also being taken advantage of by the people who benefit from unsustainable effort without real, unstructured leisure time). I practice Inbox Zero, but the point wasn't to be more efficient at processing my email. The point was to avoid the (for me) psychologically damaging backlog of messages while acting on the knowledge that 99% of email should go immediately into the trash with no further action. Email is an endless incoming stream of potential obligations or requests for my time (even just to read a longer message) that I should normally reject. I also take the time to notice patterns of email that I never care about and then shut off the source or write filters to delete that email for me. I can then reserve my email time for moments of human connection, directly relevant information, or very interesting projects, and spend the time on those messages without guilt (or at least much less guilt) about ignoring everything else.

Prioritization is extremely difficult, particularly once you realize that true prioritization is not about first and later, but about soon or never. The point of prioritization is not to choose what to do first, it's to choose the 5% of things that you are going to do at all, convince yourself to be mentally okay with never doing the other 95% (and not lying to yourself about how there will be some future point when you'll magically have more time), and vigorously defend your focus and effort for that 5%. And, hopefully, wholeheartedly enjoy working on those things, without guilt or nagging that there's something else you should be doing instead.

I still fail at this all the time. But I'm better than I used to be.

For me, that mental shift was by far the hardest part. But once you've made that shift, I do think the time management world has a lot of tools and techniques to help you make more informed choices about the 5%, and to help you overcome procrastination and loss of focus on your real goals.

Those real goals should include true unstructured leisure and "because I want to" projects. And hopefully, if you're in a financial position to do it, include working less on what other people want you to do and more on the things that delight you. Or at least making a well-informed strategic choice (for the sake of money or some other concrete and constantly re-evaluated reason) to sacrifice your personal goals for some temporary external ones.

Planet Debian: Russ Allbery: Optimistic haul

I never have as much time to read as I wish I did, but I keep buying books, of course. Maybe someday I'll have a good opportunity to take extended time off work and just read for a bit. Well, retirement, at least, right?

Charlie Jane Anders — All the Birds in the Sky (sff)
Peter C. Brown, et al. — Make It Stick (nonfiction)
April Daniels — Dreadnought: Nemesis (sff)
T. Kingfisher — The Halcyon Fairy Book (sff collection)
T. Kingfisher — Jackalope Wives and Other Stories (sff collection)
Margot Lee Shetterly — Hidden Figures (nonfiction)
Cordwainer Smith — Norstrilia (sff)
Kristine Smith — Code of Conduct (sff)
Jonathan Taplin — Move Fast and Break Things (nonfiction)
Sarah Zettel — Fool's War (sff)
Sarah Zettel — Playing God (sff)
Sarah Zettel — The Quiet Invasion (sff)

It doesn't help that James Nicoll keeps creating new lists of books that all sound great. And there's some really interesting nonfiction being written right now.

Make It Stick is the current book for the work book club.

Planet Debian: Lars Wirzenius: Distix movement

Distix is my distributed ticketing system. I initially wrote the core of it as a bit of programming performance art, to celebrate my 30 years as a programmer. Distix is built on top of git and emails in Maildirs. It is a silent listener to your issue and bug discussions: as long as you ensure it gets a copy of each mail, it takes care of automatically arranging things into separate tickets based on email threading. Users and customers do not need to even know Distix is being used. Only the "support staff" need ever interact with Distix, and they mostly only need to close tickets that have been dealt with.

I've been using Distix for my own stuff for some time now, and recently we've started using it at work. I slowly improve it as we find problems.

It's not a sleek, smooth, finished tool. It's clunky, weird, and probably not what you want. But it's what I want.

Changes in recent months:

  • There is a new website: http://distix.eu/. No particularly good reason for a new website, but I won the domain for free a couple of years ago, so I might as well use it.

  • In addition, a ticketing system for Distix itself: http://tickets.distix.eu/. Possibly I should've called the subdomain dogfood, but I'm a serious person, not prone to trying to be funny.

  • Mails can now be imported using IMAP.

  • Importing has been optimized for speed and memory use, making my own production use more practical.

I've discussed with a friend the possibility of writing a web UI, and some day maybe that will happen. For now, distix is a command line application that can generate a static HTML site.

Don Marti: Some questions on a screenshot

Here's a screenshot of an editorial from Der Spiegel, with Ghostery turned on.

article from Der Spiegel

Is it just me, or does it look to anyone else like the man in the photo is checking the list of third-party web trackers on the site to see who he can send a National Security Letter to?

Could a US president who is untrustworthy enough to be removed from office possibly be trustworthy enough to comply with his side of a "Privacy Shield" agreement?

If it's necessary for the rest of the world to free itself of its dependence on the U.S., does that apply to US-based Internet companies that have become a bottleneck for news site ad revenue, and how is that going to work?


Cryptogram: Forbes Names Beyond Fear as One of the "13 Books Technology Executives Should Have On Their Shelves"


Cryptogram: Friday Squid Blogging: Squid and Chips

The excellent Montreal chef Marc-Olivier Frappier, of Joe Beef fame, has created a squid and chips dish for Brit & Chips restaurant.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Cryptogram: Hacking the Galaxy S8's Iris Biometric

It was easy:

The hackers took a medium range photo of their subject with a digital camera's night mode, and printed the infrared image. Then, presumably to give the image some depth, the hackers placed a contact lens on top of the printed picture.

Sociological Images: Countering class-based food stigma with a “hierarchy of food needs”

Flashback Friday. 

Responding to critics who argue that poor people do not choose to eat healthy food because they’re ignorant or prefer unhealthy food, dietitian Ellyn Satter wrote a hierarchy of food needs. Based on Maslow’s hierarchy of needs, it illustrates Satter’s ideas as to the elements of food that matter first, second, and so on… starting at the bottom.

The graphic suggests that getting enough food to eat is the most important thing to people. Having food be acceptable (e.g., not rotten, something you are not allergic to) comes second. Once those two things are in place, people hope for reliable access to food and only then do they begin to worry about taste. If people have enough, acceptable, reliable, good-tasting food, then they seek out novel food experiences and begin to make choices as to what to eat for instrumental purposes (e.g., number of calories, nutritional balance).

As Michelle at The Fat Nutritionist writes, sometimes when a person chooses to eat nutritionally deficient or fattening foods, it is not because they are “stupid, ignorant, lazy, or just a bad, bad person who loves bad, bad food.”  Sometimes, it’s “because other needs come first.”

Originally posted in 2010; hat tip to Racialicious; cross-posted at Jezebel.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet Debian: Steinar H. Gunderson: Last minute stretch bugs

The last week, I found no fewer than three pet bugs that I hope will be allowed to go in before the stretch release:

  • #863286: lua-http: completely broken in non-US locales
  • #843448: linux-image-4.8.0-1-armmp-lpae: fails to boot on Odroid-Xu4 with rootfs on USB (actually my problem is that the NIC doesn't work, but same root cause—this makes stretch basically unusable on XU4)
  • #863280: cubemap: streams with paths exactly 7 characters long get broken buffering behavior

I promise, none of these were found late because I upgraded to stretch too late—just a perfect storm. :-)

Sam Varghese: All your gods have feet of clay: even at 53, some people don’t know that

In a recent interview with Newsweek after the release of her film, Risk, the Oscar-winning filmmaker Laura Poitras asks “What is the motivation of the source?” as part of a reply to a question about a decision on what is newsworthy.

That should tell an observant reader one thing: Poitras may be 53, but she is still very naive. Every leak that ends up on the front or other pages of a publication, or on the TV screen, emanates from someone with an axe to grind. Perhaps one is looking for a business advantage and leaks some details about a rival. Or else, one may be from one political faction and looking to gain an advantage over a rival faction.

Or indeed it could be someone inside one political faction leaking against one’s own, in order to challenge for the leadership. Or it could be a person who has been jilted who is looking to gain revenge. But this is of no concern to a real journalist; the only point of debate for one in the journalism profession is whether it is newsworthy or not.

Poitras’ comment tells one that she is not really versed in the art of journalism, though her byline has appeared on some pretty big stories. She is uncertain about what makes up news.

It is this naivety that leads her to believe that people who are fighting for a cause have to be perfect. Which, in the main, accounts for the split that has arisen between her and WikiLeaks, after she violated the terms of an understanding under which she was allowed carte blanche to film Julian Assange and others who are part of WikiLeaks for the purpose of making a documentary.

(Poitras was involved with Jacob Appelbaum, a developer for the Tor project, and someone who has had a high profile in the security community. Appelbaum has been accused by multiple people of sexual harassment; whether Poitras was also harassed is unknown.)

But for someone who has any worldliness about them, it should be apparent that one cannot run an organisation like WikiLeaks and make it what it has become, a thorn in the flesh of world powers, by being nice to all and sundry. One has to be mean, nasty, vicious and able to give as good as one gets. One has to be cunning, crafty, learned and willing to take risks. And one cannot be nice to everyone and still achieve as much as Assange has.

Poitras chose to release her final cut of Risk, the one that went to theatres in the US, as something that focuses on what she deems to be sexism in multiple communities: “It was important to me to look at not just allegations of abuse but the culture of sexism that exists not only within the hacker community but in other communities.”

She says, “I don’t see any incentive for any woman to make claims around abuse if they didn’t experience that”, without being aware that the two women who were pushed to make allegations about rape against Assange were not doing it of their own volition. It is a naive and emotional reaction to a situation where politics was the decisive factor.

There are some similarities to the situation that developed around Linus Torvalds, the creator of the Linux kernel. Some women felt that he was too aggressive and abusive and tried to bring him down. They used similar arguments to that which Poitras has raised. Torvalds manages the kernel development team and is known for not beating around the bush when people screw up.

Poitras’ film has been released at a time when WikiLeaks is under great pressure. Now that the probe into Assange in Sweden has been dropped, he will be targeted by the US, which is desperate to extradite him and try him for releasing footage of the Iraq war that showed exactly how barbaric US troops have been in Iraq.

Thus it is unlikely that Poitras will ever be allowed to film anything to do with Assange or WikiLeaks again. It also casts a shadow on her reputation as an unbiased observer.

Planet Debian: Michal Čihař: Running Bitcoin node on Turris Omnia

For quite some time I've been a happy user of the Turris Omnia router. The router has quite good hardware, so I decided to try whether I can run a Bitcoin node and an ElectrumX server on it.

To make things easier to manage, I've decided to use LXC and run all of this in a separate container. First of all you need LXC on the router. This is the default setup, but in case you've removed it, you can add it back in the Updater settings.

Now we will create the Debian container. There is basic information on creating containers in the Turris documentation; in the rest of this post I assume the container is called debian.
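For the record, creating it from the command line with the generic LXC download template looks roughly like this (the Turris web UI can do the same; the template arguments here are illustrative):

lxc-create -t download -n debian -- -d debian -r jessie -a armhf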

It's also a good idea to enable LXC autostart; to do so, add your container to /etc/config/lxc-auto:

config container
    option name debian

You might also want to edit the LXC container configuration to enable clean shutdown:

# Send SIGRTMIN+3 to shutdown systemd
lxc.haltsignal = 37

To make the system more recent, I've decided to use Debian Stretch (one of the reasons was that ElectrumX needs Python 3.5.3 or newer), which is probably a sane choice right now anyway given that it's already frozen and will soon be stable. As Stretch is not available as a download option in Omnia, I've chosen to use Debian Jessie and upgrade it later:

$ lxc-attach  --name debian
$ sed -i s/jessie/stretch/ /etc/apt/sources.list
$ apt update
$ apt full-upgrade

Now you have an up-to-date system and we can start installing dependencies. The first thing to install is Bitcoin Core. Just follow the instructions on their website to do that. Now it's time to set it up and wait for the full blockchain to download:

$ adduser bitcoin
$ su - bitcoin
$ bitcoind -daemon

Depending on your connection speed, the download will take a few hours. You can monitor the progress using bitcoin-cli; you're waiting for 450k blocks:

$ bitcoin-cli getinfo
{
  "version": 140000,
  "protocolversion": 70015,
  "walletversion": 130000,
  "balance": 0.00000000,
  "blocks": 301242,
  "timeoffset": -1,
  "connections": 8,
  "proxy": "",
  "difficulty": 8853416309.1278,
  "testnet": false,
  "keypoololdest": 1490267950,
  "keypoolsize": 100,
  "paytxfee": 0.00000000,
  "relayfee": 0.00001000,
  "errors": ""
}
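If you just want to watch the block count climb towards the 450k mark, a minimal sketch (run as the bitcoin user; getblockcount is a standard bitcoin-cli call):

$ watch -n 60 bitcoin-cli getblockcount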

Depending on how much memory you have (mine has 2G) and what else you run on the router, you may have to tweak the bitcoind configuration to consume less memory. This can be done by editing .bitcoin/bitcoin.conf; I've ended up with the following settings:

par=1
dbcache=150
maxmempool=150
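Note that bitcoind reads its configuration only on startup, so restart the daemon for the new limits to take effect; at this point in the setup that's simply:

$ bitcoin-cli stop
$ bitcoind -daemon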

You can also create a startup unit for the Bitcoin daemon (place it at /etc/systemd/system/bitcoind.service):

[Unit]
Description=Bitcoind
After=network.target

[Service]
ExecStart=/opt/bitcoin/bin/bitcoind
User=bitcoin
TimeoutStopSec=30min
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target

Now we can enable the service to start when the container starts:

systemctl enable bitcoind.service
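To also start the service immediately, without waiting for a container restart:

systemctl start bitcoind.service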

Then I wanted to set up ElectrumX as well, but I quickly realized that it uses way more memory than my router has, so there is no way to run it without swap, which would probably make it quite slow (I haven't tried that).

Filed under: Debian English OpenWrt

Worse Than Failure: Error'd: The Developer Test

"Apparently, if AMEX's site knows that you're a developer, it will present a REGEX challenge before allowing you to reset your password," Jim wrote.


Stuart writes, "Wait, I have a network drive!? Cool! I wonder how much space is available? Oh..."


"To you and me, $0.00375 isn't much, but to Western Digital, collecting the exact amount due is a pretty big deal," writes Jonathan.


"The page can say whatever it wants, but, well, here I am anyway," wrote Mark


Andy J. writes, "I don't know what I have just agreed to but at least the paper is now listed as one of my own publications."


"Um, excuse me, but it's not a 'hashtag'," Steve B. wrote, "It's an octothorpe! Or a number sign, or a pig pen..."



Planet Debian: Michael Prokop: The #newinstretch game: dbgsym packages in Debian/stretch

Debug packages include debug symbols and so far were usually named <package>-dbg in Debian. Those packages are essential if you have to debug failing (especially: crashing) programs. Since December 2015 Debian has automatic dbgsym packages, built by default. Those packages are available as <package>-dbgsym, so starting with Debian/stretch you should no longer look for -dbg packages but for -dbgsym instead. Currently there are 13,369 dbgsym packages available for the amd64 architecture of Debian/stretch; compared to the 2,250 packages which I counted being available for Debian/jessie, this is really a huge improvement. (If you’re interested in the details of dbgsym packages as a package maintainer, take a look at the Automatic Debug Packages page in the Debian wiki.)

The dbgsym packages are NOT provided by the usual Debian archive though (which is a good thing, since those packages consume quite a lot of disk space; e.g. just the amd64 stretch mirror of debian-debug consumes 47GB). Instead there’s a new archive called debian-debug. To get access to the dbgsym packages via the debian-debug suite on your Debian/stretch system, include the following entry in your apt sources.list configuration (replace deb.debian.org with whatever mirror you prefer):

deb http://deb.debian.org/debian-debug/ stretch-debug main

If you’re not yet familiar with usage of such debug packages let me give you a short demo.

Let's start by sending SIGILL (Illegal Instruction) to a running sha256sum process, causing it to generate a so-called core dump file:

% sha256sum /dev/urandom &
[1] 1126
% kill -4 1126
% 
[1]+  Illegal instruction     (core dumped) sha256sum /dev/urandom
% ls
core
$ file core
core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 'sha256sum /dev/urandom', real uid: 1000, effective uid: 1000, real gid: 1000, effective gid: 1000, execfn: '/usr/bin/sha256sum', platform: 'x86_64'
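A hedged aside: if no core file appears on your system, the shell's core file size limit is probably set to zero; raise it with the standard bash builtin and re-run the experiment:

% ulimit -c unlimited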

Now we can run the GNU Debugger (gdb) on this core file, executing:

% gdb sha256sum core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...(no debugging symbols found)...done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in ?? ()
(gdb) bt
#0  0x000055fe9aab63db in ?? ()
#1  0x000055fe9aab8606 in ?? ()
#2  0x000055fe9aab4e5b in ?? ()
#3  0x000055fe9aab42ea in ?? ()
#4  0x00007faec30872b1 in __libc_start_main (main=0x55fe9aab3ae0, argc=2, argv=0x7ffc512951f8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffc512951e8) at ../csu/libc-start.c:291
#5  0x000055fe9aab4b5a in ?? ()
(gdb) 

As you can see by the several “??” question marks, the “bt” command (short for backtrace) doesn’t provide useful information.
So let's install the corresponding debug package, which is coreutils-dbgsym in this case (since the sha256sum binary which generated the core file is part of the coreutils package).
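Assuming the debian-debug sources.list entry from above is in place, that's a plain apt invocation:

% apt update
% apt install coreutils-dbgsym

Then let's rerun the same gdb steps: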

% gdb sha256sum core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526     lib/sha256.c: No such file or directory.
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036

As you can see it’s reading the debug symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug and this is what we were looking for.
gdb now also tells us that we don't have lib/sha256.c available. For even better debugging it's useful to have the corresponding source code available. This is also just an `apt-get source coreutils ; cd coreutils-8.26/` away:

~/coreutils-8.26 % gdb sha256sum ~/core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526           R( h, a, b, c, d, e, f, g, K(25), M(25) );
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036
(gdb) 

Now we’re ready for all the debugging magic. :)

Thanks to everyone who was involved in getting us the automatic dbgsym package builds in Debian!

Krebs on Security: Trump’s Dumps: ‘Making Dumps Great Again’

It’s not uncommon for crooks who peddle stolen credit cards to seize on iconic American figures of wealth and power in the digital advertisements for their shops that run incessantly on various cybercrime forums. Exhibit A: McDumpals, a hugely popular carding site that borrows the Ronald McDonald character from McDonald’s and caters to bulk buyers. Exhibit B: Uncle Sam’s dumps shop, which wants YOU! to buy American. Today, we’ll look at an up-and-coming stolen credit card shop called Trump’s-Dumps, which invokes the 45th president’s likeness and promises to make credit card fraud great again.


One reason thieves who sell stolen credit cards like to use popular American figures in their ads may be that a majority of their clients are people in the United States. Very often we’re talking about street gang members in the U.S. who use their purchased “dumps” — the data copied from the magnetic stripes of cards swiped through hacked point-of-sale systems — to make counterfeit copies of the cards. They then use the counterfeit cards in big-box stores to buy merchandise that they can easily resell for cash, such as gift cards, Apple devices and gaming systems.

When most of your clientele are street thugs based in the United States, it helps to leverage a brand strongly associated with America because you gain instant brand recognition with your customers. Also, a great many of these card shops are run by Russians and hosted at networks based in Russia, and the abuse of trademarks closely tied to the U.S. economy is a not-so-subtle “screw you” to American consumers.

In some cases, the guys running these card shops are openly hostile to the United States. Loyal readers will recall the stolen credit card shop “Rescator” — which was the main source of cards stolen in the Target, Home Depot and Sally Beauty breaches (among others) — was tied to a Ukrainian man who authored a nationalistic, pro-Russian blog which railed against the United States and called for the collapse of the American economy.

In deconstructing the 2014 breach at Sally Beauty, I interviewed a former Sally Beauty corporate network administrator who said the customer credit cards were being stolen with the help of card-stealing malware installed on Sally Beauty point-of-sale devices that phoned home to a domain called "anti-us-proxy-war[dot]com."

Trump’s Dumps currently advertises more than 133,000 stolen credit and debit card dumps for sale. The prices range from just under $10 worth of Bitcoin to more than $40 in Bitcoin, depending on which bank issued the card, the cardholder’s geographic location, and whether the cards are tied to premium, prepaid, business or executive accounts.

A "state of the dumps" address on Trump's-Dumps.

A “state of the dumps” address on Trump’s-Dumps.

Trump’s Dumps is currently hosted on a Russian server that caters to a handful of other high-profile carding shops, including the long-running “Fe-shop” and “Monopoly” dumps stores.

Sites like Trump’s Dumps can be taken offline — by forcing a domain name registrar to revoke the domain — but the people responsible for running this shop have already registered a slew of similar domains and no doubt have fresh bulletproof hosting standing by in case their primary domain is somehow seized.

Also, like many other modern carding sites this one has versions of itself running on the Dark Web — sites that are only accessible using Tor and are far more difficult to force offline.

The home page of Trump’s Dumps takes some literary license with splices of President Trump’s inaugural address (see the above screenshot for the full text):

“WE, THE CITIZENS OF DARK WEB, ARE NOW JOINED IN A GREAT NATIONAL EFFORT TO REBUILD OUR COMMUNITY AND RESTORE ITS PROMISE FOR ALL OF OUR PEOPLE.”

TOGETHER, WE WILL DETERMINE THE COURSE OF CARDING AND THE BLACKHAT COMMUNITY FOR MANY, MANY YEARS TO COME. WE WILL FACE CHALLENGES. WE WILL CONFRONT HARDSHIPS. BUT WE WILL GET THE JOB DONE.”

The U.S. Secret Service, which has the dual role of protecting the President and busting up counterfeiters (including credit card theft rings), declined to comment for this story.

WHO RUNS TRUMP’S DUMPS?

For now, I’m disinclined to believe much about a dox supposedly listing the Trump’s Dumps administrator’s various contacts that was released by one of his competitors in the cybercrime underground. However, there are some interesting clues that tie Trump’s Dumps to a series of hacking attacks on e-commerce providers over the past year. Those clues suggest the criminals behind Trump’s Dumps are massively into stealing credit card data that fuels both card-present and online fraud.

In the “contacts” section of Trump’s Dumps the proprietors list three Jabber instant messenger IDs. All of them end in @trumplink[dot]su. That site is not currently active, but Web site registration records for the domain show it is tied to the email address “rudneva-y@mail.ua.”

A reverse WHOIS website registration record search ordered from domaintools.com [full disclosure: Domaintools is an advertiser on this blog] shows that this email address is associated with at least 15 other domains. Most of those domains appear to have been registered to look like legitimate Javascript calls that many e-commerce sites routinely make to process transactions, such as “js-link[dot]su,” “js-stat[dot]su,” and “js-mod[dot]su” (the full list is in this PDF).

A Google search on those domains produces a report from security firm RiskIQ, which explains how those domains featured prominently in a series of hacking campaigns against e-commerce websites dating back to March 2016. According to RiskIQ, the attacks targeted online stores running outdated and unpatched versions of shopping cart software from Magento, Powerfront and OpenCart.

These same domains showed up in an attack last October when it was revealed that hackers had compromised the Web site for the U.S. Senate GOP Senatorial Committee, among more than 5,900 other sites that accept credit cards. The intruders tinkered with the GOP Committee site’s HTML code to insert calls to domains like “jquery-cloud[dot]net” to hide the fact that they were stealing all credit card data that donors submitted via the Web site.

Cryptogram: Security and Human Behavior (SHB 2017)

I'm at Cambridge University, attending the tenth Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Ross Anderson, Alessandro Acquisti, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is maximum interaction and discussion. We do that by putting everyone on panels. There are eight six-person panels over the course of the two days. Everyone gets to talk for ten minutes about their work, and then there's half an hour of questions and discussion. We also have lunches, dinners, and receptions -- all designed so people from different disciplines talk to each other.

It's the most intellectually stimulating conference of my year, and influences my thinking about security in many different ways.

This year's schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, and ninth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.

I don't think any of us imagined that this conference would be around this long.


Cory Doctorow: Talking WALKAWAY with Reason


My novel WALKAWAY is something of a fusion of the best elements of the anti-authoritarian left and the anti-authoritarian right. In a meaty interview with Reason magazine, I discuss the politics, the economics, and theories of human action with Editor in Chief Katherine Mangu-Ward.

Google AdSense: AdSense Beginner’s Frequently Asked Questions





What is AdSense?

In short, AdSense is a free, simple way to make money by placing ads on your site.

Google’s ad network connects advertisers looking to run their ads on the web with publishers like you, looking to monetise your website, making it simple for everyone to succeed. AdSense connects publishers and advertisers, giving AdWords advertisers the opportunity to bid on ad space on websites like yours. 

By working with an ad network like AdSense, you can spend less time marketing and selling your advertising space and more time focused on creating the best content for your users. Learn more in the AdSense 101 article.

What do I need to sign up for AdSense?

As a site owner, you’ll need pages with unique content that's relevant to your visitors and that provides a great user experience. Before you apply to AdSense, make sure your site's pages are ready by visiting the AdSense Help Center. If you’re ready to turn your #PassionIntoProfit, sign up for AdSense today.

How do I know if I’m eligible to join the AdSense network?

Before applying for an account, make sure that the site you own has been active for at least six months, and complies with the AdSense program policies.

AdSense also works with products such as Blogger and YouTube to allow you to create host partner accounts. To be eligible for a hosted AdSense account via Blogger or YouTube, you must first meet certain eligibility requirements. Learn more about signing up for an AdSense account via YouTube.

How do I monetize my YouTube channel using AdSense?

To start earning money from your video content, you’ll need to apply for an AdSense account to link to your YouTube account. Follow these steps to become eligible to receive payments:
  1. Make sure your YouTube account is enabled for monetization. 
  2. Submit your application to create a new AdSense account to link with your YouTube account. Once approved, you'll see a "Host account" label on your AdSense homepage.
  3. If you also have your own non-host site where you’d like to show ads, then you’ll need to submit a one-time application form to tell us the URL of your site (you only need to complete this step if you want to monetize your site, and don’t need to take this action to earn money from your YouTube videos).
Provided that your site follows the AdSense program policies and you’ve completed the 5 steps to getting paid, Google will send your first payment the month after your account exceeds $100 in earnings.

How can AdSense help me grow my online business?

AdSense helps you to create a revenue stream from the valuable content you host on your site. AdSense saves you time with a quick and easy setup, allowing you to focus on the elements of your business that need your attention.

It’s like an automatic car -- it removes most of the manual adjustments, allowing you to cruise along with less effort. You still need regular “tune ups” to get optimal performance, but you won’t need to shift gears to get from point A to point B.

What is DoubleClick Ad Exchange?

DoubleClick Ad Exchange gives you real-time access to the largest pool of advertisers. The main difference between DoubleClick Ad Exchange and AdSense is that AdSense does a lot of the technical settings and optimization work for you, such as automating the sale of all your ad space to the highest bidder. With DoubleClick Ad Exchange you manage these adjustments yourself, controlling exactly how your inventory is sold.

As an example, DoubleClick Ad Exchange allows you to choose which ad space is for public sale and which is reserved for private auctions. This additional input from you helps you to get the best results from DoubleClick Ad Exchange.

Another important distinction is that through AdSense, demand is primarily from AdWords advertisers, whereas DoubleClick Ad Exchange pulls demand from multiple sources. This means that in addition to AdWords advertisers, you can also access other major ad networks and agencies. Check out the more comprehensive list of differences at our Help Center.

Should I use DoubleClick for my site?

Check out our Choosing the right tools for you article to figure out if solutions from DoubleClick are right for your site.

Can I use AdSense to monetize my WordPress site?

WordPress doesn’t allow advertising on their free hosting plan (WordPress.com). If you’d like to show ads on your WordPress hosted site, then you’ll need to switch to a self-hosted WordPress.org domain. Visit our product forum and learn how to make this switch.

Can I monetize multiple sites under a single AdSense account?

Yes! AdSense allows you to associate multiple sites with a single account. This will enable you to monitor the inventory on your sites from the same place, saving you valuable time. Here’s how to set up multiple sites on your account:

  1. Sign in to your AdSense account.
  2. In the left navigation panel, click Settings.
  3. Click My sites.
  4. Click Add.
  5. Enter the URL of the site that you'd like to add (e.g., example.com). If you're unsure which URL to enter, check the section on formatting the URLs of your sites below.
  6. Click Add site.
  7. Your site is added to your site list with a status of "verified".
Check out the Help Center for more info on this!

I’d like help growing my business with AdSense; does AdSense offer performance growth tips?

Yes it does! When setting up your account, simply check the box beside the “I’d like to receive performance suggestions” message.

If you already have an account, go to the Settings tab and check the “Performance Suggestions” box.

If I’m not using AdSense yet, how can I receive performance tips?

Follow AdSense on social media for daily tips on how to optimize your account and for product updates: Twitter, Google+, and Facebook.

Also, watch the AdSense Optimization Library playlist on YouTube to learn AdSense best practices, and don’t forget to subscribe to our YouTube channel.



How do I get more traffic to my site?

First, you’ll want to take advantage of the options within Google Search Console to make sure your site is visible for search users. The AdSense Help Center has all the information you’ll need to submit your site for index, diagnose any problems, and identify which keyword queries are driving traffic to your site.

Next, follow the Long-term revenue framework, a tool often used by our AdSense optimization experts, to better understand the four levers that can help you grow and develop your site.




Also, catchy titles are a way of pushing referral traffic from social media platforms, and Brandon Gaille wrote a great guest-blog for us that lays out how to approach naming your content.

Be sure to follow AdSense best practice policies when planning your strategy, to avoid violations on your account. It’s important to know that in order to maintain ad serving on your site and keep an AdSense account active, it's the responsibility of the publisher to keep up to date with, and adhere to, the AdSense program policies.

How do I increase clickthrough rates (CTR) in AdSense?

Please see our recent response to this question on Quora where Symone Gamble shared her best practices.

What are responsive ad units?

Responsive ad units automatically control the size of your ads using a single piece of ad code. They also allow ads to be resized after a screen orientation change. Using a responsive ad unit will allow your site to show the most appropriate ad size according to your user's device, and will help you to maximize your revenue potential. Learn more about how responsive ad units work and then check out how to customize responsive ads to match the style of your site.



How can I make my site load faster?

Tools like PageSpeed Insights and Mobile-Friendly Test can be used to audit the speed of your site, identify possible problems, and even suggest solutions. From there, follow these 5 tips to improve page speed published on the Inside AdSense blog and then print out the Ultimate Page Speed Infographic for mobile-specific advice.

Why do I have invalid traffic deductions in my AdSense account?



How do I stop my site getting hacked?

HTTPS ensures that your servers are talking to whom they are expected to be talking to, that the conversation cannot be intercepted by anyone else, and that content cannot be altered in transit.

Make sure your software is up to date: this might seem like common sense, but up-to-date software means there are no known holes or faults that might make you vulnerable to people with less-than-honourable intentions.

Passwords: Using safe, secure, and complex passwords will protect the security of your site. Avoid passwords that contain common words or the names of friends, family and pets, as these are all easily discoverable via social media, leaving you at risk. Don’t forget to change your password regularly; set yourself a reminder to change your password every few months.

What can I do if a site that I don't own is using my AdSense code?

Since site code is readily available to anyone who inspects your page, it's possible for someone to copy your ad code and paste it on a site you don’t own. In this case, if the ad code ends up on a site that violates the AdSense program policies, your account will receive a policy warning.

To prevent this, site authorization is an optional feature that lets you identify your verified sites as the only sites that are permitted to use your Google ad code.

Learn more about site authorization on the AdSense YouTube channel.

How do I stop certain ads from appearing on my site?

The AdSense 101: You’re in control video will show you how to control the ads displayed on your site.

Why is there a decline in my AdSense revenue?

If your earnings have taken a dip, we recommend you check 4 engagement metrics in the performance tab of your AdSense account:
  1. Clickthrough rate (CTR)
  2. Cost per click (CPC)
  3. Page revenue per thousand impressions (page RPM)
  4. Page views
Here are a few resources to help you identify the potential causes, and next-step solutions to solve them.

What are AdSense experiments and how can I run an experiment?

AdSense recently introduced automatic experiments, which allow you to take a back seat as Google runs A/B tests on a small portion of your traffic. To enable the feature, just visit the "Experiments" page on your Optimization tab, and switch on "automatic experiments".

The opportunities generated from these experiments will appear in the “Opportunities” page on your Optimization tab. They’ll be labelled “verified by experiment,” so you’ll know they’re backed by data and tailored to your site and users.



Posted by: Jay Castro from the AdSense team

Krebs on Security: MolinaHealthcare.com Exposed Patient Records

Earlier this month, KrebsOnSecurity featured a story about a basic security flaw in the Web site of medical diagnostics firm True Health Group that let anyone who was logged in to the site view all other patient records. In that story I mentioned True Health was one of three major healthcare providers with similar website problems, and that the other two providers didn’t even require a login to view all patient records. Today we’ll examine a flaw that was just fixed by Molina Healthcare, a Fortune 500 company that until recently was exposing countless patient medical claims to the entire Internet without requiring any authentication.

In April 2017 I received an anonymous tip from a reader who said he'd figured out that just by changing a single number in the Web address when accessing his recent medical claim at MolinaHealthcare.com he could then view any and all other patient claims.

More alarmingly, the link he was given to access his claim with Molina was accessible to anyone who had the link; no authentication was required to view it. Nor was any authentication required to view any other records, which could be accessed simply by fiddling with the numbers after the claimID= bit at the end of the MolinaHealthcare.com address (e.g., claimID=123456789).

In other words, having access to a single hyperlink to a patient record would allow an attacker to enumerate and download all other claims. The source showed me screenshots of his medical records at Molina, and how when he changed a single number in the URL it happily displayed another patient’s records.

The records did not appear to include Social Security numbers, but they do include patient names, addresses and dates of birth, as well as potentially sensitive information that may point to specific diseases, such as medical procedure codes and any prescribed medications.

I contacted Molina about the issue, and the company released a brief statement saying it had fixed the problem. Molina also said it was trying to figure out how such a mistake was made, and if there was any evidence to suggest the Web site bug had been widely abused.

“The previously identified security issue has been remediated,” the company said. “Because protecting our members’ information is of utmost importance to Molina and out of an abundance of caution, we are taking our ePortal temporarily offline to perform additional testing of our system security. Molina has also engaged Mandiant to assist the company in continuing to strengthen our system security.”

The company declined to say how many records may have been exposed, but it looks like potentially all of them.

Headquartered in Long Beach, Calif., Molina Healthcare was ranked 201 in 2016 in the Fortune 500. It’s unconscionable that such a basic, Security 101 flaw could still exist at a major healthcare provider today. However, the more I write about these lame but otherwise very serious vulnerabilities at healthcare firms the more I hear about how common they are from individual readers.

Since that True Health Group story was published, I’ve heard about and confirmed two very similar flaws at healthcare/insurance companies. Please keep the tips coming, Dear Readers, and I will do my best to encourage these companies to do more than just pay lip service to security.

TED: TED Prize winner Sarah Parcak unearths ancient mysteries on “60 Minutes”

What’s the best way to find something lost on the ground, like a historical site from a civilization lost to time? For archaeologist Sarah Parcak, the answer’s obvious — from way up above, using satellites, of course. As a space archaeologist, she’s mapped the lost city of Tanis (of Indiana Jones: Raiders of the Lost Ark fame) and identified thousands of other potential ancient sites in Iceland, Europe and across North Africa — and now she’s letting everyone in on the fun with her $1 million TED Prize wish, GlobalXplorer.

To get an up-close introduction to the revolutionary techniques of space archaeology, 60 Minutes joined Parcak at her tomb excavation site in Lisht, Egypt, a village 40 minutes south of Cairo with a history dating back more than 4,000 years.

When they arrived, the biggest find of the season had just been unearthed — a hand, and a piece of stone tablet describing a powerful man, inscribed with one name: Intef. Interestingly, the slab is damaged in a way that hints it might have been intentionally desecrated. “Did he step on too many people on his way to the top?” Parcak speculates. “Who was this guy? What did he do?”

“But that’s what makes archeology interesting,” says Parcak. “It’s like you’re reading the ancient version of the National Enquirer in slow time.”

Yet, ironically, archaeologists like Sarah are in a perpetual race against time — hoping to find and secure ancient sites before they can be looted.  

So far, less than 10% of the Earth has been explored and secured by archeologists, leaving many sites vulnerable to looting. For instance, after the Arab Spring in 2011, hundreds of ancient sites and antiquities in Egypt were left unprotected and open for pillage. Looking at satellite images, Parcak was able to identify some 800 places where looters were digging into unprotected tombs to bring out antiquities for sale. When they saw the satellite evidence of looting, the Egyptian government asked Parcak to excavate Intef’s tomb at Lisht, to preserve and protect what remains.

This isn’t a new development — looting, says Parcak, has been going on for thousands of years, at a cost to history that’s priceless.

“The most important thing for archeological discovery is context,” she tells 60 Minutes. “That’s why for us, as archeologists, looting is such a huge problem. Because when an object is taken out of its original context, we don’t know where it comes from. We can’t tell you anything about it aside from, ‘Well, it’s a mummy, or, ‘It’s a statue.’ But that’s kind of it. The story doesn’t get told.”

Which is why Parcak is so excited about GlobalXplorer, which lets thousands of people help pore over satellite maps together to find potentially historic sites — which local governments can then help secure for future generations to learn from. Join her and thousands of other citizen scientists (now scouring Peru) in the fight to protect history and our global heritage.


CryptogramRansomware and the Internet of Things

As devastating as the latest widespread ransomware attacks have been, it's a problem with a solution. If your copy of Windows is relatively current and you've kept it updated, your laptop is immune. It's only older, unpatched versions of Windows that are vulnerable.

Patching is how the computer industry maintains security in the face of rampant Internet insecurity. Microsoft, Apple and Google have teams of engineers who quickly write, test and distribute these patches, updates to the code that fix vulnerabilities in software. Most people have set up their computers and phones to automatically apply these patches, and the whole thing works seamlessly. It isn't a perfect system, but it's the best we have.

But it is a system that's going to fail in the "Internet of things": everyday devices like smart speakers, household appliances, toys, lighting systems, even cars, that are connected to the web. Many of the embedded networked systems in these devices that will pervade our lives don't have engineering teams on hand to write patches and may well last far longer than the companies that are supposed to keep the software safe from criminals. Some of them don't even have the ability to be patched.

Fast forward five to 10 years, and the world is going to be filled with literally tens of billions of devices that hackers can attack. We're going to see ransomware against our cars. Our digital video recorders and web cameras will be taken over by botnets. The data that these devices collect about us will be stolen and used to commit fraud. And we're not going to be able to secure these devices.

Like every other instance of product safety, this problem will never be solved without considerable government involvement.

For years, I have been calling for more regulation to improve security in the face of this market failure. In the short term, the government can mandate that these devices have more secure default configurations and the ability to be patched. It can issue best-practice regulations for critical software and make software manufacturers liable for vulnerabilities. It'll be expensive, but it will go a long way toward improved security.

But it won't be enough to focus only on the devices, because these things are going to be around and on the Internet much longer than the two to three years we use our phones and computers before we upgrade them. I expect to keep my car for 15 years, and my refrigerator for at least 20 years. Cities will expect the networks they're putting in place to last at least that long. I don't want to replace my digital thermostat ever again. Nor, if I ever need one, do I want a surgeon to ever have to go back in to replace my computerized heart defibrillator in order to fix a software bug.

No amount of regulation can force companies to maintain old products, and it certainly can't prevent companies from going out of business. The future will contain billions of orphaned devices connected to the web that simply have no engineers able to patch them.

Imagine this: The company that made your Internet-enabled door lock is long out of business. You have no way to secure yourself against the ransomware attack on that lock. Your only option, other than paying, and paying again when it's reinfected, is to throw it away and buy a new one.

Ultimately, we will also need the network to block these attacks before they get to the devices, but there again the market will not fix the problem on its own. We need additional government intervention to mandate these sorts of solutions.

None of this is welcome news to a government that prides itself on minimal intervention and maximal market forces, but national security is often an exception to this rule. Last week's cyberattacks have laid bare some fundamental vulnerabilities in our computer infrastructure and serve as a harbinger. There's a lot of good research into robust solutions, but the economic incentives are all misaligned. As politically untenable as it is, we need government to step in to create the market forces that will get us out of this mess.

This essay previously appeared in the New York Times. Yes, I know I'm repeating myself.

EDITED TO ADD: A good cartoon.

Worse Than FailureIcon on Fire

Tim joined a company that provided a SaaS solution for tracking attendance and grades in schools. The job was mostly minor updates to a ColdFusion application, although there was an active project to replace it with something more modern. Tim felt like half of his hiring was based on him knowing when to throw out buzzwords like SPA or REST or Reactive Programming.


“It’s not the first time,” Karmen explained. She’d been with the company for some time. “When I joined, they had just upgraded to ColdFusion from a VBA hack on Microsoft Access. Crazy days, back then, when the whole ‘selling service, not software’ thing was new. Sometimes, I think I was hired because I knew the right buzzwords.”

Ostensibly, Karmen and Tim were meant to be focused on the new version of the application, but the ColdFusion version was so old and so poorly maintained that they spent 75% of their time in firefighting mode. The rest of the team had neatly adapted to the culture of “just put out the fire, and don’t worry about code quality, there’ll be a new fire tomorrow”, which of course only created more fires to put out.

One day, near the end of the month, the webmaster inbox had an alert message from their webhost.

This is to alert you that your account is nearing the data transfer threshold. Your account has transferred 995GB of data during this billing period, with a total account limit of 1,000GB of data allowed. At your pricing tier, overages will be charged at $10/GB.

“Wow, what?” Tim was pretty shocked. He knew they had a goodly number of schools, and were tracking many thousands of students, but there was no way that the application should be serving up 1TB of data in a month. 95% of the application was just text, and while they did have photographs of every student, those photos were only 9kb after resizing and compressing.

“Oh, yeah,” Karmen said. “I heard about this a few years back. We had to upgrade to the higher plan with our host. Guess they’ll probably do that again.”

“I mean, don’t you think this is wrong?”

“Probably, sure, but… y’know. Not my circus, not my monkeys.” Karmen shrugged, and got back to work on a different fire.

Tim fired up the browser debugger and loaded a page in test that was sure to have a lot of pictures, the heaviest page they served. With 50 student images displayed, the payload of the HTML and assets was a whopping 452KB. That covered everything… except for one file.

The favicon.ico weighed in at 307kb. Apparently, at some point in the past, someone had decided that it needed to look good at any size. Since the ICO format lets you store multiple versions of the image at different resolutions and bit-depths, they had made sure to include everything up to 256x256 at 32 bits per color. Ironically, the source image had probably been much smaller, because the 256x256 version showed clear signs of having been upscaled.

Compounding the problem, since once-upon-a-time there had been issues with browsers serving up cached versions of pages, their web server had been configured to disable caching for every file served, guaranteeing that the favicon would be transferred for every request.

307kb wasn’t a lot of data, but it was certainly a lot for a favicon. Even at a massive 256x256 resolution, given the design of their icon, he could fit it into a PNG a tiny fraction of that size, and every decent browser supported it. A quick check of their traffic showed that they still had a good number of users on old versions of IE that couldn’t support anything but ICO files, so he cut the massive resolutions out of the ICO file, and whipped up a little CFML that would serve the ICO to those users, while everyone else got the PNG.
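
For the curious, the same trick is easy to sketch outside CFML. Here's a minimal Python/Flask version of the idea (not the article's actual code; the file paths and the MSIE/Trident user-agent check are illustrative assumptions):

    from flask import Flask, request, send_file

    app = Flask(__name__)

    @app.route('/favicon.ico')
    def favicon():
        # Old Internet Explorer versions only understand ICO, so sniff
        # the user agent and serve the slimmed-down ICO just to them.
        ua = request.headers.get('User-Agent', '')
        if 'MSIE' in ua or 'Trident' in ua:
            return send_file('static/favicon.ico', mimetype='image/x-icon')
        # Everyone else gets the far smaller PNG.
        return send_file('static/favicon.png', mimetype='image/png')

Pair that with sane cache headers, and the favicon stops dominating the bandwidth bill.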

That cut their traffic nearly in half, but Tim didn’t get much chance to celebrate: there was another fire to put out.


Cory DoctorowLiverpool, I’ll see you tonight on the Walkaway tour! (then Birmingham, Hay-on-Wye, San Francisco…) (!)


Thanks to everyone who came out for last night’s final London event on the UK Walkaway tour, at Pages of Hackney with Olivia Sudjic; today I’m heading to Waterstones Liverpool One for an event with Dr Chris Pak, followed by a stop tomorrow at Waterstones in Birmingham and then wrapping up in the UK with an event with Adam Rutherford at the Hay Festival.


Then I hit the road again in the USA, with stops at the Bay Area Book Festival, BookCon NYC, ALA Chicago, Printers Row Chicago, Denver Comic-Con, San Diego Comic-Con, and Defcon Las Vegas.

Planet DebianMichael Prokop: The #newinstretch game: new forensic packages in Debian/stretch

Repeating what I did for the last Debian releases with the #newinwheezy and #newinjessie games, it’s time for the #newinstretch game:

Debian/stretch AKA Debian 9.0 will include a bunch of packages for people interested in digital forensics. The packages maintained within the Debian Forensics team which are new in the Debian/stretch release as compared to Debian/jessie (and ignoring jessie-backports):

  • bruteforce-salted-openssl: try to find the passphrase for files encrypted with OpenSSL
  • cewl: custom word list generator
  • dfdatetime/python-dfdatetime: Digital Forensics date and time library
  • dfvfs/python-dfvfs: Digital Forensics Virtual File System
  • dfwinreg: Digital Forensics Windows Registry library
  • dislocker: read/write encrypted BitLocker volumes
  • forensics-all: Debian Forensics Environment – essential components (metapackage)
  • forensics-colorize: show differences between files using color graphics
  • forensics-extra: Forensics Environment – extra console components (metapackage)
  • hashdeep: recursively compute hashsums or piecewise hashings
  • hashrat: hashing tool supporting several hashes and recursivity
  • libesedb(-utils): Extensible Storage Engine DB access library
  • libevt(-utils): Windows Event Log (EVT) format access library
  • libevtx(-utils): Windows XML Event Log format access library
  • libfsntfs(-utils): NTFS access library
  • libfvde(-utils): FileVault Drive Encryption access library
  • libfwnt: Windows NT data type library
  • libfwsi: Windows Shell Item format access library
  • liblnk(-utils): Windows Shortcut File format access library
  • libmsiecf(-utils): Microsoft Internet Explorer Cache File access library
  • libolecf(-utils): OLE2 Compound File format access library
  • libqcow(-utils): QEMU Copy-On-Write image format access library
  • libregf(-utils): Windows NT Registry File (REGF) format access library
  • libscca(-utils): Windows Prefetch File access library
  • libsigscan(-utils): binary signature scanning library
  • libsmdev(-utils): storage media device access library
  • libsmraw(-utils): split RAW image format access library
  • libvhdi(-utils): Virtual Hard Disk image format access library
  • libvmdk(-utils): VMWare Virtual Disk format access library
  • libvshadow(-utils): Volume Shadow Snapshot format access library
  • libvslvm(-utils): Linux LVM volume system format access library
  • plaso: super timeline all the things
  • pompem: Exploit and Vulnerability Finder
  • pytsk/python-tsk: Python Bindings for The Sleuth Kit
  • rekall(-core): memory analysis and incident response framework
  • unhide.rb: Forensic tool to find processes hidden by rootkits (was already present in wheezy but missing in jessie, available via jessie-backports though)
  • winregfs: Windows registry FUSE filesystem

Join the #newinstretch game and present packages and features which are new in Debian/stretch.

Planet Linux AustraliaOpenSTEM: This Week in HASS, term 2 week 5

NAPLAN’s over and it’s time to sink our teeth into the main body of curriculum work before mid-year reporting rolls around. Our younger students are using all their senses to study the environment and local area around them, whilst our older students are hard at work on their Explorer projects.

Foundation/Prep/Kindy to Year 3

Unit F.2 for stand-alone Foundation/Prep/Kindy classes has the students continuing to think about their Favourite Place. This week students are considering what they can hear in their Favourite Place and how they will depict that in their model of their Favourite Place. Students can also think about what their Favourite Sounds are and whether or not these would occur in their Favourite Place. Students in integrated Foundation/Prep/Kindy classes (Unit F.6) and Years 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) have this week set aside for an excursion to a local park or area of heritage significance. If an excursion outside school grounds is impractical, teachers can achieve similar results with an excursion around the school and oval. Students are using their senses to interpret their environment, as well as thinking about living and non-living things, natural and managed landscapes and sources of heat and light.

Years 3 to 6

Students in Years 3 (Unit 3.6), 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are continuing their project on an explorer. This week the focus for most students is on animals which may have been encountered by their explorer. Year 3 students are examining animals from different climate zones and how they are adapted to deal with climate extremes. Students in Years 4 and 5 look at extinct animals from Africa, South America or North America, assessing impact and sustainability issues. Students in Year 4 (and optionally as an extension for Year 3) consider the life cycle of their chosen animal. Students in years 4, 5 and 6 also start to examine the differences between Primary and Secondary sources and some of the OpenSTEM resources contain quotes or copies of primary material, so that students can refer to these in their project. Year 6 students are examining the changing Economies and Politics of Asia through time, in order to place the explorations within a broader context and to gain a greater understanding of the development of the global situation. Students have another 2 weeks to complete their presentation on their explorer (including environment and other aspects), before assessment of this project.

Planet DebianJaldhar Vyas: For Downtown Hoboken

Q: What should you do if you see a spaceman?

A: Park there before someone takes it, man.

Planet Linux AustraliaDavid Rowe: Urban HF Noise

Over the past 30 years, HF radio noise in urban areas has steadily increased. S6-S9 noise levels are common, which makes it hard to listen to the signals we want to receive.

I’ve been wondering if we can attenuate this noise using knowledge of the properties of the noise, and some clever DSP. Even 6dB would be useful, that’s like the transmitting station increasing their power by a factor of 4. I’ve just spent 2 months working on a 4dB improvement in my FreeDV work. So this week I’ve been messing about with pen and paper and a few simulations, exploring the problem of man-made noise on HF radio.

PWM Noise

One source of noise is switching power supplies, which have short, high current pulses flowing through them at a rate of a few hundred kHz. A series of short impulses in the time domain produces a series of spectral lines (i.e. sinusoids or tones) in the frequency domain, so a 200kHz switcher produces tones at 200kHz, 400kHz, 600kHz etc. These tones are the “birdies” we hear as we tune our HF radios. The shorter the pulses are, the higher in frequency they will extend.

Short pulses lead to efficient switch mode power supplies, which is useful for energy efficiency, and especially desirable for high power devices like electric car chargers and solar panel inverters. So the trend is shorter switching times, higher currents and therefore more HF noise.

The power supplies vary the PWM pulse-width back and forth as they adapt to changing conditions, which introduces a noise component. This is similar to phase noise in oscillators, and causes a continuous noise floor to appear in addition to the tones. The birdies we can tune around, but the noise floor sets a limit on urban HF operations.

The Octave script impulse_noise.m was used to generate the plots in this post. Here is a plot of some PWM impulse samples (top), and the HF spectrum.

I’ve injected a “wanted” signal at 1MHz for comparison. Given a switcher frequency of 255kHz, with 0.1V impulse amplitude, the noise floor is -90dBV down, or about 10uV. This is S5-S6 level noise, assuming 0.1V impulse amplitude induced onto our antenna by local switcher noise, e.g. nearby house wiring, or the neighbors TV. These numbers seem reasonable and match what we hear in our receivers.
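
If you want to play along without Octave, a rough Python equivalent is easy to put together. This is a sketch of the same experiment, not a port of impulse_noise.m; the sample rate, pulse amplitude and jitter model are my assumptions:

    import numpy as np

    fs = 4e6                   # sample rate, Hz (assumed)
    f_sw = 255e3               # switcher frequency from the example above
    n = 2**18

    # Impulse train: one short 0.1V pulse per switching period, with a
    # little timing jitter to model the PWM adjustments.
    period = int(fs / f_sw)
    edges = np.arange(0, n - period, period)
    jitter = np.random.randint(0, 3, len(edges))
    x = np.zeros(n)
    x[edges + jitter] = 0.1

    X = np.fft.rfft(x * np.hanning(n))
    f = np.fft.rfftfreq(n, 1 / fs)
    spectrum_db = 20 * np.log10(np.abs(X) / n + 1e-12)
    # Tones appear at 255kHz, 510kHz, ... and the jitter contributes a
    # continuous noise floor between them.

Without the jitter the spectrum collapses back to clean spectral lines, which is a nice way to convince yourself where the noise floor comes from.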

Single Pulses

Single, isolated pulses are an easier problem. Examples are lightning or man-made sources that produce pulses at a rate slower than the bandwidth of the signal we are interested in.

A single impulse produces a flat spectrum, so the noise at frequency f Hz is almost the same as the noise at frequency f+delta Hz, where delta is small. This means you can use the noise at frequencies next to the one you are interested in to estimate and remove the noise in your frequency of interest.

Here is an impulse that lasts two samples; the magnitude spectrum changes slowly, although the phase changes quickly due to the time offset of the impulse.

Turns out that if the impulse position is known, and most of the energy is confined to that impulse, we can make a reasonable estimate of the noise at one frequency, from the noise at adjacent frequencies. Below we estimate the phase and magnitude (green cross) of frequency bin H(k+1) (nearby blue cross) from bin H(k). I’ve actually plotted H(k-1), H(k), and H(k+1) for comparison. The error in the estimation is -44dB down, so that’s a lot of noise removed.
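
The maths behind that estimate is easy to check numerically. A sketch (my construction, not the Octave code from the post): for a single impulse at a known position n0, each DFT bin is just the previous bin rotated by a fixed phase step, so H(k+1) can be predicted from H(k).

    import numpy as np

    N = 512
    n0 = 137                        # known impulse position (arbitrary)
    h = np.zeros(N)
    h[n0] = 1.0                     # one-sample impulse
    H = np.fft.fft(h)

    # H[k] = exp(-2j*pi*k*n0/N), so stepping from bin k to bin k+1 just
    # rotates the phase by -2*pi*n0/N radians:
    k = 100
    H_est = H[k] * np.exp(-2j * np.pi * n0 / N)
    err = abs(H[k + 1] - H_est) / abs(H[k + 1])
    print(20 * np.log10(err + 1e-300))

For this ideal one-sample impulse the error sits at numerical precision; spread the energy over two samples and the estimate degrades to something like the -44dB figure above.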

Unfortunately this gets harder when there are multiple impulses in the same time window, and I can’t work out how to remove the noise in this case. However this idea might be useful for some classes of impulse noise.

Noise Blanker

Another idea I tried was “blanking” out the impulses, by opening and closing a switch so that the impulses are not allowed into the receiver. This works OK when we have a wideband signal, but falls over when just a bandpass version is available. In the bandpass version the “pulse” is smeared over time and we are no longer able to gate it out.

There will also be problems dealing with multiple PWM signals, that have different timing and frequency.

I haven’t looked at samples of the RF received from any real world switcher signals yet. I anticipate the magnitude and phase of the switcher signal will be all over the place, due to some tortuous transfer function between the switcher and the terminals of my receiver. Plus various other signals will be present. Possibly there is a wide spectrum (short noise pulses) that we can work with. However I’d much rather deal with narrow bandpass signals consisting of just our wanted signal plus the switcher noise floor.

Next Steps

I might get back to my FreeDV work now, and leave this work on the back burner. I do feel I’m getting my head around the problem, and developing a “bag of tricks” that will be useful when other pieces fall into place.

The urban noise appears to be localised, e.g. if you head out into the country the background noise level is much lower. This suggests it’s coupled into the HF antenna by some local effect like induction. So another approach is to estimate the noise using a separate receiver that just picks up the local noise, through a sense antenna that is inefficient for long distance HF signals.

The local noise sequence could then be subtracted from the HF signal. I am aware of analog boxes that do this, using a magnitude and phase network to match the differences in signals received by the sense and HF antennas.

However a DSP approach will allow a more complex relationship (like an impulse response that extends for several microseconds) between the two antenna signals, and allow automatic adjustment. The noise spectrum can change quickly, as PWM is modulated and multiple devices turn on and off in the neighborhood. However the relationship between the two antennas will change slowly if they are fixed in space. This problem reminds me of echo cancellation, something I have played with before. Given radio hardware is now very cheap ($20 SDR dongles), multiple receivers could also be used.
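
As a concrete starting point, the echo-canceller analogy suggests a standard LMS adaptive filter: adapt an FIR estimate of the coupling path between the sense antenna and the HF antenna, then subtract the predicted noise. A sketch under those assumptions (the tap count and step size mu are illustrative guesses, not tuned values):

    import numpy as np

    def lms_cancel(primary, reference, ntaps=32, mu=0.01):
        """Subtract noise from primary (HF antenna) using the
        correlated reference (sense antenna)."""
        w = np.zeros(ntaps)
        out = np.zeros(len(primary))
        for n in range(ntaps, len(primary)):
            x = reference[n - ntaps:n][::-1]   # newest sample first
            noise_hat = np.dot(w, x)           # predicted coupled noise
            e = primary[n] - noise_hat         # wanted signal + residual
            w += mu * e * x                    # LMS weight update
            out[n] = e
        return out

If the coupling path between the two antennas really does change slowly, the weights converge and stay put, just as in acoustic echo cancellation.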

So my gut feel remains that HF urban noise can be reduced to some extent (e.g. 6 or 12dB suppression) using DSP. If those nasty PWM switchers are inducing RF voltages into our antennas, we can work out a way to subtract those voltages.

,

Planet DebianSteve Kemp: Getting ready for Stretch

I run about 17 servers. Of those about six are very personal and the rest are a small cluster which are used for a single website. (Partly because the code is old and in some ways a bit badly designed, partly because "clustering!", "high availability!", "learning!", "fun!" - seriously I had a lot of fun putting together a fault-tolerant deployment with haproxy, ucarp, etc, etc. If I were paying for it the site would be both retired and static!)

I've started the process of upgrading to stretch by picking a bunch of hosts that do things I could live without for a few days - in case there were big problems, or I needed to restore from backups.

So far I've upgraded:

  • master.steve
    • This is a puppet-master, so while it is important killing it wouldn't be too bad - after all my nodes are currently setup properly, right?
    • Upgrading this host changed the puppet-server from 3.x to 4.x.
    • That meant I had to upgrade all my client-systems, because puppet 3.x won't talk to a 4.x master.
    • Happily jessie-backports contains a recent puppet-client.
    • It also meant I had to rework a lot of my recipes, in small ways.
  • builder.steve
    • This is a host I use to build packages upon, via pbuilder.
    • I have chroots setup for wheezy, jessie, and stretch, each in i386 and amd64 flavours.
  • git.steve
    • This is a host which stores my git-repositories, via gitbucket.
    • While it is an important host in terms of functionality, the software it needs is very basic: nginx proxies to a java application which runs on localhost:XXXX, with some caching magic happening to deal with abusive clients.
    • I do keep considering using gitlab, because I like its runners, etc. But that is pretty resource intensive.
    • On the other hand If I did switch I could drop my builder.steve host, which might mean I'd come out ahead in terms of used resources.
  • leave.steve
    • Torrent-box.
    • Upgrading was painless, I only run rtorrent, and a simple object storage system of my own devising.

All upgrades were painless, with only one real surprise - the attic-backup software was removed from Debian.

Although I do intend to retry using Lars' excellent obnam in the near future, pragmatically I wanted to stick with what I'm familiar with. Borg backup is a fork of attic I've been aware of for a long time, but I never quite had a reason to try it out. Setting it up pretty much just meant editing my backup-script:

s/attic/borg/g

Once I did that, and created some new destinations all was good:

borg@rsync.io ~ $ borg init /backups/git.steve.org.uk.borg/
borg@rsync.io ~ $ borg init /backups/master.steve.org.uk.borg/
borg@rsync.io ~ $ ..

Upgrading other hosts, for example my website(s), and my email-box, will be more complex and fiddly. On that basis they will definitely wait for the formal stretch release.

But having a couple of hosts running the frozen distribution is good for testing, and to let me see what is new.

LongNowGöbekli Tepe and the Worst Day in History

Technological advances are revolutionizing the field of archaeology, resulting in new discoveries that are upending our previous understanding of the birth of civilization. Many scholars believe that few will be as consequential as Göbekli Tepe.

The ruins of Göbekli Tepe. Photograph by Vincent J. Musi.


In 01963, anthropologists from the University of Chicago and the University of Istanbul surveyed ruins atop a hill in Southern Turkey that the locals called Göbekli Tepe (“potbelly hill” in Turkish). Examining the broken limestone slabs dotting the site, the anthropologists concluded that the mound was nothing more than a Byzantine cemetery—a dime a dozen in the ruin-rich Levant region.

Three decades later, German archaeologist Klaus Schmidt made a startling claim: Göbekli Tepe was the site of the world’s oldest temple. Geomagnetic surveys of the site revealed circles of  limestone megaliths dating back 11,600 years—seven millennia before the construction of Stonehenge and the Great Pyramids of Giza, six millennia before the invention of writing, and five centuries before the development of agriculture.

Photograph by Vincent J. Musi.


The implications of Schmidt’s discoveries were profound, and called into question previous archaeological and scientific understandings about the Neolithic Revolution, the key event in human development pointed to as the birth of human civilization. “We used to think agriculture gave rise to cities and later, to civilization,” journalist Charles Mann wrote in a 02011 National Geographic cover story on the site. “[Göbekli Tepe] suggests the urge to worship sparked civilization.” As Andrew Curry of The Smithsonian put it after a visit to Göbekli Tepe with Schmidt:

Scholars have long believed that only after people learned to farm and live in settled communities did they have the time, organization and resources to construct temples and support complicated social structures. But Schmidt argues it was the other way around: the extensive, coordinated effort to build the monoliths literally laid the groundwork for the development of complex societies.

Einkorn wheat was first domesticated near Göbekli Tepe—perhaps, posits Charles Mann, to feed those who came to worship. Photo by Vincent J. Musi.


Schmidt believed that humans made pilgrimages to Göbekli Tepe from as far away as 90 miles. But then there’s the question of what, exactly, these pilgrims were worshipping. As Curry mused after his visit to Göbekli Tepe:

What was so important to these early people that they gathered to build (and bury) the stone rings? The gulf that separates us from Gobekli Tepe’s builders is almost unimaginable. Indeed, though I stood among the looming megaliths eager to take in their meaning, they didn’t speak to me. They were utterly foreign, placed there by people who saw the world in a way I will never comprehend. There are no sources to explain what the symbols might mean.

In a March 02017 article in the Journal of Mediterranean Archaeology and Archaeometry, Martin B. Sweatman and Dimitrios Tsikritsis proposed a bold theory: the pillars are telling the story of a comet hitting the earth and triggering an ice age some 13,000 years ago. The comet strike, known as the Younger Dryas Impact Event, is hypothesized to have set off a global cooling period that depleted hunter-gatherer resources and forced humans to settle into areas where they could cultivate crops.

On the left, an artistic rendering of the Younger Dryas Impact Event. On the right, the night sky around 10,950 BC when the impact hypothetically occurred. Image: Martin B. Sweatman and Dimitrios Tsikritsis


Combining the approaches of astronomy and archaeology, Sweatman and Tsikritsis claim that the animals carved on the pillars depict constellations, with the famous vulture stone indicating a time stamp of the night sky at the time of the catastrophe. Using computer software, Sweatman and Tsikritsis matched the animal carvings to patterns of the stars, yielding four possibilities that synced up to their astronomical interpretations, plus or minus 250 years: 02000, 4350 BCE, 10,950 BCE, and 18,000 BCE.

The date of 10,950 BCE aligns with the latest hypotheses as to when the Younger Dryas Impact Event occurred, lending credence to Sweatman and Tsikritsis’ interpretation that the Vulture Stone depicts what Sweatman calls “probably the worst day in history since the end of the Ice Age.”

The famous vulture stone, which Sweatman and Tsikritsis claim depicts the constellations of the night sky. Photo by Vincent J. Musi.


But, as Becky Ferreira of Motherboard reports, there’s reason to regard Sweatman and Tsikritsis’ claims with skepticism. For one, many scholars do not accept the Younger Dryas Impact Hypothesis that a comet strike served as the catalyst for the Ice Age that followed. Some have also criticized Sweatman and Tsikritsis’ study for omitting crucial information to make their case. Archaeologist Jens Notroff, a researcher at the Göbekli Tepe site, takes Sweatman and Tsikritsis to task for failing to mention that the headless man on the vulture stone, which they claim symbolizes the devastating loss of human life after the comet, also possesses an erect phallus—hardly a robust indicator of loss of life.

“There’s more time between Gobekli Tepe and the Sumerian clay tablets [etched in 3300 B.C.] than from Sumer to today,” says Gary Rollefson, an archaeologist at Whitman College in Walla Walla, Washington. “Trying to pick out symbolism from prehistoric context is an exercise in futility.”

Perhaps. But if the recent archaeological discoveries are any indication, we are often mistaken in our assumptions about the complexity and historic trajectory of ancient civilizations. Time will tell. And technology will help.

FURTHER READING

As of this writing, Sweatman and Tsikritsis are working on a rebuttal to critiques of their paper.

Read Sweatman and Tsikritsis’ article in full.

Göbekli Tepe was added to the UNESCO World Heritage Tentative List five years ago and is expected to become a protected UNESCO Heritage site next year.

Read our February 02017 feature on how one historian is combining the approaches of comparative mythology and evolutionary biology with new computational modeling technologies to reconstruct some of humanity’s oldest myths.

Read Charles Mann’s National Geographic story in full. Mann also gave a Seminar for Long Now in April 02012.

Sociological ImagesPitting homeless vets against Syrian refugees: A theme for online right-wing activism

When we see individuals in camouflage holding cardboard signs and asking for spare change, homelessness among veterans can seem like an epidemic. Recently, however, government efforts to reduce veteran homelessness have had great success. In response to a federal strategy known as Opening Doors, veteran homelessness has declined by almost 50% since 2010. And in that time period some cities, such as New Orleans, have reported veteran homelessness at functional zero.

You would never know it from social media. As the world has grappled with the Syrian civil war, political memes have emerged in the U.S. that make the case that we should prioritize homeless veterans over Syrian refugees. These memes foreground a competition between homeless veterans and Syrian refugees in order to make a misleading, emotionally-appealing argument against the resettlement of Syrian refugees.

Deliberately or not, the online images are similar to propaganda. Actors create emotionally-charged illustrations with biased and one-sided evidence to encourage a political point. The memes push a narrative of homeless veterans as overlooked by the government, even though this runs counter to the facts. They also suggest the fallacious argument that the Department of Veterans Affairs will lose funds because of the refugee resettlement program. This is not the case.

At the same time, the memes appeal to our sentiments. Rebecca Ruiz, a features writer for Mashable, contends that memes like these pose the emotional question, “If people in the U.S. are suffering, why are we helping refugees?” What if veterans are the ones being slighted? This is a powerful idea because Americans revere veterans.

In Coming Home: Attitudes toward U.S. Veterans Returning from Iraq, sociologists Alair MacLean and Meredith Kleykamp argue that male veterans involved in recent military-related combat are still supported by the general public, even in light of the idea that those exposed to combat have mental health issues and substance abuse problems. They add that veterans are privileged by symbolic capital, or prestige related to their service. A meme that presents veterans as treated unfairly is likely to produce an emotional reaction, something that is known to simplify our thinking and decision-making.

While the digital messages premised on helping veterans are compelling, they are false and a strategic exploitation of our feelings, one with xenophobic, white nationalist, and anti-immigrant goals. They urge us to advocate against Syrian resettlement to solve an unrelated problem that is already diminishing.

Ian Nahan has a Bachelor’s of Arts degree in both sociology and social work. He plans on working with veterans once he obtains a master’s degree in social work at the University of Pennsylvania.

(View original at https://thesocietypages.org/socimages)

Planet DebianJonathan Dowland: yakking

I've written a guest post for the Yakking Blog — "A WadC successor in Haskell?". It's mainly about Haskell, with WadC as a use-case for a thought experiment.

Yakking is a collaborative blog geared towards beginner software engineers that is put together by some friends of mine. I was talking to them about contributing a blog post on a completely different topic a while ago, but that has not come to fruition (there or anywhere, yet). When I wrote up the notes that formed the basis of this blog post, I realised it might be a good fit.

Take a look at some of their other posts, and if you find it interesting, subscribe!

CryptogramHacking Fingerprint Readers with Master Prints

There's interesting research on using a set of "master" digital fingerprints to fool biometric readers. The work is theoretical at the moment, but they might be able to open about two-thirds of iPhones with these master prints.

Definitely something to keep watching.

Research paper (behind a paywall).

Worse Than FailureCodeSOD: Lucee Execution


Recently, at my dayjob, I had a burning need to understand how scheduled tasks work. You see, we've recently switched from Adobe Coldfusion to Lucee, and I was shaky on how Adobe did things before, so I wanted a deeper understanding of how the code I was working on would be executed. For the uninitiated, Lucee is an open-source reimplementation of Cold Fusion. And that's not the WTF.

It's open source, I thought to myself. I'll just take a look at the code.

I had one problem. Then I looked at the code. Now I have two problems ... and a headache:


private long calculateNextExecution(long now, boolean notNow) {
	long nowTime=util.getMilliSecondsInDay(timeZone,now);
	long nowDate=now-nowTime;
		
		
	// when second or date intervall switch to current date
	if(startDate<nowDate && (cIntervall==Calendar.SECOND || cIntervall==Calendar.DATE))
		startDate=nowDate;
		
	// init calendar
	Calendar calendar = JREDateTimeUtil.getThreadCalendar(timeZone);
	calendar.setTimeInMillis(startDate+startTime);
		
	long time;
	while(true) {
		time=getMilliSecondsInDay(calendar);
		if(now<=calendar.getTimeInMillis() && time>=startTime) {
			// this is used because when cames back sometme to early
			if(notNow && (calendar.getTimeInMillis()-now)<1000);
			else if(intervall==ScheduleTaskImpl.INTERVAL_EVEREY && time>endTime)
				now=nowDate+DAY;
			else 
				break;
		}
		calendar.add(cIntervall, amount);
	}
	return calendar.getTimeInMillis();
}

"So okay, if now is before or starting at—hang on, what's calendar again?" I found myself muttering aloud. "Okay, if now is before or equal to the start date plus the start time, and time—which, if I understand that method correctly, is the elapsed time in the current day—is after or equal to the start time ... when is that true exactly? You know what would be nice? Some #%#@$%@ Javadoc!"

This is only one representative method, and yet, there's just so much here. Why an if statement that does nothing, terminating in an easily-overlooked semicolon? Why the misspelling of EVERY? Or "Intervall?" Why are the programmers allergic to spaces? Why can't they name variables worth anything? Do I even really want to know how this works anymore?
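
For what it's worth, here is my best guess at what that loop is trying to do, as a much-simplified Python sketch. This is an interpretation, not the actual Lucee/Railo algorithm, and it deliberately ignores the notNow and INTERVAL_EVEREY special cases:

    from datetime import datetime, time, timedelta

    def next_execution(now, start, step, window_start, window_end):
        # Step forward from `start` in `step` increments until we find
        # a moment that is not in the past and falls inside the daily
        # [window_start, window_end] execution window.
        t = start
        while True:
            if t >= now and window_start <= t.time() <= window_end:
                return t
            t += step

    # e.g. an hourly task allowed to run between 09:00 and 17:00:
    # next_execution(datetime.now(), datetime(2017, 5, 26, 9, 0),
    #                timedelta(hours=1), time(9, 0), time(17, 0))

Stated like that, it's a dozen lines. The original manages the same idea with mutated fields, an empty if body and a misspelled constant.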

If you want to witness the madness for yourself, may I remind you: this code is open source. Have at ye. According to the copyright statement at the top, this code was inherited from the Railo project, so if you're in Switzerland, please be sure to send your hate mail to the right address.


Planet DebianMichal Čihař: Weblate 2.14.1

Weblate 2.14.1 has been released today. It is a bugfix release fixing possible migration issues, search-result navigation and some minor security issues.

Full list of changes:

  • Fixed possible error when paginating search results.
  • Fixed migrations from older versions in some corner cases.
  • Fixed possible CSRF on project watch and unwatch.
  • The password reset no longer authenticates user.
  • Fixed possible captcha bypass on forgotten password.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. You can login there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Cory DoctorowLondon! I’ll be at Pages of Hackney tonight with Olivia Sudjic! (then Liverpool, Birmingham, Hay…) (!)


Last night’s sold-out Walkaway tour event with Laurie Penny at Waterstones Tottenham Court Road was spectacular (and not just because they had some really good whisky behind the bar), and the action continues today with a conversation with Olivia Sudjic tonight at Pages of Hackney, where we’ll be discussing her novel Sympathy as well as Walkaway.


Tomorrow, I’ll be at Waterstones Liverpool One with Dr Chris Pak, and on Friday I’ll signing at Waterstones Birmingham before heading to Hay-on-Wye for the final event of my UK tour, a dialogue with Adam Rutherford at the Hay Festival.


I hit the road again when I get back to the USA, continuing the US Walkaway tour with stops at the Bay Area Book Festival, BookCon NYC, ALA Chicago, Printers Row Chicago, Denver Comic-Con, San Diego Comic-Con, and Defcon Las Vegas.

,

Harald WeltePower-cycling a USB port should be simple, right?

Every so often I happen to be involved in designing electronics equipment that's supposed to run reliably in remote, inaccessible locations, without any ability for "remote hands" to perform things like power-cycling or the like. I'm talking about really remote locations, possibly with no or only limited back-haul, and a very high cost of ever sending somebody there for maintenance.

Given that a lot of computer peripherals (chips, modules, ...) use USB these days, this is often some kind of an embedded ARM (rarely x86) SoM or SBC, which is hooked up to a custom board that contains a USB hub chip as well as a line of peripherals.

One of the most important lessons I've learned from experience is: never trust reset signals / lines, always include power-switching capability. There are many chips and electronics modules available on the market that have either no RESET line at all, or that claim to have a hardware RESET line which you later (painfully) discover to be just a GPIO polled by software which can get stuck, leaving you no way to really hard-reset the given component.

In the case of a USB-attached device (even though the USB might only exist on a circuit board between two ICs), this is typically rather easy: The USB hub is generally capable of switching the power of its downstream ports. Many cheap USB hubs don't implement this at all, or implement only ganged switching, but if you carefully select your USB hub (or in the case of a custom PCB), you can make sure that the given USB hub supports individual port power switching.

Now the next step is how to actually use this from your (embedded) Linux system. It turns out to be harder than expected. After all, we're talking about a standard feature that has been present in the USB specifications since USB 1.x in the late 1990s. So the expectation is that it should be straight-forward to do with any decent operating system.

I don't know how it is on other operating systems, but on Linux I couldn't really find a clean way to do this. For more details, please read my post to the linux-usb mailing list.
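
For what it's worth, the hub class request itself is trivial to issue from userspace; the hard part is doing it cleanly through the kernel. A pyusb sketch of the raw request (the vendor/product IDs and port number are placeholders, and the hub must actually implement per-port power switching):

    import time
    import usb.core

    PORT_POWER = 8    # hub class feature selector, USB 2.0 spec table 11-17

    hub = usb.core.find(idVendor=0x0424, idProduct=0x2514)  # placeholder IDs
    port = 2                                                # downstream port

    # bmRequestType 0x23: host-to-device, class request, recipient "other"
    hub.ctrl_transfer(0x23, 0x01, PORT_POWER, port)  # ClearPortFeature: off
    time.sleep(2)                                    # let the device discharge
    hub.ctrl_transfer(0x23, 0x03, PORT_POWER, port)  # SetPortFeature: on

Of course this fights with the kernel's hub driver rather than cooperating with it, which is exactly the kind of mess a proper kernel interface should avoid.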

Why am I running into this now? Is it such a strange idea? I mean, power-cycling a device should be the most simple and straight-forward thing to do in order to recover from any kind of "stuck state" or other related issue. Logical enabling/disabling of the port, resetting the USB device via USB protocol, etc. are all just "soft" forms of a reset which at best help with USB related issues, but not with any other part of a USB device.

And in the case of e.g. an USB-attached cellular modem, we're actually talking about a multi-processor system with multiple built-in micro-controllers, at least one DSP, an ARM core that might run another Linux itself (to implement the USB gadget), ... - certainly enough complex software that you would want to be able to power-cycle it...

I'm curious what the response of the Linux USB gurus is.

TEDMeet the TEDGlobal 2017 Fellows

Meet the new class of TEDGlobal 2017 Fellows! Representing 18 countries — including, for the first time in our program, Somalia, Uruguay, Liberia and Zimbabwe — this class clears a high bar of talent, creativity and eccentricity. Among those selected, you’ll find a Somali computer scientist catalyzing the tech scene in Somalia and Somaliland; a policy influencer working to make healthcare Deaf-friendly; the founder of Botswana’s first and only LGBT-themed theater festival, and many more.

Below, get to know the new group of Fellows who will join us at TEDGlobal 2017, August 27–30, in Arusha, Tanzania.


Nighat Dad (Pakistan)
Digital rights activist
Pakistani founder of the Digital Rights Foundation, a research and advocacy NGO that protects women and minorities from cyber harassment and defends their online freedom of expression.


Kyle DeCarlo (USA)
Policy influencer + healthcare entrepreneur
US co-founder of the Deaf Health Initiative (DHI), an organization working to make healthcare Deaf-friendly through advocacy, policy changes and the creation of new medical devices.


Abdigani Diriye (USA)
Tech entrepreneur + inventor
Somali computer scientist catalyzing the tech scene in Somalia and Somaliland through coding camps, incubators and accelerator programs. An inventor and advocate for innovation and research in Africa.


With Moving and Passing, a multidisciplinary project that combines performance, sports and culture, artist Marc Bamuthi Joseph invites immigrant youth to join soccer clinics and writing workshops. (Photo: Joan Osato)


Susan Emmett (USA)
Ear surgeon
US public health expert and ear surgeon studying global hearing health disparities in 15 countries and Indigenous groups around the world, in an effort to fight preventable hearing loss.


Mennat El Ghalid (France | Egypt)
Mycologist
Egyptian microbiologist studying fungal infections in humans, in an effort to discover their causes and develop new treatments. Cofounder of ConScience, a nonprofit dedicated to science education.


Victoria Forster (UK | Canada)
Cancer researcher
UK scientist researching new treatments for pediatric cancer, drawing on her own experience with leukemia to investigate the devastating side effects of current therapies.


Mike Gil (USA)
Marine biologist + science advocate
US marine biologist who studies the way reef fish communicate — and what these social interactions mean for the future of our coral reefs.


Robert Hakiza (DRC | Uganda)
Urban refugee expert
Congolese cofounder of the Young African Refugees for Integral Development (YARID), which empowers refugees and builds community through vocational education, English classes, access to sports and computer literacy skills.


Miho Janvier (France)
Solar storm scientist
French astrophysicist who works to predict “space weather” by studying the nature of solar flares and space storms, and how they impact planetary environments in our solar system and beyond.


Astrophysicist Miho Janvier researches solar flares — the extreme bursts of radiation from the sun’s surface pictured here — and what they might mean for possible interstellar travel. (Photo: Solar Dynamics Observatory, NASA)


Saran Kaba Jones (Liberia | USA)
Clean water advocate
Liberian founder and CEO of FACE Africa, which strengthens water, sanitation and hygiene (WASH) infrastructure in rural communities in Sub-Saharan Africa through the establishment of community-based WASH Committees and post-implementation support services.


Marc Bamuthi Joseph (USA)
Writer + performer
US artist and curator investigating cultural erasure through performance, ranging from opera to dance theater.


Adong Judith (Uganda)
Director + playwright
Ugandan director and playwright creating theater that promotes social change and provokes dialogue on issues from LGBTQ rights to war crimes.


The National Theater in Kampala’s 2016 production of Ga-AD!, a play directed by Adong Judith exploring spirituality and the place of women in Pentecostal churches. (Photo: Zahara Abdul)


Yasin Kakande (Uganda)
Investigative journalist + author
Ugandan journalist working undercover in the Middle East to uncover the human-rights abuses of migrant workers.


Katlego Kolanyane-Kesupile (Botswana)
Performance artist + activist
Writer, educator and founder of the Queer Shorts Showcase Festival, Botswana’s first and only LGBT-themed theatre festival.


Romain Lacombe (France)
Clean air entrepreneur
French founder of Plume Labs, a company dedicated to raising awareness about global air pollution by creating a personal electronic pollution tracker that forecasts pollution levels in real time.


Kasiva Mutua (Kenya)
Percussionist
International touring percussionist working to elevate the place of the African woman in music. Her performance style integrates African traditional music with modern styles such as jazz, hip-hop, reggae and zouk.


Kenyan percussionist Kasiva Mutua performs with the Nairobi Horns Project at the Michael Joseph Centre in Nairobi in 2016. (Photo: Mbarathi Karuga)


Carl Joshua Ncube (Zimbabwe)
Comedian
Zimbabwean standup comic who uses his creative work to approach culturally taboo topics on the African continent.


Walé Oyéjidé (Nigeria | USA)
Fashion designer + artist
Nigerian fashion designer and artist who uses textile and apparel design to convey stories about immigrant populations to the Western world.


Fashion created by Nigerian designer Walé Oyéjidé. (Photo: Rog Walker for Ikiré Jones)


Christian Rodriguez (Uruguay | Mexico)
Documentary photographer
Uruguayan photographer exploring global gender and identity issues, with a specific focus on teenage pregnancy in Latin America.


Micaela (15 years old) and her newborn, Franco, in Uruguay, photographed by Christian Rodriguez, who documents teenage pregnancy throughout Latin America.


Edsel Salvana (Philippines)
Molecular epidemiologist + activist
Filipino physician fighting the HIV epidemic by using cutting-edge molecular tools in order to predict which patients are likely to fail treatment.


Pratik Shah (India | USA)
Medical technology scientist
Indian scientist developing new artificial intelligence architectures for antibiotic discovery, faster clinical trials and non-ionizing clinical imaging devices that might one day replace dental X-rays.


Planet DebianDirk Eddelbuettel: Rcpp 0.12.11: Loads of goodies

The eleventh update in the 0.12.* series of Rcpp landed on CRAN yesterday following the initial upload on the weekend, and the Debian package and Windows binaries should follow as usual. The 0.12.11 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, the 0.12.9 release in January, and the 0.12.10 release in March --- making it the fifteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1026 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release follows on the heels of R's 3.4.0 release and addresses one or two issues from the transition, along with a literal boatload of other fixes and enhancements. James "coatless" Balamuta was relentless in making the documentation better, Kirill Mueller addressed a number of more obscure compiler warnings (triggered under -Wextra and the like), Jim Hester improved exception handling, and much more was done, mostly by the Rcpp Core team. All changes are listed below in some detail.

One big change that JJ made is that Rcpp Attributes also generate the now-almost-required package registration. (For background, I blogged about this one, two, three times.) We tested this, and do not expect it to throw curveballs: whether you have an existing src/init.c or do not yet have registration set in your NAMESPACE, it should cover most cases. But one never knows, and one first post-release buglet related to how devtools tests things has already been fixed in this PR by JJ.

Changes in Rcpp version 0.12.11 (2017-05-20)

  • Changes in Rcpp API:

    • Rcpp::exceptions can now be constructed without a call stack (Jim Hester in #663 addressing #664).

    • Somewhat spurious compiler messages under very verbose settings are now suppressed (Kirill Mueller in #670, #671, #672, #687, #688, #691).

    • Refreshed the included tinyformat template library (James Balamuta in #674 addressing #673).

    • Added printf-like syntax support for exception classes and variadic templating for Rcpp::stop and Rcpp::warning (James Balamuta in #676).

    • Exception messages have been rewritten to provide additional information. (James Balamuta in #676 and #677 addressing #184).

    • One more instance of Rf_mkString is protected from garbage collection (Dirk in #686 addressing #685).

    • Two exception specification that are no longer tolerated by g++-7.1 or later were removed (Dirk in #690 addressing #689)

  • Changes in Rcpp Documentation:

  • Changes in Rcpp Sugar:

    • Added sugar function trimws (Nathan Russell in #680 addressing #679).
  • Changes in Rcpp Attributes:

    • Automatically generate native routine registrations (JJ in #694)

    • The plugins for C++11, C++14, C++17 now set the values R 3.4.0 or later expects; a plugin for C++98 was added (Dirk in #684 addressing #683).

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function now creates a package registration file provided R 3.4.0 or later is used (Dirk in #692)

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramICE is Using Stingray to Track Illegal Immigrants

According to court documents, US Immigration and Customs Enforcement is using Stingray cell-site simulators to track illegal immigrants.

Planet DebianReproducible builds folks: Reproducible Builds: week 108 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday May 14 and Saturday May 20 2017:

News and Media coverage

  • We've reached 94.0% reproducible packages on testing/amd64! (NB. without build path variation)
  • Maria Glukhova was interviewed on It's FOSS about her involvement with Reproducible Builds with respect to Outreachy.

IRC meeting

Our next IRC meeting has been scheduled for Thursday June 1 at 16:00 UTC.

Packages reviewed and fixed, bugs filed, etc.

Bernhard M. Wiedemann:

Chris Lamb:

Reviews of unreproducible packages

35 package reviews have been added, 28 have been updated and 12 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been added:

diffoscope development

strip-nondeterminism development

tests.reproducible-builds.org

Holger wrote a new systemd-based scheduling system replacing 162 constantly running Jenkins jobs which were slowing down job execution in general:

  • Nothing fancy really, just 370 lines of shell code in two scripts; of these 370 lines, 80 are comments and 162 are node definitions for those 162 "jobs".
  • Worker logs are not yet as good as with Jenkins, but usually we don't need realtime log viewing of specific builds. Or rather, it's a waste of time to do it. (Actual package build logs remain unchanged.)
  • Builds are a lot faster for the fast archs, but not so much difference on armhf.
  • Since April 12 for i386 (and a week later for the rest), the images below are ordered with i386 on top, then amd64, armhf and arm64. Except for armhf it's pretty visible when the switch was made.

Misc.

This week's edition was written by Chris Lamb, Holger Levsen, Bernhard M. Wiedemann, Vagrant Cascadian and Maria Glukhova & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Rondam RamblingsTrump, the supposedly brilliant businessman, can't do basic math

Look, I can't help it if the Trump administration keeps lobbing these fat pitches. Donald Trump's budget has a $2 Trillion Math Error: One of the ways Donald Trump’s budget claims to balance the budget over a decade, without cutting defense or retirement spending, is to assume a $2 trillion increase in revenue through economic growth. This is the magic of the still-to-be-designed Trump tax cuts

CryptogramThe Future of Ransomware

Ransomware isn't new, but it's increasingly popular and profitable.

The concept is simple: Your computer gets infected with a virus that encrypts your files until you pay a ransom. It's extortion taken to its networked extreme. The criminals provide step-by-step instructions on how to pay, sometimes even offering a help line for victims unsure how to buy bitcoin. The price is designed to be cheap enough for people to pay instead of giving up: a few hundred dollars in many cases. Those who design these systems know their market, and it's a profitable one.

The ransomware that has affected systems in more than 150 countries recently, WannaCry, made press headlines last week, but it doesn't seem to be more virulent or more expensive than other ransomware. This one has a particularly interesting pedigree: It's based on a vulnerability developed by the National Security Agency that can be used against many versions of the Windows operating system. The NSA's code was, in turn, stolen by an unknown hacker group called Shadow Brokers -- widely believed by the security community to be the Russians -- in 2014 and released to the public in April.

Microsoft patched the vulnerability a month earlier, presumably after being alerted by the NSA that the leak was imminent. But the vulnerability affected older versions of Windows that Microsoft no longer supports, and there are still many people and organizations that don't regularly patch their systems. This allowed whoever wrote WannaCry -- it could be anyone from a lone individual to an organized crime syndicate -- to use it to infect computers and extort users.

The lessons for users are obvious: Keep your system patches up to date and regularly back up your data. This isn't just good advice to defend against ransomware, but good advice in general. But it's becoming obsolete.

Everything is becoming a computer. Your microwave is a computer that makes things hot. Your refrigerator is a computer that keeps things cold. Your car and television, the traffic lights and signals in your city and our national power grid are all computers. This is the much-hyped Internet of Things (IoT). It's coming, and it's coming faster than you might think. And as these devices connect to the Internet, they become vulnerable to ransomware and other computer threats.

It's only a matter of time before people get messages on their car screens saying that the engine has been disabled and it will cost $200 in bitcoin to turn it back on. Or a similar message on their phones about their Internet-enabled door lock: Pay $100 if you want to get into your house tonight. Or pay far more if they want their embedded heart defibrillator to keep working.

This isn't just theoretical. Researchers have already demonstrated a ransomware attack against smart thermostats, which may sound like a nuisance at first but can cause serious property damage if it's cold enough outside. If the device under attack has no screen, you'll get the message on the smartphone app you control it from.

Hackers don't even have to come up with these ideas on their own; the government agencies whose code was stolen were already doing it. One of the leaked CIA attack tools targets Internet-enabled Samsung smart televisions.

Even worse, the usual solutions won't work with these embedded systems. You have no way to back up your refrigerator's software, and it's unclear whether that solution would even work if an attack targets the functionality of the device rather than its stored data.

These devices will be around for a long time. Unlike our phones and computers, which we replace every few years, cars are expected to last at least a decade. We want our appliances to run for 20 years or more, our thermostats even longer.

What happens when the company that made our smart washing machine -- or just the computer part -- goes out of business, or otherwise decides that they can no longer support older models? WannaCry affected Windows versions as far back as XP, a version that Microsoft no longer supports. The company broke with policy and released a patch for those older systems, but it has both the engineering talent and the money to do so.

That won't happen with low-cost IoT devices.

Those devices are built on the cheap, and the companies that make them don't have the dedicated teams of security engineers ready to craft and distribute security patches. The economics of the IoT doesn't allow for it. Even worse, many of these devices aren't patchable. Remember last fall when the Mirai botnet infected hundreds of thousands of Internet-enabled digital video recorders, webcams and other devices and launched a massive denial-of-service attack that resulted in a host of popular websites dropping off the Internet? Most of those devices couldn't be fixed with new software once they were attacked. The way you update your DVR is to throw it away and buy a new one.

Solutions aren't easy and they're not pretty. The market is not going to fix this unaided. Security is a hard-to-evaluate feature against a possible future threat, and consumers have long rewarded companies that provide easy-to-compare features and a quick time-to-market at its expense. We need to assign liabilities to companies that write insecure software that harms people, and possibly even issue and enforce regulations that require companies to maintain software systems throughout their life cycle. We may need minimum security standards for critical IoT devices. And it would help if the NSA got more involved in securing our information infrastructure and less in keeping it vulnerable so the government can eavesdrop.

I know this all sounds politically impossible right now, but we simply cannot live in a future where everything -- from the things we own to our nation's infrastructure -- can be held for ransom by criminals again and again.

This essay previously appeared in the Washington Post.

Worse Than FailureTake the Bus

Rachel started working as a web developer for the local bus company. The job made her feel young, since the buses, the IT infrastructure, and most of their back-office code was older than she was. The bus fare-boxes were cash only, and while you could buy a monthly pass, it was just a little cardboard slip that you showed the driver. Their accounting system ran on a mainframe, their garage management software was a 16-bit DOS application. Email ran on an Exchange 5.5 server.

In charge of all of the computing systems, from the web to DOS, was Virgil, the IT director. Virgil had been hired back when the accounting mainframe was installed, and had nestled into his IT director position like a tick. The bus company, like many such companies in the US, was ostensibly a private company, but chartered and subsidized by the city. This created a system which had all the worst parts of private-sector and public-sector employment merged together, and Virgil was the master of that system.

Rachel getting hired on was one of his rare “losses”, and he wasn’t shy about telling her so.

“I’ve been doing the web page for years,” Virgil said. “It has a hit counter, so you can see how many hits it actually gets- maybe 1 or 2 a week. But management says we need to have someone dedicated to the website.” He grumbled. “Your salary is coming out of my budget, you know.”

That website was a FrontPage 2000 site, and the hit-counter was broken in any browser that didn’t have ActiveX enabled. Rachel easily proved that there was far more traffic than claimed, not that there was a lot. And why should there be? You couldn’t buy a monthly pass online, so the only feature was the ability to download PDFs of the hand-schedules.

With no support, Rachel did her best to push things forward. She redesigned the site to be responsive. She convinced the guy who maintained their bus routes (in a pile of Excel spreadsheets) to give her regular exports of the data, so she could put the schedules online in a usable fashion. Virgil constantly grumbled about wasting money on a website nobody used, but as she made improvements, more people started using it.

Then it was election season. The incumbent mayor had been complaining about the poor service the bus company was offering, the lack of routes, the costs, the schedules. His answer was, “cut their funding”. Management started talking about belt-tightening, Virgil started dropping hints that Rachel was on the chopping block, and she took the hint and started getting resumes out.

A miracle occurred. The incumbent mayor’s campaign went off the rails. He got caught siphoning money from the city to pay for private trips. A few local cops mentioned that they’d been called in to cover up the mayor’s frequent DUIs. His re-election campaign’s finances showed strange discrepancies, and money had come in that couldn’t be tied back to a legitimate contribution. He tried to get a newly built stadium named after himself, which wasn’t illegal, but was in poor taste and was the final straw. He dropped out of the election, paving the way for “Mayor Fred” to take over.

Mayor Fred was a cool Mayor. He wanted to put in bike lanes. He wanted to be called “Mayor Fred”. He wanted to make it easier for food trucks to operate in the city. And while he shared his predecessor’s complaints about the poor service from the bus company, he had a different solution, which he revealed while taking a tour of the bus company’s offices.

“I’m working right now to secure federal grants, private sector funding, to fund a modernization project,” Mayor Fred said, grinning from behind a lectern. “Did you know we’re paying more to keep our old buses on the road for five years than it would cost to buy new buses?” And thus, Mayor Fred made promises. Promises about new buses, promises about top-flight consultants helping them plan better routes, promises about online functionality.

Promises that made Virgil grumble and whine. Promises that the mayor… actually kept.

New buses started to hit the streets. They had GPS and a radio communication system that gave them up-to-the-second location reporting. Rachel got put in charge of putting that data on the web, with a public API, and tying it to their schedules. A group of consultants swung through to help, and when the dust settled, Rachel’s title was suddenly “senior web developer” and she was in charge of a team of 6 people, integrating new functionality to the website.

Virgil made his opinion on this subject clear to her: “You are eating into my budget!”

“Isn’t your budget way larger?” Rachel asked.

“Yes, but there’s so much more to spend it on! We’re a bus company, we should be focused on getting people moving, not giving them pretty websites with maps that tell them where the buses are! And now there’s that new FlashCard project!”

FlashCard was a big project that didn’t involve Rachel very much. Instead of cash fares and cardboard passes, they were going to get an RFID system. You could fill your card at one of the many kiosks around the city, or even online. “Online”, of course, put it in Rachel’s domain, but it was mostly a packaged product. Virgil, of all people, had taken over the install and configuration; Rachel just customized the stylesheet so that it looked vaguely like their main site.

Rachel wasn’t only an employee of the bus company, she was also a customer. She was one of the first in line to get a FlashCard. For a few weeks, it was the height of convenience. The stop she usually needed had a kiosk, she just waved her card at the farebox and paid. And then, one day, when her card was mostly empty and she wasn’t anywhere near a kiosk, she decided to try filling her card online.

Thank you for your purchase. Your transaction will be processed within 72 hours.

That was a puzzle. The kiosks completed the transaction instantly. Why on Earth would a website take 3 days to do the same thing? Rachel became more annoyed when she realized she didn’t have enough on her card to catch the bus, and she needed to trudge a few blocks out of her way to refill the card. That’s when it started raining. And then she missed her bus, and had to wait 30 minutes for the next one. Which is when the rain escalated to a downpour. Which made the next bus 20 minutes late.

Wet, cold, and angry, Rachel resolved to figure out what the heck was going on. When she confronted Virgil about it, he said, “That’s just how it works. I’ve got somebody working full time on keeping that system running, and that’s the best they can do.”

Somebody working full time? “Who? What? Do you need help? I’ve done ecommerce before, I can-”

“Oh no, you’ve already got your little website thing,” Virgil said. “I’m not going to let you try and stage a coup over this.”

With an invitation like that, Rachel decided to figure out what was going on. It wasn’t hard to get into the administration features of the FlashCard website. From there, it was easy to see the status of the ecommerce plugin for processing transactions: “Not installed”. In fact, there was no sign that the system could process transactions at all.

The only hint that Rachel caught was the configuration of the log files. They were getting dumped to /dev/lp1. A printer. Next came a game of hide-and-seek: the server running the FlashCard software wasn’t in their tiny data-center, which meant she had to infer its location based on which routers were between her and it. It took a few days of poking around their offices, but she eventually found it in the basement, in an office.

In that office was one man with coke-bottle glasses, an antique continuous feed printer, a red document shredder, and a FlashCard kiosk running in diagnostic mode. “Um… can I help you?” the man asked.

“Maybe? I’m trying to track down how we’re processing credit card transactions for the FlashCard system?”

The printer coughed to life, spilling out a new line. “Well, you’re just in time then. Here’s the process.” He adjusted his glasses and peered at the output from the printer:

TRANSACTION CONFIRMED: f6ba779d22d5;4012888888881881;$25.00

The man then kicked his rolly-chair over to the kiosk. The first number was the FlashCard the transaction was for, the second was the credit card number, and the third was the amount. He punched those into the kiosk’s keypad, and then hit enter.

“When it gets busy, I get real backed up,” he confessed. “But it’s quiet right now.”

Rachel tracked down Virgil, and demanded to know what he thought he was doing.

“What? It’s not like anybody wants to use a website to buy things,” Virgil said. “And if we bought the ecommerce module, the vendor would have charged us $2,000/mo, on top of an additional transaction fee. This is cheaper, and I barely have enough room in my budget as it is!”


Planet DebianTianon Gravi: Debuerreotype

Following in the footsteps of one of my favorite Debian Developers, Chris Lamb / lamby (who is quite prolific in the reproducible builds effort within Debian), I’ve started a new project based on snapshot.debian.org (time-based snapshots of the Debian archive) and some of lamby’s work for creating reproducible Debian (debootstrap) rootfs tarballs.

The project is named “Debuerreotype” as an homage to the photography roots of the word “snapshot” and the daguerreotype process which was an early method of taking photographs. The essential goal is to create “photographs” of a minimal Debian rootfs, so the name seemed appropriate (even if it’s a bit on the “mouthful” side).

The end-goal is to create and release Debian rootfs tarballs for a given point-in-time (especially for use in Docker) which should be fully reproducible, and thus improve confidence in the provenance of the Debian Docker base images.

For more information about reproducibility and why it matters, see reproducible-builds.org, which has more thorough explanations of the why and how and links to other important work such as the reproducible builds effort in Debian (for Debian package builds).

In order to verify that the tool actually works as intended, I ran builds against seven explicit architectures (amd64, arm64, armel, armhf, i386, ppc64el, s390x) and eight explicit suites (oldstable, stable, testing, unstable, wheezy, jessie, stretch, sid).

I used a timestamp value of 2017-05-16T00:00:00Z, and skipped combinations that don’t exist (such as wheezy on arm64) or aren’t supported anymore (such as wheezy on s390x). I ran the scripts repeatedly over several days, using diffoscope to compare the results.
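
To make the core technique concrete, here is a rough Python sketch (emphatically not the real Debuerreotype tool, which does quite a bit more normalization): point debootstrap at a snapshot.debian.org mirror for the chosen timestamp, then pack the rootfs with normalized tar metadata so that two independent runs can produce byte-identical output. The suite, paths and epoch value below are illustrative.

    import subprocess

    # Pin the exact archive state by building against a snapshot mirror
    TIMESTAMP = "20170516T000000Z"
    MIRROR = "http://snapshot.debian.org/archive/debian/" + TIMESTAMP + "/"
    EPOCH = "1494892800"  # 2017-05-16T00:00:00Z as a Unix timestamp

    # Build a minimal rootfs from the pinned archive state (needs root)
    subprocess.run(
        ["debootstrap", "--variant=minbase", "stretch", "rootfs", MIRROR],
        check=True,
    )

    # Sorted names, numeric ownership and a fixed mtime keep the tarball
    # stable across runs (GNU tar 1.28+ is needed for --sort=name)
    subprocess.run(
        ["tar", "--numeric-owner", "--sort=name", "--mtime=@" + EPOCH,
         "-C", "rootfs", "-cf", "rootfs.tar", "."],
        check=True,
    )

Two runs of this against the same timestamp should then compare clean under diffoscope.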

While doing said testing, I ran across #857803, and added a workaround. There’s also a minor outstanding issue with wheezy’s reproducibility that I haven’t had a chance to dig very deeply into yet (but it’s pretty benign, and Wheezy’s LTS support window ends 2018-05-31, so I’m not too stressed about it).

I’ve also packaged the tool for Debian, and submitted it into the NEW queue, so hopefully the FTP Masters will look favorably upon this being a tool that’s available to install from the Debian archive as well. 😇

Anyhow, please give it a try, have fun, and as always, report bugs!

Cory DoctorowLondon! I’ll see you tonight on the Walkaway tour! (then Liverpool, Birmingham, and Hay…) (!)


Last night’s kick-off event for the UK Walkaway tour was brilliant, thanks to the magic combination of the excellent Tim Harford, the excellent people of Oxford, and the excellent booksellers at Blackwells!


Tonight I’ll be at Forbidden Planet at 6PM to sign books, then we’re walking over to Waterstone’s Tottenham Court Road for a 6:45 event with Laurie Penny. Tomorrow, I’m doing another London event, this one with Olivia Sudjic (author of Sympathy) at Pages of Hackney at 7PM.


After that, I head to Liverpool, Birmingham and Hay-on-Wye before heading back to the USA for events in San Francisco, New York, Chicago, Denver, San Diego and Las Vegas! Hope to see you there!

,

Krebs on SecurityShould SaaS Companies Publish Customer Lists?

A few weeks back, HR and financial management firm Workday.com sent a security advisory to customers warning that crooks were sending targeted malware phishing attacks at customers. At the same time, Workday is publishing on its site a list of more than 800 companies that use its services, making it relatively simple for attackers to choose their targets. This post examines whether it makes sense for software-as-a-service (SaaS) companies to publish lists of their customers when those customers are actively under siege from phishers impersonating the SaaS provider.

At its most basic, security always consists of trade-offs. Many organizations find a natural tension between marketing and security. The security folks warn that publishing too much information about how the company does business and with whom makes it way too easy for phishers and other scammers to target your customers.

A screenshot of a phishing lure used to target Workday customers.

The marketing folks, quite naturally, often have a different perspective: The benefits of publishing partner data far outweigh the nebulous risks that someone may abuse this information.

So the question is, at what point does marketing take a backseat to security at SaaS firms when their customers are being phished? Is it even reasonable to think that determined attackers would be deterred if they had to pore through press releases and other public data to find a target list?

When I first approached Workday in researching this column, I did so in regard to an alert they emailed customers earlier this month. In the alert, Workday warned that customers using single-factor authentication to access Workday were being targeted by email phishing campaigns. The company said there was no evidence to suggest the phishing was a result of the Workday service or infrastructure, but rather that it was the result of phishing emails in which individuals at customer organizations shared login credentials with a malicious third party. In short, they’d been phished.

A portion of the phishing alert that Workday sent to its customers.

Workday advised customers to take advantage of the company’s two-factor authentication systems, and to enable secondary approvals for all important transactions.

All good advice, but I also challenged the company that it maybe wasn’t the best idea to also publish a tidy list of more than 800 customers on its Web site. I also noted that Workday’s site makes it simple to find an HTML template for targeted phishing campaigns. Just take one of the companies listed on its site and enter the name in the Workday Sign-in search page. Selecting Netflix from the list of Workday customers, for example, we can find Netflix’s login page:

Netflix’s sign-in page at Workday.com.

That link opens up a page that allows Netflix customers to login to Workday using Google’s OAuth system for linking third-party apps to Google accounts. It’s a good thing we haven’t recently seen targeted phishing attacks that mimic this precise process to hijack Google accounts.

Oh wait, something very similar just happened earlier this month. In the first week of May, phishers began sending Google Docs phishing campaigns via Gmail disguised as an offer to share a document. Recipients who fell for the ruse ended up authorizing an app from Google’s OAuth authentication interface — i.e., handing crooks direct access to their accounts.

Before I go further, let me just say that it is not my intention to single out Workday in this post: There are plenty of other companies in its exact same position. The question I want to explore is at what point does marketing get trumped by security? For me, the juxtaposition between Workday’s warning and its priming the pump for phishers at the same time seemed off.

Workday wasn’t swayed by my logic, and they referred me to an industry analyst for the finer points of that perspective. Michael Krigsman, a tech analyst and host at cxotalk.com, said he often advises smaller companies that may be less sophisticated in their marketing strategies to publish a list of customers on their home pages.

“Even when it comes to larger companies like Workday, they’re selling so many seats that this information is highly public knowledge and very easy to get,” Krigsman said. “If you’re interested in Workday’s customer lists, for example, you can easily find that out because Workday puts out press releases, their customers put out press releases, and this gets picked up in the trade press.”

WHERE I COME FROM

Fair enough, I said, and then I explained my historical perspective on this topic. Ever since I broke a series of stories about breaches at major retailers like Target, Home Depot, Neiman Marcus and Michaels, I’ve been inundated with requests from banks and credit unions to help them figure out which merchants were responsible for credit and debit card fraud that was costing them huge financial losses.

They sought my help in figuring this out because Visa and MasterCard have contractual ways to help banks recover a portion of the funds lost to credit card breaches if the financial institutions can show that specific fraud was traced back to cards all used at the same breached merchant.

As a result, I’ve spent a great deal of my time over the past few years helping these financial institutions find out for themselves which of their cards were breached at which merchants — pointing them to underground forums where — if they so choose — they could buy back a small number of cards and look to see if any of those had a commonality (known in financial industry parlance as a “common point of purchase” or CPP).

I’ve never sought nor have I received remuneration for any of this assistance. However, one could say that this assistance has paid off in the form of tips about CPPs from various financial industry sources that — in the aggregate — strongly point to breaches at major retailers, hotels and other establishments where credit card transactions are plentiful and traditionally not terribly well protected.

But even financial institution fraud analysts who are adept at doing CPP analysis on cards for sale in the underground markets can be blind to the breach whose only commonality is a third-party provider — such as a credit card processor or a vendor that sells and maintains point-of-sale devices on behalf of other businesses.

Nine times out of ten, when a financial institution can’t figure out the source of a breach related to a batch of fraudulent credit card transactions, the culprit is one of these third-party POS providers. And in the vast majority of cases, a review of the suspect POS provider shows that they list every one of their customers somewhere on their site.

Unsurprisingly, Russian malware gangs that specialize in deploying POS-based malware to record and transmit card data from any card swiped through the cash register very often target POS providers because it is the easiest way into the cash registers at customer stores. Interview the individual store managers who operate compromised tills — as I have on more occasions than I care to count — and what you invariably find is that the malware got on their POS systems because an employee received an email mimicking the POS provider and clicked a booby-trapped link or attachment.

Alas, Workday was unmoved by my analysis of the situation.

“Spotlighting shared success with our customers helps our businesses grow, but security is Workday’s top priority,” the company said in a statement emailed to KrebsOnSecurity. “We are vigilant about identifying issues and consulting customers on best practices — such as deploying multi-factor authentication or conducting security awareness training for their employees — in order to continually help them sharpen security and protect their businesses.”

For his part, CXOTalk’s Krigsman said he was moved by the story about the POS providers.

“So the question becomes is this a strong enough threat that this is a trade-off we should make,” Krigsman said. “You make a compelling argument: On the one hand, for marketing and customer convenience purposes companies want to put this all out there, but on the other hand maybe it’s creating a bigger threat.”

I should note that regardless of whether a cloud or SaaS service publishes a list of companies they work with, those companies may themselves publish which SaaS providers they frequent. As Mark Stanislav of Rapid7 explained in Feb. 2015, it’s not uncommon for organizations to expose these relationships by including them in anti-spam records that get published to the entire world. See more of Stanislav’s research here.

What do you think, Dear Readers? Where do you come down on the line between marketing and security? Sound off in the comments below.

Krebs on SecurityPrivate Eye Allegedly Used Leaky Government Tool in Bid to Find Tax Data on Trump

In March 2017, KrebsOnSecurity warned that thieves who perpetrate tax refund fraud with the U.S. Internal Revenue Service were leveraging a widely-used online student loan tool to find critical data on consumers that allows them to claim huge refunds with the IRS in someone else’s name. This week, it emerged that a Louisiana-based private investigator is being charged with using the same online tool to glean tax data on then-presidential candidate Donald J. Trump.

A story today at Diverseeducation.com points to court filings in the U.S. District Court for the Middle District of Louisiana, in which local private eye Jordan Hamlett is accused by federal prosecutors of abusing an automated tool at the U.S. Department of Education website that is designed to make it easier for families to complete the Education Department’s Free Application for Federal Student Aid (FAFSA) — a lengthy form that serves as the starting point for students seeking federal financial assistance to pay for college or career school.

Grand jury findings in a sealed case against Louisiana private investigator Jordan Hamlett.

In November 2016, Hamlett — the owner of Baton Rouge-based Averlock Investigations — was indicted on felony charges of trying to glean then President-Elect Trump’s “adjusted gross income,” or AGI, using the FAFSA online tool. In the United States, the AGI is an individual’s total gross income minus specific deductions. Diverse Education’s Jamaal Abdul-Alim cites sources saying the accused may have been trying to get Trump’s tax records.

In any event, he failed, according to prosecutors. Last month, the IRS announced that the Education Department was disabling the FAFSA lookup tool because it was being abused by tax fraudsters.

According to Diverse Education, hints about the case against Hamlett came out earlier this month in an IRS oversight hearing before the U.S. House committee on oversight and government reform. At that hearing, “Timothy P. Camus, deputy inspector general for investigations at the Treasury Inspector General for Tax Administration, or TIGTA, alluded to the Hamlett case but did not mention Hamlett by name, nor did he indicate that then-presidential candidate Trump was the target,” Abdul-Alim writes. “Instead, Camus only mentioned that TIGTA ‘detected an attempted access to the AGI of a prominent individual.'”

Attempts to reach Hamlett for comment have been unsuccessful so far, and the complaint against him remains sealed. However, KrebsOnSecurity obtained a response on Nov. 10, 2016 from U.S. Attorney J. Walter Green that lays out the basic facts in the case. A copy of that document is here (PDF).

It’s interesting to note that this wasn’t the only time U.S. government authorities detected someone trying to access Trump’s AGI information. According to the government’s response, the alleged unauthorized attempt at Trump’s AGI data being attributed to Hamlett occurred on Sept. 13, 2016.

In TIGTA Deputy Inspector General Camus’ testimony to the House committee (PDF), he said his office detected a second attempt to access the same “prominent individual’s” AGI data via the FAFSA online lookup in November 2016, although the testimony doesn’t say whether that attempt was successful.

Amazingly, it wasn’t until February 27, 2017, when an IRS employee complained that his personal data had been stolen via the FAFSA tool, that the IRS moved to restrict online access to the service, according to a response to committee questioning from IRS Chief Information Officer S. Gina Garza.

The government doesn’t say in its pleadings why the accused was allegedly unsuccessful in obtaining President Trump’s AGI data. It could be that the Social Security number he had for Trump wasn’t correct; or, the account may have been flagged prior to the alleged attempt.

In any event, I want to take this opportunity to remind readers to assume that the static facts about who you are — including your income, date of birth, Social Security number, and a whole host of other information you may consider private — are likely at risk thanks to well-intentioned but nonetheless poorly secured third-party services that leak this data if the impersonator has but a few data points with which to work.

And of course these data points are for sale in a myriad of places on the Dark Web for less than the Bitcoin equivalent of a regular coffee at Starbucks. On this front I’m reminded of the case of ssndob[dot]ru, a now-defunct identity theft service that held this data on more than 200 million Americans.

That service was used to look up the name, address, previous address, phone number, Social Security number and date of birth on some of America’s top public figures and celebrities — data that was later published on a doxing site called exposed[dot]su. The victims of exposed[dot]su included then First Lady Michelle Obama; then-director of the FBI Robert Mueller; and former U.S. Attorney General Eric Holder.

Exposed[dot]su was built with the help of identity information obtained and/or stolen from ssndob[dot]ru.

CryptogramNorth Korean Cyberwar Capabilities

Reuters has an article on North Korea's cyberwar capabilities, specifically "Unit 180."

They're still not in the same league as the US, UK, Russia, China, and Israel. But they're getting better.

Planet DebianGunnar Wolf: Open Source Symposium 2017

I travelled (for three days only!) to Argentina, to be a part of the Open Source Symposium 2017, a co-located event of the International Conference on Software Engineering.

This is, all in all, an interesting although small conference: we are around 30 people in the room. It is quite an unusual conference for me, as it is among the first "formal" academic conferences I have been part of. Sessions have so far been quite interesting.
What am I linking to from this image? Of course, the proceedings! They managed to publish the proceedings via the "formal" academic channels (a nice hard-cover Springer volume) under an Open Access license (which is sadly unusual, and unbelievably expensive). So, you can download the full proceedings, or article by article, in EPUB or in PDF...
...Which is very, very nice :)
Previous editions of this symposium also have their respective proceedings available, but AFAICT they have not been downloadable.
So, get the book; it provides very interesting and original insights into our community, seen from several quite novel angles!


Rondam RamblingsTrump hypocrisy watch: it's trifecta week!

I am trying to spend less time sniping at Donald Trump and more time engaged in actual productive activities, but sometimes a pitch is too fat not to take a swing at it. In the last week -- no, in the last weekend -- Donald Trump did not one, not two, but three things that he had previously excoriated Barack Obama or Hillary Clinton for. 1. The phrase "radical Islamic terrorism" has suddenly

Cory DoctorowMary Shelley’s Frankenstein shows us how science fiction predicts the present and shapes the future

Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds is a new MIT Press book commemorating the bicentennial of the publication of Mary Shelley’s seminal novel “Frankenstein; or, The Modern Prometheus.”

I was honored to be asked to contribute an essay to the edition, which they titled I’ve Created a Monster! And so can you. It’s a look at how Shelley’s book illustrates the relationship of science fiction to the present (it reflects back our hopes and fears) and the future (those hopes and fears shape what we do).


The anthology is part of a year of events at ASU’s Center for Science and the Imagination.


I’m a Facebook vegan. I won’t even use WhatsApp or Instagram because they’re owned by Facebook. That means I basically never get invited to parties; I can’t keep up with what’s going on in my daughter’s school; I can’t find my old school friends or participate in the online memorials when one of them dies. Unless everyone you know chooses along with you not to use Facebook, being a Facebook vegan is hard. But it also lets you see the casino for what it is and make a more informed choice about what technologies you depend on.

Mary Shelley understood social exile. She walked away from the social network of England—ran away, really, at the age of 16 with a married man, Percy Bysshe Shelley, and conceived two children with him before they finally married. Shelley’s life is a story about the adjacent possible of belonging, and Frankenstein is a story about the adjacent possible of deliciously credible catastrophes in an age of technological whiplash and massive dislocation.

In 1989, the Berlin Wall fell, and the end of the ironically named German Democratic Republic was at hand. The GDR, often called East Germany, was one of the most spied-upon countries in the history of the world. The Stasi, its secret police force, were synonymous with totalitarian control, and their name struck terror wherever it was whispered.

The Stasi employed one snitch for every 60 people in the GDR: an army to surveil a nation.

Today, the U.S. National Security Agency has the entire world under surveillance more totally than the Stasi ever dreamed of. It has one employee for every 20,000 people it spies on—not counting the contractors.

I’ve Created a Monster!
And so can you.
[Cory Doctorow/Slate]

CryptogramExtending the Airplane Laptop Ban

The Department of Homeland Security is rumored to be considering extending the current travel ban on large electronics for Middle Eastern flights to European ones as well. The likely reaction of airlines will be to implement new traveler programs, effectively allowing wealthier and more frequent fliers to bring their computers with them. This will only exacerbate the divide between the haves and the have-nots -- all without making us any safer.

In March, both the United States and the United Kingdom required that passengers from 10 Muslim countries give up their laptop computers and larger tablets, and put them in checked baggage. The new measure was based on reports that terrorists would try to smuggle bombs onto planes concealed in these larger electronic devices.

The security measure made no sense for two reasons. First, moving these computers into the baggage holds doesn't keep them off planes. Yes, it is easier to detonate a bomb that's in your hands than to remotely trigger it in the cargo hold. But it's also more effective to screen laptops at security checkpoints than it is to place them in checked baggage. TSA already does this kind of screening randomly and occasionally: making passengers turn laptops on to ensure that they're functional computers and not just bomb-filled cases, and running chemical tests on their surface to detect explosive material.

And, two, banning laptops on selected flights just forces terrorists to buy more roundabout itineraries. It doesn't take much creativity to fly Doha-Amsterdam-New York instead of direct. Adding Amsterdam to the list of affected airports makes the terrorist add yet another itinerary change; it doesn't remove the threat.

Which brings up another question: If this is truly a threat, why aren't domestic flights included in this ban? Remember that anyone boarding a plane to the United States from these Muslim countries has already received a visa to enter the country. This isn't perfect security -- the infamous underwear bomber had a visa, after all -- but anyone who could detonate a laptop bomb on his international flight could do it on his domestic connection.

I don't have access to classified intelligence, and I can't comment on whether explosive-filled laptops are truly a threat. But, if they are, TSA can set up additional security screenings at the gates of US-bound flights worldwide and screen every laptop coming onto the plane. It wouldn't be the first time we've had additional security screening at the gate. And they should require all laptops to go through this screening, prohibiting them from being stashed in checked baggage.

This measure is nothing more than security theater against what appears to be a movie-plot threat.

Banishing laptops to the cargo holds brings with it a host of other threats. Passengers run the risk of their electronics being stolen from their checked baggage -- something that has happened in the past. And, depending on the country, passengers also have to worry about border control officials intercepting checked laptops and making copies of what's on their hard drives.

Safety is another concern. We're already worried about large lithium-ion batteries catching fire in airplane baggage holds; adding a few hundred of these devices will considerably exacerbate the risk. Both FedEx and UPS no longer accept bulk shipments of these batteries after two jets crashed in 2010 and 2011 due to combustion.

Of course, passengers will rebel against this rule. Having access to a computer on these long transatlantic flights is a must for many travelers, especially the high-revenue business-class travelers. They also won't accept the delays and confusion this rule will cause as it's rolled out. Unhappy passengers fly less, or fly other routes on other airlines without these restrictions.

I don't know how many passengers are choosing to fly to the Middle East via Toronto to avoid the current laptop ban, but I suspect there may be some. If Europe is included in the new ban, many more may consider adding Canada to their itineraries, as well as choosing European hubs that remain unaffected.

As passengers voice their disapproval with their wallets, airlines will rebel. Already Emirates has a program to loan laptops to their premium travelers. I can imagine US airlines doing the same, although probably for an extra fee. We might learn how to make this work: keeping our data in the cloud or on portable memory sticks and using unfamiliar computers for the length of the flight.

A more likely response will be comparable to what happened after the US increased passenger screening post-9/11. In the months and years that followed, we saw different ways for high-revenue travelers to avoid the lines: faster first-class lanes, and then the extra-cost trusted traveler programs that allow people to bypass the long lines, keep their shoes on their feet and leave their laptops and liquids in their bags. It's a bad security idea, but it keeps both frequent fliers and airlines happy. It would be just another step to allow these people to keep their electronics with them on their flight.

The problem with this response is that it solves the problem for frequent fliers, while leaving everyone else to suffer. This is already the case; those of us enrolled in a trusted traveler program forget what it's like to go through "normal" security screening. And since frequent fliers -- likely to be more wealthy -- no longer see the problem, they don't have any incentive to fix it.

Dividing security checks into haves and have-nots is bad social policy, and we should actively fight any expansion of it. If the TSA implements this security procedure, it should implement it for every flight. And there should be no exceptions. Force every politically connected flier, from members of Congress to the lobbyists that influence them, to do without their laptops on planes. Let the TSA explain to them why they can't work on their flights to and from D.C.

This essay previously appeared on CNN.com.

EDITED TO ADD: US officials are backing down.

Planet Linux AustraliaDanielle Madeley: Announcing new high-level PKCS#11 HSM support for Python

Recently I’ve been working on a project that makes use of Thales HSM devices to encrypt/decrypt data. There’s a number of ways to talk to the HSM, but the most straight-forward from Linux is via PKCS#11. There were a number of attempts to wrap the PKCS#11 spec for Python, based on SWIG, cffi, etc., but they were all (a) low level, (b) not very Pythonic, (c) have terrible error handling, (d) broken, (e) inefficient for large files and (f) very difficult to fix.

Anyway, given that nearly all documentation on how to actually use PKCS#11 has to be discerned from C examples (and thus I’d developed a pretty good working knowledge of the C API), and that I’d wanted to learn Cython for a while, I decided to write a new binding based on a high-level wrapper I’d put into my app. It’s designed to be accessible, pick sane defaults for you, use generators where appropriate to reduce work, stream large files, be introspectable in your programming environment and be easy to read and extend.

https://github.com/danni/python-pkcs11

It’s currently a work in progress, but it’s now available on pip. You can get a session on a device, create a symmetric key, find objects, encrypt and decrypt data. The Cryptoki spec is quite large, so I’m focusing on the support that I need first, but it should be pretty straightforward for anyone who wanted to add something else they needed. I like to think I write reasonably clear, self-documenting code.
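
As a taste of the high-level API, here is a minimal sketch along the lines of the project’s README at the time: open a session, generate a symmetric key, and encrypt some data. The module path, token label and PIN are placeholders for whatever your SoftHSMv2 or HSM setup uses.

    import os
    import pkcs11

    # Load the PKCS#11 shared library (e.g. SoftHSMv2's libsofthsm2.so)
    lib = pkcs11.lib(os.environ['PKCS11_MODULE'])
    token = lib.get_token(token_label='DEMO')

    data = b'INPUT DATA'

    # Open a session on our token
    with token.open(user_pin='1234') as session:
        # Generate an AES key within this session
        key = session.generate_key(pkcs11.KeyType.AES, 256)

        # Get an initialisation vector; AES blocks are fixed at 128 bits
        iv = session.generate_random(128)

        # Encrypt our data with the session key
        ciphertext = key.encrypt(data, mechanism_param=iv)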

At the moment it’s only tested on SoftHSMv2 and the Thales nCipher Edge, which is what I have access to. If someone at Amazon wanted this to work flawlessly on CloudHSM, send me an account and I’ll do it :-P Then I can look at releasing my Django integrations for fields, storage, signing, etc.

Worse Than FailureCodeSOD: Hard Reboot

Every day in IT, each one of us walks the fine line between "brilliant" and "appalling." We come across things that make our jaws drop, and we're not sure whether we're amazed or horrified or both. Here's a PHP sample that Brett P. was lucky—or unlucky—enough to discover:

This comes from circa 2001, back when there were MySQL stability issues on a server. If in response to the end user's request the script couldn't connect to the session database, PHP would shell out to a VB6 .EXE that rebooted the machine. Ta dum! No more connection error!


if (strpos(mysql_error(), "connect to MySQL server on 'localhost' (10061)")) {
    // The MySQL Error: Can't connect to MySQL server on 'localhost' (10061)
    // means that the MySQL database is not running. This is caused by a failed
    // reboot. Since MySQL is not running, it is safe to execute a hard boot.
    echo "Currently rebooting the server. Please try again in two minutes.";
    flush();
    exec("c:\Progra~1\progs\hardboot.exe");
}

That comment block full of flimsy assumptions and unwarranted confidence makes my insides knot up with dread. I'm leaning toward "appalling" on this one. Anyone else?


Planet DebianMichal Čihař: HackerOne experience with Weblate

Weblate started to use HackerOne Community Edition some time ago, and I think it's good to share my experience with it. Do you have an open source project and want to get more attention from the security community? This post describes how it looks from the perspective of a pretty small project.

I applied with Weblate to HackerOne Community Edition at the end of March, and it was approved early in April. Based on their recommendations I started in invite-only mode, but that really didn't bring much attention (exactly zero reports), so I decided to go public.

I asked to make the project public just after coming back from two weeks of vacation, expecting the approval to take some time, during which I would settle the things that had popped up while I was away. In the end it was approved within a single day, so I was immediately under fire from incoming reports:

Reports on HackerOne

I was surprised that they didn't lie: you really will get a huge number of issues just after making your project public. Most of them were quite simple and repetitive (as you can see from the number of duplicates), but they really provided valuable input.

Even more surprisingly, there was a second peak when I started to disclose resolved issues (once Weblate 2.14 had been released).

Overall, the issues can be divided into a few groups:

  • Server configuration, such as the lack of Content-Security-Policy headers. This is certainly good security practice, and we really didn't follow it in all cases. The situation should be way better now.
  • Lack of rate limiting in Weblate. We really didn't try to do that, and many reporters (correctly) showed that this is something that should be addressed at important entry points such as authentication. Weblate 2.14 brought a lot of features in this area.
  • Not using https where applicable. Yes, some APIs or web sites did not support https in the past, but now they do and I hadn't noticed.
  • Several pages were vulnerable to CSRF, as they were using GET where POST with CSRF protection would be more appropriate.
  • Lack of password strength validation. I've incorporated Django's password validation into Weblate, hopefully ruling out the weakest passwords; see the sketch after this list.
  • Several issues in authentication using Python Social Auth. I had never really looked at how the authentication works there, and there are some questionable decisions and bugs. Some of the bugs have already been addressed in current releases, but there are still some left to solve.
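
For the password validation item above, here is a minimal sketch of what enabling Django's built-in validators looks like in a project's settings.py. The specific validator choices and options below are illustrative, not necessarily the exact set Weblate ships.

    # settings.py -- hook Django's stock password validators into auth.
    # Validator selection and min_length are illustrative choices.
    AUTH_PASSWORD_VALIDATORS = [
        # Reject passwords shorter than a minimum length
        {
            'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
            'OPTIONS': {'min_length': 8},
        },
        # Reject passwords found on a list of commonly used passwords
        {
            'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
        },
        # Reject passwords that are entirely numeric
        {
            'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
        },
    ]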

In the end it was a really challenging week coping with the incoming reports, but I think I managed it quite well. The HackerOne metrics state that it takes two hours on average to respond to incoming incidents, which I think will not work in the long term :-).

Anyway, thanks to this, you can now enjoy Weblate 2.14, which is more secure than any release before. If you have not yet upgraded, you might consider doing so now, or look into our support offering for self-hosted Weblate.

The downside of all this was that the initial publication on HackerOne made our website the target of a lot of automated tools, and the web server was not really ready for that. I'm really sorry to all Hosted Weblate users who were affected by this. This has also been addressed now, but the infrastructure really should have been prepared for it beforehand. To show what it looked like, here is the number of requests to the nginx server:

nginx requests

I'm really glad I could make Weblate available on HackerOne, as it will clearly improve its security and the security of the hosted offering we have. I will certainly consider providing swag and/or bounties for further severe reports, but that won't be possible without enough funding for Weblate.

Filed under: Debian English SUSE Weblate

CryptogramWannaCry Ransomware

Criminals go where the money is, and cybercriminals are no exception.

And right now, the money is in ransomware.

It's a simple scam. Encrypt the victim's hard drive, then extract a fee to decrypt it. The scammers can't charge too much, because they want the victim to pay rather than give up on the data. But they can charge individuals a few hundred dollars, and they can charge institutions like hospitals a few thousand. Do it at scale, and it's a profitable business.

And scale is how ransomware works. Computers are infected automatically, with viruses that spread over the internet. Payment is no more difficult than buying something online -- and payable in untraceable bitcoin -- with some ransomware makers offering tech support to those unsure of how to buy or transfer bitcoin. Customer service is important; people need to know they'll get their files back once they pay.

And they want you to pay. If they're lucky, they've encrypted your irreplaceable family photos, or the documents of a project you've been working on for weeks. Or maybe your company's accounts receivable files or your hospital's patient records. The more you need what they've stolen, the better.

The particular ransomware making headlines is called WannaCry, and it's infected some pretty serious organizations.

What can you do about it? Your first line of defense is to diligently install every security update as soon as it becomes available, and to migrate to systems that vendors still support. Microsoft issued a security patch that protects against WannaCry months before the ransomware started infecting systems; the ransomware only works against computers that haven't been patched. And many of the systems it infects are older computers, no longer normally supported by Microsoft -- though it did belatedly release a patch for those older systems. I know it's hard, but until companies are forced to maintain old systems, you're much safer upgrading.

This is easier advice for individuals than for organizations. You and I can pretty easily migrate to a new operating system, but organizations sometimes have custom software that breaks when they change OS versions or install updates. Many of the organizations hit by WannaCry had outdated systems for exactly these reasons. But as expensive and time-consuming as updating might be, the risks of not doing so are increasing.

Your second line of defense is good antivirus software. Sometimes ransomware tricks you into encrypting your own hard drive by clicking on a file attachment that you thought was benign. Antivirus software can often catch your mistake and prevent the malicious software from running. This isn't perfect, of course, but it's an important part of any defense.

Your third line of defense is to diligently back up your files. There are systems that do this automatically for your hard drive. You can invest in one of those. Or you can store your important data in the cloud. If your irreplaceable family photos are in a backup drive in your house, then the ransomware has that much less hold on you. If your e-mail and documents are in the cloud, then you can just reinstall the operating system and bypass the ransomware entirely. I know storing data in the cloud has its own privacy risks, but they may be less than the risks of losing everything to ransomware.

That takes care of your computers and smartphones, but what about everything else? We're deep into the age of the "Internet of things."

There are now computers in your household appliances. There are computers in your cars and in the airplanes you travel on. Computers run our traffic lights and our power grids. These are all vulnerable to ransomware. The Mirai botnet exploited a vulnerability in internet-enabled devices like DVRs and webcams to launch a denial-of-service attack against a critical internet name server; next time it could just as easily disable the devices and demand payment to turn them back on.

Re-enabling a webcam will be cheap; re-enabling your car will cost more. And you don't want to know how vulnerable implanted medical devices are to these sorts of attacks.

Commercial solutions are coming, probably a convenient repackaging of the three lines of defense described above. But it'll be yet another security surcharge you'll be expected to pay because the computers and internet-of-things devices you buy are so insecure. Because there are currently no liabilities for lousy software and no regulations mandating secure software, the market rewards software that's fast and cheap at the expense of good. Until that changes, ransomware will continue to be a profitable line of criminal business.

This essay previously appeared in the New York Daily News.

Cory DoctorowOxford, I’ll see you tonight on the Walkaway tour (then London, Liverpool, Birmingham…) (!)



I’m in the UK for the British Walkaway tour, which kicks off tonight at 7PM in Oxford where I’ll be in conversation with Tim Harford at Blackwells.


Then I’m doing three events in London: a signing at Forbidden Planet at 6PM on Tuesday, then a conversation with Laurie Penny at Waterstones Tottenham Court Road at 7:45 on Tuesday; and on Wednesday, it’s a conversation with Olivia Sudjic (author of Sympathy) at Pages of Hackney at 7PM.

From there, I’m heading back to the USA for appearances in San Francisco (Bay Area Book Fair); NYC (Book Con); Denver (Denver Comic-Con); Chicago (Printer’s Row); San Diego (San Diego Comic-Con) and Las Vegas (Defcon) — more stuff is also being added, so watch this space!

,

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 2, week 6

This week students doing the Understanding Our World™ program are exploring their environment and considering indigenous peoples. Younger students are learning about local history and planning a poster on a local issue. Older students are studying indigenous peoples around the world. All the students are working strongly on their main pieces of assessment for the term.

Foundation/Prep/Kindy to Year 3

Our youngest students, using the stand-alone Foundation/Prep/Kindy unit (F.2), are exploring the sense of touch in their environment this week. Students consider a range of fabrics and textiles and choose which ones match their favourite place, for inclusion in their model or collage. Students in integrated classes of Foundation/Prep/Kindy and Year 1 (Unit F.6), Year 1 students (Unit 1.2), Year 2 students (Unit 2.2) and Year 3 students (Unit 3.2) are starting to prepare a poster on an issue regarding their school, or local park/heritage place, while considering the local history. These investigations should be based on the excursion from last week. Students will have 2 weeks to prepare their posters, for display either at the school or a local venue, such as the library or community hall.

Years 3 to 6

Students in Years 3 to 6 are continuing with their project on an explorer. Students in Year 3 (Unit 3.6) are examining Australian Aboriginal groups from extreme climate areas of Australia, such as the central deserts, or cold climate areas. Students then choose one of these groups to describe in their Student Workbook, and add to their presentation. Students in Year 4 (Unit 4.2) are studying indigenous peoples of Africa and South America. They will then select a group from the area visited by their explorer, to include in their presentation. Year 5 students (Unit 5.2) do the same with indigenous groups from North America, whilst Year 6 students (Unit 6.2) have a wide range of resources on indigenous peoples from Asia to select for study and inclusion in their presentation. Resources are available on groups from across mainland Asia (such as the Mongols, Tatars, Rus, Han), as well as South-East Asia (such as Malay, Dyak, Dani etc.). This is the last section of work to be included in the presentation, and students will then finish their presentations and present them to the class.

Planet DebianRitesh Raj Sarraf: apt-offline 1.8.0 released

I am pleased to announce the release of apt-offline, version 1.8.0. This release is mainly a forward port of apt-offline to Python 3 and PyQt5. There are some glitches related to Python 3 and PyQt5, but overall the CLI interface works fine. Other than the porting, there's also an important bug fix for a memory leak when using the MIME library. And there are some updates to the documentation (user examples) based on feedback from users.

The release is available from GitHub and Alioth.


What is apt-offline ?

Description: offline APT package manager
apt-offline is an Offline APT Package Manager.
.
apt-offline can fully update and upgrade an APT based distribution without
connecting to the network, all of it transparent to APT.
.
apt-offline can be used to generate a signature on a machine (with no network).
This signature contains all download information required for the APT database
system. This signature file can be used on another machine connected to the
internet (which need not be a Debian box and can even be running windows) to
download the updates.
The downloaded data will contain all updates in a format understood by APT and
this data can be used by apt-offline to update the non-networked machine.
.
apt-offline can also fetch bug reports and make them available offline.
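
For readers who haven't used it, the typical round trip looks roughly like this (file paths are illustrative; see apt-offline --help for the full set of options):

    # On the disconnected machine: record what APT needs
    apt-offline set /tmp/apt-offline.sig --update --upgrade

    # On any internet-connected machine (it need not run Debian):
    apt-offline get /tmp/apt-offline.sig --bundle /tmp/apt-offline.zip

    # Back on the disconnected machine: feed the downloaded data to APT
    apt-offline install /tmp/apt-offline.zip
    apt-get upgrade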


Planet DebianHolger Levsen: 20170521-this-time-of-the-year

It's this time of the year again…

So it seems summer has finally arrived here and for the first time this year I've been offline for more than 24h, even despite having wireless network coverage. The lake, the people, the bonfire, the music, the mosquitos and the fireworks at 3.30 in the morning were totally worth it! ;-)

Planet DebianRuss Allbery: Review: Sector General

Review: Sector General, by James White

Series: Sector General #5
Publisher: Orb
Copyright: 1983
Printing: 2002
ISBN: 0-312-87770-6
Format: Trade paperback
Pages: 187

Sector General is the fifth book (or, probably more accurately, collection) in the Sector General series. I blame the original publishers for the confusion. The publication information is for the Alien Emergencies omnibus, which includes the fourth through the sixth books in the series.

Looking back on my previous reviews of this series (wow, it's been eight years since I read the last one?), I see I was reviewing them as novels rather than as short story collections. In retrospect, that was a mistake, since they're composed of clearly stand-alone stories with a very loose arc. I'm not going to go back and re-read the earlier collections to give them proper per-story reviews, but may as well do this properly here.

Overall, this collection is more of the same, so if that's what you want, there won't be any negative surprises. It's another four engineer-with-a-wrench stories about biological and medical puzzles, with only a tiny bit of characterization and little hint to any personal life for any of the characters outside of the job. Some stories are forgettable, but White does create some memorable aliens. Sadly, the stories don't take us to the point of real communication, so those aliens stop at biological puzzles and guesswork. "Combined Operation" is probably the best, although "Accident" is the most philosophical and an interesting look at the founding principle of Sector General.

"Accident": MacEwan and Grawlya-Ki are human and alien brought together by a tragic war, and forever linked by a rather bizarre war monument. (It's a very neat SF concept, although the implications and undiscussed consequences don't bear thinking about too deeply.) The result of that war was a general recognition that such things should not be allowed to happen again, and it brought about a new, deep commitment to inter-species tolerance and politeness. Which is, in a rather fascinating philosophical twist, exactly what MacEwan and Grawlya-Ki are fighting against: not the lack of aggression, which they completely agree with, but with the layers of politeness that result in every species treating all others as if they were eggshells. Their conviction is that this cannot create a lasting peace.

This insight is one of the most profound bits I've read in the Sector General novels and supports quite a lot of philosophical debate. (Sadly, there isn't a lot of that in the story itself.) The backdrop against which it plays out is an accidental crash in a spaceport facility, creating a dangerous and potentially deadly environment for a variety of aliens. Given the collection in which this is included and the philosophical bent described above, you can probably guess where this goes, although I'll leave it unspoiled if you can't. It's an idea that could have been presented with more subtlety, but it's a really great piece of setting background that makes the whole series snap into focus. A much better story in context than its surface plot. (7)

"Survivor": The hospital ship Rhabwar rescues a sole survivor from the wreck of an alien ship caused by incomplete safeguards on hyperdrive generators. The alien is very badly injured and unconscious and needs the full attention of Sector General, but on the way back, the empath Prilicla also begins suffering from empathic hypersensitivity. Conway, the protagonist of most of this series, devotes most of his attention to that problem, having delivered the rescued alien to competent surgical hands. But it will surprise no regular reader that the problems turn out to be linked (making it a bit improbable that it takes the doctors so long to figure that out). A very typical entry in the series. (6)

"Investigation": Another very typical entry, although this time the crashed spaceship is on a planet. The scattered, unconscious bodies of the survivors, plus signs of starvation and recent amputation on all of them, convinces the military (well, police is probably more accurate) escort that this is may be a crime scene. The doctors are unconvinced, but cautious, and local sand storms and mobile vegetation add to the threat. I thought this alien design was a bit less interesting (and a lot creepier). (6)

"Combined Operation": The best (and longest) story of this collection. Another crashed alien spacecraft, but this time it's huge, large enough (and, as they quickly realize, of a design) to indicate a space station rather than a ship, except that it's in the middle of nowhere and each segment contains a giant alien worm creature. Here, piecing together the biology and the nature of the vehicle is only the beginning; the conclusion points to an even larger problem, one that requires drawing on rather significant resources to solve. (On a deadline, of course, to add some drama.) This story requires the doctors to go unusually deep into the biology and extrapolated culture of the alien they're attempting to rescue, which made it more intellectually satisfying for me. (7)

Followed by Star Healer.

Rating: 6 out of 10

Planet DebianAdnan Hodzic: Automagically deploy & run containerized WordPress (PHP7 FPM, Nginx, MariaDB) using Ansible + Docker on AWS

In this blog post, I’ve described what started as simple migration of WordPress blog to AWS, ended up as automation project consisting of publishing multiple Ansible roles deploying and running multiple Docker images.

If you’re not interested in reading about my entire journey, cognition gains and how this process came to be, please skim down to “Birth of: containerized-wordpress-project (TL;DR)” section.

Migrating WordPress blog to AWS (EC2, Lightsail?)

I’ve been sold on Amazon’s AWS idea of cloud computing “services” for a couple of years now, so I’ve wanted, and been trying, to migrate this (WordPress) blog to AWS, but somehow it never worked out.

Moving it to an EC2 instance, with its own EBS volumes, AMI, EIP and Security Group … it just seemed like overkill.

When AWS Lightsail was first released, it seemed like the answer to all my problems.

But it wasn’t, even disregarding its somewhat restrictive, dumbed-down versions of the original features. Living in Amsterdam, my main problem with it was that it was only available in a single US region.

Regardless, I thought it had everything I needed for a WordPress site, and as a new service, it had great potential.

Its regional limitations were also good in one sense: they made me realize something important. Once I migrated my blog to AWS, I wanted to be able to seamlessly move it across different EC2 instances and different regions as they became available.

If done properly, it meant I could even move it across different clouds (I’m talking to you, Google Cloud).

P.S.: AWS Lightsail is now available in a couple of different regions across Europe, a rollout which was almost seamless.

Fundamental problem of every migration … is migration

Phase 1: Don’t reinvent the wheel?

When you have a WordPress site that’s not self-hosted, you want everything to work, yet you really don’t want to spend any time managing the infrastructure it runs on.

And as soon as I started looking for what could fit these criteria, I found that there were pre-configured, out-of-the-box WordPress EC2 images available on the AWS Marketplace. Great!

But when I took a look, although everything ran out of the box, I wasn’t happy with the software stack it was all built on: namely Ubuntu 14.04 and Apache, with all of the services started using custom scripts. Yuck.

With this setup, when it was time to upgrade (and it’s already that time), you wouldn’t be thinking about an upgrade. You’d only be thinking about another migration.

Phase 2: What if I built everything myself?

Installing and configuring everything manually, and then writing a huge HowTo to follow whenever I needed to re-create the whole stack, was not an option. The same went for scripting the whole process, as the overhead of changes that had to be tracked was way too big.

Being a huge Ansible fan, automating this step was a natural next step.

I even found an awesome Ansible role which seemed like it was going to do everything I needed. Except I realized I needed to update all the software deployed with it, and customize it, since the configuration it was built around wasn’t generic enough.

So I forked it and got to work. But soon enough, I was knee-deep in making and fiddling with various system changes. That was something I was trying to get away from in this case, and most importantly something I was trying to avoid when it was time for the next update.

Phase 3: Marriage made in heaven: Ansible + Docker + AWS

The idea of having everything Dockerized was around from the very start. However, it never made a lot of sense until I put Ansible into the same picture. It was at this point that my final idea and requirements became crystal clear.

Use Ansible to configure and set up a host ready for the Docker ecosystem: an ecosystem consisting of a separate container for each required service (WordPress + Nginx + MariaDB), all linked together as a single service using Docker Compose.
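
To make that concrete, here's a minimal sketch of what such a Docker Compose definition could look like (service names, image tags and credentials are illustrative placeholders, not the actual project files):

    version: '2'
    services:
      mariadb:
        image: mariadb:10.1
        environment:
          MYSQL_ROOT_PASSWORD: changeme
          MYSQL_DATABASE: wordpress
        volumes:
          - db_data:/var/lib/mysql
      wordpress:
        image: wordpress:php7.1-fpm
        depends_on:
          - mariadb
        environment:
          WORDPRESS_DB_HOST: mariadb
          WORDPRESS_DB_PASSWORD: changeme
        volumes:
          - wp_data:/var/www/html
      nginx:
        image: nginx:stable
        depends_on:
          - wordpress
        ports:
          - "80:80"
        # an nginx vhost config (not shown) proxies PHP requests to wordpress:9000
        volumes:
          - wp_data:/var/www/html:ro
    volumes:
      db_data:
      wp_data:

An Nginx vhost configuration still has to be mounted in for the proxying to work; the point here is simply that each service lives in its own container and Compose wires them together.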

The idea was backed by the goal of spending minimal to no time (and effort) on manual configuration of anything on the server. My level of attachment to this server was so low that I didn’t even want to SSH into it.

If there was something wrong, I could just nuke the whole thing and deploy the code on a freshly rolled-out, healthy server, with everything working out of the box.

After it was clear what needed to be done, I got to work.

Birth of: containerized-wordpress-project (TL;DR)

After a lot of work, the end result is a project which allows you to automagically deploy & run a containerized WordPress instance consisting of 3 separate containers running:

  • WordPress (PHP7 FPM)
  • Nginx
  • MariaDB

Once run, the containerized-wordpress playbook will guide you through an interactive setup of all 3 containers, after which it will run all the Ansible roles created for this project. The end result is that a host you have never even SSH-ed into will be fully configured and running a containerized WordPress instance out of the box.

Most importantly, this whole process will be completed in <= 5 minutes and doesn’t require any Docker or Ansible knowledge!

containerized-wordpress demo

Console output of running “containerized-wordpress” Ansible Playbook:

Console output of running "containerized-wordpress" Ansible Playbook

Accessing WordPress instance created from “containerized-wordpress” Ansible Playbook:

Accessing WordPress instance created from "containerized-wordpress" Ansible Playbook

Did I end up migrating to AWS in the end?

You bet. Thanks to the efforts made in containerized-wordpress-project, I’m happy to report my whole WordPress migration to AWS was completed in a matter of minutes, and that this blog is now running on Docker and on AWS!

I hope this same project will help you take the leap in your own migration.

Happy hacking!

Planet DebianElena 'valhalla' Grandi: Modern XMPP Server

Modern XMPP Server

I've published a new HOWTO on my website (www.trueelena.org/computers/ho).

Enrico Zini (www.enricozini.org/blog/2017/d) already wrote about the Why (and the What, Who and When), so I'll just quote his conclusion and move on to the How.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.


How



I've decided to install Prosody (prosody.im), mostly because it was recommended by the RTC QuickStart Guide (rtcquickstart.org); I've heard that similar results can be reached with ejabberd (www.ejabberd.im) and other servers.

I'm also targeting Debian (www.debian.org) stable (+ backports); as I write this, that is jessie; if there are significant differences I will update this article when I upgrade my server to stretch. Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites



You will need to enable the backports repository (backports.debian.org) and then install the packages prosody and prosody-modules.

You also need to set up some TLS certificates (I used Let's Encrypt, letsencrypt.org) and make them readable by the prosody user; see Chapter 12 of the RTC QuickStart Guide (rtcquickstart.org/guide/multi/) for more details.

On your firewall, you'll need to open the following TCP ports:


  • 5222 (client2server)

  • 5269 (server2server)

  • 5280 (default http port for prosody)

  • 5281 (default https port for prosody)



The latter two are needed to enable some services provided via http(s), including rich media transfers.
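
For example, if you happen to manage your firewall with ufw (just one option; adapt to whatever frontend you actually use), that boils down to:

    for port in 5222 5269 5280 5281; do
        ufw allow "${port}/tcp"
    done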

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser alice@example.org

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim, en.wikipedia.org/wiki/Messagin).

prosody configuration



You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:


c2s_require_encryption = true
s2s_secure_auth = true



and then, sadly, add to the whitelist any server that you want to talk to and doesn't support the above:


s2s_insecure_domains = { "gmail.com" }


virtualhosts



For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/chat.example.org.cfg.lua with contents like the following:


VirtualHost "chat.example.org"
    enabled = true
    ssl = {
        key = "/etc/ssl/private/example.org-key.pem";
        certificate = "/etc/ssl/public/example.org.pem";
    }


For the domains where you also want to enable MUCs, add the following lines:


Component "conference.chat.example.org" "muc"
    restrict_room_creation = "local"


the "local" configures prosody so that only local users are allowed to create new rooms (but then everybody can join them, if the room administrator allows it): this may help reduce unwanted usages of your server by random people.

You can also add the following line to enable rich media transfers via http uploads (XEP-0363):


Component "upload.chat.example.org" "http_upload"

The defaults are pretty sane, but see modules.prosody.im/mod_http_up for details on what knobs you can configure for this module.

Don't forget to enable the virtualhost by linking the file inside /etc/prosody/conf.d/.

additional modules



Most of the other interesting XEPs are enabled by loading additional modules inside /etc/prosody/prosody.cfg.lua (under modules_enabled); to enable mod_something just add a line like:


"something";

Most of these come from the prosody-modules package (and thus from modules.prosody.im) and some may require changes when prosody 0.10 becomes available; where this is the case, it is mentioned below.



  • mod_carbons (XEP-0280)
    To keep conversations synchronized while using multiple devices at the same time.

    This will be included by default in prosody 0.10.



  • mod_privacy + mod_blocking (XEP-0191)
    To allow user-controlled blocking of users, including as an anti-spim measure.

    In prosody 0.10 these two modules will be replaced by mod_blocklist.



  • mod_smacks (XEP-0198)
    Allow clients to resume a disconnected session before a customizable timeout and prevent message loss.



  • mod_mam (XEP-0313)
    Archive messages on the server for a limited period of time (default 1 week) and allow clients to retrieve them; this is required to synchronize message history between multiple clients.

    With prosody 0.9 only an in-memory storage backend is available, which may make this module problematic on servers with many users. prosody 0.10 will fix this by adding support for an SQL backed storage with archiving capabilities.



  • mod_throttle_presence + mod_filter_chatstates (XEP-0352)
    Filter out presence updates and chat states when the client announces (via Client State Indication) that the user isn't looking. This is useful to reduce power and bandwidth usage for "useless" traffic.
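
Putting the modules above together, the relevant addition to modules_enabled would look something like this (a sketch: keep the defaults already shipped by the package, and double-check the exact module names against your prosody-modules version):

    modules_enabled = {
        -- ... the distribution defaults, plus:
        "carbons";            -- XEP-0280
        "privacy";            -- XEP-0191, together with:
        "blocking";
        "smacks";             -- XEP-0198
        "mam";                -- XEP-0313
        "throttle_presence";  -- XEP-0352, together with:
        "filter_chatstates";
    }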




@Gruppo Linux Como @LIFO

,

Sam VargheseThey do things differently in China – and it seems to work

Towards the latter stages of his life, Charles Darwin noted that he could not read serious texts any more; the only thing that grabbed his attention was a book on romance. One of the greatest scientific minds we have known could only enjoy a book about the mating game.

One would not liken oneself to the great man, but over the last nine months one has been similarly drawn away from serious work to become a regular viewer of a Chinese dating show that goes by the name If You Are The One.

The show is a record-breaker; it has about 60 million tuning in for every episode and has been running for seven years. The presenter, Meng Fei, is a national celebrity.

There are many things about the show that grab the attention. First, it is based on an Australian show that flopped after just four episodes.

If this show had been running in any developed country, the emphasis would have been on sex. All shows that bring men and women together with romance as the aim always focus on that primeval force.

But the Chinese show could not be more different; while a successful outcome means that a male candidate would get a date with one of the 24 girls on the show, the focus is more on society’s need for such liaisons.

Four or five men appear on each episode and the women can indicate their interest, or lack of it. Three videos are shown about each man, and at any time the girls can signal their lack of interest by turning off the light on the podium in front of them.

In what is considered a male-dominated society, the girls get the first chance to reject a man.

In recent years, a girl has been allowed to indicate her interest in a man by “blowing up her light”; this means she is there at the time when the man makes his choice.

Finally, after the three videos are screened, if two or more girls have their lights still on, the man gets to choose. He initially picks a favourite girl and she is also called up if her light is not on. Then he makes a choice – at times it could be to walk away with nobody.

There is a lot of social commentary woven in by the presenter and two guest commentators, both celebrities in different fields. It is entertaining for one reason: it keeps things simple.

The presenter is 40+ and that in itself is a peculiarity in a show that is matching up mostly 20-somethings with each other. The format is the same week after week, with the variety coming in catering to expatriate Chinese on some occasions.

But its success is remarkable. It must be raking in the money, else it would not be going on so long. It is one indication that they do things differently in China and that it works for them.

Cory DoctorowBurbank: I’m coming to you today on the Walkaway tour! (then Oxford, London, Liverpool…)

I took great advantage of my 36 hour hiatus from the Walkaway tour, but I’m back at it today, with a 2PM appearance at Burbank’s Dark Delicacies, before I go straight to the airport to fly to the UK for my British tour.


On Monday, I’ll be at Blackwell’s Oxford at 7PM with Tim Harford; on Tuesday I’ll be in London at Forbidden Planet at 6PM and at Waterstone’s Tottenham Court Road at 7PM with Laurie Penny.

On Wednesday I’ll be in London again, at Clapton’s Pages, with Olivia Sudjic; and then in Liverpool (with Chris Pak), Birmingham, and Hay-on-Wye (with Adam Rutherford).

I’ll be back on the road in the USA when I get back: Bookcon NYC, Denver Comic-Con, San Diego Comic-Con, Defcon, and Printer’s Row Chicago (with more to come!).

I’ll see you there!

Planet DebianNeil Williams: Software, service, data and freedom

Free software, free services but what about your data?

I care a lot about free software, not only as a Debian Developer. The use of software as a service matters as well, because my principal free software development work is on just such a project, licensed under the GNU Affero General Public License version 3. The AGPL helps by allowing anyone who is suitably skilled to install their own copy of the software and run their own service on their own hardware. As a project, we are seeing increasing numbers of groups doing exactly this, and these groups are actively contributing back to the project.

So what is the problem? We've got an active project, an active community and everything is under a free software licence and regularly uploaded to Debian main. We have open code review with anonymous access to our own source code CI and anonymous access to project planning, open mailing list archives as well as an open bug tracker and a very active IRC channel (#linaro-lava on OFTC). We develop in the open, we respond in the open and we publish frequently (monthly, approximately). The code we write defaults to public visibility at runtime, with restrictions available for certain use cases.

What else can we be doing? Well it was a simple question which started me thinking.

The LAVA documentation has various example test scripts, e.g. https://validation.linaro.org/static/docs/v2/examples/test-jobs/qemu-kernel-standard-sid.yaml

These have no licence information; we've adapted them for a Linux Foundation project. What licence should apply to these files?

Robert Marshall

Those are our own examples, contributed as part of the documentation and covered by the AGPL like the rest of the documentation and the software which it documents, so I replied with the same. However, what about all the other submissions received by the service?

Data Freedom

LAVA acts by providing a service to authenticated users. The software runs your test code on hardware which might not be available to the user or which is simply inconvenient for the test writer to setup themselves. The AGPL covers this nicely.

What about the data contributed by the users? We make this available to other users who will, naturally, copy and paste for their own tests. In most cases, because the software defaults to public access, anonymous users also get to learn from the contributions of other test writers. This is a good thing and to be encouraged. (One reason why we moved to YAML for all submissions was to allow comments to help other users understand why the submission does specific things.)

Writing a test job submission or a test shell definition from scratch is a non-trivial amount of work. We've written dozens of pages of documentation covering how and how not to do it, but the detail of making a test job run exactly what the test writer requires can involve substantial effort. (Our documentation recommends using version control for each of these works for exactly these reasons.)

At what point do these works become software? At what point do these need licensing? How could that be declared?

Perils of the Javascript Trap approach

When reading up on the AGPL, I also read about Service as a Software Substitute (SaaSS) and this led to The Javascript Trap.

I don't consider LAVA to be SaaSS although it is Software as a Service (SaaS). (Distinguishing between those is best left to the GNU document as it is an almighty tangle at times.)

I did look at the GNU ideas for licensing Javascript but it seems cumbersome and unnecessary - a protocol designed for the specific purposes of their own service rather than as a solution which could be readily adopted by all such services.

The same problems affect trying to untangle sharing the test job data within LAVA.

Adding Licence text

The traditional way, of course, is simply to add twenty lines or so of comments at the top of every file. This works nicely for source code because the comments are hidden from the final UI (unless an explicit reference is made in the --help output or similar). It is less nice for human-readable submissions where the first thing someone has to do is scroll past the comments to get to what they want to see. At that point, it starts to look like a popup or a nagging banner - blocking the requested content on a website to try and get the viewer to subscribe to a newsletter or pay for the rest of the content. Let's not actively annoy visitors who are trying to get things done.

Adding Licence files

This can be done in the remote version control repository - then a single line in the submitted file can point at the licence. This is how I'm seeking to solve the problem of our own repositories. If the reference URL is included in the metadata of the test job submission, it can even be linked into the test job metadata and made available to everyone through the results API.

metadata:
  licence.text: http://mysite/lava/git/COPYING
  licence.name: BSD 3 clause

Metadata in LAVA test job submissions is free-form but if the example was adopted as a convention for LAVA submissions, it would make it easy for someone to query LAVA for the licences of a range of test submissions.

Currently, LAVA does not store metadata from the test shell definitions except the URL of the git repo for the test shell definition, but that may be enough in most cases for someone to find the relevant COPYING or LICENCE file.

Which licence?

This could be a problem too. If users contribute data under unfriendly licences, what is LAVA to do? I've used the BSD 3 clause in the above example as I expect it to be the most commonly used licence for these contributions. A copyleft licence could be used, although doing so would require additional metadata in the submission to declare how to contribute back to the original author (because the original author is usually not a member of the LAVA project).

Why not Creative Commons?

Although I'm referring to these contributions as data, these are not pieces of prose or images or audio. These are instructions (with comments) for a specific piece of software to execute on behalf of the user. As such, these objects must comply with the schema and syntax of the receiving service, so a code-based licence would seem correct.

Results

Finally, a word about what comes back from your data submission - the results. This data cannot be restricted by any licence affecting either the submission or the software; it can be restricted using the API or left as the default of public access.

If the results and the submission data really are private, then the solution is to take advantage of the AGPL, take the source code of LAVA and run it internally where the entire service can be placed within a firewall.

What happens next?

  1. Please consider editing your own LAVA test job submissions to add licence metadata.
  2. Please use comments in your own LAVA test job submissions, especially if you are using some form of template engine to generate the submission. This data will be used by others; it is easier for everyone if those users do not have to ask us or you about why your test job does what it does.
  3. Add a file to your own repositories containing LAVA test shell definitions to declare how these files can be shared freely.
  4. Think about other services to which you submit data which is either only partially machine generated or which is entirely human created. Is that data free-form or are you essentially asking the service to do a precise task on your behalf as if you were programming that server directly? (Jenkins is a classic example, closely related to LAVA.)
    • Think about how much developer time was required to create that submission and how the service publishes that submission in ways that allow others to copy and paste it into their own submissions.
    • Some of those submissions can easily end up in documentation or other published sources which will need to know about how to licence and distribute that data in a new format (i.e. modification.) Do you intend for that useful purpose to be defeated by releasing your data under All Rights Reserved?

Contact

I don't enable comments on this blog but there are enough ways to contact me and the LAVA project in the body of this post, it really shouldn't be a problem for anyone to comment.

Planet DebianRitesh Raj Sarraf: Patanjali Research Foundation

PSA: Research in the domain of Ayurveda

http://www.patanjaliresearchfoundation.com/patanjali/

I am so glad to see this initiative taken by the Patanjali group. This is a great stepping stone in the health and wellness domain.

So far, Allopathy has been blunt in discarding alternative medicine practices, without much solid justification. The only, repetitive, response I've heard is "lack of research". This initiative definitely is a great step in that regard.

Ayurveda (Ancient Hindu art of healing) has a huge potential to touch lives. For the Indian sub-continent, this has the potential of a blessing.

The Prime Minister of India himself inaugurated the research centre.


CryptogramNSA Brute-Force Keysearch Machine

The Intercept published a story about a dedicated NSA brute-force keysearch machine being built with the help of New York University and IBM. It's based on a document that was accidentally shared on the Internet by NYU.

The article is frustratingly short on details:

The WindsorGreen documents are mostly inscrutable to anyone without a Ph.D. in a related field, but they make clear that the computer is the successor to WindsorBlue, a next generation of specialized IBM hardware that would excel at cracking encryption, whose known customers are the U.S. government and its partners.

Experts who reviewed the IBM documents said WindsorGreen possesses substantially greater computing power than WindsorBlue, making it particularly adept at compromising encryption and passwords. In an overview of WindsorGreen, the computer is described as a "redesign" centered around an improved version of its processor, known as an "application specific integrated circuit," or ASIC, a type of chip built to do one task, like mining bitcoin, extremely well, as opposed to being relatively good at accomplishing the wide range of tasks that, say, a typical MacBook would handle. One of the upgrades was to switch the processor to smaller transistors, allowing more circuitry to be crammed into the same area, a change quantified by measuring the reduction in nanometers (nm) between certain chip features.

Unfortunately, the Intercept decided not to publish most of the document, so all of those people with "a Ph.D. in a related field" can't read and understand WindsorGreen's capabilities. What sorts of key lengths can the machine brute force? Is it optimized for symmetric or asymmetric cryptanalysis? Random brute force or dictionary attacks? We have no idea.

Whatever the details, this is exactly the sort of thing the NSA should be spending their money on. Breaking the cryptography used by other nations is squarely in the NSA's mission.

,

Cory DoctorowTalking Walkaway with Suicide Girls Radio

Nicole Powers interviewed me for Suicide Girls Radio and transcribed our wide-ranging, political conversation that ranged over Calexit, computer law, Occupy, and science fiction’s role in the world.


NP: I watched your New York Public Library Q&A with Edward Snowden two days ago. You both spoke about immortality being used as a MacGuffin in the book. However, I read an article recently about a surgeon that successfully transplanted a head onto a rat. That same surgeon says he’s going to do that on a human within the year. Then you have Mark Zuckerberg working on his mind-reading project. We’re already heading in the direction that you describe in the book. And, if that comes to pass, there’s going to be this horrific situation where — if it’s left in the hands of the elite — the one percenters are going to get to decide who donates their body and whose brains get to live on.

CD: Ha,ha!

NP: Is this really a MacGuffin or is the idea that it’s a MacGuffin wishful thinking on your part given what’s actually going on in the real world?

CD: No, I seriously think it’s a MacGuffin. Just because Zuck thinks that he knows about neuroscience doesn’t mean that he knows about neuroscience. Dunning-Kruger is alive and well. The reason that con artists targeted successful, intelligent people is they always overestimated their ability to spot a con in domains other than the one that they knew something about. You find a stock broker and you would hook them with a horse race con because stock brokers would assume that understanding a stock market very well also made them really good at understanding horse races — and they were horribly wrong and got taken for every penny. So I wouldn’t say that Zuck’s enthusiasm is any indication of anything except his hubris.


In terms of the transplantation of a rat head, we can’t interrogate the rat to know whether or not that was a successful operation, right? We have only external factors to evaluate the quality of the experimental outcome. It may be that, if you could talk to the rat, you’d find out that the head transplant was not nearly so successful as we thought… So in my view, anyway, it’s a very metaphorical thing.


Where it does touch with reality is in what James Hughes calls ‘transhumanism.’ He wrote a very good book about this called Citizen Cyborg that’s more generally about the ways that technologies give us longer lives of higher quality, and how the uneven distribution of technology in that domain — where that inequality is a function of economic inequality — that it magnifies economic inequality very, very terribly.


Jim, in particular, is worried and interested about the way that maybe we might alter our germplasm, which does seem to me to be well within reach. I mean, we have parts of our genome that at least there’s burgeoning consensus if they’re expressed in certain ways, they probably only do bad things and not good things. And we can, in theory, eliminate those parts of our genome from fertilized zygotes, at least in vitro. So it may be that there are people who are wealthy enough to have IVF and to have CRISPR surgery on the IVF before implantation whose germplasm is permanently altered to remove things that are potentially very harmful. That to me feels like something that it is a little bit like speciation. So if there’s a thing in Walkaway that resonates with you, the place where I would say you should be taking that resonance and trying to apply it to the real-world is not in the hypothetical life extension technologies, but in very non-hypothetical and very real stuff that we’re doing right now.

CryptogramFriday Squid Blogging: Giant Squid Caught Off the Coast of Ireland

It's rare:

Fishermen caught a 19-foot-long giant squid off the coast of Ireland on Monday, only the fifth to be seen there since 1673.

Also the first in 22 years.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

LongNowIs Anything Original? The Work of Art in the Age of Digital Remediation

As PBS Newshour reports, modern-day renaissance workshop Factum Arte preserves art and historical works threatened by war, looting and the passage of time by creating high tech, full-scale reproductions of them. In so doing, the organization is challenging notions of what constitutes an original work of art.

Factum Arte is recreating works of art recently destroyed by ISIS and damaged in the Syrian Civil War. Via Factum Arte.

 

CryptogramNSA Abandons "About" Searches

Earlier this month, the NSA said that it would no longer conduct "about" searches of bulk communications data. This was the practice of collecting the communications of Americans based on keywords and phrases in the contents of the messages, not based on who they were from or to.

The NSA's own words:

After considerable evaluation of the program and available technology, NSA has decided that its Section 702 foreign intelligence surveillance activities will no longer include any upstream internet communications that are solely "about" a foreign intelligence target. Instead, this surveillance will now be limited to only those communications that are directly "to" or "from" a foreign intelligence target. These changes are designed to retain the upstream collection that provides the greatest value to national security while reducing the likelihood that NSA will acquire communications of U.S. persons or others who are not in direct contact with one of the Agency's foreign intelligence targets.

In addition, as part of this curtailment, NSA will delete the vast majority of previously acquired upstream internet communications as soon as practicable.

[...]

After reviewing amended Section 702 certifications and NSA procedures that implement these changes, the FISC recently issued an opinion and order, approving the renewal certifications and use of procedures, which authorize this narrowed form of Section 702 upstream internet collection. A declassification review of the FISC's opinion and order, and the related targeting and minimization procedures, is underway.

A quick review: under Section 702 of the Foreign Intelligence Surveillance Act (FISA), the NSA seizes a copy of all communications moving through a telco -- think e-mail and such -- and searches it for particular senders, receivers, and -- until recently -- key words. This pretty clearly violates the Fourth Amendment, and groups like the EFF have been fighting the NSA in court about this for years. The NSA has also had problems in the FISA court about these searches, and cites "inadvertent compliance incidents" related to this.

We might learn more about this change. Again, from the NSA's statement:

After reviewing amended Section 702 certifications and NSA procedures that implement these changes, the FISC recently issued an opinion and order, approving the renewal certifications and use of procedures, which authorize this narrowed form of Section 702 upstream internet collection. A declassification review of the FISC's opinion and order, and the related targeting and minimization procedures, is underway.

And the EFF is still fighting for more NSA surveillance reforms.

TEDFilmmaker Jen Brea gets a Sundance fellowship, Pamela Ronald makes the case for engineered rice, and more

Behold, your recap of TED-related news:

A new Sundance grant helps indie films get seen. Making a film is hard enough — but getting the film seen by an audience can be just as difficult, especially in this era of non-stop media shifts. To help, Sundance just launched the Creative Distribution Fellowship — and among the first recipients is TED Fellow Jennifer Brea, whose documentary Unrest premiered at Sundance in January 2017. The fellowship offers resources, support and mentorship to find creative new ways to reach audiences. In the press release, Keri Putnam, executive director of Sundance, said: “This entrepreneurial approach to marketing, distribution and audience building empowers independent filmmakers to release their own films, on their own terms, while retaining their rights.” (Watch Jen’s TED Talk)

Dance that’s accessible to all. Wayne McGregor has partnered with Sense, a charity that supports people who are deafblind or have sensory impairments, to create an “educational dance resource … to make dance and movement classes accessible to people with disabilities.” Making Sense of Dance, available free online, is a downloadable booklet and videos with lessons, ideas and games to help people lead movement sessions for people of all abilities. (Watch Wayne’s TED Talk)

The case for engineering rice. Growing rice can be a gamble, especially in the face of climate change-induced droughts. That’s why Pamela Ronald and her lab at UC Davis are engineering rice to be more resilient, in hopes of safeguarding the crop against droughts while protecting food security and the livelihood of farmers who could be devastated by climate change in southeast Asia and sub-saharan Africa. Ronald continues to emphasize the importance of using genetic tools to protect both crops and people. “This focus on genes in our food is a distraction from the really, really important issues,” she told the MIT Technology Review. “We need to make policy based on evidence, and based on a broader understanding of agriculture. There are real challenges for farmers, and we need to be united in using all appropriate technologies to tackle these challenges.” (Watch Pamela’s TED Talk)

How to prepare workers for global trade. As trade becomes more globalized, with production scattered across many countries, how should we educate our kids in the skills they will need? That’s the focus of the OECD’s Skills Outlook 2017 report: it suggests that nations around the world should focus on diversifying their population’s skills, to gain advantage in globalized industries. “Countries increasingly compete through the skills of their workers. When workers have a mix of skills that fit with the needs of technologically advanced industries, specialising in those industries means a comparative advantage,” explains the OECD’s Andreas Schleicher. (Watch Andreas’ TED Talk)

New additions to the Academy of Sciences. Three of our TEDsters have just been elected to the National Academy of Sciences! Sangeeta Bhatia, Esther Duflo and Gabriela González have all been recognized for “distinguished and continuing achievements in original research.” Bhatia is the director of MIT’s Laboratory for Multiscale Regenerative Technologies, which engineers nanotechnologies to improve human health. Also hailing from MIT, as the co-founder and co-director of the Abdul Latif Jameel Poverty Action Lab (J-PAL), Esther Duflo aims to eradicate poverty by informing policies with scientific research. Gabriela González, who spoke at the TED en Español session at TED2017, contributed to the detection of gravitational waves, as predicted by Einstein, through her research with LIGO, the Laser Interferometer Gravitational-Wave Observatory. (Watch Sangeeta’s TED Talk and Esther’s TED Talk)

New books of note from TED Talks speakers. Lidia Yuknavitch channels Joan of Arc in her exploration of a world torn apart by unending violence, and Manuel Lima takes us on a tour of circles and the history of information design behind that shape, while James Stavridis navigates the reading habits and libraries of more than 200 four-star military officers. (Watch Lidia’s TED Talk, Manuel’s TED Talk, and James’ TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.

Planet DebianClint Adams: Help the Aged

I keep meeting girls from Walnut Creek who don’t know about the CDROM.

Posted on 2017-05-19
Tags: ranticore

Sociological ImagesUnbearable bodies: When nobody is good enough

Flashback Friday. 

In a society that objectifies women, women learn that, to many others, they are their bodies. Because our bodies are the means by which others judge us, we place our bodies under deep and critical scrutiny. In such a world, all bodies are always potentially problematic. Women are too much of this or not enough of that. Even when women like their bodies overall, there is always some part that some person would judge unacceptable. And, in any case, our bodies will inevitably (continue to) disappoint us if we lose the ability to invest time and money on them or, of course, dare to age.

Two postcards recently presented at Post Secret illustrate this idea. In one a woman expresses her discomfort with her small breasts:

In the other, a woman explains that her breasts make her feel insecure:


Large breasts are desirable, right? At least that’s what the first woman believes. But large breasts can also be intimidating. Carrying around large breasts can bring attention one doesn’t want (“hey baby”) and judgments that are unfair (“she is flaunting her body”). Small breasts, however, may be de-sexualizing or, conversely, may attract the attention of men who like to pretend that the women they sleep with are girls.

No matter the size and shape of a woman’s breasts, the focus on her body that an objectifying culture makes others feel entitled to makes them meaningful in ways that women can’t control. And that will be a problem for all women sometimes, no matter what their bodies look like.

Originally posted in 2010; cross-posted at Jezebel.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureError'd: The Maybe Compiler

"Maybe it it compiled...maybe it didn't. I guess that I have to find out myself from here on out," wrote, Y. Diomidov.

 

Ken W. writes, "Does buying the 4 year warranty mean Amazon will replace any steaks that don't last?"

 

"I *KNEW* that I should have gone for the blue one!" Connor C. wrote.

 

Adam writes, "Today was a bad day to switch from Chrome."

 

"Looks like my HP printer had been eating paper behind my back," wrote Peter D.

 

Jeanne P. writes, "Ironically, we came across this while looking for colleges that offer Computer Science or Software Engineering majors. The pie charts show acceptance rates."

 


Planet DebianMichael Prokop: Debian stretch: changes in util-linux #newinstretch

We’re coming closer to the Debian/stretch stable release and similar to what we had with #newinwheezy and #newinjessie it’s time for #newinstretch!

Hideki Yamane already started the game by blogging about GitHub’s Icon font, fonts-octicons and Arturo Borrero Gonzalez wrote a nice article about nftables in Debian/stretch.

One package that isn’t new but its tools are used by many of us is util-linux, providing many essential system utilities. We have util-linux v2.25.2 in Debian/jessie and in Debian/stretch there will be util-linux >=v2.29.2. There are many new options available and we also have a few new tools available.

Tools that have been taken over from other packages

  • last: used to be shipped via sysvinit-utils in Debian/jessie
  • lastb: used to be shipped via sysvinit-utils in Debian/jessie
  • mesg: used to be shipped via sysvinit-utils in Debian/jessie
  • mountpoint: used to be shipped via initscripts in Debian/jessie
  • sulogin: used to be shipped via sysvinit-utils in Debian/jessie

New tools

  • lsipc: show information on IPC facilities, e.g.:
root@ff2713f55b36:/# lsipc
    RESOURCE DESCRIPTION                                              LIMIT USED  USE%
    MSGMNI   Number of message queues                                 32000    0 0.00%
    MSGMAX   Max size of message (bytes)                               8192    -     -
    MSGMNB   Default max size of queue (bytes)                        16384    -     -
    SHMMNI   Shared memory segments                                    4096    0 0.00%
    SHMALL   Shared memory pages                       18446744073692774399    0 0.00%
    SHMMAX   Max size of shared memory segment (bytes) 18446744073692774399    -     -
    SHMMIN   Min size of shared memory segment (bytes)                    1    -     -
    SEMMNI   Number of semaphore identifiers                          32000    0 0.00%
    SEMMNS   Total number of semaphores                          1024000000    0 0.00%
    SEMMSL   Max semaphores per semaphore set.                        32000    -     -
    SEMOPM   Max number of operations per semop(2)                      500    -     -
    SEMVMX   Semaphore max value                                      32767    -     -
    
  • lslogins: display information about known users in the system, e.g.:
root@ff2713f55b36:/# lslogins
      UID USER     PROC PWD-LOCK PWD-DENY LAST-LOGIN GECOS
        0 root        2        0        1            root
        1 daemon      0        0        1            daemon
        2 bin         0        0        1            bin
        3 sys         0        0        1            sys
        4 sync        0        0        1            sync
        5 games       0        0        1            games
        6 man         0        0        1            man
        7 lp          0        0        1            lp
        8 mail        0        0        1            mail
        9 news        0        0        1            news
       10 uucp        0        0        1            uucp
       13 proxy       0        0        1            proxy
       33 www-data    0        0        1            www-data
       34 backup      0        0        1            backup
       38 list        0        0        1            Mailing List Manager
       39 irc         0        0        1            ircd
       41 gnats       0        0        1            Gnats Bug-Reporting System (admin)
      100 _apt        0        0        1            
    65534 nobody      0        0        1            nobody
    
  • lsns: list system namespaces, e.g.:
root@ff2713f55b36:/# lsns
            NS TYPE   NPROCS PID USER COMMAND
    4026531835 cgroup      2   1 root bash
    4026531837 user        2   1 root bash
    4026532473 mnt         2   1 root bash
    4026532474 uts         2   1 root bash
    4026532475 ipc         2   1 root bash
    4026532476 pid         2   1 root bash
    4026532478 net         2   1 root bash
    
  • setpriv: run a program with different privilege settings
  • zramctl: tool to quickly set up zram device parameters, to reset zram devices, and to query the status of used zram devices
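
As an illustration of the last one, setting up a compressed swap device could look roughly like this (a sketch; the device name depends on what --find returns):

    modprobe zram
    zramctl --find --size 512M   # prints the allocated device, e.g. /dev/zram0
    mkswap /dev/zram0
    swapon /dev/zram0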

New features/options

agetty (open a terminal and set its mode):

--reload               reload prompts on running agetty instances

blkdiscard (discard the content of sectors on a device):

-p, --step <num>    size of the discard iterations within the offset
-z, --zeroout       zero-fill rather than discard

chrt (show or change the real-time scheduling attributes of a process):

-d, --deadline            set policy to SCHED_DEADLINE
-T, --sched-runtime <ns>  runtime parameter for DEADLINE
-P, --sched-period <ns>   period parameter for DEADLINE
-D, --sched-deadline <ns> deadline parameter for DEADLINE

fdformat (do a low-level formatting of a floppy disk):

-f, --from <N>    start at the track N (default 0)
-t, --to <N>      stop at the track N
-r, --repair <N>  try to repair tracks failed during the verification (max N retries)

fdisk (display or manipulate a disk partition table):

-B, --protect-boot            don't erase bootbits when creating a new label
-o, --output <list>           output columns
    --bytes                   print SIZE in bytes rather than in human readable format
-w, --wipe <mode>             wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>  wipe signatures from new partitions (auto, always or never)

New available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

findmnt (find a (mounted) filesystem):

-J, --json             use JSON output format
-M, --mountpoint <dir> the mountpoint directory
-x, --verify           verify mount table content (default is fstab)
    --verbose          print more details
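
The new verify mode gives you a quick sanity check of /etc/fstab, e.g.:

    findmnt --verify --verbose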

flock (manage file locks from shell scripts):

-F, --no-fork            execute command without forking
    --verbose            increase verbosity


hwclock (query or set the hardware clock):

--get            read hardware clock and print drift corrected result
--update-drift   update drift factor in /etc/adjtime (requires --set or --systohc)

ldattach (attach a line discipline to a serial line):

-c, --intro-command <string>  intro sent before ldattach
-p, --pause <seconds>         pause between intro and ldattach

logger (enter messages into the system log):

-e, --skip-empty         do not log empty lines when processing files
    --no-act             do everything except the write the log
    --octet-count        use rfc6587 octet counting
-S, --size <size>        maximum size for a single message
    --rfc3164            use the obsolete BSD syslog protocol
    --rfc5424[=<snip>]   use the syslog protocol (the default for remote);
                           <snip> can be notime, or notq, and/or nohost
    --sd-id <id>         rfc5424 structured data ID
    --sd-param <data>    rfc5424 structured data name=value
    --msgid <msgid>      set rfc5424 message id field
    --socket-errors[=<on|off|auto>] print connection errors when using Unix sockets

losetup (set up and control loop devices):

-L, --nooverlap               avoid possible conflict between devices
    --direct-io[=<on|off>]    open backing file with O_DIRECT 
-J, --json                    use JSON --list output format

New available --list column:

DIO  access backing file with direct-io

lsblk (list information about block devices):

-J, --json           use JSON output format

New available columns (for --output):

HOTPLUG  removable or hotplug device (usb, pcmcia, ...)
SUBSYSTEMS  de-duplicated chain of subsystems

lscpu (display information about the CPU architecture):

-y, --physical          print physical instead of logical IDs

New available column:

DRAWER  logical drawer number

lslocks (list local system locks):

-J, --json             use JSON output format
-i, --noinaccessible   ignore locks without read permissions

nsenter (run a program with namespaces of other processes):

-C, --cgroup[=<file>]      enter cgroup namespace
    --preserve-credentials do not touch uids or gids
-Z, --follow-context       set SELinux context according to --target PID

rtcwake (enter a system sleep state until a specified wakeup time):

--date <timestamp>   date time of timestamp to wake
--list-modes         list available modes

sfdisk (display or manipulate a disk partition table):

New Commands:

-J, --json <dev>                  dump partition table in JSON format
-F, --list-free [<dev> ...]       list unpartitioned free areas of each device
-r, --reorder <dev>               fix partitions order (by start offset)
    --delete <dev> [<part> ...]   delete all or specified partitions
--part-label <dev> <part> [<str>] print or change partition label
--part-type <dev> <part> [<type>] print or change partition type
--part-uuid <dev> <part> [<uuid>] print or change partition uuid
--part-attrs <dev> <part> [<str>] print or change partition attributes

New Options:

-a, --append                   append partitions to existing partition table
-b, --backup                   backup partition table sectors (see -O)
    --bytes                    print SIZE in bytes rather than in human readable format
    --move-data[=<typescript>] move partition data after relocation (requires -N)
    --color[=<when>]           colorize output (auto, always or never)
                               colors are enabled by default
-N, --partno <num>             specify partition number
-n, --no-act                   do everything except write to device
    --no-tell-kernel           do not tell kernel about changes
-O, --backup-file <path>       override default backup file name
-o, --output <list>            output columns
-w, --wipe <mode>              wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>   wipe signatures from new partitions (auto, always or never)
-X, --label <name>             specify label type (dos, gpt, ...)
-Y, --label-nested <name>      specify nested label type (dos, bsd)

Available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start  End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

swapon (enable devices and files for paging and swapping):

-o, --options <list>     comma-separated list of swap options

New available columns (for --show):

UUID   swap uuid
LABEL  swap label

unshare (run a program with some namespaces unshared from the parent):

-C, --cgroup[=<file>]                              unshare cgroup namespace
    --propagation slave|shared|private|unchanged   modify mount propagation in mount namespace
-s, --setgroups allow|deny                         control the setgroups syscall in user namespaces

Deprecated / removed options

sfdisk (display or manipulate a disk partition table):

-c, --id                  change or print partition Id
    --change-id           change Id
    --print-id            print Id
-C, --cylinders <number>  set the number of cylinders to use
-H, --heads <number>      set the number of heads to use
-S, --sectors <number>    set the number of sectors to use
-G, --show-pt-geometry    deprecated, alias to --show-geometry
-L, --Linux               deprecated, only for backward compatibility
-u, --unit S              deprecated, only sector unit is supported

Don MartiWhat happened to Twitter? We can't look away...

Hey, everybody, check it out.

Here's a Twitter ad.

some dumb Twitter ad

If you're "verified" on Twitter, you probably miss these, so I'll just use my Fair Use rights to share that one with you.

You're welcome.

Twitter is a uniquely influential medium, one that shows up on the TV news every night and on news sites all day. But somehow, the plan to make money from Twitter is to run the same kind of targeted ads that anyone with a WordPress site can. And the latest Twitter news is a privacy update that includes, among other things, more tracking of users from one site to another. Yes, the same kind of thing that Facebook already does, and better, with more users. And the same kind of thing that any web site can already get from an entire Lumascape of companies. Boring.

If you want to stick this kind of ad on your WordPress site, you just have to cut and paste some ad network HTML—not build out a deluxe office space on Market Street in San Francisco the way Twitter has. But the result is about the same.

What makes Twitter even more facepalm-worthy is that they make a point of not showing the ads to the influential people who draw attention to Twitter to start with. It's like they're posting a big sign that says STUPID AD ZONE: UNIMPORTANT PEOPLE ONLY. Twitter is building something unique, but they're selling generic impressions that advertisers can get anywhere. So as far as I can tell, the Twitter business model is something like:

Money out: build something unique and expensive.

Money in: sell the most generic and shitty thing in the world.

Facebook can make this work because they have insane numbers of eyeball-minutes. Chump change per minute on Facebook still adds up to real money. But Facebook is an outlier on raw eyeball-minutes, and there aren't enough minutes in the day for another. So Twitter is on track to get sold for $500,000, like Digg was. Which is good news for me because I know enough Twitter users that I can get that kind of money together.

So why should you help me buy Twitter when you could just get the $500,000 yourself? Because I have a secret plan, of course. Twitter is the site that everyone is talking about, right? So run the ads that people will talk about. Here's the plan.

Sell one ad per day. And everybody sees the same one.

Sort of like the back cover of the magazine that everybody in the world reads (but there is no such magazine, which is why this is an opportunity). No more need to excuse the verified users from the ads. Yes, an advertiser will have to provide a variety of sizes and localizations for each ad (and yes, Twitter will have to check that the translations match). But it's the same essential ad, shown to every Twitter user in the world for 24 hours.

No point trying to out-Facebook Facebook or out-Lumascape the Lumascape. Targeted ads are weak on signal, and a bunch of other companies are doing them more cost-effectively and at higher volume, anyway.

Of course, this is not for everybody. It's for brands that want to use a memorable, creative ad to try for the same kind of global signal boost that a good Tweet® can get. But if you want generic targeted ads you can get those everywhere else on the Internet. Where else can you get signal? In order to beat current Twitter revenue, the One Twitter Ad needs to go for about the same price as a Super Bowl commercial. But if Twitter stays influential, that's reasonable, and I make back the 500 grand and a lot more.

Planet Linux AustraliaOpenSTEM: Borrowing a Pencil

Student: Can I borrow a pencil?

Teacher: I don’t know. Can you?

Student: Yes. I might add that colloquial irregularities occur frequently in any language. Since you and the rest of our present company understood perfectly my intended meaning, being particular about the distinctions between “can” and “may” is purely pedantic and arguably pretentious.

Teacher: True, colloquialism and the judicious interpretation of context help us communicate with nuance, range, and efficiency. And yet, as your teacher, my job is to teach you to think about language with care and rigour. Understanding the shades of difference between one word and another, and thinking carefully about what you want to say, will give you greater power and versatility in your speech and writing.

Student: Point taken. May I have a pencil?

Teacher: No, you may not. We do not have pencils since the department cut funding for education again last year.

Planet Linux AustraliaDanielle Madeley: PostgreSQL date ranges in Django forms

Django’s postgres extensions support data types like DateRange, which is super useful when you want to query your database against dates; however, they have no form field to expose this in HTML.

Handily Django 1.11 has made it super easy to write custom widgets with complex HTML.

Start with a form field based off MultiValueField:

from django import forms
from psycopg2.extras import DateRange


class DateRangeField(forms.MultiValueField):
    """
    A date range
    """

    widget = DateRangeWidget  # DateRangeWidget is defined below

    def __init__(self, **kwargs):
        fields = (
            forms.DateField(required=True),
            forms.DateField(required=True),
        )
        super().__init__(fields, **kwargs)

    def compress(self, values):
        try:
            lower, upper = values
            return DateRange(lower=lower, upper=upper, bounds='[]')
        except ValueError:
            return None

The other side of a form field is a Widget:

from django import forms
from psycopg2.extras import DateRange


class DateRangeWidget(forms.MultiWidget):
    """Date range widget."""
    template_name = 'forms/widgets/daterange.html'

    def __init__(self, **kwargs):
        widgets = (
            forms.DateInput(),
            forms.DateInput(),
        )
        super().__init__(widgets, **kwargs)

    def decompress(self, value):
        if isinstance(value, DateRange):
            return (value.lower, value.upper)
        elif value is None:
            return (None, None)
        else:
            return value

    class Media:
        css = {
            'all': ('//cdnjs.cloudflare.com/ajax/libs/jquery-date-range-picker/0.14.4/daterangepicker.min.css',)  # noqa: E501
        }

        js = (
            '//cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js',
            '//cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.min.js',
            '//cdnjs.cloudflare.com/ajax/libs/jquery-date-range-picker/0.14.4/jquery.daterangepicker.min.js',  # noqa: E501
        )

Finally we can write a template to use the jquery-date-range-picker:

{% for widget in widget.subwidgets %}
<input type="hidden" name="{{ widget.name }}"{% if widget.value != None %} value="{{ widget.value }}"{% endif %}{% include "django/forms/widgets/attrs.html" %} />
{% endfor %}

<div id='container_for_{{ widget.attrs.id }}'></div>

With a script block:

(function() {
    var format = 'D/M/YYYY';
    var isoFormat = 'YYYY-MM-DD';
    var startInput = $('#{{ widget.subwidgets.0.attrs.id }}');
    var endInput = $('#{{ widget.subwidgets.1.attrs.id }}');

    $('#{{ widget.attrs.id }}').dateRangePicker({
        inline: true,
        container: '#container_for_{{ widget.attrs.id }}',
        alwaysOpen: true,
        format: format,
        separator: ' ',
        getValue: function() {
            if (!startInput.val() || !endInput.val()) {
                return '';
            }

            var start = moment(startInput.val(), isoFormat);
            var end = moment(endInput.val(), isoFormat);

            return start.format(format) + ' ' + end.format(format);
        },
        setValue: function(s, start, end) {
            start = moment(start, format);
            end = moment(end, format);

            startInput.val(start.format(isoFormat));
            endInput.val(end.format(isoFormat));
        }
    });
})();

You can now use this DateRangeField in a form, retrieve it from cleaned_data for database queries or store it in a model DateRangeField.
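
For completeness, here is a minimal sketch of how the finished field might be used; the Event model, its model-level dates range field, and the surrounding view code are hypothetical, not part of this post:

from django import forms

class EventSearchForm(forms.Form):
    # the custom form field defined above
    dates = DateRangeField(required=False)

# in a view, assuming Event has a model-level
# django.contrib.postgres.fields.DateRangeField named "dates"
form = EventSearchForm(request.GET)
if form.is_valid() and form.cleaned_data['dates']:
    # "overlap" is one of the postgres range lookups
    events = Event.objects.filter(dates__overlap=form.cleaned_data['dates'])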

Planet DebianBenjamin Mako Hill: Children’s Perspectives on Critical Data Literacies

Last week, we presented a new paper that describes how children are thinking through some of the implications of new forms of data collection and analysis. The presentation was given at the ACM CHI conference in Denver last week and the paper is open access and online.

Over the last couple years, we’ve worked on a large project to support children in doing — and not just learning about — data science. We built a system, Scratch Community Blocks, that allows the 18 million users of the Scratch online community to write their own computer programs — in Scratch of course — to analyze data about their own learning and social interactions. An example of one of those programs to find how many of one’s followers in Scratch are not from the United States is shown below.

Last year, we deployed Scratch Community Blocks to 2,500 active Scratch users who, over a period of several months, used the system to create more than 1,600 projects.

As children used the system, Samantha Hautea, a student in UW’s Communication Leadership program, led a group of us in an online ethnography. We visited the projects children were creating and sharing. We followed the forums where users discussed the blocks. We read comment threads left on projects. We combined Samantha’s detailed field notes with the text of comments and forum posts, with ethnographic interviews of several users, and with notes from two in-person workshops. We used a technique called grounded theory to analyze these data.

What we found surprised us. We expected children to reflect on being challenged by — and hopefully overcoming — the technical parts of doing data science. Although we certainly saw this happen, what emerged much more strongly from our analysis was detailed discussion among children about the social implications of data collection and analysis.

In our analysis, we grouped children’s comments into five major themes that represented what we called “critical data literacies.” These literacies reflect things that children felt were important implications of social media data collection and analysis.

First, children reflected on the way that programmatic access to data — even data that was technically public — introduced privacy concerns. One user described the ability to analyze data as, “creepy”, but at the same time, “very cool.” Children expressed concern that programmatic access to data could lead to “stalking” and suggested that the system should ask for permission.

Second, children recognized that data analysis requires skepticism and interpretation. For example, Scratch Community Blocks introduced a bug where the block that returned data about followers included users with disabled accounts. One user, in an interview described to us how he managed to figure out the inconsistency:

At one point the follower blocks, it said I have slightly more followers than I do. And, that was kind of confusing when I was trying to make the project. […] I pulled up a second [browser] tab and compared the [data from Scratch Community Blocks and the data in my profile].

Third, children discussed the hidden assumptions and decisions that drive the construction of metrics. For example, the number of views received for each project in Scratch is counted using an algorithm that tries to minimize the impact of gaming the system (similar to, for example, Youtube). As children started to build programs with data, they started to uncover and speculate about the decisions behind metrics. For example, they guessed that the view count might only include “unique” views and that view counts may include users who do not have accounts on the website.

Fourth, children building projects with Scratch Community Blocks realized that an algorithm driven by social data may cause certain users to be excluded. For example, a 13-year-old expressed concern that the system could be used to exclude users with few social connections saying:

I love these new Scratch Blocks! However I did notice that they could be used to exclude new Scratchers or Scratchers with not a lot of followers by using a code: like this:
when flag clicked
if then user’s followers < 300
stop all.
I do not think this a big problem as it would be easy to remove this code but I did just want to bring this to your attention in case this not what you would want the blocks to be used for.

Fifth, children were concerned about the possibility that measurement might distort the Scratch community’s values. While giving feedback on the new system, a user expressed concern that by making it easier to measure and compare followers, the system could elevate popularity over creativity, collaboration, and respect as a marker of success in Scratch.

I think this was a great idea! I am just a bit worried that people will make these projects and take it the wrong way, saying that followers are the most important thing in on Scratch.

Kids’ conversations around Scratch Community Blocks are good news for educators who are starting to think about how to engage young learners in thinking critically about the implications of data. Although no kid using Scratch Community Blocks discussed each of the five literacies described above, the themes reflect starting points for educators designing ways to engage kids in thinking critically about data.

Our work shows that if children are given opportunities to actively engage and build with social and behavioral data, they might not only learn how to do data analysis, but also reflect on its implications.

This blog-post and the work that it describes is a collaborative project by Samantha Hautea, Sayamindu Dasgupta, and Benjamin Mako Hill. We have also received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Hal Abelson from MIT CSAIL. Financial support came from the US National Science Foundation.

,

CryptogramHuman Rights Watch Needs an Information Security Director

I'm sure it pays less than the industry average, and the stakes are much higher than the average. But if you want to be a Director of Information Security that makes a difference, Human Rights Watch is hiring.

Krebs on SecurityFraudsters Exploited Lax Security at Equifax’s TALX Payroll Division

Identity thieves who specialize in tax refund fraud had big help this past tax year from Equifax, one of the nation’s largest consumer data brokers and credit bureaus. The trouble stems from TALX, an Equifax subsidiary that provides online payroll, HR and tax services. Equifax says crooks were able to reset the 4-digit PIN given to customer employees as a password and then steal W-2 tax data after successfully answering personal questions about those employees.

In a boilerplate text sent to several affected customers, Equifax said the unauthorized access to customers’ employee tax records happened between April 17, 2016 and March 29, 2017.

Beyond that, the extent of the fraud perpetrated with the help of hacked TALX accounts is unclear, and Equifax refused requests to say how many consumers or payroll service customers may have been impacted by the authentication weaknesses.

Equifax’s subsidiary TALX — now called Equifax Workforce Solutions — aided tax thieves by relying on outdated and insufficient consumer authentication methods.

Thanks to data breach notification laws in nearly all U.S. states now, we know that so far at least five organizations have received letters from Equifax about a series of incidents over the past year, including defense contractor giant Northrop Grumman; staffing firm Allegis Group; Saint-Gobain Corp.; Erickson Living; and the University of Louisville.

A snippet from TALX’s letter to the New Hampshire attorney general (PDF) offers some insight into the level of security offered by this wholly-owned subsidiary of Equifax. In it, lawyers for TALX downplay the scope of the breach even as they admit the company wasn’t able to tell exactly how much unauthorized access to tax records may have occurred.

“TALX believes that the unauthorized third-party(ies) gained access to the accounts primarily by successfully answering personal questions about the affected employees in order to reset the employees’ pins (the password to the online account portal),” wrote Nicholas A. Oldham, an attorney representing TALX. “Because the accesses generally appear legitimate (e.g., successful use of login credentials), TALX cannot confirm forensically exactly which accounts were, in fact, accessed without authorization, although TALX believes that only a small percentage of these potentially affected accounts were actually affected.”

ANALYSIS

Generally. Forensically. Exactly. Potentially. Actually. Lots of hand-waving from the TALX/Equifax suits. But Equifax should have known better than to rely on a simple PIN for a password, says Avivah Litan, a fraud analyst with Gartner Inc.

“That’s so 1990s,” Litan said. “It’s pretty unbelievable that a company like Equifax would only protect such sensitive data with just a PIN.”

Litan said TALX should have required customers to use stronger two-factor authentication options, such as one-time tokens sent to an email address or mobile device (as Equifax now says TALX is doing — at least with those we know were notified about possible employee account abuse).

The big consumer credit bureaus like Equifax, Experian, Innovis and Trans Union are all regulated by the Fair Credit Reporting Act (FCRA), which strives to promote accuracy, fairness and privacy for data used by consumer reporting agencies.  But Litan said there are no federal requirements that credit bureaus use stronger authentication for access to consumer data — such as two-factor authentication.

“There’s about 500 percent more protection for credit card data right now than there is for identity data,” Litan said. “And yet I don’t know of one document from the federal government that spells out how these credit bureaus and other companies have to protect PII (personally identifiable information).”

Then there is the small matter of the questions that ID thieves were able to successfully answer about their victims via TALX’s online portal. Security experts have been warning for years about the waning effectiveness of using so-called “knowledge-based authentication questions” (KBA) — such as details about the consumer’s historic location and financial activity — for online authentication.

The problem with relying on KBA questions to authenticate consumers online is that so much of the information needed to successfully guess the answers to those multiple-choice questions is now indexed or exposed by search engines, social networks and third-party services online — both criminal and commercial.

What’s more, many of the companies that provide and resell these types of KBA challenge/response questions have been hacked in the past by criminals that run their own identity theft services.

“Whenever I’m faced with KBA-type questions I find that database tools like Spokeo, Zillow, etc are my friend because they are more likely to know the answers for me than I am,” said Nicholas Weaver, a senior researcher in networking and security for the International Computer Science Institute (ICSI).

In short: The crooks broadly have access to the data needed to reliably answer KBA questions on most consumers.

Litan said the key is reducing reliance on static data – much of which is PII data that has been compromised by the crooks – and increasing reliance on dynamic data, like reputation, behavior and relationships between non-PII data elements.

Identity thieves prize the W-2 and payroll data held by companies like TALX because they can use it to file fraudulent tax refund requests with the IRS and the states on behalf of victim consumers. According to the Internal Revenue Service, some 787,000 Americans reported being victimized by tax refund fraud last year.

Extra security and screening precautions by the states and the IRS brought last year’s victim numbers down 50 percent from 2015. But even the IRS has struggled with its own tax fraud-related security foibles tied to weak consumer authentication. In 2015, it issued more than $490 million in fraudulent refunds requested on behalf of hundreds of thousands of Americans who were victimized by data stolen directly from the “Get Transcript” feature of the IRS’s own Web site.

It’s worth noting that – as with the TALX incidents — the IRS’s Get Transcript fiasco also failed because it relied primarily on KBA questions asked by Equifax.

Tax-related identity theft occurs when someone uses a Social Security number (SSN) — either a client’s, a spouse’s, or dependent’s — to file a tax return claiming a fraudulent refund. Thieves may also use a stolen Employer Identification Number (EIN) from a business client to create false Forms W-2 to support refund fraud schemes. Increasingly, fraudsters are simply phishing W-2 data in large quantities from human resource professionals at a variety of organizations.

Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

“If the federal government is smart, they will consider suing Equifax for false returns filed using W2 information stolen from TALX customers, since this is exactly the sort of mass scale attack that even the most basic SMS-based 2-factor would block,” the ICSI’s Weaver said.

It’s high time for consumers to come face-to-face with the reality that the basic data needed to open new lines of credit on them or file taxes in their name is broadly available for sale in the cybercrime underground. What little consumer data cannot be found in the bowels of the Dark Web can be coaxed out of countless poorly-secured and automated services like TALX that hold extremely sensitive consumer data and yet safeguard it with antiquated and insufficient authentication measures.

In light of the above, the sobering reality is that we have no business using these static identifiers (SSN, DOB, address, previous address, income, mother’s maiden name) for authentication, and yet this practice remains rampant across vast sectors of the American economy today, including consumer banking, higher education and government services.

Predictably, Equifax is offering identity theft detection services (for two years) to employees of TALX customers. Loyal readers here know where I come down on these credit monitoring services, because nobody should confuse these services with a reliable method to block identity theft. The most consumers can hope for out of a credit monitoring service is that it alerts you when ID thieves hijack your data; these services generally don’t prevent ID theft. Also, they can be useful for helping to clean up after a confirmed ID theft incident.

The consumer’s best weapon against new account fraud and other forms of identity theft is the security freeze, also known as a credit freeze. I explain more about the benefits of the freeze as well as other options in multiple posts on this blog. I should note, however, that a security freeze will do nothing to stop fraudsters from filing phony tax refunds in your name with the IRS. For tips on avoiding tax refund fraud, check out this post.

Planet Linux AustraliaDavid Rowe: FreeDV 700D – First Over The Air Tests

OK so after several attempts I finally managed to push a 700D signal from my QTH in Adelaide (PF95gc) 1170km to the Manly Warringah Radio Society WebSDR in Sydney (QF56oh). Bumped my power up a little, raised my antenna, and hunted around until I found a relatively birdie-free frequency, as even low level birdies are stronger than my very weak signal.

Have a listen:

Analog SSB / 700D modem / Decoded 700D DV

Here is a spectrogram (i.e. a waterfall with the water falling from left to right) of the analog then digital signal:

Faint birdies (tones) can be seen as horizontal lines at 1000 and 2000 Hz. You can see the slow fading on the digital signal as it dips beneath the noise every few seconds.

The scatter diagram looks like bugs (bits?) splattered on a windscreen:

The slow fading causes the errors to bounce up and down over time (above). The packet error rate (measured on the 28 bit Codec 2 frames) is 26%. This is rather high, but I would argue we have intelligible speech here, and that the intelligibility is better than SSB.

Yayyyyyyy.

I used 4 interleaver frames, which is about 640ms. Perhaps a longer interleaver would ride over the fades.

I’m impressed! Conditions were pretty bad on 40m, the band was “closed”. This is day 1 of FreeDV 700D. It will improve from here.

Command Lines

The Octave demodulator doing its thing:

octave:56> ofdm_rx("~/Desktop/700d_part2/manly5_4.wav",4, "manly5_4.err")
Coded BER: 0.0206 Tbits: 12544 Terrs:   259 PER: 0.2612 Tpacketerrs:   117 Tpackets:   448
Raw BER..: 0.0381 Tbits: 26432 Terrs:  1007

Not sure if I’m working out raw and coded BER right as they are not usually this close. Will look into that. Maybe all the errors are in the fades, where both the demod and LDPC decoder fall in a heap.

The ofdm_tx/ofdm_rx system transmits test frames of known data, so we can work out the BER. By xor-ing the tx and rx bits we can generate an error pattern that can be used to insert errors into a Codec 2 700C bit stream, using this magic incantation:

~/codec2-dev/build_linux/src$ sox ~/Desktop/cq_freedv_8k.wav ~/Desktop/cq_freedv_8k.wav -t raw -r 8000 -s -2 - | ./c2enc 700C - - | ./insert_errors - - ../../octave/manly5_4.err 28 | ./c2dec 700C - - | sox -t raw -r 8000 -s -2 - ~/Desktop/manly5_4_ldpc224_4.wav

It’s just like the real thing. Trust me. And it gives me a feel for how the system is hanging together earlier rather than after months more development.
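
The error pattern itself is conceptually trivial; in Octave-style pseudo code (the variable names here are assumptions) it is just:

error_pattern = xor(tx_bits, rx_bits);  % 1 wherever a received bit differs from tx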

Links

Lots of links on the Towards FreeDV 700D post earlier today.

Worse Than FailureThe Smell-O-Vision

Ron used to work for a company which built “smell-o-visions”. These were customized systems running small form factor Windows PCs that operated smell pumps and fans using USB relays timed to a video to give a so-called “4D Experience.” Their product was gimmicky, and thus absolutely loved by marketing groups.

One such marketing group, whose client was a branch of the military, worked with them to create a gimmick to help with recruiting. A smell-o-vision was installed on a trailer and towed around the country, used to convince teenagers to join the service by making them smell fresh-squeezed orange juice while watching a seizure-inducing video with guns. The trailer was staffed by grunts, and these guys cycled through so frequently that they received little or no training on the system.

A vintage ad for a smell-o-vision film called 'Scent of Mystery'

“Hey Ron,” Sam, his boss, told him one day. “The recruiter is having a ton of trouble with their system and I need you to go on-site and take a look.”

Going on-site was the only option. The recruiter’s system was not on the Internet so remote diagnosis was impossible. And so he soon found himself cooking in the hot desert sun on a base in the Southwest, looking at a small LCD screen on a 1U pull-out tray accessible through an access panel in the trailer.

“This is what it always does,” said the grunt assigned to work with him. “It always says it’s corrupted and the 4D stuff doesn’t work. Rebooting doesn’t fix it.”

Ron looked at the screen as the outdated Windows XP booted up and watched it flash an error stating “Windows was not properly shut down” before beginning a long filesystem scan and repair and then using System Restore to return to an earlier checkpoint. A checkpoint created before the presentation software was installed…

Ron shook his head and started the Help Desk 101 questions. “Do you shut down the system when you’re packing up?”

“Oh, we just turn the generator off,” the grunt answered. “That shuts everything down.”

Ron mentally facepalmed. “You need to properly shut down the system before killing power or this will happen again.” He went inside the trailer and showed them a power button on the control panel. The button was just wired to the power switch header on the PC’s motherboard and acted exactly like any computer case’s power button. Pressing it briefly tells the operating system to shut itself down. “Just press this button and it will shut itself down in a minute or two.”

“Okay, that’s simple enough,” the grunt said.

Then Ron spent the day re-installing and configuring their system by downloading all the software over his phone’s mobile hotspot to his laptop, then transferring it via USB stick. Eventually he had it back up and running. He made sure to stress how important it was to shut down the system, and on a whim left them a DVD with all the software so he could talk someone through the process over the phone if it happened again.

He returned home. And two weeks later, it happened again. He spent three days on the phone talking to a different grunt and walking him through restoring the system. Again he emphasized the importance of shutting down the system with the power button before killing power, but this grunt explained he did that. Ron was confused, but finished the support call.

Just a few days later, they called again. He talked to yet another grunt, asked them how shutdown was handled, and he explained he pressed the button until it shut down. His boss decided to send Ron on-site again, and he repaired the system. It was much faster this time since he had a DVD with everything he needed on it, only taking most of a morning. When he was done, he tested the power button. It shut down the system properly.

“Show me how you get ready for a presentation,” he asked. “From setup once you’re on-site, to shutdown.”

The pair of grunts assigned to him–two more he’d never worked with before–spent an hour walking through the process. When they got to the end, Ron reminded him to shut down the system.

And the grunt did so. By pressing and holding the power button for about five seconds. Which killed power to the PC without properly shutting down the operating system.

He mentally facepalmed again. “No, you can’t press and hold. Just tap the button. It will eventually shut down.” He was getting fast at restoring the system and spent that afternoon correcting it all. This time he took a System Restore snapshot once everything was in place, and taped a hand-written note on the control panel saying “DO NOT HOLD THIS BUTTON TO TURN OFF THE MACHINE. DO NOT SHUT OFF THE GENERATOR WITHOUT FIRST TURNING THE SYSTEM OFF PROPERLY.”

He returned home. And a couple weeks later, they called again. He was unable to fix it over the phone and was sent on-site yet again. This time the filesystem was too corrupted to repair and he had to set everything up from scratch. Upon querying the new grunts he discovered they ignored the note and were still holding the power button to kill the system.

This continued for a couple more months. The client refused to shut down the system properly and was constantly causing severe filesystem corruption or even killing hard disks. Each time the client called, getting angrier and angrier. “It says there is no OS! I thought you fixed this bug!!” Each time the problems were caused by operator error.

During a meeting with his boss Sam, Ron explained all the issues and how much of his time it was taking, and while brainstorming a solution Ron pitched a new idea. “What if we modify the custom power button? Alter it to only send a pulse, instead of a constant closed signal, so it’s impossible to hold it down?”

“Figure it out and do it,” Sam ordered. “These guys are idiots and won’t do things right. They’re killing us on support time, we have to fix this or fire the client. How hard is it to shut this thing down properly?”

So, being a software engineer, Ron devised a clever solution based on an Arduino that sampled the power button’s state and sent pulses to the motherboard’s power button header. It passed all his testing and he went on-site to install it. And finally the support calls ceased completely. This unit was the only one that received the update since none of their other clients had issues with operator error.
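
The article doesn't include Ron's firmware, but the idea is simple enough to sketch. A minimal Arduino version might look like this; the pin numbers, the pulse width, and driving the header directly (a real build would want a transistor or optocoupler in between) are all assumptions, not the original design:

// Watch the front-panel button and, no matter how long it is held,
// send only one short fixed-length pulse to the power-button header.
const int BUTTON_PIN = 2;  // front-panel button, wired to ground when pressed
const int HEADER_PIN = 3;  // drives the motherboard power-switch header

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(HEADER_PIN, OUTPUT);
  digitalWrite(HEADER_PIN, LOW);
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {      // button pressed
    digitalWrite(HEADER_PIN, HIGH);          // "press" the real switch
    delay(200);                              // fixed-length pulse only
    digitalWrite(HEADER_PIN, LOW);
    while (digitalRead(BUTTON_PIN) == LOW) { // ignore any further holding
      delay(10);
    }
  }
}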

With the issue solved, Ron decided to take some badly-needed vacation to attend a family reunion in North Carolina and tried to stay off the grid. Meanwhile, Sam took a demo unit to an expo in London so visitors could smell what it was like to go skiing.

During his vacation, Ron received a phone call from Sam in the middle of the night. Being on vacation and trying to stay offline, he ignored it. But the phone rang in quick succession several more times and he finally answered.

“Ron! Sorry to wake you, but our demo unit quit working and I need your help!”

Ron grunted. “It’s one AM here…”

“I know but I need to get this back up before the conference starts this morning!”

“Okay,” he said with a sigh. “What’s it doing?”

Sam proceeded to describe a series of error screens over the phone. It stated that “Windows was not shut down properly,” followed by a long filesystem scan, followed by an error that critical system files were corrupted and Windows could not boot.

Ron facepalmed for real this time.


Planet DebianAlessio Treglia: Digital Ipseity: Which Identity?


Within the next three years, more than seven billion people and businesses will be connected to the Internet. During this time of dramatic increases in access to the Internet, networks have seen an interesting proliferation of systems for digital identity management (i.e. our SPID in Italy). But what is really meant by “digital identity”? All these systems are implemented in order to have the utmost certainty that the data entered by the subscriber (address, name, birth, telephone, email, etc.) is directly coincident with that of the physical person. In other words, data are certified to be “identical” to those of the user; there is a perfect overlap between the digital page and the authentic user certificate: an “idem”, that is, an identity.

This identity is our personal records reflected on the net, nothing more than that. Obviously, this data needs to be appropriately protected from malicious attacks by means of strict privacy rules, as it contains so-called “sensitive” information, but this data itself is not sufficiently interesting for the commercial market, except for statistical purposes on homogeneous population groups. What may be a real goldmine for the “web company” is another type of information: the user’s ipseity. It is important to immediately remove the strong semantic ambiguity that weighs on the notion of identity. There are two distinct meanings…

<Read More…[by Fabio Marzocca]>

Planet DebianMichael Prokop: Debugging a mystery: ssh causing strange exit codes?

XKCD comic 1722

Recently we had a WTF moment at a customer of mine which is worth sharing.

In an automated deployment procedure we’re installing Debian systems and setting up MySQL HA/Scalability. Installation of the first node works fine, but during installation of the second node something weird is going on. Even though the deployment procedure reported that everything went fine, it wasn’t fine at all. After bisecting to the relevant command lines where it’s going wrong we identified that the failure is happening between two ssh/scp commands, which are invoked inside a chroot through a shell wrapper. The ssh command caused a wrong exit code to show up: instead of bailing out with an error (we’re running under ‘set -e’) it returned with exit code 0 and the deployment procedure continued, even though there was a fatal error. Initially we triggered the bug when two ssh/scp command lines close to each other were executed, but I managed to find a minimal example for demonstration purposes:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

What we’d expect is the following behavior, receive exit code 1 from the last command line in the chroot wrapper:

# ./ssh_wrapper 
return code = 1

But what we actually get is exit code 0:

# ./ssh_wrapper 
return code = 0

Uhm?! So what’s going wrong and what’s the fix? Let’s find out what’s causing the problem:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost command_does_not_exist >/dev/null 2>&1
exit "$?"
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 127

Ok, so if we invoke it with a binary that does not exist we properly get exit code 127, as expected.
What about switching /bin/bash to /bin/sh (which corresponds to dash here) to make sure it’s not a bash bug:

# cat ssh_wrapper 
chroot << "EOF" / /bin/sh
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Oh, but that works as expected!?

When looking at this behavior I had the feeling that something is going wrong with file descriptors. So what about wrapping the ssh command line within different tools? No luck with `stdbuf -i0 -o0 -e0 ssh root@localhost hostname`, nor with `script -c "ssh root@localhost hostname" /dev/null` and also not with `socat EXEC:"ssh root@localhost hostname" STDIO`. But it works under unbuffer(1) from the expect package:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
unbuffer ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

So my bet on something with the file descriptor handling was right. Going through the ssh manpage, what about using ssh’s `-n` option to prevent reading from standard input (stdin)?

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh -n root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Bingo! Quoting ssh(1):

     -n      Redirects stdin from /dev/null (actually, prevents reading from stdin).
             This must be used when ssh is run in the background.  A common trick is
             to use this to run X11 programs on a remote machine.  For example,
             ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi,
             and the X11 connection will be automatically forwarded over an encrypted
             channel.  The ssh program will be put in the background.  (This does not work
             if ssh needs to ask for a password or passphrase; see also the -f option.)

Let’s execute the scripts through `strace -ff -s500 ./ssh_wrapper` to see what’s going in more detail.
In the strace run without ssh’s `-n` option we see that it’s cloning stdin (file descriptor 0), getting assigned to file descriptor 4:

dup(0)            = 4
[...]
read(4, "exit 1\n", 16384) = 7

while in the strace run with ssh’s `-n` option being present there’s no file descriptor duplication but only:

open("/dev/null", O_RDONLY) = 4

This matches ssh.c’s ssh_session2_open function (where stdin_null_flag corresponds to ssh’s `-n` option):

        if (stdin_null_flag) {                                            
                in = open(_PATH_DEVNULL, O_RDONLY);
        } else {
                in = dup(STDIN_FILENO);
        }

This behavior can also be simulated if we explicitly read from /dev/null, and this indeed works as well:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null </dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

The underlying problem is that both bash and ssh are consuming from stdin. This can be verified via:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
echo "Inner: pre"
while read line; do echo "Eat: $line" ; done
echo "Inner: post"
exit 3
EOF
echo "Outer: exit code = $?"

# ./ssh_wrapper
Inner: pre
Eat: echo "Inner: post"
Eat: exit 3
Outer: exit code = 0

This behavior applies to bash, ksh, mksh, posh and zsh. Only dash doesn’t show this behavior.
To understand the difference between bash and dash executions we can use the following test scripts:

# cat stdin-test-cmp
#!/bin/sh

TEST_SH=bash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-bash.out
TEST_SH=dash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-dash.out

# cat stdin-test
#!/bin/sh

: ${TEST_SH:=dash}

$TEST_SH <<"EOF"
echo "Inner: pre"
while read line; do echo "Eat: $line"; done
echo "Inner: post"
exit 3
EOF

echo "Outer: exit code = $?"

When executing `./stdin-test-cmp` and comparing the generated files stdin-test-bash.out and stdin-test-dash.out you’ll notice that dash consumes all stdin in one single go (a single `read(0, …)`), instead of character-by-character as specified by POSIX and implemented by bash, ksh, mksh, posh and zsh. See stdin-test-bash.out on the left side and stdin-test-dash.out on the right side in this screenshot:

screenshot of vimdiff on *.out files

So when ssh tries to read from stdin there’s nothing there anymore.

Quoting POSIX’s sh section:

When the shell is using standard input and it invokes a command that also uses standard input, the shell shall ensure that the standard input file pointer points directly after the command it has read when the command begins execution. It shall not read ahead in such a manner that any characters intended to be read by the invoked command are consumed by the shell (whether interpreted by the shell or not) or that characters that are not read by the invoked command are not seen by the shell. When the command expecting to read standard input is started asynchronously by an interactive shell, it is unspecified whether characters are read by the command or interpreted by the shell.

If the standard input to sh is a FIFO or terminal device and is set to non-blocking reads, then sh shall enable blocking reads on standard input. This shall remain in effect when the command completes.

So while we learned that both bash and ssh are consuming from stdin and this needs to prevented by either using ssh’s `-n` or explicitly specifying stdin, we also noticed that dash’s behavior is different from all the other main shells and could be considered a bug (which we reported as #862907).

Lessons learned:

  • Be aware of ssh’s `-n` option when using ssh/scp inside scripts.
  • Feeding shell scripts via stdin is not only error-prone but also very inefficient, as a standards-compliant implementation requires a read(2) system call per byte of input. Instead, create a temporary script and execute that (see the sketch after this list).
  • When debugging problems make sure to explore different approaches and tools to ensure you’re not relying on a buggy behavior in any involved tool.
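
A sketch of that second point, reworking the wrapper from the beginning so that nothing is fed to the shell via stdin (and adding ssh's `-n` for good measure):

# cat ssh_wrapper
cat > /tmp/deploy.sh << "EOF"
ssh -n root@localhost hostname >/dev/null
exit 1
EOF
chroot / /bin/bash /tmp/deploy.sh
echo "return code = $?"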

Thanks to Guillem Jover for review and feedback regarding this blog post.

Planet DebianTianon Gravi: My Docker Install Process (redux)

Since I wrote my first post on this topic, Docker has switched from apt.dockerproject.org to download.docker.com, so this post revisits my original steps, but tailored for the new repo.

There will be less commentary this time (straight to the beef). For further commentary on “why” for any step, see my previous post.

These steps should be fairly similar to what’s found in upstream’s “Install Docker on Debian” document, but do differ slightly in a few minor ways.

grab Docker’s APT repo GPG key

# "Docker Release (CE deb)"

export GNUPGHOME="$(mktemp -d)"
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 9DC858229FC7DD38854AE2D88D81803C0EBFCD88

# stretch+
gpg --export --armor 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 | sudo tee /etc/apt/trusted.gpg.d/docker.gpg.asc

# jessie
# gpg --export 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 | sudo tee /etc/apt/trusted.gpg.d/docker.gpg > /dev/null

rm -rf "$GNUPGHOME"

Verify:

$ apt-key list
...

/etc/apt/trusted.gpg.d/docker.gpg.asc
-------------------------------------
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
sub   rsa4096 2017-02-22 [S]

...

add Docker’s APT source

With the switch to download.docker.com, HTTPS is now mandated:

$ apt-get update && apt-get install apt-transport-https

Setup sources.list:

echo "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable" | sudo tee /etc/apt/sources.list.d/docker.list

Add the edge component for monthly releases and test for release candidates (i.e., ... stretch stable edge). Replace stretch with jessie for Jessie installs.

At this point, you should be safe to run apt-get update to verify the changes:

$ sudo apt-get update
...
Get:5 https://download.docker.com/linux/debian stretch/stable amd64 Packages [1227 B]
...
Reading package lists... Done

(There shouldn’t be any warnings or errors about missing keys, etc.)

configure Docker

This step could be done after Docker’s installed (and indeed, that’s usually when I do it because I forget that I should until I’ve got Docker installed and realize that my configuration is suboptimal), but doing it before ensures that Docker doesn’t have to be restarted later.

sudo mkdir -p /etc/docker
sudo sensible-editor /etc/docker/daemon.json

(sensible-editor can be replaced by whatever editor you prefer, but that command should choose or prompt for a reasonable default)

I then fill daemon.json with at least a default storage-driver. Whether I use aufs or overlay2 depends on my kernel version and available modules – if I’m on Ubuntu, AUFS is still a no-brainer (since it’s included in the default kernel if the linux-image-extra-XXX/linux-image-extra-virtual package is installed), but on Debian AUFS is only available in either 3.x kernels (jessie’s default non-backports kernel) or recently in the aufs-dkms package (as of this writing, still only available on stretch and sid – no jessie-backports option).

If my kernel is 4.x+, I’m likely going to choose overlay2 (or if that errors out, the older overlay driver).

Choosing an appropriate storage driver is a fairly complex topic, and I’d recommend that for serious production deployments, more research on pros and cons is performed than I’m including here (especially since AUFS and OverlayFS are not the only options – they’re just the two I personally use most often).

{
	"storage-driver": "overlay2"
}

configure boot parameters

I usually set a few boot parameters as well (in /etc/default/grub’s GRUB_CMDLINE_LINUX_DEFAULT option – run sudo update-grub after adding these, space-separated).

  • cgroup_enable=memory – enable “memory accounting” for containers (allows docker run --memory for setting hard memory limits on containers)
  • swapaccount=1 – enable “swap accounting” for containers (allows docker run --memory-swap for setting hard swap memory limits on containers)
  • systemd.legacy_systemd_cgroup_controller=yes – newer versions of systemd may disable the legacy cgroup interfaces Docker currently uses; this instructs systemd to keep those enabled (for more details, see systemd/systemd#4628, opencontainers/runc#1175, docker/docker#28109)
  • vsyscall=emulate – allow older binaries to run (debian:wheezy, etc.; see docker/docker#28705)

All together:

...
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1 systemd.legacy_systemd_cgroup_controller=yes vsyscall=emulate"
...

install Docker!

Finally, the time has come.

$ sudo apt-get install -V docker-ce
...
   docker-ce (17.03.1~ce-0~debian-stretch)
...

$ sudo docker version
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:07:28 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:07:28 2017
 OS/Arch:      linux/amd64
 Experimental: false

$ sudo usermod -aG docker "$(id -un)"
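
After logging out and back in so that group change takes effect, a quick smoke test (this assumes network access to pull the hello-world image):

$ docker run --rm hello-world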

Planet Linux AustraliaMichael Still: The Collapsing Empire




ISBN: 076538888X
LibraryThing
This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don't know that and are busy having petty trade wars instead. It isn't a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire...

Tags for this post: book john_scalzi
Related posts: The Last Colony ; The End of All Things; Zoe's Tale; Agent to the Stars; Redshirts; Fuzzy Nation
Comment Recommend a book

Planet Linux AustraliaDavid Rowe: Towards FreeDV 700D

For the last two months I have been beavering away at FreeDV 700D, as part of my eternal quest to show SSB whose house it is.

This work was inspired by Bill, VK5DSP, who kindly developed some short LDPC codes for me; and suggested I could improve on the synchronisation overhead of the cohpsk modem. As an aside – Bill is part of the communications payload team for the QB50 SUSat Cubesat – currently parked at the ISS awaiting launch! Very Kerbal.

Anyhoo – I’ve developed a new OFDM modem that has less synchronisation overhead, works better, and occupies less RF bandwidth (1000 Hz) than the cohpsk modem used for 700C. I have wrapped my head around such arcane mysteries as coding gain and now have LDPC codes playing nicely over that nasty old HF channel.

It looks like FreeDV 700D has a gain of 4dB over 700C. This means error free operation at -2dB SNR for AWGN, and 2dB SNR over a challenging fast fading HF channel (two paths, 1Hz Doppler, 1ms delay).

Major Innovations:

  1. An OFDM modem with low overhead (small Eb/No penalty) synchronisation, even on fading channels.
  2. Use of LDPC codes.
  3. Long (several seconds) interleaver.
  4. Ruthlessly hunting down any dB’s leaking out of my performance curves.

One nasty surprise was that after a closer look at the short (224,112) LDPC codes, I discovered they don’t give any real improvement over the simple diversity scheme used for FreeDV 700C. However with long interleaving (several seconds) of the short codes, or a long (few thousand bit/several seconds) LDPC code we get an additional 3dB gain. The interleaver allows us to ride over the ups and downs of the fast fading channel.

Interleaving has a few downsides. One is delay, the other is when they fail you lose a big chunk of data.

I’ve avoided delay until now, using the argument that low delay is essential for PTT radio. However I’d like to test long delays and see what the trade off/end user experience is. Once someone is speaking – i.e in the middle of an “over” – I suspect we won’t notice the delay. However it could get confusing in fast handovers. This is experimental radio, designed for very low SNRs, so lets give it a try.

We could send the uncoded data without interleaving – allowing low delay decoding when the SNR is high. A switch could control LDPC decoding, allowing a user selection of coded-high-delay or uncoded-low-delay, like a noise blanker. Mark, VK5QI, has suggested interleaver depth also be adjustable, which I think is a good idea. The decoder could automagically determine interleaver depth by attempting decoding over a range of depths (1,2,4,8,16 frames etc) and noting when the LDPC code converges, as sketched below.
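
In Octave-style pseudo code that auto-detection might look something like this; deinterleave() and ldpc_decode() are placeholders here, not functions in codec2-dev:

% try candidate interleaver depths until the LDPC decoder converges
depths = [1 2 4 8 16];
for d = depths
  [~, iterations] = ldpc_decode(deinterleave(rx_symbols, d));
  if iterations < max_iterations   % converged before the iteration limit
    depth = d;
    break;
  end
end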

Or maybe we could use a small, low delay, interleaver, and just live with the fades (like we do on SSB) and get the vocoder to mute or interpolate over them, and enjoy low or modest latency.

I’m also interested to see how the LDPC code mops up errors like static bursts and other real-world HF rubbish that SSB subjects us to even on high SNR channels.

So, lots of room for experimentation. At this stage it’s all in GNU Octave simulation form, no C implementation or FreeDV GUI mode exists yet.

Lots more I could write about the engineering behind the modem, but lets leave it there for now and take a look at some results.

Results

Here is a rather busy set of BER versus SNR curves (click for larger version, and here is an EPS file version):

The 10^-2 line is where the codec gets easy to listen to.

Observe far-right green (700C) to black (700D candidate with lots of interleaving) HF curves, which are about 4dB apart. Also the far-left cyan shows 700D working at -3dB SNR on AWGN channels. One dB later (-2dB) LDPC magic stomps all errors.

Here are some speech/modem tone samples on simulated channels:

AWGN -2dB SNR: Analog SSB / 700D modem / 700D DV
HF +0.8dB SNR: Analog SSB / 700D modem / 700D DV

The analog samples have a 300 to 2600 Hz BPF applied at the tx and rx side, to model an analog SSB radio. The analog SSB and 700D modem signals have exactly the same RMS power and channel models applied to them. In the AWGN channel, it’s difficult to hear the 700D modem signal; however, the SSB is audible as it has peaks 9dB above the average.

OK so the 700 bit/s vocoder (Codec 2 700C) speech quality is not great even with no errors, but we have found it supports conversations just fine, and there is plenty of room for improvement. The same techniques (OFDM modem, LDPC interleaving) can also be applied to high quality/high bit rate/high SNR voice modes. But first – I want to push this low SNR DV work through to completion.

Simulation Code

This list summarises the GNU Octave code I’ve developed, as I’ll probably forget the details when I move onto the next project. Feel free to try any of these scripts and let me know what I’ve forgotten to check in. It’s all checked into codec2-dev/octave.

ldpc.m: Wrapper functions for using the CML library LDPC functions with Octave
ldpcut.m: Unit test/demo for ldpc.m
ldpc_qpsk.m: Runs simulations for a bunch of codes for AWGN and HF channels using a simulated QPSK OFDM modem. Runs at Rs (the symbol rate), assumes an ideal modem
ldpc_short.m: Simulation used for the initial short LDPC code investigation using an ideal rate Rs BPSK modem. A bunch of codes and interleaving schemes tested
ofdm_lib.m: Library of OFDM modem functions
ofdm_rs.m: Rate Rs OFDM modem simulation used to develop the low overhead pilot symbol phase estimation scheme
ofmd_dev.m: Rate Fs OFDM modem simulation. This is the real deal, with timing and frequency offset estimation, LDPC integration, and tests for coarse timing and frequency offset estimation
ofdm_tx.m: Generates test frames of OFDM raw file samples to play over your HF radio
ofdm_rx.m: Receives raw file samples from your HF radio, demodulates and decodes 700D, and measures BER and PER

Sing Along

Just this morning I tried to radiate some FreeDV 700D from my home to some interstate SDRs on 40M, but alas conditions were against me. I did manage to radiate across my bench so I know the waveform does make it through real HF radios OK.

Please try sending these files through your radio:

ssb_otx_224_32.wav: 32 frame (5.12 second) interleaver
ssb_otx_224_4.wav: 4 frame (0.64 second) interleaver

Get someone (or a websdr) to sample the received signal (8000Hz sample rate, 16 bit mono), and email me the received file.
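
If your recording is not already in that format, something like this Python sketch will convert it (scipy is assumed, the file names are placeholders, and 16-bit integer input scaling is assumed):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import resample_poly

    fs, x = wavfile.read('received.wav')             # placeholder input file
    if x.ndim > 1:
        x = x.mean(axis=1)                           # mix down to mono
    x = resample_poly(x, 8000, fs)                   # resample to 8000 Hz
    x = np.clip(x, -32768, 32767).astype(np.int16)   # 16 bit PCM
    wavfile.write('otx_224_32_mysample.wav', 8000, x)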

Or you can decode it yourself using:

octave:10> ofdm_rx('~/Desktop/otx_224_32_mysample.wav',32);

or:

octave:10> ofdm_rx('~/Desktop/otx_224_4_mysample.wav',4);

The rx side is still a bit rough; I’ll refine it as I try the system with real off-air signals and flush out the bugs.

Update: FreeDV 700D – First Over The Air Tests.

Links

QB50 SUSat cubesat – Bill and team’s Cubesat currently parked at the ISS!
Codec 2 700C and Short LDPC Codes
Testing FreeDV 700C
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2
FreeDV 700D – First Over The Air Tests

,

CryptogramThe US Senate Is Using Signal

The US Senate just approved Signal for staff use. Signal is a secure messaging app with no backdoor, and no large corporate owner who can be pressured to install a backdoor.

Susan Landau comments.

Maybe I'm being optimistic, but I think we just won the Crypto War. A very important part of the US government is prioritizing security over surveillance.

Planet DebianDaniel Pocock: Hacking the food chain in Switzerland

A group has recently been formed on Meetup seeking to build a food computer in Zurich. The initial meeting is planned for 6:30pm on 20 June 2017 at ETH, (Zurich Centre/Zentrum, Rämistrasse 101).

The question of food security underlies many of the world's problems today. In wealthier nations, we are being called upon to trust a highly opaque supply chain and our choices are limited to those things that major supermarket chains are willing to stock. A huge transport and storage apparatus adds to the cost and CO2 emissions and detracts from the nutritional value of the produce that reaches our plates. In recent times, these problems have been highlighted by the horsemeat scandal, the Guacapocalypse and the British Hummus crisis.

One interesting initiative to create transparency and encourage diversity in our diets is the Open Agriculture (OpenAg) Initiative from MIT, summarised in this TED video from Caleb Harper. The food produced is healthier and fresher than anything you might find in a supermarket and has no exposure to pesticides.

An open source approach to food

An interesting aspect of this project is the promise of an open source approach. The project provides hardware plans, a video of the build process, source code and the promise of sharing climate recipes (scripts) to replicate the climates of different regions, helping ensure it is always the season for your favourite fruit or vegetable.

Do we need it?

Some people have commented on the cost of equipment and electricity. Carsten Agger recently blogged about permaculture as a cleaner alternative. While there are many places where people can take that approach, there are also many overpopulated regions and cities where it is not feasible. Some countries, like Japan, have an enormous population and previously productive farmland contaminated by industry, such as the Fukushima region. Growing our own food also has the potential to reduce food waste, as individual families and communities can grow what they need.

Whether it is essential or not, the food computer project also provides a powerful platform to educate people about food and climate issues and an exciting opportunity to take the free and open source philosophy into many more places in our local communities. The Zurich Meetup group has already received expressions of interest from a diverse group including professionals, researchers, students, hackers, sustainability activists and free software developers.

Next steps

People who want to form a group in their own region can look in the forum topic "Where are you building your Food Computer?" to find out if anybody has already expressed interest.

Which patterns from the free software world can help more people build more food computers? I've already suggested using Debian's live-wrapper to distribute a runnable ISO image that can boot from a USB stick, can you suggest other solutions like this?

Can you think of any free software events where you would like to see a talk or exhibit about this project? Please suggest them on the OpenAg forum.

There are many interesting resources about the food crisis, an interesting starting point is watching the documentary Food, Inc.

If you are in Switzerland, please consider attending the meeting at 6:30pm on 20 June 2017 at ETH (Centre/Zentrum), Zurich.

One final thing to contemplate: if you are not hacking your own food supply, who is?

Cory DoctorowVancouver, I’ll see you at tonight’s Walkaway tour stop (then Burbank, Oxford, London…) (!)

Many thanks to the good folks who came out to Bellingham’s Village Books for last night’s Walkaway event; tonight, I’ll be appearing in Vancouver before flying home to Burbank for an event at my local Dark Delicacies on Saturday and then going straight to the airport for the start of my UK tour.


I’ll be starting that tour in Oxford with Tim Harford; then in London on two consecutive nights, the first with Laurie Penny and the second with Olivia Sudjic; then I’ll be in Liverpool with Chris Pak; then Birmingham, and finally at Hay-on-Wye with Adam Rutherford.


After that, I go back on the road in the USA, stopping at Bookcon NYC, Denver Comic-Con, San Diego Comic-Con, Defcon, and Printer’s Row.

Planet DebianReproducible builds folks: Reproducible Builds: week 107 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday May 7 and Saturday May 13 2017:

Report from Reproducible Builds Hamburg Hackathon

We were 16 participants from 12 projects: 7 Debian, 2 repeatr.io, 1 ArchLinux, 1 coreboot + LEDE, 1 F-Droid, 1 ElectroBSD + privoxy, 1 GNU R, 1 in-toto.io, 1 Meson and 1 openSUSE. Three people came from the USA, 3 from the UK, 2 from Finland, 1 from Austria, 1 from Denmark and 6 from Germany, plus we had several guests from our gracious hosts at the CCCHH hackerspace as well as a guest from Australia…

We had four presentations:

Some of the things we worked on:

  • h01ger did orga stuff for this very hackathon, discussed tests.r-b.o with various non-Debian contributors, filed some bugs and restarted the policy discussion in #844431. He also did some polishing work on tests.r-b.o which shall be covered in the next issue of our weekly blog.
  • Justin Cappos involved many of us in interesting discussions and started to write an academic paper about Reproducible Builds of which he shared an early beta on our mailinglist.
  • Chris Lamb (lamby) filed a number of patches for individual packages, worked on diffoscope, merged many changes to strip-nondeterminism and also filed #862073 against dak to upload buildinfo files to external services.
  • Maria Glukhova (siamezzze) fixed a bug with plots on tests.reproducible-builds.org and worked on diffoscope test coverage.
  • Lynxis worked on a new squashfs upstream release improving support for reproducible squashfs filesystems and also had some time to hack on coreboot and show others how to install coreboot on real hardware.
  • Michael Poehn worked on integrating F-Droid builds into tests.reproducible-builds.org, on the F-Droid verification utility and also ran some app reproducibility tests.
  • Bernhard worked on various unreproducible issues upstream and submitted fixes for curl, bzr, ant.
  • Erin Myhre worked on bootstrapping cleanroom builds of compiler components in Repeatr sandboxes.
  • Calvin Behling merged improvements to reppl for a cleaner storage format and better error handling and did design work for next version of repeatr pipeline execution. Calvin also lead the reproducibility testing of restaurant mood lighting.
  • Eric and Calvin also claim to have had all sorts of useful exchanges about the state of other projects, and learned a lot about where to look for more info about debian bootstrap and archive mirroring from steven and lamby :)
  • Phil Hands came by to say hi and worked on testing d-i on jenkins.debian.net.
  • Chris West (Faux) worked on extending misc.git:has-only.py, and started looking at Britney.

We had a Debian focussed meeting where we discussed a number of topics:

  • IRC meetings: yes, we want to try again to have them, monthly, a poll for a good date is being held.
  • Debian tests post Stretch: we'll add tests for stable/Stretch.
  • .buildinfo files, how to move forward: we need sourceful uploads for any arch:all packages. dak should send .buildinfo files to buildinfo.debian.net.
  • (pre?) Stretch release press release: we should do that, esp. as our achievements are largely unrelated to Stretch.
  • Reproducible Builds Summit 3: yes, we want that.
  • what to do (in notes.git) with resolved issues: keep the issues.
  • strip-nondeterminism quo vadis: Justin reminded us that strip-nondeterminism is a workaround we want to get rid of.

And then we also had a lot of fun in the hackerspace, enjoying some of their gimmicks, such as being able to open physical doors with ssh or controlling light and music with a web browser without authentication (besides being on the right network).

Not quite the hackathon

(This wasn't the hackathon per-se, but some of us appreciated these sights and so we thought you would too.)

Many thanks to:

  • Debian for sponsoring food and accommodation!
  • Dock Europe for providing us with really nice accommodation in the house!
  • CCC Hamburg for letting us use their hackerspace for >3 days non-stop!

News and media coverage

openSUSE has had a security breach in their infrastructure, including their build services. As of this writing, the scope and impact are still unclear, however the incident illustrates that no one should rely on being able to secure their infrastructure at all times. Reproducible Builds help mitigate this by allowing independent verification of build results, by parties that are unaffected by the compromise.

(Whilst this can happen to anyone, kudos to openSUSE for being open about it. Now let's continue working on Reproducible Builds everywhere!)

On May 13th Chris Lamb gave a talk on Reproducible Builds at OSCAL 2017 in Tirana, Albania.

OSCAL 2017

Toolchain bug reports and fixes

Packages' bug reports

Reviews of unreproducible packages

11 package reviews have been added, 2562 have been updated and 278 have been removed in this week, adding to our knowledge about identified issues. Most of the updates were to move ~1800 packages affected by the generic catch-all captures_build_path (out of ~2600 total) to the more specific gcc_captures_build_path, fixed by our proposed patches to GCC.

5 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (1)
  • Chris Lamb (2)
  • Chris West (1)

diffoscope development

diffoscope development continued on the experimental branch:

  • Maria Glukhova:
    • Code refactoring and more tests.
  • Chris Lamb:
    • Add safeguards against unpacking recursive or deeply-nested archives. (Closes: #780761)

strip-nondeterminism development

  • strip-nondeterminism 0.033-1 and -2 were uploaded to unstable by Chris Lamb. It included contributions from:

  • Bernhard M. Wiedemann:

    • Add cpio handler.
    • Code quality improvements.
  • Chris Lamb:
    • Add documentation and increase verbosity, in support of the long-term aim of removing the need for this tool.

reprotest development

  • reprotest 0.6.1 and 0.6.2 were uploaded to unstable by Ximin Luo. It included contributions from:

  • Ximin Luo:

    • Add a documentation section on "Known bugs".
    • Move developer documentation away from the man page.
    • Mention release instructions in the previous changelog.
    • Preserve directory structure when copying artifacts. Otherwise hash output on a successful reproduction sometimes fails, because find(1) can't find the artifacts using the original artifact_pattern.
  • Chris Lamb
    • Add proper release instructions and a keyring.

trydiffoscope development

  • Chris Lamb:
    • Uses the diffoscope from Debian experimental if possible.

Misc.

This week's edition was written by Ximin Luo, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianJamie McClelland: Late to the Raspberry Pi party

I finally bought my first Raspberry Pi to set up as a router and wifi access point.

It wasn't easy.

I first had to figure out what to buy. I think that was the hardest part.

I ended up with:

  • Raspberry Pi 3 Model B, 1.2GHz 64-bit quad-core ARMv8 CPU, 1GB RAM (Model number: RASPBERRYPI3-MODB-1GB)
  • Transcend USB 3.0 SDHC / SDXC / microSDHC / SDXC Card Reader, TS-RDF5K (Black). I only needed this because I don't have one already and I will need a way to copy a raspbian image from my laptop to a micro SD card.
  • Centon Electronics Micro SD Card 16 GB (S1-MSDHC4-16G). This is the micro sd card.
  • Smraza Clear case for Raspberry Pi 3 2 Model B with Power Supply,2pcs Heatsinks and Micro USB with On/Off Switch. And this is the box to put it all in.

I already have a cable matters USB to ethernet device, which will provide the second ethernet connection so this device can actually work as a router.

I studiously followed the directions to download the raspbian image and copy it to my micro sd card. I also touched a file on the boot partition called ssh so ssh would start automatically. Note: I first touched the ssh file on the root partition (sdb2) before realizing it belonged on the boot partition (sdb1). And, despite ambiguous directions found on the Internet, lowercase 'ssh' for the filename seems to do the trick.

Then, I found the IP address with the help of NMAP (sudo nmap -sn 192.168.69.*) and tried to ssh in but alas...

Connection reset by 192.168.69.116 port 22

No dice.

So, I re-mounted the sdb2 partition of the micro sd card and looked in var/log/auth.log and found:

May  5 19:23:00 raspberrypi sshd[760]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
May  5 19:23:00 raspberrypi sshd[760]: fatal: No supported key exchange algorithms [preauth]
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May  5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_rsa_key
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May  5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_dsa_key
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format
May  5 19:23:07 raspberrypi sshd[762]: error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key
May  5 19:23:07 raspberrypi sshd[762]: error: key_load_public: invalid format

How did that happen? And wait a minute...

0 jamie@turkey:~$ ls -l /mnt/etc/ssh/ssh_host_ecdsa_key
-rw------- 1 root root 0 Apr 10 05:58 /mnt/etc/ssh/ssh_host_ecdsa_key
0 jamie@turkey:~$ date
Fri May  5 15:44:15 EDT 2017
0 jamie@turkey:~$

Are the keys embedded in the image? Isn't that wrong?

I fixed with:

0 jamie@turkey:mnt$ sudo rm /mnt/etc/ssh/ssh_host_*
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_rsa_key -N '' -t rsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_dsa_key -N '' -t dsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_ecdsa_key -N '' -t ecdsa
0 jamie@turkey:mnt$ sudo ssh-keygen -q -f /mnt/etc/ssh/ssh_host_ed25519_key -N '' -t ed25519
0 jamie@turkey:mnt$

NOTE: I just did a second installation and this didn't happen. Maybe something went wrong as I experimented with SSH vs ssh on the boot partition?

Then I could ssh in. I removed the pi user account and added my ssh key to /root/.ssh/authorized_keys and put a new name "mondragon" in the /etc/hostname file.

And... I upgraded to Debian stretch and rebooted.

Then, I followed these instructions for fixing the wifi (replacing the firmware does still work for me).

I plugged my cable matters USB/Ethernet adapter into the device so it would be recognized, but left it disconnected.

Next I started to configure the device to be a wifi access point using this excellent tutorial, but decided I wanted to set up my networks using systemd-networkd instead.

Since /etc/network/interfaces already had eth0 set to manual (because apparently it is controlled by dhcpcd instead), I didn't need any modifications there.

However, I wanted to use the dhcp client built into systemd-networkd, so to prevent dhcpcd from obtaining an IP address, I purged dhcpcd:

apt-get purge dhcpcd5

I was planning to also use systemd-networkd to name the devices (using *.link files) but nothing I could do could convince systemd to rename them, so I gave up and added /etc/udev/rules.d/70-persistent-net.rules:

    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="b8:27:eb:ce:b5:c3", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME:="wan"
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="a0:ce:c8:01:20:7d", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME:="lan"
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="b8:27:eb:9b:e0:96", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME:="wlan"

(If you are copying and pasting, the MAC addresses will have to change.)

Then I added the following files:

root@mondragon:~# head /etc/systemd/network/*
==> /etc/systemd/network/50-lan.network <==
[Match]
Name=lan

[Network]
Address=192.168.69.1/24

==> /etc/systemd/network/55-wlan.network <==
[Match]
Name=wlan

[Network]
Address=10.0.69.1/24

==> /etc/systemd/network/60-wan.network <==
[Match]
Name=wan

[Network]
DHCP=v4
IPForward=yes
IPMasquerade=yes
root@mondragon:~#

Sadly, IPMasquerade doesn't seem to work either for some reason, so...

root@mondragon:~# cat /etc/systemd/system/masquerade.service 
[Unit]
Description=Start masquerading because Masquerade=yes not working in wan.network.

[Service]
Type=oneshot
ExecStart=/sbin/iptables -t nat -A POSTROUTING -o wan -j MASQUERADE

[Install]
WantedBy=network.target
root@mondragon:~#

And systemd's DHCPServer worked, but then it didn't, and I couldn't figure out how to debug it, so...

apt-get install dnsmasq

Followed by:

root@mondragon:~# cat /etc/dnsmasq.d/mondragon.conf 
# Don't provide DNS services (unbound does that).
port=0

interface=lan
interface=wlan

# Only provide dhcp services since systemd-networkd dhcpserver seems
# flakey.
dhcp-range=set:cable,192.168.69.100,192.168.69.150,255.255.255.0,4h
dhcp-option=tag:cable,option:dns-server,192.168.69.1
dhcp-option=tag:cable,option:router,192.168.69.1

dhcp-range=set:wifi,10.0.69.100,10.0.69.150,255.255.255.0,4h
dhcp-option=tag:wifi,option:dns-server,10.0.69.1
dhcp-option=tag:wifi,option:router,10.0.69.1

root@mondragon:~#

It would probably be simpler to have dnsmasq provide DNS service also, but I happen to like unbound:

apt-get install unbound

And...

root@mondragon:~# cat /etc/unbound/unbound.conf.d/server.conf 
server:
    interface: 127.0.0.1
    interface: 192.168.69.1
    interface: 10.0.69.1

    access-control: 192.168.69.0/24 allow
    access-control: 10.0.69.0/24 allow

    # We do query localhost for our stub zone: loc.cx
    do-not-query-localhost: no

    # Up this level when debugging.
    log-queries: no
    logfile: ""
    #verbosity: 1

    # Settings to work better with systemcd
    do-daemonize: no
    pidfile: ""
root@mondragon:~# 

Now on to the wifi access point.

apt-get install hostapd

And the configuration file:

root@mondragon:~# cat /etc/hostapd/hostapd.conf
# This is the name of the WiFi interface we configured above
interface=wlan

# Use the nl80211 driver with the brcmfmac driver
driver=nl80211

# This is the name of the network
ssid=peacock

# Use the 2.4GHz band
hw_mode=g

# Use channel 6
channel=6

# Enable 802.11n
ieee80211n=1

# Enable WMM
wmm_enabled=1

# Enable 40MHz channels with 20ns guard interval
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]

# Accept all MAC addresses
macaddr_acl=0

# Use WPA authentication
auth_algs=1

# Require clients to know the network name
ignore_broadcast_ssid=0

# Use WPA2
wpa=2

# Use a pre-shared key
wpa_key_mgmt=WPA-PSK

# The network passphrase
wpa_passphrase=xxxxxxxxxxxx

# Use AES, instead of TKIP
rsn_pairwise=CCMP
root@mondragon:~#

The hostapd package doesn't have a systemd startup file so I added one:

root@mondragon:~# cat /etc/systemd/system/hostapd.service 
[Unit]
Description=Hostapd IEEE 802.11 AP, IEEE 802.1X/WPA/WPA2/EAP/RADIUS Authenticator
Wants=network.target
Before=network.target
Before=network.service

[Service]
ExecStart=/usr/sbin/hostapd /etc/hostapd/hostapd.conf

[Install]
WantedBy=multi-user.target
root@mondragon:~#

My last step was to modify /etc/ssh/sshd_config so it only listens on the lan and wlan interfaces (listening on wlan is a bit of a risk, but also useful when mucking with the lan network settings to ensure I don't get locked out).

LongNowA Monument to Outlast Humanity

“City” is made almost entirely from rocks, sand, and concrete that is mined and mixed on site. Via Jamie Hawkesworth / The New Yorker

Michael Heizer, an eccentric pioneer of the Earthworks movement, is almost done with the mile-and-a-half sculpture he’s been working on for upwards of half a century in a remote Nevada desert. And almost nobody has seen it. “City,” inspired by the ancient ritual sites of past civilizations and set to open to the public in 02020, is one of the most ambitious artworks ever attempted. The New Yorker recently profiled Heizer’s life and work, providing the first in-depth look at his efforts to build a monument to outlast humanity.

Read the feature in full here. To see more photos of “City,” head here.

SEE ALSO: Long Now’s 02012 profile of Heizer’s Levitating Mass, a 340-ton mass of granite and one of the only sculptures in the world meant to be walked under, now on permanent display at LACMA.

Planet DebianMichal Čihař: Weblate 2.14

Weblate 2.14 has been released today slightly ahead of the schedule. There are quite a lot of security improvements based on reports we got from HackerOne program, API extensions and other minor improvements.

Full list of changes:

  • Add glossary entries using AJAX.
  • The logout now uses POST to avoid CSRF.
  • The API key token reset now uses POST to avoid CSRF.
  • Weblate sets Content-Security-Policy by default.
  • The local editor URL is validated to avoid self-XSS.
  • The password is now validated against common flaws by default.
  • Notify users about important activity on their account, such as password changes.
  • The CSV exports now escape potential formulas.
  • Various minor improvements in security.
  • The authentication attempts are now rate limited.
  • Suggestion content is stored in the history.
  • Store important account activity in audit log.
  • Ask for password confirmation when removing account or adding new associations.
  • Show time when suggestion has been made.
  • There is new quality check for trailing semicolon.
  • Ensure that search links can be shared.
  • Included source string information and screenshots in the API.
  • Allow to overwrite translations through API upload.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can login there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

TEDWe asked 3 experts: How will AI change our lives in the near future?

Imagine a world where your car drives itself, your fridge does the grocery shopping, and robots work alongside you. Rapid advances in artificial intelligence are turning that world into a near-future possibility. But what will that future really look like, and how will it change our lives?

We spoke with three artificial intelligence experts at TED2017 in Vancouver, at a dinner on the future of AI, hosted by Toyota. Here are their thoughts on how AI will change our lives in the coming years:

When we talk about AI transforming our lives, what will that really look like? How will it change life as we know it?

One of the more transformative changes I see coming is the mobility network: an internet of “physical” things, if you will. Everything is going to be able to move around the world autonomously, and we’re going to see an incredible number of different services running on this network. — Michael Hanuschik, CEO of a stealth-mode startup

AI will continue to provide a set of tools to people that expand their horizons and enhance their ability to work and play. — Janet Baker, founder of Dragon Systems

Do you think AI will help people make decisions and enhance our lives, or are we basically programming ourselves into oblivion? What will the role of humans become in the future?

I certainly don’t believe we’ll program ourselves into oblivion any time soon. AIs are specialized tools. Very powerful tools, but tools nonetheless. AIs are great at making statistical guesses based on enormous data sets, but they have no real understanding or comprehension of the tasks they are performing. — Hanuschik

Powerful technologies will be used and abused. Sophisticated AI-based technology for pattern recognition can be used to recognize the words we speak, faces in crowds, cancer cells in images, or protective radar signal analysis. It can also enable the automated surveillance of vast quantities of audio and visual materials, and unprecedented profiling and tracking through the collection and convergence of personal data. We must be aware and take active roles in advancing our capabilities and protecting ourselves from harm––including the harm from escalating prejudices we foster by isolating ourselves from differing ideas (e.g., with polarized news feeds) and productive discourse about them.  — Baker

AI will enhance and augment the human experience. Historically, humans have formed strong bonds — even relationships — with their automobiles (machines). The bond between humans and human-support robots may well prove to be even stronger. — James Kuffner, roboticist and CTO at Toyota Research Institute

There’s a lot of talk about how AI will affect the workplace. Do you think robots will take our jobs, or free us to perform new ones?

Jobs based on fairly simple and repetitive tasks will probably continue to disappear, but anything more complex is likely to be around for quite some time. I haven’t seen evidence that a true AI, with the ability to understand and reason, will be seen in our lifetimes. — Hanuschik

This is not a dichotomy. AI will replace workers, including many presently highly paid professionals, and it will provide a means for new jobs. As always, adaptation is the key for survival and success. — Baker

Humans and robots working together, each with their own strengths, will be more productive and more efficient than either one on its own. — Kuffner

As we develop more sophisticated AI technology — like self-driving cars or intelligent weapons — we put our lives in the hands of machines. Should we trust these systems, and how should we react when they fail?

I think it’s less about trusting the machines and more about trusting regulatory agencies to require implementation of best practices for developing safe, highly complex, electromechanical systems. An “inconvenient truth” is that humans love the convenience provided by technology and will choose that convenience even if it puts them in harm’s way. An example of this is the smartphone, which is the likely culprit for a 14% uptick in deadly car accidents since 2014. Even with those staggering statistics, no one is going to recall them. And where technology created a problem, technology will also solve it, probably with self-driving vehicles that will be able to significantly reduce the number of deaths every year … eventually to zero. — Hanuschik

People already put their lives in the hands of technology: planes, trains, etc. Autonomous vehicles are just the next step. Nothing is fail-proof, and we know it. Convenience and economics, along with safeguards, will drive the adoption of this new technology. — Baker

Photo: iStock


CryptogramKeylogger Found in HP Laptop Audio Drivers

This is a weird story: researchers have discovered that an audio driver installed in some HP laptops includes a keylogger, which records all keystrokes to a local file. There seems to be nothing malicious about this, but it's a vivid illustration of how hard it is to secure a modern computer. The operating system, drivers, processes, application software, and everything else is so complicated that it's pretty much impossible to lock down every aspect of it. So many things are eavesdropping on different aspects of the computer's operation, collecting personal data as they do so. If an attacker can get to the computer when the drive is unencrypted, he gets access to all sorts of information streams -- and there's often nothing the computer's owner can do.

Worse Than FailureBring Your Own Code: Your Private Foursome

Last week, I shared some code that, while imperfect, wasn’t that bad. I then issued a challenge: make it worse. Or better, if you really want. As many comments noted: one case covers only the first iteration of the loop, and one case only covers the last iteration of the loop. You could easily pull those out of the loop, and not need a for-case at all. Others noticed that this pattern looked like odd slices out of an identity matrix.

With that in mind, we got a few numpy-, Matlab-, or MatrixUtils-based solutions, which were generally the “best” solutions to the problem: generate an identity matrix and take slices out of it. This is reasonable and fine. It makes perfect sense. Let’s see if we can avoid making sense.
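
For reference, here is a minimal sketch of that sensible baseline, using NumPy. This is my own illustration, not one of the submitted solutions:

    import numpy as np

    def projection_arrays(n):
        """Build the four arrays by slicing an n x n identity matrix."""
        eye = np.eye(n, dtype=int)
        e_1 = eye[:, :1]    # first column, as an n x 1 vector
        e_n = eye[:, -1:]   # last column, as an n x 1 vector
        n_1 = eye[:, 1:]    # everything but the first column
        n_n = eye[:, :-1]   # everything but the last column
        return e_1, n_1, e_n, n_n

    e_1, n_1, e_n, n_n = projection_arrays(4)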

I’ll start with Abner Qian’s Ruby solution.

module MagicalArrayGenerator
  def magical_array_generator
    main_array = []
    self.times do |i|
      inner_array = []
      self.times do |j|
        i == j ? inner_array << 1 : inner_array << 0
      end
      main_array << inner_array
    end

    e_1 = []
    n_1 = []
    e_n = []
    n_n = []

    self.times do |i|
      e_1 << [main_array[i].first]
      e_n << [main_array[i].last]
      n_1 << main_array[i][1..-1]
      n_n << main_array[i][0..-2]
    end

    [e_1, n_1, e_n, n_n]
  end
end

class Integer
  include MagicalArrayGenerator
end

e_1, n_1, e_n, n_n = 4.magical_array_generator

At its core, this is simply an implementation that generates an identity matrix and slices it up. The actual implementation, however, is a pitch-perfect parody of Ruby development: “There’s no problem that can’t be solved by monkey-patching a method into a built-in type”. That's what happens here: the include statement injects this method into the built-in Integer data type, meaning you can call 4.magical_array_generator and get your arrays. Abner also points out that Ruby uses 62-bit integers, just in case you want some 4611686018427387903 by 4611686018427387904 arrays.

Several folks looked at the idea of taking slices, and said, “Gee, I bet you I could do this with pointers in C”. My personal favorite in that category would have to be Ron P’s approach.

#include <stdio.h>
#include <stdlib.h> /* atoi() and calloc() are declared here */

int main( int argc, char **argv)
{
    int *e_1 = 0;
    int *e_n = 0;
    int **n_1 = 0;
    int **n_n = 0;
    int *fugly = 0;
    int i,j;

    if ( argc != 2 ) return 1;

    int n = atoi(argv[1]);

    fugly = calloc( n*(n+1),sizeof(int));

    n_1 = calloc(n,sizeof(int *));
    n_n = calloc(n,sizeof(int *));

    for ( i = 0, j=n; i < n; ++i, j+=n+1 )
    {
        fugly[j]=1;
        n_1[i]=fugly+n*i;
        n_n[i]=n_1[i]+n;
    }
    e_1 = fugly+n;
    e_n = fugly+1;

    printf( "e_1\n" );
    for ( i = 0; i < n; ++i ) {
      printf( "  %d\n", e_1[i]);
    }

    printf( "\ne_n\n" );
    for ( i = 0; i < n; ++i ) {
      printf( "  %d\n", e_n[i]);
    }

    printf( "\nn_1\n" );
    for ( i = 0; i < n; ++i ) {
      printf( "  " );
      for ( j = 0; j < n-1; ++j ) {
        printf("%d ", n_1[i][j]);
      }
      printf("\n" );
    }

    printf( "\nn_n\n" );
    for ( i = 0; i < n; ++i ) {
      printf( "  " );
      for ( j = 0; j < n-1; ++j ) {
        printf("%d ", n_n[i][j]);
      }
      printf("\n" );
    }

    return 0;
}

Now, Martin Scolding gets bonus points for two reasons: first, he uses one of the worst languages in the world (not designed as an esolang), and second, this language doesn’t technically support multi-dimensional arrays. I speak, of course, of PL/SQL. Note the use of substrings to figure out what number to put in each position of the array.

DECLARE

    TYPE data_t  IS TABLE OF INTEGER INDEX BY PLS_INTEGER;
    TYPE array_t IS TABLE OF data_t  INDEX BY PLS_INTEGER;

   e_1 array_t;
   e_n array_t;
   n_1 array_t;
   n_n array_t;

   l_array_size INTEGER := 0;

PROCEDURE gen_arrays(n INTEGER, p_e_1 IN OUT array_t, p_e_n IN OUT array_t, p_n_1 IN OUT array_t, p_n_n IN OUT array_t)
--' Generate 4 Arrays of the form (example n=4)
--
--    '       | 1 |         | 0 0 0 |
--    ' e_1 = | 0 |   n_1 = | 1 0 0 |
--    '       | 0 |         | 0 1 0 |
--    '       | 0 |         | 0 0 1 |
--    '
--    '       | 0 |         | 1 0 0 |
--    ' e_n = | 0 |   n_n = | 0 1 0 |
--    '       | 0 |         | 0 0 1 |
--    '       | 1 |         | 0 0 0 |
--
IS
    l_n_string LONG := RPAD('1',n+1,'0');
BEGIN

    For i in 1..n Loop
        p_e_1(i)(1)   := TO_NUMBER(SUBSTR(l_n_string, 1, 1));
        p_e_n(i)(1)   := TO_NUMBER(SUBSTR(l_n_string, n, 1));
        For j in 1..n-1 Loop
            p_n_1(i)(j) := TO_NUMBER(SUBSTR(l_n_string, j+1, 1));
            p_n_n(i)(j) := TO_NUMBER(SUBSTR(l_n_string, j,   1));
        End Loop;
        l_n_string := LPAD(SUBSTR(l_n_string, 1, n), n+1, '0');
    End Loop;

END;

BEGIN
    l_array_size := &inp_array;

    gen_arrays(l_array_size, e_1, e_n, n_1, n_n);

    --==========================================================================
    -- DISPLAY RESULTS
    --==========================================================================
     DBMS_OUTPUT.PUT_LINE('e_1 = ');
     For i in 1..l_array_size Loop
        DBMS_OUTPUT.PUT_LINE('         | ' || e_1(i)(1) || ' |');
     End Loop;
     DBMS_OUTPUT.PUT_LINE('--------------------------------------------------');
     DBMS_OUTPUT.PUT_LINE('e_n = ');
     For i in 1..l_array_size Loop
        DBMS_OUTPUT.PUT_LINE('         | ' || e_n(i)(1) || ' |');
     End Loop;
     DBMS_OUTPUT.PUT_LINE('--------------------------------------------------');
     DBMS_OUTPUT.PUT_LINE('n_1 = ');
     For i in 1..l_array_size Loop
        DBMS_OUTPUT.PUT('         | ');
        For j in 1..l_array_size-1 Loop
            DBMS_OUTPUT.PUT(n_1(i)(j) || ' ');
        End Loop;
        DBMS_OUTPUT.PUT('|');
        DBMS_OUTPUT.NEW_LINE;
     End Loop;
     DBMS_OUTPUT.PUT_LINE('--------------------------------------------------');
     DBMS_OUTPUT.PUT_LINE('n_n = ');
     For i in 1..l_array_size Loop
        DBMS_OUTPUT.PUT('         | ');
        For j in 1..l_array_size-1 Loop
            DBMS_OUTPUT.PUT(n_n(i)(j) || ' ');
        End Loop;
        DBMS_OUTPUT.PUT('|');
        DBMS_OUTPUT.NEW_LINE;
     End Loop;
     DBMS_OUTPUT.PUT_LINE('--------------------------------------------------');
    --==========================================================================
    --
    --==========================================================================

END;
/

Finally, though, I have to give a little space to Airdrik. While the code may contain some errors, it is in Visual Basic, as was the original solution, and it knows that recursion makes everything better.

Public Sub GenerateIdentitySquare(ByVal n As Long, ByRef sq As Variant, ByVal i As Long, ByVal j As Long)
        Select Case j
        Case i:
                sq(i, j) = #1
                If i < n Then
                        GenerateIdentitySquare(n, sq, i, j+1)
                End If
        Case n:
                sq(i, j) = #0
                GenerateIdentitySquare(n, sq, i+1, 1)
        Case Else:
                sq(i, j) = #0
                GenerateIdentitySquare(n, sq, i, j+1)
        End Select
End Sub

Public Sub CopyRowValues(ByVal n As Long, ByRef sq As Variant, ByRef e As Variant, ByVal sq_i As Long, ByVal e_i As Long, ByVal j As Long)
        e(e_i, j) = sq(sq_i, j)
        if j < n Then
                CopyRowValues(n, sq, e, sq_i, e_i, j+1)
        End If
End Sub

Public Sub CopyRows(ByVal n As Long, ByRef sq As Variant, ByRef e_1 As Variant, ByRef e_n As Variant, ByRef n_1 As Variant, ByRef n_n As Variant, ByVal i As Long)
        Select Case i
        Case 1:
                CopyRowValues(n, sq, e_1, i, 1, 1)
                CopyRowValues(n, sq, n_n, i, i, 1)
                CopyRows(n, sq, e_1, e_n, n_1, n_n, i+1)
        Case n:
                CopyRowValues(n, sq, n_1, i, i-1, 1)
                CopyRowValues(n, sq, n_n, i, i, 1)
        Case Else:
                CopyRowValues(n, sq, n_1, i, i-1, 1)
                CopyRowValues(n, sq, e_n, i, 1, 1)
                CopyRows(n, sq, e_1, e_n, n_1, n_n, i+1)
        End Select
End Sub

Public Sub DefineProjectionArrays(ByVal n As Long, ByRef e_1 As Variant, ByRef e_n As Variant, ByRef n_1 As Variant, ByRef n_n As Variant)
    Dim i As Long, j As Long

    ' Generate 4 Arrays of the form (example n=4)
    '       | 1 |         | 0 0 0 |
    ' e_1 = | 0 |   n_1 = | 1 0 0 |
    '       | 0 |         | 0 1 0 |
    '       | 0 |         | 0 0 1 |
    '
    '       | 0 |         | 1 0 0 |
    ' e_n = | 0 |   n_n = | 0 1 0 |
    '       | 0 |         | 0 0 1 |
    '       | 1 |         | 0 0 0 |

        Dim sq(n, n) As Variant
        GenerateIdentitySquare(n, sq, 1, 1)

    ReDim e_1(n, 1)
    ReDim e_n(n, 1)
    ReDim n_1(n, n - 1)
    ReDim n_n(n, n - 1)

        CopyRows(n, sq, e_1, e_n, n_1, n_n, 1)
End Sub

Functional programming is always the best approach, obviously.

Planet DebianDirk Eddelbuettel: Upcoming Rcpp Talks

Very excited about the next few weeks, which will cover a number of R conferences, workshops or classes with talks, mostly around Rcpp, with one notable exception:

  • May 19: Rcpp: From Simple Examples to Machine learning, pre-conference workshop at our R/Finance 2017 conference here in Chicago

  • May 26: Extending R with C++: Motivation and Examples, invited keynote at R à Québec 2017 at Université Laval in Quebec City, Canada

  • June 28-29: Higher-Performance R Programming with C++ Extensions, two-day course at the Zuerich R Courses @ U Zuerich in Zuerich, Switzerland

  • July 3: Rcpp at 1000+ reverse depends: Some Lessons Learned (working title), at DSC 2017 preceding useR! 2017 in Brussels, Belgium

  • July 4: Extending R with C++: Motivation, Introduction and Examples, tutorial preceding useR! 2017 in Brussels, Belgium

  • July 5, 6, or 7: Hosting Data Packages via drat: A Case Study with Hurricane Exposure Data, accepted presentation, joint with Brooke Anderson

If you are near one of those events, and are interested and able to register (for the events requiring registration), I would love to chat before or after.

,

Cory DoctorowTalking Walkaway on the Techdirt podcast


Last week I sat down with Mike Masnick, the crusading technology journalist who coined the “Streisand Effect” and runs the fantastic site Techdirt, and we had a good, chewy discussion (MP3) about my new novel Walkaway; he’s just posted it to the Techdirt podcast. Hope you enjoy it as much as I did!

Planet DebianEnrico Zini: Accident on the motorway

There was an accident on the motorway. Luckily no one was seriously hurt, but a truckful of sugar and a truckful of cereal spilled completely onto the motorway and took some time to clean up.

Planet DebianDaniel Pocock: Building an antenna and receiving ham and shortwave stations with SDR

In my previous blog on the topic of software defined radio (SDR), I provided a quickstart guide to using gqrx, GNU Radio and the RTL-SDR dongle to receive FM radio and the amateur 2 meter (VHF) band.

Using the same software configuration and the same RTL-SDR dongle, it is possible to add some extra components and receive ham radio and shortwave transmissions from around the world.

Here is the antenna setup from the successful SDR workshop at OSCAL'17 on 13 May:

After the workshop on Saturday, members of the OSCAL team successfully reconstructed the SDR and antenna at the Debian info booth on Sunday and a wide range of shortwave and ham signals were detected:

Here is a close-up look at the laptop, RTL-SDR dongle (above laptop), Ham-It-Up converter (above water bottle) and MFJ-971 ATU (on right):

Buying the parts

Component | Purpose, Notes | Price/link to source
RTL-SDR dongle | Converts radio signals (RF) into digital signals for reception through the USB port. It is essential to buy the dongles for SDR with TCXO; the generic RTL dongles for TV reception are not stable enough for anything other than TV. | ~ € 25
Enamelled copper wire, 25 meters or more | Loop antenna. Thicker wire provides better reception and is more suitable for transmitting (if you have a license) but it is heavier. The antenna I've demonstrated at recent events uses 1mm thick wire. | ~ € 10
4 (or more) ceramic egg insulators | Attach the antenna to string or rope. Smaller insulators are better as they are lighter and less expensive. | ~ € 10
4:1 balun | The actual ratio of the balun depends on the shape of the loop (square, rectangle or triangle) and the point where you attach the balun (middle, corner, etc). You may want to buy more than one balun, for example, a 4:1 balun and also a 1:1 balun to try alternative configurations. Make sure it is waterproof, has hooks for attaching a string or rope and an SO-239 socket. | from € 20
5 meter RG-58 coaxial cable with male PL-259 plugs on both ends | If using more than 5 meters, or if you want to use higher frequencies above 30MHz, use thicker, heavier and more expensive cables like RG-213. The cable must be 50 ohm. | ~ € 10
Antenna Tuning Unit (ATU) | I've been using the MFJ-971 for portable use and demos because of the weight. There are even lighter and cheaper alternatives if you only need to receive. | ~ € 20 for receive only, or second hand
PL-259 to SMA male pigtail, up to 50cm, RG58 | Joins the ATU to the up-converter. The cable must be RG58 or another 50 ohm cable. | ~ € 5
Ham It Up v1.3 up-converter | Mixes the HF signal with a signal from a local oscillator to create a new signal in the spectrum covered by the RTL-SDR dongle. | ~ € 40
SMA (male) to SMA (male) pigtail | Joins the up-converter to the RTL-SDR dongle. | ~ € 2
USB charger and USB type B cable | Used for power to the up-converter. A spare USB mobile phone charger plug may be suitable. | ~ € 5
String or rope | For mounting the antenna. A lighter and cheaper string is better for portable use, while a stronger and weather-resistant rope is better for a fixed installation. | € 5

Building the antenna

There are numerous online calculators for measuring the amount of enamelled copper wire to cut.

For example, for a centre frequency of 14.2 MHz on the 20 meter amateur band, the antenna length is 21.336 meters.
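
The underlying calculation is simple. Here is a rough Python sketch using the common 1005 ft / f(MHz) full-wave loop rule of thumb; individual online calculators use slightly different constants, which is why their results (like the figure above) vary by a few tens of centimetres:

    def loop_length_m(f_mhz):
        """Full-wave loop circumference: 1005 ft divided by the frequency
        in MHz (a common rule of thumb), converted to meters."""
        return 1005 / f_mhz * 0.3048

    print("%.3f m" % loop_length_m(14.2))  # about 21.57 m for 14.2 MHz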

Add an extra 24 cm (extra 12 cm on each end) for folding the wire through the hooks on the balun.

After cutting the wire, feed it through the egg insulators before attaching the wire to the balun.

Measure the extra 12 cm at each end of the wire and wrap some tape around there to make it easy to identify in future. Fold it, insert it into the hook on the balun and twist it around itself, using between four and six twists.

Strip off approximately 0.5cm of the enamel on each end of the wire with a knife, sandpaper or some other tool.

Insert the exposed ends of the wire into the screw terminals and screw it firmly into place. Avoid turning the screw too tightly or it may break or snap the wire.

Insert string through the egg insulators and/or the middle hook on the balun and use the string to attach it to suitable support structures such as a building, posts or trees. Try to keep it at least two meters from any structure. Maximizing the surface area of the loop improves the performance: a circle is an ideal shape, but a square or 4:3 rectangle will work well too.

For optimal performance, if you imagine the loop is on a two-dimensional plane, the first couple of meters of feedline leaving the antenna should be on the plane too and at a right angle to the edge of the antenna.

Join all the other components together using the coaxial cables.

Configuring gqrx for the up-converter and shortwave signals

Inspect the up-converter carefully. Look for the crystal and find the frequency written on the side of it. The frequency written on the specification sheet or web site may be wrong so looking at the crystal itself is the best way to be certain. On my Ham It Up, I found a crystal with 125.000 written on it, this is 125 MHz.

Launch gqrx, go to the File menu and select I/O devices. Change the LNB LO value to match the crystal frequency on the up-converter, with a minus sign. For my Ham It Up, I use the LNB LO value -125.000000 MHz.
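
In other words, the dongle is really tuned 125 MHz above the frequency shown on screen, and the negative LNB LO value simply corrects the display. A quick Python sanity check of that mapping:

    LNB_LO_MHZ = -125.0  # the up-converter's crystal frequency, negated

    def dongle_freq_mhz(displayed_mhz):
        """Frequency the RTL-SDR is actually tuned to for a given
        frequency displayed in gqrx."""
        return displayed_mhz - LNB_LO_MHZ

    print(dongle_freq_mhz(14.2))  # 139.2: the dongle's frequency for 14.2 MHz HF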

Click OK to close the I/O devices window.

On the Input Controls tab, make sure Hardware AGC is enabled.

On the Receiver options tab, change the Mode value. Commercial shortwave broadcasts use AM and amateur transmissions use single sideband: by convention, LSB is used for signals below 10MHz and USB is used for signals above 10MHz. To start exploring the 20 meter amateur band around 14.2 MHz, for example, use USB.

In the top of the window, enter the frequency, for example, 14.200 000 MHz.

Now choose the FFT Settings tab and adjust the Freq zoom slider. Zoom until the width of the display is about 100 kHz, for example, from 14.15 on the left to 14.25 on the right.

Click the Play icon at the top left to start receiving. You may hear white noise. If you hear nothing, check the computer's volume controls, move the Gain slider (bottom right) to the maximum position and then lower the Squelch value on the Receiver options tab until you hear the white noise or a transmission.

Adjust the Antenna Tuner knobs

Now that gqrx is running, it is time to adjust the knobs on the antenna tuner (ATU). Reception improves dramatically when it is tuned correctly. Exact instructions depend on the type of ATU you have purchased, here I present instructions for the MFJ-971 that I have been using.

Turn the TRANSMITTER and ANTENNA knobs to the 12 o'clock position and leave them like that. Turn the INDUCTANCE knob while looking at the signals in the gqrx window. When you find the best position, the signal strength displayed on the screen will appear to increase (the animated white line should appear to move upwards and maybe some peaks will appear in the line).

When you feel you have found the best position for the INDUCTANCE knob, leave it in that position and begin turning the ANTENNA knob clockwise looking for any increase in signal strength on the chart. When you feel that is correct, begin turning the TRANSMITTER knob.

Listening to a transmission

At this point, if you are lucky, some transmissions may be visible on the gqrx screen. They will appear as darker colours in the waterfall chart. Try clicking on one of them, the vertical red line will jump to that position. For a USB transmission, try to place the vertical red line at the left hand side of the signal. Try dragging the vertical red line or changing the frequency value at the top of the screen by 100 Hz at a time until the station is tuned as well as possible.

Try and listen to the transmission and identify the station. Commercial shortwave broadcasts will usually identify themselves from time to time. Amateur transmissions will usually include a callsign spoken in the phonetic alphabet. For example, if you hear "CQ, this is Victor Kilo 3 Tango Quebec Romeo" then the station is VK3TQR. You may want to note down the callsign, time, frequency and mode in your log book. You may also find information about the callsign in a search engine.
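
A toy Python sketch of that mapping from spoken phonetics to a callsign (digits pass through unchanged; both "alfa" and the common "alpha" spelling are accepted):

    PHONETIC = {
        'alfa': 'A', 'alpha': 'A', 'bravo': 'B', 'charlie': 'C', 'delta': 'D',
        'echo': 'E', 'foxtrot': 'F', 'golf': 'G', 'hotel': 'H', 'india': 'I',
        'juliett': 'J', 'kilo': 'K', 'lima': 'L', 'mike': 'M', 'november': 'N',
        'oscar': 'O', 'papa': 'P', 'quebec': 'Q', 'romeo': 'R', 'sierra': 'S',
        'tango': 'T', 'uniform': 'U', 'victor': 'V', 'whiskey': 'W',
        'xray': 'X', 'x-ray': 'X', 'yankee': 'Y', 'zulu': 'Z',
    }

    def callsign(words):
        """Map spoken phonetic words (and bare digits) to a callsign."""
        return ''.join(PHONETIC.get(w.lower(), w) for w in words.split())

    print(callsign("Victor Kilo 3 Tango Quebec Romeo"))  # VK3TQR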

The video demonstrates reception of a transmission from another country, can you identify the station's callsign and find his location?

If you have questions about this topic, please come and ask on the Debian Hams mailing list. The gqrx package is also available in Fedora and Ubuntu but it is known to crash on startup in Ubuntu 17.04. Users of other distributions may also want to try the Debian Ham Blend bootable ISO live image as a quick and easy way to get started.

Cory DoctorowSee you tonight for the Walkaway tour stop in Bellingham! (then Vancouver, Burbank, Oxford…) (!)


Thanks to everyone (especially Neal Stephenson) who came out to last night’s Walkaway event in Seattle: if you’re in the area and couldn’t make it, you get another chance tonight when I’ll be at Bellingham’s Village Books at 7PM.


After the event, I’m driving to Vancouver for an appearance tomorrow at the Vancouver Writers’ Festival, and then on Saturday I’m doing an afternoon event at my local bookstore in Burbank, the excellent Dark Delicacies, just before I hop in a cab to LAX to fly to the UK.

In the UK, I’m stopping first in Oxford (with Tim Harford), then London (with Laurie Penny), then Liverpool (with Chris Pak), then Birmingham, before finishing in Hay-on-Wye (with Adam Rutherford).

After that, the US touring resumes anew, with stops at Denver Comic-Con, San Diego Comic-Con, Book Con NYC, Printer’s Row Chicago, and Defcon in Las Vegas.

Can’t wait to see you!

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, about 190 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Antoine Beaupré did 19.5 hours (out of 16h allocated + 5.5 remaining hours, thus keeping 2 extra hours for May).
  • Ben Hutchings did 12 hours (out of 15h allocated, thus keeping 3 extra hours for May).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 17.5 hours (out of 16 hours allocated + 3.5 hours remaining, thus keeping 2 hours for May).
  • Guido Günther did 12 hours (out of 8 hours allocated + 4 hours remaining).
  • Hugo Lefeuvre did 15.5 hours (out of 6 hours allocated + 9.5 hours remaining).
  • Jonas Meurer did nothing (out of 4 hours allocated + 3.5 hours remaining, thus keeping 7.5 hours for May).
  • Markus Koschany did 23.75 hours.
  • Ola Lundqvist did 14 hours (out of 20h allocated, thus keeping 6 extra hours for May).
  • Raphaël Hertzog did 11.25 hours (out of 10 hours allocated + 1.25 hours remaining).
  • Roberto C. Sanchez did 16.5 hours (out of 20 hours allocated + 1 hour remaining, thus keeping 4.5 extra hours for May).
  • Thorsten Alteholz did 23.75 hours.

Evolution of the situation

The number of sponsored hours decreased slightly and we’re now again a little behind our objective.

The security tracker currently lists 54 packages with a known CVE and the dla-needed.txt file 37. The number of open issues is comparable to last month.

Thanks to our sponsors

New sponsors are in bold.

CryptogramDid North Korea Write WannaCry?

The New York Times is reporting that evidence is pointing to North Korea as the author of the WannaCry ransomware. Note that there is no proof at this time, although it would not surprise me if the NSA knows the origins of this malware attack.

CryptogramUsing Wi-Fi to Get 3D Images of Surrounding Location

Interesting research:

The radio signals emitted by a commercial Wi-Fi router can act as a kind of radar, providing images of the transmitter's environment, according to new experiments. Two researchers in Germany borrowed techniques from the field of holography to demonstrate Wi-Fi imaging. They found that the technique could potentially allow users to peer through walls and could provide images 10 times per second.

News article.

Worse Than FailureThe New Manager

She'd resisted the call for years. As a senior developer, Makoto knew how the story ended: one day, she'd be drafted into the ranks of the manager, forswearing her true love webdev. She knew she'd eventually succumb, but she'd expected to hold out for a few years before she had to decide if she were willing to change jobs to avoid management.

But when her boss was sacked unexpectedly, mere weeks after the most senior dev quit, she looked around and realized she was holding the short straw. She was the most senior. Even if she didn't put in for the job, she'd be drafted into acting as manager while they filled the position.

This is the story of her first day on the job.

Makoto spent the weekend pulling together a document for their external contractors, who'd been plaguing the old boss with questions night and day— in Spanish, no less. Makoto made sure to document as clearly as she could, but the docs had to be in English; she'd taken Japanese in high school for an easy A. She sent it over first thing Monday morning, hoping to have bought herself a couple of days to wrap up her own projects before the deluge began in earnest.

It seemed at first to be working, but perhaps it just took time for them to translate the change announcement for the team. Just before noon, she received an instant message.

Well, I can just point them to the right page and go to lunch anyway, she thought, bracing herself.

Emilio: I am having error in application.
Makoto: What error are you having?

A minute passed, then another. She was tempted to go to lunch, but the message client kept taunting her, assuring her that Emilio was typing. Surely his question was just long and complicated. She should give him the benefit of the doubt, right?

Emilio: error i am having is: File path is too long

Makoto winced. Oh, that bug ... She'd been trying to get rid of the dependencies with the long path names for ages, but for the moment, you had to install at the root of C in order to avoid hitting the Windows character limits.

But I documented that. In bold. In three places!

Makoto: Did you clone the repository to a folder in the root of a drive? As noted in the documentation there are paths contained within that will exceed the windows maximum path length otherwise
Emilio: No i cloned it to C:\Program Files\Intelligent Communications Inc\Clients\Anonymized Company Name\Padding for length\

Makoto's head hit the desk. She didn't even look up as her fingers flew across the keys. I'll bet he didn't turn on nuget package restore, she thought, or configure IIS correctly.

Makoto: please clone the repository as indicated in the provided documentation, Additionally take careful note of the documented steps required to build the Visual Studio Solution for the first time, as the solution will not build successfully otherwise
Emilio: Yes.

Whatever that means. Makoto sighed. Whatever, I'm out, lunchtime.

Two hours later she was back at her desk, belly full, working away happily at her next feature, when the message bar blinked again.

Dammit!

Emilio: I am having error building application.
Makoto: Have you followed the documentation provided to you? Have you made sure to follow the "first time build" section?
Emilio: yes.
Makoto: And has that resolved your issue?
Emilio: Yes. I am having error building application
Makoto: And what error are you having?
Emilio: Yes. I am having error building application.

"Oh piss off," she said aloud, safe in the knowledge that he was located thousands of miles from her office and thus could not hear her.

"That bad?" asked her next-door neighbor, Mike, with a sympathetic smile.

"He'll figure it out, or he won't," she replied grimly. "I can't hold his hand through every little step. When he figures out his question, I'll be happy to answer him."

And, a few minutes later, it seemed he did figure it out:

Emilio: I am having error with namespaces relating to the nuget package. I have not yet performed nuget package restore

The sound of repeated thumps sent Mike scurrying back across the little hallway into Makoto's cube. He took one look at her screen, winced, and went to inform the rest of the team that they'd be taking Makoto out for a beer later to "celebrate her first day as acting manager." That cheered her enough to answer, at least.

Makoto: Please perform the steps indicated in the documentation for first time builds of the solution in order to resolve your error building the application.
Emilio: i will attempt this fix.

Ten minutes passed: just long enough for her to get back to work, but not so long she'd gotten back into flow before her IM lit up again.

Emilio: I am no longer having error build application.

"Halle-frickin-lujah", she muttered, closing the chat window and promptly resolving to forget all about Emilio ... for now.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Beginners May Meeting: Dealing with Security as a Linux Desktop User

May 20 2017 12:30
May 20 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

This presentation will introduce the various aspects of IT security that Linux desktop users may be grappling with on an ongoing basis. The target audience of the talk is beginners (newbies) who might have had bad experiences using Windows over the years and don't know what to expect when tiptoeing into the new world of Linux. General Linux users who don't always pay much attention to security may also be interested in sharing some of the commonsense practices that are essential to using our computers safely.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Planet DebianFrancois Marier: Recovering from an unbootable Ubuntu encrypted LVM root partition

A laptop that was installed using the default Ubuntu 16.04 (xenial) full-disk encryption option stopped booting after receiving a kernel update somewhere on the way to Ubuntu 17.04 (zesty).

After showing the boot screen for about 30 seconds, a busybox shell pops up:

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
Enter 'help' for list of built-in commands.

(initramfs)

Typing exit will display more information about the failure before bringing us back to the same busybox shell:

Gave up waiting for root device. Common problems:
  - Boot args (cat /proc/cmdline)
    - Check rootdelay= (did the system wait long enough?)
    - Check root= (did the system wait for the right device?)
  - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/ubuntu--vg-root does not exist. Dropping to a shell! 

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)   
Enter 'help' for list of built-in commands.  

(initramfs)

which now complains that the /dev/mapper/ubuntu--vg-root root partition (which uses LUKS and LVM) cannot be found.

There is some comprehensive advice out there but it didn't quite work for me. This is how I ended up resolving the problem.

Boot using a USB installation disk

First, create a bootable USB disk using the latest Ubuntu installer:

  1. Download a desktop image.
  2. Copy the ISO directly onto the whole USB device, not onto a partition (overwriting it in the process):

     dd if=ubuntu.iso of=/dev/sdc
    

and boot the system using that USB stick (hold the option key during boot on Apple hardware).

Mount the encrypted partition

Assuming a drive which is partitioned this way:

  • /dev/sda1: EFI partition
  • /dev/sda2: unencrypted boot partition
  • /dev/sda3: encrypted LVM partition

Open a terminal and mount the required partitions:

cryptsetup luksOpen /dev/sda3 sda3_crypt
vgchange -ay
mount /dev/mapper/ubuntu--vg-root /mnt
mount /dev/sda2 /mnt/boot
mount -t proc proc /mnt/proc
mount -o bind /dev /mnt/dev

Note:

  • When running cryptsetup luksOpen, you must use the same name as the one that is in /etc/crypttab on the root partition (sda3_crypt in this example).

  • All of these partitions must be present (including /proc and /dev) for the initramfs scripts to do all of their work. If you see errors or warnings, you must resolve them.

Regenerate the initramfs on the boot partition

Then "enter" the root partition using:

chroot /mnt

and make sure that the lvm2 package is installed:

apt install lvm2

before regenerating the initramfs for all of the installed kernels:

update-initramfs -c -k all
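
Before rebooting, it is worth checking that the freshly regenerated initramfs actually contains the LVM tools. A quick sanity check of mine (assuming the usual initramfs-tools layout; you should see lvm-related files in the listing):

    # list the contents of each initramfs and look for the lvm bits
    lsinitramfs /boot/initrd.img-* | grep lvm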

Krebs on SecurityBreach at DocuSign Led to Targeted Email Malware Campaign

DocuSign, a major provider of electronic signature technology, acknowledged today that a series of recent malware phishing attacks targeting its customers and users was the result of a data breach at one of its computer systems. The company stresses that the data stolen was limited to customer and user email addresses, but the incident is especially dangerous because it allows attackers to target users who may already be expecting to click on links in emails from DocuSign.

San Francisco-based DocuSign warned on May 9 that it was tracking a malicious email campaign where the subject line reads, “Completed: docusign.com – Wire Transfer Instructions for recipient-name Document Ready for Signature.” The missives contained a link to a downloadable Microsoft Word document that harbored malware.

A typical DocuSign email. Image: DocuSign.

The company said at the time that the messages were not associated with DocuSign, and that they were sent from a malicious third-party using DocuSign branding in the headers and body of the email. But in an update late Monday, DocuSign confirmed that this malicious third party was able to send the messages to customers and users because it had broken in and stolen DocuSign’s list of customers and users.

“As part of our ongoing investigation, today we confirmed that a malicious third party had gained temporary access to a separate, non-core system that allows us to communicate service-related announcements to users via email,” DocuSign wrote in an alert posted to its site. “A complete forensic analysis has confirmed that only email addresses were accessed; no names, physical addresses, passwords, social security numbers, credit card data or other information was accessed. No content or any customer documents sent through DocuSign’s eSignature system was accessed; and DocuSign’s core eSignature service, envelopes and customer documents and data remain secure.”

The company is asking people to forward any suspicious emails related to DocuSign to spam@docusign.com, and then to delete the missives. 

“They may appear suspicious because you don’t recognize the sender, weren’t expecting a document to sign, contain misspellings (like “docusgn.com” without an ‘i’ or @docus.com), contain an attachment, or direct you to a link that starts with anything other than https://www.docusign.com or https://www.docusign.net,” reads the advisory.

If you have reason to expect a DocuSign document via email, don’t respond to an email that looks like it’s from DocuSign by clicking a link in the message. When in doubt, access your documents directly by visiting docusign.com, and entering the unique security code included at the bottom of every legitimate DocuSign email. DocuSign says it will never ask recipients to open a PDF, Office document or ZIP file in an email.

DocuSign was already a perennial target for phishers and malware writers, but this incident is likely to intensify attacks against its users and customers. DocuSign says it has more than 100 million users, and it seems all but certain that the criminals who stole the company’s customer email list are going to be putting it to nefarious use for some time to come.

CryptogramThe Quick vs. the Strong: Commentary on Cory Doctorow's Walkaway

Technological advances change the world. That's partly because of what they are, but even more because of the social changes they enable. New technologies upend power balances. They give groups new capabilities, increased effectiveness, and new defenses. The Internet decades have been a never-ending series of these upendings. We've seen existing industries fall and new industries rise. We've seen governments become more powerful in some areas and less in others. We've seen the rise of a new form of governance: a multi-stakeholder model where skilled individuals can have more power than multinational corporations or major governments.

Among the many power struggles, there is one type I want to particularly highlight: the battles between the nimble individuals who start using a new technology first, and the slower organizations that come along later.

In general, the unempowered are the first to benefit from new technologies: hackers, dissidents, marginalized groups, criminals, and so on. When they first encountered the Internet, it was transformative. Suddenly, they had access to technologies for dissemination, coordination, organization, and action -- things that were impossibly hard before. This can be incredibly empowering. In the early decades of the Internet, we saw it in the rise of Usenet discussion forums and special-interest mailing lists, in how the Internet routed around censorship, and how Internet governance bypassed traditional government and corporate models. More recently, we saw it in the SOPA/PIPA debate of 2011-12, the Gezi protests in Turkey and the various "color" revolutions, and the rising use of crowdfunding. These technologies can invert power dynamics, even in the presence of government surveillance and censorship.

But that's just half the story. Technology magnifies power in general, but the rates of adoption are different. Criminals, dissidents, the unorganized -- all outliers -- are more agile. They can make use of new technologies faster, and can magnify their collective power because of it. But when the already-powerful big institutions finally figured out how to use the Internet, they had more raw power to magnify.

This is true for both governments and corporations. We now know that governments all over the world are militarizing the Internet, using it for surveillance, censorship, and propaganda. Large corporations are using it to control what we can do and see, and the rise of winner-take-all distribution systems only exacerbates this.

This is the fundamental tension at the heart of the Internet, and information-based technology in general. The unempowered are more efficient at leveraging new technology, while the powerful have more raw power to leverage. These two trends lead to a battle between the quick and the strong: the quick who can make use of new power faster, and the strong who can make use of that same power more effectively.

This battle is playing out today in many different areas of information technology. You can see it in the security vs. surveillance battles between criminals and the FBI, or dissidents and the Chinese government. You can see it in the battles between content pirates and various media organizations. You can see it where social-media giants and Internet-commerce giants battle against new upstarts. You can see it in politics, where the newer Internet-aware organizations fight with the older, more established, political organizations. You can even see it in warfare, where a small cadre of military can keep a country under perpetual bombardment -- using drones -- with no risk to the attackers.

This battle is fundamental to Cory Doctorow's new novel Walkaway. Our heroes represent the quick: those who have checked out of traditional society, and thrive because easy access to 3D printers enables them to eschew traditional notions of property. Their enemy is the strong: the traditional government institutions that exert their power mostly because they can. This battle rages through most of the book, as the quick embrace ever-new technologies and the strong struggle to catch up.

It's easy to root for the quick, both in Doctorow's book and in the real world. And while I'm not going to give away Doctorow's ending -- and I don't know enough to predict how it will play out in the real world -- right now, trends favor the strong.

Centralized infrastructure favors traditional power, and the Internet is becoming more centralized. This is true both at the endpoints, where companies like Facebook, Apple, Google, and Amazon control much of how we interact with information. It's also true in the middle, where companies like Comcast increasingly control how information gets to us. It's true in countries like Russia and China that increasingly legislate their own national agenda onto their pieces of the Internet. And it's even true in countries like the US and the UK, that increasingly legislate more government surveillance capabilities.

At the 1996 World Economic Forum, cyber-libertarian John Perry Barlow issued his "Declaration of the Independence of Cyberspace," telling the assembled world leaders and titans of industry: "You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear." Many of us believed him a scant 20 years ago, but today those words ring hollow.

But if history is any guide, these things are cyclic. In another 20 years, even newer technologies -- both the ones Doctorow focuses on and the ones no one can predict -- could easily tip the balance back in favor of the quick. Whether that will result in more of a utopia or a dystopia depends partly on these technologies, but even more on the social changes resulting from these technologies. I'm short-term pessimistic but long-term optimistic.

This essay previously appeared on Crooked Timber.

Planet DebianGunnar Wolf: Starting a project on private and anonymous network usage

I am starting work with the students of LIDSOL (Laboratorio de Investigación y Desarrollo de Software Libre, Free Software Research and Development Laboratory) of the Engineering Faculty of UNAM:

We want to dig into the technical and social implications of mechanisms that provide for anonymous, private usage of the network. We will have our first formal work session this Wednesday, for which we have invited several interesting people to join the discussion and help provide a path for our oncoming work. Our invited and confirmed guests are, in alphabetical order:

  • Salvador Alcántar (Wikimedia México)
  • Sandino Araico (1101)
  • Gina Gallegos (ESIME Culhuacán)
  • Juliana Guerra (Derechos Digitales)
  • Jacobo Nájera (Enjambre Digital)
  • Raúl Ornelas (Instituto de Investigaciones Económicas)

  • As well as LIDSOL's own teachers and students.

This first session is mostly exploratory: we should keep notes and decide which directions to pursue to begin with. Do note that by "research" we are starting from the undergraduate student level; not that we want to start by changing the world, but we do want to empower the students who have joined our laboratory to change themselves and change the world. Of course, we hope to further such goals via the knowledge and involvement of projects (not just the tools!) such as Tor.

Planet DebianMichal Čihař: New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting requests queue was over one month long, so it was time to process it and include new projects.

This time, the newly hosted projects include:

We now also host a few new Minetest mods:

If you want to support this effort, please donate to Weblate; recurring donations in particular are welcome to keep this service alive. You can make them on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Sociological ImagesWhy the American Health Care Act is bad for women’s health

Photo by Ted Eytan; flickr creative commons.

President Trump recently declared that Obamacare is “essentially dead” after the House of Representatives passed legislation to replace existing health care policy. While members of the Senate are uncertain about the future of the proposed American Health Care Act (AHCA) — which could ultimately result in as many as 24 million people losing their health insurance and those with pre-existing conditions facing increasing health coverage costs — a growing number of Americans, especially women, are sure that the legislation will be bad for their health, if enacted.

On the same day that the House passed the Republican-backed plan, for example, a friend of mine revealed on social media that she had gotten her yearly mammogram and physical examination. She posted that the preventative care did not cost anything under her current employer benefit plan, but would have been prohibitively expensive without insurance coverage, a problem faced by many women across the United States. For instance, the American Cancer Society reports that in 2013 38% of uninsured women had a mammogram in the last two years, while 70% of those with insurance did the same. These disparities are certainly alarming, but the problem is likely to worsen under the proposed AHCA.

Breast care screenings are currently protected under the Affordable Care Act’s Essential Health Benefits, which also covers birth control, as well as pregnancy, maternity, and newborn care. The proposed legislation supported by House Republicans and Donald Trump would allow individual states to eliminate or significantly reduce essential benefits for individuals seeking to purchase health insurance on the open market.

Furthermore, the current version of the AHCA would enable individual states to seek waivers, permitting insurance companies to charge higher premiums to people with pre-existing conditions, when they purchase policies on the open market. Making health insurance exorbitantly expensive could have devastating results for women, like those with a past breast cancer diagnosis, who are at risk of facing recurrence. Over 40,000 women already die each year from breast cancer in our country, with African-American women being disproportionately represented among these deaths.

Such disparities draw attention to the connection between inequality and health, patterns long documented by sociologists. Recent work by David R. Williams and his colleagues, for instance, examines how racism and class inequality help to explain why the breast cancer mortality rate in 2012 was 42% higher for Black women than for white women. Limiting affordable access to health care — which the AHCA would most surely do — would exacerbate these inequalities, and further jeopardize the health and lives of the most socially and economically vulnerable among us.

Certainly, everyone who must purchase insurance in the private market, particularly those with pre-existing conditions, stands to lose under the AHCA. But women are especially at risk. Their voices have been largely excluded from discussion regarding health care reform, as demonstrated by the photograph of Donald Trump, surrounded by eight male staff members in January, signing the “global gag order,” which restricted women’s reproductive rights worldwide. Or as illustrated by the photo tweeted by Vice-President Pence in March, showing him and the President, with over twenty male politicians, discussing possible changes to Essential Health Benefits, changes which could restrict birth control coverage, in addition to pregnancy, maternity, and newborn care. And now, all 13 Senators slated to work on revisions to the AHCA are men.

Women cannot afford to be silent about this legislation. None of us can. The AHCA is bad for our health and lives.

Jacqueline Clark, PhD is an Associate Professor of Sociology and Chair of the Sociology and Anthropology Department at Ripon College. Her research interests include inequalities, the sociology of health and illness, and the sociology of jobs, work, and organizations.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowA few tickets left for tonight’s event in Seattle! (then Bellingham, Vancouver, Burbank…) (!)

We had a fabulous time last night at Portland’s Powell’s City of Books and now I’m on the runway to fly up to Seattle for tonight’s event at the Neptune Theater with Neal Stephenson (it’s not too late to get tickets!) — then tomorrow I’ll be at Bellingham’s Village Books before heading to the Vancouver Writers’ Festival.


After that, there’s only one more stop on this leg of the US/Canadian tour: an afternoon appearance at Burbank’s Dark Delicacies on Saturday before I fly to the UK for the UK Walkaway Tour, which includes events in Oxford, London, Liverpool, Birmingham and Hay-on-Wye!

Can’t wait to see you there!

Planet Debianintrigeri: GNOME and Debian usability testing, May 2017

During the Contribute your skills to Debian event that took place in Paris last week-end, we conducted a usability testing session. Six people were tasked with testing a few aspects of the GNOME 3.22 desktop environment and of the Debian 9 (Stretch) operating system. A number of other people observed them and took notes. Then, two observers and three testers analyzed the results, that we are hereby presenting: we created a heat map visualization, summed up the challenges met during the tests, and wrote this blog post together. We will point the relevant upstream projects to our results.

A couple of other people also did some usability testing but went in much more depth: their feedback is much more detailed and comes with a number of improvement ideas. I will process and publish their results as soon as possible.

Missions

Testers were provided a laptop running GNOME on a Debian 9 (Stretch) Live system. A quick introduction (mostly copied from the one we found in some GNOME usability testing reports) was read. Then they were asked to complete the following tasks.

A. Nautilus

Mission A.1 — Download and rename file in Nautilus

  1. Download a file from the web, a PDF document for example.
  2. Open the folder in which the file has been downloaded.
  3. Rename the downloaded file to SUCCESS.pdf.
  4. Toggle the browser window to full screen.
  5. Open the file SUCCESS.pdf.
  6. Go back to the File manager.
  7. Close the file SUCCESS.pdf.

Mission A.2 — Manipulate folders in Nautilus

  1. Create a new folder named cats in your user directory.
  2. Create a new folder named to do in your user directory.
  3. Move the cats folder to the to do folder.
  4. Delete the cats folder.

Mission A.3 — Create a bookmark in Nautilus

  1. Create a folder named unicorns in your personal directory.
  2. This folder is important. Add a bookmark for unicorns in order to find it again in a few weeks.

Mission A.4 — Nautilus display settings

Folders and files are usually listed as icons, but they can also be displayed differently.

  1. Configure the File manager to make it show items as a list, with one file per line.
  2. You forgot your glasses and the font size is too small for you to see the text: increase the size of the text.

B. Package management

Introduction

On Debian, each application is available as a "package" which contains every file needed for the software to work.

Unlike in other operating systems, it is rarely necessary, and almost never a good idea, to download and install software from the author's website. We can instead install it from an online library managed by Debian (like an app store). This alternative offers several advantages, such as being able to update all the installed software in one single action.

Specific tools are available to install and update Debian packages.

Mission B.1 — Install and remove packages

  1. Install the vlc package.
  2. Start VLC.
  3. Remove the vlc package.

Mission B.2 — Search and install a package

  1. Find a piece of software which can download files with BitTorrent in a graphical interface.
  2. Install the corresponding package.
  3. Launch that BitTorrent software.

Mission B.3 — Upgrade the system

Make sure the whole system (meaning all installed packages) is up to date.
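
For reference, here is a sketch of command-line equivalents for these three package-management missions using apt (testers were of course free to use any tool they could find, graphical or otherwise):

    sudo apt update          # refresh the package indices first
    sudo apt install vlc     # mission B.1: install
    sudo apt remove vlc      # mission B.1: remove
    apt search bittorrent    # mission B.2: find a BitTorrent client
    sudo apt full-upgrade    # mission B.3: bring everything up to date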

C. Settings

Mission C.1 — Change the desktop background

  1. Download an image you like from the web.
  2. Set the downloaded image as the desktop wallpaper.

Mission C.2 — Tweak temporary files management

Configure the system so that temporary files older than three days are deleted automatically.

Mission C.3 — Change the default video player

  1. Install VLC (ask for help if you could not do it during the previous mission).
  2. Make VLC the default video player.
  3. Download a video file from the web.
  4. Open the downloaded video, then check if it opens with VLC.

Mission C.4 — Add and remove world clocks

When you click the time and date in the top bar, a menu pops up. There, you can display clocks in several time zones.

  1. Add a clock with Rio de Janeiro timezone, then another showing the current time in Boston.
  2. Check that the time and date menu now displays these two additional clocks.
  3. Remove the Boston clock.

Results and analysis

Heat map

We used Jim Hall's heat map technique to summarize our usability test results. As Renata puts it, it is "a great way to see how the users performed on each task. The heat map clarifies how easy or difficult it was for the participant to accomplish a certain task.

  1. Scenario tasks (from the usability test) are arranged in rows.
  2. Test participants are arranged in columns (one column per tester).
  3. The colored blocks represent each tester’s difficulty with each scenario task.

Green blocks represent the ability of the participant to accomplish the tasks with little or no difficulty.

Yellow blocks indicate the tasks that the tester had significant difficulties accomplishing.

Red blocks indicate tasks where testers experienced extreme difficulty or completed the task incorrectly.

Black blocks indicate tasks the tester was unable to complete."

Alternatively, here is the spreadsheet that was used to create this picture, with added text to avoid relying on colors only.

Most tasks were accomplished with little or no difficulty so we will now focus on the problematic parts.

What were the challenges?

The heat map shows several "hot" rows, which we will now look at in more detail.

Mission A.3 — Create a bookmark in Nautilus

Most testers right-clicked the folder first, and eventually found they could simply drag'n'drop to the bookmarks location in the sidebar.

One tester thought that he could select a folder, click the hamburger icon, and from there use the "Bookmark this folder" menu item. However, this menu action only works on the folder one has entered, not on the selected one.

Mission B.1 — Install and remove a package

Here we faced a number of issues caused by the fact that Debian Live images don't include package indices (with good reason), so no package manager can list available software.

Everyone managed to start a graphical package manager via the Overview (or via the CLI or Alt-F2 for a couple power users).

Some testers tried to use GNOME Software, which listed only already installed packages (Debian bug #862560) and provided no way we could find to refresh the package indices. That's arguably a bug in Debian Live, but still: GNOME Software might display some useful information when it detects this unusual situation.

We won't list here all the obstacles that were met in Synaptic: it's no news that its usability is rather sub-optimal, and better alternatives (such as GNOME Software) are in the works.

Mission C.2 — Tweak temporary files management

The mission was poorly phrased: some observers had to clarify that it was specifically about GNOME, and not generic Linux system administration: some power-users were already searching the web for command-line tools to address the task at hand.

Even with this clarification, no tester would have succeeded without being told they were allowed to use the web with a search query including the word "GNOME", or use the GNOME help or the Overview. Yet eventually all testers succeeded.

It's interesting to note that regular GNOME users had the same problem as others: they did not try searching for "temporary" in the Overview and did not look up the GNOME Help until it was suggested to them.

Mission C.3 — Change the default video player

One tester configured one single video file format to be opened by default with VLC, via right-click in Nautilus → Open with → etc. He believed this would be enough to make VLC the default video player, missing the subtle difference between "default video player" and "default player for one single video format".

One tester tried to complete this task inside VLC itself and then needed some help to succeed. It might be that the way web browsers ask "Do you want ThisBrowser to become the default web browser?" gave a hint that an application's own GUI is the right place to do it.

Two testers searched "default" in the Overview (perhaps the previous mission dealing with temporary files was enough to put them in this direction). At least one tester was confused since the only search result (Details – View information about your system), which is the correct one to get there, includes the word View, which suggests that one cannot modify settings there, but only view them.

One long-term GNOME user looked in Tweak Tool first, and then used the Overview.

Here again, GNOME users experienced essentially the same issues as others.

Mission C.4 — Add and remove world clocks

One tester tried to look for the clock on the top right corner of the screen, then realized it was in the middle. Other than this, all testers easily found a way to add world clocks.

However, removing a world clock was rather difficult; although most testers managed to do it, it took them a number of attempts to succeed:

  1. Several testers left-clicked or right-clicked the clock they wanted to remove, expecting this would provide them with a way to remove it (which is not the case).
  2. After a while, all testers noticed the Select button (which has no text label or tooltip), which allowed them to select the clock they wanted to remove; then, most testers clicked the 1 selected button, hoping it would provide a contextual menu or some other way to act on the selected clocks (it doesn't).
  3. Eventually, everyone managed to locate the Delete button in the bottom right corner of the window; some testers mentioned that it is less visible and less flashy than the blue bar that appears at the top of the screen once they have entered "Selection" mode.

General notes and observations

  • None of the participants consulted the GNOME Help, which is unfortunate given its:
    • great quality;
    • translations in several languages;
    • availability and adaptability to regional specifications;
    • adequacy to the currently running version of GNOME.

    Some users found the relevant help page online via web searches; others initially ignored it among search results, then looked for it later after being told that the mission was more about GNOME.

  • Whether testers were already GNOME users or not seldom impacted their chances of success.
  • Unfortunately, we haven't compiled enough information about the testers to provide useful data about who they are and what their background is. Still, we had an interesting mix in terms of genders, age (between 17 and 52 years old), skin color and computer experience.

CryptogramYacht Security

Turns out, multi-million dollar yachts are no more secure than anything else out there:

The ease with which ocean-going oligarchs or other billionaires can be hijacked on the high seas was revealed at a superyacht conference held in a private members club in central London this week.

[...]

Murray, a cybercrime expert at BlackBerry, was demonstrating how criminal gangs could exploit lax data security on superyachts to steal their owners' financial information, private photos -- and even force the yacht off course.

I'm sure it was a surprise to the yacht owners.

Worse Than FailureCodeSOD: Documented Concerns

There’s a lot of debate about how much a developer should rely on comments. Clear code should, well, be clear, and thus not need comments. On the other hand, code that’s clear the minute you write it might not be as clear six months later when you forget what it was for. On the other, other hand, sometimes a crime has been committed and we need the comments for a confession.

Austin S confesses his crime.


                private static readonly HtmlElementFlag brokenFormFlag;
                private static readonly AtomicInteger currentConfiguredVersion = (AtomicInteger)HTML_PROCESSING_VERSION;
                private static readonly ReaderWriterLockSlim configLock = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);

                static HtmlLogic()
                {
                        brokenFormFlag = HtmlNode.ElementsFlags["form"];
                        //fix stupidity
                        ConfigureFor(HTML_PROCESSING_VERSION);
                }
                private static void ConfigureFor(int version)
                {
                        if (version >= 8)
                        {
                                //form elements are NOT EMPTY!!!!
                                HtmlNode.ElementsFlags.Remove("form");
                        }
                        else
                        {
                                //Need to handle broken things properly...
                                HtmlNode.ElementsFlags["form"] = brokenFormFlag;
                        }
                        currentConfiguredVersion.Value = version;
                }
                /// <summary>
                /// A horrible abomination you need to call whenever you want
                /// HtmlAgilityPack to load html or set inner html without breaking
                /// things... See top of method for reasons.
                /// </summary>
                /// <param name="action">Action to perform while in the version lock</param>
                /// <param name="version">Version to lock for</param>
                private static void ExecuteWithVersion(Action action, int version)
                {
                        // (Note: you should only need to use this when importing/parsing
                        // html.I do not believe that you need it when exporting html)
                        /** Comment author = Austin S. Using CDATA elements to keep
                         * the IDE sane.
                         *
                         * So, you may ask, "Why do we have this unholy mother of
                         * abominations in our code?" That is an excellent question, allow
                         * me to explain why. To ensure that both the Java and C# versions
                         * handle html text the same despite using different html dom
                         * libraries, we now sanitize the initial html with HtmlTidy. This
                         * results in both libraries being handed the same (except for the
                         * "generator" meta attribute that we don't care about) <b>VALID</b>
                         * html that contains NO ERRORS.
                         * Ever since version 0 (with BoolOption3 set
                         * to true I think), we used the HtmlAgilityPack library to parse
                         * the html we are reading and save it out. Ever since then, it
                         * has had a certain bug. Allow us to consider the following
                         * fragment of html (sans the CDATA wrapper):<![CDATA[
<div id="p-search" role="search">
        <h3><label for="searchInput">Search</label></h3>
        <form action="https://en.wikipedia.org/w/index.php" id="searchform" name="searchform">
                <div id="simpleSearch">
                        <input type="search" name="search" placeholder="Search Wikipedia" title="Search Wikipedia [alt-shift-f]" accesskey="f" id="searchInput" tabindex="1" autocomplete="off">
                        <input type="hidden" value="Special:Search" name="title">
                        <input type="submit" name="go" value="Go" title="Go to a page with this exact name if it exists" id="searchButton" class="searchButton">
                </div>
        </form>
</div>
]]>
                         * "What is so special about this fragment?" you ask. Simple, it
                         * has a form element. When HtmlAgilityPack parses this fragment,
                         * we get something like this (comments from me):<![CDATA[
Element: div
        Text:\n
        Element: h3
                Element: label
                        Text: Search
        Text: \n
        Element: form //why is it empty?
        Text: \n
        Element: div
                Text: \n
                Element: input
                Element: input
                Element: input
        Text: </form> //<---WTF(udge)?!?!?!?!?!
]]>
                         * As you can see, <![CDATA[</form>]]> is interpreted as text.
                         * This means that [PRODUCT_NAME] WILL NOT TAG IT AWAY and WILL BE
                         * VISIBLE TO THE USER FOR TRANSLATION. The Java library does not
                         * have this issue. The simplest solution to this problem is to
                         * not have the Java version support html, which is sheer lunacy
                         * and is not permitted for a number of reasons, such as "Sorry,
                         * you need to use the Windows version to translate any html."
                         * Luckily, this can be fixed by modifying a <b>static</b> variable
                         * in the library like so:<code>
                         * <see cref="HtmlNode.ElementsFlags"/>.Remove("form");
                         * </code>
                         * But, as a comment on the StackOverflow answer I got this from
                         * (http://stackoverflow.com/a/4237224/1366594) mentions, this
                         * is a breaking change, and the variable is <b>STATIC</b>. If
                         * you don't know what that means, it means that this setting will
                         * be read on all threads, regardless of their origin. This
                         * wouldn't be an issue if it weren't for the fact that we need to
                         * be able to do the following two things at the same time:
                         * 1. Keep the code backwards compatible so users can save out old
                         *    Files.
                         * 2. Support concurrent execution since the user may be saving out
                         *    more than one file at a time and/or the File Handler that
                         *    calls this logic will be doing so on multiple threads.
                         * It wouldn't be so complex if we only had to handle one of these.
                         * Our second "simple" fix would be to drop support for one of
                         * those two things, but we can't drop backwards compatibility
                         * support in the C# version because that would be stupid and we
                         * would have angry customers, and we can't drop support for
                         * concurrent saving since that is also utter lunacy (try saving
                         * out 100 html files at once...)
                         * So, we need to move to our next approach, which is this
                         * abomination. Luckily, I only think we need to do this when
                         * parsing html. We create a method that wraps all html "parsing"
                         * calls so we can ensure that the html is parsed with the correct
                         * settings. Then we have this issue: concurrency. To fix this
                         * issue, we use an <see cref="AtomicInteger"/> (since I am more
                         * used to that than using a volatile int that is changed via
                         * calls to Interlocked) to keep track of the current configured
                         * version, and a <see cref="ReaderWriterLockSlim"/> to make sure
                         * we don't try to run logic for an old version at the same time
                         * we are running logic for a new version. The new versions use
                         * the "Read" lock, and the old versions use the "Write" lock.
                         * This permits us to execute multiple actions with the "current"
                         * configuration, but has the downside of only letting one action
                         * with the old versions run at a time. The best way to work
                         * around this is to implement some sort of concurrent multi lock
                         * thing that only permits locks of the same key to run at the same
                         * time, but I haven't found any such thing, and creating our own
                         * will surely end with someone going mad.
                         * So now we need to pass in a version number with any call that
                         * parses html, so are we done, right? WRONG!
                         * Recently I implemented some version checking on saving out so
                         * we don't try to save any documents that are too "new", as in
                         * the user hasn't updated their copy of PRODUCT_NAME and try to save
                         * out a file they got from a co-worker who did update their copy.
                         * If we allowed them to do so, the result would likely be invalid
                         * output, and would result in their customers being unhappy with
                         * the result since it is useless. If the customer of our customer
                         * does not catch this before deploying the translation into the
                         * wild, it would make them look like a fool, so we cannot allow it.
                         * We store the opening code's primary version in IntOption1 and
                         * the secondary version (usually the html opener's version) in
                         * IntOption2 if applicable. We would rename these variables,
                         * but some users insist on using OLD_PRODUCT_NAME (which is no longer
                         * receiving updates and also doesn't have any version checking
                         * on saving out). This is a pain, but there currently isn't much
                         * that can be done about it, so I digress.
                         * Due to this stupid breaking change thing, EVERY call to parse
                         * some html into a <see cref="HtmlAgilityPack.HtmlDocument"/> or
                         * a <see cref="HtmlNode"/> needs to be wrapped with this method
                         * (<see cref="ExecuteWithVersion(Action, int)"/>) with an int
                         * version passed into it. That version number needs to be stored
                         * somewhere and is usually stored in IntOption2 (or IntOption1 if
                         * this was an html file we opened). Then we encounter a new issue.
                         * The new issue is that the file adapters for .po and .properties
                         * use some html logic and use IntOption2 for other things, so I
                         * had to increment the main version numbers on them and move
                         * those values to IntOption3. Finally, at the end of this method
                         * (<see cref="ExecuteWithVersion(Action, int)"/>) if the current
                         * settings are not set to the current version and no other threads
                         * still want to use the old version, we set the config back to
                         * the current version because we all know that some poor sap will
                         * forget to wrap his html parsing call in this method, or we might
                         * do html parsing elsewhere where we cannot reach this method.
                         * Hopefully, we do not need to do any more insanity...
                         */
                        bool currentVer = version >= 8;
                        try
                        {
                                if (currentVer)
                                {
                                        configLock.EnterReadLock();
                                }
                                else
                                {
                                        //sadly, this will make it so old html files can only be
                                        //"saved" one at a time, but there isn't much that can
                                        //be done about that without breaking out evil voodo.
                                        configLock.EnterWriteLock();
                                }
                                if (currentConfiguredVersion.Value != version)
                                {
                                        //we need to make sure only one thread makes changes
                                        lock (configLock)
                                        {
                                                if (currentConfiguredVersion.Value != version)
                                                {
                                                        ConfigureFor(version);
                                                }
                                        }
                                }
                                action();
                        }
                        finally
                        {
                                if (currentVer)
                                {
                                        configLock.ExitReadLock();
                                }
                                else
                                {
                                        configLock.ExitWriteLock();
                                        if (configLock.CurrentReadCount < 1 &&
                                                configLock.WaitingReadCount < 1 &&
                                                configLock.WaitingWriteCount < 1)
                                        {
                                                try
                                                {
                                                        //reset config
                                                        configLock.EnterWriteLock();
                                                        lock (configLock)
                                                        {
                                                                if (currentConfiguredVersion.Value !=
                                                                        HTML_PROCESSING_VERSION)
                                                                {
                                                                        ConfigureFor(HTML_PROCESSING_VERSION);
                                                                }
                                                        }
                                                }
                                                finally
                                                {
                                                        configLock.ExitWriteLock();
                                                }
                                        }
                                }
                        }
                }

Honestly, saying much more would be gilding the lily.


Planet DebianSteve Kemp: Some minor updates ..

The past few weeks have been randomly busy: nothing huge has happened, but there have been several minor diversions.

Coding

I made a new release of my console-based mail client with integrated Lua scripting; it is available for download over at https://lumail.org/

I've also given a talk (!!) on using a literate/markdown configuration for GNU Emacs. In brief I created two files:

~/.emacs/init.md

This contains both my configuration of GNU Emacs and documentation for the same. Neat.

~/.emacs/init.el

This parses the previous file, specifically looking for "code blocks", which are then extracted and evaluated.
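
To illustrate the idea, the extraction step can be approximated from the shell (a rough sketch of mine, not the actual init.el, and assuming fenced ``` code blocks):

    # print only the lines that sit between ``` fences in the literate config
    awk '/^```/{flag=!flag; next} flag' ~/.emacs/init.md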

This system is easy to maintain, and I'm quite happy with it :)

Fuzzing

Somebody nice took the time to report a couple of bugs against my simple bytecode-interpreting virtual-machine project - all found via fuzzing.

I've done some fun fuzzing of my own in the past, so this was nice to see. I've now resolved those bugs, and updated the README.md file to include instructions on fuzzing it. (Which I started doing myself after receiving the first of the reports.)
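
For anyone wanting to try the same, fuzzing a small interpreter like this mostly amounts to handing a fuzzer a seed corpus and the binary. A minimal AFL invocation might look like this (the binary and directory names are made up here; the real instructions live in the project's README.md):

    # hypothetical paths: seed inputs in testcases/, results in findings/
    afl-fuzz -i testcases/ -o findings/ ./simple-vm @@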

Finally I have more personal news too: I had a pair of CT-scans carried out recently, and apparently here in sunny Finland (that's me being ironic, it was snowing in the first week of May) when you undergo a CT-scan you can pay to obtain your data on CD-ROM.

I'm 100% definitely going to get a copy of my brain-scan data. I'll be able to view a 3d-rendered model of my own brain on my desktop. (Once upon a time I worked for a company that produced software, sold to doctors/surgeons, for creating 3d-rendered volumes from individual slices. I confirmed with the radiologist that handled my tests that they do indeed use the standard DICOM format. Small world.)

CryptogramSecuring Elections

Technology can do a lot more to make our elections more secure and reliable, and to ensure that participation in the democratic process is available to all. There are three parts to this process.

First, the voter registration process can be improved. The whole process can be streamlined. People should be able to register online, just as they can register for other government services. The voter rolls need to be protected from tampering, as that's one of the major ways hackers can disrupt the election.

Second, the voting process can be significantly improved. Voting machines need to be made more secure. There are a lot of technical details best left to the voting-security experts who can deal with them, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters and counted by computer, but recountable by hand.

We need national security standards for voting machines, and funding for states to procure machines that comply with those standards.

This means no Internet voting. While that seems attractive, and certainly a way technology can improve voting, we don't know how to do it securely. We simply can't build an Internet voting system that is secure against hacking because of the requirement for a secret ballot. This makes voting different from banking and anything else we do on the Internet, and it makes security much harder. Even allegations of vote hacking would be enough to undermine confidence in the system, and we simply cannot afford that. We need a system of pre-election and post-election security audits of these voting machines to increase confidence in the system.

The third part of the voting process we need to secure is the tabulation system. After the polls close, we aggregate votes -- from individual machines, to polling places, to precincts, and finally to totals. This system is insecure as well, and we can do a lot more to make it reliable. Similarly, our system of recounts can be made more secure and efficient.

We have the technology to do all of this. The problem is political will. We have to decide that the goal of our election system is for the most people to be able to vote with the least amount of effort. If we continue to enact voter suppression measures like ID requirements, barriers to voter registration, limitations on early voting, reduced polling place hours, and faulty machines, then we are harming democracy more than we are by allowing our voting machines to be hacked.

We have already declared our election system to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states. We can do much more. We owe it to democracy to do it.

This essay previously appeared on TheAtlantic.com.

Planet DebianBits from Debian: New Debian Developers and Maintainers (March and April 2017)

The following contributors got their Debian Developer accounts in the last two months:

  • Guilhem Moulin (guilhem)
  • Lisa Baron (jeffity)
  • Punit Agrawal (punit)

The following contributors were added as Debian Maintainers in the last two months:

  • Sebastien Jodogne
  • Félix Lechner
  • Uli Scholler
  • Aurélien Couderc
  • Ondřej Kobližek
  • Patricio Paez

Congratulations!

Cory DoctorowSee you tonight at Powell’s, Portland! (then Seattle, Bellingham, Vancouver…) (!)

Yesterday’s Walkaway event at San Diego’s Mysterious Galaxy was terrific (there was birthday cake) and now I’m flying to Portland for an event at Powell’s City of Books tonight with Andy “Waxy” Baio before heading to Seattle for an event with Neal Stephenson at the Neptune Theater, then a stop in Bellingham’s Village Books.


From there, I’ll head to the Vancouver Writers’ Festival and then Burbank’s Dark Delicacies — here are the US/Canadian dates.

Then I head to the UK where I’ll be in Cambridge (with Tim Harford), London (with Laurie Penny), Liverpool (with Chris Pak), Birmingham, and Hay-on-Wye (with Adam Rutherford).

See you there!

Planet DebianRuss Allbery: Review: The Raven and the Reindeer

Review: The Raven and the Reindeer, by T. Kingfisher

Publisher: Red Wombat Tea Company
Copyright: 2016
ASIN: B01BKTT73A
Format: Kindle
Pages: 191

Once upon a time, there was a boy born with frost in his eyes and frost in his heart.

There are a hundred stories about why this happens. Some of them are close to true. Most of them are merely there to absolve the rest of us of blame.

It happens. Sometimes it's no one's fault.

Kay is the boy with frost in his heart. Gerta grew up next door. They were inseparable as children, playing together on cold winter days. Gerta was in love with Kay for as long as she could remember. Kay, on the other hand, was, well, kind of a jerk.

There are not many stories about this sort of thing. There ought to be more. Perhaps if there were, the Gertas of the world would learn to recognize it.

Perhaps not. It is hard to see a story when you are standing in the middle of it.

Then, one night, Kay is kidnapped in the middle of the night by the Snow Queen while Gerta watches, helpless. She's convinced that she's dreaming, but when she wakes up, Kay is indeed gone, and eventually the villagers stop the search. But Gerta has defined herself around Kay her whole life, so she sets off, determined to find him, totally unprepared for the journey but filled with enough stubborn, practical persistence to overcome a surprising number of obstacles.

Depending on your past reading experience (and cultural consumption in general), there are two things that may be immediately obvious from this beginning. First, it's written by Ursula Vernon, under her T. Kingfisher pseudonym that she uses for more adult fiction. No one else has quite that same turn of phrase, or writes protagonists with quite the same sort of overwhelmed but stubborn determination. Second, it's a retelling of Hans Christian Andersen's "The Snow Queen."

I knew the first, obviously. I was completely oblivious to the second, having never read "The Snow Queen," or anything else by Andersen for that matter. I haven't even seen Frozen. I therefore can't comment in too much detail on the parallels and divergences between Kingfisher's telling and Andersen's (although you can read the original to compare if you want) other than some research on Wikipedia. As you might be able to tell from the quote above, though, Kingfisher is rather less impressed by the idea of childhood true love than Andersen was. This is not the sort of story in which the protagonist rescues the captive boy through the power of pure love. It's something quite a bit more complicated and interesting: a coming-of-age story for Gerta, in which her innocence is much less valuable than her fundamental decency, empathy, and courage, and in which her motives for her journey change as the journey proceeds. It helps that Kingfisher's world is populated by less idealized characters, many of whom are neither wholly bad nor wholly good, but who think of themselves as basically decent and try to do vaguely the right thing. Although sometimes they need some reminding.

The story does feature a talking raven. (Most certainly not a crow.) His name is the Sound of Mouse Bones Crunching Under the Hooves of God. He's quite possibly the best part.

Gerta does not rescue Kay through the power of pure love. But there is love here, of a sort that Gerta wasn't expecting at all, and of a sort that Andersen never had in mind when he wrote the original. There's also some beautifully-described shapeshifting, delightful old women, and otters. (Also, I find the boy who appears at the very end of the story utterly fascinating, with all his implied parallel story and the implicit recognition that the world does not revolve around Kay and Gerta.) But I think my favorite part is how clearly different Gerta is at the end of her journey than at the beginning, how subtly Kingfisher makes that happen through the course of the story, and how understated but just right her actions are at the very end.

This is really excellent stuff. The next time you're feeling in the mood for a retold and modernized fairy tale, I recommend it.

Rating: 8 out of 10

,

Planet DebianVincent Fourmond: Run QSoas completely non-interactively

QSoas can run scripts, and, since version 2.0, it can be run completely without user interaction from the command-line (though an interface may be briefly displayed). This possibility relies on the following command-line options:

  • --run, which runs the command given on the command-line;
  • --exit-after-running, which automatically closes QSoas after all the commands specified by --run have run;
  • --stdout (since version 2.1), which redirects QSoas's terminal directly to the shell output.
If you create a script.cmds file containing the following commands:
generate-buffer -10 10 sin(x)
save sin.dat
and run the following command from your favorite command-line interpreter:
~ QSoas --stdout --run '@ script.cmds' --exit-after-running
This will create a sin.dat file containing a sinusoid. However, if you run it twice, an Overwrite file 'sin.dat'? dialog box will pop up. You can prevent that by adding the /overwrite=true option to save. As a general rule, you should avoid all commands that may ask questions in scripts; for instance, a /overwrite=true option is also available for save-buffers.
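
With that option added, the script above becomes safe to re-run; a minimal sketch (the option placement follows the /overwrite=true usage just described):

generate-buffer -10 10 sin(x)
save sin.dat /overwrite=true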

I use this possibility extensively because I don't like to store processed files: I prefer to store the original data files and run a script to generate the processed data whenever I want to plot or further process them. It can also be used to generate fitted data from saved parameters files. I use this to run automatic tests on Linux, Windows and Mac for every single build, in order to quickly spot platform-specific regressions.

To help you make use of this possibility, here is a shell function (Linux/Mac users only; add it to your $HOME/.bashrc file or equivalent, and restart a terminal) to run QSoas command files directly:

qs-run () {
        QSoas --stdout --run "@ $1" --exit-after-running
}
To run the script.cmds script above, just run
~ qs-run script.cmds

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. The current version is 2.1.

Krebs on SecurityGlobal ‘Wana’ Ransomware Outbreak Earned Perpetrators $26,000 So Far

As thousands of organizations work to contain and clean up the mess from this week’s devastating Wana ransomware attack, the fraudsters responsible for releasing the digital contagion are no doubt counting their earnings and congratulating themselves on a job well done. But according to a review of the Bitcoin addresses hard-coded into Wana, it appears the perpetrators of what’s being called the worst ransomware outbreak ever have made little more than USD $26,000 so far from the scam.

Victims of the Wana ransomware will see this lock screen demanding a $300 ransom to unlock all encrypted files.

The Wana ransomware became a global epidemic virtually overnight this week, after criminals started distributing copies of the malware with the help of a security vulnerability in Windows computers that Microsoft patched in March 2017. Infected computers have all their documents and other important user files scrambled with strong encryption, and victims without access to good backups of that data have two choices: Kiss the data goodbye, or pay the ransom — the equivalent of approximately USD $300 worth of the virtual currency Bitcoin.

According to a detailed writeup on the Wana ransomware published Friday by security firm Redsocks, Wana contains three bitcoin payment addresses that are hard-coded into the malware. One of the nice things about Bitcoin is that anyone can view all of the historic transactions tied to a given Bitcoin payment address. As a result, it’s possible to tell how much the criminals at the helm of this crimeware spree have made so far and how many victims have paid the ransom.

A review of the three payment addresses hardcoded into the Wana ransomware strain indicates that these accounts to date have received 100 payments totaling slightly more than 15 Bitcoins — or approximately $26,148 at the current Bitcoin-to-dollars exchange rate.

ANALYSIS

It is possible that the crooks responsible for this attack maintained other Bitcoin addresses that were used to receive payments in connection with this attack, but there is currently no evidence of that. It’s worth noting that the ransom note Wana popped up on victim screens (see screenshot above) included a “Contact Us” feature that may have been used by some victims to communicate directly with the fraudsters. Also, I realize that in many ways USD $26,000 is a great deal of money.

However, I find it depressing to think of the massive financial damage likely wrought by this ransom campaign in exchange for such a comparatively small reward. It’s particularly galling because this attack potentially endangered the lives of many. At least 16 hospitals in the United Kingdom were diverting patients and rescheduling procedures on Friday thanks to the Wana outbreak, meaning the attack may well have hurt people physically (no deaths have been reported so far, thank goodness).

Unfortunately, this glaring disparity is par for the course with cybercrime in general. As I observed on several occasions in my book Spam Nation — which tracked the careers of some of the most successful malware writers and pharmacy pill spammers on the planet — it was often disheartening to see how little money most of those guys made given the sheer amount of digital disease they were pumping out into the Internet on a daily basis.

In fact, very few of these individuals made much money at all, and yet they were responsible for perpetuating a global crime machine that inflicted enormous damage on businesses and consumers. A quote in the book from Stefan Savage, a computer science professor at the University of California, San Diego (UCSD), encapsulates the disparity quite nicely and seems to have aged quite well:

“What’s fascinating about all this is that at the end of the day, we’re not talking about all that much money,” Savage said. “These guys running the pharma programs are not Donald Trumps, yet their activity is going to have real and substantial financial impact on the day-to-day lives of tens of millions of people. In other words, for these guys to make modest riches, we need a multibillion-dollar industry to deal with them.”

Planet DebianRicardo Mones: Disabling "flat-volumes" in pulseaudio

Today I've just faced another of those happy ideas some people implement in software: ideas which can be useful in some cases, but which can also be bad as default behaviour.

Fortunately, the problems this causes had already been posted to the Debian mailing lists, along with their solution, which in a default Debian configuration basically means:

$ echo "flat-volumes = no" | sudo tee -a /etc/pulse/daemon.conf
$ pulseaudio -k && pulseaudio
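
To double-check that the daemon picked up the new setting, the dump of the effective configuration should now show it; a minimal sketch, assuming your PulseAudio build provides the --dump-conf flag:

$ pulseaudio --dump-conf | grep flat-volumes
flat-volumes = no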

And I think the default for Stretch should be set as above: raising the volume to 100% just because of a system notification may be useful for some, but it's not what common users expect.

Note: edited to fix first command as explained in comments. Thanks!

Cory DoctorowSee you this afternoon, San Diego! (then Portland, Seattle, Bellingham…) (!)


We had a great Walkaway tour stop last night in Scottsdale, AZ, and now I’m headed to San Diego to help the legendary Mysterious Galaxy store celebrate its birthday, closing out a fantastic day of speakers, readers and signings at 4PM.


Tomorrow, I’ll be in Portland, OR, with Andy “Waxy” Baio, speaking at Powell’s City of Books, and on Monday you can catch me on-stage with Neal Stephenson at Seattle’s Neptune Theater.

From there, I’ll head to Bellingham, WA, then Vancouver, then Burbank (here’s the US/Canadian tour schedule), before I fly to the UK for stops in Cambridge, London, Liverpool, Birmingham, and Hay-on-Wye!

I can’t wait to see you there!

Krebs on SecurityMicrosoft Issues WanaCrypt Patch for Windows 8, XP

Microsoft Corp. today took the unusual step of issuing security updates to address flaws in older, unsupported versions of Windows — including Windows XP and Windows 8. The move is a bid to slow the spread of the WanaCrypt ransomware strain that infected tens of thousands of Windows computers virtually overnight this week.

A map tracking the global spread of the Wana ransomware strain. Image: Malwaretech.com.

On Friday, May 12, countless organizations around the world began fending off attacks from a ransomware strain variously known as WannaCrypt, WanaDecrypt and Wanna.Cry. Ransomware encrypts a victim’s documents, images, music and other files unless the victim pays for a key to unlock them.

It quickly became apparent that Wanna was spreading with the help of a file-sharing vulnerability in Windows. Microsoft issued a patch to fix this flaw back in March 2017, but organizations running older, unsupported versions of Windows (such as Windows XP) were unable to apply the update because Microsoft no longer supplies security patches for those versions of Windows.

The software giant today made an exception to that policy after it became clear that many organizations hit hardest by Wanna were those still running older, unsupported versions of Windows.

“Seeing businesses and individuals affected by cyberattacks, such as the ones reported today, was painful,” wrote Phillip Misner, principal security group manager at the Microsoft Security Response Center. “Microsoft worked throughout the day to ensure we understood the attack and were taking all possible actions to protect our customers.”

The update to address the file-sharing bug that Wanna is using to spread is now available for Windows XP, Windows 8, and Windows Server 2003 via the links at the bottom of this advisory.

On Friday, at least 16 hospitals in the United Kingdom were forced to divert emergency patients after computer systems there were infected with Wanna. According to multiple stories in the British media, approximately 90 percent of care facilities in the U.K.’s National Health Service are still using Windows XP – a 16-year-old operating system.

According to a tweet from Jakub Kroustek, a malware researcher with security firm Avast, the company’s software has detected more than 100,000 instances of the Wana ransomware.

For advice on how to harden your systems against ransomware, please see the tips in this post.

,

Planet DebianSteve McIntyre: Fonts and presentations

When you're giving a presentation, the choice of font can matter a lot. Not just in terms of how pretty your slides look, but also in terms of whether the data you're presenting is actually properly legible. Unfortunately, far too many fonts are appallingly bad if you're trying to tell certain characters apart. Imagine if you're at the back of a room, trying to read information on a slide that's (typically) too small and (if you're unlucky) the presenter's speech is also unclear to you (noisy room, bad audio, different language). A good clear font is really important here.

To illustrate the problem, I've picked a few fonts available in Google Slides. I've written the characters "1lIoO0" (that's one, lower case L, upper case I, lower case o, upper case O, zero) in each of those fonts. Some of the sans-serif fonts in particular are comically bad for trying to distinguish between these characters.

font examples
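
If you want to run the same experiment with your own candidate fonts, a few lines of Python with the Pillow imaging library will generate a comparable specimen. This is just a sketch: the font paths below are examples and will vary by system.

    from PIL import Image, ImageDraw, ImageFont  # pip install Pillow

    SAMPLE = "1lIoO0"  # one, lower case L, upper case I, lower case o, upper case O, zero
    # Example font paths -- adjust to whatever is installed on your system.
    FONTS = ["/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
             "/usr/share/fonts/truetype/dejavu/DejaVuSerif.ttf"]

    img = Image.new("RGB", (400, 60 * len(FONTS)), "white")
    draw = ImageDraw.Draw(img)
    for i, path in enumerate(FONTS):
        font = ImageFont.truetype(path, 40)
        draw.text((10, 60 * i + 10), SAMPLE, font=font, fill="black")
    img.save("specimen.png")  # open this and judge the characters from a distance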

It may not matter in all cases if your audience can read all the characters on your slides and tell them apart, but if you're trying to present scientific or numeric results it's critical. Please consider that before looking for a pretty font.

Sociological ImagesFrom our archives: Mother’s Day

For Mother’s Day, I wrote my third post on mothering for Money magazine about the divergence in income among other-sex couples once kids arrive, called the “motherhood penalty” and “fatherhood premium.” Here’s the whole list:

And please enjoy these posts from SocImages’ Mother’s Days past!

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityU.K. Hospitals Hit in Widespread Ransomware Attack

At least 16 hospitals in the United Kingdom are being forced to divert emergency patients today after computer systems there were infected with ransomware, a type of malicious software that encrypts a victim’s documents, images, music and other files unless the victim pays for a key to unlock them.

It remains unclear exactly how this ransomware strain is being disseminated and why it appears to have spread so quickly, but there are indications the malware may be spreading to vulnerable systems through a security hole in Windows that was recently patched by Microsoft.

The ransom note left behind on computers infected with the Wanna Decryptor ransomware strain. Image: BleepingComputer.

In a statement, the U.K.’s National Health Service (NHS) said a number of NHS organizations had suffered ransomware attacks.

“This attack was not specifically targeted at the NHS and is affecting organizations from across a range of sectors,” the NHS said. “At this stage we do not have any evidence that patient data has been accessed.”

According to Reuters, hospitals across England are diverting patients requiring emergency treatment away from the affected hospitals, and the public is being advised to seek medical care only for acute medical conditions.

NHS said the investigation is at an early stage but the ransomware that hit at least 16 NHS facilities is a variant of Wana Decryptor (a.k.a. “WannaCry“), a ransomware strain that surfaced roughly two weeks ago.

Lawrence Abrams, owner of the tech-help forum BleepingComputer, said Wana Decryptor wasn’t a big player in the ransomware space until the past 24 hours, when something caused it to be spread far and wide very quickly.

“It’s been out for almost two weeks now, and until very recently it’s just been sitting there,” Abrams said. “Today, it just went nuts. This is by far the biggest outbreak we have seen to date.”

For example, the same ransomware strain apparently today also hit Telefonica, one of Spain’s largest telecommunications companies. According to an article on BleepingComputer, Telefonica has responded by “desperately telling employees to shut down computers and VPN connections in order to limit the ransomware’s reach.”

An alert published by Spain’s national computer emergency response team (CCN-CERT) suggested that the reason for the rapid spread of Wana Decryptor is that it is leveraging a software vulnerability in Windows computers that Microsoft patched in March.

According to CCN-CERT, that flaw is MS17-010, a vulnerability in the Windows Server Message Block (SMB) service, which Windows computers rely upon to share files and printers across a local network. Malware that exploits SMB flaws could be extremely dangerous inside of corporate networks because the file-sharing component may help the ransomware spread rapidly from one infected machine to another.
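
Administrators who want to know whether machines on their own networks remain exposed can test for the flaw directly: the free Nmap scanner's script collection includes a smb-vuln-ms17-010 detection script. A sketch of such a check (the target address is a placeholder, and you should only scan systems you are authorized to test):

    nmap -p 445 --script smb-vuln-ms17-010 192.0.2.10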

That SMB flaw has enabled Wana Decryptor to spread to more than 36,000 Windows computers so far, according to Jakub Kroustek, a malware researcher with Avast, a security firm based in the Czech Republic.

“So far, Russia, Ukraine, and Taiwan leading” the world in new infections, Kroustek wrote in a tweet. “This is huge.”

Abrams said Wana Decryptor — like many ransomware strains — encrypts victim computer files with extremely strong encryption, but the malware itself is not hard to remove from infected computers. Unfortunately, removing the infection does nothing to restore one’s files to their original, unencrypted state.

“It’s not difficult to remove, but it also doesn’t seem to be decryptable,” Abrams said. “It also seems to be very persistent. Every time you make a new file [on an infected PC], it encrypts that new file too.”

Experts may yet find a weakness in Wana that allows them to decode files encrypted by the ransomware strain without paying the ransom. For now, however, victims who don’t have backups of their files have one option: Pay the $300 Bitcoin ransom being demanded by the program.

Wana Decryptor is one of hundreds of strains of ransomware. Victims who are struggling with ransomware should pay a visit to BleepingComputer’s ransomware help forum, which often has tutorials on how to remove the malware and in some cases unlock encrypted files without paying the ransom. In addition, the No More Ransom Project also includes an online tool that enables ransomware victims to learn if a free decryptor is available by uploading a single encrypted file.

Update, May 13, 9:33 a.m.: Microsoft today took the unusual step of releasing security updates to fix the SMB flaw in unsupported versions of Windows, including Windows XP, Windows 8, and Windows Server 2003. See this post for more details.

Planet DebianDaniel Pocock: Thank you to the OSCAL team

The welcome gift deserves its own blog post. If you want to know what is inside, I hope to see you at OSCAL'17.

Cory DoctorowSee you tonight, Scottsdale, AZ! (then San Diego, Portland, Seattle…) (!)

Thanks to everyone who came out to last night’s Walkaway tour-stop at Houston’s Brazos Books; I’m just arriving at the airport to fly to Phoenix for tonight’s event at Scottsdale’s Poisoned Pen Books with Brian David Johnson.


Tomorrow, you’ll find me in San Diego at Mysterious Galaxy’s Birthday Bash, and then on Sunday I’ll be at Powell’s City of Books in Portland, OR, with Andy “Waxy” Baio. From there, it’s off to Seattle for an event with Neal Stephenson.

Then the tour continues! Bellingham, Vancouver and Burbank!

From there, I head to the UK: Oxford (with Tim Harford); London (with Laurie Penny); Liverpool (with Chris Pak); Birmingham, and Hay-on-Wye (with Adam Rutherford).

Looking forward to seeing you there!

CryptogramStealing Voice Prints

This article feels like hyperbole:

The scam has arrived in Australia after being used in the United States and Britain.

The scammer may ask several times "can you hear me?", to which people would usually reply "yes."

The scammer is then believed to record the "yes" response and end the call.

That recording of the victim's voice can then be used to authorise payments or charges in the victim's name through voice recognition.

Are there really banking systems that use voice recognition of the word "yes" to authenticate? I have never heard of that.

Planet DebianMartín Ferrari: 6 days to SunCamp

Only six more days to go before SunCamp! If you are still considering it, hurry up: you might still find cheap tickets for the low season.

It will be a small event (about 20-25 people), with a more intimate atmosphere than DebConf. There will be people fixing RC bugs, preparing stuff for after the release, or just chatting with other Debian folks.

There will be at least one presentation from a local project, and surely some members of nearby communities will join us for the day like they did last year.

See you all in Lloret!

Comment

Worse Than FailureError'd: Just Buttons

"What do you think the buttons do? If you thought the dialog was trying to help you log in or give an opportunity to send them an e-mail, instant message, or tweet pointing out how the buttons don't do anything, you might be mistaken," writes Alex M., "


Robert wrote, "This appeared after attempting to clean up a working copy. Apparently, I'm expected to do the same thing again (and again, and again, and AGAIN...) and expect a different result."


"I'm not sure that I can wait another 90 days for April's caffeine supply," writes Martin R.


"I was innocently researching time travel when this happened," wrote Bruce R.


Stephen E. wrote, "I was trying to figure out the order I needed to do my cross product in. I'm not quite sure how that's related to constipation?"


"According to Microsoft, MsoTriState is a 'tri-state Boolean' that apparently has 5 possible values," writes Mark, "But that's OK since only two of them are supported.


Maarten writes, "After installing the latest version of Visual Studio Community Edition, I was excited to give it a try. Unfortunately, at the end of the installation, after pressing the Launch button, I received this error dialog instead."


[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianDaniel Pocock: Kamailio World and FSFE team visit, Tirana arrival

This week I've been thrilled to be in Berlin for Kamailio World 2017, one of the highlights of the SIP, VoIP and telephony enthusiast's calendar. It is an event that reaches far beyond Kamailio and is well attended by leaders of many of the well known free software projects in this space.

HOMER 6 is coming

Alexandr Dubovikov gave me a sneak peek of the new version of the HOMER SIP capture framework for gathering, storing and analyzing messages in a SIP network.

exploring HOMER 6 with Alexandr Dubovikov at Kamailio World 2017

Visiting the FSFE team in Berlin

Having recently joined the FSFE's General Assembly as the fellowship representative, I've been keen to get to know more about the organization. My visit to the FSFE office involved a wide-ranging discussion with Erik Albers about the fellowship program and FSFE in general.

discussing the Fellowship program with Erik Albers

Steak and SDR night

After a hard day of SIP hacking and a long afternoon at Kamailio World's open bar, a developer needs a decent meal and something previously unseen to hack on. A group of us settled at Escados, Alexanderplatz, where my SDR kit emerged from my bag and other Debian users found out how easy it is to apt install the packages, attach the dongle and explore the radio spectrum.

playing with SDR after dinner

Next stop OSCAL'17, Tirana

Having left Berlin, I'm now in Tirana, Albania where I'll give an SDR workshop and Free-RTC talk at OSCAL'17. The weather forecast is between 26 - 28 degrees celsius, the food is great and the weekend's schedule is full of interesting talks and workshops. The organizing team have already made me feel very welcome here, meeting me at the airport and leaving a very generous basket of gifts in my hotel room. OSCAL has emerged as a significant annual event in the free software world and if it's too late for you to come this year, don't miss it in 2018.

OSCAL'17 banner

Planet Linux AustraliaMichael Still: Python3 venvs for people who are old and grumpy

I've been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn't a thing over there as best as I can tell.

So how do I make a venv? It's really not too bad...

First, install the dependencies:

    # Fetch pyenv, wire it into the shell, then add the pyenv-virtualenv plugin
    git clone git://github.com/yyuu/pyenv.git .pyenv
    echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
    echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(pyenv init -)"' >> ~/.bashrc
    git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
    source ~/.bashrc
    


Now to make a venv, do something like this (in this case, infrasot is the name of the venv):

    mkdir -p ~/.virtualenvs/pyenv-infrasot
    cd ~/.virtualenvs/pyenv-infrasot
    pyenv virtualenv system infrasot
    


You can see your installed venvs like this:

    $ pyenv versions
    * system (set by /home/user/.pyenv/version)
      infrasot
    


Where system is the system-installed python, not a venv. To activate and deactivate the venv, do this:

    $ pyenv activate infrasot
    $ ... stuff you're doing ...
    $ pyenv deactivate
    


I'll probably write wrappers at some point so that this looks like virtualenvwrapper, but it's good enough for now.
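
For the impatient, a couple of thin shell functions along these lines would get most of the way there (a hypothetical sketch that mimics virtualenvwrapper's command names on top of pyenv, not virtualenvwrapper's actual implementation):

    # Add to ~/.bashrc after the pyenv setup above.
    mkvenv () {
        # Create a venv named $1 based on the system python
        pyenv virtualenv system "$1"
    }
    workon () {
        # Activate the venv named $1
        pyenv activate "$1"
    }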

Tags for this post: python venv virtualenvwrapper python3
Related posts: Implementing SCP with paramiko; Packet capture in python; A pythonic example of recording metrics about ephemeral scripts with prometheus; mbot: new hotness in Google Talk bots; Calculating a SSH host key with paramiko; Twisted conch

Comment

,

Planet Linux AustraliaOpenSTEM: Assessments

Well, NAPLAN is behind us for another year and so we can all concentrate on curriculum work again! This year we have updated our assessment material to make it even easier to map the answers in the Student Workbooks with the curriculum codes. Remember, our units integrate across several curriculum areas. The model answers now contain colour coded curriculum codes that look like this:  These numbers refer to specific curriculum strands, which are now also listed in our Assessment Guides. In the back of each Assessment Guide is a colour coded table – Gold for History; Green for Geography; Light Green for HASS Skills; Orange for Civics and Citizenship; Purple for Economics and Business and Blue for Science. Each curriculum code is included in this table, along with the rubric for grades A to E, or AP to BA for the younger students.

These updates mean that teachers can now match each question to the specific curriculum area being assessed, thus simplifying the process for grading, and reporting on, each curriculum area. So, if you need to report separate grades for Science and HASS, or even History and Civics and Citizenship, you can tally the results across the questions which address those subject areas, to obtain an overall grade for each subject. Since this can be done on a question-by-question basis, you can even keep a running tally of how each student is doing in each subject area through the term, by assessing those questions they have answered, on a regular basis.

Please make sure that you have the latest updates of both the model answers and the assessment guides for each unit that you are teaching, with the codes as shown here. If you don’t have the latest updates, please download them from our site. Log in with your account, go to your downloads (click on “My Account” on the top right and then “Downloads” on the left). Find the Model Answers PDF for your unit(s) in the list of available downloads and click the button(s) to download each one again. Email us if there are any issues.

Rondam RamblingsTaking "missing the point" to a whole new level

It's a real struggle to keep upright in the maelstrom of cluelessness that swirls around Donald Trump. He's like a black hole, sucking in all facts and reason beyond his event horizon, never to be seen again, leaving behind an accretion disk of chaos and contradiction.  It's hard to know where to begin to attack this monster.  But you've gotta start somewhere, and this seems like as good a place

Cory DoctorowHouston, I’ll see you tonight on the Walkaway tour! (then Scottsdale, San Diego, Portland…) (!)

Thanks to everyone who turned up last night for a stellar event at Austin’s Book People! I’m about to head to the airport to fly to Houston, where I’ll be appearing tonight at 7PM at Brazos Books, before heading to Scottsdale, AZ for an appearance at Poisoned Pen (with Brian David Johnson).


From there, it’s appearances in Portland, OR (with Andy “Waxy” Baio), Seattle (with Neal Stephenson), Bellingham, Vancouver, and Burbank.


Then I head to the UK, where I’ll be in Oxford (with Tim Harford), London (with Laurie Penny), Liverpool (with Chris Pak), Birmingham, and Hay-on-Wye (with Adam Rutherford).

See you there!


(Image: Chris Brown)

Planet Linux AustraliaPaul Wayper: How to get Fedora working on a System 76 Oryx Pro

Problems:

  a) No sound
  b) Only onboard screen; does not recognise HDMI or Mini-DP

Solutions:

  1) Install Korora
  2) Make sure you're not using an outdated kernel that doesn't have the snd-hda-intel driver available
  3) dnf install akmod-nvidia xorg-x11-drv-nvidia

Extra resources: http://sub-pop.net/post/fedora-23-on-system76-oryx-pro/

Planet Linux AustraliaPaul Wayper: LCA 2017 roundup

I've just come back from LCA at the Wrest Point hotel and fun complex in Hobart, over the 16th to the 20th of January. It was a really great conference and keeps the bar for both social and technical enjoyment at a high level.

I stayed at a nearby AirBNB property so I could have my own kitchenette - I prefer to be able to at least make my own breakfast rather than purchase it - and to give me a little exercise each day walking to and from the conference. Having the conference in the same building as a hotel was a good thing, though, as it both simplified accommodation for many attendees and meant that many other facilities were available. LCA this year provided lunch, which was a great relief as it meant more time to socialise and learn and it also spared the 'nearby' cafes and the hotel's restaurants from a huge overload. The catering worked very well.

From the first keynote right to the last closing ceremony, the standard was very high. I enjoyed all the keynotes - they really challenged us in many different ways. Pia gave us a positive view of the role of free, open source software in making the world a better place. Dan made us think of what happens to projects when they stop, for whatever reason. Nadia made us aware of the social problems facing maintainers of FOSS - a topic close to my heart, as I see the way we use many interdependent pieces of software as in conflict with users' social expectations that we produce some kind of seamless, smooth, cohesive whole for their consumption. And Robert asked us to really question our relationship with our users and to look at the "four freedoms" in terms of how we might help everyone, even people not using FOSS. The four keynotes really linked together well - an amazing piece of good work compared to other years - and I think gave us new drive.

I never had a session where I didn't want to see something - which has not always been true for LCA - and quite often I skipped seeing something I wanted to see in order to see something even more interesting. While the miniconferences sometimes lacked the technical punch or speaker polish, they were still all good and had something interesting to learn. I liked the variety of miniconf topics as well.

Standout presentations for me were:

  • Tom Eastman talking about building application servers - which, in a reversal of the 'cloud' methodology, have to sit inside someone else's infrastructure and maintain a network connection to their owners.
  • Christoph Lameter talking about making kernel objects movable - particularly the inode and dentry caches. Memory fragmentation affects machines with long uptimes, and it was fascinating to hear Matthew Wilcox, Dave Chinner, Keith Packard and Christoph talking about how to fix some of these issues. That's the kind of opportunity that a conference like LCA provides.
  • James Dumay's talk on Blue Ocean, a new look for Jenkins. It really gives Jenkins a modern, interactive look and I hope this becomes the new default.

CryptogramInterview with Ross Anderson

Cybersecurity researcher Ross Anderson has a good interview on edge.org.

Worse Than FailureA Naughty Bot


As many of you know, outside of being a writer, I also work on open-source projects. The most famous of these is Sockbot, a chatbot platform that interfaces with various forum and message clients. If you're reading this, my boss didn't object to pointing out that my work has never made this particular mistake—but maybe we'd be more famous if we had.

This particular chatbot was designed for internal use. The company, like many others, had moved to Slack as an IM program for their development teams. Like every other team that found themselves with a Slack, the developers quickly set about programming bots to interact with the platform and assist with common tasks like linking to an issue in the company Jira, pushing out a docker container to the QA environment, and ordering lunch.

Today's submitter, Suzie, wanted to add Google search to her local bot, Alabaster. Often, when people started talking about something, they'd grab the first link off Google and dump it into the chat for context. This way, Alabaster could do that for them. Sure, it wasn't the most useful thing in the world, but it was simple and fun, and really, isn't that why we program in the first place?

Of course, Google puts a rate limit of 100 queries per day on their searching, and you need an API key. Bing, on the other hand, had a much higher limit at the time of 5,000 queries a month, and was much easier to integrate with. So Suzie made the executive decision to settle for Bing and eat the gentle razzing she'd get in response.

If you've never used a chatbot before, there are two ways to design them. The harder but much-nicer-to-use way is to design a natural language processor that can tell when you're talking to the bot and respond accordingly. The easier and thus far more common way is to have a trigger word and a terse command syntax, like !ban user to ban a user from the chat room. Command words are typically prefixed with a bit of punctuation so they can't accidentally be used to start a sentence, too. In the example I gave, the ! informs the bot that you're talking to it, and ban is the command, given the argument user to the command handler.
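
In code, the terse-command style is little more than a prefix check and a split on the first space. A hypothetical Python sketch (illustrative only; this is not Sockbot's or Alabaster's actual API):

    TRIGGER = "!!"

    def parse(line):
        """Split '!!bing some query' into ('bing', 'some query'), or None."""
        if not line.startswith(TRIGGER):
            return None  # not addressed to the bot
        command, _, args = line[len(TRIGGER):].partition(" ")
        return command.lower(), args.strip()

    assert parse("!!bing Bing Crosby") == ("bing", "Bing Crosby")

Splitting on the first space, rather than doing substring replacement, sidesteps exactly the trap that's about to bite.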

Alabaster was using !! as his trigger, so Suzie settled on !!bing as the bot command to start a search, passing in the rest of the line. Since she wasn't using Sockbot, she had to handle the line parsing herself. She went the easy way and replaced the first instance of the word "bing" with an empty string before handing the rest of the line to the Bing API.

Or ... she tried to. In reality, she wrote var searchText = message.TargetedText.ToLower().ReplaceFirst("big ", "").Trim();, missing the all-important "n" in "bing".

This wouldn't usually be a big deal. If you start your Google search phrase with the word "google", it's usually ignored while Google searches for the rest of the phrase. Bing, however, is also part of several people's names, such as Bing Crosby, Chandler Bing ... or porn star Carmella Bing, the most searched of the three. Somehow, Suzie had also allowed the searches access to NSFW results. Every time Alabaster was asked to search, it came back with X-rated results about the porn star.

Suzie scrapped the module and started over. This time, she decided Google's rate limit was probably just fine ... just fine indeed.

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!

Planet Linux AustraliaOpenSTEM: Maths Challenge (Basic Operations)

As we are working on expanding our resources in the Maths realm, we thought it would be fun to start a little game here.

Remember “Letters and Numbers” on SBS? (Countdown in UK, Cijfers en Letters in The Netherlands and Belgium, originally Des Chiffres et des Lettres in France).

The core rules for the numbers game are: you get 6 numbers to use with basic operations (add, subtract, multiply, divide) to get as close as possible to a three-digit target number. You can use each number only once, but you don't have to use all the numbers. No intermediate result is allowed to be negative.

Now try this for practice:

Your 6 numbers (4 small, 2 large):    1     9     6     9     25     75

Your target: 316

Comment on this post with your solution (full working)!  We’re not worrying about a time limit, as it’s about the problem solving.
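
If you'd rather check whether an exact solution exists mechanically, a brute-force search is tiny at this size. Here is a sketch in Python; it assumes the usual Countdown convention that division must be exact, which isn't stated in the rules above:

    def solve(numbers, target):
        """Brute-force the numbers game; return one exact expression, or None."""
        if target in numbers:
            return str(target)
        def rec(items):
            for i in range(len(items)):
                for j in range(len(items)):
                    if i == j:
                        continue
                    (a, ea), (b, eb) = items[i], items[j]
                    rest = [items[k] for k in range(len(items)) if k not in (i, j)]
                    results = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})")]
                    if a - b >= 0:  # no intermediate result may be negative
                        results.append((a - b, f"({ea}-{eb})"))
                    if b != 0 and a % b == 0:  # assume division must be exact
                        results.append((a // b, f"({ea}/{eb})"))
                    for value, expr in results:
                        if value == target:
                            return expr
                        found = rec(rest + [(value, expr)])
                        if found:
                            return found
            return None
        return rec([(n, str(n)) for n in numbers])

    print(solve([1, 9, 6, 9, 25, 75], 316))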

CryptogramWhy Is the TSA Scanning Paper?

I've been reading a bunch of anecdotal reports that the TSA is starting to scan paper separately:

A passenger going through security at Kansas City International Airport (MCI) recently was asked by security officers to remove all paper products from his bag. Everything from books to Post-It Notes, documents and more. Once the paper products were removed, the passenger had to put them in a separate bin to be scanned separately.

When the passenger inquired why he was being forced to remove the paper products from his carry-on bag, the agent told him that it was a pilot program that's being tested at MCI and will begin rolling out nationwide. KSHB Kansas City is reporting that other passengers traveling through MCI have also reported the paper-removal procedure at the airport. One person said that security dug through the suitcase for two "blocks" of Post-It Notes at the bottom.

Does anyone have any guesses as to why the TSA is doing this?

EDITED TO ADD (5/11): This article says that the TSA has stopped doing this. They blamed it on their contractor, Akai Security.

CryptogramForging Voice

LyreBird is a system that can accurately reproduce the voice of someone, given a large amount of sample inputs. It's pretty good -- listen to the demo here -- and will only get better over time.

The applications for recorded-voice forgeries are obvious, but I think the larger security risk will be real-time forgery. Imagine the social engineering implications of an attacker on the telephone being able to impersonate someone the victim knows.

I don't think we're ready for this. We use people's voices to authenticate them all the time, in all sorts of different ways.

EDITED TO ADD (5/11): This is from 2003 on the topic.