Planet Russell


Planet Debian: Shirish Agarwal: A hack and a snowflake

This will be a long post. Before starting, I would like to explain that I am not a native English speaker. I say this because what I'm going to write may or may not use the same terms or meanings that others understand, as I'm an Indian who uses a mix of American and British English.

So it's very much possible that I have picked up all the bad habits of learning and understanding English and none of the good ones of writing it, as bad habits, in writing as elsewhere in life, are the easier ones to pick up. Also, I'm not a trained writer, nor have I ever taken writing lessons, apart from learning English in school as a language meant to communicate.

A few days back, I was reading an opinion piece (I have tried to find it again but failed; if anybody finds it, please share it in the comments and I will link it here). A feminist author proclaimed how some poets preached or shared violence against women in their writings and poems, including some of the most famous poets we admire today, such as William Wordsworth. She picked out particular poems from their body of work which seemed to convey that message. Going further than that, she chose to differentiate between the poet and his larger body of work. I wish she had cared enough to also look a bit more deeply into the poet's life rather than labeling him from one poem among the tens or hundreds he may have written. I confess I haven't read much of Wordsworth beyond what was in school, and from that he seemed to be a nature lover rather than the sexual predator he is being made out to be. It is possible that I have been misinformed.

 

[Image: Meaning of author – courtesy bluediamondgallery.com – CC-by-SA]

The reason I say this is because I'm a hack. A hack in the writing department or 'business' is somebody who is not supposed to tell any backstory, but just write and go away. Writers though, even those who write short stories, need to have a backstory at least in the back of their mind for each character they introduce into the story. Because it's a short story, they cannot reveal where the characters come from, but only point at the actions they are taking. I had started the process two times, and two small stories got somewhat written through me, but I stopped both times midway.

While I was hammering at the keyboard for those stories, it was as if the characters themselves were taking me on a journey, a dark one, and I didn't want to venture further. I had heard this from quite a few authors, a few of them published as well, and I had dismissed it as a kind of sales pitch or something.

When I did write those stories, for the sake of argument, I realized that the only things the author has are an idea and the overall arc of the story. You have to have complete faith in your characters, even if they lead you astray or into unexpected places. The characters speak to you and through you rather than the other way around. It is the maddest and most mysterious of journeys, and it seemed the characters liked the darker tones more than the lighter ones. I do not know whether it is the same for all writers/hacks (at least in the beginning) or just me, or maybe it's a cathartic expression. I do hope to still do more stories, and even complete them, even if they have dark overtones, just to understand the process. By dark I mean violence here.

That is why I said that if the author of the opinion piece had taken the pains to share more of the context surrounding the poems themselves, such as when Mr. Wordsworth or the other poets wrote them, perhaps I could identify with her view, as could many writers/authors themselves.

I was disappointed with the article in two ways. In one way, they were dismissing the poet/artist, and yet they seemed not to want to critique or ban all the other works, because

a. either they liked the major part of the work or

b. they knew the audience to whom they were serving the article probably likes the other works, and chose not to provoke them.

Another point: I felt that when you are pushing and punishing poets, are you doing so because they are soft targets, now more than ever? Almost all the poets she talked about are unfortunately no longer in this mortal realm. On the business side of things, the publishing industry is in for grim times. The poets and the poems are perhaps the easiest targets at the moment, as they are not the center of the world anymore, as they used to be. Both in the United States and here in India, literature, or even fiction for that matter, has been booted out of the educational system. The point I'm trying to make here is that publishers are not, and would not be, in a position to protect authors or even themselves when such articles are written and opinions are formed. Also see https://scroll.in/article/834652/debut-authors-heres-why-publishers-are-finding-it-difficult-to-market-your-books for an Indian viewpoint on the same.

I also did not understand what the author wanted when she named and shamed the poets. If you really want to name and shame people who have committed and are committing acts of violence against women, then the horror genre, apart from the action genre, would be an easy target. In many horror movies, in Hollywood, Bollywood, and perhaps other countries as well, the female protagonist/lead is often molested, sexually assaulted, maimed, killed, or cannibalized, and so on and so forth. Should we ban such movies forthwith?

Also, does 'banning' a work of art really work? The movie 'Padmavaat' has been mired in controversies due to a cultural history in which, as the story/myth goes, 'Rani Padmavati' (whether she is a real or an imaginary figure is probably fodder for another blog post), when confronted with Khilji, committed 'Jauhar', or self-immolation, so that she would remain 'pure'. The fanatics rally around her as she is supposed to have paid the ultimate price, sacrificing herself. But if she were really a queen, shouldn't she have thought of her people and lived to lead the fight, run away to fight another day, or, if she was cunning enough, wormed her way into Khilji's heart and toppled him from within? The history and politics of those days had all those options in front of her, if she were a real character; why commit suicide?

Because of the violence being perpetuated around Rani Padmavati, there hasn't been a decent critique of either the movie or the historical times in which she lived. It perhaps makes the men of the land secure in the knowledge that women, then and even now, should kill themselves rather than fall in love with the 'other' (a Muslim), romantically thought of as the 'invader' - a thought which has been perpetuated by the English for their own political gains ever since the East India Company came. Another idea, of women being pure and oblivious rather than 'devious', could also be debated.

(sarcasm) Of course, the idea put forward by actual historians, that Khilji and Rani Padmavati could not have lived in the same century, is too crackpot to believe, as cultural history wins over real history. (/sarcasm)

The reason this whole thing got triggered was the 'snowflake' comments on https://lwn.net/Articles/745817/. The article itself is a pretty good read. Even though I'm an outsider to how the kernel comes together, and although I have only theoretical knowledge of how the various subsystem maintainers pull and push patches up the train, and of how Linus manages to eke out a kernel release every 3-4 months, I did have an opportunity to observe how fly-by contributors are ignored by subsystem maintainers.

About a decade or less ago, my 2-button wheel Logitech mouse was failing, and I had no idea why it sometimes worked and sometimes didn't. A hacker named 'John Hill' put up a patch. What the patch did, essentially, was trigger warnings on the console when the system was unable to get a signal from my 2-button wheel mouse. I commented and tried to get it pushed into the trunk, but it wasn't, and there was no explanation from anyone as to why the patch was discarded. While building the mouse module I did come to know how many types and models of mice there were, which was a revelation to me at that point in time. By pushing I mean that I commented where the patch was posted, and posted a couple of times on the mailing list where threads for patches are posted, saying that the patch by John Hill should be considered, but nobody got back to me or to him.

It's been a decade since then, and AFAIK we still do not have any proper error-reporting process if the mouse/keyboard fails to transmit messages/signals to the system.

That apart, the really long thread was about the term 'snowflake'. I had been called that in the past, but had sort of tuned it out as I didn't know what the term meant.

When I went to Wikipedia and looked up 'snowflake', it came up with three meanings for the same word:

a. A unique crystalline shape of snow

b. A person who believes that s/he is unique and hence entitled

c. A person who is weak or thin-skinned (overly sensitive)

I believe we are all of the above; the only difference is perhaps one of degree. If we weren't meant to be unique, we wouldn't have been given a personality, a body type, a sense of reasoning and logic, and, perhaps most important, a sense of what is right and wrong. And with being thick-skinned comes the inability to love and to have empathy for others.

To round off on a somewhat hopeful note: I was re-reading, maybe for the umpteenth time, 'Sacred Stone', an action thriller in which four Hindus, along with a corrupt, wealthy and hurt billionaire, try to blow up the most sacred sites of the Muslims, Mecca and Medina. While I don't know whether it would be possible or not, I would surely like to see people using the pious days for reflection. I don't have to do anything, just be.

Similarly, there is the Spanish pilgrimage as shown in 'The Way'. I don't think any of my issues would be resolved by being in either of the two places, but it may trigger paths within me which I have not yet explored, or which I forgot a long time ago.

At the end I would like to share two interesting articles that I saw/read over the week: the first one is about the 'Alphonso', and the other about Samarkhand. I hope you enjoy both articles.


Planet Debian: Steve Kemp: Decoding 433Mhz-transmissions with software-defined radio

This blog-post is a bit of a diversion, and builds upon my previous entry on using 433Mhz radio-transmitters and receivers with Arduino and/or ESP8266 devices.

As mentioned in that post, I've recently been overhauling my in-house IoT buttons, and I decided to go down the route of using commercially-available buttons which broadcast signals via radio, rather than using IR or WiFi. The advantage is that I don't need to build any devices, or worry about 3D-printing a case - the commercially available buttons are cheap, water-proof, portable, and reliable, so why not use them? Ultimately I bought around ten buttons, along with radio-receiver and radio-transmitter modules for my ESP8266 device. I wrote code to run on my device to receive the transmissions, decode the device-ID, and take different actions based upon the specific button pressed.

In the gap between buying the buttons (read: radio transmitters) and waiting for the transmitter/receiver modules I intended to connect to my ESP8266/Arduino device(s), I remembered that I'd previously bought a software-defined-radio receiver, and figured I could use it to receive and react to the transmissions directly on my PC.

The dongle I'd bought in the past was a simple USB-device which identifies itself as follows when inserted into my desktop:

  [17844333.387774] usb 3-9: New USB device found, idVendor=0bda, idProduct=2838
  [17844333.387777] usb 3-9: New USB device strings: Mfr=1, Product=2, SerialNumber=3
  [17844333.387778] usb 3-9: Product: RTL2838UHIDIR
  [17844333.387778] usb 3-9: Manufacturer: Realtek
  [17844333.387779] usb 3-9: SerialNumber: 00000001

At the time I bought it I wrote a brief blog post, which described tracking aircraft, and I said "I know almost nothing about SDR, except that it can be used to let your computer do stuff with radio."

So my first step was finding some suitable software to listen to the right frequency and ideally decode the transmissions. A brief search led me to the following repository:

The RTL_433 project is pretty neat as it allows receiving transmissions and decoding them. Of course it can't decode everything, but it has the ability to recognize a bunch of commonly-used hardware, and when it does it outputs the payload in a useful way, rather than just dumping a bitstream/bytestream.

Once you've got your USB-dongle plugged in, and you've built the project, you can start receiving and decoding all discovered broadcasts like so:

  skx@deagol ~$ ./build/src/rtl_433 -U -G
  trying device  0:  Realtek, RTL2838UHIDIR, SN: 00000001
  Found Rafael Micro R820T tuner
  Using device 0: Generic RTL2832U OEM
  Exact sample rate is: 250000.000414 Hz
  Sample rate set to 250000.
  Bit detection level set to 0 (Auto).
  Tuner gain set to Auto.
  Reading samples in async mode...
  Tuned to 433920000 Hz.
  ...

Here we've added flags:

  • -G
    • Enable all decoders. So we're not just listening for traffic at 433Mhz, but we're actively trying to decode the payload of the transmissions.
  • -U
    • Timestamps are in UTC

Leaving this running for a few hours, I noted that several nearby cars are transmitting data about their tyre-pressure:

  2018-02-10 11:53:33 :      Schrader       :      TPMS       :      25
  ID:          1B747B0
  Pressure:    2.500 bar
  Temperature: 6 C
  Integrity:   CRC

The second log is from running with "-F json" to cause output to be generated in JSON format:

  {"time" : "2018-02-10 09:51:02",
   "model" : "Toyota",
   "type" : "TPMS",
   "id" : "5e7e0637",
   "code" : "63e6026d",
   "mic" : "CRC"}

In both cases we see "TPMS", and according to wikipedia that is Tyre Pressure Monitoring System. I'm a little shocked to receive this data, unencrypted!

Other events also became visible when I left the scanner running; this one is presumably from some kind of temperature-sensor a neighbour has running:

  2018-02-10 13:19:08 : RF-tech
     Id:              0
     Battery:         LOW
     Button:          0
     Temperature:     0.0 C

Anyway I have a bunch of remote-controlled sockets, branded "NEXA", which look like this:

[Image: Radio-Controlled Sockets]

When I press the remote I can see the transmissions and program my PC to react to them:

  2018-02-11 07:31:20 : Nexa
    House Code:  39920705
    Group:  1
    Channel: 3
    State:   ON
    Unit:    2
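
Since rtl_433 can emit one JSON object per line (the "-F json" flag shown earlier), reacting to a button press only needs a little glue. Here is a minimal sketch in Python, assuming rtl_433 is on the PATH; the JSON field names ("model", "housecode") are guesses based on the output above, so check what your build actually emits first:

  #!/usr/bin/env python3
  # React to Nexa button presses decoded by rtl_433.
  import json
  import subprocess

  HOUSE_CODE = 39920705   # value from the output above - use your own

  # Run rtl_433 emitting one JSON object per line on stdout.
  proc = subprocess.Popen(["rtl_433", "-F", "json"],
                          stdout=subprocess.PIPE, text=True)

  for line in proc.stdout:
      try:
          event = json.loads(line)
      except ValueError:
          continue                  # skip any non-JSON status output
      if event.get("model") == "Nexa" and event.get("housecode") == HOUSE_CODE:
          print("Remote pressed:", event)
          # ...trigger whatever action you like here...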

In conclusion:

  • SDR can be used to easily sniff & decode cheap and commonly available 433Mhz-based devices.
  • "Modern" cars transmit their tyre-pressure, apparently!
  • My neighbours can probably overhear my button presses.

Rondam Ramblings: A Multilogue on Free Will

[Inspired by this comment thread.] The Tortoise is standing next to a railroad track when Achilles, an ancient Greek warrior, happens by.  In the distance, a train whistle sounds. Tortoise: Greetings, friend Achilles.  You have impeccable timing.  I could use your assistance. Achilles: Hello, Mr. T.  Always happy to help.  What seems to be the trouble? Tortoise: Look there. Achilles: Why, it

Planet Debian: Norbert Preining: In memoriam Staszek Wawrykiewicz

We have lost a dear member of our community, Staszek Wawrykiewicz. I got notice that our friend died in an accident the other day. My heart stopped for an instant when I read the news, it cannot be – one of the most friendly, open, heart-warming friends has passed.

Staszek was an active member of the Polish TeX community, and an incredibly valuable TeX Live Team member. His insistence and perseverance have saved TeX Live from many disasters and bugs. Although I had been in contact with Staszek over the TeX Live mailing lists for some years, I met him in person for the first time at my first ever BachoTeX, EuroBachoTeX 2007. His friendliness, his openness to all new things, his inquisitiveness - all took a great place in my heart.

I dearly remember the evenings with Staszek and our Polish friends, in one of the Bachotek huts, around the bonfire, him playing the guitar and singing traditional and not-so-traditional Polish music, inviting everyone to join in and enjoy together. Rarely have technical and social abilities found such a nice combination as in Staszek.

Despite his age he often felt like someone in his twenties, always ready for a joke, always ready to party, always ready to have fun. It is this kind of attitude I would like to carry with me as I get older. Thanks for giving me a great example.

The few times I managed to come to BachoTeX from far-away Japan, Staszek was as welcoming as usual - it is the feeling between close friends that even if you haven't seen each other for a long time, the moment you meet it feels like it was just yesterday. And wherever you went during a BachoTeX conference, his traces and his funniness were always present.

It is a very sad loss for all of those who knew Staszek. If I could, I would board a plane right now and join the final service for a great man, a great friend.

Staszek, we will miss you. BachoTeX will miss you, TeX Live will miss you, I will miss you badly. A good friend has passed away. May you rest in peace.

Photo credit goes to various people attending BachoTeX conferences.

Planet Debian: Junichi Uekawa: Writing chrome extensions.

Writing chrome extensions. I am writing javascript with lots of async/await, and I forget. It's also a bit annoying that many system provided functions don't support promises yet.

Don Marti: FOSDEM videos

Check it out. The videos from the Mozilla room at FOSDEM are up, and here's me, talking about bug futures.

All FOSDEM videos

And, yes, the video link Just Works. Bonus link to some background on that: The Fight For Patent-Unencumbered Media Codecs Is Nearly Won by Robert O'Callahan

Another bonus link: FOSDEM video project, including what those custom boxes do.

,

Cryptogram: Calling Squid "Calamari" Makes It More Appetizing

Research shows that what a food is called affects how we think about it.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Debian: John Goerzen: The Big Green Weasel Song

One does not normally get into one’s car intending to sing about barfing green weasels to the tune of a Beethoven symphony for a quarter of an hour. And yet, somehow, that’s what I wound up doing today.

Little did I know that when Jacob started band last fall, it would inspire him to sing about weasels to the tune of Beethoven’s 9th. That may come as a surprise to his band teacher, too, who didn’t likely expect that having the kids learn the theme to the Ninth Symphony would inspire them to sing about large weasels.

But tonight, as we were driving, I mentioned that I knew the original German words. He asked me to sing. I did.

Then, of course, Jacob and Oliver tried to sing it back in German. This devolved into gibberish and a fit of laughter pretty quick, and ended with something sounding like “schneezel”. So of course, I had to be silly and added, “I have a big green weasel!”

From then on, they were singing about big green weasels. It wasn’t long before they decided they would sing “the chorus” and I was supposed to improvise verse after verse about these weasels. Improvising to the middle of the 9th Symphony isn’t the easiest thing, but I had verses about giving weasels a hug, about weasels smelling like a bug, about a green weasel on a chair, about a weasel barfing on the stair. And soon, Jacob wanted to record the weasel song to make a CD of it. So we did, before we even got to town. Here it is:

[Youtube link]

I love to hear children delight in making music. And I love to hear them enjoying Beethoven. Especially with weasels.

I just wonder what will happen when they learn about Mozart.

Google Adsense: AdSense now supports Tamil


Continuing our commitment to support more languages and encourage content creation on the web, we’re excited to announce the addition of Tamil, a language spoken by millions of Indians, to the family of AdSense supported languages.

AdSense provides an easy way for publishers to monetize the content they create in Tamil, and helps advertisers looking to connect with a Tamil-speaking audience do so with relevant ads.

To start monetizing your Tamil content website with Google AdSense:

  1. Check the AdSense program policies and make sure your website is compliant.
  2. Sign up for an AdSense account.
  3. Add the AdSense code to start displaying relevant ads to your users.

Welcome to AdSense! Sign Up now.

Posted by: The AdSense Internationalization Team

Planet Debian: Erich Schubert: Booking.com Genius Nonsense & Spam

Booking.com just spammed me with an email claiming that I am a “frequent traveller” (which I am not) and would thus get “Genius” status and rebates (which means they are going to hide some non-partner search results from me…) - I hate such marketing spam.

What a big rip-off.

I have rarely ever used Booking.com, and in fact I last used it in 2015.

That is certainly not what you would call a “frequent traveler”.

But Booking.com sells this to its hotel customers as reaching their “most loyal guests”. As I am clearly not a “loyal guest”, I consider this claim of Booking.com's to be borderline fraud. And beware: since this is a partner programme, it comes with a downside for the user - the partner results will be “boosted in our search results”. In other words, your search results will be biased. They will hide other results in order to boost their partners, results that would otherwise come first (for example, because they are closer to your desired location, or even cheaper).

Forget Booking.com and their “Genius program”. It’s a marketing fake.

Going to report this as spam, and kill my account there now.

Pro tip: use incognito mode whenever possible for surfing. For Chromium (or Google Chrome), add the option --incognito to your launcher icon, for Firefox use --private-window. On a smartphone, you may want to switch to Firefox Focus, or the DuckDuckGo browser.

Looks like those hotel booking brokers (who are in fierce competition) are getting quite desperate. We are certainly heading into the second big dot-com bubble, and it is probably going to burst sooner rather than later. Maybe the current stock market fragility will finally trigger this. If some parts of the “old” economy have to cut down their advertisement budgets, this will have a very immediate effect on Google, Facebook, and many others.

Planet Debian: Lars Wirzenius: Qvisqve - an authorisation server, first alpha release

My company, QvarnLabs Ab, has today released the first alpha version of our new product, Qvisqve. Below is the press release. I wrote pretty much all the code, and it's free software (AGPL+).


Helsinki, Finland - 2018-02-09. QvarnLabs Ab is happy to announce the first public release of Qvisqve, an authorisation server and identity provider for web and mobile applications. Qvisqve aims to be secure, lightweight, fast, and easy to manage. "We have big plans for Qvisqve and for helping customers manage cloud identities," says Kaius Häggblom, CEO of QvarnLabs.

In this alpha release, Qvisqve supports the OAuth2 client credentials grant, which is useful for authenticating and authorising automated systems, including IoT devices. Qvisqve can be integrated with any web service that can use OAuth2 and JWT tokens for access control.

Future releases will provide support for end-user authentication by implementing the OpenID Connect protocol, with a variety of authentication methods, including username/password, U2F, TOTP, and TLS client certificates. Multi-factor authentication will also be supported. "We will make Qvisqve flexible for any serious use case," says Lars Wirzenius, software architect at QvarnLabs. "We hope Qvisqve will be useful to the software freedom ecosystem in general," Wirzenius adds.

Qvisqve is developed and supported by QvarnLabs Ab, and works together with the Qvarn software, which is award-winning free and open-source software for managing sensitive personal information. Qvarn is in production use in Finland and Sweden and manages over a million identities. Both Qvisqve and Qvarn are released under the Affero General Public Licence.
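
For readers unfamiliar with the client credentials grant mentioned above: the flow is a single POST to the token endpoint, exchanging a client ID and secret for an access token which is then presented as a bearer token. A minimal sketch in Python with the requests library - the URLs and credentials here are purely illustrative, not Qvisqve's actual endpoints:

  #!/usr/bin/env python3
  # OAuth2 client credentials grant - endpoints and credentials are made up.
  import requests

  resp = requests.post(
      "https://auth.example.com/token",           # hypothetical token endpoint
      data={"grant_type": "client_credentials"},
      auth=("my-client-id", "my-client-secret"),  # HTTP Basic client auth
  )
  resp.raise_for_status()
  token = resp.json()["access_token"]             # a JWT, per the release

  # Present the token as a bearer token to a protected service.
  api = requests.get("https://api.example.com/things",
                     headers={"Authorization": "Bearer " + token})
  print(api.status_code)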

Planet Debian: Olivier Berger: A review of Virtual Labs virtualization solutions for MOOCs

I've just uploaded a new memo, A review of Virtual Labs virtualization solutions for MOOCs, in the form of a page on my blog, before I eventually publish something more elaborate (and validated by peer review).

The subtitle is “From Virtual Machines running locally or on IaaS, to containers on a PaaS, up to hypothetical ports of tools to WebAssembly for serverless execution in the Web browser”.

Excerpt from the intro:

In this memo, we try to draw an overview of some benefits and concerns with existing approaches at using virtualization techniques for running Virtual Labs, as distributions of tools made available for distant learners.

We describe 3 main technical architectures: (1) running Virtual Machine images locally on a virtual machine manager, (2) displaying the remote execution of similar virtual machines on an IaaS cloud, and (3) connecting to the remote execution of minimized containers on a remote PaaS cloud.

We then elaborate on some perspectives for locally running ports of applications to the WebAssembly virtual machine of the modern Web browsers.

I hope this will be of some interest to some of you.

Feel free to comment on this blog post.

Cryptogram: Living in a Smart Home

In "The House that Spied on Me," Kashmir Hill outfits her home to be as "smart" as possible and writes about the results.

Worse Than Failure: Error'd: Whatever Happened to January 2nd?

"Skype for Business is trying to tell me something...but I'm not sure exactly what," writes Jeremy W.

 

"I was looking for a tactile switch. And yes, I absolutely do want an operating switch," writes Michael B.

 

Chris D. wrote, "While booking a hair appointment online, I found that the calendar on the website was a little confused as to how calendars work."

 

"Don't be fooled by the image on the left," wrote Dan D., "If you get caught in the line of fire, you will assuredly get soaked!"

 

Jonathan G. writes, "My local bar's Facebook ad shows that, depending on how the viewer frames it, even an error message can look appealing."

 

"I'll have to check my calendar - I may or may not have plans on the Nanth," wrote Brian.

 


Planet Linux Australia: OpenSTEM: Australia Day in the early 20th century

Australia Day and its commemoration on 26 January, has long been a controversial topic. This year has seen calls once again for the date to be changed. Similar calls have been made for a long time. As early as 1938, Aboriginal civil rights leaders declared a “Day of Mourning” to highlight issues in the Aboriginal […]


Planet Debian: Steve Kemp: Creating an IoT button, the smart way

There are several projects out there designed to create an IoT button:

  • You press a button.
  • Magic happens, and stuff runs on your computer, or is otherwise triggered remotely.

I made my own internet-button, an esp8266-based alarm-button, and recently I've wanted to have a few more dotted around our flat. To recap, the basic way these things work is that you have a device with a button on it.

Once deployed you would press the button, your device wakes up, connects to your WiFi and sends a "message". That message then goes on to trigger some kind of defined action. In my case my button would mute any existing audio-playback, then trigger the launch of an "alarm.mp3" file. In short - if somebody presses the button I would notice.

I wanted a few more doing slightly more complex things in the flat, such as triggering lights and various programs. Unfortunately these buttons are actually relatively heavy-weight, largely because connecting to WiFi demands a reasonable amount of power. Even with deep-sleeping between invocations, driving such a device from battery-power means the life-span is not great. (In my case I cheat: my button is powered by a small phone-charger, so power isn't a concern, but it means my "button" is hobbled.)

Ideally what everybody wants is security, simplicity, and availability. Running from batteries, avoiding the need to program WiFi credentials, and having a decent form-factor make an IoT button a lot simpler to sell - you don't need to do complicated setup, and things are nice and neat.

So I wondered: is such an impossible dream actually possible? It turns out that yes, such a device is trivial.

Instead of building WiFi into a bunch of buttons, you could build the smarts into one device, a receiver, connected to your PC via a USB cable - the buttons themselves are very, very simple, don't use WiFi, don't need to be programmed, and don't draw a lot of current. How? Radio.

There exist pre-packaged and simple radio-based buttons, such as this one:

[Image: button]

You press a button and it broadcasts a simple message on 433Mhz. There exist very cheap and reliable 433Mhz receivers which you can connect to an Arduino, or an ESP8266-based device. Which means you can take a different approach:

  • You build a device based upon an Arduino/ESP8266/similar.
  • It listens for 433Mhz transmissions, decodes the device ID.
  • Once it finds something it recognizes, it can write to STDOUT (more or less)
  • The host system opens /dev/ttyUSB0 and reads the output
    • Which it can then use to trigger actions.

The net result is you can buy a bunch of buttons for €5 each and add them to your system. The transmissions are easy to decode, and each button has a unique ID. You don't need to program them with your WiFi credentials, or set them up at all - except on the host - and because these devices don't do much, they just sleep waiting for a press, make a brief radio-transmission, then go back to sleep, their batteries can last for months.

So that's what I've done. I've written a simple program which decodes the transmissions and posts to an MQ instance - "button-press-a", "button-press-b", etc. - and I can react to each of them uniquely. (In some cases the actions taken depend upon the time of day.)

No code to show here, because it depends upon the precise flavour of button(s) that you buy. But I had fun, because some of the remote-controls around my house use the same frequency - and a lot of the cheap "remote-controlled power-switches" use this frequency too. If you transmit as well as receive you can have a decent amount of fun. :)
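
Still, the host-side half is generic enough to sketch. Here is a minimal example in Python, assuming the receiver device writes one decoded button-ID per line over USB-serial, and that a local MQTT broker is used as the MQ instance - the port, baud rate, IDs and topic names are all assumptions:

  #!/usr/bin/env python3
  # Bridge 433Mhz button presses from a USB-serial receiver to MQTT.
  # The port, baud rate, device-IDs and topics are illustrative assumptions.
  import serial                     # pyserial
  import paho.mqtt.client as mqtt

  BUTTONS = {                       # device-ID -> topic, per your own buttons
      "1234567": "button-press-a",
      "7654321": "button-press-b",
  }

  client = mqtt.Client()
  client.connect("localhost", 1883)
  client.loop_start()               # handle network traffic in the background

  port = serial.Serial("/dev/ttyUSB0", 9600)
  while True:
      device_id = port.readline().decode("ascii", "ignore").strip()
      topic = BUTTONS.get(device_id)
      if topic:
          client.publish(topic, device_id)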

Of course radio is "broadcast", so somebody nearby could tell I've pressed a button, but as far as security goes there are no obvious IoT-snafus that I think I care about.

In my next post I might even talk about something more interesting - SMS-based things. In Europe (data-)roaming fees have recently been abolished, and anti-monopoly regulations make it "simple" to get your own SIMs made. This means you can buy a bunch of SIMs, stick them in your devices, and get cheap data-transfer Europe-wide. There are obvious commercial aspects available if you go down this route, if you can accept the caveat that you need to add a SIM-card to each transmitter and each receiver. If you can, a lot of things become possible, especially when coupled with GPS. Not only do you gain the ability to send messages/events/data, but you can see where they came from, physically/geographically, and that is something that I think has a lot of interesting use-cases.

Krebs on Security: U.S. Arrests 13, Charges 36 in ‘Infraud’ Cybercrime Forum Bust

The U.S. Justice Department announced charges on Wednesday against three dozen individuals thought to be key members of “Infraud,” a long-running cybercrime forum that federal prosecutors say cost consumers more than a half billion dollars. In conjunction with the forum takedown, 13 alleged Infraud members from the United States and six other countries were arrested.

A screenshot of the Infraud forum, circa Oct. 2014. Like most other crime forums, it had special sections dedicated to vendors of virtually every kind of cybercriminal goods or services imaginable.

Started in October 2010, Infraud was short for “In Fraud We Trust,” and collectively the forum referred to itself as the “Ministry of Fraudulently [sic] Affairs.” As a mostly English-language fraud forum, Infraud attracted nearly 11,000 members from around the globe who sold, traded and bought everything from stolen identities and credit card accounts to ATM skimmers, botnet hosting and malicious software.

“Today’s indictment and arrests mark one of the largest cyberfraud enterprise prosecutions ever undertaken by the Department of Justice,” said John P. Cronan, acting assistant attorney general of the Justice Department’s criminal division. “As alleged in the indictment, Infraud operated like a business to facilitate cyberfraud on a global scale.”

The complaint released by the DOJ lists 36 Infraud members — some only by their hacker nicknames, others by their alleged real names and handles, and still others just as “John Does.” Having been a fairly regular lurker on Infraud over the past seven years who has sought to independently identify many of these individuals, I can say that some of these name-and-nick associations sound accurate, but several do not.

The government says the founder and top member of Infraud was Svyatoslav Bondarenko, a hacker from Ukraine who used the nicknames “Rector” and “Helkern.” The first nickname is well supported by copies of the forum obtained by this author several years back; indeed, Rector’s profile listed him as an administrator, and Rector can be seen on countless Infraud discussion threads vouching for sellers who had paid the monthly fee to advertise their services in “sticky” threads on the forum.

However, I’m not sure the Helkern association with Bondarenko is accurate. In December 2014, just days after breaking the story about the theft of some 40 million credit and debit cards from retail giant Target, KrebsOnSecurity posted a lengthy investigation into the identity of “Rescator” — the hacker whose cybercrime shop was identified as the primary vendor of cards stolen from Target.

That story showed that Rescator changed his nickname from Helkern after Helkern’s previous cybercrime forum (Darklife) got massively hacked, and it presented clues indicating that Rescator/Helkern was a different Ukrainian man named Andrey Hodirevski. For more on that connection, see Who’s Selling Cards from Target.

Also, Rescator was a separate vendor on Infraud, and there are no indications that I could find suggesting that Rector and Rescator were the same people. Here is Rescator’s most recent sales thread for his credit card shop on Infraud — dated almost a year after the Target breach. Notice the last comment on that thread alleges that Rescator had recently been arrested and that his shop was being run by law enforcement officials: 

Another top administrator of Infraud used the nickname “Stells.” According to the Justice Department, Stells’ real name is Sergey Medvedev. The government doesn’t describe his exact role, but it appears to have been administering the forum’s escrow service (see screenshot below).

Most large cybercrime forums have an escrow service, which holds the buyer’s virtual currency until forum administrators can confirm the seller has consummated the transaction acceptably to both parties. The escrow feature is designed to cut down on members ripping one another off — but it also can add considerably to the final price of the item(s) for sale.

In April 2016, Medvedev would take over as the “admin and owner” of Infraud, after he posted a note online saying that Bondarenko had gone missing, the Justice Department said.

One defendant in the case, a well-known vendor of stolen credit and debit cards who goes by the nickname “Zo0mer,” is listed as a John Doe. But according to a New York Times story from 2006, Zo0mer’s real name is Sergey Kozerev, and he hails from St. Petersburg, Russia.

The indictments also list two other major vendors of stolen credit and debit cards: hackers who went by the nicknames “Unicc” and “TonyMontana” (the latter being a reference to the fictional gangster character played by Al Pacino in the 1983 movie Scarface). Both hackers have long operated, and to this day operate, their own carding shops:

Unicc shop, which sells stolen credit card data as well as Social Security numbers and other consumer information that can be used for identity theft.

The government says Unicc’s real name is Andrey Sergeevich Novak. TonyMontana is listed in the complaint as John Doe #1.

TonyMontana’s carding shop.

Perhaps the most successful vendor of skimming devices made to be affixed to ATMs and fuel pumps was a hacker known on Infraud and other crime forums as “Rafael101.” Several of my early stories about new skimming innovations came from discussions with Rafael in which this author posed as an interested buyer and asked for videos, pictures and technical descriptions of his skimming devices.

A confidential source who asked not to be named told me a few years back that Rafael had used the same password for his skimming sales accounts on multiple competing cybercrime forums. When one of those forums got hacked, it enabled this source to read Rafael’s emails (Rafael evidently used the same password for his email account as well).

The source said the emails showed Rafael was ordering the parts for his skimmers in bulk from Chinese e-commerce giant Alibaba, and that he charged a significant markup on the final product. The source said Rafael had the packages all shipped to a Jose Gamboa in Norwalk, Calif — a suburb of Los Angeles. Sure enough, the indictment unsealed this week says Rafael’s real name is Jose Gamboa and that he is from Los Angeles.

A private message from the skimmer vendor Rafael101 on a competing cybercrime forum (carder.su) in 2012.

The Justice Department says the arrests in this case took place in Australia, France, Italy, Kosovo, Serbia, the United Kingdom and the United States. The defendants face a variety of criminal charges, including identity theft, bank fraud, wire fraud and money laundering. A copy of the indictment is available here.

Cryptogram: Water Utility Infected by Cryptocurrency Mining Software

A water utility in Europe has been infected by cryptocurrency mining software. This is a relatively new attack: hackers compromise computers and force them to mine cryptocurrency for them. This is the first time I've seen it infect SCADA systems, though.

It seems that this mining software is benign, and doesn't affect the performance of the hacked computer. (A smart virus doesn't kill its host.) But that's not going to always be the case.

Worse Than Failure: CodeSOD: I Take Exception

We've all seen code that ignores errors. We've all seen code that simply rethrows an exception. We've all seen code that wraps one exception for another. The submitter, Mr. O, took exception to this exceptionally exceptional exception handling code.

I was particularly amused by the OutOfMemoryException handler that allocates another exception object; if that fails, another layer of exception trapping catches that and attempts to allocate yet another exception object, and if that fails, it doesn't even try. So that makes this an exceptionally unexceptional exception handler?! (ouch, my head hurts)

It contains a modest amount of fairly straightforward code to read config files and write assorted XML documents. And it handles exceptions in all of the above ways.

You might note that the exception handling code was unformatted, unaligned and substantially larger than the code it is attempting to protect. To help you out, I've stripped out the fairly straightforward code being protected, and formatted the exception handling code to make it easier to see this exceptional piece of code (you may need to go full-screen to get the full impact).

After all, it's not like exceptions can contain explanatory text, or stack context information...

namespace HotfolderMerger {
  public class Merger : IDisposable {
    public Merger() {
      try {
          object section = ConfigurationManager.GetSection("HFMSettings/DataSettings");
          if (section == null) throw new MergerSetupException();
          _settings = (DataSettings)section;
      } catch (MergerSetupException) {
        throw;
      } catch (ConfigurationErrorsException ex){
        throw new MergerSetupException("Error in configuration", ex);
      } catch (Exception ex) {
        throw new MergerSetupException("Unexpected error while loading configuration",ex);
      }
    }

    // A whole bunch of regex about as complex as this one...
    private readonly Regex _fileNameRegex = new Regex(@"^(?<System>[A-Za-z0-9]{1,10})_(?<DesignName>[A-Za-z0-9]{1,})_(?<DocumentID>\d{1,})_(?<FileTimeUTC>\d{1,})(_(?<BAMID>\d+))?\.(?<extension>\w{0,3})$");

    public void MergeFiles() {
      try {
          foreach (FileElement filElement in _settings.Filelist) {
            // Lots of declarations here...
            foreach (FileInfo fi in hotfolder.GetFiles()) {
              try {
                  // 35 lines of innocuous code..
              } catch (ArgumentException ex) {
                throw new BasisException(ex, int.Parse(ErrorCodes.MergePreRunArgumentException),     ErrorMessages.MergePreRunArgumentException);
              } catch (ConfigurationException ex) {
                throw new BasisException(ex, int.Parse(ErrorCodes.MergePreRunConfigurationException),ErrorMessages.MergePreRunConfigurationException);
              } catch (Exception ex) {
                throw new UnexpectedMergerException("Unexpected exception while setting up for merge!", ex);
              }
            
              try {
                  // 23 lines of StreamReader code to load some XML from a file...
              } catch (OutOfMemoryException ex) {
                // OP: so if we're out of memory, how is this new exception going to be allocated? 
                //     Maybe in the wrapping "try/catch Exception" - which allocates a new UnexpectedMergerException object??? Oh, wait...
                throw new BasisException(  ex,int.Parse(ErrorCodes.MergeRunOutOfMemoryException),   ErrorMessages.MergeRunOutOfMemoryException);
              } catch (ConfigurationException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunConfigurationException),ErrorMessages.MergeRunConfigurationException);
              } catch (FormatException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunFormatException),       ErrorMessages.MergeRunFormatException);
              } catch (ArgumentException ex) { 
                throw new BasisException(    ex, int.Parse(ErrorCodes.MergeRunArgumentException),   ErrorMessages.MergeRunArgumentException);
              } catch (SecurityException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunSecurityException),     ErrorMessages.MergeRunSecurityException);
              } catch (IOException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunIOException),           ErrorMessages.MergeRunIOException);
              } catch (NotSupportedException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunNotSupportedException), ErrorMessages.MergeRunNotSupportedException);
              } catch (Exception ex) {
                throw new UnexpectedMergerException("Unexpected exception while merging!", ex);
              }
            }
            // ...
          }
      } catch (UnexpectedMergerException) {
        throw;
      } catch (BasisException ex) {
        BasisLog.WriteError(ex);
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected error while attempting to parse settings prior to merge", ex);
      }
    }

    private static void prepareNewMergeFile(ref XmlTextWriter xtw, string filename, int numfiles) {
      if (string.IsNullOrEmpty(filename))
         throw new BasisException(    int.Parse(ErrorCodes.MergeSetupNullReferenceException),       ErrorMessages.MergeSetupNullReferenceException, "filename parameter was null or empty");
      try {
          // Use XmlTextWriter to concatenate ~30 lines of canned XML...
      } catch (InvalidOperationException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupInvalidOperationException),     ErrorMessages.MergeSetupInvalidOperationException);
      } catch (ArgumentException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupArgumentException),             ErrorMessages.MergeSetupArgumentException);
      } catch (IOException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupIOException),                   ErrorMessages.MergeSetupIOException);
      } catch (UnauthorizedAccessException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupUnauthorizedAccessException),   ErrorMessages.MergeSetupUnauthorizedAccessException);
      } catch (SecurityException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupSecurityException),             ErrorMessages.MergeSetupSecurityException);
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected exception while setting up for merge!", ex);
      }
    }

    private void closeMergeFile(ref XmlTextWriter xtw, ref List<FileInfo> filesComplete, string filename, double i) {
      if (xtw == null)
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeCleanupNullReferenceException, "xtw ref parameter was null");
      if (filesComplete == null)
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeSetupNullReferenceException,   "filesComplete ref parameter was null");
      if (string.IsNullOrEmpty(filename))
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeSetupNullReferenceException,   "filename parameter was null or empty");

      try {
          // ~ 30 lines of XmlTextWriter, StreamWriter and File IO...
      } catch (ArgumentException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupArgumentException),           ErrorMessages.MergeCleanupArgumentException);
      } catch (InvalidOperationException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupInvalidOperationException),   ErrorMessages.MergeCleanupInvalidOperationException);
      } catch (UnauthorizedAccessException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupUnauthorizedAccessException), ErrorMessages.MergeCleanupUnauthorizedAccessException);
      } catch (IOException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupIOException),                 ErrorMessages.MergeCleanupIOException);
      } catch (NullReferenceException ex) {
        throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeCleanupNullReferenceException, "unknown exception details");
      } catch (NotSupportedException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupNotSupportedException),       ErrorMessages.MergeCleanupNotSupportedException);
      } catch (MergerException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupMergerException),             ErrorMessages.MergeCleanupMergerException);
      } catch (SecurityException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupSecurityException),           ErrorMessages.MergeCleanupSecurityException);
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected exception while merging!", ex);
      }
    }
  }
}

Planet Debian: Enrico Zini: Gnome without chrome-gnome-shell

New laptop, has a touchscreen, can be folded into a tablet, I heard gnome-shell would be a good choice of desktop environment, and I managed to tweak it enough that I can reuse existing habits.

I have a big problem, however, with how it encourages one to download random extensions off the internet and run them as part of the whole desktop environment. I have an even bigger problem with gnome-core having a hard dependency on chrome-gnome-shell, a plugin which cannot be disabled without root editing files in /etc, and which exposes parts of my desktop environment to websites.

Visit this site and it will know which extensions you have installed, and it will be able to install more. I do not trust that, I do not need that, I do not want that. I am horrified by the idea of that.

I made a workaround.

How can one do the same for firefox?

Description

chrome-gnome-shell is a hard dependency of gnome-core, and it installs a browser plugin that one may not want, and mandates its use by system-wide chrome policies.

I consider having chrome-gnome-shell an unneeded increase of the attack surface of my system, in exchange for the dubious privilege of being able to download and execute, as my main user, random unreviewed code.

This package satisfies the chrome-gnome-shell dependency, but installs nothing.

Note that after installing this package you need to purge chrome-gnome-shell if it was previously installed, to have it remove its Chromium policy files in /etc/chromium.

Instructions

apt install equivs
equivs-build contain-gnome-shell
sudo dpkg -i contain-gnome-shell_1.0_all.deb
sudo dpkg --purge chrome-gnome-shell
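
Here contain-gnome-shell is an equivs control file. A minimal sketch of what such a file might contain (the field values are illustrative; the file in the actual workaround may differ):

Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: contain-gnome-shell
Version: 1.0
Provides: chrome-gnome-shell
Description: empty package satisfying the chrome-gnome-shell dependency
 Installs nothing; it exists only so that gnome-core's dependency on
 chrome-gnome-shell is met without installing the real package.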

Cryptogram: Cabinet of Secret Documents from Australia

This story of leaked Australian government secrets is unlike any other I've heard:

It begins at a second-hand shop in Canberra, where ex-government furniture is sold off cheaply.

The deals can be even cheaper when the items in question are two heavy filing cabinets to which no-one can find the keys.

They were purchased for small change and sat unopened for some months until the locks were attacked with a drill.

Inside was the trove of documents now known as The Cabinet Files.

The thousands of pages reveal the inner workings of five separate governments and span nearly a decade.

Nearly all the files are classified, some as "top secret" or "AUSTEO", which means they are to be seen by Australian eyes only.

Yes, that really happened. The person who bought and opened the file cabinets contacted the Australian Broadcasting Corp, who is now publishing a bunch of it.

There's lots of interesting (and embarrassing) stuff in the documents, although most of it is local politics. I am more interested in the government's reaction to the incident: they're pushing for a law making it illegal for the press to publish government secrets it received through unofficial channels.

"The one thing I would point out about the legislation that does concern me particularly is that classified information is an element of the offence," he said.

"That is to say, if you've got a filing cabinet that is full of classified information ... that means all the Crown has to prove if they're prosecuting you is that it is classified ­ nothing else.

"They don't have to prove that you knew it was classified, so knowledge is beside the point."

[...]

Many groups have raised concerns, including media organisations who say they unfairly target journalists trying to do their job.

But really anyone could be prosecuted just for possessing classified information, regardless of whether they know about it.

That might include, for instance, if you stumbled across a folder of secret files in a regular skip bin while walking home and handed it over to a journalist.

This illustrates a fundamental misunderstanding of the threat. The Australian Broadcasting Corp gets their funding from the government, and was very restrained in what they published. They waited months before publishing as they coordinated with the Australian government. They allowed the government to secure the files, and then returned them. From the government's perspective, they were the best possible media outlet to receive this information. If the government makes it illegal for the Australian press to publish this sort of material, the next time it will be sent to the BBC, the Guardian, the New York Times, or Wikileaks. And since people no longer read their news from newspapers sold in stores but on the Internet, the result will be just as many people reading the stories with far fewer redactions.

The proposed law is older than this leak, but the leak is giving it new life. The Australian opposition party is being cagey on whether they will support the law. They don't want to appear weak on national security, so I'm not optimistic.

EDITED TO ADD (2/8): The Australian government backed down on that new security law.

Planet Debian: Russell Coker: Thinkpad X1 Carbon

I just bought a Thinkpad X1 Carbon to replace my Thinkpad X301 [1]. It cost me $289 with free shipping from an eBay merchant, which is a great deal - a new battery for the Thinkpad X301 alone would have cost about $100.

It seems that laptops aren't depreciating in value as much as they used to. Grays Online used to reliably have refurbished Thinkpads with manufacturer's warranty selling for about $300. Now they only have IdeaPads (a cheaper low-end line from Lenovo) at good prices; admittedly $100 to $200 for an IdeaPad is a very nice deal if you want a cheap laptop and don't need something too powerful. But if you want something for doing software development on the go, then you are looking at well in excess of $400. So I ended up buying a second-hand system from an eBay merchant.

CPU

I was quite excited to read in the specs that it has an i7 CPU, but now that I have it I've discovered that the i7-3667U CPU scores 3990 according to passmark (cpubenchmark.net) [2]. While that is much better than the U9400 in the Thinkpad X301, which scored 968, it's only slightly better than the i5-2520M in my Thinkpad T420, which scored 3582 [3]. I bought the Thinkpad T420 in August 2013 [4]; I had hoped that Moore's Law would result in me getting a system at least twice as fast as my last one, but buying second-hand meant I got a slower CPU. Also, the small form factor of the X series limits heat dissipation and therefore limits CPU performance.

Keyboard

Thinkpads have traditionally had the best keyboards, but they are losing that advantage. This system has a keyboard that feels like an Apple laptop keyboard, not like a traditional Thinkpad. It still has the Trackpoint, which is a major feature if you like it (I do). The biggest downside is that they rearranged the keys. The PgUp/PgDn keys are now by the arrow keys; this could end up being useful if you like the SHIFT-PgUp/SHIFT-PgDn combinations used in the Linux VC and some Xterms like Konsole. But I like to keep my fingers by the home keys, and I can't do that unless I use the little finger of my right hand for PgUp/PgDn. They also moved the Home, End, and Delete keys, which is really annoying. It's not just that the positions are different from previous Thinkpads (including X series models like the X301); they are different from desktop keyboards. So every time I move between my Thinkpad and a desktop system I need to change key usage.

Did Lenovo not consider that touch typists might use their products?

The keyboard moved the PrtSc key, and lacks ScrLk and Pause keys, but I hardly ever use PrtSc and never use the other two. The lack of those keys would only be of interest to people who have mapped them to useful functions or who actually use PrtSc. It is, however, impractical to have a key as annoying to accidentally press as PrtSc between the Ctrl and Alt keys.

One significant benefit of the keyboard in this Thinkpad is that it has a backlight instead of having a light on the top of the screen that shines on the keyboard. It might work better than the light above the keyboard and looks much cooler! As an aside I discovered that my Thinkpad X301 has a light above the keyboard, but the key combination to activate it sometimes needs to be pressed several times.

Display

X1 Carbon 1600*900
T420 1600*900
T61 1680*1050
X301 1440*900

Above are the screen resolutions for all my Thinkpads of the last 8 years. The X301 is an anomaly, as I got it from a rubbish pile and it was significantly older than Thinkpads usually are when I get them. It’s a bit disappointing that laptop screen resolution isn’t increasing much over the years. I know some people have laptops with resolutions as high as 2560*1600 (as high as a high-end phone), but it seems that most laptops are below phone resolution.

Kogan is currently selling the Agora 8+ phone new for $239; including postage, that would still be cheaper than the $289 I paid for this Thinkpad. There’s no reason why new phones should have lower prices and higher screen resolutions than second-hand laptops. Thinkpad is designed to be a high-end brand; other brands like IdeaPad are for low-end devices. Really, 1600*900 is a low-end resolution by today’s standards; 1920*1080 should be the minimum for high-end systems. Now I could have bought one of the X series models with a higher-resolution screen, but most of them have the lower resolution, and hunting for a second-hand system with the rare high-resolution screen would mean missing the best prices.

I wonder if there’s an Android app to make a phone run as a second monitor for a Linux laptop, that way you could use a high resolution phone screen to display data from a laptop.

This display is unreasonably bright by default, so bright it hurt my eyes. The xbacklight program doesn’t support my display, but the command “xrandr --output LVDS-1 --brightness 0.4” sets the brightness to 40%. The Fn key combination for setting brightness doesn’t work. Below a brightness of about 70% the screen looks grainy.
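
For what it's worth, a tiny wrapper (a sketch in Python, purely illustrative; it just shells out to the same xrandr invocation as above) makes it easy to clamp and set the value:

import subprocess

def set_brightness(level, output='LVDS-1'):
    """Set software brightness via xrandr, clamping level to the 0.0-1.0 range."""
    level = max(0.0, min(1.0, level))
    subprocess.run(['xrandr', '--output', output, '--brightness', str(level)],
                   check=True)

set_brightness(0.4)  # 40%, matching the command above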

General

This Thinkpad has a 180G SSD that supports contiguous reads at 500MB/s. It has 8G of RAM which is the minimum for a usable desktop system nowadays and while not really fast the CPU is fast enough. Generally this is a nice system.

It doesn’t have an Ethernet port, which is really annoying; now I have to pack a USB Ethernet device whenever I go anywhere. It also has mini-DisplayPort as the only video connector; as that is almost never available at a conference venue (VGA and HDMI are the common ones), I’ll have to pack an adaptor when I give a lecture. It also only has 2 USB ports, while the X301 has 3. I know that not having HDMI, VGA, and Ethernet ports allows designing a thinner laptop, but I would be happier with a slightly thicker laptop that has more connectivity options. The Thinkpad X301 has about the same mass, is only slightly thicker, and has all those ports. I blame Apple for starting this trend of laptops lacking IO options.

This might be the last laptop I own that doesn’t have USB-C. Currently not having USB-C is not a big deal, but devices other than phones supporting it will probably be released soon and fast phone charging from a laptop would be a good feature to have.

This laptop has no removable battery. I don’t know if it will be practical to replace the battery when the old one wears out, but given that a replacement battery might cost more than the laptop is worth, this isn’t a serious issue. One significant issue is that there’s no option to buy a second battery for when I need the laptop to run without mains power for a significant amount of time. When I was travelling between Australia and Europe often, I used to pack a second battery so I could spend twice as much time coding on the plane. I know it’s an engineering trade-off, but they did it with the X301 and could have done it again with this model.

Conclusion

This isn’t a great laptop. The X1 Carbon is described as a flagship for the Thinkpad brand and the display is letting down the image of the brand. The CPU is a little disappointing, but it’s a trade-off that I can deal with.

The keyboard is really annoying and will continue to annoy me for as long as I own this laptop. The X301 managed to fit a better keyboard layout into the same space; there’s no reason they couldn’t have done the same with the X1 Carbon.

But it’s great value for money and works well.

Geek FeminismBringing the blog to a close

We’re bringing the Geek Feminism blog to a close.

First, some logistics; then some reasons and reminiscences; then, some thanks.

Logistics

The site will still be up for at least several years, barring Internet catastrophe. We won’t post to it anymore and comments will be closed, but we intend to keep the archives up and available at their current URLs, or to have durable redirects from the current URLs to the archive.

This doesn’t affect the Geek Feminism wiki, which will keep going.

There’s a Twitter feed and a Facebook page; after our last blog post, we won’t post to those again.

We don’t have a definite date yet for when we’ll post for the last time. It’ll almost certainly be this year.

I might update this post, or add things in the comments. And this isn’t the absolute last post on the blog; it’d be nice to re-run a few of our best-of posts, for instance, like the ones Tim Chevalier linked to here. We’re figuring that out.

Reasons and reminiscences

Alex Bayley and a bunch of their peers — myself included — started posting on this blog in 2009. We coalesced around feminist issues in scifi/fantasy fandom, open culture projects like Wikipedia, gaming, the sciences, the tech industry and open source software development, Internet culture, and so on. Alex gave a talk at Open Source Bridge 2014 about our history to that point, and our meta tag has some further background on what we were up to over those years.

You’ve probably seen a number of these kinds of volunteer group efforts end. People’s lives shift, our priorities change as we adapt to new challenges, and so on. And we’ve seen the birth or growth of other independent media; there are quite a lot of places to go, for a feminist take on the issues I mentioned. For example:

 
We did some interesting, useful, and cool stuff for several years; I try to keep myself from dwelling too much in the sad half of “bittersweet” by thinking of the many communities that have already been carrying on without waiting for us to pass any torches.

Thanks

Thanks of course to all our contributors, past and present, and those who provided the theme, logo, and technical support and built or provided infrastructure, social and digital and financial, for this blog. Thanks to our readers and commenters. Thanks to everyone who did neat stuff for us to write about. And thanks to anyone who used things we said to go make the world happier.

More later; thanks.

Planet DebianDirk Eddelbuettel: RcppEigen 0.3.3.4.0

A new minor release 0.3.3.4.0 of RcppEigen hit CRAN earlier today, and just went to Debian as well. It brings Eigen 3.3.4 to R.

Yixuan once again did the leg-work of bringing the most recent Eigen release in, along with the small set of patches we have carried forward for a few years.

One additional and recent change was accommodating a CRAN Policy change that no longer allows packages to use pragmas to suppress gcc or clang diagnostic messages. A word of caution: this may make your compilation of packages using RcppEigen very noisy, so consider adding -Wno-ignored-attributes to the compiler flags in your ~/.R/Makevars.
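
For example, a ~/.R/Makevars fragment along these lines should quieten things down (a minimal sketch; depending on your setup the relevant variable may be CXX11FLAGS instead):

# ~/.R/Makevars (user-level compiler flags picked up when R builds packages)
CXXFLAGS += -Wno-ignored-attributes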

The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.3.4.0 (2018-02-05)

  • Updated to version 3.3.4 of Eigen (Yixuan in #49)

  • Also carried our local patches over onto the new upstream release (Yixuan, addressing #48)

  • As before, condition long long use on C++11.

  • Pragmas for g++ & clang to suppress diagnostic messages are disabled per CRAN Policy; use -Wno-ignored-attributes to quieten.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianIustin Pop: Releasing Corydalis

It's been a long time baking…

Back when I started doing photography more seriously, I settled for Lightroom as a RAW processor and image catalog solution. It works, but it's not perfect.

The main issue that I have had over time with Lightroom is that while it does a good job on the technical (hard) aspects of RAW processing, editing, etc., on the catalog side it leaves things to be desired. Thus, over time I started using more and more of Jeffrey Friedl's plugins for Lightroom, which make it better, but it is still hard to get a grasp of your entire collection, beyond just the RAW sources. And even for the RAW files, Lightroom's UI is sluggish enough that I try to avoid it as much as possible, outside of image development.

On top of that, ten years ago most of my image viewing (and my family's) was done on the desktop, using things such as geeqie reading the pictures from the NAS. In the meantime, things have changed, and now a lot of image viewing is done on either desktop or mobile clients, but without guaranteed file-system access to the actual images. Thus, I wanted something that would let me view all my pictures in a somewhat seamless mode, based on various global searches - e.g. "show all my pictures that contain person $x and were taken in location $foo". Also, I wanted something that could view all my pictures, RAW or JPEG, without the user having to care about this aspect (some, but not all, viewing-oriented programs do this).

So, for the last ~5 years or so, I've been slowly working on a very basic program to do what I wanted. The first git commit is from August 19th, 2013, titled "Initial commit - After trying to bump deps to Yesod 1.2.", so that was not the start. In an old backup, I found files from April 27th of that year, so that's probably around when I started writing it.

At one point I even settled on a name, with commit 3c00458, which I thought was the last major hurdle before releasing this. Ha! Three years later, I was finally able to bring it to a shape where there is probably one other person somewhere who could actually use it and find it of any help. It even has documentation now!

So, without further ado, … wait, I already said everything! Corydalis v0.2.0 (a rather arbitrarily chosen version number) is up on GitHub.

Looking forward to bug reports/suggestions/feedback; it's been a really long time since I last open-sourced anything not entirely trivial.

P.S.: Yes, I know, there are no (meaningful) unit tests; yes, I feel ashamed, this being 2018.

P.P.S.: Yes, of course it's Haskell! with a sprinkle of JavaScript, sadly. And I'm not familiar with best JavaScript practices, so the way I bundled things with the application is probably not good.

P.P.P.S.: If you actually try this, don't try it against your entire picture tree—for large trees (tens of thousands of pictures), it will take many hours/maybe days to scan/extract all; it is designed more for incremental updates, so the initial scan is what it is.

P⁴.S.: It's not slow because of Haskell!!

Sociological ImagesBeyond Racial Binaries: How ‘White’ Latinos Can Experience Racism

Recent reports indicated that FEMA was cutting—and then not cutting—hurricane relief aid to Puerto Rico. When Donald Trump recently slandered Puerto Ricans as lazy and too dependent on aid after Hurricane Maria, Fox News host Tucker Carlson stated that Trump’s criticism could not be racist because “Puerto Rico is 75 percent white, according to the U.S. Census.”

Photo Credit: Coast Guard News, Flickr CC

This statement presents racism as a false choice between nonwhite people who experience racism and white people who don’t. It ignores the fact that someone can be classed as white by one organization but treated as non-white by another, due to the way ‘race’ is socially constructed across time, regions and social contexts.

Whiteness for Puerto Ricans is a contradiction. Racial labels that developed in Puerto Rico were much more fluid than on the U.S. mainland, with at least twenty categories. But the island came under U.S. rule at the height of American nativism and biological racism, which relied on a dichotomy between a privileged white race and a stigmatized black one that was designed to protect the privileges of slavery and segregation. So the U.S. portrayed the islanders with racist caricatures in cartoons like this one:

Clara Rodriguez has shown how Puerto Ricans who migrated to the mainland had to conform to this white-black duality that bore no relation to their self-identifications. The Census only gave two options, white or non-white, so respondents who would have identified themselves as “indio, moreno, mulato, prieto, jabao, and the most common term, trigueño (literally, ‘wheat-colored’)” chose white by default, simply to avoid the disadvantage and stigma of being seen as black bodied.

Choosing the white option did not protect Puerto Ricans from discrimination. Those who came to the mainland to work in agriculture found themselves cast as ‘alien labor’ despite their US citizenship. When the federal government gave loans to white home buyers after 1945, Puerto Ricans were usually excluded on zonal grounds, being subjected to ‘redlining’ alongside African Americans. Redlining was also found to be operating on Puerto Rico itself in the insurance market as late as 1998, suggesting it may have even contributed to the destitution faced by islanders after natural disasters.

The racist treatment of Puerto Ricans shows how it is possible to “be white” without white privilege. There have been historical advantages in being “not black” and “not Mexican”, but they have not included the freedom to seek employment, housing and insurance without fear of exclusion or disadvantage. When a hurricane strikes, Puerto Rico finds itself closer to New Orleans than to Florida.

An earlier version of this post appeared at History News Network

Jonathan Harrison, PhD, is an adjunct Professor in Sociology at Florida Gulf Coast University, Florida SouthWestern State College and Hodges University whose PhD was in the field of racism and antisemitism.

(View original at https://thesocietypages.org/socimages)

,

Planet DebianBen Hutchings: Debian LTS work, January 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from December. I worked all these hours.

I put together and tested a more-or-less complete backport of KPTI/KAISER to Linux 3.2, based on work by Hugh Dickins and several others. This mitigates the Meltdown vulnerability on amd64 (only). I prepared and uploaded an update for wheezy with this and several other security fixes, and issued DLA-1232-1. I also released another update on the Linux 3.2 longterm stable branch (3.2.98), and started work on the next (3.2.99).

Planet DebianRenata D'Avila: Exporting EventCalendar data to ical

I have been working on the EventCalendar code to allow exporting the events' data to icalendar.

A new cal_action

I started by creating a new cal_action (ical) that calls the function I created, download_events_ical(). I added it to the bottom menu of the calendar, along with the other views (Weekly, Daily, Simple) that also use cal_action to return the calendar.

Wiki page showing the calendar, with the 'ical' link at the bottom

Afterwards, discussing it with my mentors, we figured out that this could be improved by always offering the .ics file as an attachment to the page.

Either way, whether we decide to leave just the link at the bottom, offer it as an attachment, or both, the file must be properly constructed with the data gathered by EventCalendar from the wiki pages with event data. This is what I have been working on this past week.

Setting the headers

The download (or the opening of the icalendar file directly in a calendar application) should happen automatically once the proper headers are set, so I had to specify the Content-Type that will be returned for the 'download ical' request.

A bug report page with code actually showed me how to do that in MoinMoin: EditorContentTypeHttpHeader

Now the browser knows that this file should be opened by software that handles calendars (this immediately prompts the open/save dialog in browsers).
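
To illustrate just the headers, independent of MoinMoin's internals, here is a minimal self-contained sketch using only Python's standard library (the port, filename and calendar body are made up):

from http.server import BaseHTTPRequestHandler, HTTPServer

class ICalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"BEGIN:VCALENDAR\r\nVERSION:2.0\r\nEND:VCALENDAR\r\n"
        self.send_response(200)
        # text/calendar tells the browser to hand the file to a calendar app;
        # Content-Disposition suggests downloading it under a sensible name.
        self.send_header('Content-Type', 'text/calendar; charset=utf-8')
        self.send_header('Content-Disposition', 'attachment; filename="events.ics"')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('localhost', 8080), ICalHandler).serve_forever()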

Debugging

To parse the data, I investigated the EventCalendar variables that had the event data stored.

I knew there was a function loadEvents() already written in the code that could unpack the info about the events into dictionary variables (events, cal_events, labels). I needed to see how this data was constructed to figure out how to use it for the icalendar elements. With a print command, I ended up with this:

Events: {
    u'e_Workshops_2': {
        'date_len': 1,
        'startdate': u'20180124',
        'time_len': 0,
        'enddate': u'20180124',
        'description': u'workshop test',
        'recur_freq': 0,
        'title': u'workshop',
        'clone': 0,
        'recur_type': '',
        'label': '',
        'recur_until': '',
        'bgcolor': u'#000000',
        'hid': 'head-f5e173288ca20abb0a37e74aa035e6bbedc3e6ff-2',
        'starttime': '',
        'endtime': '',
        'id': u'e_Workshops_2',
        'refer': u'Workshops'
    },
}

This shows that each 'events' key is a unicode object (e_Workshops_1, e_Conferences_1...) that maps to another dictionary.

In Python, dictionaries can be iterated over using the values() method. So that is what I did: for each value, I called a make_todo function that would create a VTODO. But scratch that. Even though I indeed created a 'to-do', that isn't what I should have created - I realized that as I wrote this post. We are talking about events, so I should have created a VEVENT component, as the icalendar format requires. I'm going to begin rectifying this as soon as I finish this post.
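
A rough sketch of what the VEVENT construction could look like with the icalendar Python package, using the sample data printed above (the prodid and the exact field mapping are my own assumptions, not the macro's actual code):

from datetime import datetime
from icalendar import Calendar, Event

events = {
    u'e_Workshops_2': {
        'id': u'e_Workshops_2',
        'title': u'workshop',
        'description': u'workshop test',
        'startdate': u'20180124',
        'enddate': u'20180124',
    },
}

cal = Calendar()
cal.add('prodid', '-//EventCalendar//moinmo.in//')  # placeholder identifier
cal.add('version', '2.0')

for ev in events.values():
    event = Event()  # a VEVENT component, not a VTODO
    event.add('uid', ev['id'])
    event.add('summary', ev['title'])
    event.add('description', ev['description'])
    event.add('dtstart', datetime.strptime(ev['startdate'], '%Y%m%d').date())
    # Note: for all-day events DTEND is exclusive; this sketch glosses over that.
    event.add('dtend', datetime.strptime(ev['enddate'], '%Y%m%d').date())
    cal.add_component(event)

print(cal.to_ical().decode('utf-8'))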

Next steps

As it is now, the cal_action does allow downloading a file... but the file returned is an HTML page from the wiki, with the icalendar element inserted into it. This isn't what we need. We need just the icalendar file to be returned, no HTML, and certainly not the wiki page, so it can be understood properly by the calendar software/application.

This is caused both by how MoinMoin works and by the macro code itself. When the EventCalendar macro executes, it appends its result to the page HTML. I thought for a bit that if this append didn't happen, the result would be an empty page, and that this might allow our .ics file to 'take over', so I kept thinking of turning the download_ical function into a whole new macro which uses EventCalendar data, but MoinMoin doesn't actually work like that.

I have found a macro called FormatConverter that 'allows instant single-file conversions from various input formats into various output formats'. Maybe icalendar could be added to it and that would solve the problem? If so, anyone who wanted to use EventCalendar with the capacity to export to .ics would need to install that macro as well. That doesn't seem like the most appropriate way to solve this. Maybe just copying part of FormatConverter into EventCalendar could work.

I'm still investigating the issue of returning the file with my mentors. We will see how that works out.

Some thoughts

I remember how I thought it would be "simple" to develop this functionality for the EventCalendar macro. Now I see that it is anything but simple, especially because of how much of the MoinMoin wiki internals I have to learn in order to do so (and that is just a fraction of the whole system). If I didn't have mentors who patiently help me during the whole process (special thanks to Bruno!) and if I wasn't being paid for the time I spend learning all this - if I was just doing this on my own for the sake of doing it - it's very probable that I would have given up trying to contribute to this complex project a while ago. I'm very glad that Outreachy came along and is enabling this work - and my technical and personal growth.

By the way, for anyone who might be interested, I have been committing the changes to the macro to the EventCalendar-099.py file in my foss_events GitHub repository. I warn you that it is kind of messy, because I leave in there things I use to debug and things that I think might be useful.

Bonus: how to download attachments from MoinMoin using curl

I was bummed that I couldn't download the macro .py files using the terminal; it always gave me 403 (Forbidden) and returned HTML. I talked about this with my mentor Bruno and... there is actually a way! It turns out that the '&' characters in the URL get interpreted by the shell rather than passed on to wget or curl, so we put the address in quotes and... it works!

curl 'https://moinmo.in/MacroMarket/FormatConverter?action=AttachFile&do=get&target=FormatConverter-1.0.py' --output FormatConverter.py
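
As an aside (my own suggestion, not from the original post), doing the same download from Python with the requests library sidesteps the shell quoting issue entirely, because the query parameters are passed separately:

import requests

params = {'action': 'AttachFile', 'do': 'get', 'target': 'FormatConverter-1.0.py'}
resp = requests.get('https://moinmo.in/MacroMarket/FormatConverter', params=params)
resp.raise_for_status()
with open('FormatConverter.py', 'wb') as f:
    f.write(resp.content)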

I'm still curious to see whether this will be a problem when specifying the URL to download the .ics file using Thunderbird.

Planet DebianBits from Debian: DebConf18: Call for Proposals

The DebConf Content team would like to call for proposals for the DebConf18 conference, which will take place in Hsinchu, Taiwan, from 29 July to 5 August 2018.

You can find this Call for Proposals in its latest form at https://debconf18.debconf.org/cfp/.

Please refer to this URL for updates on the present information.

Suggesting a Speaker

The content team has a (limited) number of spots for invited speakers and is open to suggestions. Priority will be given to speakers who are not regular DebConf attendees, and who are more likely to bring diverse viewpoints to the conference.

Please keep in mind that some speakers may have very busy schedules and need to be booked far in advance. Therefore, we would like to start inviting speakers as soon as possible.

In order to suggest a speaker, please email content@debconf.org; your email should provide the following information:

  • The speaker's preferred name
  • Their location (for travel budget considerations)
  • Their affiliation (institution and/or project)
  • The suggested talk topic
  • Brief biography (50-100 words) as it relates to the suggested topic
  • The topic's relevance to Debian and/or DebConf

Please note that the Code of Conduct applies to invited speakers and their talks, and coming to DebConf (incl. accepting an invitation) requires them to accept it.

Submitting an Event

You can now submit an event proposal. Events are not limited to traditional presentations or informal sessions (BoFs): we welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be of interest to the Debian community.

Regular sessions may either be 20 or 45 minutes long (including time for questions), other kinds of sessions (workshops, demos, lightning talks, ...) could have different durations. Please choose the most suitable duration for your event and explain any special requests.

You will need to create an account on the site to submit a talk. We suggest that Debian account holders (including DDs and DMs) use Debian SSO when creating an account. However, this isn’t required, as you can sign up with an e-mail address and password.

Timeline

If you depend on having your proposal accepted in order to attend the conference, please submit it in a timely fashion so that it can be considered (and potentially accepted) as soon as possible.

All proposals must be submitted before Sunday 17 June 2018 to be evaluated for the official schedule.

Topics and Tracks

Though we invite proposals on any Debian or FLOSS related subject, we have some broad topics on which we encourage people to submit proposals, including but not limited to:

  • Blends
  • Cloud and containers
  • Debian in Science
  • Embedded
  • Packaging, policy and infrastructure
  • Security
  • Social context
  • Systems administration, automation and orchestration

You are welcome to either suggest more tracks, or to become a coordinator for any of them; please refer to the Content Tracks wiki page for more information on that.

Code of Conduct

Our event is covered by a Code of Conduct designed to ensure everyone’s safety and comfort. The code applies to all attendees, including speakers and the content of their presentations. Do not hesitate to contact us at content@debconf.org if you have any questions or are unsure about certain content you’d like to present.

Video Coverage

Providing video is one of the conference goals, as it makes the content accessible to a wider audience. Unless speakers opt-out, scheduled talks may be streamed live over the Internet to promote remote participation, and recordings will be published later under the DebConf license (MIT/Expat), as well as presentation slides and papers whenever available.

Closing note

DebConf18 is still accepting sponsors; if you are interested, or think you know of others who would be willing to help, please get in touch!

In case of any questions, or if you wanted to bounce some ideas off us first, please do not hesitate to reach out to the content team at content@debconf.org.

We hope to see you in Hsinchu!

The DebConf team

Worse Than FailureCodeSOD: How To Creat Socket?

JR earned a bit of a reputation as the developer who could solve anything. Like most reputations, this was worse than it sounded, and it meant he got the weird heisenbugs. The weirdest and the worst heisenbugs came from Gerry, a developer who had worked for the company for many, many years, and left behind many, many landmines.

Once upon a time, in those Bad Old Days, Gerry wrote a C++ socket-server. In those days, the socket-server would crash any time there was an issue with network connectivity. Crashing services were bad, so Gerry “fixed” it. Whatever Gerry did fell into the category of “good enough”, but it had one problem: after any sort of network hiccup, the server wouldn’t crash, but it would take a very long time to start servicing requests again. Long enough that other processes would sometimes fail. It was infrequent enough that the bug had stuck around for years, but finally, someone wanted Something Done™.

JR got Something Done™, and he did it by looking at the CreatSocket method, buried deep in a "God" class of 15,000 lines.

void UglyClassThatDidEverything::CreatSocket() {
    while (true) {
                try {
                        m_pSocket = new Socket((ip + ":55043").c_str());
                        if (m_pSocket != null) {
                                // LOG.info("Creat socket");
                                m_pSocket->connect();
                                break;
                        } else {
                                // LOG.info("Creat socket failed");
                                // usleep(1000);
                                // sleep(1);
                                sleep(5);
                        }
                } catch (...) {
                    if (m_pSocket == null) {
                                // LOG.info("Creat socket failed");
                                sleep(5);
                                CreatSocket();
                                sleep(5);
                        }
                }
        }
}

The try portion of the code provides an… interesting take on handling socket creation. Create a socket, and grab a handle. If you don’t get a socket for some reason, sleep for 5 seconds, and then the infinite while loop means that it’ll try again. Eventually, this will hopefully get a socket. It might take until the heat death of the universe, or at least until the half-created-but-never-cleaned-up sockets consume all the filehandles on the OS, but eventually.

Unless of course, there’s an exception thrown. In that case, we drop down into the catch, where we sleep for 5 seconds, and then call CreatSocket recursively. If that succeeds, we still have that extra call to sleep which guarantees a little nap, presumably to congratulate ourselves for finally creating a socket.

JR had a simple fix for this code: burn it to the ground and replace it with a more normal approach to creating sockets. Unfortunately, management was a bit gun-shy about making any major changes to Gerry’s work. That recursive call might be more important than anyone imagined.
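
For contrast, the "more normal approach" could be a bounded retry with exponential backoff; here is a sketch in Python rather than the project's C++ (the retry parameters are invented):

import socket
import time

def create_socket(host, port, max_attempts=5, base_delay=1.0):
    """Try to connect a few times, backing off between attempts, then give up."""
    for attempt in range(max_attempts):
        try:
            return socket.create_connection((host, port))
        except OSError:
            if attempt == max_attempts - 1:
                raise  # let the caller decide what failure means
            time.sleep(base_delay * (2 ** attempt))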

JR had a simpler, if stupider, fix: remove the final call to sleep(5) after creating the socket in the exception handler. It wouldn’t make this code any less terrible, but it would mean the service wouldn’t spend all that time waiting to proceed even after it had created a socket, thus solving the initial problem: that it takes a long time to recover after failure.

Unfortunately, management balked at removing a line of code. “It wouldn’t be there if it weren’t important. Instead of removing it, can you just comment it out?”

JR commented it out, closed VIM, and hoped never to touch this service again.


Planet DebianMario Lang: Rolling Acid - A eurorack improvisation

After more than half a year of researching and building up my dream accessible groovebox, it is beginning to emit sounds that (IMO) I like and sometimes even love.

Yesterday evening, while experimenting a bit with parallel filters, I suddenly hit a certain sound that I couldn't resist recording a live performance with. Since I've been told that I had the video pretty much centered this time, I thought it might be interesting to share it.

The track is called Rolling Acid and has been uploaded to YouTube.

CryptogramPoor Security at the UK National Health Service

The Guardian is reporting that "every NHS trust assessed for cyber security vulnerabilities has failed to meet the standard required."

This is the same NHS that was debilitated by WannaCry.

CryptogramSensitive Super Bowl Security Documents Left on an Airplane

A CNN reporter found some sensitive -- but, technically, not classified -- documents about Super Bowl security in the front pocket of an airplane seat.

Planet DebianClint Adams: Pret is clearly worse than Chunking Mansions

“Last time I ate there I went to Pret a Manger and there was a guy doing crack on the counter,” she said. “The manager tried to kick him out. He started yelling ‘Fuck you! I work at Merrill Lynch!’ and then a guy started beating another guy’s head in in the train waiting room while his daughter was screaming. Police jumped over the walls and took ’em all away. It’s basically Chunking Mansions but worse.”

Posted on 2018-02-07
Tags: umismu

Planet DebianDirk Eddelbuettel: #16: Complaining Works.

Welcome to the sixteenth post in the relatively random R related series of posts, or R4 for short.

This one will likely be brief. But it is one post I have been meaning to get out for a little while---yet did not get around to. The meta point I am trying to make today is that despite overwhelming odds that may indicate otherwise, it can actually pay off to have voice or platform. Or, as I jokingly called it via slack to Josh: complaining works. Hence the title.

There are two things I have harped about over the years (with respect to R), and very recently (and almost against all odds) both changed for the better.

First, Rscript. There was of course a little bit of pride and personal ownership involved, as Jeff Horner and I got the first command-line interface to R out: littler with its r tool. As I recall, we beat the release of Rscript (which comes with R) narrowly by a few months. In any event, the bigger issue remained that Rscript would always fail on code requiring the methods package (also in base R). The given reason was that load time and performance were slower due to the nature of S4 classes. But as I once blogged in jest, littler is still faster at doing nothing even though it always loaded methods---as one wants code to behave as it does under an interactive R session. And over the years I must have answered this question of "Why does Rscript fail" half a dozen times on the mailing lists and on StackOverflow. But now, thanks to Luke Tierney, who put the change in in early January, with R 3.5.0 we will have an Rscript that behaves like R and includes methods by default (unless instructed otherwise, of course). Nice.

Second, an issue I bemoaned repeatedly concerned the (at least to my reading) inconsistent treatment of Suggests: and Depends: in the Writing R Extensions manual on the one hand, and how CRAN did (or, rather, did not) enforce this on the other. In particular, and as I argued not quite a year ago, Suggests != Depends: tests should not just fail if a suggested package is not present (in the clean-room setup used for tests). Rather, one should make the tests conditional on the optional package being present. And this too now seems to be tested, as I was among the recipients of one of those emails from CRAN requiring a change. This one was clear in its title and mission: CRAN packages not using microbenchmark conditionally. Which we fixed. But the bigger issue is that CRAN now seems to agree that Suggests != Depends.

So the main message now is clear: It seems to pay to comment on the mailing lists, or to write a blog post, or do something else to get one's reasoning out. Change may not be immediate, but it may still come. So to paraphrase Michael Pollan's beautiful dictum about food: "Eat food. Not too much. Mostly plants.", we could now say: "Report inconsistencies. Present evidence. Be patient."

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

TEDThe Big Idea: How to find and hire the best employees

So, you want to hire the best employee for the job? Or perhaps you’re the employee looking to be hired. Here’s some counterintuitive and hyper-intuitive advice that could get the right foot in the door.

Expand your definition of the “right” resume

Here’s the hypothetical situation: a position opens up at your company, applications start rolling in and qualified candidates are identified. Who do you choose? Person A: Ivy League, flawless resume, great recommendations — or Person B: state school, fair amount of job hopping, with odd jobs like cashier and singing waitress thrown in the mix. Both are qualified — but have you already formed a decision?

Well, you might want to take a second look at Person B.

Human resources executive Regina Hartley describes these candidates as “The Silver Spoon” (Person A), the one who clearly had advantages and was set up for success, and “The Scrapper” (Person B), who had to fight tremendous odds to get to the same point.

“To be clear, I don’t hold anything against the Silver Spoon; getting into and graduating from an elite university takes a lot of hard work and sacrifice,” she says. But if it so happens that someone’s whole life has been engineered toward success, how will that person handle the low times? Do they seem like they’re driven by passion and purpose?


Take this resume. This guy never finishes college. He job-hops quite a bit, goes on a sojourn to India for a year, and to top it off, he has dyslexia. Would you hire this guy? His name is Steve Jobs.

That’s not to say every person who has a similar story will ultimately become Steve Jobs, but it’s about extending opportunity to those whose lives have resulted in transformation and growth. Companies that are committed to diversity and inclusive practices tend to support Scrappers and outperform their peers. According to DiversityInc, a study found that their top 50 companies for diversity outperformed the S&P 500 by 25 percent.

(Check out Regina’s TED Talk: Why the best hire might not have the perfect resume for more advice and a fantastic suggested reading list full of helpful resources.)

Shake up the face-to-face time

Once you choose candidates to meet in-person, scrap that old hand-me-down list of interview questions — or if you can’t simply toss them, think about adding a couple more.


Generally, these conversations ping-pong between two basic questions: one of competency and one of character. To identify the candidates who have substance and not just smarts, business consultant Anthony Tjan recommends that interviewers ask these five questions to illuminate not just skills and abilities, but intrinsic values and personality traits too.

  1. What are the one or two traits from your parents that you most want to ensure you and your kids have for the rest of your life? A rehearsal is not the result you want. This question calls for a bit more thought on the applicant’s end and sheds light on the things they most value. After hearing the person’s initial response, Tjan says you should immediately follow up with “Can you tell me more?” This is essential if you want to elicit an answer with real depth and substance.
  2. What is 25 times 25? Yes, it sounds ridiculous but trust us — the math adds up. What matters is how people react under real-time pressure; their response can show you how they’ll approach challenging or awkward situations. “It’s about whether they can roll with the embarrassment and discomfort and work with me. When a person is in a job, they’re not always going to be in situations that are in their alley,” he says.
  3. Tell me about three people whose lives you positively changed. What would they say if I called them tomorrow? If a person can’t think of a single person, that may say a lot for the role you’re trying to fill. Organizations need employees who can lift each other up. When a person is naturally inclined toward compassionate mentorship, it can have a domino effect in an institution.
  4. After an interview, ask yourself (and other team members, if relevant) “Can I imagine taking this person home with me for the holidays?” This may seem overly personal (because, yes it is), but you’ll most likely trigger a gut reaction.
  5. After an interview, ask security or the receptionist: “How was the candidate’s interaction with you?” How a person treats someone they don’t feel they need to impress is important and telling. It speaks to whether they act with compassion and openness and view others as equals.

(Maybe ask them if they played a lot of Dungeons & Dragons in their life?)

The New York Times’ Adam Bryant suggests getting away from the standard job interview entirely. Reject the played-out choreography — the conference room, the resume, the “Where do you want to be in five years?” — and feel free to shake it up. Instead, get up and move about to observe how they behave in (and out of) the workplace wild.

Take them on a tour of the office (if you can’t take them out for a meal), he proposes, and if you feel so inclined, introduce them to some colleagues. Shake off that stress, walk-and-talk (as TED speaker Nilofer Merchant also advises) and most important, pay attention!

Are they curious about how everything happens? Do they show interest in what your colleagues do? These markers could be the difference between someone you work with and someone you want to work with. Monster has a series of good questions to ask yourself after meeting potential candidates.

Ultimately, Tjan and Bryant seem to agree, the art of the interview is a tricky but not impossible balance to strike.

Hire for your company’s values, not its internal culture

Culture fit is important, of course, but it can also be used as a shield. The bottom line is hire for diversity — in all its forms.

There’s a chance you may be tired of reading about diversity and inclusion, that you get the point and we don’t need to keep addressing it. Well, tough. Suck it up. Because we do need to talk about it until there’s literally no need to talk about it, until this fundamental issue becomes an overarching non-issue (and preferably before we all sink into the sea). This is a concept that can’t just exist in science-fictional universes.

Example A: a sci-fi universe featuring a group of people that could be seen working together in a non-fictional universe.

(Ahem.)

MIT Media Lab director Joi Ito and writer Jeff Howe explain that the best way to prepare for a future of unknown complexity is to build on the strength of our differences. Race, gender,  sexual orientation, socioeconomic background and disciplinary training are all important, as are life experiences that produce cognitive diversity (aka different ways of thinking).

Thanks to an increasing body of research, diversity is becoming a strategic imperative for schools, firms and other institutions. It may be good politics and good PR and, depending on an individual’s commitment to racial and gender equity, good for the soul, say Ito and Howe. But in an era in which your challenges are likely to feature maximum complexity as well, it’s simply good management — which marks a striking departure from an age when diversity was presumed to come at the expense of ability.

As TED speaker Mellody Hobson (TED Talk: Color blind or color brave?) says: “I’m actually asking you to do something really simple.  I’m asking you to look at the people around you purposefully and intentionally. Invite people into your life who don’t look like you, don’t think like you, don’t act like you, don’t come from where you come from, and you might find that they will challenge your assumptions.”

So, in conclusion, go out and hire someone and give them the opportunity to change the world. Or at least, give them the opportunity to prove that they have the wherewithal to change something for the better.

 

 

Planet DebianJulien Danjou: Gnocchi 4.2 release

The time of the release arrived. A little more than three months have passed since the latest minor version, 4.1, has been released. There are tons of improvement and a few nice significant features in this release!

Most of the principal changes are recorded in the 4.2 release notes, but here are a few that I find particularly interesting. We merged 141 commits since 4.1.0. For comparison, that is a lot less than the 228 we had between 4.0.0 and 4.1.0, or the 375 we had between 3.1.0 and 4.0.0!

We added two compatibility endpoints to the REST API, for InfluxDB and Prometheus. We want users coming from those other database systems, or using tools that are compatible with them, to be able to use Gnocchi too. This is now possible, as Gnocchi offers endpoints to write data using the InfluxDB line protocol and the Prometheus HTTP API. Reading data using their APIs is not supported yet, though. This has been tested with Telegraf, for example, and works perfectly fine!
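
As a purely illustrative sketch (the endpoint path, port and database name below are my guesses; check the Gnocchi 4.2 documentation for the real InfluxDB-compatible route and authentication setup), writing a measure via the InfluxDB line protocol could look like:

import requests

url = 'http://localhost:8041/v1/influxdb/write?db=telegraf'  # hypothetical route
# InfluxDB line protocol: measurement,tag=value field=value [timestamp]
line = 'cpu,host=web01 usage_idle=97.3 1518048000000000000'
resp = requests.post(url, data=line, headers={'Content-Type': 'text/plain'})
resp.raise_for_status()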

Some other improvements were made, such as enhanced ACL filtering when using Keystone for authentication, a new batch format for passing more information about non-existing metrics to create, and tons of performance improvements!

We already started working on the next version of Gnocchi! Come and join us on GitHub! Star us, and stay tuned for some more awesome news around metrics.

Planet DebianJonathan McDowell: collectd scripts for the Virgin Media Super Hub

As I’ve previously stated, I’m no longer using Virgin Media, but when I was I wrote a script to scrape statistics from the cable modem and import them into collectd. Primarily I was recording the upstream/downstream line speed and the per-channel signal figures, but the scripts could easily be extended to do more if you wanted. They are useful for seeing when Virgin increases your line speed, or whether your line quality has deteriorated. I’ve shoved the versions I had for the Super Hub v1 and v3 in GitHub in the hope they’ll be of use to someone. Note that I posted my Super Hub 3 back to Virgin yesterday, so I no longer have any hardware that needs these scripts.
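
For the general shape of such a script, here is a hypothetical sketch (the modem URL and the parser are invented; the real scripts are in the GitHub repository) emitting values with collectd's exec-plugin PUTVAL protocol:

#!/usr/bin/env python3
import os
import time

import requests

# The collectd exec plugin exports these environment variables.
HOST = os.environ.get('COLLECTD_HOSTNAME', 'localhost')
INTERVAL = int(float(os.environ.get('COLLECTD_INTERVAL', '300')))

def parse_downstream_kbps(html):
    # Placeholder parser; the real scripts scrape the Super Hub's status
    # pages, whose layout differs between hub versions.
    return 123456

while True:
    page = requests.get('http://192.168.100.1/status').text  # hypothetical URL
    value = parse_downstream_kbps(page)
    # PUTVAL "host/plugin-instance/type-instance" interval=N <time>:<value>
    print('PUTVAL "%s/superhub/gauge-downstream" interval=%d N:%d'
          % (HOST, INTERVAL, value), flush=True)
    time.sleep(INTERVAL)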

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #145

Here's what happened in the Reproducible Builds effort between Sunday January 28 and Saturday February 3 2018:

Reproducible work in other projects

  • Whilst it has been active for some time, the #archlinux-reproducible Freenode IRC channel is the home to Reproducible efforts on Arch.

  • Please is a cross-language build system with an emphasis on high performance, extensibility and reproducibility. It supports a number of popular languages and can automate nearly any aspect of your build process.

  • Jim MacArthur posted about Reproducible Builds in BuildStream.

Development and fixes in key packages

  • Chris Lamb added .buildinfo support to Lintian 2.5.73. (#853274)

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

39 package reviews have been added, 55 have been updated and 23 have been removed this week, adding to our knowledge about identified issues.

A new issue was added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Aaron M. Ucko (1)
  • Adrian Bunk (40)
  • Emilio Pozuelo Monfort (2)
  • Matthias Klose (3)
  • Pirate Praveen (4)
  • tony mancill (1)

diffoscope development

Furthermore, work from Juliana—our Outreachy intern—progressed toward a diffoscope supporting parallel processing.

trydiffoscope development

Version 67.0.0 was uploaded to unstable by Chris Lamb.

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo, Paul Wise & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianMarkus Koschany: My Free Software Activities in January 2018

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • The state of Debian Games: We have created a new games-team group at salsa.debian.org. If you are interested in maintaining games or related projects, then we’re looking forward to seeing you there. A couple of Gnome-related games are at risk of being removed from Debian. If you are interested in a challenge and want to port them to Gnome 3, we would very much like to hear from you too.
  • I reviewed and sponsored new upstream versions of simutrans-pak64 and simutrans for Jörg Frings-Fürst as well as openmw, mygui, wildmidi and openal for Bret Curtis. Later I could also upload hexalate for Unit193 and pegsolitaire for Juhani Numminen. Great job by Juhani who became the new upstream maintainer of pegsolitaire and saved the game from being removed from Debian.
  • I for myself packaged new upstream releases of springlobby, peg-e, pygame-sdl2, freeciv, renpy and cube2. I was a bit surprised to see upstream activity for the Sauerbraten engine again. Will we see a new major release this year?
  • Peter Green provided a patch to fix Debian RC bug #887929 in trigger-rally which I gladly accepted.
  • I also fixed RC bug #885761 in langdrill.

Debian Java

Debian LTS

This was my twenty-third month as a paid contributor and I have been paid to work 18.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 08.01.2018 until 14.01.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in libhibernate-validator-java, libkohana2-php, xbmc, jasperreports, transmission, wireshark, osc, xmltooling, php5 and openocd.
  • DLA-1241-1. Issued a security update for libkohana2-php fixing 1 CVE.
  • DLA-1242-1. Issued a security update for xmltooling fixing 1 CVE.
  • DLA-1243-1. Issued a security update for xbmc fixing 1 CVE.
  • DLA-1251-1. Issued a security update for php5 fixing 1 CVE.
  • DLA-1253-1. Issued a security update for openocd fixing 1 CVE.
  • DLA-1254-1. Issued a security update for lucene-solr fixing 1 CVE.
  • DLA-1264-1. Issued a security update for unbound fixing 1 CVE.
  • DLA-1265-1. Issued a security update for krb5 fixing 6 CVEs.

Misc

  • I reviewed a patch for byzanz (#886439) but wasn’t really happy with the result.
  • I released version 1.4.10 of imlib2.
  • The discussion about a new reportbug feature gathered momentum in #878088 and I am confident now that we can conclude this issue in February.

Thanks for reading and see you next time.

Planet DebianDimitri John Ledkov: Ubuntu Snowsports & Friends Team

Ubuntu Snowsports and Friends Team

After talking to a bunch of people, I've realized that a lot of free & open source and Debian/Ubuntu people ski or snowboard. So I have this crazy idea: maybe we can get enough people together to form a social team on Launchpad.

And maybe, if we have enough people there, we could try to organize a ski trip, with or without conference talks. Kind of like a team-building meetup / community event / UDS - Ubuntu Developer Snowsports trip, or maybe an Ubucon Snow.

So here we go - please consider joining https://launchpad.net/~ubuntu-snowsports team, join the mailing list there, and/or hop onto IRC to join #ubuntu-snow on freenode.

I hope we can get more members than https://launchpad.net/~ubuntu-cyclists

Krebs on SecurityWould You Have Spotted This Skimmer?

When you realize how easy it is for thieves to compromise an ATM or credit card terminal with skimming devices, it’s difficult not to inspect or even pull on these machines when you’re forced to use them personally — half expecting something will come detached. For those unfamiliar with the stealth of these skimming devices and the thieves who install them, read on.

Police in Lower Pottsgrove, PA are searching for a pair of men who’ve spent the last few months installing card and PIN skimmers at checkout lanes inside of Aldi supermarkets in the region. These are “overlay” skimmers, in that they’re designed to be installed in the blink of an eye just by placing them over top of the customer-facing card terminal.

The top of the overlay skimmer models removed from several Aldi grocery store locations in Pennsylvania over the past few months.

The underside of the skimmer hides the brains of this little beauty, which is configured to capture the personal identification number (PIN) of shoppers who pay for their purchases with a debit card. This likely describes a great number of loyal customers at Aldi; the discount grocery chain only started accepting credit cards in 2016, and previously took only cash, debit cards, SNAP, and EBT cards.

The underside of this skimmer found at Aldi is designed to record PINs.

The Lower Pottsgrove police have been asking local citizens for help in identifying the men spotted on surveillance cameras installing the skimming devices, noting that multiple victims have seen their checking accounts cleaned out after paying at compromised checkout lanes.

Local police released the following video footage showing one of the suspects installing an overlay skimmer exactly like the one pictured above. The man is clearly nervous and fidgety with his feet, but the cashier can’t see his little dance and certainly doesn’t notice the half second or so that it takes him to slip the skimming device over top of the payment terminal.

I realize a great many people use debit cards for everyday purchases, but I’ve never been interested in assuming the added risk and so pay for everything with cash or a credit card. Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).

The Lower Pottsgrove Police have been admonishing people for blaming Aldi for the incidents, saying the thieves are extremely stealthy and that this type of crime could hit virtually any grocery chain.

While Aldi payment terminals in the United States are capable of accepting more secure chip-based card transactions, the company has yet to enable chip payments (although it does accept mobile contactless payment methods such as Apple Pay and Google Pay). This is important because these overlay skimmers are designed to steal card data stored on the magnetic stripe when customers swipe their cards.

However, many stores that have chip-enabled terminals are still forcing customers to swipe the stripe instead of dip the chip.

Want to learn more about self-checkout skimmers? Check out these other posts:

How to Spot Ingenico Self-Checkout Skimmers

Self-Checkout Skimmers Go Bluetooth

More on Bluetooth Ingenico Overlay Skimmers

Safeway Self-Checkout Skimmers Up Close

Skimmers Found at Wal-Mart: A Closer Look

Worse Than FailureFor Want of a CR…

A few years ago I was hired as an architect to help design some massive changes to a melange of existing systems so a northern foreign bank could meet some new regulatory requirements. As a development team, they gave me one junior developer with almost a year of experience. There were very few requirements and most of it would be guesswork to fill in the blanks. OK, typical Wall Street BS.

Horseshoe nails, because 'for want of a nail, the shoe was lost…'

The junior developer was, well, junior, but bright, and he remembered what you taught him, so there was a chance we could succeed.

The setup was that what little requirements there were would come from the Almighty Project Architect down to me and a few of my peers. We would design our respective pieces in as generic a way as possible, and then oversee and help with the coding.

One day, my boss+1 had my boss get the junior guy to develop a web service, something the guy had never done before. Since I was busy, it was deemed unnecessary to tell me about it. The guy Googled a bit and put something together. However, he was unsure of how the response should be sent back to the browser (e.g.: what sort of line endings to use) and admitted he had questions. Our boss said not to worry about it and had him install it on the dev server so boss+1 could demo it to users.

Demo time came, and the resulting output lines needed an extra newline between them to make the output look nice.

The boss+1 was incensed and started telling the users and other teams that our work was crap, inferior and not to be trusted.

WTF?!

When this got back to me, I went to have a chat with him about a) going behind my back and leaving me entirely out of the loop, b) having a junior developer do something in an unfamiliar technology and then deploying it without having someone more experienced even look at it, c) running his mouth with unjustified caustic comments ... to the world.

He was not amused and informed us that the work should be perfect every time! I pointed out that while everyone strives for just that, his was an unreasonable expectation, and one that didn't do much to foster team morale or cooperation.

This went back and forth for a while until I decided that this idiot simply wasn't worth my time.

A few days later, I heard one of my peers having the same conversation with our boss+1. A few days after that, someone else. Each time, the architect had been bypassed and some junior developer had missed something; it was always some ridiculously trivial facet of the implementation.

I got together with my peers and discussed possibly instituting mandatory testing - by US - to prevent them from bypassing us to get junior developers to do stuff and then having it thrown into a user-visible environment. We agreed, and were promptly overruled by boss+1. Apparently, all programmers, even juniors, were expected to produce perfect code (even without requirements) every time, without exception, and anyone who couldn't cut it should be exposed as incompetent.

We just shot each other the expected Are you f'g kidding me? looks.

After a few weeks of this, we had all had enough of the abuse and went to boss+2, who was totally uninterested.

We all found other jobs, and made sure to bring the better junior devs with us.


Don MartiFun with numbers

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

Guess what? According to Emil Protalinski at VentureBeat, the browser wars are back on.

Google is doubling down on the user experience by focusing on ads and performance, an opportunity I’ve argued its competitors have completely missed.

Good point. Jonathan Mendez has some good background on that.

The IAB road blocked the W3C Do Not Track initiative in 2012 that was led by a cross functional group that most importantly included the browser makers. In hindsight this was the only real chance for the industry to solve consumer needs around data privacy and advertising technology. The IAB wanted self-regulation. In the end, DNT died as the IAB hoped.

As third-party tracking made the ad experience crappier and crappier, browser makers tried to play nice. Browser makers tried to work in the open and build consensus.

That didn't work, which shouldn't be a surprise. Imagine if email providers had decided to build consensus with spammers about spam filtering rules. The spammers would have been all like, "It replaces the principle of consumer choice with an arrogant 'Hotmail knows best' system." Any sensible email provider would ignore the spammers but listen to deliverability concerns from senders of legit opt-in newsletters. Spammers depend on sneaking around the user's intent to get their stuff through, so email providers that want to get and keep users should stay on the user's side. Fortunately for legit mail senders and recipients, that's what happened.

On the web, though, not so much.

But now Apple Safari has Intelligent Tracking Prevention. Industry consensus achieved? No way. Safari's developers put users first and, like the man said, if you're not first you're last.

And now Google is doing their own thing. Some positive parts about it, but by focusing on filtering annoying types of ad units they're closer to the Adblock Plus "Acceptable Ads" racket than to a real solution. So it's better to let Ben Williams at Adblock Plus explain that one. I still don't get how it is that so many otherwise capable people come up with "let's filter superficial annoyances and not fundamental issues" and "let's shake down legit publishers for cash" as solutions to the web advertising problem, though. Especially when $16 billion in adfraud is just sitting there. It's almost as if the Lumascape doesn't care about fraud because it's priced in so it comes out of the publisher's share anyway.

So with all the money going to fraud and the intermediaries that facilitate it, local digital news publishers are looking for money in other places and writing off ads. That's good news for the surviving web ad optimists (like me) because any time Management stops caring about something you get a big opportunity to do something transformative.

Small victories

The web advertising problem looks big, but I want to think positive about it.

  • billions of web users

  • visiting hundreds of web sites

  • with tens of third-party trackers per site.

That's trillions of opportunities for tiny victories against adfraud.

Right now most browsers and most fraudbots are hard to tell apart. Both maintain a single "cookie jar" across trusted and untrusted sites, and both are subject to fingerprinting.
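One knob that already exists attacks the shared cookie jar directly: Firefox carries first-party isolation from the Tor Browser work, which keys cookies and similar state by the top-level site, so a third-party tracker gets a different jar on every site that embeds it. A minimal sketch (the pref name is real, but it's off by default and turning it on can break some cross-site logins):

    // user.js: double-key cookies and similar state by the first party
    user_pref("privacy.firstparty.isolate", true);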

For fraudbots, cross-site trackability is a feature. A fraudbot can only produce valuable ad impressions on a fraud site if it is somehow trackable from a legit site.

For browsers, cross-site trackability is a bug, for two reasons.

  • Leaking activity from one context to another violates widely held user norms.

  • Because users enjoy ad-supported content, it is in the interest of users to reduce the fraction of ad budgets that go to fraud and intermediaries.

Browsers don't have to solve the whole web advertising problem to make a meaningful difference. As soon as a trustworthy site's real users look different enough from fraudbots, because fraudbots make themselves more trackable than users running tracking-protected browsers do, low-reputation and fraud sites claiming to offer the same audience will have a harder and harder time selling impressions to agencies that can see it's not the same people.

Of course, the browser market share numbers will still over-represent any undetected fraudbots and under-represent the "conscious chooser" users who choose to turn on extra tracking protection options. But that's an opportunity for creative ad agencies that can buy underpriced post-creepy ad impressions and stay away from overvalued or worthless bot impressions. I expect that data on who has legit users—made more accurate by including tracking protection measurements—will be proprietary to certain agencies and brands that are going after customer segments with high tracking protection adoption, at least for a while.

Now even YouTube serves ads with CPU-draining cryptocurrency miners http://arstechnica.com/information-technology/2018/01/now-even-youtube-serves-ads-with-cpu-draining-cryptocurrency-miners/ … by @dangoodin001

Remarks delivered at the World Economic Forum

Improving privacy without breaking the web

Greater control with new features in your Ads Settings

PageFair’s long letter to the Article 29 Working Party

‘Never get high on your own supply’ – why social media bosses don’t use social media

Can you detect WebDriver sessions from inside a web page? https://hoosteeno.com/2018/01/23/can-you-detect-webdriver-sessions-from-inside-a-web-page/ … via @wordpressdotcom

Making WebAssembly even faster: Firefox’s new streaming and tiering compiler

Newsonomics: Inside L.A.’s journalistic collapse

The State of Ad Fraud

The more Facebook examines itself, the more fault it finds

In-N-Out managers earn triple the industry average

Five loopholes in the GDPR

Why ads keep redirecting you to scammy sites and what we’re doing about it

https://digiday.com/media/local-digital-news-publishers-ignoring-display-revenue/

Website operators are in the dark about privacy violations by third-party scripts

Mark Zuckerberg's former mentor says 'parasitic' Facebook threatens our health and democracy

Craft Beer Is the Strangest, Happiest Economic Story in America

The 29 Stages Of A Twitterstorm In 2018

How Facebook Helped Ruin Cambodia's Democracy

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Firefox 57 delays requests to tracking domains

Direct ad buys are back in fashion as programmatic declines

‘Data arbitrage is as big a problem as media arbitrage’: Confessions of a media exec

Why publishers don’t name and shame vendors over ad fraud

News UK finds high levels of domain spoofing to the tune of $1 million a month in lost revenue • Digiday

The Finish Line in the Race to the Bottom

Something doesn’t ad up about America’s advertising market

Fraud filters don't work

Ad retargeters scramble to get consumer consent


Planet Debian: Joey Hess: easy-peasy-devicetree-squeezy

I've created a new program, with a silly name, that solves a silly problem with devicetree overlays. It seems that, although there are patches to fully support overlays, including loading them on the fly into a running system, they're not in the mainline kernel, and nobody seems to know if/when they will get mainlined.

So easy-peasy-devicetree-squeezy is a hack to make it easy to do device tree overlay type things already. This program makes it easy peasy to squeeze together the devicetree for your board with whatever additions you need. It's pre-deprecated on release; as soon as device tree overlay support lands, there will be no further need for it, probably.

It doesn't actually use overlays; instead, it arranges to include the kernel's devicetree file for your board together with whatever additions you need. The only real downside of this approach is that the kernel source tarball is needed. Benefits include being able to refer to any labels you need from the kernel's devicetree files, and being able to #include and use symbols like GPIO_ACTIVE_HIGH from the kernel headers.
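To make the mechanics concrete, here is a hand-rolled sketch of roughly what the script automates; the paths, file names and flags here are illustrative assumptions, and the script takes care of the details for you:

    # combined.dts pulls in the kernel's board dts plus your additions, e.g.:
    #   #include "sun7i-a20-cubietruck.dts"
    #   #include "my.dts"
    # then preprocess with the kernel's headers and compile with dtc:
    KSRC=/path/to/unpacked/kernel/source
    cpp -nostdinc -undef -x assembler-with-cpp \
        -I "$KSRC/include" -I "$KSRC/arch/arm/boot/dts" \
        combined.dts | dtc -I dts -O dtb -o my.dtb -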

It supports integrating into a Debian system so that the devicetree will be updated, with your additions, whenever the kernel is upgraded.

Source is in a git repository at https://git.joeyh.name/index.cgi/easy-peasy-devicetree-squeezy.git/
See the README for details.

If someone wants to package this up and include it in Debian, it's a simple shell script, so it should take about 10 minutes.

example use

Earlier I wrote about cubietruck temperature sensor setup, and the difficulty I had with modifying the device tree for that. With easy-peasy-devicetree-squeezy, I only have to create a file /etc/easy-peasy-devicetree-squeezy/my.dts that contains this:

    /* Device tree addition enabling onewire sensors
     * on CubieTruck GPIO pin PG8 */
    #include <dt-bindings/gpio/gpio.h>

    / {
            onewire_device {
                    compatible = "w1-gpio";
                    gpios = <&pio 6 8 GPIO_ACTIVE_HIGH>; /* PG8 */
                    pinctrl-names = "default";
                    pinctrl-0 = <&my_w1_pin>;
            };
    };

    &pio {
            my_w1_pin: my_w1_pin@0 {
                    allwinner,pins = "PG8";
                    allwinner,function = "gpio_in";
            };
    };

Then run "sudo easy-peasy-devicetree-squeezy --debian sun7i-a20-cubietruck"

Today's work was sponsored by Trenton Cronholm on Patreon.

Planet Debian: Thorsten Alteholz: My Debian Activities in January 2018

FTP master

This month I was distracted from NEW by private stuff, so I only accepted 141 packages and rejected 4 uploads. The overall number of packages that got accepted this month was 361.

Almost two years ago Moritz filed #817286. After some time of inactivity, this bug drew CIP’s attention. The Civil Infrastructure Platform (CIP) is a project under the umbrella of the Linux Foundation. Their basic goal is to provide security support for a very long time (10 years for software and 15 years for the kernel).

As this bug meets one of their goals, they would like to support Debian and are going to sponsor my work on it. So hopefully in the near future staging repositories will be available in Debian.

Debian LTS

This was my forty-third month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 18.25 hours. During that time I did LTS uploads of:

  • [DLA 1235-1] opencv security update for two CVEs
  • [DLA 1252-1] couchdb security update for two CVEs
  • [DLA 1255-1] bind9 security update for one CVE
  • [DLA 1258-1] wireshark security update for three CVEs
  • [DSA 4101-1] wireshark security update for three Jessie CVEs and three Stretch CVEs
  • [DLA 1263-1] curl security update for one CVE

Unfortunately my debdiffs for opencv have not yet been processed by the security team. But as I also started to work on another round of CVEs for opencv, there will be another chance …

Last but not least I did one week of frontdesk duties.

Other stuff

During January I uploaded new upstream versions of …

Krebs on Security: Alleged Spam Kingpin ‘Severa’ Extradited to US

Peter Yuryevich Levashov, a 37-year-old Russian computer programmer thought to be one of the world’s most notorious spam kingpins, has been extradited to the United States to face federal hacking and spamming charges.

Levashov, in an undated photo.

Levashov, who allegedly went by the hacker names “Peter Severa,” and “Peter of the North,” hails from St. Petersburg in northern Russia, but he was arrested last year while in Barcelona, Spain with his family.

Authorities have long suspected he is the cybercriminal behind the once powerful spam botnet known as Waledac (a.k.a. “Kelihos”), a now-defunct malware strain responsible for sending more than 1.5 billion spam, phishing and malware attacks each day.

According to a statement released by the U.S. Justice Department, Levashov was arraigned last Friday in a federal court in New Haven, Ct. Levashov’s New York attorney Igor Litvak said he is eager to review the evidence against Mr. Levashov, and that while the indictment against his client is available, the complaint in the case remains sealed.

“We haven’t received any discovery, we have no idea what the government is relying on to bring these allegations,” Litvak said. “Mr. Levashov maintains his innocence and is looking forward to resolving this case, clearing his name, and returning home to his wife and 5-year-old son in Spain.”

In 2010, Microsoft — in tandem with a number of security researchers — launched a combined technical and legal sneak attack on the Waledac botnet, successfully dismantling it. The company would later do the same to the Kelihos botnet, a global spam machine which shared a great deal of computer code with Waledac.

Severa routinely rented out segments of his Waledac botnet to anyone seeking a vehicle for sending spam. For $200, vetted users could hire his botnet to blast one million pieces of spam. Junk email campaigns touting employment or “money mule” scams cost $300 per million, and phishing emails could be blasted out through Severa’s botnet for the bargain price of $500 per million.

Waledac first surfaced in April 2008, but many experts believe the spam-spewing machine was merely an update to the Storm worm, the engine behind another massive spam botnet that first surfaced in 2007. Both Waledac and Storm were major distributors of pharmaceutical and malware spam.

According to Microsoft, in one month alone approximately 651 million spam emails attributable to Waledac/Kelihos were directed to Hotmail accounts, including offers and scams related to online pharmacies, imitation goods, jobs, penny stocks, and more. The Storm worm botnet also sent billions of messages daily and infected an estimated one million computers worldwide.

Both Waledac/Kelihos and Storm were hugely innovative because they each included self-defense mechanisms designed specifically to stymie security researchers who might try to dismantle the crime machines.

Waledac and Storm sent updates and other instructions via a peer-to-peer communications system not unlike popular music and file-sharing services. Thus, even if security researchers or law-enforcement officials manage to seize the botnet’s back-end control servers and clean up huge numbers of infected PCs, the botnets could respawn themselves by relaying software updates from one infected PC to another.

FAKE NEWS

According to a lengthy April 2017 story in Wired.com about Levashov’s arrest and the takedown of Waledac, Levashov got caught because he violated a basic security no-no: He used the same log-in credentials to both run his criminal enterprise and log into sites like iTunes.

After Levashov’s arrest, numerous media outlets quoted his wife saying he was being rounded up as part of a dragnet targeting Russian hackers thought to be involved in alleged interference in the 2016 U.S. election. Russian news media outlets made much hay over this claim. In contesting his extradition to the United States, Levashov even reportedly told the RIA Russian news agency that he worked for Russian President Vladimir Putin‘s United Russia party, and that he would die within a year of being extradited to the United States.

“If I go to the U.S., I will die in a year,” Levashov is quoted as saying. “They want to get information of a military nature and about the United Russia party. I will be tortured, within a year I will be killed, or I will kill myself.”

But there is so far zero evidence that anyone has accused Levashov of being involved in election meddling. However, the Waledac/Kelihos botnet does have a historic association with election meddling: It was used during the Russian election in 2012 to send political messages to email accounts on computers with Russian Internet addresses. Those emails linked to fake news stories saying that Mikhail D. Prokhorov, a businessman who was running for president against Putin, had come out as gay.

SEVERA’S PARTNERS

If Levashov was to plead guilty in the case being prosecuted by U.S. authorities, it could shed light on the real-life identities of other top spammers.

Severa worked very closely with two major purveyors of spam. One was Alan Ralsky, an American spammer who was convicted in 2009 of paying him and other spammers to promote the pump-and-dump stock scams.

The other was a spammer who went by the nickname “Cosma,” the cybercriminal thought to be responsible for managing the Rustock botnet (so named because it was a Russian botnet frequently used to send pump-and-dump stock spam). In 2011, Microsoft offered a still-unclaimed $250,000 reward for information leading to the arrest and conviction of the Rustock author.

Spamdot.biz moderator Severa listing prices to rent his Waledac spam botnet.

Microsoft believes Cosma’s real name may be Dmitri A. Sergeev, Artem Sergeev, or Sergey Vladomirovich Sergeev. In June 2011, KrebsOnSecurity published a brief profile of Cosma that included Sergeev’s resume and photo, both of which indicated he is a Belorussian programmer who once sought a job at Google. For more on Cosma, see “Flashy Car Got Spam Kingpin Mugged.”

Severa and Cosma had met one another several times in their years together in the stock spamming business, and they appear to have known each other intimately enough to be on a first-name basis. Both of these titans of junk email are featured prominently in “Meet the Spammers,” the 7th chapter of my book, Spam Nation: The Inside Story of Organized Cybercrime.

Much like his close associate — Cosma, the Rustock botmaster — Severa may also have a $250,000 bounty on his head, albeit indirectly. The Conficker worm, a global contagion launched in 2009 that quickly spread to an estimated 9 to 15 million computers worldwide, prompted an unprecedented international response from security experts. This group of experts, dubbed the “Conficker Cabal,” sought in vain to corral the spread of the worm.

But despite infecting huge numbers of Microsoft Windows systems, Conficker was never once used to send spam. In fact, the only thing that Conficker-infected systems ever did was download and spread a new version of the malware that powered the Waledac botnet. Later that year, Microsoft announced it was offering a $250,000 reward for information leading to the arrest and conviction of the Conficker author(s). Some security experts believe this proves a link between Severa and Conficker.

Both Cosma and Severa were quite active on Spamit[dot]com, a once closely-guarded forum for Russian spammers. In 2010, Spamit was hacked, and a copy of its database was shared with this author. In that database were all private messages between Spamit members, including many between Cosma and Severa. For more on those conversations, see “A Closer Look at Two Big Time Botmasters.”

In addition to renting out his spam botnet, Severa also managed multiple affiliate programs in which he paid other cybercriminals to distribute so-called fake antivirus products. Also known as “scareware,” fake antivirus was at one time a major scourge, using false and misleading pop-up alerts to trick and mousetrap unsuspecting computer users into purchasing worthless (and in many cases outright harmful) software disguised as antivirus software.

A screenshot of the eponymous scareware affiliate program run by “Severa,” allegedly the cybercriminal alias of Peter Levashov.

In 2011, KrebsOnSecurity published Spam & Fake AV: Like Ham & Eggs, which sought to illustrate the many ways in which the spam industry and fake antivirus overlapped. That analysis included data from Brett Stone-Gross, a cybercrime expert who later would assist Microsoft and other researchers in their successful efforts to dismantle the Waledac/Kelihos botnet.

Levashov faces federal criminal charges on eight counts, including aggravated identity theft, wire fraud, conspiracy, and intentional damage to protected computers. The indictment in his case is available here (PDF).

Further reading: Mr Waledac — The Peter North of Spamming

Planet Debian: Daniel Leidert: Get your namespace_id(s) for salsa.debian.org

In addition to Christoph Berg's script to import packages to salsa.debian.org, I'd like to provide some more ways to determine the namespace_id parameter you'll need.

Say that GROUP is the name of the group, or a string contained in the group's name, and that TOKEN is a personal access token you created. Then this will work much faster:

curl --request GET -d "search=GROUP" -s https://salsa.debian.org/api/v4/groups | jq '.[].id'

If the group is not public, you need to add your token:

curl [..] --header "PRIVATE-TOKEN: TOKEN" [..]

The command might provide several IDs, if the search term matches several groups. In this case, you might want to have a look at the raw output without piping it to jq or look at the result of ...

https://salsa.debian.org/api/v4/groups/?search=GROUP

... in a browser. Interestingly, the latter doesn't provide any output in the browser, if you are not part of the group. But it provides the necessary information using curl. Bug or feature?

Another way that works is to look at the output of...

https://salsa.debian.org/api/v4/namespaces

... when you are logged in or ...

curl --request GET --header "PRIVATE-TOKEN: TOKEN" https://salsa.debian.org/api/v4/namespaces

This has the advantage of also getting the namespace_id for your personal repository, say for e.g. importing projects hosted on https://people.debian.org/~USER/ or https://anonscm.debian.org/cgit/users/USER/.
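If you do this often, the calls above can be wrapped in a small shell helper. namespace_id below is a hypothetical name, and it assumes the /namespaces endpoint accepts the same search parameter as /groups:

    # hypothetical helper: print matching namespace names and IDs
    namespace_id() {
        curl -s --header "PRIVATE-TOKEN: $2" \
            "https://salsa.debian.org/api/v4/namespaces?search=$1" \
            | jq '.[] | {name, id}'
    }

    namespace_id GROUP TOKEN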

Cory Doctorow: Nominations for the Hugo Awards are now open



If you were a voting member of the World Science Fiction Convention in 2017, or are registered as a voting member for the upcoming conventions in 2018 or 2019, you are eligible to nominate for the Hugo Awards; the Locus List is a great way to jog your memory about your favorite works from last year — and may I humbly remind you that my novel Walkaway is eligible for your nomination?

Worse Than Failure: CodeSOD: PRINCESS DEATH CLOWNS

Adam recently tried to claim a rebate for a purchase. Rebate websites, of course, are awful. The vendor doesn’t really want you to claim the rebate, after all, so even if they don’t actively try to make the site user-hostile, they’re also not going to go out of their way to make the experience pleasant.

In Adam’s case, it just didn’t work. It attempted to use a custom-built auto-complete textbox, which errored out and in some cases popped up an alert which read: [object Object]. Determined to get his $9.99 rebate, Adam did what any of us would do: he started trying to debug the page.

The HTML, of course, was a layout built from nested tables, complete with 1px transparent GIFs for spacing. But there were a few bits of JavaScript code which caught Adam’s eye.

function doTheme(myclass) {
         if ( document.getElementById ) {
                if(document.getElementById("divLog").className=="princess") {
                        document.getElementById("divLog").className="death"
                } else {
                        if(document.getElementById("divLog").className=="death") {
                                document.getElementById("divLog").className="clowns"
                        } else {
                                if(document.getElementById("divLog").className=="clowns") {
                                        document.getElementById("divLog").className="princess"
                                }
                        }
                }
                document.frmLog.myClass.value=document.getElementById("divLog").className;
        } else if ( document.all ) {
                if(document.all["divLog"].className=="princess") {
                        document.all["divLog"].className="death"
                } else {
                        if(document.all["divLog"].className=="death") {
                                document.all["divLog"].className="clowns"
                        } else {
                                if(document.all["divLog"].className=="clowns") {
                                        document.all["divLog"].className="princess"
                                }
                        }
                }
                document.frmLog.myClass.value=document.all["divLog"].className;
        }
}

This implements some sort of state machine. If the state is “princess”, become “death”. If the state is “death”, become “clowns”. If the state is “clowns”, go back to being a “princess”. Death before clowns is a pretty safe rule.
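For contrast, the entire rotation collapses into a lookup table; a sketch, keeping the original element and form names:

    var NEXT = { princess: "death", death: "clowns", clowns: "princess" };

    function doTheme() {
        var div = document.getElementById("divLog");
        if (NEXT[div.className]) {
            div.className = NEXT[div.className];
        }
        document.frmLog.myClass.value = div.className;
    }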

This code also will work gracefully if document.getElementById is unavailable, meaning it works all the way back to IE4. That’s backwards compatibility. Since it doesn't work in Adam's browser, it missed out on the forward compatibility, but it's got backwards compatibility.

To round out the meal, Adam also provides a little bit of dessert for this entry of awful code.

//DEBRA, THIS DOES THE IMAGE MOUSEOVER THING.  
//ALL YOU HAVE TO DO IS TELL IT WHICH OBJECT YOU ARE REFERRING TO, AND WHAT YOU WANT TO  
//CHANGE THE IMAGE SOURCE TO.

function over(myimage,str) {  
        document[myimage].src=eval(str).src  
}

DEBRA I HOPE YOU GOT A JOB WHERE YOU DON’T HAVE TO WORK WITH THIS AWFUL WEBSITE ANYMORE.

Adam used some google-fu and found an alternate site that allowed him to redeem his rebate.


Sam Varghese: Cricket Australia needs to get player availability policies sorted

Australian cricket authorities are short-changing fans of the national Twenty20 competition, the Big Bash League, through their policies on releasing players from national duty when needed by their BBL sides for crucial encounters.

The Adelaide Strikers and the Hobart Hurricanes, who contested Sunday’s final, were both affected by this policy.

Adelaide won, but had they failed to do so, no doubt there would have been attention drawn to the fact that their main fast bowler, Billy Stanlake, did not play as he was on national duty to play in a tri-nation tournament involving New Zealand and England.

Even though Cricket Australia released some other players – Alex Carey of the Strikers and Darcy Short of the Hurricanes – for the BBL final, it was clear that the travel from Sydney (where a game in the tri-nation tournament was played on Saturday night) to Adelaide (where the BBL final took place on Sunday afternoon) had affected them.

Carey, whose batting has been one of the Strikers’ strengths, was out cheaply, while Short, normally an ebullient six-hitter who had rung up two 90s and a century in the BBL league stage, was totally off his game. He made a listless 68 and his strike rate was much lower than normal, something which made a big difference as his team was chasing 203 for a win. Both Carey and Short had played for the national team the previous night.

Strikers captain Travis Head was also in Sydney on Saturday but did not play in the game. He rushed back to Adelaide for the final and made a rather subdued 44, playing second fiddle to opener Jake Weatherald who made a quick century.

Given that the international cricket season clashes with the BBL, the good folk at Cricket Australia need to develop some consistent policies about player involvement in both forms of the game.

Weakening the BBL sides at crucial stages of the tournament will mean that the game becomes that much less competitive. And that will affect the crowds, who are already diminishing in numbers. Sunday’s final involved the home team and yet could not fill the Adelaide stadium.

With grandiose plans to expand the BBL next year so that each team plays every other team both at home and away, and to also add an AFL-style finals process – where the teams that finish higher up get a second chance at qualifying for the final – Cricket Australia would do well to pay heed to player availability policies.

Else, what was once a golden goose may be found to have no more eggs to lay.

Planet Debian: Russ Allbery: Debian packaging with Git notes

I finally found the time today to update my notes on how I package for Debian using Git. They're rather long (even after dropping my beginner Git tutorial, which seemed pointless given how many good ones there are now), so I'm not including the full text here. Take a look if you're curious.

The major differences from the previous version (last revised in 2012, and originally written in 2008) are:

  • Standardize on the DEP-14 branch layout, which makes keeping upstream development and Debian packaging in the same repository less of a special case.

  • Document the gbp pq workflow for maintaining patches against upstream instead of the mess of feature branches and merge workflow I was trying to use before. This is much simpler once one gets used to it, and allows using the 3.0 (quilt) package format as intended.

  • Use dgit for the final build and upload to Debian.

  • Modernize the git-buildpackage commands and remove some old workarounds that are no longer needed.

On my current development machine, I'm experimenting with using btrfs as the root file system, including using sbuild instead of pbuilder to build packages since it can use btrfs snapshots. So far, I'm extremely happy with that configuration. It was hard to find good documentation for the process, so I wrote up how I configure sbuild with btrfs in case it's of interest to anyone else. It's noticeably faster than using pbuilder, and once I got used to it I think it feels a bit cleaner.


Planet Debian: Russ Allbery: Free software log (January 2018)

The only software releases I got out this month were both for work: versions 0.4.0 and 0.4.1 of groupy, the client library for Merou, the authorization management system we use. We're not doing formal releases of the latter yet, just building from Git, and we probably need to settle on a final public project name before we do.

At some point I'll build proper software release pages for both of these, since I seem to be doing most of the release management for groupy.

The 0.4.0 release was in support of the new service account model. The 0.4.1 release was an attempt to reduce the number of Python modules that had to be installed to configure the build, since too much setup_requires was causing various issues and version conflicts in our internal build system. Python has a slightly odd (at least to me) way of allowing modules to inject more commands into setup.py that are only useful in specific situations, and it's hard to find the right place in the dependency metadata to put those modules. I'm still not totally happy with the compromise we arrived at.

I also did a bit of work on control-archive, but didn't finish making a new release. I think the only thing left is to rewrite the documentation to use DocKnot. Hopefully this month.

On the INN front, I tracked down a few test failures and fixed them upstream in rra-c-util, and also expanded the documentation links for INN on my web site.

Finally, I uploaded new versions of a couple of Debian packages: libnews-article-perl (just a packaging refresh), and libnet-duo-perl (now donated to the Perl packaging team). I may end up orphaning Net::Duo, since I'm not currently actively using it, although I'm still hanging on to it since I'm considering using it for some personal stuff.

Overall, I would have liked to get a few more full software releases out, but I'm pretty happy for this month for a January, a return-to-work month, and a month during which I was on-call for a week. Definitely better than the months of "I didn't do much" towards the end of last year!

Planet Debian: Sean Whitton: This weekend's nutrition

Bird went out into the driveway alone. It wasn’t raining and the wind had died: the clouds sailing the sky were bright, dry. A brilliant morning had broken from the dawn’s cocoon of semidarkness, and the air had a good, first-days-of-summer smell that slackened every muscle in Bird’s body. A night softness had lingered in the hospital, and now the morning light, reflecting off the wet pavement and off the leafy trees, stabbed like icicles at Bird’s pampered eyes. Labouring into this light on his bike was like being poised on the edge of a diving board; Bird felt severed from the certainty of the ground, isolated. And he was as numb as stone, a weak insect on a scorpion’s sting. (Ōe, A Personal Matter (trans. John Nathan), ch. 2)

Forenoon: the most exhilarating hour of an early summer day. And a breeze that recalled elementary school excursions quickened the worms of tingling pleasure on Bird’s cheeks and earlobes, flushed from lack of sleep. The nerve cells in his skin, the farther they were from conscious restraint, the more thirstily they drank the sweetness of the season and the hour. Soon a sense of liberation rose to the surface of his consciousness. (ch. 3)

Planet Debian: Shirish Agarwal: WordPress.com tracking pictures and a minidebconf in Pune

This would be a big longish post.

I have been a wordpress.com user for a long time, and blogged elsewhere long before that. Some time back I began to notice that on a few blog posts the pictures were not appearing on planet.debian.org. I asked the planet maintainers what was going on. They in turn shared with me the list of filters that they use by default. While I’m not at liberty to share any of the filters, it did become clear from reading the regular expressions in the filters, and from conversations with the planet maintainers, that wordpress.com was at fault and not Planet Debian. I tried to see if there was anything I could do as a content producer, but apparently there is nothing: neither the media settings nor the general settings offer any way to stop the tracking.

Sharing a screenshot below –

Media settings in normal wordpress.com account.

So while there’s nothing I can do at the moment, I can share about the Debian event that we did at reserved-bit a couple of months ago. Before I start, here’s a small brief about reserved-bit: it’s a makerspace right next to where all the big IT companies are, where their people come to pass the time after work. It’s on top of a mall.

Reserved-bit is run jointly by Siddhesh and Nisha Poyarekar, a husband-and-wife duo. Siddhesh was working with Red Hat and now does his own thing. He works with Linaro and is a glibc maintainer, and I read somewhere that he was even the release manager for a couple of glibc releases (update: actually 2.25 and 2.26 of glibc, which is wow and also a big responsibility). Pune is to India what Detroit was to the States: we have a number of automobile companies, and Siddhesh did share that he was working on glibc variants for the automobile and embedded markets.

Nisha, his better half, is on the other hand more on the maker side of things; I believe she knows quite a bit about Arduino. There was a workshop on Arduino yesterday, but due to time and personal constraints I was not able to attend it, or I would have had more to share. She is the one looking after the day-to-day operations of reserved-bit, and Siddhesh chips in as and when he can.

Because of the image issue, I had been dragging my feet on posting about the event for more than a couple of months now. I do have access to a debconf gallery instance but was not comfortable using it for this. If I do attend a DebConf then probably that would be the place for that.

Anyways, about three months back Gaurav shared an e-mail on the local LUG mailing list. We were trying to get the college where the LUG meets, as it is in one of the central parts of the city, but due to scheduling changes it was decided to hold the event at reserved-bit. I had not talked with Praveen for some time but had an inkling that he might bring one of the big packages which have a lot of dependencies on them, which I shared in an e-mail. As can be seen, I also shared the immense list that Paul always has; free software is just growing in leaps and bounds, and what we are missing are more packagers and maintainers.

I also thought that it is possible that somebody might want to install debian and hence shared about that as well.

As I wasn’t feeling strong enough, I decided to forgo taking the lappy along. Fortunately, a friend arrived and together we were able to reach the venue on time. We probably missed the first 10 minutes or so, which would have been the introduction session.

Image of Praveen talking about various software

Image – courtesy Gaurav Sitlani

Praveen is in the middle, somewhat like me, with the beard and white t-shirt.

I had mentally prepared myself for newbie questions but refreshingly, even though there were a lot of amateurs, most of them had used Debian for some time. So instead of talking about why we need to have Debian as a choice, or why X distro is better than Y, we had more pointed, topical questions. There were questions about privacy as well, where Debian is strong and looking to set the bar even higher. I came to know much later that the Kali people are interested in porting most of their packages into Debian and maintaining them in main: more eyes to see the code, a larger superset of people using their work than those who use only Kali, and in time higher-quality packages, which is a win-win for everyone concerned.

As I had suspected, Praveen shared two huge lists of potential software that needs to be packaged. Before starting, he went through some of the introductory part of the npm2deb tutorial. I had played with build programs before, but npm2deb seemed a bit more automated than others, specifically in the way it picks up the metadata about the software to be packaged. I do and did realize that npm2deb is for specific bits only, and probably that is the reason it could be a bit more automated than something like makefiles, cmake or premake; the latter are more generic in nature, not tied to a specific platform or way of doing things.

He showed a demo of npm2deb and the resultant deb package, and ran lintian on top of it. He did share the whole list of software that needs to be packaged in order to see npm come into Debian. He and Shruti also did a crowdfunding campaign for it some time back.

I am not sure how many people noticed, but from what I recollect, both nodejs and npm came into Debian around June/July 2017. While I don’t know for sure, it seems Praveen and Shruti did the boring yet hard work to bring both of those biggish packages into Debian. There may be other people involved as well whom I don’t know about; any omission is unintentional. If anybody knows better, feel free to correct me and I will update it here.

Then after a while Raju shared the work he has been doing with Hamara, though not in great detail as a lot of work is yet to be done. There were questions about rolling releases and how often people update packages; while both Praveen and Raju pointed out that they did monthly updates, I am more of a weekly offender. I usually use debdelta to update packages, and it’s far easier to track and get the package diffs cheaply without using too much bandwidth.

I wanted to share about adequate, as I think it’s one of the better tools, but as it has been orphaned and nobody has stepped up, it seems it will die a death after some time. What a waste of a useful tool.

What we hadn’t prepared for was that somebody actually wanted to install Debian on their laptop then and there. I just had the netinstall USB stick by chance, but the person who wanted to install Debian had not done the preparatory work which needs to be done before setting up Debian. We had to send a couple of people to get a spare external hdd, which took time; then came copying the person’s data, formatting the partition, and sharing the different ways that Debian could be installed onto the system. There was a bit of bike-shedding there, as there are just too many ways. I personally lean towards having a separate /, /boot (part of which I am still unable to resolve under the whole Windows 10 nightmare), /home, /logs and swap. There was also a bit of discussion about swap, as the older 1:1 memory model doesn’t hold much water in the 8 GB+ RAM scenario.

By the time the external hdd came, we were able to download a CD .iso and show a minimal desktop installation. We had discussions about the various window managers and desktop environments, the differences and the similarities. IIRC, CD 1 has just LXDE, as none of the other desktop environments can fit on CD 1. I also shared my South African DebConf experience, as well as the whole year-long preparation it takes to organize a DebConf. IIRC, I *think* I shared that having a conference like that costs north of USD 100,000 (it cost that much for South Africa, beautiful country) – the Canadian one might have cost more, and the Taiwan one happening this coming July would cost the same even though accommodation is free. I did share that we had something like 300+ people for the conference; the Germany one the year before had 500, so for any Indian bid we would have to grow a whole lot more before we could think of hosting a DebConf in India.

There was interest from people in contributing to Debian, but this is where it gets a bit foggy: while some of the students wanted to contribute, they were not clear where they could. I think we shared the lists with them, showed them IRC/Matrix, and sort of left them to their own devices. I do think we mentioned #debian-welcome and #debian-mentors as possible points of contact. As all of us are busy with our lives, work etc., it does become hard to advise people. Over the years we have realized that it’s much better to just share the starting information and let them find out if there is something that interests them.

There was also discussion about different operating systems and how the work culture and other things differ from the Debian perspective. For example, I shared how we have borrowed quite a bit of security software from the BSD stable, and some plausible reasons for where BSD has made it big and where it sort of failed. The same was dissected for other operating systems in the free software space, and quite a few students realized it’s a big universe out there. We talked about Devuan and how a group of people who didn’t like systemd did their own thing, but at the same time realized the amount of time it takes to maintain a distro. In many ways, it is a miracle that Debian is able to remain independent and have its own morals and compasses. We also shared bits of the Debian constitution and manifesto, but not too much, otherwise it would have become too preachy.

Coming towards the end, it gives me quite a bit of pleasure to share that Debian will be taking part in Outreachy and GSoC at the same time. While the projects seem to be different, I do have some personal favorites. The most exciting to me as a user are –

1. Wizard/GUI helping students/interns apply and get started – While they have narrow-cased it, it should help almost everybody who has to get over the learning curve to make her/his contribution to Debian. Having all the tools configured and ready to work would make the job of onboarding onto Debian a whole lot easier.

2. Firefox and Thunderbird plugins for free software habits – It’s always a good idea to start off with privacy tools; it would make the journey into free software much easier and more enjoyable.

3. Android SDK Tools in Debian – This, I think, would be a multi-year project for as long as Android remains a competitor in the mobile space. Especially for Pune students, doing work with Android might lead to upstream work with Linaro, who have been working with companies and various stakeholders to bring more homogeneity to the kernel, which would make it more secure and more maintainable in the short and long run.

4. Open Agriculture Food Computer – This would probably be a bit difficult, but it fits colleges like COEP, who have CNC lathes and a 3-D printer, and there are benefactors interested in agriculture, such as Sharad Pawar and Nitin Gadkari. The TED link shared and reproduced below does give some idea. Vandana Shiva, who has been a cultural force and runs a seed bank, so that we have culture, recipes and food for generations, would be pretty much appropriate for the problems we face. It actually ties in with another TED talk on another global concern: the shortage of water and the recycling of water.

This again, from what I could assess with my almost non-existent agricultural skills, would be a multi-year project, as the science and understanding of it are at an early stage. People from agriculture, IT, atmospheric science etc. would all have a role in a project like this. The interesting part is that, from what has been shared, it seems there is a lot that can be done in that arena.

Lastly, I would like some of the more privacy-conscious people to weigh in on 1322748. I have used all the addons mentioned on the bugzilla at one time or another, and am stymied: my web experience is poorer because I cannot know whom to trust and whom not to without information about what ciphers the webmasters are using. Public pressure can only work when that information is available.

I am sure I missed a lot, but that’s all I could cover. If people have some more ideas or inputs, feel free to suggest in the comments and I will see if I can incorporate them in the blog post if need be.


Planet Linux Australia: Jonathan Adamczewski: Watch as the OS rewrites my buggy program.

I didn’t know that SetErrorMode(SEM_NOALIGNMENTFAULTEXCEPT) was a thing, until I wrote a bad test that wouldn’t crash.

Digging into it, I found that a movaps instruction was being rewritten as movups, which was a thoroughly confusing thing to see.

The one clue I had was that a fault due to an unaligned load had been observed in non-test code, but did not reproduce when written as a test using the google-test framework. A short hunt later (including a failed attempt at writing a small repro case), I found an explanation: google test suppresses this class of failure.

The code below will successfully demonstrate the behavior, printing out the SIMD load instruction before and after calling the function with an unaligned pointer.


View the code on Gist.
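The Gist itself isn't reproduced here, so what follows is a hypothetical reconstruction of that kind of repro, untested: it assumes MSVC on x64, where _mm_load_ps compiles to an aligned movaps, and a non-incremental link so the function pointer points at the actual code.

    /* Sketch: dump the first bytes of an SSE-load function before and
     * after calling it with an unaligned pointer. */
    #include <windows.h>
    #include <stdio.h>
    #include <xmmintrin.h>

    __declspec(noinline) float first_of_four(const float *p)
    {
        /* movaps: requires 16-byte alignment */
        return _mm_cvtss_f32(_mm_load_ps(p));
    }

    static void dump_code(const void *f)
    {
        const unsigned char *b = (const unsigned char *)f;
        for (int i = 0; i < 16; i++)  /* enough bytes to cover the load */
            printf("%02x ", b[i]);
        printf("\n");
    }

    int main(void)
    {
        /* ask the OS to fix up alignment faults instead of raising them */
        SetErrorMode(SEM_NOALIGNMENTFAULTEXCEPT);

        __declspec(align(16)) float buf[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        dump_code((const void *)first_of_four);
        printf("%f\n", first_of_four(buf + 1)); /* deliberately unaligned */
        dump_code((const void *)first_of_four);
        return 0;
    }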

Planet Debian: Keith Packard: CompositeAcceleration

Composite acceleration in the X server

One of the persistent problems with the modern X desktop is the number of moving parts required to display application content. Consider a simple PresentPixmap call as made by the Vulkan WSI or GL using DRI3:

  1. Application calls PresentPixmap with new contents for its window

  2. X server receives that call and pends any operation until the target frame

  3. At the target frame, the X server copies the new contents into the window pixmap and delivers a Damage event to the compositor

  4. The compositor responds to the damage event by copying the window pixmap contents into the next screen pixmap

  5. The compositor calls PresentPixmap with the new screen contents

  6. The X server receives that call and either posts a Swap call to the kernel or delays any action until the target frame

This sequence has a number of issues:

  • The operation is serialized between three processes with at least three context switches involved.

  • There is no traceable relation between when the application asked for the frame to be shown and when it is finally presented. Nor do we even have any way to tell the application what time that was.

  • There are at least two copies of the application contents, from DRI3 buffer to window pixmap and from window pixmap to screen pixmap.

We'd also like to be able to take advantage of the multi-plane capabilities in the display engine (where available) to directly display the application contents.

Previous Attempts

I've tried to come up with solutions to this issue a couple of times in the past.

Composite Redirection

My first attempt to solve (some of) this problem was through composite redirection. The idea there was to directly pass the Present'd pixmap to the compositor and let it copy the contents directly from there in constructing the new screen pixmap image. With some additional hand waving, the idea was that we could associate that final presentation with all of the associated redirected compositing operations and at least provide applications with accurate information about when their images were presented.

This fell apart when I tried to figure out how to plumb the necessary events through to the compositor and back. With that, and the realization that we still weren't solving problems inherent with the three-process dance, nor providing any path to using overlays, this solution just didn't seem worth pursuing further.

Automatic Compositing

More recently, Eric Anholt and I have been discussing how to have the X server do all of the compositing work by natively supporting ARGB window content. By changing compositors to place all screen content in windows, the X server could then generate the screen image by itself and not require any external compositing manager assistance for each frame.

Given that a primitive form of automatic compositing is already supported, extending that to support ARGB windows and having the X server manage the stack seemed pretty tractable. We would extend the driver interface so that drivers could perform the compositing themselves using a mixture of GPU operations and overlays.

This runs up against five hard problems though.

  1. Making transitions between Manual and Automatic compositing seamless. We've seen how well the current compositing environment works when flipping compositing on and off to allow full-screen applications to use page flipping. Lots of screen flashing and application repaints.

  2. Dealing with RGB windows with ARGB decorations. Right now, the window frame can be an ARGB window with the client being RGB; painting the client into the frame yields an ARGB result with the A values being 1 everywhere the client window is present.

  3. Mesa currently allocates buffers exactly the size of the target drawable and assumes that the upper left corner of the buffer is the upper left corner of the drawable. If we want to place window manager decorations in the same buffer as the client and not need to copy the client contents, we would need to allocate a buffer large enough for both client and decorations, and then offset the client within that larger buffer.

  4. Synchronizing window configuration and content updates with the screen presentation. One of the major features of a compositing manager is that it can construct complete and consistent frames for display; partial updates to application windows need never be shown to the user, nor does the user ever need to see the window tree partially reconfigured. To make this work with automatic compositing, we'd need to both codify frame markers within the 2D rendering stream and provide some method for collecting window configuration operations together.

  5. Existing compositing managers don't do this today. Compositing managers are currently free to paint whatever they like into the screen image; requiring that they place all screen content into windows would mean they'd have to buy in to the new mechanism completely. That could still work with older X servers, but the additional overhead of more windows containing decoration content would slow performance with those systems, making migration less attractive.

I can think of plausible ways to solve the first three of these without requiring application changes, but the last two require significant systemic changes to compositing managers. Ick.

Semi-Automatic Compositing

I was up visiting Pierre-Loup at Valve recently and we sat down for a few hours to consider how to help applications regularly present content at known times, and to always know precisely when content was actually presented. That names just one of the above issues, but when you consider the additional work required by pure manual compositing, solving that one issue is likely best achieved by solving all three.

I presented the Automatic Compositing plan and we discussed the range of issues. Pierre-Loup focused on the last problem -- getting existing Compositing Managers to adopt whatever solution we came up with. Without any easy migration path for them, it seemed like a lot to ask.

He suggested that we come up with a mechanism which would allow Compositing Managers to ease into the new architecture and slowly improve things for applications. Towards that, we focused on a much simpler problem:

How can we get a single application at the top of the window stack to reliably display frames at the desired time, and to know when that doesn't occur.

Coming up with a solution for this led to a good discussion and a possible path to a broader solution in the future.

Steady-state Behavior

Let's start by ignoring how we start and stop this new mode and look at how we want applications to work when things are stable:

  1. Windows not moving around
  2. Other applications idle

Let's get a picture I can use to describe this:

In this picture, the compositing manager is triple buffered (as is normal for a page flipping application) with three buffers:

  1. Scanout. The image currently on the screen

  2. Queued. The image queued to be displayed next

  3. Render. The image being constructed from various window pixmaps and other elements.

The contents of the Scanout and Queued buffers are identical with the exception of the orange window.

The application is double buffered:

  1. Current. What it has displayed for the last frame

  2. Next. What it is constructing for the next frame

Ok, so in the steady state, here's what we want to happen (a toy sketch of this bookkeeping follows the list):

  1. Application calls PresentPixmap with 'Next' for its window

  2. X server receives that call and copies Next to Queued.

  3. X server posts a Page Flip to the kernel with the Queued buffer

  4. Once the flip happens, the X server swaps the names of the Scanout and Queued buffers.
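Reduced to a toy, that bookkeeping looks like this (illustrative C, not actual X server code; the point is that only step 2 copies pixels, while step 4 merely swaps names):

    #include <stdio.h>

    typedef int Pixmap;  /* stand-in for a real pixmap handle */
    typedef struct { Pixmap scanout, queued, render; } Buffers;

    static void copy_pixmap(Pixmap src, Pixmap *dst) { *dst = src; }
    static void post_page_flip(Pixmap p) { printf("flip to %d\n", p); }

    /* steps 2 and 3: copy Next into Queued and post a flip */
    static void present_client(Buffers *b, Pixmap next)
    {
        copy_pixmap(next, &b->queued);
        post_page_flip(b->queued);
    }

    /* step 4: once the flip happens, Queued and Scanout trade names */
    static void flip_complete(Buffers *b)
    {
        Pixmap t = b->scanout;
        b->scanout = b->queued;
        b->queued = t;
    }

    int main(void)
    {
        Buffers b = { 1, 2, 3 };
        present_client(&b, 4);
        flip_complete(&b);
        printf("scanout=%d queued=%d\n", b.scanout, b.queued);
        return 0;
    }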

If the X server supports Overlays, then the sequence can look like:

  1. Application calls PresentPixmap

  2. X server receives that call and posts a Page Flip for the overlay

  3. When the page flip completes, the X server notifies the client that the previous Current buffer is now idle.

When the Compositing Manager has content to update outside of the orange window, it will:

  1. Compositing Manager calls PresentPixmap

  2. X server receives that call and paints the Current client image into the Render buffer

  3. X server swaps Render and Queued buffers

  4. X server posts Page Flip for the Queued buffer

  5. When the page flip occurs, the server can mark the Scanout buffer as idle and notify the Compositing Manager

If the Orange window is in an overlay, then the X server can skip step 2.

The Auto List

To give the Compositing Manager control over the presentation of all windows, each call to PresentPixmap by the Compositing Manager will be associated with the list of windows, the "Auto List", for which the X server will be responsible for providing suitable content. Transitioning from manual to automatic compositing can therefore be performed on a window-by-window basis, and each frame provided by the Compositing Manager will separately control how that happens.

The Steady State behavior above would be represented by having the same set of windows in the Auto List for the Scanout and Queued buffers, and when the Compositing Manager presents the Render buffer, it would also provide the same Auto List for that.

Importantly, the Auto List need not contain only children of the screen Root window. Any descendant window at all can be included, and the contents of that drawn into the image using appropriate clipping. This allows the Compositing Manager to draw the window manager frame while the client window is drawn by the X server.

Any window at all can be in the Auto List. Windows with PresentPixmap contents available would be drawn from those. Other windows would be drawn from their window pixmaps.

Transitioning from Manual to Auto

To transition a window from Manual mode to Auto mode, the Compositing Manager would add it to the Auto List for the Render image, and associate that Auto List with the PresentPixmap request for that image. For the first frame, the X server may not have received a PresentPixmap for the client window, and so the window contents would have to come from the Window Pixmap for the client.

I'm not sure how we'd get the Compositing Manager to provide another matching image that the X server can use for subsequent client frames; perhaps it would just create one itself?

Transitioning from Auto to Manual

To transition a window from Auto mode to Manual mode, the Compositing manager would remove it from the Auto List for the Render image and then paint the window contents into the render image itself. To do that, the X server would have to paint any PresentPixmap data from the client into the window pixmap; that would be done when the Compositing Manager called GetWindowPixmap.

New Messages Required

For this to work, we need some way for the Compositing Manager to discover windows that are suitable for Auto composting. Normally, these will be windows managed by the Window Manager, but it's possible for them to be nested further within the application hierarchy, depending on how the application is constructed.

I think what we want is to tag Damage events with the source window, and perhaps additional information to help Compositing Managers determine whether it should be automatically presenting those source windows or a parent of them. Perhaps it would be helpful to also know whether the Damage event was actually caused by a PresentPixmap for the whole window?

To notify the server about the Auto List, a new request will be needed in the Present extension to set the value for a subsequent PresentPixmap request.
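Purely to illustrate the shape such a request might take, here is a hypothetical wire form (the name and fields are invented for this sketch and are not part of any existing protocol):

PresentSetAutoList
    window:  WINDOW
    serial:  CARD32
    windows: LISTofWINDOW

    Associates 'windows' as the Auto List to be used by the next
    PresentPixmap request on 'window' carrying a matching 'serial'.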

Actually Drawing Frames

The DRM module in the Linux kernel doesn't provide any mechanism to remove or replace a Page Flip request. While this may get fixed at some point, we need to deal with how it works today, if only to provide reasonable support for existing kernels.

I think about the best we can do is to set a timer to fire a suitable time before vblank and have the X server wake up and execute any necessary drawing and Page Flip kernel calls. We can use feedback from the kernel to know how much slack time there was between any drawing and the vblank and adjust the timer as needed.
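To make that feedback loop concrete, here is a rough sketch of the adjustment logic (my illustration only, not actual X server code; all names are invented):

/* Sketch: adaptively pick a wakeup time before vblank, nudged by the
 * slack the kernel reports after each frame. */
typedef struct {
    long lead_us;    /* how long before vblank to wake up */
    long margin_us;  /* slack we aim to keep in reserve */
} flip_timer;

static long
flip_timer_wakeup(const flip_timer *t, long next_vblank_us)
{
    return next_vblank_us - t->lead_us;
}

static void
flip_timer_feedback(flip_timer *t, long slack_us)
{
    /* Too much slack: we woke earlier than needed, so shrink the lead.
     * Too little (or negative): we nearly missed, so grow it.
     * Never let the lead drop below the safety margin. */
    long error = slack_us - t->margin_us;
    t->lead_us -= error / 4;
    if (t->lead_us < t->margin_us)
        t->lead_us = t->margin_us;
}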

Given that the goal is to provide for reliable display of the client window, it might actually be sufficient to let the client PresentPixmap request drive the display; if the Compositing Manager provides new content for a frame where the client does not, we can schedule that for display using a timer before vblank. When the Compositing Manager provides new content after the client, it would be delayed until the next frame.

Changes in Compositing Managers

As described above, one explicit goal is to ease the burden on Compositing Managers by making them able to opt-in to this new mechanism for a limited set of windows and only for a limited set of frames. Any time they need to take control over the screen presentation, a new frame can be constructed with an empty Auto List.

Implementation Plans

This post is the first step in developing these ideas to the point where a prototype can be built. The next step will be to take feedback and adapt the design to suit. Of course, there's always the possibility that this design will also prove unworkable in practice, but I'm hoping that this third attempt will actually succeed.

,

Planet DebianDirk Eddelbuettel: RVowpalWabbit 0.0.12

And yet another boring little RVowpalWabbit package update, now to version 0.0.12, and still in response to the CRAN request of not writing files where we should not (as caught by new tests added by Kurt). I had misinterpreted one flag and actually instructed the examples and tests to write model files back to the installed directory. Oops. Now fixed. I also added a reusable script for such tests to the repo for everybody's perusal (but it will require Linux and bindfs).

No new code or features were added.

We should mention once more that there is parallel work ongoing in a higher-level package interfacing the vw binary -- rvw -- as well as a plan to redo this package via the external libraries. If that sounds interesting to you, please get in touch. I am also thinking that rvw extensions / work may make for a good GSoC 2018 project (and wrote up a short note). Again, if interested, please get in touch.

More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramFriday Squid Blogging: Kraken Pie

Pretty, but contains no actual squid ingredients.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianAntoine Beaupré: January 2018 report: LTS

I have already published a yearly report which covers all of 2017 but also some of my January 2018 work, so I'll try to keep this short.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. I was happy to switch to the new Git repository for the security tracker this month. It feels like some operations (namely pull / push) are a little slower, but others, like commits or log inspection, are much faster. So I think it is a net win.

jQuery

I did some work on trying to clean up a situation with the jQuery package, which I explained in more detail in a long post. It turns out there are multiple databases out there that track security issues in web development environments (like JavaScript, Ruby or Python) but do not follow the normal CVE tracking system. This means that Debian had a few vulnerabilities in its jQuery packages that were not tracked by the security team, in particular three that were only on Snyk.io (CVE-2012-6708, CVE-2015-9251 and CVE-2016-10707). The resulting discussion was interesting and is worth reading in full.

A more worrying aspect is that this problem is not limited to flashy new web frameworks. Ben Hutchings estimated that almost half of the Linux kernel vulnerabilities are not tracked by CVE. It seems the consensus is that we want to try to follow the CVE process, and Mitre has been helpful in distributing this work by letting other entities, called CVE Numbering Authorities or CNAs, issue their own CVEs. After contacting Snyk, it turns out that they have started the process of becoming a CNA and are trying to make this part of their workflow, so that's a good sign.

LTS meta-work

I've done some documentation and architecture work on the LTS project itself, mostly around updating the wiki with current practices.

OpenSSH DLA

I've done a quick security update of OpenSSH for LTS, which resulted in DLA-1257-1. Unfortunately, after a discussion with the security researcher who published that CVE, it turned out that this was only a "self-DOS", i.e. the NEWKEYS attack would only make the SSH client terminate its own connection, and therefore not impact the rest of the server. One has to wonder, in that case, why this was issued a CVE at all: presumably the vulnerability could be leveraged somehow, but I did not look deeply enough into it to figure that out.

Hopefully the patch won't introduce a regression: I tested it summarily and it didn't seem to cause issues at first glance.

An interesting attack (CVE-2017-18078) was discovered against systemd where the "tmpfiles" feature could be abused to bypass filesystem access restrictions through hardlinks. The trick is that the attack is possible only if kernel hardening (specifically fs.protected_hardlinks) is turned off. That feature has been available in the Linux kernel since the 3.6 release, but was actually turned off by default in 3.7. In the commit message, Linus Torvalds explained that the change was breaking some userland applications, which is a huge taboo in Linux, and recommended that distros configure this at boot instead. Debian took the reverse approach and Hutchings issued a patch which restores the more secure default. But this means users of custom kernels are still vulnerable to this issue.

But, more importantly, this affects more than systemd. The vulnerability also happens when using plain old chown with hardening turned off, when running a simple command like this:

chown -R non-root /path/owned/by/non-root

I didn't realize this, but hardlinks share permissions: if you change permissions on file a that's hardlinked to file b, both files get the new permissions, because both names point at the same inode. This is especially nasty if users can hardlink to critical files like /etc/passwd or suid binaries, which is why the hardening was introduced in the first place.
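You can see this for yourself with a harmless demo (run as an unprivileged user in a scratch directory):

cd "$(mktemp -d)"
touch a
ln a b                # b is a second name for the same inode
chmod 600 a           # "change a's permissions"
stat -c '%n %a' a b   # both names now report 600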

In Debian, this is especially an issue in maintainer scripts, which often call chown -R on arbitrary, non-root directories. Daniel Kahn Gillmor had reported this to the Debian security team all the way back in 2011, but it didn't get traction back then. He has now opened Debian bug #889066 to at least enable a warning in lintian, and an issue was also opened against colord (Debian bug #889060) as an example, but many more packages are vulnerable. Again, this applies only if the hardening is turned off.

Normally, systemd is supposed to turn that hardening on, which should protect custom kernels, but this was turned off in Debian. Anyway, Debian still supports non-systemd init systems (although most of those users have probably migrated to Devuan) so the fix wouldn't be complete. I have therefore filed Debian bug #889098 against procps (which owns /etc/sysctl.conf and related files) to try and fix the issue more broadly there.
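In the meantime, checking and enabling the protection by hand is easy enough (as root; the sysctl.d file name below is just an example):

sysctl fs.protected_hardlinks        # check the current state
sysctl -w fs.protected_hardlinks=1   # enable it immediately
echo 'fs.protected_hardlinks = 1' \
    > /etc/sysctl.d/99-protected-hardlinks.conf   # persist across reboots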

And to be fair, this was very explicitly mentioned in the jessie release notes, so those people without the protection kind of get what they deserve here...

p7zip

Lastly, I did a fairly trivial update of the p7zip package, which resulted in DLA-1268-1. The patch was sent upstream and went through a few iterations, including coordination with the security researcher.

Unfortunately, the latter wasn't willing to share the proof of concept (PoC) so that we could test the patch. We are therefore trusting the researcher that the fix works, which is too bad because they do not trust us with the PoC...

Other free software work

I probably did more stuff in January that wasn't documented in the previous report. But I don't know if it's worth my time going through all this. Expect a report in February instead! :)

Have a happy new year and all that stuff.

Planet DebianJoachim Breitner: The magic “Just do it” type class

One of the great strengths of strongly typed functional programming is that it allows type-driven development. When I have some non-trivial function to write, I first write its type signature, and then writing the implementation is often very obvious.

Once more, I am feeling silly

In fact, it often is completely mechanical. Consider the following function:

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))

This is somewhat like the bind for a combination of the error monad and the reader monad that also remembers the intermediate result, but that doesn't really matter now. What matters is that once I have written that type signature, I feel silly having to also write the code, because there isn't really anything interesting about it.
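For the record, here is the kind of mechanical implementation the signature dictates (one obvious candidate, written out by hand):

foo f g = \r -> do
    a <- f r
    b <- g a r
    return (a, b)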

Instead, I’d like to tell the compiler to just do it for me! I want to be able to write

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

And now I can! Assuming I am using GHC HEAD (or eventually GHC 8.6), I can run cabal install ghc-justdoit, and then the following code actually works:

{-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}
import GHC.JustDoIt
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

What is this justDoIt?

*GHC.LJT GHC.JustDoIt> :browse GHC.JustDoIt
class JustDoIt a
justDoIt :: JustDoIt a => a
(…) :: JustDoIt a => a

Note that there are no instances for the JustDoIt class -- they are created, on the fly, by the GHC plugin GHC.JustDoIt.Plugin. During type-checking, it looks at these JustDoIt t constraints and tries to construct a term of type t. It is based on Dyckhoff’s LJT proof search in intuitionistic propositional calculus, which I have implemented to work directly on GHC’s types and terms (and I find it pretty slick). Those who like Unicode can write (…) instead.

What is supported right now?

Because I am working directly in GHC’s representation, it is pretty easy to support user-defined data types and newtypes. So it works just as well for

data Result a b = Failure a | Success b
newtype ErrRead r e a = ErrRead { unErrRead :: r -> Result e a }
foo2 :: ErrRead r e a -> (a -> ErrRead r e b) -> ErrRead r e (a,b)
foo2 = (…)

It doesn’t infer coercions or type arguments or any of that fancy stuff, and carefully steps around anything that looks like it might be recursive.

How do I know that it creates a sensible implementation?

You can check the generated Core using -ddump-simpl of course. But it is much more convenient to use inspection-testing to test such things, as I am doing in the Demo file, which you can skim to see a few more examples of justDoIt in action. I very much enjoyed reaping the benefits of the work I put into inspection-testing, as this is so much more convenient than manually checking the output.

Is this for real? Should I use it?

Of course you are welcome to play around with it, and it will not launch any missiles, but at the moment, I consider this a prototype that I created for two purposes:

  • To demonstrate that you can use type checker plugins for program synthesis. Depending on what you need, this might allow you to provide a smoother user experience than the alternatives, which are:

    • Preprocessors
    • Template Haskell
    • Generic programming together with type-level computation (e.g. generic-lens)
    • GHC Core-to-Core plugins

    In order to make this viable, I slightly changed the API for type checker plugins, which are now free to produce arbitrary Core terms as they solve constraints.

  • To advertise the idea of taking type-driven computation to its logical conclusion and free users from having to implement functions that they have already specified sufficiently precisely by their type.

What needs to happen for this to become real?

A bunch of things:

  • The LJT implementation is somewhat neat, but I probably did not implement backtracking properly, and there might be more bugs.
  • The implementation is very much unoptimized.
  • For this to be practically useful, the user needs to be able to use it with confidence. In particular, the user should be able to predict what code comes out. If there are multiple possible implementations, there should be a clear specification of which implementations are more desirable than others, and it should probably fail if there is ambiguity.
  • It ignores any recursive type, so it cannot do anything with lists. It would be much more useful if it could do some best-effort thing here as well.

If someone wants to pick it up from here, that’d be great!

I have seen this before…

Indeed, the idea is not new.

Most famous in the Haskell world is certainly Lennart Augustsson’s Djinn tool that creates Haskell source expressions based on types. Alejandro Serrano has connected that to GHC in the library djinn-ghc, but I couldn’t use this because it was still outputting Haskell source terms (and it is easier to re-implement LJT than to implement type inference).

Lennart Spitzner’s exference is a much more sophisticated tool that also takes library API functions into account.

In the Scala world, Sergei Winitzki very recently presented the pretty neat curryhoward library that uses Scala macros. He seems to have some good ideas about ordering solutions by likely desirability.

And in Idris, Joomy Korkut has created hezarfen.

Planet DebianJoey Hess: improving powertop autotuning

I'm wondering about improving powertop's auto-tuning. Currently the situation is that, if you want to tune your laptop's power consumption, you can run powertop and turn on all the tunables and try it for a while to see if anything breaks. The breakage might be something subtle.

Then after a while you reboot and your laptop is using too much power again until you remember to run powertop again. This happens a half dozen or so times. You then automate running powertop --auto-tune or individual tuning commands on boot, probably using instructions you find in the Arch wiki.

Everyone has to do this separately, which is a lot of duplicated and rather technical effort for users, while developers are left with a lot of work to manually collect information, like Hans de Goede is doing now for enabling PSR by default.

To improve this, powertop could come with a service file to start it on boot, read a config file, and apply tunings if enabled.
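For comparison, this is roughly the kind of unit file users currently write by hand to get the same effect (a minimal sketch of the status quo, not something powertop ships today):

[Unit]
Description=Apply powertop tunings

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target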

There could be a simple GUI to configure it, where the user can report when it's causing a problem. In case the problem prevents booting, there would need to be a boot option that disables the autotuning too.

When the user reports a problem, the GUI could optionally walk them through a bisection to find the problematic tuning, which would probably take only 4 or so steps.

Information could be uploaded anonymously to a hardware tunings database. Developers could then use that to find and whitelist safe tunings. Powertop could also query it to avoid tunings that are known to cause problems on the laptop in question.

I don't know if this is a new idea, but if it's not been tried before, it seems worth working on.

Krebs on SecurityAttackers Exploiting Unpatched Flaw in Flash

Adobe warned on Thursday that attackers are exploiting a previously unknown security hole in its Flash Player software to break into Microsoft Windows computers. Adobe said it plans to issue a fix for the flaw in the next few days, but now might be a good time to check your exposure to this still-ubiquitous program and harden your defenses.

Adobe said a critical vulnerability (CVE-2018-4878) exists in Adobe Flash Player 28.0.0.137 and earlier versions. Successful exploitation could allow an attacker to take control of the affected system.

The software company warns that an exploit for the flaw is being used in the wild, and that so far the attacks leverage Microsoft Office documents with embedded malicious Flash content. Adobe said it plans to address this vulnerability in a release planned for the week of February 5.

According to Adobe’s advisory, beginning with Flash Player 27, administrators have the ability to change Flash Player’s behavior when running on Internet Explorer on Windows 7 and below by prompting the user before playing Flash content. A guide on how to do that is here (PDF). Administrators may also consider implementing Protected View for Office. Protected View opens a file marked as potentially unsafe in Read-only mode.

Hopefully, most readers here have taken my longstanding advice to disable or at least hobble Flash, a buggy and insecure component that nonetheless ships by default with Google Chrome and Internet Explorer. More on that approach (as well as slightly less radical solutions) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the Flash cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

By default, Mozilla Firefox on Windows computers with Flash installed runs Flash in a “protected mode,” which prompts the user to decide if they want to enable the plugin before Flash content runs on a Web site.

Another, perhaps less elegant, alternative to wholesale kicking Flash to the curb is to keep it installed in a browser that you don't normally use, and then only use that browser on sites that require Flash.

CryptogramSigned Malware

Stuxnet famously used legitimate digital certificates to sign its malware. A research paper from last year found that the practice is much more common than previously thought.

Now, researchers have presented proof that digitally signed malware is much more common than previously believed. What's more, it predated Stuxnet, with the first known instance occurring in 2003. The researchers said they found 189 malware samples bearing valid digital signatures that were created using compromised certificates issued by recognized certificate authorities and used to sign legitimate software. In total, 109 of those abused certificates remain valid. The researchers, who presented their findings Wednesday at the ACM Conference on Computer and Communications Security, found another 136 malware samples signed by legitimate CA-issued certificates, although the signatures were malformed.

The results are significant because digitally signed software is often able to bypass User Account Control and other Windows measures designed to prevent malicious code from being installed. Forged signatures also represent a significant breach of trust because certificates provide what's supposed to be an unassailable assurance to end users that the software was developed by the company named in the certificate and hasn't been modified by anyone else. The forgeries also allow malware to evade antivirus protections. Surprisingly, weaknesses in the majority of available AV programs prevented them from detecting known malware that was digitally signed even though the signatures weren't valid.

Worse Than FailureError'd: The Biggest Loser

"I don't know what's more surprising - losing $2,000,000 or that Yahoo! thought I had $2,000,000 to lose," writes Bruce W.

 

"Autodesk sent out an email about my account's password being changed recently. Now it's up to me to guess which $productName it is!" wrote Tom G.

 

Kurt C. writes, "I kept repeating my mantra: 'Must not click forbidden radio buttons...'"

 

"My son boarded a bus in Toronto and got a free ride when the driver showed him this crash message," Ari S. writes.

 

"For those who are in denial about global warming, may I please direct you to conditions in Wisconsin," wrote Chelsie S.

 

Billie J. wrote, "Sorry there, Walmart, but that's not how math works."

 


Planet DebianWouter Verhelst: Day four of the pre-FOSDEM Debconf Videoteam sprint

Day four was a day of pancakes and stew. Oh, and some video work, too.

Nattie

Did more documentation review. She finished the SReview documentation and got started on documenting the examples in our ansible repository.

Kyle

Finished splitting out the ansible configuration from the ansible code repository. The code repository now includes an example configuration that is well documented for getting started, whereas our production configuration lives in a separate repository.

Stefano

Spent much time on the debconf website, mostly working on a new upstream release of wafer.

He also helped review Kyle's documentation, and spent some time together with Tzafrir debugging our ansible test setup.

Tzafrir

Worked on documentation, and did a test run of the ansible repository. Found and fixed issues that cropped up during that.

Wouter

Spent much time trying to figure out why SReview was not doing what he was expecting it to do. Side note: I hate video codecs. Things are working now, though, and most of the fixes were implemented in a way that makes it reusable for other conferences.

There's one more day coming up today. Hopefully won't forget to blog about it tonight.

Planet DebianDaniel Pocock: Everything you didn't know about FSFE in a picture

As FSFE's community begins exploring our future, I thought it would be helpful to start with a visual guide to the current structure.

All the information I've gathered here is publicly available but people rarely see it in one place, hence the heading. There is no suggestion that anything has been deliberately hidden.

The drawing at the bottom includes Venn diagrams to show the overlapping relationships clearly and visually. For example, in the circle for the General Assembly, all the numbers add up to 29, the total number of people listed on the "People" page of the web site. In the circle for Council, there are 4 people in total and in the circle for Staff, there are 6 people, 2 of them also in Council and 4 of them in the GA but not council.

The financial figures on this diagram are taken from the 2016 financial summary. The summary published by FSFE is very basic, so the exact amount paid in salaries is not clear; the figure in the Staff circle probably covers a lot more than just salaries, and I feel FSFE gets good value for this money.

Some observations about the numbers:

  • The volunteers don't elect any representatives to the GA, although some GA members are also volunteers
  • From 1,700 fellowship members, only 2 representatives are elected, filling 2 of the 29 GA seats, yet those members provide over thirty percent of the funding through recurring payments
  • Out of 6 staff, all 6 are members of the GA (6 out of 29) since a decision to accept them at the last GA meeting
  • Only the 29 people in the General Assembly are full (legal) members of the FSFE e.V. association with the right to vote on things like constitutional changes. Those people are all above the dotted line on the page. All the people below the line have been referred to by other names, like fellow, supporter, community, donor and volunteer.

If you ever clicked the "Join the FSFE" button or filled out the "Join the FSFE" form on the FSFE web site and made a payment, did you believe you were becoming a member with an equal vote? If so, one procedure you can follow is to contact the president as described here and ask to be recognized as a full member. I feel it is important for everybody who filled out the "Join" form to clarify their rights and their status before the constitutional change is made.

I have not presented these figures to create conflict between staff and volunteers. Both have an important role to play. Many people who contribute time or money to FSFE are very satisfied with the concept that "somebody else" (the staff) can do difficult and sometimes tedious work for the organization's mission and software freedom in general. As I've been elected as a fellowship representative, I feel a responsibility to ensure the people I represent are adequately informed about the organization and their relationship with it, and I hope these notes and the diagram help to fulfil that responsibility.

Therefore, this diagram is presented to the community not for the purpose of criticizing anybody but for the purpose of helping make sure everybody is on the same page about our current structure before it changes.

If anybody has time to make a better diagram it would be very welcome.

Planet DebianJohn Goerzen: How are you handling building local Debian/Ubuntu packages?

I’m in the middle of some conversations about Debian/Ubuntu repositories, and I’m curious how others are handling this.

How are people maintaining repos for an organization? Are you integrating them with a git/CI (github/gitlab, jenkins/travis, etc) workflow? How do packages propagate into repos? How do you separate prod from testing? Is anyone running buildd locally, or integrating with more common CI tools?

I’m also interested in how people handle local modifications of packages — anything from newer versions of C libraries to newer interpreters. Do you just use the regular Debian toolchain, packaging them up for (potentially) the multiple distros/versions that you have in production? Pin them in apt?

Just curious what’s out there.

Some Googling has so far turned up just one relevant hit: Michael Prokop’s DebConf15 slides, “Continuous Delivery of Debian packages”. Looks very interesting, and discusses jenkins-debian-glue.

Some tangentially-related but interesting items:

Edit 2018-02-02: I should have also mentioned BuildStream

Planet DebianBits from Debian: Debian welcomes its Outreachy interns

We'd like to welcome our three Outreachy interns for this round, lasting from December 2017 to March 2018.

Juliana Oliveira is working on reproducible builds for Debian and free software.

Kira Obrezkova is working on bringing open-source mobile technologies to a new level with Debian (Osmocom).

Renata D'Avila is working on a calendar database of social events and conferences for free software developers.

Congratulations, Juliana, Kira and Renata!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentor students and outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Debian will also participate this summer in the next round for Outreachy, and is currently applying as mentoring organisation for the Google Summer of Code 2018 programme. Have a look at the projects wiki page and contact the Debian Outreach Team mailing list to join as a mentor or welcome applicants into the Outreachy or GSoC programme.

Join us and help extend Debian!

Planet Linux AustraliaOpenSTEM: Welcome Back!

Well, most of our schools are back, or about to start the new year. Did you know that there are schools using OpenSTEM materials in every state and territory of Australia? Our wide range of resources, especially those on Australian history, give detailed information about the history of all our states and territories. We pride […]

,

Cory DoctorowThe 2017 Locus List: a must-read list of the best science fiction and fantasy of the past year

Every year, Locus Magazine’s panel of editors reviews the entire field of science fiction and fantasy and produces its Recommended Reading List; the 2017 list is now out, and I’m proud to say that it features my novel Walkaway, in excellent company with dozens of other works I enjoyed in the past year.


2017 Locus Recommended Reading List
[Locus Magazine]

Planet DebianRitesh Raj Sarraf: Laptop Mode Tools 1.72

What a way to make a gift!

I'm pleased to announce the 1.72 release of Laptop Mode Tools. Major changes include the port of the GUI configuration utility to Python 3 and PyQt5. Some tweaks, fixes and enhancements in current modules. Extending {black,white}list of devices to types other than USB. Listing of devices by their devtype attribute.

A filtered list of changes is mentioned below. For the full log, please refer to the git repository. 

Source tarball, Fedora/SUSE RPM Packages available at:
https://github.com/rickysarraf/laptop-mode-tools/releases

Debian packages will be available soon in Unstable.

Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki
Mailing List: https://groups.google.com/d/forum/laptop-mode-tools

1.72 - Thu Feb  1 21:59:24 IST 2018
    * Switch to PyQt5 and Python3
    * Add btrfs to list of filesystems for which we can set commit interval
    * Add pkexec invocation script
    * Add desktop file to upstream repo and invoke script
    * Update installer to include gui wrappers
    * Install new SVG pixmap
    * Control all available cards in radeon-dpm
    * Prefer to use the new runtime pm autosuspend_delay_ms interface
    * tolerate broken device interfaces quietly
    * runtime-pm: Make {black,white}lists work with non-USB devices
    * send echo errors to verbose log
    * Extend blacklist by device types of devtype

What is Laptop Mode Tools

Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 .
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking
 .
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power

PS: This release took around 13 months. A lot of things changed, for me, personally. Some life lessons learnt. Some idiots uncovered. But the best of 2017, I got married. I am hopeful to keep work-life balanced, including time for FOSS.

Planet DebianRaphaël Hertzog: My Free Software Activities in January 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

While I continue to manage the administrative side of Debian LTS, I'm taking a break from the technical work (i.e. preparing and releasing security updates). The hope is that it will help me focus more on my book, which (still) needs to be updated for stretch. In truth, this did not happen in January, but I hope to do better in the upcoming months.

Salsa and related

The switch to salsa.debian.org is a major event in our community. Last month I started with the QA team and the distro-tracker repository as an experiment. This month I took the opportunity to bring to fruition a merge between the pkg-security team and the forensics team that I had already proposed in the past and that we had postponed because it was deemed busy work for little gain. Now that both teams had to migrate anyway, it was easier to migrate everything at once under a single project.

All our repositories are now managed under the same team in salsa: https://salsa.debian.org/pkg-security-team/ But for the mailing list we are still waiting for the new list to be created on lists.debian.org (#888136).

As part of this work, I contributed some fixes to the scripts maintained by Mehdi Dogguy. I also filed a wishlist request for a new script to make it easy to share repositories with the Debian group.

With the expected demise of the alioth mailing lists, there's some interest in getting the Debian package tracker to host the official maintainer email. As the central hub for most emails related to packages, it seems natural indeed. We made some progress lately on making it possible to use @packages.debian.org emails (with the downside of receiving duplicate emails currently), but that's not really an option when you maintain many packages and want to see them grouped under the same maintainer email. Furthermore it doesn't allow for automatic association of a package with its maintainer team. So I implemented a team+slug@tracker.debian.org email that works for each team registered on the package tracker and that will automatically associate the package with its team. The email is just a black hole for now (not really a problem, as most automatic emails are already received through another channel), but I expect to forward non-automatic mails to team members, making it useful as a way for team members to discuss among themselves.
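Schematically, the routing rule is as simple as this (a simplified illustration, not the actual distro-tracker code):

# Simplified illustration only -- not distro-tracker's real implementation.
def team_slug_for(local_part):
    """Map a 'team+<slug>' local part to the team's slug, else None."""
    prefix, sep, slug = local_part.partition("+")
    if prefix == "team" and sep and slug:
        return slug  # the caller would look the team up by this slug
    return None

assert team_slug_for("team+pkg-security") == "pkg-security"
assert team_slug_for("dispatch") is None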

The package tracker also learned to recognize commit mails generated by GitLab and it will now forward them to the source package whose name is matching the name of the GitLab project that generated them (see #886114).

Misc Debian stuff

Distro Tracker. I got my first two merge requests, which I reviewed and merged. One adds native HTML support to toggle action items (i.e. without JavaScript on recent browsers) and the other improves some of the messages shown by the vcswatch integration. In #886450, we discussed how to better filter build failure mails sent by the build daemons. New headers have been added.

Bug reports and patches. I forwarded and/or got moving a couple of bugs that we encountered in Kali (glibc: new data brought to #820826, raspi3-firmware: #887062, glibc: tracking down #886506 to a glibc regression affecting busybox, gr-fcdproplus: #888853 new watch file, gjs: upstream bug #33). I also needed a new feature in live-build so I filed #888507 which I implemented almost immediately (but released only in Kali because it’s not documented yet and can possibly be improved a bit further).

While doing my yearly accounting, I opened an issue on tryton and pushed a fix after approval. While running unit tests on distro-tracker, I got an unexpected warning that seems to be caused by virtualenv (see upstream issue #1120).

Debian Packaging. I uploaded zim 0.68~rc1-1 to experimental.

Thanks

See you next month for a new summary of my activities.


Planet DebianShirish Agarwal: webmail saga continues

I was pleased to see a reply from Daniel as a reaction to my post. I read and re-read the blog a couple of times yesterday and another time today to question my own understanding and see if there is any way I could make life easier and simpler for myself and the other people I interact with, but I am finding it somewhat of an uphill task. I will not be limiting myself to e-mail alone, as I feel that until we get/share the big picture it would remain incomplete.

Allow me to share a few observations below –

1. The first one is probably cultural in nature (whether specific to India or worldwide, I have no contextual information). Very early in my professional and personal life I understood that e-mails are leaky by design. By leaky I mean liable to be leaked by individuals for profit or some similar motive.

Also e-mails are and were used as misinformation tools by companies and individuals, then and now, quoting a sub-set or superset of them without providing the contextual information in which they were written. While this could be construed as a straw man, I do not know any other way. So the best way, at least for me, is to construct e-mails in a way where even if some information is leaked, I'm OK with it being leaked or being in the public domain. It just hurts less. I could probably give 10-15 high-profile public outings from the last 2-3 years alone. And these are millionaires and billionaires, people on whom many others rely for their livelihoods, who should have known better. Indian companies, in their communications, do have specific clauses saying that any communication you had with them is subject to privacy and that if you share it with somebody you will be prosecuted; on the other hand, if the company does it, it gets a free pass.

2. Because of my own experiences I have been pretty circumspect about, even slightly paranoid of, anybody promising or selling the Kool-Aid of total privacy. Another example, of slightly more recent vintage, which pains me even today, was a Mozilla add-on for which I had filed an RFP (Request for Package), which a person from pkg-mozext-maintainers@lists.alioth.debian.org (probably to be moved to salsa in the near future) packaged, and I thanked him/her for it. Two years later it came to the fore that, under the guise of protecting us from bad cookies or whatever the add-on was supposed to do, it was actually tracking us and selling this information to third parties.

This was found out by a security researcher casually auditing the code two years down the line (not Mozilla), and then confirmed by other security researchers as well. It was a moment of anguish for me, as so many people's privacy had been invaded even though there were good intentions on my side.

It was also a bit sad, as I had assumed (perhaps incorrectly) that Debian does some automated security auditing along with the hardening flags it uses when a package is built. This isn't to show Debian in a bad light but to understand and realize that Debian has its own shortcomings in many ways. I did hear recently that a lot of packages from Kali would make it to Debian core; hopefully some of those packages could also serve as additional tools to look at packages when they are being built 🙂

I do know it’s a lot to ask for as Debian is a volunteer effort. I am happy to test or whichever way I can contribute to Debian if in doing so we can raise the bar for intended or unintended malicious apps. to go through. I am not a programmer but still I’m sure there might be somehow I could add strength to the effort.

3. The other part is I don't deny that Google is intrusive. Google is intrusive not just in e-mail but in every way: every page that uses Google Analytics or the Google search spider can be used to track where you are and what you are doing. The way they have embedded themselves in web-pages, it has become almost impossible to view most web-pages (some exceptions remain) without allowing google.com to see what you are seeing. I use requestpolicy-continued to know what third-party domains are on a web-page, and I see fonts.googleapis.com, google.com and some of the others almost all the time. The problem there is that we also don't know how much information Google gathers. For example, even if I don't use the Google search engine, if I am searching on any particular topic and 3-4 of the websites I visit use Google in any form or manner, it would be easy to know the line or form of the information I'm looking for. That is actually as much of a problem to me as e-mails, if not more, and I have no solution for it. Tor and torbrowser-launcher are and were supposed to be an answer to this problem, but most big CDNs (Content Delivery Networks) like cloudflare.com are against it, so privacy remains an elusive dream there as well.

4. It becomes all the more dangerous in the mobile space, where Google Android is effectively the only vendor. The rise of carrier-handset locking, which is prevalent in the west, has also started making inroads in India. In the manufacturer-carrier-operating-system complex such things will become more common. I have no idea about other vendors, but from what I have seen I think the majority might probably be doing the same. The iPhone is also supposed to have a lot of nastiness where surveillance is concerned.

5. My main worry for ProtonMail or any other vendor is: should we just take them at face value, or is there some other way for people around the world to be assured, and, in case things take a turn for the worse, to be able to file claims for damages if those terms and conditions are not met? I looked to see if I could find an answer to this question, which I asked in my previous post, but didn't find any appropriate answer in your post. The only way out I see is decentralized networks and apps, but they too leave much to be desired. Two examples I can share of the latter. Diaspora started with the idea that I could have my profile on one pod and, if for some reason I didn't like the pod, I could take all the info to another pod, with all the messages, everything, in an instant. At least till a few months back, when I tried to migrate to another pod, that feature didn't work/was still a work in progress.

Similarly, zeronet.io is another service which claimed to use decentralization, but in the last year or so I haven't been able to send one email to another user till date.

I used both these examples as both are FOSS and both have considerable communities and traction built around them. Security and/or anonymity are still lagging behind there, though, as of yet.

I hope I was able to share where I’m coming from.

Planet DebianDaniel Pocock: Our future relationship with FSFE

Below is an email that has been distributed to the FSFE community today. FSFE aims to be an open organization and people are welcome to discuss it through the main discussion group (join, thread and reply) whether you are a member or not.

For more information about joining FSFE, local groups, campaigns and other activities please visit the FSFE web site. The "No Cloud" stickers and the Public Money Public Code campaign are examples of initiatives started by FSFE - you can request free stickers and posters by filling in this form.


Dear FSFE Community,

I'm writing to you today as one of your elected fellowship representatives rather than to convey my own views, which you may have already encountered in my blog or mailing list discussions.

The recent meeting of the General Assembly (GA) decided that the annual elections will be abolished but this change has not yet been ratified in the constitution.

Personally, I support an overhaul of FSFE's democratic processes and the bulk of the reasons for this change are quite valid. One of the reasons proposed for the change, the suggestion that the election was a popularity contest, is an argument I don't agree with: the same argument could be used to abolish elections anywhere.

One point that came up in discussions about the elections is that people don't need to wait for the elections to be considered for GA membership. Matthias Kirschner, our president, has emphasized this to me personally as well, he looks at each new request with an open mind and forwards it to all of the GA for discussion. According to our constitution, anybody can write to the president at any time and request to join the GA. In practice, the president and the existing GA members will probably need to have seen some of your activities in one of the FSFE teams or local groups before accepting you as a member. I want to encourage people to become familiar with the GA membership process and discuss it within their teams and local groups and think about whether you or anybody you know may be a good candidate.

According to the minutes of the last GA meeting, several new members were already accepted this way in the last year. It is particularly important for the organization to increase diversity in the GA at this time.

The response rate for the last fellowship election was lower than in previous years and there is also concern that emails don't reach everybody thanks to spam filters or the Google Promotions tab (if you use gmail). If you had problems receiving emails about the last election, please consider sharing that feedback on the discussion list.

Understanding where the organization will go beyond the extinction of the fellowship representative is critical. The Identity review process, championed by Jonas Oberg and Kristi Progri, is actively looking at these questions. Please contact Kristi if you wish to participate and look out for updates about this process in emails and Planet FSFE. Kristi will be at FOSDEM this weekend if you want to speak to her personally.

I'll be at FOSDEM this weekend and would welcome the opportunity to meet with you personally. I will be visiting many different parts of FOSDEM at different times, including the FSFE booth, the Debian booth, the real-time lounge (K-building) and the Real-Time Communications (RTC) dev-room on Sunday, where I'm giving a talk. Many other members of the FSFE community will also be present, if you don't know where to start, simply come to the FSFE booth. The next European event I visit after FOSDEM will potentially be OSCAL in Tirana, it is in May and I would highly recommend this event for anybody who doesn't regularly travel to events outside their own region.

Changing the world begins with the change we make ourselves. If you only do one thing for free software this year and you are not sure what it is going to be, then I would recommend this: visit an event that you never visited before, in a city or country you never visited before. It doesn't necessarily have to be a free software or IT event. In 2017 I attended OSCAL in Tirana and the Digital-Born Media Carnival in Kotor for the first time. You can ask FSFE to send you some free stickers and posters (online request with optional donation) to give to the new friends you meet on your travels. Change starts with each of us doing something new or different and I hope our paths may cross in one of these places.


For more information about joining FSFE, local groups, campaigns and other activities please visit the FSFE web site.

Please feel free to discuss this through the FSFE discussion group (join, thread and reply)

CryptogramJackpotting Attacks Against US ATMs

Brian Krebs is reporting sophisticated jackpotting attacks against US ATMs. The attacker gains physical access to the ATM, plants malware using specialized electronics, and then later returns and forces the machine to dispense all the cash it has inside.

The Secret Service alert explains that the attackers typically use an endoscope -- a slender, flexible instrument traditionally used in medicine to give physicians a look inside the human body -- to locate the internal portion of the cash machine where they can attach a cord that allows them to sync their laptop with the ATM's computer.

"Once this is complete, the ATM is controlled by the fraudsters and the ATM will appear Out of Service to potential customers," reads the confidential Secret Service alert.

At this point, the crook(s) installing the malware will contact co-conspirators who can remotely control the ATMs and force the machines to dispense cash.

"In previous Ploutus.D attacks, the ATM continuously dispensed at a rate of 40 bills every 23 seconds," the alert continues. Once the dispense cycle starts, the only way to stop it is to press cancel on the keypad. Otherwise, the machine is completely emptied of cash, according to the alert.

Lots of details in the article.

Worse Than FailureWe Sell Bonds!

We Sell Bonds!

The quaint, brick-faced downtown office building was exactly the sort of place Alexis wanted her first real programming job to be. She took a moment to just soak in the big picture. The building's façade oozed history, and culture. The busy downtown crowd flowed around her like a tranquil stream. And this was where she landed right out of college-- if this interview went well.

Alexis went inside, got a really groovy start-up vibe from the place. The lobby slash waiting room slash employee lounge slash kitchen slash receptionist desk was jam packed full of boxes of paperwork waiting to be unpacked and filed (once a filing cabinet was bought). The walls, still the color of unpainted drywall, accented with spats of plaster and glue-tape. Everything was permeated with chaotic beginnings and untapped potential.

Her interviewer, Mr. Chen, the CEO of the company, led her into the main conference room, which she suspected was the main conference room by virtue of being the only conference room. The faux-wood table, though moderately sized, still barely left room for herself and the five interviewers, crammed into a mish-mash of conference-room chairs, office chairs and one barstool. At least this room's walls had seen a coat of paint-- if only a single coat. Framed artwork sat on the ground, leaned up gently against the wall. She shared the artwork's anticipation-- waiting for the last coat of paint and touch-ups to dry, to hang proudly for all to see, fulfilling their destiny as the company grew and evolved around them.

"Thank you for coming in," said Mr. Chen as he sat at the head of the conference table.

"Thank you for having me," Alexis replied, sitting opposite him, flanked by the five other interviewers. She was glad she'd decided to play cautious and wear her formal 'Interview Suit'. She fit right in with the suits and ties everyone else was wearing. "I really dig the office space. How long have you been here?"

"Five years," Mr. Chen answered.

Her contextual awareness barely had time to register the whiplash of unpainted walls and unhung pictures in a long occupied office-- not that she had time to process that thought anyways.

"Let the interview begin now," Mr. Chen said abruptly. "Tell me your qualifications."

"I-- uh, okay," Alexis sat up straight and opened her leather folder, "Does everyone have a copy of my resume? I printed extra in case-- "

"We are a green company," Mr. Chen stated.

Alexis froze for a moment, her hand inches from the stack of resumes. She folded her hands over her own copy, smiled, and recovered. "Okay. My qualifications..." She filled them in on the usual details-- college education, GPA, co-op jobs, known languages and frameworks, contributions to open source projects. It was as natural as any practice interview she'd ever had. Smile. Talk clearly. Make eye contact with each member of the interview team for an appropriate length of time before looking at the next person.

Though doing so made her acutely aware that she had no idea who the other people were. They'd never been introduced. They were just-- there.

As soon as she'd finished her last qualification point, Mr. Chen spoke. "Are you familiar with the bonds market?"

She'd done some cursory Wikipedia research before her interview, but admitted, "An introductory familiarity."

"You are not expected to know it," Mr. Chen said, "The bond market is complicated. Very complicated. Even experienced brokers who know about futures contracts, forward contractions, options, swaps and warrants often have no idea how bonds work. But their customers still want to buy a bond. The brokers are our customers, and allowing them to buy bonds is the sole purpose of 'We Sell Bonds!'."

Though Mr. Chen had a distinctly dry and matter-of-fact way of speaking, she could viscerally HEAR the exclamation point in the company's name.

"Very interesting," Alexis said. Always be sure to compliment the interviewer at some point. "What sort of programming projects would I be working on?"

"The system is very complicated," Mr. Chen retorted. "Benny is our programmer."

One of the suited individuals to her left nodded, and she smiled back at him. At least now she knew one other person's name.

"He will train you before you may modify the system. It is very important the system be working properly, and any development must be done carefully. At least six months of training. But the system gathers lots of data, from markets, and from our customers. That data must be imported into the system. That will be part of your duties."

Again, Alexis found herself sitting behind a default smile while her brain processed. The ad she'd answered had clearly been for a junior developer. It had asked for developer skills, listed must-know programming languages, and even been titled 'Junior Developer'. Without breaking the smile, she asked, "What would you say the ratio of data handling to programming would be?"

"I would say close to one hundred percent."

Alexis' heart sank, and she curled her toes to keep any physical sign of her disappointment from showing. She mentally turned to her silver-linings view. Sure, it was data entry-- but she'd still be hands-on with a complicated financial system. She'd get training. Six months of training, which would be a breeze compared to full-time college. And if there really was that much data entry, then the job would be perfect for a fresh mind. There'd be TONS of room for introducing automation and efficiency. What more could a junior developer want?

"That sounds great," Alexis said, enthusiastic as ever.

"Good," Mr. Chen said. "The job starts on Monday."

Her whiplash systems had already long gone offline from overload. Was that a job offer?

"That-- sounds great!" Alexis repeated.

"Good. Nadine will email your paperwork. Email it back promptly."

And now Alexis knew a second person's name. "I look forward to meeting the whole company," she said aloud.

"You have," Mr. Chen replied, gesturing to the others at the table. "We will return to work now. Good day."

Alexis found herself back on the sidewalk outside the quaint brick-faced downtown office building, gainfully employed and not sure if she actually understood what the heck had just happened. But that was a problem for Monday.

#

Alexis arrived fifteen minutes early for her first day at the quaint brick-faced downtown office-- no, make that HER quaint brick-faced downtown office.

Fourteen minutes later, Mr. Chen unlocked the front-door from the inside, and let her in.

"You're early," he stated, locking the door behind her.

"The early bird gets the worm," she clichéd.

"You don't need to be early if you are punctual. Follow."

Mr. Chen led her through the lobby, and once again into the main boardroom. As before, five people sat around the conference table. Alexis figured there'd be formalities and paperwork to file before she got a desk. HER desk! The whole company (all six of them-- though now it was seven) was here to greet her. And, for some reason, they'd brought their laptops.

"You will sit beside Benny," Mr. Chen said, taking his seat.

"I-- huh?"

Next to Benny, there was an empty chair, and an unoccupied laptop. Alexis slunk around the other chairs, careful not to knock over the framed posters that were still propped against the wall, and sat beside the lead programmer.

"Morning meeting before getting down to work, huh?" she said, smiling at him.

Benny gave her a sideways glance. "We are working."

Alexis wasn't sure what he meant-- and then she noticed, for the first time, that everyone was heads down, looking at their screens, typing away. This wasn't just a boardroom. This was her desk. This was everyone's desk.

Over the morning, Benny gave her his split attention-- interspersing his work with muttered instructions to her: how to log in, where the data files were, how to install Excel. He would only talk to her in-between steps of his task, never interrupting a step to give her attention. Sometimes she just sat there and watched him watch a progress bar. She gathered he was upgrading a server's instance of SQL Server from version "way too old" to version "still way too old, but not as much".

After lunch (also eaten at the shared desk), Benny actually looked at her.

"Time for your first task," he said, giving her a sheet of paper. "We have a new financial institution buying bonds from us. They will use our Single SignOn Solution. You will need to create these accounts."

She took the sheet of paper, a simple printed table with first name, last name, company name, username and password.

Alexis was recently enough out of college that "Advanced Security Practices 401" was still fresh in her mind-- and seeing a plaintext password made her bones nearly crawl out of her skin.

"I-- um-- are there supposed to be passwords here?"

Benny nodded. "Yes. To facilitate single sign-on, usernames and passwords in 'We Sell Bonds!' website must exactly match those used in the broker's own system. Their company signs up for 'We Sell Bonds!', and they are provided with a website skinned to look like their own. The company's employees are given a seamless experience. Most don't even know they are on a different site."

Her skin gnawed on her bones to keep them in place. "But, if the passwords here are in plaintext, that is their real, actual password?"

Benny gave her the same nod. "They must be. Otherwise we could not log in to test their account."

That either made perfect sense, or had dumbfounded all the sense out of Alexis, so she just said "Ok." The rest of the day was spent creating accounts through an ASP interface, then logging into the site to test them.

When she arrived at the quaint brick-faced office building the next day, there was a large stack of papers at her spot at the communal desk. Benny said, "Mr. Chen was happy with your data entry yesterday."

Mr. Chen, who was seated at the head of the shared desk, didn't look up from his laptop screen. "You are allowed to enter this data too."

"Thank you?" Alexis settled in, and got to work. For every record she entered, a different way of optimizing this system would flitter through her mind. A better entry form, maybe auto-focus the first field? How about an XML dump on a USB disk? Or a SOAP service that could interface directly with the database? There could be a whole validation layer to check the data and pre-emptively catch data errors.

Data errors like the one she was looking at right now. She waited patiently for Benny to complete whatever step of his task he was on, and showed him the offending records.

"I don't see the problem," Benny said, shortly.

"John Smith and Jon Smith both have the same username, jsmith" she said, not sure how to make it more clear.

"Yes, they do," Benny confirmed.

"They can't both have the same username."

"They can!" Mr. Chen's sudden interjection startled her-- though she wasn't sure if it was because of the sharpness of his tone, or because she hadn't actually heard him speak for a day and a half. "Do you not see that they have different passwords?"

"Uh," Alexis checked, "They do. But the username is still the same."

There was no response. Mr. Chen was already looking back at his screen. Benny was looking at her expectantly.

"So users are differentiated by their-- password?" she said, trying to grasp what the implications of that would be. "What if someone changes their password to be-- "

"Users don't change passwords," Benny replied. "That would break single sign-on. If a user changes their password in their home system, their company will submit a change request to us to modify the password on 'We Sell Bonds!'."

Alexis blinked-- this time certain that this made no sense, and she was actually dumbfounded. But Benny must have taken her silence as 'question answered', and immediately started his next task. It made no sense, but she was still a junior developer, fresh out of school; full of ideas but short on experience. Maybe-- yeah, maybe there was a reason to do it this way. One that made sense once she learned to stop thinking so academically. That must be it.

She dutifully entered two records for jsmith and kept working on the pile.

#

Friday. The end of her first real work week. Such a long, long week of data entry, interspersed by being allowed to witness a small piece of the system as Benny worked on his upgrades. At least she knew now which version of SQL Server was in use; and that Benny avoided the braces-versus-no-braces argument by just using VBScript, which was "pure and brace-free"; and that stored procedures were allowed because raw SQL was too advanced to trust to human hands.

Alexis stood in front of the quaint brick-faced office building. It was familiar enough now, after even just a week, that she could see the discoloured mortar, and cracked bricks near the foundation, and the smatterings of dirt and debris that came with being on a busy downtown street.

She went into the office, and sat down at the desk. Another stack of papers for her to enter, just like the day before, just like every day this week. Though something was different. In the middle of the table, there was a box of donuts from the local bakery.

"Well, that's nice," she said as she sat down. "Happy Friday?"

Everyone looked up at her at the same time.

"No," Mr. Chen stated, "Friday is not a celebration; please do not detract from Benny's success."

She felt like she wanted to apologize, but she didn't know why. "What's the occasion, Benny?"

"He has completed the upgrade of the database. We celebrate his success."

That seemed reasonable enough. Mr. Chen opened the box. There was an assortment of donuts. Seven donuts. Exactly seven donuts. Not a dozen. Not half a dozen. Seven. Who buys seven donuts?

Mr. Chen selected one, and then the box was passed around. Alexis didn't know much about her coworkers (a fact that, upon thinking about it, was not normal)-- but she did know enough about their positions to recognize the box being passed around in order of seniority. She took the last one, a plain cake donut.

Of course.

"Well," she said, making a silver lining out of a plain donut, "Congratulations, Benny. Cheers to you."

"Thank you," he said, "I was finally able to successfully update the server for the first time last night."

"Nice. When do we roll it out to the live website?"

Benny looked at her blankly. "The website is live."

"Yeah, I know," Alexis said, swallowing the last bit of donut. It landed hard on the weird feeling she had in her stomach. "But, y'know-- you upgraded whatever environment you were experimenting on, right? So now that that's done, are you, like-- going to upgrade the live, production server over the weekend or something-- like, off hours?"

"I have upgraded the live, production server. That is our server. That is where we do all the work."

Alexis became acutely aware that the weird feeling in her stomach was a perfectly normal and natural reaction to thinking about doing work directly on a live, production server that served hundreds of customers handling millions of dollars.

"Oh."

Mr. Chen finished his donut and said, "Benny is a proper, careful worker. There is no need to waste resources on many environments when one can just do the job correctly in the first place. Again, good work, Benny, and now the day begins."

Everyone turned to their laptops, and so did Alexis, reflexively. She started in on the first stack of papers to enter into the database-- the live, production database she was interfacing directly with-- when she heard a sound she'd never heard before.

A phone rang.

The man beside Mr. Chen-- Trevor, she thought his name was-- stood up and excused himself to the lobby to answer the phone. He returned after a few moments, and put a piece of paper on top of her pile.

"That request should be queued at the bottom of her pile," Mr. Chen said as soon as Trevor's hand was off the paper.

"I believe this may be a case of priority," Trevor replied. He had a nice voice. Why hadn't she heard her co-worker's voice after a week of working here? "A user cannot log in."

She glanced down at the paper. There was a username and password jotted down. When she looked back up at Mr. Chen, he waved her to proceed. Alexis pulled up the "We Sell Bonds!" home page, and tried to log in as "a.sanders".

The logged-in page popped up. "Huh, seems to be working now."

"No," Trevor said, "You should be logged in as Andrew Sanders from Initech Bonds and Holdings, not Andrew Sanders from Fiduciary Interests."

"But I am logged in as a.sanders from Initiech, see?" she brought up the account details to show Trevor.

"No, I tried it myself. I will show you." Trevor took her laptop, repeated the login steps. "There."

"Huh." Alexis stared at the account information for Andrew Sanders from Fiduciary Interests. "Maybe one of us is typing in the wrong password?"

Alexis tried again, and Trevor tried-- and this time got the results reversed. They tried a new browser session, and both got Initech. Then they tried different browsers, and Trevor got Initech twice in a row. They copied and pasted usernames and passwords to and from Notepad. No matter what they tried, they couldn't consistently reproduce which Andrew Sanders they got.

As Alexis tried over and over to come up with something or anything to explain it, Benny was frantically running through code, adding Response.Write("<!-- some debugging message -->") anywhere he could think might help.

By this point the whole company was watching them. While that shouldn't be noteworthy since the entire company was in the same room, being paid attention to by this particular group of coworkers was extremely noticeable.

And of all the looks that fell on her, the most disconcerting was Mr. Chen's gaze.

"Determine the cause of this disruption to our website," he said flatly.

"I don't get it," Alexis said, "This doesn't make any sense. We should be able to determine what's causing this bug, but-- um-- hang on."

Determine-- the word tugged at her, begging to be noticed. Or begging her to notice something. Something she'd seen on Benny's screen. A SQL query. It reminded her of a term from one of her Database Management exams. Deterministic. Yes, of course!

"Benny, go back to that query you had on screen!" she exclaimed! "Yes, that one!"

As she pointed at Benny's screen, Mr. Chen was already on his feet, heading over. A perfect chance for her to finally prove her worth as a developer.

"That query, right there, for getting the user record. It's using a view and-- may I?" she took over Benny's laptop, focused on the SQL Management Studio, but excitedly talking aloud as she went.

"Programmability... views... VIEW_ALL_USERS... aha! Check it out."

SELECT TOP 100 PERCENT *
FROM TABLE_USERS
ORDER BY UserCreatedDate

"Which," she clicked back to the query, "Is used here..."

SELECT *
FROM TABLE_USERS
WHERE username=@Username and password=@Password

"... and we only use the first record we return, but I've read about this! Okay, like, the select without an ORDER BY returns in random order-- no no no, NON-DETERMINISTIC order, basically however the optimizer felt like returning them, based on which ones returned faster, or what it had for breakfast that day, but totally non-deterministic. No "ORDER BY" means no order. Or at least it is supposed to, but, like, SQL Server 2000 had this bug in the optimizer, which became this epic 'undocumented feature'. When you did TOP 100 PERCENT along with an ORDER BY in a view, the optimizer just bugged the fudge out and did the sorting wrong, but did the sorting wrong in a deterministic way. In effect, it would obey the ORDER BY inside the view, but only by accident. But, like I said, that was a bug in SQL Server 2000, and Benny, WE JUST UPGRADED TO SQL SERVER 2005!"

She held her hands out, the solution at last. Mr. Chen was standing right there. Okay, perfect-- because what had Logical Thinking and Troubleshooting 302 taught her? Don't just identify a PROBLEM, also present a SOLUTION!

"Okay, look-- I bet if I query for users with this username and this password-- " she typed the query in frantically-- "see, right there, that's Andrew Sanders from Initech AND Andrew Sanders from Fiduciary Interests. They both have the same username and password, so they're both returned. I bet no one ever noticed before. That other guy has no activity on his account. So all we really have to do is put the same ORDER BY into the query itself-- and-- click click, copy paste-- there! Log in and-- there's Mr. Initech. Log out, log in, log out, log in-- I could do this all day and we'll get the same results. Tah-dah!"

She sat back in her chair, grinning at her captive audience. But they weren't grinning back. Instead they were averting their gaze. Everyone-- except for Mr. Chen. There was no doubt he was staring right at her. Glaring.

"Undo that immediately," he said, in an extremely calm voice that did not match his reddening face.

"I, uh-- okay?" she reached for the keyboard.

"BENNY!" Mr. Chen rebuked, "Benny, you undo those changes."

Benny snatched the laptop away, and with a barrage of CTRL-Z wiped away her change.

"But-- that fixes the bug-- "

"No," Mr. Chen stated, "The CORRECT fix is to delete the second record, and inform Fiduciary Interest that their Andrew Sanders may not have access to this system until he changes his password to something unique. Then there is no code change needed."

"But but-- " Alexis stumbled, "It's a documented Microsoft bug, and if-- "

"The code of 'We Sell Bonds!' is properly and carefully written. We do not change it to conform to someone else's mistakes. This complex code change you unilaterally impose is unknown, untested, unreliable and utterly unacceptable. You would determine the course of a financial business based on an outrageous outside case?"

"But, it's happening and causing a problem now and-- "

Benny pointed at his screen, where he'd entered a query with a GROUP BY and HAVING. "Only eight usernames are duplicated like that."

"Vanishingly small," Mr. Chen said, "Benny, print out those users, and then delete them. Nadine, contact those companies and inform them those users will not have access to the website until their information is corrected. With that solved, we can all resume work."

Everyone at the company returned to their tasks. Alexis stared at her screen for a moment, at the ASP management screen that waited for her data to be entered. It didn't implement any change. It didn't introduce any progress. It was just an ASP form for data entry. And that was her job.

She entered her data.

At lunch, when everyone in the company got up to take their break, Mr. Chen motioned for her to sit back down. After the rest of the company filed out, he spoke.

"Alexis, although your ability to interface with the system is adequate, I am afraid your inability to focus on your task is not. I require a worker who is careful and proper, and you are not. Thank you for your time. You will be paid for the remainder of the day, but may go home now. I will see you out."

Alexis erred on the better side of valor, and did not shout in his face that he couldn't fire her, because she was quitting.

Mr. Chen ushered her out the front door, and locked it behind her.

Alexis stood on the busy sidewalk, the lunchtime crowd pushing and shoving their way past her. She looked back on the quaint, brick-faced office building. On the surface, it had been exactly what she'd wanted from her first programming job. She only got one "first" job, and it had ended up being-- that.

She wallowed for a moment, and then pulled herself back together. No. Data entry did not a programming job make. Her real first programming job was still ahead of her, somewhere. And next time, when she thought she'd found it, she would first look-- properly and carefully-- past the quaint surface to what lay beneath.


Planet Linux AustraliaCraige McWhirter: Querying Installed Package Versions Across An Openstack Cloud

AKA: The Joy of juju run

Package upgrades across an OpenStack cloud do not always happen at the same time. In most cases they may happen within an hour or so across your cloud, but for a variety of reasons some upgrades may be applied inconsistently, delayed or blocked on some servers.

As these packages may be rolling out a much needed patch or perhaps carrying a bug, you may wish to know which services are impacted in fairly short order.

If your OpenStack cloud is running Ubuntu and managed by Juju and MAAS, here's where juju run can come to the rescue.

For example, perhaps there's an update to the Corosync library libcpg4 and you wish to know which of your HA clusters have what version installed.

From your Juju controller, create a list of servers managed by Juju:

Juju 1.x:

$ juju stat --format tabular > jsft.out

Now you could fashion a query like this, utilising juju run:

$ for i in $(egrep -o '[a-z]+-hacluster/[0-9]+' jsft.out | cut -d/ -f1 | sort -u);
do juju run --timeout 30s --service $i "dpkg-query -W -f='\${Version}' libcpg4" | \
python -c 'import yaml,sys;print("\n".join(["{} == {}".format(y["Stdout"], y["UnitId"]) for y in yaml.safe_load(sys.stdin)]))';
done

The output returned will look something like this:

2.3.3-1ubuntu4 == ceilometer-hacluster/1
2.3.3-1ubuntu4 == ceilometer-hacluster/0
2.3.3-1ubuntu4 == ceilometer-hacluster/2
2.3.3-1ubuntu4 == cinder-hacluster/0
2.3.3-1ubuntu4 == cinder-hacluster/1
2.3.3-1ubuntu4 == cinder-hacluster/2
2.3.3-1ubuntu4 == glance-hacluster/3
2.3.3-1ubuntu4 == glance-hacluster/4
2.3.3-1ubuntu4 == glance-hacluster/5
2.3.3-1ubuntu4 == keystone-hacluster/1
2.3.3-1ubuntu4 == keystone-hacluster/0
2.3.3-1ubuntu4 == keystone-hacluster/2
2.3.3-1ubuntu4 == mysql-hacluster/1
2.3.3-1ubuntu4 == mysql-hacluster/2
2.3.3-1ubuntu4 == mysql-hacluster/0
2.3.3-1ubuntu4 == ncc-hacluster/1
2.3.3-1ubuntu4 == ncc-hacluster/0
2.3.3-1ubuntu4 == ncc-hacluster/2
2.3.3-1ubuntu4 == neutron-hacluster/2
2.3.3-1ubuntu4 == neutron-hacluster/1
2.3.3-1ubuntu4 == neutron-hacluster/0
2.3.3-1ubuntu4 == osd-hacluster/0
2.3.3-1ubuntu4 == osd-hacluster/1
2.3.3-1ubuntu4 == osd-hacluster/2
2.3.3-1ubuntu4 == swift-hacluster/1
2.3.3-1ubuntu4 == swift-hacluster/0
2.3.3-1ubuntu4 == swift-hacluster/2

Juju 2.x:

$ juju status > jsft.out

Now you could fashion a query like this:

$ for i in $(egrep -o 'hacluster-[a-z]+/[0-9]+' jsft.out | cut -d/ -f1 |sort -u);
do juju run --timeout 30s --application $i "dpkg-query -W -f='\${Version}' libcpg4" | \
python -c 'import yaml,sys;print("\n".join(["{} == {}".format(y["Stdout"], y["UnitId"]) for y in yaml.safe_load(sys.stdin)]))';
done

The output returned will look something like this:

2.3.5-3ubuntu2 == hacluster-ceilometer/1
2.3.5-3ubuntu2 == hacluster-ceilometer/0
2.3.5-3ubuntu2 == hacluster-ceilometer/2
2.3.5-3ubuntu2 == hacluster-cinder/1
2.3.5-3ubuntu2 == hacluster-cinder/0
2.3.5-3ubuntu2 == hacluster-cinder/2
2.3.5-3ubuntu2 == hacluster-glance/0
2.3.5-3ubuntu2 == hacluster-glance/1
2.3.5-3ubuntu2 == hacluster-glance/2
2.3.5-3ubuntu2 == hacluster-heat/0
2.3.5-3ubuntu2 == hacluster-heat/1
2.3.5-3ubuntu2 == hacluster-heat/2
2.3.5-3ubuntu2 == hacluster-horizon/0
2.3.5-3ubuntu2 == hacluster-horizon/1
2.3.5-3ubuntu2 == hacluster-horizon/2
2.3.5-3ubuntu2 == hacluster-keystone/0
2.3.5-3ubuntu2 == hacluster-keystone/1
2.3.5-3ubuntu2 == hacluster-keystone/2
2.3.5-3ubuntu2 == hacluster-mysql/0
2.3.5-3ubuntu2 == hacluster-mysql/1
2.3.5-3ubuntu2 == hacluster-mysql/2
2.3.5-3ubuntu2 == hacluster-neutron/0
2.3.5-3ubuntu2 == hacluster-neutron/2
2.3.5-3ubuntu2 == hacluster-neutron/1
2.3.5-3ubuntu2 == hacluster-nova/1
2.3.5-3ubuntu2 == hacluster-nova/2
2.3.5-3ubuntu2 == hacluster-nova/0

You can of course substitute libcpg4 in the above query for any package that you need to check.
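If you only need to spot-check a single unit rather than every unit of an application, juju run can also target one directly (the unit name below is just an example):

$ juju run --timeout 30s --unit hacluster-keystone/0 "dpkg-query -W -f='\${Version}' libcpg4"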

Far and away my favourite feature of Juju at present, juju run reminds me of knife ssh, which is unsurprisingly one of my favourite features of Chef.

Sociological ImagesSelling the Sport Spectacle

That large (and largely trademarked) sporting event is this weekend. In honor of its reputation for massive advertising, Lisa Wade tipped me off about this interesting content analysis of last year’s event by the Media Education Foundation.

MEF watched last year’s big game and tallied just how much time was devoted to playing and how much was devoted to ads and other branded content during the game. According to their data, the ball was only in play “for a mere 18 minutes and 43 seconds, or roughly 8% of the entire broadcast.”

MEF used a pie chart to illustrate their findings, but readers can judge differences in bar heights more accurately than differences in pie-slice angles. Using their data, I quickly made this chart to make comparing branded and non-branded content easier.

Data Source: Media Education Foundation, 2018

One surprising thing that jumps out of this data is that, for all the hubbub about commercials, far and away the most time is devoted to replays, shots of the crowd, and shots of the field without the ball in play. We know “the big game” is a big sell, but it is interesting to see how the thing it sells the most is the spectacle of the event itself.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianPaul Wise: FLOSS Activities January 2018

Changes

Issues

Review

Administration

  • Debian: try to regain OOB access to a host, try to connect with a hoster, restart bacula after db restart, provide some details to a hoster, add debsnap to snapshot host, debug external email issue, redirect users to support channels
  • Debian mentors: redirect to sponsors, teach someone about dput .upload files, check why a package disappeared
  • Debian wiki: unblacklist IP address, whitelist email addresses, whitelist email domain, investigate DocBook output crash

Communication

  • Initiate discussion about ingestion of more security issue feeds
  • Invite LinuxCNC to the Debian derivatives census

Sponsors

I renewed my support of Software Freedom Conservancy.

The Discord related uploads (harmony, librecaptcha, purple-discord) and the Debian fakeupstream change were sponsored by my employer. All other work was done on a volunteer basis.

,

Planet DebianChris Lamb: Free software activities in January 2018

Here is my monthly update covering what I have been doing in the free software world in January 2018 (previous month):


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:



I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • New features:
    • Compare JSON files using the jsondiff module. (#888112)
    • Report differences in extended file attributes when comparing files. (#888401)
    • Show extended filesystem metadata when directly comparing two files not just when we specify two directories. (#888402)
    • Do some fuzzy parsing to detect JSON files not named .json. [...]
  • Bug fixes:
    • Return unknown if we can't parse the readelf version number for (eg.) FreeBSD. (#886963)
    • If the LLVM disassembler does not work, try the internal one. (#886736)
  • Misc:
    • Explicitly depend on e2fsprogs. (#887180)
    • Clarify Unidentified file log message as we did try and lookup via the comparators first. [...]

I also fixed an issue in the "trydiffoscope" command-line client that was preventing installation on non-Debian systems (#888882).


disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.

  • Correct "explicitly" typo in disorderfs.1.txt. [...]
  • Bump Standards-Version to 4.1.3. [...]
  • Drop trailing whitespace in debian/control. [...]


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

In addition to this, I:

  • Published whydoesaptnotusehttps.com, an overview of why APT does not rely solely on SSL for validation of downloaded packages as I noticed it was being asked a lot on support forums.
  • Reported a number of issues for the mentors.debian.net review service.

Patches contributed

  • dput: Suggest --force if package has already been uploaded. (#886829)
  • linux: Add link to the Firmware page on the wiki to failed to load log messages. (#888405)
  • markdown: Make markdown exit with a non-zero exit code if cannot open input file. (#886032)
  • spectre-meltdown-checker: Return a sensible exit code. (#887077)

Debian LTS


This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • Initial draft of a script to automatically detect when CVEs should be assigned to multiple source packages in the case of legacy renames, duplicates or embedded code copies.
  • Issued DLA 1228-1 for the poppler PDF library to fix an overflow vulnerability.
  • Issued DLA 1229-1 for imagemagick correcting two potential denial-of-service attacks.
  • Issued DLA 1233-1 for gifsicle — a command-line tool for manipulating GIF images — to fix a use-after-free vulnerability.
  • Issued DLA 1234-1 to fix multiple integer overflows in the GTK gdk-pixbuf graphics library.
  • Issued DLA 1247-1 for rsync, fixing a command-injection vulnerability.
  • Issued DLA 1248-1 for libgd2 to prevent a potential infinite loop caused by signedness confusion.
  • Issued DLA 1249-1 for smarty3 fixing an arbitrary code execution vulnerability.
  • "Frontdesk" duties, triaging CVEs, etc.

Uploads

  • adminer (4.5.0-1) — New upstream release.
  • bfs (1.2-1) — New upstream release.
  • dbus-cpp (5.0.0+18.04.20171031-1) — Initial upload to Debian.
  • installation-birthday (7) — Add e2fsprogs to Depends so it can drop Essential: yes. (#887275)
  • process-cpp:
    • 3.0.1-1 — Initial upload to Debian.
    • 3.0.1-2 — Fix FTBFS due to symbol versioning.
  • python-django (1:1.11.9-1 & 2:2.0.1-1) — New upstream releases.
  • python-gflags (1.5.1-4) — Always use SOURCE_DATE_EPOCH from the environment.
  • redis:
    • 5:4.0.6-3 — Use --clients argument to runtest to force single-threaded operation over using taskset.
    • 5:4.0.6-4 — Re-add procps to Build-Depends. (#887075)
    • 5:4.0.6-5 — Fix a dangling symlink (and thus a broken package). (#884321)
    • 5:4.0.7-1 — New upstream release.
  • redisearch (1.0.3-1, 1.0.4-1 & 1.0.5-1) — New upstream releases.
  • trydiffoscope (67.0.0) — New upstream release.

I also sponsored the following uploads:


Debian bugs filed

  • gdebi: Invalid gnome-mime-application-x-deb icon in AppStream metadata. (#887056)
  • git-buildpackage: Please make gbp clone not quieten the output by default. (#886992)
  • git-buildpackage: Please word-wrap generated changelog lines. (#887055)
  • isort: Don't install test_isort.py to global Python namespace. (#887816)
  • restrictedpython: Please add Homepage. (#888759)
  • xcal: Missing patches due to 00List != 00list. (#888542)

I also filed 4 bugs against packages missing patches due to incomplete quilt conversions: cernlib, geant321, mclibs & paw.


RC bugs

  • gnome-shell-extension-tilix-shortcut: Invalid date in debian/changelog. (#886950)
  • python-qrencode: Missing PIL dependencies due to use of Python 2 substvars in Python 3 package. (#887811)


I also filed 7 FTBFS bugs against lintian, netsniff-ng, node-coveralls, node-macaddress, node-timed-out, python-pyocr & sleepyhead.


FTP Team


As a Debian FTP assistant I ACCEPTed 173 packages: appmenu-gtk-module, atlas-cpp, canid, check-manifest, cider, citation-style-language-locales, citation-style-language-styles, cloudkitty, coreapi, coreschema, cypari2, dablin, dconf, debian-dad, deepin-icon-theme, dh-dlang, django-js-reverse, flask-security, fpylll, gcc-8, gcc-8-cross, gdbm, gitlint, gnome-tweaks, gnupg-pkcs11-scd, gnustep-back, golang-github-juju-ansiterm, golang-github-juju-httprequest, golang-github-juju-schema, golang-github-juju-testing, golang-github-juju-webbrowser, golang-github-posener-complete, golang-gopkg-juju-environschema.v1, golang-gopkg-macaroon-bakery.v2, golang-gopkg-macaroon.v2, harmony, hellfire, hoel, iem-plugin-suite, ignore-me, itypes, json-tricks, jstimezonedetect.js, libcdio, libfuture-asyncawait-perl, libgig, libjs-cssrelpreload, liblxi, libmail-box-imap4-perl, libmail-box-pop3-perl, libmail-message-perl, libmatekbd, libmoosex-traitfor-meta-class-betteranonclassnames-perl, libmoosex-util-perl, libpath-iter-perl, libplacebo, librecaptcha, libsyntax-keyword-try-perl, libt3highlight, libt3key, libt3widget, libtree-r-perl, liburcu, linux, mali-midgard-driver, mate-panel, memleax, movit, mpfr4, mstch, multitime, mwclient, network-manager-fortisslvpn, node-babel-preset-airbnb, node-babel-preset-env, node-boxen, node-browserslist, node-caniuse-lite, node-cli-boxes, node-clone-deep, node-d3-axis, node-d3-brush, node-d3-dsv, node-d3-force, node-d3-hierarchy, node-d3-request, node-d3-scale, node-d3-transition, node-d3-zoom, node-fbjs, node-fetch, node-grunt-webpack, node-gulp-flatten, node-gulp-rename, node-handlebars, node-ip, node-is-npm, node-isomorphic-fetch, node-js-beautify, node-js-cookie, node-jschardet, node-json-buffer, node-json3, node-latest-version, node-npm-bundled, node-plugin-error, node-postcss, node-postcss-value-parser, node-preact, node-prop-types, node-qw, node-sellside-emitter, node-stream-to-observable, node-strict-uri-encode, node-vue-template-compiler, ntl, olivetti-mode, org-mode-doc, otb, othman, papirus-icon-theme, pgq-node, php7.2, piu-piu, prometheus-sql-exporter, py-radix, pyparted, pytest-salt, pytest-tempdir, python-backports.tempfile, python-backports.weakref, python-certbot, python-certbot-apache, python-certbot-nginx, python-cloudkittyclient, python-josepy, python-jsondiff, python-magic, python-nose-random, python-pygerrit2, python-static3, r-cran-broom, r-cran-cli, r-cran-dbplyr, r-cran-devtools, r-cran-dt, r-cran-ggvis, r-cran-git2r, r-cran-pillar, r-cran-plotly, r-cran-psych, r-cran-rhandsontable, r-cran-rlist, r-cran-shinydashboard, r-cran-utf8, r-cran-whisker, r-cran-wordcloud, recoll, restrictedpython, rkt, rtklib, ruby-handlebars-assets, sasmodels, spectre-meltdown-checker, sphinx-gallery, stepic, tilde, togl, ums2net, vala-panel, vprerex, wafw00f & wireguard.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files against: fpylll, gnome-tweaks, org-mode-doc & py-radix.

CryptogramIsraeli Scientists Accidentally Reveal Classified Information

According to this story (non-paywall English version here), Israeli scientists released some information to the public they shouldn't have.

Defense establishment officials are now trying to erase any trace of the secret information from the web, but they have run into difficulties because the information was copied and is found on a number of platforms.

Those officials have managed to ensure that the Haaretz article doesn't have any actual information about the information. I have reason to believe the information is related to Internet security. Does anyone know more?

Planet DebianWouter Verhelst: Day three of the pre-FOSDEM Debconf Videoteam sprint

This should really have been the "day two" post, but I forgot to do that yesterday, and now it's the end of day three already, so let's just do the two together for now.

Kyle

Has been hacking on the opsis so we can get audio through it, but so far without much success. In addition, he's been working a bit more on documentation, as well as splitting up some data that's currently in our ansible repository into a separate one so that other people can use our ansible configuration more easily, without having to fork too much.

Tzafrir

Did some tests on the ansible setup, and did some documentation work, and worked on a kodi plugin for parsing the metadata that we've generated.

Stefano

Did some work on the DebConf website. This wasn't meant to be much, but yak shaving sucks. Additionally, he's been doing some work on the youtube uploader as well.

Nattie

Did more work reviewing our documentation, and has been working on rewording some of the more awkward bits.

Wouter

Spent much time on improving the SReview installation for FOSDEM. While at it, fixed a number of bugs in some of the newer code that were exposed by full tests of the FOSDEM installation. Additionally, added code to SReview to generate metadata files that can be handed to Stefano's youtube uploader.

Pollo

Although he had less time yesterday than he did on Monday (and apparently no time today) to sprint remotely, Pollo still managed to add a basic CI infrastructure to lint our ansible playbooks.

Planet DebianAbhijith PA: Swatantra17

It's very late, but here it goes…

Swatantra

Last month Thiruvananthapuram witnessed one of the biggest Free and Open Source Software conferences, Swatantra17. Swatantra is a flagship FOSS conference from ICFOSS (it used to be triennial, but from now on the organizers have decided to conduct it every 2 years). This year there were more than 30 speakers from all around the world. The event was held on 20-21 December at Mascot hotel, Thiruvananthapuram. I was one of the community volunteers for the event and was excited from the day it was announced :) .

Pinarayi Vijayan

Current Kerala Chief Minister Pinarayi Vijayan inaugurated Swatantra17. The first day's session started with a keynote from Software Freedom Conservancy executive director Karen Sandler. Karen talked about the safety of medical devices like defibrillators which run proprietary software. After that there were many parallel talks about various free software projects, technologies and tools. This edition of Swatantra focused more on art. It was good to know more about artists' free software stacks. The most amazing thing is that throughout the conference I met so many people from FSCI whom I only knew through matrix/IRC/emails.

Karen Sandler

The first day's talks ended at 6PM. After that the Oorali band performed for us. This band is well-known in Kerala because they speak up on many social and political issues, which makes them the best match for a free software conference's cultural program :). Their songs are mainly about birds, forests and freedom, and we danced to many of them.

Oorali

On the last day's evening there was a kind of BoF with FSF person Benjamin Mako Hill. Halfway through I came to know he is also a Debian Developer :D. Unfortunately the BoF stopped as he was called for a panel discussion. After the panel discussion all of us Debian people gathered and had a chat.

Panel Discussion

Planet DebianDaniel Leidert: Migrating the debichem group subversion repository to Git - Part 1: svn-all-fast-export basics

With the deprecation of alioth.debian.org the subversion service hosted there will be shut down too. According to lintian the estimated date is May 1st 2018 and there are currently more than 1500 source packages affected. In the debichem group we've used the subversion service since 2006. Our repository contains around 7500 commits done by around 20 different alioth user accounts and the packaging history of around 70 to 80 packages, including packaging attempts. I've spent the last days preparing the Git migration, comparing different tools, controlling the created repositories and testing possibilities to automate the process as much as possible. The resulting scripts can currently be found here.

Of course I began as described at the Debian Wiki. But following this guide, using git-svn and converting the tags with the script supplied under the rubric Convert remote tags and branches to local ones gave me really weird results. The tags were pointing to the wrong commit-IDs. I thought that git-svn was to blame and reported this as bug #887881. In the following mail exchange Andreas Kaesorg explained to me that the issue is caused by so-called mixed-revision-tags in our repository, as shown in the following example:


$ svn log -v -r7405
------------------------------------------------------------------------
r7405 | dleidert | 2018-01-17 18:14:57 +0100 (Wed, 17 Jan 2018) | 1 line
Changed paths:
A /tags/shelxle/1.0.888-1 (from /unstable/shelxle:7396)
R /tags/shelxle/1.0.888-1/debian/changelog (from /unstable/shelxle/debian/changelog:7404)
R /tags/shelxle/1.0.888-1/debian/control (from /unstable/shelxle/debian/control:7403)
D /tags/shelxle/1.0.888-1/debian/patches/qt5.patch
R /tags/shelxle/1.0.888-1/debian/patches/series (from /unstable/shelxle/debian/patches/series:7402)
R /tags/shelxle/1.0.888-1/debian/rules (from /unstable/shelxle/debian/rules:7403)

[svn-buildpackage] Tagging shelxle 1.0.888-1
------------------------------------------------------------------------

Looking into the git log, the tags determined by git-svn are really not in their right place in the history line, even before running the script to convert the branches into real Git tags. So IMHO git-svn is not able to cope with this kind of situation. Because it also cannot handle our branch model, where we use /branch/package/, I began to look for different tools and found svn-all-fast-export, a tool created (by KDE?) to convert even large subversion repositories based on a ruleset. My attempt using this tool was so successful (not to speak of how fast it is) that I want to describe it more. Maybe it will prove to be useful for others as well, and it won't hurt to give some more information about this poorly documented tool :)

Step 1: Setting up a local subversion mirror

First I suggest setting up a local copy of the subversion repository to migrate, that is kept in sync with the remote repository. This can be achieved using the svnsync command. There are several howtos for this, so I won't describe this step here. Please check out this guide. In my case I have such a copy in /srv/svn/debichem.
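For reference, a minimal version of such a mirror setup might look like this (the URL and paths are placeholders; the pre-revprop-change hook simply allows svnsync to set revision properties, which is fine for a read-only local mirror):

svnadmin create /srv/svn/debichem
printf '#!/bin/sh\nexit 0\n' > /srv/svn/debichem/hooks/pre-revprop-change
chmod +x /srv/svn/debichem/hooks/pre-revprop-change
svnsync init file:///srv/svn/debichem SVN_URL
svnsync sync file:///srv/svn/debichem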

Step 2: Creating the identity map

svn-all-fast-export needs at least two files to work. One is the so called identity map. This file contains the mapping between subversion user IDs (login names) and the (Git) committer info, like real name and mail address. The format is the same as used by git-svn:

loginname = author name <mail address>

e.g.

dleidert = Daniel Leidert <dleidert@debian.org>

The list of subversion user IDs can be obtained the same way as described in the Wiki:

svn log SVN_URL | awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u

Just replace the placeholder SVN_URL with your subversion URL. Here is the complete file for the debichem group.
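If you want a skeleton to fill in, the same list can be piped through sed (the mail domain below is just a placeholder):

svn log SVN_URL | awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u | \
sed 's/^ *\(.*[^ ]\) *$/\1 = \1 <\1@example.org>/'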

Step 3: Creating the rules

The most important thing is the second file, which contains the processing rules. There is really not much documentation out there. So when in doubt, one has to read the source file src/ruleparser.cpp. I'll describe, what I already found out. If you are impatient, here is my result so far.

The basic rules are:


create repository REPOSITORY
...
end repository

and


match PATTERN
...
end match

The first rule creates a bare git repository with the name you've chosen (above represented by REPOSITORY). It can have one child, that is the repository description to be put into the repository's description file. There are AFAIK no other elements allowed here. So in case of e.g. ShelXle the rule might look like this:


create repository shelxle
description packaging of ShelXle, a graphical user interface for SHELXL
end repository

You'll have to create every repository before you can put something into it; else svn-all-fast-export will exit with an error. JFTR: It won't complain if you create a repository but don't put anything into it. You will just end up with an empty Git repository.

Now the second type of rule is the most important one. Based on regular expression match patterns (above represented by PATTERN), one can define actions, including the possibility to limit these actions to repositories, branches and revisions. The patterns are applied in their order of appearance. Thus if a matching pattern is found, other patterns that match but appear later in the rules file won't apply! So a special rule should always be put above a general rule. The patterns that can be used seem to be of type QRegExp and look like basic Perl regular expressions, including e.g. capturing, backreferences and lookahead capabilities. For a multi-package subversion repository with a standard layout (that is /PACKAGE/{trunk,tags,branches}/), clean naming and clean subversion history, the rules could be:


match /([^/]+)/trunk/
repository \1
branch master
end match

match /([^/]+)/tags/([^/]+)/
repository \1
branch refs/tags/debian/\2
annotated true
end match

match /([^/]+)/branches/([^/]+)/
repository \1
branch \2
end match

The first rule captures the (source) package name from the path and puts it into the backreference \1. It applies to the trunk directory history and will put everything it finds there into the repository named after the directory - here we simply use the backreference \1 to that name - and there into the master branch. Note that svn-all-fast-export will error out if it tries to access a repository that has not been created. So make sure all repositories are created as shown with the create repository rule.

The second rule captures the (source) package name from the path too and puts it into the backreference \1. But in backreference \2 it further captures (and applies to) all the tag directories under the /tags/ directory. Usually these have a Debian package version as their name. With the branch statement as shown in this rule, the tags, which are really just branches in subversion, are automatically converted to annotated Git tags (another advantage of svn-all-fast-export over git-svn). Without enabling the annotated statement, the tags created will be lightweight tags. So the tag name (here: debian/VERSION) is determined via backreference \2.

The third rule is almost the same, except that everything found in the matching path will be pushed into a Git branch named after the top-level directory captured from the subversion path.

Now in an ideal world, this might be enough and the actual conversion can be done. The command should only be executed in an empty directory. I'll assume that the identity map is called authors.txt and the rules file is called debichem.rules, and that both are in the parent directory. I'll also assume that the local subversion mirror of the packaging repository is at /srv/svn/mymirror. So ...

svn-all-fast-export --stats --identity-map=../authors.txt --rules=../debichem.rules /srv/svn/mymirror

... will create one or more bare Git repositories (depending on your rules file) in the current directory. After the command succeeded, you can test the results ...


git -C REPOSITORY/ --bare show-ref
git -C REPOSITORY/ --bare log --all --graph

... and you will find your repository's description (if you added one to the rules file) in REPOSITORY/description:

cat REPOSITORY/description

Please note that not all the debian version strings are well-formed Git reference names, and those will therefore need fixing. There might also be gaps shown in the Git history log. Or maybe the command didn't even succeed or complained (without you noticing it), or you ended up with an empty repository although the matching rules applied. I encountered all of these issues and I'll describe the causes and fixes in the next blog article.

But if everything went well (you have no history gaps, the tags are in their right place within the linearized history and the repository looks fine) and you can and want to proceed, you might want to skip to the next step.

In the debichem group we used a different layout. The packaging directories were under /{unstable,experimental,wheezy,lenny,non-free}/PACKAGE/. This translates to /unstable/PACKAGE/ and /non-free/PACKAGE/ being the trunk directories and the others being the branches. The tags are in /tags/PACKAGE/. And packages that are yet to be uploaded are located in /wnpp/PACKAGE/. With this layout, the basic rules are:


# trunk handling
# e.g. /unstable/espresso/
# e.g. /non-free/molden/
match /(?:unstable|non-free)/([^/]+)/
repository \1
branch master
end match

# handling wnpp
# e.g. /wnpp/osra/
match /(wnpp)/([^/]+)/
repository \2
branch \1
end match

# branch handling
# e.g. /wheezy/espresso/
match /(lenny|wheezy|experimental)/([^/]+)/
repository \2
branch \1
end match

# tags handling
# e.g. /tags/espresso/VERSION/
match /tags/([^/]+)/([^/]+)/
repository \1
annotated true
branch refs/tags/debian/\2
substitute branch s/~/_/
substitute branch s/:/_/
end match

In the first rule, there is a non-capturing expression (?: ... ), which simply means that the rule applies to both /unstable/ and /non-free/. Thus the backreference \1 refers to the second part of the path, the package directory name. The contents found are pushed to the master branch.

In the second rule, the contents from the wnpp directory are not pushed to master, but instead to a branch called wnpp. This was necessary because of overlaps between the /unstable/ and /wnpp/ history, and it already shows that the repository's history makes things complicated.

In the third rule, the first backreference \1 determines the branch (note the capturing expression, in contrast to the first rule) and the second backreference \2 the package repository to act on.

The last rule is similar, but now \1 determines the package repository and \2 the tag name (debian package version) based on the matching path. The example also shows another issue, which I'd like to explain more in the next article: some characters we use in debian package versions, e.g. the tilde sign and the colon, are not allowed within Git tag names and must therefore be substituted, which is done by the substitute branch EXPRESSION instructions.

Step 4: Cleaning the bare repository

The tool documentation suggests running ...

git -C REPOSITORY/ repack -a -d -f

... before you upload this bare repository to another location. But Stuart Prescott told me on the debichem list that this might not be enough and might still leave some garbage behind. I'm not experienced enough to judge here, but his suggestion is to clone the repository, either as a bare clone, or to clone and then init a new bare one. I used the first approach:


git clone --bare REPOSITORY/ REPOSITORY.git
git -C REPOSITORY.git/ repack -a -d -f

Please note that this won't copy the repository's description file. You'll have to copy it manually if you want to keep it. The resulting bare repository can be uploaded (e.g. to git.debian.org as a personal repository):


cp REPOSITORY/description REPOSITORY.git/description
touch REPOSITORY.git/git-daemon-export-ok
rsync -avz REPOSITORY.git git.debian.org:~/public_git/

Or you clone the repository, add a remote origin and push everything there. It is even possible to use the gitlab API at salsa.debian.org to create a project and push there. I'll save the latter for another post. If you are hasty, you'll find a script here.
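For the simple clone-and-push variant, a sketch (the salsa URL is only an example destination; since the repository is bare, --mirror pushes all branches and tags in one go):

git -C REPOSITORY.git remote add origin git@salsa.debian.org:USER/REPOSITORY.git
git -C REPOSITORY.git push --mirror origin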

CryptogramAfter Section 702 Reauthorization

For over a decade, civil libertarians have been fighting government mass surveillance of innocent Americans over the Internet. We've just lost an important battle. On January 18, President Trump signed the renewal of Section 702, and domestic mass surveillance became effectively a permanent part of US law.

Section 702 was initially passed in 2008, as an amendment to the Foreign Intelligence Surveillance Act of 1978. As the title of that law says, it was billed as a way for the NSA to spy on non-Americans located outside the United States. It was supposed to be an efficiency and cost-saving measure: the NSA was already permitted to tap communications cables located outside the country, and it was already permitted to tap communications cables from one foreign country to another that passed through the United States. Section 702 allowed it to tap those cables from inside the United States, where it was easier. It also allowed the NSA to request surveillance data directly from Internet companies under a program called PRISM.

The problem is that this authority also gave the NSA the ability to collect foreign communications and data in a way that inherently and intentionally also swept up Americans' communications as well, without a warrant. Other law enforcement agencies are allowed to ask the NSA to search those communications, give their contents to the FBI and other agencies and then lie about their origins in court.

In 1978, after Watergate had revealed the Nixon administration's abuses of power, we erected a wall between intelligence and law enforcement that prevented precisely this kind of sharing of surveillance data under any authority less restrictive than the Fourth Amendment. Weakening that wall is incredibly dangerous, and the NSA should never have been given this authority in the first place.

Arguably, it never was. The NSA had been doing this type of surveillance illegally for years, something that was first made public in 2006. Section 702 was secretly used as a way to paper over that illegal collection, but nothing in the text of the later amendment gives the NSA this authority. We didn't know that the NSA was using this law as the statutory basis for this surveillance until Edward Snowden showed us in 2013.

Civil libertarians have been battling this law in both Congress and the courts ever since it was proposed, and the NSA's domestic surveillance activities even longer. What this most recent vote tells me is that we've lost that fight.

Section 702 was passed under George W. Bush in 2008, reauthorized under Barack Obama in 2012, and now reauthorized again under Trump. In all three cases, congressional support was bipartisan. It has survived multiple lawsuits by the Electronic Frontier Foundation, the ACLU, and others. It has survived the revelations by Snowden that it was being used far more extensively than Congress or the public believed, and numerous public reports of violations of the law. It has even survived Trump's belief that he was being personally spied on by the intelligence community, as well as any congressional fears that Trump could abuse the authority in the coming years. And though this extension lasts only six years, it's inconceivable to me that it will ever be repealed at this point.

So what do we do? If we can't fight this particular statutory authority, where's the new front on surveillance? There are, it turns out, reasonable modifications that target surveillance more generally, and not in terms of any particular statutory authority. We need to look at US surveillance law more generally.

First, we need to strengthen the minimization procedures to limit incidental collection. Since the Internet was developed, all the world's communications travel around in a single global network. It's impossible to collect only foreign communications, because they're invariably mixed in with domestic communications. This is called "incidental" collection, but that's a misleading name. It's collected knowingly, and searched regularly. The intelligence community needs much stronger restrictions on which American communications channels it can access without a court order, and rules that require they delete the data if they inadvertently collect it. More importantly, "collection" is defined as the point the NSA takes a copy of the communications, and not later when they search their databases.

Second, we need to limit how other law enforcement agencies can use incidentally collected information. Today, those agencies can query a database of incidental collection on Americans. The NSA can legally pass information to those other agencies. This has to stop. Data collected by the NSA under its foreign surveillance authority should not be used as a vehicle for domestic surveillance.

The most recent reauthorization modified this lightly, forcing the FBI to obtain a court order when querying the 702 data for a criminal investigation. There are still exceptions and loopholes, though.

Third, we need to end what's called "parallel construction." Today, when a law enforcement agency uses evidence found in this NSA database to arrest someone, it doesn't have to disclose that fact in court. It can reconstruct the evidence in some other manner once it knows about it, and then pretend it learned of it that way. This right to lie to the judge and the defense is corrosive to liberty, and it must end.

Pressure to reform the NSA will probably first come from Europe. Already, European Union courts have pointed to warrantless NSA surveillance as a reason to keep Europeans' data out of US hands. Right now, there is a fragile agreement between the EU and the United States -- called "Privacy Shield" -- that requires Americans to maintain certain safeguards for international data flows. NSA surveillance goes against that, and it's only a matter of time before EU courts start ruling this way. That'll have significant effects on both government and corporate surveillance of Europeans and, by extension, the entire world.

Further pressure will come from the increased surveillance coming from the Internet of Things. When your home, car, and body are awash in sensors, privacy from both governments and corporations will become increasingly important. Sooner or later, society will reach a tipping point where it's all too much. When that happens, we're going to see significant pushback against surveillance of all kinds. That's when we'll get new laws that revise all government authorities in this area: a clean sweep for a new world, one with new norms and new fears.

It's possible that a federal court will rule on Section 702. Although there have been many lawsuits challenging the legality of what the NSA is doing and the constitutionality of the 702 program, no court has ever ruled on those questions. The Bush and Obama administrations successfully argued that defendants don't have legal standing to sue. That is, they have no right to sue because they don't know they're being targeted. If any of the lawsuits can get past that, things might change dramatically.

Meanwhile, much of this is the responsibility of the tech sector. This problem exists primarily because Internet companies collect and retain so much personal data and allow it to be sent across the network with minimal security. Since the government has abdicated its responsibility to protect our privacy and security, these companies need to step up: Minimize data collection. Don't save data longer than absolutely necessary. Encrypt what has to be saved. Well-designed Internet services will safeguard users, regardless of government surveillance authority.

For the rest of us concerned about this, it's important not to give up hope. Everything we do to keep the issue in the public eye -- and not just when the authority comes up for reauthorization again in 2024 -- hastens the day when we will reaffirm our rights to privacy in the digital age.

This essay previously appeared in the Washington Post.

Planet DebianJohn Goerzen: An old DOS BBS in a Docker container

Awhile back, I wrote about my Debian Docker base images. I decided to extend this concept a bit further: to running DOS applications in Docker.

But first, a screenshot:

It turns out this is possible, but difficult. I went through all three major DOS emulators available (dosbox, qemu, and dosemu). I got them all running inside the Docker container, but had a number of, er, fun issues to resolve.

The general thing one has to do here is present a fake modem to the DOS environment. This needs to be exposed outside the container as a TCP port. That much is possible in various ways — I wound up using tcpser. dosbox had a TCP modem interface, but it turned out to be too buggy for this purpose.

The challenge comes in where you want to be able to accept more than one incoming telnet (or TCP) connection at a time. DOS was not a multitasking operating system, so there were any number of hackish solutions back then. One might have had multiple physical computers, one for each incoming phone line. Or they might have run multiple pseudo-DOS instances under a multitasking layer like DESQview, OS/2, or even Windows 3.1.

(Side note: I just learned of DESQview/X, which integrated DESQview with X11R5 and replaced the Windows 3 drivers to allow running Windows as an X application).

For various reasons, I didn’t want to try running one of those systems inside Docker. That left me with emulating the original multiple physical node setup. In theory, pretty easy — spin up a bunch of DOS boxes, each using at most 1MB of emulated RAM, and go to town. But here came the challenge.

In a multiple-physical-node setup, you need some sort of file sharing, because your nodes have to access the shared message and file store. There were a myriad of clunky ways to do this in the old DOS days – Netware, LAN manager, even some PC NFS clients. I didn’t have access to Netware. I tried the Microsoft LM client in DOS, talking to a Samba server running inside the Docker container. This I got working, but the LM client used so much RAM that, even with various high memory tricks, BBS software wasn’t going to run. I couldn’t just mount an underlying filesystem in multiple dosbox instances either, because dosbox did caching that wasn’t going to be compatible.

This is why I wound up using dosemu. Besides being a more complete emulator than dosbox, it had a way of sharing the host’s filesystems that was going to work.
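Here’s a minimal sketch of the shape of that front end: a small supervisor accepts each incoming telnet connection and patches it through to the fake-modem TCP port of a free node. This is an illustration only — the port numbers and the one-caller-per-node policy are assumptions, not the actual code in my images.

#!/usr/bin/env python3
"""Illustrative sketch: patch each incoming caller through to the
fake-modem TCP port of a free DOS node (one tcpser/dosemu pair per
node). Ports and policy are assumptions, not the real images' code."""
import socket
import threading

NODE_PORTS = [6400, 6401, 6402]  # assumed: one fake-modem port per node
LISTEN_PORT = 2323               # assumed: public-facing telnet port

def pump(src, dst):
    # Copy bytes one way until either side hangs up, then close both ends.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.close()
            except OSError:
                pass

def handle(caller, node_port):
    # Bridge caller and node, full duplex: one thread per direction.
    node = socket.create_connection(("127.0.0.1", node_port))
    threading.Thread(target=pump, args=(caller, node), daemon=True).start()
    pump(node, caller)

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", LISTEN_PORT))
    server.listen()
    in_use = {}  # node_port -> worker thread
    while True:
        caller, _addr = server.accept()
        free = [p for p in NODE_PORTS
                if p not in in_use or not in_use[p].is_alive()]
        if not free:
            caller.sendall(b"All nodes are busy, please call back.\r\n")
            caller.close()
            continue
        worker = threading.Thread(target=handle, args=(caller, free[0]),
                                  daemon=True)
        in_use[free[0]] = worker
        worker.start()

if __name__ == "__main__":
    main()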

So, all of this wound up with this: jgoerzen/docker-bbs-renegade.

I also prepared building blocks for others that want to do something similar: docker-dos-bbs and the lower-level docker-dosemu.

As a side bonus, I also attempted running this under Joyent’s Triton (SmartOS, Solaris-based). I was pleasantly impressed that I got it all almost working there. So yes, a Renegade DOS BBS running under a Linux-based DOS emulator in a container on a Solaris machine.

Worse Than FailureCodeSOD: The Pythonic Wheel Reinvention

Starting with Java, a robust built-in class library is practically a default feature of modern programming languages. Why struggle with OS-specific behaviors, with writing your own code, or with managing a third-party library to handle problems like accessing files or network resources?

One common class of WTF is the developer who steadfastly refuses to use it. They inevitably reinvent the wheel as a triangle with no axle. Another is the developer who is simply ignorant of what the language offers, and is too lazy to Google it. They don’t know what a wheel is, so they invent a coffee-table instead.

My personal favorite, though, is the rare person who knows about the class library, who uses the class library… to reinvent methods which exist in the class library. They’ve seen a wheel, they know what a wheel is for, and they still insist on inventing a coffee-table.

Anneke sends us one such method.

The method in question is called thus:

if output_exists("/some/path.dat"):
    do_something()

I want to stress, this is the only use of this method. The purpose is to check if a file containing output from a different process exists. If you’re familiar with Python, you might be thinking, “Wait, isn’t that just os.path.exists?”

Of course not.

def output_exists(full_path):
    path = os.path.dirname(full_path) + "/*"
    filename2=full_path.split('/')[-1]
    filename = '%s' % filename2
    files = glob.glob(path)
    back = []
    for f in re.findall(filename, " ".join(files)):
        back.append(os.path.join(os.path.dirname(full_path), f))
    return back

Now, in general, most of your directory-tree manipulating functions live in the os.path package, and you can see os.path.dirname used. That splits off the directory-only part. Then they throw a glob on it. I could, at this point, bring up the importance of os.path.join for that sort of operation, but why bother?

They knew enough to use os.path.dirname to get the directory portion of the path, but not os.path.split which can pick off the file portion of the path. The “Pythonic” way of writing that line would be (path, filename) = os.path.split(full_path). Wait, I misspoke: the “Pythonic” way would be to not write any part of this method.

'%s' % filename2 is Python’s version of printf, and I cannot for the life of me guess why it’s being done here. A misguided attempt at doing an strcpy-type operation?

glob.glob isn’t just the best method name in anything, it also does a filesystem search using globs, so files contains a list of all files in that directory.

" ".join(files) is the Python idiom for joining an array, so we turn the list of files into an array and search it using re.findall… which uses a regex for searching. Note that they’re using the filename for the regex, and they haven’t placed any guards around it, so if the input file is “foo.c”, and the directory contains “foo.cpp”, this will think that’s fine.

And then last but not least, it returns the array of matches, relying on the fact that an empty array in Python is false.

To write this code required at least some familiarity with three different major packages in the class library- os.path, glob, and re, but just one more ounce of familiarity with os.path would have replaced the entire thing with a simple call to os.path.exists. Which is what Anneke did.
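For completeness, here’s a sketch of that replacement — the entire method reduced to one standard-library call:

import os.path

def output_exists(full_path):
    # The whole glob/regex dance, reduced to the call it always wanted to be.
    return os.path.exists(full_path)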


Sam VargheseBlack money continues to pour into IPL

A little more than a year ago, Indian Prime Minister Narendra Modi announced that 500 and 1000 rupee notes would be removed from circulation as a step to flushing out all the black money in the country.

He made the announcement on TV in prime time on 8 November 2016 and gave people four hours’ time to be ready for the change!

But judging by the amounts which cricketers were bought for in the Indian Premier League Twenty20 auction last week, there is more black money than ever in the country.

Else, sums like US$1.5 million would not be available for the Kolkata Knight Riders to buy a cricketer like Mitchell Starc. This is black money being flushed out and made ready to be used as legal tender, the main reason why the Indian Government turns a blind eye to the process.

Former Indian spin king Bishan Singh Bedi accused the IPL of being a centre for money-laundering and he may not be far off the mark.

A little history will help explain India’s black money problem: Back in 1967, the then Indian finance minister Morarji Desai had the brilliant idea of raising taxes well beyond their existing level; the maximum marginal tax rate was raised as high as 97.75 percent.

Desai, who was better known for drinking his own urine, reasoned that people would pay up and that India’s budgetary problems would become more manageable.

Instead, the reverse happened. India has always had a problem with undeclared wealth, a kind of parallel economy. The amount of black money increased by leaps and bounds after Desai’s ridiculous laws were promulgated.

Seven years later, in 1974, the new finance minister Y.B. Chavan brought down rates by some 20 percentage points, but by then the damage had been done. The amount of black money in India today is estimated to be anything from 30 to 100 times the national budget.

The IPL attracts the best cricketers from around the world because of the money on offer. The amounts that are bid are paid for three years, and the player has to play for two months every year, with some additional promotional activity also involved.

The competition is in its 11th season and it has been dogged by controversy; in 2015, two teams were suspended for match-fixing and betting, with the incidents taking place in 2012 and 2013.

So, despite all the platitudes from Modi, don’t expect anything to change in India as far as black money is concerned. If anything, the amount will increase now that people know that these kinds of measures will be announced at the drop of a hat. They will be ready the next time Modi or anyone else comes up with some crazy initiative like this.

,

Krebs on SecurityDrugs Tripped Up Suspects In First Known ATM “Jackpotting” Attacks in the US

On Jan. 27, 2018, KrebsOnSecurity published what this author thought was a scoop about the first known instance of U.S. ATMs being hit with “jackpotting” attacks, a crime in which thieves deploy malware that forces cash machines to spit out money like a loose Las Vegas slot machine. As it happens, the first known jackpotting attacks in the United States were reported in November 2017 by local media on the west coast, although the reporters in those cases seem to have completely buried the lede.

Isaac Rafael Jorge Romero, Jose Alejandro Osorio Echegaray, and Elio Moren Gozalez have been charged with carrying out ATM “jackpotting” attacks that force ATMs to spit out cash like a Las Vegas casino.

On Nov. 20, 2017, Oil City News — a community publication in Wyoming — reported on the arrest of three Venezuelan nationals who were busted on charges of marijuana possession after being stopped by police.

After pulling over the van the men were driving, police on the scene reportedly detected the unmistakable aroma of pot smoke wafting from the vehicle. When the cops searched the van, they discovered small amounts of pot, THC edible gummy candies, and several backpacks full of cash.

FBI agents had already been looking for the men, who were allegedly caught on surveillance footage tinkering with cash machines in Wyoming, Colorado and Utah, shortly before those ATMs were relieved of tens of thousands of dollars.

According to a complaint filed in the U.S. District Court for the District of Colorado, the men first hit an ATM at a credit union in Parker, Colo. on October 10, 2017. The robbery occurred after business hours, but the cash machine in question was located in a vestibule available to customers 24/7.

The complaint says surveillance videos showed the men opening the top of the ATM, which housed the computer and hard drive for the ATM — but not the secured vault where the cash was stored. The video showed the subjects reaching into the ATM, and then closing it and exiting the vestibule. On the video, one of the subjects appears to be carrying an object consistent with the size and appearance of the hard drive from the ATM.

Approximately ten minutes later, the subjects returned and opened up the cash machine again. Then they closed the top of the ATM and appeared to wait while the ATM computer restarted. After that, both subjects could be seen on the video using their mobile phones. One of the subjects reportedly appeared to be holding a small wireless mini-computer keyboard.

Soon after, the ATM began spitting out cash, netting the thieves more than $24,000. When they were done, the suspects allegedly retrieved their equipment from the ATM and left.

Forensic analysis of the ATM hard drive determined that the thieves installed the Ploutus.D malware on the cash machine’s hard drive. Ploutus.D is an advanced malware strain that lets crooks interact directly with the ATM’s computer and force it to dispense money.

“Often the malware requires entering of codes to dispense cash,” reads an FBI affidavit (PDF). “These codes can be obtained by a third party, not at the location, who then provides the codes to the subjects at the ATM. This allows the third party to know how much cash is dispensed from the ATM, preventing those who are physically at the ATM from keeping cash for themselves instead of providing it to the criminal organization. The use of mobile phones is often used to obtain these dispensing codes.”

In November 2017, similar ATM jackpotting attacks were discovered in the Saint George, Utah area. Surveillance footage from those ATMs showed the same subjects were at work.

The FBI’s investigation determined that the vehicles used by the suspects in the Utah thefts were rented by Venezuelan nationals.

On Nov. 16, Isaac Rafael Jorge Romero, 29, Jose Alejandro Osorio Echegaray, 36, and two other Venezuelan nationals were detained in Teton County, Wyo. for drug possession. Two other suspects in the Utah theft were arrested in San Diego when they tried to return a rental car that was caught on surveillance camera at one of the hacked ATMs.

To carry out a jackpotting attack, thieves first must gain physical access to the cash machine. From there they can use malware or specialized electronics — often a combination of both — to control the operations of the ATM.

All of the known ATM jackpotting attacks in the U.S. so far appear to be targeting a handful of older model cash machines manufactured by ATM giant Diebold Nixdorf. However, security firm FireEye notes that — with minor modifications to the malware code — Ploutus.D could be used to target ATMs from 40 different vendors in 80 countries.

Diebold’s advisory on hardening ATMs against jackpotting attacks is available here (PDF).

Jackpotting is not a new crime: Indeed, it has been a problem for ATM operators in most of the world for many years now. But for some reason, jackpotting attacks have until recently eluded U.S. ATM operators.

Jackpotting has been a real threat to ATM owners and manufacturers since at least 2010, when the late security researcher Barnaby Michael Douglas Jack (known to most as simply “Barnaby Jack”) demonstrated the attack to a cheering audience at the Black Hat security conference. A recording of that presentation is below.

Cory DoctorowPodcast: The Man Who Sold the Moon, Part 03


Here’s part three of my reading (MP3) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.

MP3

CryptogramSubway Elevators and Movie-Plot Threats

Local residents are opposing adding an elevator to a subway station because terrorists might use it to detonate a bomb. No, really. There's no actual threat analysis, only fear:

"The idea that people can then ride in on the subway with a bomb or whatever and come straight up in an elevator is awful to me," said Claudia Ward, who lives in 15 Broad Street and was among a group of neighbors who denounced the plan at a recent meeting of the local community board. "It's too easy for someone to slip through. And I just don't want my family and my neighbors to be the collateral on that."

[...]

Local residents plan to continue to fight, said Ms. Gerstman, noting that her building's board decided against putting decorative planters at the building's entrance over fears that shards could injure people in the event of a blast.

"Knowing that, and then seeing the proposal for giant glass structures in front of my building ­- ding ding ding! -- what does a giant glass structure become in the event of an explosion?" she said.

In 2005, I coined the term "movie-plot threat" to denote a threat scenario that caused undue fear solely because of its specificity. Longtime readers of this blog will remember my annual Movie-Plot Threat Contests. I ended the contest in 2015 because I thought the meme had played itself out. Clearly there's more work to be done.

Worse Than FailureRepresentative Line: As the Clock Terns

“Hydranix” can’t provide much detail about today’s code, because they’re under a “strict NDA”. All they could tell us was that it’s C++, and it’s part of a “mission critical” front end package. Honestly, I think this line speaks for itself:

(mil == 999 ? (!(mil = 0) && (sec == 59 ? 
  (!(sec = 0) && (min == 59 ? 
    (!(min = 0) && (++hou)) : ++min)) : ++sec)) : ++mil);

“Hydranix” suspects that this powers some sort of stopwatch, but they’re not really certain what its role is.
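If it is a stopwatch tick, here’s a readable reconstruction — ours, not “Hydranix”’s, and in Python for legibility — of what the ternary chain appears to do, assuming mil/sec/min/hou form a milliseconds-to-hours counter:

def tick(hou, min_, sec, mil):
    # Advance an hh:mm:ss.mmm counter by one millisecond, rolling each
    # field over in turn ("min_" avoids shadowing Python's built-in min).
    mil += 1
    if mil == 1000:
        mil = 0
        sec += 1
        if sec == 60:
            sec = 0
            min_ += 1
            if min_ == 60:
                min_ = 0
                hou += 1
    return hou, min_, sec, mil

All of that is buried in the original’s assignments-inside-ternaries, which work only because !(x = 0) evaluates to true and lets the && chain continue.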


,

CryptogramLocating Secret Military Bases via Fitness Data

In November, the company Strava released an anonymous data-visualization map showing all the fitness activity by everyone using the app.

Over this weekend, someone realized that it could be used to locate secret military bases: just look for repeated fitness activity in the middle of nowhere.

News article.

TEDLee Cronin’s ongoing quest for print-your-own medicine, and more news from TED speakers

Behold, your recap of TED-related news:

Print your own pharmaceutical factory. As part of an ongoing quest to make pharmaceuticals easier to manufacture, chemist Lee Cronin and his team at the University of Glasgow have designed a way to 3D-print a portable “factory” for the complicated and multi-step chemical reactions needed to create useful drugs. It’s a grouping of vessels about the size of water bottles; each vessel houses a different type of chemical reaction. Pharmacists or doctors could create specific drugs by adding the right ingredients to each vessel from pre-measured cartridges, following a simple step-by-step recipe. The process could help replace or supplement large chemical factories, and bring helpful drugs to new markets. (Watch Cronin’s TED Talk)

How Amit Sood’s TED Talk spawned the Google Art selfie craze. While Amit Sood was preparing for his 2016 TED Talk about Google’s Cultural Institute and Art Project, his co-presenter Cyril Diagne, a digital interaction artist, suggested that he include a prototype of a project they’d been playing with, one that matched selfies to famous pieces of art. Amit added the prototype to his talk, in which he matched live video of Cyril’s face to classic artworks — and when the TED audience saw it, they broke out in spontaneous applause. Inspired, Amit decided to develop the feature and add it to the Google Arts & Culture app. The new feature launched in December 2017, and it went viral in January 2018. Just like the live TED audience before it, online users loved it so much that the app shot to the number one spot in both the Android and iOS app stores. (Watch Sood’s TED Talk)

A lyrical film about the very real danger of climate change. Funded by a Kickstarter campaign by director and producer Matthieu Rytz, the documentary Anote’s Ark focuses on the clear and present danger of global warming to the Pacific Island nation of Kiribati (population: 100,000). As sea levels rise, the small, low-lying islands that make up Kiribati will soon be entirely covered by the ocean, displacing the population and their culture. Former president Anote Tong, who’s long been fighting global warming to save his country and his constituents, provides one of two central stories within the documentary. The film (here’s the trailer) premiered at the 2018 Sundance Festival in late January, and will be available more widely soon; follow on Facebook for news. (Watch Tong’s TED Talk)

An animated series about global challenges. Sometimes the best way to understand a big idea is on a whiteboard. Throughout 2018, Rabbi Jonathan Sacks and his team are producing a six-part whiteboard-animation series that explains key themes in his theology and philosophy around contemporary global issues. The first video, called “The Politics of Hope,” examines political strife in the West, and ways to change the culture from the politics of anger to the politics of hope. Future whiteboard videos will delve into integrated diversity, the relationship between religion and science, the dignity of difference, confronting religious violence, and the ethics of responsibility. (Watch Rabbi Sacks’ TED Talk)

Nobody wins the Google Lunar X Prize competition :( Launched in 2007, the Google Lunar X Prize competition challenged entrepreneurs and engineers to design low-cost ways to explore space. The premise, if not the work itself, was simple — the first privately funded team to get a robotic spacecraft to the moon, send high-resolution photos and video back to Earth, and move the spacecraft 500 meters would win a $20 million prize, from a total prize fund of $30 million. The deadline was set for 2012, and was pushed back four times; the latest deadline was set to be March 31, 2018. On January 23, X Prize founder and executive chair Peter Diamandis and CEO Marcus Shingles announced that the competition was over: It was clear none of the five remaining teams stood a chance of launching by March 31. Of course, the teams may continue to compete without the incentive of this cash prize, and some plan to. (Watch Diamandis’ TED Talk)

15 photos of America’s journey towards inclusivity. Art historian Sarah Lewis took control of the New Yorker photo team’s Instagram last week, sharing pictures that answered the timely question: “What are 15 images that chronicle America’s journey toward a more inclusive level of citizenship?” Among the iconic images from Gordon Parks and Carrie Mae Weems, Lewis also includes an image of her grandfather, who was expelled from the eleventh grade in 1926 for asking why history books ignored his own history. In the caption, she tells how he became a jazz musician and an artist, “inserting images of African Americans in scenes where he thought they should—and knew they did—exist.” All the photos are ones she uses in her “Vision and Justice” course at Harvard, which focuses on art, race and justice. (Watch Lewis’ TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.

Krebs on SecurityFile Your Taxes Before Scammers Do It For You

Today, Jan. 29, is officially the first day of the 2018 tax-filing season, also known as the day fraudsters start requesting phony tax refunds in the names of identity theft victims. Want to minimize the chances of getting hit by tax refund fraud this year? File your taxes before the bad guys can!

Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

According to the IRS, consumer complaints over tax refund fraud have been declining steadily over the years as the IRS and states enact more stringent measures for screening potentially fraudulent applications.

If you file your taxes electronically and the return is rejected, and if you were the victim of identity theft (e.g., if your Social Security number and other information was leaked in the Equifax breach last year), you should submit an Identity Theft Affidavit (Form 14039). The IRS advises that if you suspect you are a victim of identity theft, continue to pay your taxes and file your tax return, even if you must do so by paper.

If the IRS believes you were likely the victim of tax refund fraud in the previous tax year they will likely send you a special filing PIN that needs to be entered along with this year’s return before the filing will be accepted by the IRS electronically. This year marks the third out of the last five that I’ve received one of these PINs from the IRS.

Of course, filing your taxes early to beat the fraudsters requires one to have all of the tax forms needed to do so. As a sole proprietor, I find this a great challenge because many companies take their sweet time sending out 1099 forms and such (even though they’re required to do so by Jan. 31).

A great many companies are now turning to online services to deliver tax forms to contractors, employees and others. For example, I have received several notices via email regarding the availability of 1099 forms online; most say they are sending the forms in snail mail, but that if I need them sooner I can get them online if I just create an account or enter some personal information at some third-party site.

Having seen how so many of these sites handle personal information, I’m not terribly interested in volunteering more of it. According to Bankrate, taxpayers can still file their returns even if they don’t yet have all of their 1099s — as long as you have the correct information about how much you earned.

“Unlike a W-2, you generally don’t have to attach 1099s to your tax return,” Bankrate explains. “They are just issued so you’ll know how much to report, with copies going to the IRS so return processors can double-check your entries. As long as you have the correct information, you can put it on your tax form without having the statement in hand.”

In past tax years, identity thieves have used data gleaned from a variety of third-party and government Web sites to file phony tax refund requests — including from the IRS itself! One of their perennial favorites was the IRS’s Get Transcript service, which previously had fairly lax authentication measures.

After hundreds of thousands of taxpayers had their tax data accessed through the online tool, the IRS took it offline for a bit and then brought it back online, this time requiring a host of new data elements.

But many of those elements — such as your personal account number from a credit card, mortgage, home equity loan, home equity line of credit or car loan — can be gathered from multiple locations online with almost no authentication. For example, earlier this week I heard from Jason, a longtime reader who was shocked at how little information was required to get a copy of his 2017 mortgage interest statement from his former lender.

“I called our old mortgage company (Chase) to retrieve our 1098 from an old loan today,” Jason wrote. “After I provided the last four digits of the social security # to their IVR [interactive voice response system] that was enough to validate me to request a fax of the tax form, which would have included sensitive information. I asked for a supervisor who explained to me that it was sufficient to check the SSN last 4 + the caller id phone number to validate the account.”

If you’ve taken my advice and placed a security freeze on your credit file with the major credit bureaus, you don’t have to worry about thieves somehow bypassing the security on the IRS’s Get Transcript site. That’s because the IRS uses Experian to ask a series of knowledge-based authentication questions before an online account can even be created at the IRS’s site to access the transcript.

Now, anyone who reads this site regularly should know I’ve been highly critical of these KBA questions as a means of authentication. But the upshot here is that if you have a freeze in place at Experian (and I sincerely hope you do), Experian won’t even be able to ask those questions. Thus, thieves should not be able to create an account in your name at the IRS’s site (unless of course thieves manage to successfully request your freeze PIN from Experian’s site, in which case all bets are off).

While you’re getting your taxes in order this filing season, be on guard against fake emails or Web sites that may try to phish your personal or tax data. The IRS stresses that it will never initiate contact with taxpayers about a bill or refund. If you receive a phishing email that spoofs the IRS, consider forwarding it to phishing@irs.gov.

Finally, tax season also is when the phone-based tax scams kick into high gear, with fraudsters threatening taxpayers with arrest, deportation and other penalties if they don’t make an immediate payment over the phone. If you care for older parents or relatives, this may be a good time to remind them about these and other phone-based scams.

CryptogramEstimating the Cost of Internet Insecurity

It's really hard to estimate the cost of an insecure Internet. Studies are all over the map. A methodical study by RAND is the best work I've seen at trying to put a number on this. The results are, well, all over the map:

"Estimating the Global Cost of Cyber Risk: Methodology and Examples":

Abstract: There is marked variability from study to study in the estimated direct and systemic costs of cyber incidents, which is further complicated by the considerable variation in cyber risk in different countries and industry sectors. This report shares a transparent and adaptable methodology for estimating present and future global costs of cyber risk that acknowledges the considerable uncertainty in the frequencies and costs of cyber incidents. Specifically, this methodology (1) identifies the value at risk by country and industry sector; (2) computes direct costs by considering multiple financial exposures for each industry sector and the fraction of each exposure that is potentially at risk to cyber incidents; and (3) computes the systemic costs of cyber risk between industry sectors using Organisation for Economic Co-operation and Development input, output, and value-added data across sectors in more than 60 countries. The report has a companion Excel-based modeling and simulation platform that allows users to alter assumptions and investigate a wide variety of research questions. The authors used a literature review and data to create multiple sample sets of parameters. They then ran a set of case studies to show the model's functionality and to compare the results against those in the existing literature. The resulting values are highly sensitive to input parameters; for instance, the global cost of cyber crime has direct gross domestic product (GDP) costs of $275 billion to $6.6 trillion and total GDP costs (direct plus systemic) of $799 billion to $22.5 trillion (1.1 to 32.4 percent of GDP).

Here's RAND's risk calculator, if you want to play with the parameters yourself.

Note: I was an advisor to the project.

Separately, Symantec has published a new cybercrime report with their own statistics.

Worse Than FailureRepresentative Line: The Mystery of the SmallInt

PT didn’t provide very much information about today’s Representative Line.

Clearly bits and bytes was not something studied in this SQL stored procedure author. Additionally, Source control versions are managed with comments. OVER 90 Thousand!

        --Declare @PrevNumber smallint
                --2015/11/18 - SMALLINT causes overflow error when it goes over 32000 something 
                -- - this sp errors but can only see that when
                -- code is run in SQL Query analyzer 
                -- - we should also check if it goes higher than 99999
        DECLARE @PrevNumber int         --2015/11/18

Fortunately, I am Remy Poirot, the world’s greatest code detective. To your untrained eyes, you simply see the kind of comment which would annoy you. But I, an expert, with experience of the worst sorts of code man may imagine, can piece together the entire lineage of this code.

Let us begin with the facts: no source control is in use. Version history is managed in the comments. From this, we can deduce a number of things: the database where this code runs is also where it is stored. Changes are almost certainly made directly in production.

A statue of Hercule Poirot in Belgium

When those changes fail, they may only be detected when the “code is run in SQL Query Analyzer”. This ties in with the “changes in production/no source control”, but it also tells us that it is possible to run this code, have it fail, and no one notices. This means this code must be part of an unattended process, a batch job of some kind. Even an overflow error vanishes into the ether.

This code also, according to the comments, should “also check if [@PrevNumber] goes higher than 99999”. This is our most vital clue, for it tells us that the content of the value has a maximum width- more than 5 characters to represent it is a problem. This obviously means that the target system is a mainframe with a flat-file storage model.
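For reference — our annotation, not part of the original procedure — these are the limits everyone here is dancing around:

# SQL Server integer ranges behind those comments (reference values only):
SMALLINT_MAX = 2**15 - 1    # 32,767 -- the "32000 something"
INT_MAX = 2**31 - 1         # 2,147,483,647 -- plenty of headroom
FIVE_DIGIT_MAX = 99_999     # the width limit the comment says to check
assert SMALLINT_MAX < FIVE_DIGIT_MAX < INT_MAX

A smallint can’t even hold every five-digit value, which is exactly why the declaration changed to int while the 99999 check still matters.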

Already, from one line and a handful of comments, we’ve learned a great deal about this code, but one need not be the world’s greatest code detective to figure out this much. Let’s see what else we can tease out.

@PrevNumber must tie to some ID in the database, likely the “last processed ID” from the previous run of the batch job. The confusion over smallint and the need to enforce a five-digit limit implies that this database isn’t actually in control of its data. Either the data comes from a front-end with no validation- certainly possible- or it comes from an external system. But a value greater than 99999 isn’t invalid in the database- otherwise they could enforce that restriction via a constraint. This means the database holds data coming from and going to different places- it’s a “business integration” database.

With these clues, we can assemble the final picture of the crime scene.

In a dark corner of a datacenter are the components of a mainframe system. The original install was likely in the 70s, and while it saw updates and maintenance for the next twenty years, starting in the 90s it was put on life support. “It’s going away, any day now…” Unfortunately, huge swathes of the business depended on it and no one is entirely certain what it does. They can’t simply port business processes to a new system, because no one knows what the business processes are. They’re encoded as millions of lines of code in a dialect of COBOL no one understands.

A conga-line of managers have passed through the organization, trying to kill the mainframe. Over time, they’ve brought in ERP systems from Oracle or SAP. Maybe they’ve taken some home-grown ERP written by users in Access and tried to extend it to an enterprise solution. Worse, over the years, acquisitions come along, bringing new systems from other vendors under the umbrella of one corporate IT group.

All these silos need to talk to each other. They also have to be integrated into the suite of home-grown reporting tools, automation, and Excel spreadsheets that run the business. Shipping can only pull from the mainframe, while the P/L dashboard the executives use can only pull from SAP. At first, it’s a zoo of ETL jobs, until someone gets the bright idea to centralize it.

They launch a project, and give it a slick three-letter-acronym, like “QPR” or “LMP”. Or, since there are only so many TLAs one can invent, they probably reuse an existing one, like “GPS” or “FBI”. The goal: have a central datastore that integrates data from every silo in the organization, and then the silos can pull back out the data they want for their processes. This project has a multi-million dollar budget, and has exceeded that budget twice over, and is not yet finished.

The code PT supplied for us is but one slice of that architecture. It’s pulling data from one of those centralized tables in the business integration database, and massaging it into a format it can pass off to the mainframe. Like a murder reconstructed from just a bit of fingernail, we’ve laid bare the entire crime with our powers of observation, our experience, and our knowledge of bad code.


TEDTalks from TEDNYC Idea Search 2018

Cloe Shasha and Kelly Stoetzel hosted the fast-paced TED Idea Search 2018 program on January 24, 2018 at TED HQ in New York, NY. (Photo: Ryan Lash / TED)

TED is always looking for new voices with fresh ideas — and earlier this winter, we opened a challenge to the world: make a one-minute audition video that makes the case for your TED Talk. More than 1,200 people applied to be a part of the Idea Search program this year, and on Wednesday night at our New York headquarters, 13 audition finalists shared their ideas in a fast-paced program. Here are voices you may not have heard before — but that you’ll want to hear more from soon.

Ruth Morgan shares her work preventing the misinterpretation of forensic evidence. (Photo: Ryan Lash / TED)

Forensic evidence isn’t as clear-cut as you think. For years, forensic science research has focused on making it easier and more accurate to figure out what a trace — such as DNA or a jacket fiber — is and who it came from, but that doesn’t help us interpret what the evidence means. “What we need to know if we find your DNA on a weapon or gunshot residue on you is how did it get there and when did it get there,” says forensic scientist Ruth Morgan. These gaps in understanding have real consequences: forensic evidence is often misinterpreted and used to convict people of crimes they didn’t commit. Morgan and her team are committed to finding ways to answer the why and how, such as determining whether it’s possible to get trace evidence on you during normal daily activities (it is) and how trace DNA can be transferred. “We need to dramatically reduce the chance of forensic evidence being misinterpreted,” she says. “We need that to happen so that you never have to be that innocent person in the dock.”

The intersection of our senses. An experienced composer and filmmaker, Philip Clemo has been on a quest to determine if people can experience imagery with the same depth that they experience music. Research has shown that sound can impact how we perceive visuals, but can visuals have a similarly profound impact? In his live performances, Clemo and his band use abstract imagery in addition to abstract music to create a visual experience for the audience. He hopes that people can have these same experiences in their everyday lives by quieting their minds to fully experience the “visual music” of our surrounding environment — and improve our connection to our world.

Reading the Bible … without omission. At a time when he was a recovering fundamentalist and longtime atheist, David Ellis Dickerson received a job offer as a quiz question writer and Bible fact-checker for the game show The American Bible Challenge. Among his responsibilities: coming up with questions that conveniently ignored the sections of the Bible that mention slavery, concubines and incest. The omission expectations he was faced with made him realize that evangelicals read the Bible in the same way they watch reality television: “with a willing, even loving, suspension of disbelief.” Now, he invites devout Christians to read the Bible in its most unedited form, to recognize its internal contradictions and to grapple with its imperfections.

Danielle Bilot suggests three simple but productive actions we can take to help bees: plant flowers that bloom year-round, leave bare areas of soil for bees to nest in, and plant flower patches so that bees can more easily locate food. (Photo: Ryan Lash / TED)

To bee or not to bee? The most famous bee species of recent memory has overwhelmingly been the honey bee. For years, their concerning disappearance has made international news and been the center of research. Environmental designer Danielle Bilot believes that the honey bee should share the spotlight with the 3,600 other species that pollinate much of the food we eat every day in the US, such as blueberries, tomatoes and eggplants. Honey bees, she says, aren’t even native to North America (they were originally brought over from Europe) and therefore have a tougher time successfully pollinating these and many other indigenous crops. Regardless of species, human activity is harming them, and Bilot suggests three simple but productive actions we can take to make their lives easier and revive their populations: plant flowers that bloom year-round, leave bare areas of soil for bees to nest in, and plant flower patches so that bees can more easily locate food.

What if technology protected your hearing? Computers that talk to us and voice-enabled technology like Siri, Alexa and Google Home are changing the importance of hearing, says ear surgeon Matthew Bromwich. And with more than 500 million people suffering from disabling hearing loss globally, the importance of democratizing hearing health care is more relevant than ever. “How do we use our technology to improve the clarity of our communication?” Bromwich asks. He and his team have created a hearing testing technology called “SHOEBOX,” which gives hearing health care access to more than 70,000 people in 32 countries. He proposes using technology to help prevent this disability, amplify sound clarity, and paint a new future for hearing.

Welcome to the 2025 Technology Design Awards, with your host, Tom Wujec. Rocking a glittery dinner jacket, design guru Tom Wujec presents a science-fiction-y awards show from eight years into the future, honoring the designs that made the biggest impact in technology, consumer goods and transportation — capped off by a grand impact award chosen live onstage by an AI. While the designs seem fictional — a hacked auto, a self-rising house, a cutting-edge prosthetic — the kicker to Tom’s future award show is that everything he shows is in development right now.

In collaboration with the MIT Media Lab, Nilay Kulkarni used his skills as a self-taught programmer to build a simple tech solution to prevent human stampedes during the Kumbh Mela, one of the world’s largest crowd gatherings, in India. (Photo: Ryan Lash / TED)

A 15-year-old solves a deadly problem with a low-cost device. Every four years, more than 30 million Hindus gather for the Kumbh Mela, the world’s largest religious gathering, in order to wash themselves of their sins. Once every 12 years, it takes place in Nashik, a city on the western coast of India that ordinarily contains 1.5 million residents. With such a massive crowd in a small space, stampedes inevitably happen, and in 2003, 39 people were killed during the festival in Nashik. In 2014, then-15-year-old Nilay Kulkarni decided he wanted to find a solution. He recalls: “It seemed like a mission impossible, a dream too big.” After much trial and error, he and collaborators at the MIT Media Lab came up with a cheap, portable, effective stampede stopper called “Ashioto” (meaning footstep in Japanese): a pressure-sensor-equipped mat which counts the number of people stepping on it and sends the data over the internet to authorities so they can monitor the flow of people in real time. Five mats were deployed at the 2015 Nashik Kumbh Mela, and thanks to their use and other innovations, no stampedes occurred for the first time ever there. Much of the code is now available to the public to use for free, and Kulkarni is trying to improve the device. His dream: for Ashiotos to be used at all large gatherings, like the other Kumbh Melas, the Hajj and even at major concerts and sports events.

A new way to think about health care. Though doctors and nurses dominate people’s minds when it comes to health care, Tamekia MizLadi Smith is more interested in the roles of non-clinical staff in creating effective and positive patient experiences. As an educator and spoken word poet, Smith uses the acronym “G.R.A.C.E.D.” to empower non-clinical staff to be accountable for data collection and to provide compassionate care to patients. Under the belief that compassionate care doesn’t begin and end with just clinicians, Smith asks that desk specialists, parking attendants and other non-clinical staff are trained and treated as integral parts of well-functioning health care systems.

Mad? Try some humor. “The world seems humor-impaired,” says comedian Judy Carter. “It just seems like everyone is going around angry: going, ‘Politics is making me so mad; my boss is just making me so mad.'” In a sharp, zippy talk, Carter makes the case that no one can actually make you mad — you always have a choice of how to respond — and that anger actually might be the wrong way to react. Instead, she suggests trying humor. “Comedy rips negativity to shreds,” she says.

Want a happier, healthier life? Look to your friends. In the relationship pantheon, friends tend to place third in importance (after spouses and blood relatives). But we should make them a priority in our lives, argues science writer Lydia Denworth. “The science of friendship suggests we should invest in relationships that deliver strong bonds. We should value people who are positive, stable, cooperative forces,” Denworth says. While friendship was long ignored by academics, researchers are now studying it and finding it provides us with strong physical and emotional benefits. “In this time when we’re struggling with an epidemic of loneliness and bitter political divisions, [friendships] remind us what joy and connection look like and why they matter,” Denworth says.

An accessible musical wonderland. With quick but impressive musical interludes, Matan Berkowitz introduces the “Airstrument” — a new type of instrument that allows anyone to create music in a matter of minutes by translating movement into sound. This technology is part of a series of devices Berkowitz has developed to enable musicians with disabilities (and anyone who wants to make music) to express themselves in non-traditional ways. “Creation with intention,” he raps, accompanied by a song created on the Airstrument. “Now it’s up to us to wake up and act.”

Divya Chander shares her work studying what human brains look like when they lose and regain consciousness. (Photo: Ryan Lash / TED)

Where does consciousness go when you’re under anesthesia? Divya Chander is an anesthesiologist, delivering specialized doses of drugs that put people in an altered state before surgery. She often wonders: Where do people’s brains go while they’re under? What do they perceive? The question has led her into a deeper exploration of perception, awareness and consciousness itself. In a thoughtful talk, she suggests that we have a lot to learn about consciousness … that we could learn by studying unconsciousness.

The art of creation without preparation. To close out the night, vocalist and improviser Lex Empress creates coherent lyrics from words written by audience members on paper airplanes thrown onto the stage. Empress is accompanied by virtuoso pianist and producer Gilian Baracs, who also improvises everything he plays. Their music reminds us to enjoy the great improvisation that is life.

,

Krebs on SecurityFirst ‘Jackpotting’ Attacks Hit U.S. ATMs

ATM “jackpotting” — a sophisticated crime in which thieves install malicious software and/or hardware at ATMs that forces the machines to spit out huge volumes of cash on demand — has long been a threat for banks in Europe and Asia, yet these attacks somehow have eluded U.S. ATM operators. But all that changed this week after the U.S. Secret Service quietly began warning financial institutions that jackpotting attacks have now been spotted targeting cash machines here in the United States.

To carry out a jackpotting attack, thieves first must gain physical access to the cash machine. From there they can use malware or specialized electronics — often a combination of both — to control the operations of the ATM.

A keyboard attached to the ATM port. Image: FireEye

On Jan. 21, 2018, KrebsOnSecurity began hearing rumblings about jackpotting attacks, also known as “logical attacks,” hitting U.S. ATM operators. I quickly reached out to ATM giant NCR Corp. to see if they’d heard anything. NCR said at the time it had received unconfirmed reports, but nothing solid yet.

On Jan. 26, NCR sent an advisory to its customers saying it had received reports from the Secret Service and other sources about jackpotting attacks against ATMs in the United States.

“While at present these appear focused on non-NCR ATMs, logical attacks are an industry-wide issue,” the NCR alert reads. “This represents the first confirmed cases of losses due to logical attacks in the US. This should be treated as a call to action to take appropriate steps to protect their ATMs against these forms of attack and mitigate any consequences.”

The NCR memo does not mention the type of jackpotting malware used against U.S. ATMs. But a source close to the matter said the Secret Service is warning that organized criminal gangs have been attacking stand-alone ATMs in the United States using “Ploutus.D,” an advanced strain of jackpotting malware first spotted in 2013.

According to that source — who asked to remain anonymous because he was not authorized to speak on the record — the Secret Service has received credible information that crooks are activating so-called “cash out crews” to attack front-loading ATMs manufactured by ATM vendor Diebold Nixdorf.

The source said the Secret Service is warning that thieves appear to be targeting Opteva 500 and 700 series Diebold ATMs using the Ploutus.D malware in a series of coordinated attacks over the past 10 days, and that there is evidence that further attacks are being planned across the country.

“The targeted stand-alone ATMs are routinely located in pharmacies, big box retailers, and drive-thru ATMs,” reads a confidential Secret Service alert sent to multiple financial institutions and obtained by KrebsOnSecurity. “During previous attacks, fraudsters dressed as ATM technicians and attached a laptop computer with a mirror image of the ATMs operating system along with a mobile device to the targeted ATM.”

Reached for comment, Diebold shared an alert it sent to customers Friday warning of potential jackpotting attacks in the United States. Diebold’s alert confirms the attacks so far appear to be targeting front-loaded Opteva cash machines.

“As in Mexico last year, the attack mode involves a series of different steps to overcome security mechanism and the authorization process for setting the communication with the [cash] dispenser,” the Diebold security alert reads. A copy of the entire Diebold alert, complete with advice on how to mitigate these attacks, is available here (PDF).

The Secret Service alert explains that the attackers typically use an endoscope — a slender, flexible instrument traditionally used in medicine to give physicians a look inside the human body — to locate the internal portion of the cash machine where they can attach a cord that allows them to sync their laptop with the ATM’s computer.

An endoscope made to work in tandem with a mobile device. Source: gadgetsforgeeks.com.au

“Once this is complete, the ATM is controlled by the fraudsters and the ATM will appear Out of Service to potential customers,” reads the confidential Secret Service alert.

At this point, the crook(s) installing the malware will contact co-conspirators who can remotely control the ATMs and force the machines to dispense cash.

“In previous Ploutus.D attacks, the ATM continuously dispensed at a rate of 40 bills every 23 seconds,” the alert continues. Once the dispense cycle starts, the only way to stop it is to press cancel on the keypad. Otherwise, the machine is completely emptied of cash, according to the alert.

A 2017 analysis of Ploutus.D by security firm FireEye called it “one of the most advanced ATM malware families we’ve seen in the last few years.”

“Discovered for the first time in Mexico back in 2013, Ploutus enabled criminals to empty ATMs using either an external keyboard attached to the machine or via SMS message, a technique that had never been seen before,” FireEye’s Daniel Regalado wrote.

According to FireEye, the Ploutus attacks seen so far require thieves to somehow gain physical access to an ATM — either by picking its locks, using a stolen master key or otherwise removing or destroying part of the machine.

Regalado says the crime gangs typically responsible for these attacks deploy “money mules” to conduct the attacks and siphon cash from ATMs. The term refers to low-level operators within a criminal organization who are assigned high-risk jobs, such as installing ATM skimmers and otherwise physically tampering with cash machines.

“From there, the attackers can attach a physical keyboard to connect to the machine, and [use] an activation code provided by the boss in charge of the operation in order to dispense money from the ATM,” he wrote. “Once deployed to an ATM, Ploutus makes it possible for criminals to obtain thousands of dollars in minutes. While there are some risks of the money mule being caught by cameras, the speed in which the operation is carried out minimizes the mule’s risk.”

Indeed, the Secret Service memo shared by my source says the cash out crew/money mules typically take the dispensed cash and place it in a large bag. After the cash is taken from the ATM and the mule leaves, the phony technician(s) return to the site and remove their equipment from the compromised ATM.

“The last thing the fraudsters do before leaving the site is to plug the Ethernet cable back in,” the alert notes.

FireEye said all of the samples of Ploutus.D it examined targeted Diebold ATMs, but it warned that small changes to the malware’s code could enable it to be used against 40 different ATM vendors in 80 countries.

The Secret Service alert says ATMs still running on Windows XP are particularly vulnerable, and it urged ATM operators to update to a version of Windows 7 to defeat this specific type of attack.

This is a quickly developing story and may be updated multiple times over the next few days as more information becomes available.

Cory DoctorowI’m speaking at UCSD on Feb 9!

I’m appearing at UCSD on February 9, with a talk called “Scarcity, Abundance and the Finite Planet: Nothing Exceeds Like Excess,” in which I’ll discuss the potentials for scarcity and abundance — and bright-green vs austere-green futurism — drawing on my novels Walkaway, Makers and Down and Out in the Magic Kingdom.


The talk is free and open to the public; the organizers would appreciate an RSVP to galleryinfo@calit2.net.


The lecture will take place in the Calit2 Auditorium in the Qualcomm Institute’s Atkinson Hall headquarters. The talk will begin at 5:00 p.m., and it will be moderated by Visual Arts professor Jordan Crandall, who chairs the gallery@calit2’s 2017-2018 faculty committee. Following the talk and Q&A session, attendees are invited to stay for a public reception.


Doctorow will discuss the economics, material science, psychology and politics of scarcity and abundance as described in his novels WALKAWAY, MAKERS and DOWN AND OUT IN THE MAGIC KINGDOM. Together, they represent what he calls “a 15-year literature of technology, fabrication and fairness.”

Among the questions he’ll pose: How can everyone in the world live like an American when we need six planets’ worth of materials to realize that dream? Doctorow also asks, “Can fully automated luxury communism get us there, or will our futures be miserable austerity-ecology hairshirts where we all make do with less?”


Author Cory Doctorow to Speak at UC San Diego on Scarcity, Abundance and the Finite Planet [Doug Ramsey/UCSD News]

Planet Linux AustraliaDonna Benjamin: Turning stories into software at LCA2018

Donna speaking in front of a large screen showing a survey and colourful graph. Photo Credit: Josh Simmons
I love free software, but sometimes, I feel, free software does not love me.
 
Why is it so hard to use? Why is it still so buggy? Why do the things I can do simply with other tools, take so much effort? Why is the documentation so inscrutable?  Why have all the config settings been removed from the GUI? Why does this HowTo assume I can find a config file, and edit it with VI? Do I have to learn to use VI before I can stop my window manager getting in the way of the application I’m trying to use?
 
Tis a mystery. Or is it?
 
It’s fair to say, that the Free Software community is still largely made up of blokes, who are software developers.  The idea that “user centered design” is a “Good Thing” is not evenly distributed. In fact, some seem to think it’s not a good thing at all, “patches welcome” they say, “go fix it yourself”. 
 
The web community on the other hand, has discovered that the key to their success is understanding and meeting the needs of the people who use their software. Ideological purity is great, but enabling people to meet their objectives, is better.
 
As technologists, we get excited by technology. Of course we do! Technology is modern magic. And we are wizards. It’s wonderful. But the people who use our software are not necessarily interested in the tech itself, they probably just want to use it to get something done. They probably don’t even care what language it’s written in.
 
Let’s say a customer walks into a hardware store and says they want a drill. Or perhaps they walk in and stand in front of a shelf simply contemplating a dizzying array of drills, drill bits and other accessories. Which one is right for the job, they wonder. Should I get a cordless one? Will I really need diamond-tipped drill bits? 
 
There's a technique called the 5 Why's that's useful to get under the surface of a requirement. The idea is, you keep asking why until you uncover the real reason for a request, need, feature or widget. For example, we could ask this customer...
 
Why do you want this drill? To drill a hole. 
Why? To hang a picture on my wall.  
Why? To be able to share and enjoy this amazing photo from my recent holiday.
 
So we discover our customer did not, in fact, want a drill. Our customer wanted to express something about their identity by decorating their home. Telling them all about the voltage of the drill and the huge range of drill bits available may have helped them choose the right drill, but if we stop to understand the job in the first place, we’re more likely to help that person get what they need to get that job done.
 
User stories are one way we can explore the “why” behind the software we build. Check out my talk from the Developers Developers miniconf at linux.conf.au on Monday, “Turning stories into software.”
 

 

References

 

Photo by Josh Simmons

,

CryptogramFriday Squid Blogging: Squid that Mate, Die, and Then Sink

The mating and death characteristics of some squid are fascinating.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityRegistered at SSA.GOV? Good for You, But Keep Your Guard Up

KrebsOnSecurity has long warned readers to plant your own flag at the my Social Security online portal of the U.S. Social Security Administration (SSA) — even if you are not yet drawing benefits from the agency — because identity thieves have been registering accounts in peoples’ names and siphoning retirement and/or disability funds. This is the story of a Midwest couple that took all the right precautions and still got hit by ID thieves who impersonated them to the SSA directly over the phone.

In mid-December 2017 this author heard from Ed Eckenstein, a longtime reader in Oklahoma whose wife Ruth had just received a snail mail letter from the SSA about successfully applying to withdraw benefits. The letter confirmed she’d requested a one-time transfer of more than $11,000 from her SSA account. The couple said they were perplexed because both previously had taken my advice and registered accounts with MySocialSecurity, even though Ruth had not yet chosen to start receiving SSA benefits.

The fraudulent one-time payment that scammers tried to siphon from Ruth Eckenstein’s Social Security account.

Sure enough, when Ruth logged into her MySocialSecurity account online, there was a pending $11,665 withdrawal destined to be deposited into a Green Dot prepaid debit card account (funds deposited onto a Green Dot card can be spent like cash at any store that accepts credit or debit cards). The $11,665 amount was available for a one-time transfer because it was intended to retroactively cover monthly retirement payments back to her 65th birthday.

The letter the Eckensteins received from the SSA indicated that the benefits had been requested over the phone, meaning the crook(s) had called the SSA pretending to be Ruth and supplied them with enough information about her to enroll her to begin receiving benefits. Ed said he and his wife immediately called the SSA to notify them of fraudulent enrollment and pending withdrawal, and they were instructed to appear in person at an SSA office in Oklahoma City.

The SSA ultimately put a hold on the fraudulent $11,665 transfer, but Ed said it took more than four hours at the SSA office to sort it all out. Mr. Eckenstein said the agency also informed them that the thieves had signed his wife up for disability payments. In addition, her profile at the SSA had been changed to include a phone number in the 786 area code (Miami, Fla.).

“They didn’t change the physical address perhaps thinking that would trigger a letter to be sent to us,” Ed explained.

Thankfully, the SSA sent a letter anyway. Ed said many additional hours spent researching the matter with SSA personnel revealed that in order to open the claim on Ruth’s retirement benefits, the thieves had to supply the SSA with a short list of static identifiers about her, including her birthday, place of birth, mother’s maiden name, current address and phone number.

Unfortunately, most (if not all) of this data is available on a broad swath of the American populace for free online (think Zillow, Ancestry.com, Facebook, etc.) or else for sale in the cybercrime underground for about the cost of a latte at Starbucks.

The Eckensteins thought the matter had been resolved until Jan. 14, when Ruth received a 1099 form from the SSA indicating they’d reported to the IRS that she had in fact received an $11,665 payment.

“We’ve emailed our tax guy for guidance on how to deal with this on our taxes,” Mr. Eckenstein wrote in an email to KrebsOnSecurity. “My wife logged into SSA portal and there was a note indicating that corrected/updated 1099s would be available at the end of the month. She’s not sure whether that message was specific to her or whether everyone’s seeing that.”

NOT SMALL IF IT HAPPENS TO YOU

Identity thieves have been exploiting authentication weaknesses to divert retirement account funds almost since the SSA launched its portal eight years ago. But the crime really picked up in 2013, around the same time KrebsOnSecurity first began warning readers to register their own accounts at the MySSA portal. That uptick coincided with a move by the U.S. Treasury to start requiring that all beneficiaries receive payments through direct deposit (though the SSA says paper checks are still available to some beneficiaries under limited circumstances).

More than 34 million Americans now conduct business with the Social Security Administration (SSA) online. A story this week from Reuters says the SSA doesn’t track data on the prevalence of identity theft. Nevertheless, the agency assured the news outlet that its anti-fraud efforts have made the problem “very rare.”

But Reuters notes that a 2015 investigation by the SSA’s Office of Inspector General identified more than 30,000 suspicious MySSA registrations, and more than 58,000 allegations of fraud related to MySSA accounts from February 2013 to February 2016.

“Those figures are small in the context of overall MySSA activity – but it will not seem small if it happens to you,” writes Mark Miller for Reuters.

The SSA has not yet responded to a request for comment.

Ed and Ruth’s experience notwithstanding, it’s still a good idea to set up a MySSA account — particularly if you or your spouse will be eligible to withdraw benefits soon. The agency has been trying to beef up online authentication for citizens logging into its MySSA portal. Last summer the SSA began requiring all users to enter a username and password in addition to a one-time security code sent to their email or phone, although as previously reported here that authentication process could be far more robust.

The Reuters story reminds readers to periodically use the MySSA portal to make sure that your personal information – such as date of birth and mailing address – are correct. “For current beneficiaries, if you notice that a monthly payment has not arrived, you should notify the SSA immediately via the agency’s toll-free line (1-800-772-1213) or at your local field office,” Miller advised. “In most cases, the SSA will make you whole if the theft is reported quickly.”

Another option is to use the SSA’s “Block Electronic Access” feature, which blocks any automatic telephone or online access to your Social Security record – including by you (although it’s unclear if blocking access this way would have stopped ID thieves who manage to speak with a live SSA representative). To restore electronic access, you’ll need to contact the Social Security Administration and provide proof of your identity.

CryptogramThe Effects of the Spectre and Meltdown Vulnerabilities

On January 3, the world learned about a series of major security vulnerabilities in modern microprocessors. Called Spectre and Meltdown, these vulnerabilities were discovered by several different researchers last summer, disclosed to the microprocessors' manufacturers, and patched -- at least to the extent possible.

This news isn't really any different from the usual endless stream of security vulnerabilities and patches, but it's also a harbinger of the sorts of security problems we're going to be seeing in the coming years. These are vulnerabilities in computer hardware, not software. They affect virtually all high-end microprocessors produced in the last 20 years. Patching them requires large-scale coordination across the industry, and in some cases drastically affects the performance of the computers. And sometimes patching isn't possible; the vulnerability will remain until the computer is discarded.

Spectre and Meltdown aren't anomalies. They represent a new area to look for vulnerabilities and a new avenue of attack. They're the future of security -- and it doesn't look good for the defenders.

Modern computers do lots of things at the same time. Your computer and your phone simultaneously run several applications -- or apps. Your browser has several windows open. A cloud computer runs applications for many different computers. All of those applications need to be isolated from each other. For security, one application isn't supposed to be able to peek at what another one is doing, except in very controlled circumstances. Otherwise, a malicious advertisement on a website you're visiting could eavesdrop on your banking details, or the cloud service purchased by some foreign intelligence organization could eavesdrop on every other cloud customer, and so on. The companies that write browsers, operating systems, and cloud infrastructure spend a lot of time making sure this isolation works.

Both Spectre and Meltdown break that isolation, deep down at the microprocessor level, by exploiting performance optimizations that have been implemented for the past decade or so. Basically, microprocessors have become so fast that they spend a lot of time waiting for data to move in and out of memory. To increase performance, these processors guess what data they're going to receive and execute instructions based on that. If the guess turns out to be correct, it's a performance win. If it's wrong, the microprocessors throw away what they've done without losing any time. This feature is called speculative execution.

Spectre and Meltdown attack speculative execution in different ways. Meltdown is more of a conventional vulnerability; the designers of the speculative-execution process made a mistake, so they just needed to fix it. Spectre is worse; it's a flaw in the very concept of speculative execution. There's no way to patch that vulnerability; the chips need to be redesigned in such a way as to eliminate it.

Since the announcement, manufacturers have been rolling out patches to these vulnerabilities to the extent possible. Operating systems have been patched so that attackers can't make use of the vulnerabilities. Web browsers have been patched. Chips have been patched. From the user's perspective, these are routine fixes. But several aspects of these vulnerabilities illustrate the sorts of security problems we're only going to be seeing more of.

First, attacks against hardware, as opposed to software, will become more common. Last fall, vulnerabilities were discovered in Intel's Management Engine, a remote-administration feature on its microprocessors. Like Spectre and Meltdown, they affected how the chips operate. Looking for vulnerabilities on computer chips is new. Now that researchers know this is a fruitful area to explore, security researchers, foreign intelligence agencies, and criminals will be on the hunt.

Second, because microprocessors are fundamental parts of computers, patching requires coordination between many companies. Even when manufacturers like Intel and AMD can write a patch for a vulnerability, computer makers and application vendors still have to customize and push the patch out to the users. This makes it much harder to keep vulnerabilities secret while patches are being written. Spectre and Meltdown were announced prematurely because details were leaking and rumors were swirling. Situations like this give malicious actors more opportunity to attack systems before they're guarded.

Third, these vulnerabilities will affect computers' functionality. In some cases, the patches for Spectre and Meltdown result in significant reductions in speed. The press initially reported 30%, but that only seems true for certain servers running in the cloud. For your personal computer or phone, the performance hit from the patch is minimal. But as more vulnerabilities are discovered in hardware, patches will affect performance in noticeable ways.

And then there are the unpatchable vulnerabilities. For decades, the computer industry has kept things secure by finding vulnerabilities in fielded products and quickly patching them. Now there are cases where that doesn't work. Sometimes it's because computers are in cheap products that don't have a patch mechanism, like many of the DVRs and webcams that are vulnerable to the Mirai (and other) botnets -- ­groups of Internet-connected devices sabotaged for coordinated digital attacks. Sometimes it's because a computer chip's functionality is so core to a computer's design that patching it effectively means turning the computer off. This, too, is becoming more common.

Increasingly, everything is a computer: not just your laptop and phone, but your car, your appliances, your medical devices, and global infrastructure. These computers are and always will be vulnerable, but Spectre and Meltdown represent a new class of vulnerability. Unpatchable vulnerabilities in the deepest recesses of the world's computer hardware is the new normal. It's going to leave us all much more vulnerable in the future.

This essay previously appeared on TheAtlantic.com.

Worse Than FailureError'd: #TITLE_OF_ERRORD2#

Joe P. wrote, "When I tried to buy a coffee at the airport with my contactless VISA card, it apparently thought my name was '%s'."

 

"Instead of outsourcing to Eastern Europe or the Asian subcontinent, companies should be hiring from Malta. Just look at these people! They speak fluent base64!" writes Michael J.

 

Raffael wrote, "While I can proudly say that I am working on bugs, the Salesforce Chatter site should probably consider doing the same."

 

"Wow! Thanks! Happy Null Year to you too!" Alexander K. writes.

 

Joel B. wrote, "Yesterday was the first time I've ever seen a phone with a 'License Violation'. Phone still works, so I guess there's that."

 

"They missed me so much, they decided to give me...nothing," writes Timothy.

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Light Talks and Close

Lightning Talks

  • Usability Fails
  • Etching
  • Diverse Events
  • Kids Space – fairly unstructured and self organising
  • Opening up LandSat imagery – NBAR-T available on NCI
  • Project Nacho – HTML -> VPN/RDP gateway. Apache Guacamole
  • Vocaloids
  • Blockchain
  • Using j2 to create C++ code
  • Memory model code update
  • CLIs are user interface too
  • Complicated git things
  • Mollygive – matching donations
  • Abusing Docker

Closing

  • LCA 2019 will be in Christchurch, New Zealand – http://lca2019.linux.org.au
  • 700 Attendees at 2018
  • 400 talk and 36 Miniconf submissions

 

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Session 2

QUIC: Replacing TCP for the Web Jana Iyengar

  • History
    • Protocol for http transport
    • Deployed Inside Google 2014 and Chrome / mobile apps
    • Improved performance: YouTube rebuffers down 15-18%, Google search latency down 3.6-8%
    • 35% of Google’s egress traffic (7% of the Internet)
    • Working group started in 2016 to standardize QUIC
    • Turned off at the start of 2016 due to a security problem
    • Traffic doubled in Sept 2016 when it was turned on for the YouTube app
  • Technology
    • Previously: IP -> TCP -> TLS -> HTTP/2
    • With QUIC: IP -> UDP -> QUIC -> HTTP over QUIC
    • Combines the crypto and transport handshakes
    • congestion control
    • loss recovery
    • TLS 1.3 has some of the same features that QUIC pioneered; QUIC is being updated to take account of it
  • HTTP/1
    • 1 trip for TCP
    • 2 trips for TLS
    • Single connection – Head Of Line blocking
    • Multiple TCP connections workaround.
  • HTTP/2
    • Streams within a single transport connection
    • Packet loss will stall the TCP layer
    • Unresolved problems
      • Connection setup latency
      • Middlebox interference with TCP – makes it hard to change TCP
      • Head of line blocking within TCP
  • QUIC
    • Connection setup
      • 0 round trips, handshake packet followed directly by data packet
      • 1 round trip if crypto keys are not new
      • 2 round trips if QUIC version needs renegotiation
    • Streams
      • http/2 streams are sent as quic streams
  • Aspirations of protocol
    • Deployable and evolvable
    • Low latency connection establishment
    • Stream multiplexing
    • Better loss recovery and flexible congestion control
      • richer signalling (unique packet number)
      • better RTT estimates
    • Resilience to NAT rebinding (UDP NAT mappings change often, maybe every few seconds)
  • UDP is not a transport; you put something on top of UDP to build a transport (see the sketch after these notes)
  • Why not a new protocol instead of UDP? Almost impossible to get a new protocol through the middleboxes around the Internet.
  • Metrics
    • Search Latency (see paper for other metrics)
    • Enter search term > entire page is loaded
    • Mean: desktop improved 8%, mobile 3.6%
    • Low latency: desktop 1%, mobile none
    • Highest latency (90-99th percentile of users): desktop & mobile 15-16%
    • Video similar
    • Big gain is from 0 RTT handshake
  • QUIC – Search Latency Improvements by Country
    • South Korea – 38ms RTT – 1% improvement
    • USA – 50ms – 2-3.5%
    • India – 188ms – 5-13%
  • Middlebox ossification
    • Vendor ossified first byte of QUIC packet – flags byte
    • since it seemed to be the same on all QUIC packets
    • broke QUIC deployment when a flag was fixed
    • Encryption is the only way to protect against network ossification
    • “Greasing” by randomly changing options is also an option.
  • Other Protocols over QUIC?
    • Concentrating on http/2
    • Looking at Web RPC
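
The point in the notes that UDP is not a transport by itself is worth a tiny illustration. Below is a minimal Python sketch (standard library only, peer address hypothetical, and emphatically not QUIC itself) that layers a single transport feature, sequence-numbered retransmission, on top of raw datagrams. QUIC builds loss recovery, congestion control, crypto and streams on top of UDP in the same spirit.

    import socket
    import struct

    # Hypothetical peer for illustration; not a real QUIC endpoint.
    PEER = ("127.0.0.1", 4433)

    def send_reliable(sock, seq, payload, retries=3, timeout=0.5):
        """Send one datagram and wait for a matching ACK, retransmitting on loss."""
        packet = struct.pack("!I", seq) + payload  # 4-byte sequence number header
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(packet, PEER)
            try:
                data, _ = sock.recvfrom(2048)
                if len(data) >= 4 and struct.unpack("!I", data[:4])[0] == seq:
                    return True   # delivered and acknowledged
            except socket.timeout:
                continue          # assume the datagram was lost; retransmit
        return False

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    print(send_reliable(sock, seq=1, payload=b"hello"))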

Remote Work: My first decade working from the far end of the earth John Dalton

  • “Remote work has given me a fulfilling technical career while still being able to raise my family in Tasmania”
  • First son born in 2015; wanted to stay in Tasmania with family to raise them, rather than moving to a tech hub.
  • 2017 working with High Performance Computing at University Tasmania
  • If everything is going to be outsourced, I want to be the one they outsourced to.
  • Wanted to do big web stuff, nobody in Tasmania doing that.
  • Was a user at LibraryThing
    • They were searching for Sysadmin/DBA in Portland, Maine
    • Knew he could do the job even though was on other side of the world
    • Negotiated into it over a couple of months
    • Knew could do the work, but not sure how the position would work out

Challenges

  • Discipline
    • Feels he is not organised. Doesn’t keep planner up to date or todo lists etc.
    • “You can spend a lot of time reading about time management without actually doing it”
    • You do need at least a minimum level of organisation
  • Isolation
    • Lives 20 minutes out of Hobart
    • In semi-rural area for days at a time, doesn’t leave house all week except to ferry kids on weekends.
    • “Never considered myself an extrovert, but I do enjoy talking to people at least weekly”
    • Need to work to hook in with the Hobart tech community. Goes to meetups. Plays D&D with friends.
    • Considering a coworking space; sometimes goes to cafes etc.
  • Setting Boundaries
    • Hard to leave work.
    • Have a dedicated work space.
  • Internet Access
    • Prioritise Coverage over cost these days for mobile.
    • Sometimes the fixed provider goes down; need to have a backup
  • Communication
    • Less random communication with other employees
    • Cannot assume any particular knowledge when talking with other people
    • Be aware of cultural differences
    • More chances of miscommunication

Opportunities

  • Access to companies, jobs and technologies that you couldn’t get locally
  • Access to people with a wider range of experiences and backgrounds

Finding remote work

  • Talk your way into it
  • Networking
  • Jobs BoF
  • stackoverflow.com/jobs can filter
  • weworkremotely.com

Making it work

  • Be visible
  • Go home at the end of the day
  • Remember real people are at the end of the email

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Session 1

Self-Documenting Coders: Writing Workshop for Devs Heidi Waterhouse

History of Technical documentation

  • Linear Writing
    • On Paper, usually books
    • Emphasis on understanding and doing
  • Task-based writing
    • Early 90s
    • DITA
    • Concept, Procedure, Reference
  • Object-orientated writing
    • High art form of tech writing
    • Content as code
    • Only works when compiled
    • Favoured by tech writers and translators. Tools up to $2000 per seat
  • Guerilla Writing
    • Stack Overflow
    • Wikis
    • YouTube
    • frustrated non-writers trying to help peers
  • Search-first writing
    • Every page is page one
    • Search-index driven

Writing Words

  • 5 W’s of journalism.
  • Documentation needs to be tested
  • Audiences
    • eg Users, future-self, Sysadmins, experts, End users, installers
  • Writing Basics
    • Sentences short
    • Graphics for concepts
    • Avoid screencaps (too easily outdated)
    • Use style guides and linters
    • Accessibility is a real thing
  • Words with pictures
    • Never include settings only in an image ( “set your screen to look like this” is bad)
    • Use images for concepts not instructions
  • Not all your users are readers
    • Can’t see well
    • Can’t parse easily
    • Some have terrible equipment
    • Some of the “some people” is us
    • Accessibility is not a checklist, although that helps, it is us
  • Using templates to write
    • Organising your thoughts and avoid forgetting parts
    • Add a standard look at low mental cost
  • Search-first writing – page one
    • If you didn’t answer the question or point to the answer you failed
    • answer “How do I?”
  • Indexing and search
    • All the words present are indexed
    • No false pointers
    • Use words people use and search for, Don’t use just your internal names for things
  • Semantic tagging and reuse
    • Semantic text splits form and content
    • Semantic tagging allows reuse
    • Reuse saves duplication
    • Reuse requires compiling
  • Sorting topics into buckets
    • Even with search you need some organisation
    • Group items by how they get used, not by how they get programmed
    • Grouping similar items allows serendipity
  • Links, menus and flow
    • give people a next step
    • Provide related info on same page
    • show location
    • offer a chance to see the document structure

Distributing Words

  • Static Sites
  • Hosted Sites
  • Baked into the product
    • Only available to customers
    • only updates with the product
    • Hard to encourage average user to input
  • Knowledge based / CMS
    • Useful to a community that knows what it wants
    • Prone to aging and rot
    • Sometimes diverges from published docs or company message
  • Professional Writing Tools
    • Shiny and powerful
    • Learning Cliff
    • IDE
    • Super features
    • Not going to happen again
  • Paper-ish things
    • Essential for some topics
    • Reassuring to many people
    • touch is a sense we can bond with
    • Need to understand if people using docs will be online or offline when they want them.
  • Using templates to publish
    • Unified look and feel
    • Consistency and not missing things
    • Built-in checklist

Collaborating on Words

  • One weird trick, write it up as your best guess and let them correct it
  • Have a hack day
    • Set a goal of things to delete
    • Set a goal of things to fix
    • Keep track of debt you can’t handle today
    • team-building doesn’t have to be about activities

Deleting Words

  • What needs to go
    • Old stuff that is wrong and terrible
    • Wrong stuff that hides right stuff
  • What to delete
    • Anything wrong
    • Anything dangerous
    • Anything not used or updated in a year
  • How
    • Delete temporarily (put aside for a while)
    • Based on analytics
    • Ruthlessly
    • Delete or update

Documentation Must be

  • True
  • Timely
  • Testable
  • Tuned

Documentation Components

  • Who is reading and why
    • Assuming no one likes reading docs
    • What is driving them to be here
  • Pre Requisites
    • What does a user need to succeed
    • Can I change the product to reduce documentation
    • Is there any hazard in this process
  • How do I do this task
    • Steps
    • Results
    • Next steps
  • Test – How do I know that it worked
    • If you can’t test it, it is not a procedure
    • What will the system do, how does the state change
  • Reference
    • What other stuff affects this
    • What are the optional settings
    • What are the related things
  • Code and code samples
    • Best: code you can modify and run in the docs
    • 2nd Best: Code you can copy easily
    • Worst: retyping code
  • Option
    • Why did we build it this way
    • What else might you want to know
    • Have other people done this
    • Lifecycle

Documentation Types

  • Instructions
  • Ideas (architecture, problem space, discarded options, process)
  • Action required (release notes, updates, deprecation)
  • Historical (roads maps, projects plans, retrospective documents)
  • Invisible docs (user experience, microinteractions, error messages)
    • Error messages – unique ID, what caused it, what mitigation; optional: link to report (sketch below)
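
The error-message advice is concrete enough to sketch. Below is a minimal Python example (all names hypothetical) of an exception whose message carries a unique ID, the cause, a mitigation, and an optional report link:

    class DocumentedError(Exception):
        """Exception whose message carries ID, cause, mitigation and report link."""
        def __init__(self, code, cause, mitigation, report_url=None):
            parts = [f"[{code}] {cause}", f"Try: {mitigation}"]
            if report_url:
                parts.append(f"Report: {report_url}")
            super().__init__(" | ".join(parts))

    try:
        raise DocumentedError(
            code="CFG-042",                                 # unique, searchable ID
            cause="config file not found at /etc/app.conf",
            mitigation="create the file or pass --config",
            report_url="https://bugs.example.com/new",      # optional report link
        )
    except DocumentedError as err:
        print(err)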

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Keynote – Jess Frazelle

Keynote: Containers aka crazy user space fun

  • Work at Microsoft on Open Source and containers, specifically on kubernetes
  • Containers vs Zones vs Jails vs VMs
  • Containers are not a first class concept in the kernel.
    • Namespaces
    • Cgroups
    • AppArmor in LSM (prevent mounting, writing to /proc etc.) (or SELinux)
    • Seccomp (syscall filters: which calls are allowed or denied) – prevents ~150 syscalls which are uncommon or dangerous (see the sketch after this list)
      • Got list from testing all of dockerhub
      • eg CLONE, UNSHARE
      • NoNewPrivs (exposed as “allowPrivilegeEscalation” in K8s)
      • rkt and systemd-nspawn don’t 100% follow
  • Intel Clear containers are really VMs
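
To make the seccomp point concrete, here is a minimal Python sketch using the third-party Docker SDK (assuming its documented security_opt option accepts an inline profile, which mirrors what the docker CLI does when given a profile file). Docker’s real default profile is an allowlist of several hundred syscalls; this sketch just shows the mechanism by denying a few of the risky calls mentioned above.

    import json
    import docker  # third-party Docker SDK for Python (pip install docker)

    # Illustrative profile only: allow by default, deny a handful of
    # risky syscalls. Docker's actual default works the other way round,
    # as an allowlist.
    profile = {
        "defaultAction": "SCMP_ACT_ALLOW",
        "syscalls": [
            {"names": ["unshare", "keyctl", "add_key"],
             "action": "SCMP_ACT_ERRNO"},
        ],
    }

    client = docker.from_env()
    output = client.containers.run(
        "alpine:latest", ["echo", "hello"],
        security_opt=["seccomp=" + json.dumps(profile)],
        remove=True,
    )
    print(output)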

History of Containers

  • OpenVZ – released 2005
  • Linux-Vserver (2008)
  • LXC ( 2008)
  • Docker ( 2013)
    • Initially used LXC as a backend
    • Switched to libcontainer in v0.7
  • lmctfy (2013)
    • By Google
  • rkt (2014)
  • runc (2015)
    • Part of Open container Initiative
  • Container runtimes are like the new Javascript frameworks

Are Containers Secure

  • Yes
  • and I can prove it
  • VMs / Zones and Jails are like Lego pieces that are already glued together
  • With containers you have the parts separate
    • You can turn on and off certain namespaces
    • You can share namespaces between containers
    • Every container in k8s shares PID and NET namespaces
    • Docker has sane defaults
    • You can sandbox apps even further though
  • https://contained.af/
    • No one has managed to break out of the container
    • Has a very strict seccomp profile applied
    • You’d be better off attacking the app, but you are still up against the container’s default seccomp filters

Containerizing the Desktop

  • Switched to runc from docker (had to convert stuff)
  • rootless containers
  • Runc hook “netns” to do networking
  • Sandboxed desktop apps, running in containers
  • Switch from Debian to CoreOS Container Linux as base OS
    • Verify the integrity of the OS
    • Just had to add graphics drivers
    • Based on gentoo, emerge all the way down

What if we applied the same defaults to programming languages?

  • Generate seccomp filters at build-time
    • Previously tried at run time, doesn’t work that well, something always missed
    • At build time we can ensure all code is included in the filter
    • The go compiler writes the assembly for all the syscalls, you can hijack and grab the list of these, create a seccomp filter
    • Not quite that simple
      • plugins
      • exec external stuff
      • can directly exec a syscall in go code, the name passed in via arguments at runtime
  • metaparticle.io
    • Library for cloud-native applications

Linux Containers in secure enclaves (SCONE)

  • Currently Slow
  • Lots of tradeoffs over what executes where (trusted area or untrusted area)

Soft multi-tenancy

  • Reduced threat model, users not actively malicious
  • Hard Multi-tenancy would have potentially malicious containers running next to others
  • Host OS – eg CoreOs
  • Container Runtime – Look at glasshouse VMs
  • Network – Lots to do, default deny in k8s is a good start
  • DNS – Needs to be namespaced properly or turned off. option: kube-dns as a sidecar
  • Authentication and Authorisation – rbac
  • Isolation of master and System nodes from nodes running containers
  • Restricting access to host resources (k8s hostpath for volumes, pod security policy)
  • making sure everything else is “very dumb” about its surroundings

 


,

Sociological ImagesChildren Learn Rules for Romance in Preschool

Originally Posted at TSP Discoveries

Photo by oddharmonic, Flickr CC

In the United States we tend to think children develop sexuality in adolescence, but new research by Heidi Gansen shows that children learn the rules and beliefs associated with romantic relationships and sexuality much earlier.

Gansen spent over 400 hours in nine different classrooms in three Michigan preschools. She observed behavior from teachers and students during daytime classroom hours and concluded that children learn — via teachers’ practices — that heterosexual relationships are normal and that boys and girls have very different roles to play in them.

In some classrooms, teachers actively encouraged “crushes” and kissing between boys and girls. Teachers assumed that any form of affection between opposite gender children was romantically-motivated and these teachers talked about the children as if they were in a romantic relationship, calling them “boyfriend/girlfriend.” On the other hand, the same teachers interpreted affection between children of the same gender as friendly, but not romantic. Children reproduced these beliefs when they played “house” in these classrooms. Rarely did children ever suggest that girls played the role of “dad” or boys played the role of “mom.” If they did, other children would propose a character they deemed more gender-appropriate like a sibling or a cousin.

Preschoolers also learned that boys have power over girls’ bodies in the classroom. In one case, teachers witnessed a boy kiss a girl on the cheek without permission. While teachers in some schools enforced what the author calls “kissing consent” rules, the teachers in this school interpreted the kiss as “sweet” and as the result of a harmless crush. Teachers also did not police boys’ sexual behaviors as actively as girls’ behaviors. For instance, when girls pulled their pants down teachers disciplined them, while teachers often ignored the same behavior from boys. Thus, children learned that rules for romance also differ by gender.

Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.

(View original at https://thesocietypages.org/socimages)

CryptogramWhatsApp Vulnerability

A new vulnerability in WhatsApp has been discovered:

...the researchers unearthed far more significant gaps in WhatsApp's security: They say that anyone who controls WhatsApp's servers could effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation.

Matthew Green has a good description:

If all you want is the TL;DR, here's the headline finding: due to flaws in both Signal and WhatsApp (which I single out because I use them), it's theoretically possible for strangers to add themselves to an encrypted group chat. However, the caveat is that these attacks are extremely difficult to pull off in practice, so nobody needs to panic. But both issues are very avoidable, and tend to undermine the logic of having an end-to-end encryption protocol in the first place.

Here's the research paper.

Worse Than FailureThe More Things Change: Fortran Edition

Technology improves over time. Storage capacity increases. Spinning platters are replaced with memory chips. CPUs and memory get faster. Moore's Law. Compilers and languages get better. More language features become available. But do these changes actually improve things? Fifty years ago, meteorologists used the best mainframes of the time, and got the weather wrong more than they got it right. Today, they have a global network of satellites and supercomputers, yet they're wrong more than they're right (we just had a snowstorm in NJ that was forecast as 2-4", but got 16" before drifting).

As with most other languages, FORTRAN also added structure, better flow control and so forth. The problem with languages undergoing such a massive improvement is that occasionally, coding styles live for a very long time.

Imagine a programmer who learned to code using FORTRAN IV (variable names up to 6 characters, integers implicitly start with "I" through "N" and reals start with any other letter - unless explicitly declared, flow control via GOTO, etc) writing a program in 2000 (using a then-current compiler but with FORTRAN IV style). Now imagine some PhD candidate coming along in 2017 to maintain and enhance this FORTRAN IV-style code with the latest FORTRAN compiler.

A.B. was working at a university with just such a scientific software project as part of earning a PhD. These are just a couple of the things that caused a few head-desk moments.

Include statements. The first variant only allows code to be included. The second allows preprocessor directives (like #define).

    INCLUDE  'path/file'

    #include 'path/file'

Variables. Since the only data types/structures originally available were character, logical, integer, real*4, real*8 and arrays, you had to shoehorn your data into the closest fit. This led to declarations sections that included hundreds of basic declarations. This hasn't improved today as people still use one data type to hold something that really should be implemented as something else. Also, while the compilers of today support encapsulation/modules, back then, everything was pretty much global.

Data structures. The only thing supported back then was multidimensional arrays. If you needed something like a map, you needed to roll your own. This looks odd to someone who cut their teeth on a version of the language where these features are built-in.

Inlining. FORTRAN subroutines support local subroutines and functions which are inlined, which is useful to provide implied visibility scoping. Prudent use allows you to DRY your code. This feature isn't even used, so the same code is duplicated over and over again inline. Any of you folks see that pattern in your new-fangled modern systems?

Joel Spolsky commented about the value of keeping old code around. While there is much truth in his words, the main problem is that the original programmers invariably move on, and as he points out, it is much harder to read (someone else's) code than to write your own; maintenance of ancient code is a real world issue. When code lives across too many language version improvements, it becomes inherently more difficult to maintain as its original form becomes more obsolete.

To give you an idea, take a look at the just the declaration section of one module that A.B. inherited (except for a 2 line comment at the top of the file, there were no comments). FWIW, when I did FORTRAN at the start of my career, I used to document the meaning of every. single. abbreviated. variable. name.

      subroutine thesub(xfac,casign,
     &     update,xret,typret,
     &     meop1,meop2,meop12,meoptp,
     &     traop1, traop2, tra12,
     &     iblk1,iblk2,iblk12,iblktp,
     &     idoff1,idoff2,idof12,
     &     cntinf,reoinf,
     &     strinf,mapinf,orbinf)
      implicit none
      include 'routes.h'
      include 'contr_times.h'
      include 'opdim.h'
      include 'stdunit.h'
      include 'ioparam.h'
      include 'multd2h.h'
      include 'def_operator.h'
      include 'def_me_list.h'
      include 'def_orbinf.h'
      include 'def_graph.h'
      include 'def_strinf.h'
      include 'def_filinf.h'
      include 'def_strmapinf.h'
      include 'def_reorder_info.h'
      include 'def_contraction_info.h'
      include 'ifc_memman.h'
      include 'ifc_operators.h'
      include 'hpvxseq.h'

      integer, parameter ::
     &     ntest = 000
      logical, intent(in) ::
     &     update
      real(8), intent(in) ::
     &     xfac, casign
      real(8), intent(inout), target ::
     &     xret(1)
      type(coninf), target ::
     &     cntinf
      integer, intent(in) ::
     &     typret,
     &     iblk1, iblk2, iblk12, iblktp,
     &     idoff1,idoff2,idof12
      logical, intent(in) ::
     &     traop1, traop2, tra12
      type(me_list), intent(in) ::
     &     meop1, meop2, meop12, meoptp
      type(strinf), intent(in) ::
     &     strinf
      type(mapinf), intent(inout) ::
     &     mapinf
      type(orbinf), intent(in) ::
     &     orbinf
      type(reoinf), intent(in), target ::
     &     reoinf

      logical ::
     &     bufop1, bufop2, buf12, 
     &     first1, first2, first3, first4, first5,
     &     msfix1, msfix2, msfx12, msfxtp,
     &     reject, fixsc1, fixsc2, fxsc12,
     &     reo12, non1, useher,
     &     traop1, traop2
      integer ::
     &     mstop1,mstop2,mst12,
     &     igmtp1,igmtp2,igmt12,
     &     nc_op1, na_op1, nc_op2, na_op2,
     &     nc_ex1, na_ex1, nc_ex2, na_ex2, 
     &     ncop12, naop12,
     &     nc12tp, na12tp,
     &     nc_cnt, na_cnt, idxres,
     &     nsym, isym, ifree, lenscr, lenblk, lenbuf,
     &     buftp1, buftp1, bftp12,
     &     idxst1, idxst2, idxt12,
     &     ioff1, ioff2, ioff12,
     &     idxop1, idxop2, idop12,
     &     lenop1, lenop2, len12,
     &     idxm12, ig12ls,
     &     mscmxa, mscmxc, msc_ac, msc_a, msc_c,
     &     msex1a, msex1c, msex2a, msex2c,
     &     igmcac, igamca, igamcc,
     &     igx1a, igx1c, igx2a, igx2c,
     &     idxms, idxdis, lenmap, lbuf12, lb12tp,
     &     idxd12, idx, ii, maxidx
      integer ::
     &     ncblk1, nablk1, ncbka1, ncbkx1, 
     &     ncblk2, nablk2, ncbka2, ncbkx2, 
     &     ncbk12, nabk12, ncb12t, nab12t, 
     &     ncblkc, nablkc,
     &     ncbk12, nabk12,
     &     ncro12, naro12,
     &     iblkof
      type(filinf), pointer ::
     &     ffop1,ffop2,ff12
      type(operator), pointer ::
     &     op1, op2, op1op2, op12tp
      integer, pointer ::
     &     cinf1c(:,:),cinf1a(:,:),
     &     cinf2c(:,:),cinf2a(:,:),
     &     cif12c(:,:),
     &     cif12a(:,:),
     &     cf12tc(:,:),
     &     cf12ta(:,:),
     &     cfx1c(:,:),cfx1a(:,:),
     &     cfx2c(:,:),cfx2a(:,:),
     &     cfcntc(:,:),cfcnta(:,:),
     &     inf1c(:),
     &     inf1a(:),
     &     inf2c(:),
     &     inf2a(:),
     &     inf12c(:),
     &     inf12a(:),
     &     dmap1c(:),dmap1a(:),
     &     dmap2c(:),dmap2a(:),
     &     dm12tc(:),dm12ta(:)

      real(8) ::
     &     xnrm, facscl, fcscl0, facab, xretls
      real(8) ::
     &     cpu, sys, cpu0, sys0, cpu00, sys00
      real(8), pointer ::
     &     xop1(:), xop2(:), xop12(:), xscr(:)
      real(8), pointer ::
     &     xbf1(:), xbf2(:), xbf12(:), xbf12(:), x12blk(:)

      integer ::
     &     msbnd(2,3), igabnd(2,3),
     &     ms12ia(3), ms12ic(3), ig12ia(3), ig12ic(3),
     &     ig12rw(3)

      integer, pointer ::
     &     gm1dc(:), gm1da(:),
     &     gm2dc(:), gm2da(:),
     &     gmx1dc(:), gmx1da(:),
     &     gmx2dc(:), gmx2da(:),
     &     gmcdsc (:), gmcdsa (:),
     &     gmidsc (:), gmidsa (:),
     &     ms1dsc(:), ms1dsa(:),
     &     ms2dsc(:), ms2dsa(:),
     &     msx1dc(:), msx1da(:),
     &     msx2dc(:), msx2da(:),
     &     mscdsc (:), mscdsa (:),
     &     msidsc (:), msidsa (:),
     &     idm1ds(:), idxm1a(:),
     &     idm2ds(:), idxm2a(:),
     &     idx1ds(:), ixms1a(:),
     &     idx2ds(:), ixms2d(:),
     &     idxdc (:), idxdsa (:),
     &     idxmdc (:),idxmda (:),
     &     lstrx1(:),lstrx2(:),lstcnt(:),
     &     lstr1(:), lstr2(:), lst12t(:)

      integer, pointer ::
     &     mex12a(:), m12c(:),
     &     mex1ca(:), mx1cc(:),
     &     mex2ca(:), mx2cc(:)

      integer, pointer ::
     &     ndis1(:,:), dgms1(:,:,:), gms1(:,:),
     &     lngms1(:,:),
     &     ndis2(:,:), dgms2(:,:,:), gms2(:,:),
     &     lngms2(:,:),
     &     nds12t(:,:), dgms12(:,:,:),
     &     gms12(:,:),
     &     lgms12(:,:),
     &     lg12tp(:,:), lm12tp(:,:,:)

      integer, pointer ::
     &     cir12c(:,:), cir12a(:,:),
     &     ci12c(:,:),  ci12a(:,:),
     &     mire1c(:),   mire1a(:),
     &     mire2c(:),   mire2a(:),
     &     mca12(:), didx12(:), dca12(:),
     &     mca1(:),  didx1(:),  dca1(:),
     &     mca2(:),  didx2(:),  dca2(:),
     &     lenstr_array(:,:,:)

c dbg
      integer, pointer ::
     &     dum1c(:), dum1a(:), hpvx1c(:), hpvx1a(:),
     &     dum2c(:), dum2a(:), hpvx2c(:), hpvx2a(:)
      integer ::
     &     msd1(ngastp,2,meop1%op%njoined),
     &     msd2(ngastp,2,meop2%op%njoined),
     &     jdx, totc, tota
c dbg

      type(graph), pointer ::
     &     graphs(:)

      integer, external ::
     &     ielsum, ielprd, imdst2, glnmp, idxlst,
     &     mxdsbk, m2ims4
      logical, external ::
     &     nxtdis, nxtds2, lstcmp,
     &     nndibk, nndids
      real(8), external ::
     &     ddot

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 3

Insights – solving every problem for good Paul Wayper

Sysadmins

  • Too much to check, too little time
  • What does this message mean again?
  • Too reactive

How Sysadmins fix problems

  • Read text files and command output
  • Look at them for information
  • Check this information against their knowledge
  • Decide on an appropriate solution

Insights

  • Reads text files and outputs
  • Processes them into information
  • Uses the information in rules
  • Rules provide information about the solution

Examples

  • Simple rule – check “localhost” is in /etc/hosts (see the sketch after this list)
  • Rule 2 – chronyd refuses to fix the server’s time since it is out by more than 1000s
    • Checks /var/log/messages for the error message from chrony
  • Insights rolls up all the checks against messages, so each log file is only scanned once
  • Rule 3 – rsyslog dropping messages
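
As a concrete illustration of the pattern (and of the first rule above), here is a minimal Python sketch. It is not the actual insights-core API, just the shape of it: parse a raw file into information, then run rules over that information.

    def parse_hosts(path="/etc/hosts"):
        """Parse /etc/hosts into (ip, [names]) entries."""
        entries = []
        with open(path) as fh:
            for line in fh:
                line = line.split("#", 1)[0].strip()  # drop comments
                if line:
                    ip, *names = line.split()
                    entries.append((ip, names))
        return entries

    def rule_localhost(hosts):
        """Rule: 'localhost' must appear in /etc/hosts."""
        if any("localhost" in names for _, names in hosts):
            return None
        return "localhost missing from /etc/hosts; add '127.0.0.1 localhost'"

    # Run all rules and print only the findings that need a fix.
    for finding in filter(None, [rule_localhost(parse_hosts())]):
        print(finding)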

Website

http://red.ht/demo_rules

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 2

Personalisation at Scale: A “Cookie Cutter” Approach Jim O’Halloran

  • Impact of site performance on conversion is huge
  • Magento
    • LAMP stack + Redis or memcached
    • Generally the app is CPU bound
    • Routing / rendering still time consuming
  • Varnish full page caching (FPC)
  • But what about personalised content?
  • Edge Side Includes (ESIs)
    • But ESIs run in series, so it’s slow when you have many
    • Content is not cacheable, expensive to calculate, significant render time
    • ESI therefore undermines much of the advantage of FPC
  • Ajax
    • Make ajax request and fetch personalised content
    • Still load on backend
    • ESI limitations plus added network latency
  • Cookie Cutter
    • When an event occurs that modifies personalisation state, send a cookie containing the required data with the response.
    • In the browser, use the content of that cookie to update the page (a sketch follows the example below)

Example

  • Goto www.example.com
    • Probably cached in Varnish
    • I don’t have a cookie
    • If I log in, that’s an uncacheable request: I am changing login state
    • Response includes a Set-Cookie header creating a personalisation cookie
  • Advantages
    • No backend requests
    • Page data served is cached always
  • How big can cookies be?
    • RFC 6265 sets minimum capabilities, but in reality:
    • Actual limit ~4096 bytes per cookie
    • Some older browsers also limit to ~4096 bytes total per domain
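
Here is a minimal sketch of the server side, assuming a Flask backend (names hypothetical; the Magento plugin differs in detail). The login response carries a small personalisation cookie; cached pages are then served unchanged from Varnish, and client-side JavaScript reads the cookie to patch the personalised widgets.

    import json
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login", methods=["POST"])
    def login():
        # The login request itself bypasses the cache. Alongside the normal
        # response we set a small cookie holding just the personalisation
        # state (keep it well under the ~4096-byte per-cookie limit).
        state = {"loggedIn": True, "firstName": "Jane", "cartCount": 2}
        resp = make_response("ok")
        resp.set_cookie("personalisation", json.dumps(state))
        return resp

    # Cached pages never vary per user; JavaScript in the browser reads
    # document.cookie and updates the header/cart widgets on load.

    if __name__ == "__main__":
        app.run()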

Potential issues

  • Request Size
    • Keep cookies small
      • Store small values only: no pre-rendered markup, no large data structures
    • Serve static assets via CDN
    • Lots of stuff in the cart can make it huge
  • Information leakage
    • Final URLs leaked to logged-out users
  • Large Scale changes
    • Page needs to look completely different to different users
    • Vary headers might be an option
  • Formkeys
    • XSRF protection workarounds
  • What about cache misses
    • Magento assembles all its pages from a series of blocks
    • Most parts of a page are relatively static (block cache)
    • Aligent_CacheObserver – Magento extension that adds cache tags to blocks that should be cached but were not picked up as cacheable by default
    • Aoe_TemplateHints – visibility into the block cache
    • Caching != performance optimisation – Aoe_Profiler

Availability

  • Plugin available for Magento 1
    • Varnish CookieCutter
  • Magento 2 has native Varnish support
    • But it has limitations
    • Maybe some of the CookieCutter approach could improve it

Future

  • localStorage instead of cookies


 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 1

Panel: Meltdown, Spectre, and the free-software community Jonathan Corbet, Andrew ‘bunnie’ Huang, Benno Rice, Jess Frazelle, Katie McLaughlin, Kees Cook

  • FreeBSD only heard 11 days beforehand. Would have liked more notice
  • Got people involved from the Kernel Summit in Oct
  • Hosting company only heard once it went official, been busy patching since
  • Likely to be class-action lawsuit for $billions. That might make chip makers more paranoid about documentation and disclosure.
  • Thoughts in embargo
    • People noticed strange patches going in beforehand.
    • Only broke 6 days early, had been going for 6 months
    • “Linus is happy with this, something is terribly wrong”
    • Sad that the 2nd-tier cloud providers didn’t know. Exclusive club and lines as to who got informed were not clear
    • Projects that don’t have explicit relationship with Intel didn’t get informed
  • Thoughts on other vendors
    • This class of bugs could affect anybody, open hardware would probably not fix
    • More open hardware could enable people to review the processors and find these from the design rather than poking around
    • Hard to guarantee the shipped hardware matches the design
    • Software people can build everything at home and check. FABs don’t work at home.
  • Speculative execution was warned about years ago; the danger was ignored. How do we make sure the next one isn’t ignored?
    • We always have to do some risky stuff
    • The research on this built up slowly over the years
    • Even if you have only found impractical attacks against something doesn’t mean the practical one doesn’t exist.
  • What criteria do we use to decide who is in?
    • Mechanisms do exist, they were mainly not used. Perhaps because they were for software vulnerabilities
  • Did people move providers?
    • No but Containers made things easier to reboot stuff and shuffle
  • Are there similar vulnerabilities ( similar or general hardware ) coming along?
    • The Kernel page-table patches were fairly general, should cover many similar ones
    • All these performance optimising bit of your CPU are now attack surfaces
    • What are people going to do if this slows down hardware too much?
  • How do we explain problems like these to politicians etc
    • Legos
    • We still have kernel devs getting their laptops
  • Can be use CPUs that don’t have speculative execution?
    • Not really. Back to 486s
  • Who are we protecting against with the embargo?
    • Everybody
    • The longer period let better fixes get in
    • The meltdown fix could be done in semi-public so had better quality

What is the most common street name in Australia? Rachel Bunder

  • Why?
    • Saw a map of the most common street name in each US state
  • Just looking at the name, not the end bit (“park”, “road”)
  • Data
    • PSMA Geocoded national address file – Great but came out after project
    • Use Open Street Maps
  • Started with Common Name in Sydney
    • Used Metro Extracts – site closing down soon
    • Format is geojson
    • Road files separately provided
  • Procedure
    • Used Python; R also has good features and libraries (see the sketch after these notes)
    • geopandas
    • Had some paths with no names
    • What is a road? – “Something with a name I can drive a car on”
  • Sydney
    • Full street name
      • Victoria Road
      • Pacific Highway
      • oops, looks like names are being counted twice
    • Tried merging them together
    • Roads don’t 100% match at the ends. Added a function to fuzzy-merge roads that are within 100m of each other
    • Still some weird ones but probably won’t affect top
    • Second attempt
      • Short st, George st, William st, John st, Church st
  • Now with just the “name bit”
    • Tried taking out just the last word; ended up with “The” as most common.
    • Started with “The” = whole name
    • Single word = whole name
    • name – descriptor – suffix
    • lots of weird names
    • name list – Park, Victoria, Railway, William, Short
  • Wouldn’t work in many other countries
  • Now for all over Australia
    • overpass data
    • Downloaded in 50km x 50km squares
  • Lessons
    • Start small
    • Choose something familiar
    • Check your bias (different naming conventions)
    • Constant vigilance
    • Know your problem
  • Common plant names
    • Wattle – 15th – 385
  • Other name
    • “The Esplanade” more common than “The Avenue”
  • Top names
    • 5th – Victoria
    • 4th – Church – 497
    • 3rd – George –  551
    • 2nd – Railway
    • 1st – Park – 693
  • By State
    • WA – Forest
    • SA – Railway
    • Vic – Park
    • Tas – Esplanade
    • NT – Smith/Stuart
    • NSW – Park
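
For reference, the counting step itself is short. Below is a minimal geopandas sketch (file name hypothetical, assuming a roads layer exported from OpenStreetMap as GeoJSON with a "name" property):

    import geopandas as gpd

    # Load the roads layer; path is hypothetical.
    roads = gpd.read_file("sydney_roads.geojson")
    named = roads.dropna(subset=["name"])

    # Naive count: a road split into segments is counted once per segment,
    # which is why the talk fuzzy-merged nearby same-named roads first.
    print(named["name"].value_counts().head(10))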

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Keynote – Hugh Blemings

Wandering through the Commons

Reflections on Free and Open Source Software/Hardware in Australia, New Zealand and beyond

  • Past Linux.conf.au’s reviewed
  • FOSS in Aus and NZ
    • Above average per capita contribution
  • List of Aus / NZ people and their contributions
    • John Lions – the Lions book on Unix
    • Pia Andrews/Waugh/Smith – Open Government, GovHack, Linux Australia, Open Data
    • Vik Oliver – 3D Printing
    • Clare Curran – Open Government in NZ
    • plus a bunch of others

Working in Free Software and Open Hardware

  • The basics
    • Be visible in projects of relevance
      • You will be typed into Google, looked at in GitHub
    • Be yourself
      • But be business Friendly
    • Linkedin is a thing, really
    • Need an accurate basic presence
  • Finding a new job
    • Networks
    • Local user groups
    • Conferences
    • The projects you work on
  • Application and negotiation
    • Be professional, courteous
    • Do homework about company and culture
    • Talk to people that work there
    • Spend time on interview prep
      • Know your stuff, if you don’t know, say so
    • Think about Salary expectations and stick to them
      • Val Aurora’s page on this is excellent
    • Ask to keep copyright on your code
      • Should be a no-brainer for a FOSS/OH company
  • In the Job
    • Takes time to get into groove, don’t sweat it
    • Get out every now and then, particularly if working from home
    • Work/life balance
    • Know when to jump
      • Poisonous workplaces
    • An aside to People’s managers
      • Bring your best or don’t be a people manager
      • Take your reports’ welfare seriously

Looking after You

  • Ours is in the main a sedentary and solitary pursuit
    • exercise
  • Sitting and standing in front of a desk all day is bad
    • take breaks
  • Depression is a real thing
  • Eat more vegetables
  • Find friends/colleagues to exercise with

Working in FOSS/OH – Staying Current

  • Look over a colleagues shoulder
  • Do something that is not part of your regular job
    • low level programming
    • Larger systems, OpenStack
  • Stay up to date with security blogs and the like
    • Many of the attack vectors have generic relevance
  • Take the lid off, tinker with hardware
    • Lots of videos online to help or just watch

Make Hay while the Sun Shines

  • Save some money for rainy day
  • Keep networks Open
  • Even when you have a job

You’re fired … Now What? – In a moment

  • Don’t panic
    • Going out in a twitter storm won’t help anyone
  • It’s not personal
    • It is the position that is no longer needed, not you
  • If you think it is an unfair dismissal, seek legal advice before signing anything
  • It is normal to feel rubbish
  • Beware of imposter syndrome
  • Try to keep 2-3 opportunities in the pipeline
  • Don’t assume people will remember you
    • It’s not personal, everyone gets busy
    • It’s okay to (politely naturally) follow up periodically
  • Keep search a little narrow for the first week or two
    • Then expand widely
  • Balance: taking “something/anything” may be better than waiting for your dream job

Dream Job

  • Power 9 CPU
    • 14nm process
    • 4GHz, 24 cores
    • 25km of wires
    • 8 billion transistors
    • 3900 official chip pins
    • ~19,000 connections from die to the pins

Conclusions

  • Part of a vibrant FOSS/OH community both here and abroad
  • We have accomplished much
  • The most exciting (in both senses) things lie before us
  • We need all of you to be part at every level of the stack
  • Look forward to working with you…


,

Krebs on SecurityChronicle: A Meteor Aimed At Planet Threat Intel?

Alphabet Inc., the parent company of Google, said today it is in the process of rolling out a new service designed to help companies more quickly make sense of and act on the mountains of threat data produced each day by cybersecurity tools.

Countless organizations rely on a hodgepodge of security software, hardware and services to find and detect cybersecurity intrusions before an incursion by malicious software or hackers has the chance to metastasize into a full-blown data breach.

The problem is that the sheer volume of data produced by these tools is staggering and increasing each day, meaning already-stretched IT staff often miss key signs of an intrusion until it’s too late.

Enter “Chronicle,” a nascent platform that graduated from the tech giant’s “X” division, which is a separate entity tasked with tackling hard-to-solve problems with an eye toward leveraging the company’s core strengths: Massive data analytics and storage capabilities, machine learning and custom search capabilities.

“We want to 10x the speed and impact of security teams’ work by making it much easier, faster and more cost-effective for them to capture and analyze security signals that have previously been too difficult and expensive to find,” wrote Stephen Gillett, CEO of the new venture.

Few details have been released yet about how exactly Chronicle will work, although the company did say it would draw in part on data from VirusTotal, a free service acquired by Google in 2012 that allows users to scan suspicious files against dozens of commercial antivirus tools simultaneously.

Gillett said his division is already trialing the service with several Fortune 500 firms to test the preview release of Chronicle, but the company declined to name any of those participating.

ANALYSIS

It’s not terribly clear from Gillett’s post or another blog post from Alphabet’s X division by Astro Teller how exactly Chronicle will differentiate itself in such a crowded market for cybersecurity offerings. But it’s worth considering the impact that VirusTotal has had over the years.

Currently, VirusTotal handles approximately one million submissions each day. The results of each submission get shared back with the entire community of antivirus vendors who lend their tools to the service — which allows each vendor to benefit by adding malware signatures for new variants that their tools missed but that a preponderance of other tools flagged as malicious.

Naturally, cybercriminals have responded by creating their own criminal versions of VirusTotal: So-called “no distribute” scanners. These services cater to malware authors, and use the same stable of antivirus tools, except they prevent these tools from phoning home to the antivirus companies about new, unknown variants.

On balance, it’s difficult to know whether the benefit that antivirus companies — and by extension their customers — gain by partnering with VirusTotal outweighs the mayhem enabled by these no-distribute scanners. But it seems clear that VirusTotal has helped antivirus companies and their customers do a better job focusing on threats that really matter, as opposed to chasing after (or cleaning up after) so-called “false positives” — benign files that erroneously get flagged as malicious.

And this is precisely the signal-to-noise challenge created by the proliferation of security tools used in a typical organization today: How to spend more of your scarce cybersecurity workforce, budget and time identifying and stopping the threats that matter and less time sifting through noisy but otherwise time-wasting alerts triggered by non-threats.

I’m not a big listener of podcasts, but I do find myself increasingly making time to listen to Risky Business, a podcast produced by Australian cybersecurity journalist Patrick Gray. Responding to today’s announcement on Chronicle, Gray said he likewise had few details about it but was looking forward to learning more.

“Google has so much data and so many amazing internal resources that my gut reaction is to think this new company could be a meteor aimed at planet Threat Intel™️,” Gray quipped on Twitter, referring to the burgeoning industry of companies competing to help companies trying to identify new threats and attack trends. “Imagine if other companies spin out their tools…Netflix, Amazon, Facebook etc. That could be a fundamentally reshaped industry.”

Well said. I also look forward to hearing more about how Chronicle works and, more importantly, if it works.

Full disclosure: Since September 2016, KrebsOnSecurity has received protection against massive online attacks from Project Shield, a free anti-distributed denial-of-service (DDoS) offering provided by Jigsaw — another subsidiary of Google’s parent company. Project Shield provides DDoS protection for news, human rights, and elections monitoring Web sites.

Krebs on SecurityExpert: IoT Botnets the Work of a ‘Vast Minority’

In December 2017, the U.S. Department of Justice announced indictments and guilty pleas by three men in the United States responsible for creating and using Mirai, a malware strain that enslaves poorly-secured “Internet of Things” or IoT devices like security cameras and digital video recorders for use in large-scale cyberattacks.

The FBI and the DOJ had help in their investigation from many security experts, but this post focuses on one expert whose research into the Dark Web and its various malefactors was especially useful in that case. Allison Nixon is director of security research at Flashpoint, a cyber intelligence firm based in New York City. Nixon spoke with KrebsOnSecurity at length about her perspectives on IoT security and the vital role of law enforcement in this fight.

Brian Krebs (BK): Where are we today with respect to IoT security? Are we better off than were a year ago, or is the problem only worse?

Allison Nixon (AN): In some aspects we’re better off. The arrests that happened over the last year in the DDoS space, I would call that a good start, but we’re not out of the woods yet and we’re nowhere near the end of anything.

BK: Why not?

AN: Ultimately, what’s going on with these IoT botnets is crime. People are talking about these cybersecurity problems — problems with the devices, etc. — but at the end of the day it’s crime and private citizens don’t have the power to make these bad actors stop.

BK: Certainly security professionals like yourself and others can be diligent about tracking the worst actors and the crime machines they’re using, and in reporting those systems when it’s advantageous to do so?

AN: That’s a fair argument. I can send abuse complaints to servers being used maliciously. And people can write articles that name individuals. However, it’s still a limited kind of impact. I’ve seen people get named in public and instead of stopping, what they do is improve their opsec [operational security measures] and keep doing the same thing but just sneakier. In the private sector, we can frustrate things, but we can’t actually stop them in the permanent, sanctioned way that law enforcement can. We don’t really have that kind of control.

BK: How are we not better off?

AN: I would say that as time progresses, the community that practices DDoS and malicious hacking and these pointless destructive attacks gets more technically proficient at executing attacks, and it just becomes a more difficult adversary.

BK: A more difficult adversary?

AN: Well, if you look at the individuals that were the subject of the announcement this month, and you look in their past, you can see they’ve been active in the hacking community a long time. Litespeed [the nickname used by Josiah White, one of the men who pleaded guilty to authoring Mirai] has been credited with lots of code.  He’s had years to develop and as far as I could tell he didn’t stop doing criminal activity until he got picked up by law enforcement.

BK: It seems to me that the Mirai authors probably would not have been caught had they never released the source code for their malware. They said they were doing so because multiple law enforcement agencies and security researchers were hot on their trail and they didn’t want to be the only ones holding the source code when the cops showed up at their door. But if that was really their goal in releasing it, doing so seems to have had the exact opposite effect. What’s your take on that?

AN: You are absolutely, 100 million percent correct. If they just shut everything down and left, they’d be fine now. The fact that they dumped the source was a tipping point of sorts. The damages they caused at that time were massive, but when they dumped the source code the amount of damage their actions contributed to ballooned [due to the proliferation of copycat Mirai botnets]. The charges against them specified their actions in infecting the machines they controlled, but when it comes to what interested researchers in the private sector, the moment they dumped the source code — that’s the most harmful act they did out of the entire thing.

BK: Do you believe their claimed reason for releasing the code?

AN: I believe it. They claimed they released it because they wanted to hamper investigative efforts to find them. The problem is that not only is it incorrect, it also doesn’t take into account the researchers on the other end of the spectrum who have to pick from many targets to spend their time looking at. Releasing the source code changed that dramatically. It was like catnip to researchers, and was just a new thing for researchers to look at and play with and wonder who wrote it.

If they really wanted to stay off law enforcement’s radar, they would be as low profile as they could and not be interesting. But they did everything wrong: They dumped the source code and attacked a security researcher using tools that are interesting to security researchers. That’s like attacking a dog with a steak. I’m going to wave this big juicy steak at a dog and that will teach him. They made every single mistake in the book.

BK: What do you think it is about these guys that leads them to this kind of behavior? Is it just a kind of inertia that inexorably leads them down a slippery slope if they don’t have some kind of intervention?

AN: These people go down a life path that does not lead them to a legitimate livelihood. They keep doing this and get better at it and they start to do these things that really can threaten the Internet as a whole. In the case of these DDoS botnets, it’s worrying that these individuals are allowed to go this deep before law enforcement catches them.

BK: There was a narrative that got a lot of play recently, and it was spun by a self-described Internet vigilante who calls himself “the Janitor.” He claimed to have been finding zero-day exploits in IoT devices so that he could shut down insecure IoT things that can’t really be secured before or maybe even after they have been compromised by IoT threats like Mirai. The Janitor says he released a bunch of his code because he’s tired of being the unrecognized superhero that he is, and many in the media seem to have eaten this up and taken his manifesto as gospel. What’s your take on the Janitor, and his so-called “bricker bot” project?

AN: I have to think about how to choose my words, because I don’t want to give anyone bad ideas. But one thing to keep in mind is that his method of bricking IoT devices doesn’t work, and it potentially makes the problem worse.

BK: What do you mean exactly?

AN: The reason is sometimes IoT malware like Mirai will try to close the door behind it, by crashing the telnet process that was used to infect the device [after the malware is successfully installed]. This can block other telnet-based malware from getting on the machine. And there’s a lot of this type of King of the Hill stuff going on in the IoT ecosystem right now.

But what [this bricker bot] malware does is, a lot of times, it reboots a machine, and when the device is in that state the vulnerable telnet service goes back up. It used to be that a lot of devices were infected with the very first Mirai, and when the [control center] for that botnet went down they were orphaned. We had a bunch of Mirai infections phoning home to nowhere. So there’s a real risk of taking a machine that was in this weird state and making it vulnerable again.

BK: Hrm. That’s a very different story from the one told by the Bricker bot author. According to him, he spent several years of his life saving the world from certain doom at the hands of IoT devices. He even took credit for foiling the Mirai attacks on Deutsche Telekom. Could this just be a case of researcher exaggerating his accomplishments? Do you think his Bricker bot code ever really spread that far?

AN: I don’t have any evidence that there was mass exploitation by Bricker bot. I know his code was published. But when I talk to anyone running an IoT honeypot [a collection of virtual or vulnerable IoT devices designed to attract and record novel attacks against the devices] they have never seen it. The consensus is that regardless of peoples’ opinion on it we haven’t seen it in our honeypots. And considering the diversity of IoT honeypots out there today, if it was out there in real life we would have seen it by now.

BK: A lot of people believe that we’re focusing on the wrong solutions to IoT security — that having consumers lock down IoT devices security-wise or expecting law enforcement agencies to fix this problem for us are pollyannish ideas that in any case don’t address the root cause: Which is that there are a lot of companies producing crap IoT products that have virtually no security. What’s your take?

AN: The way I approach this problem is I see law enforcement as the ultimate end goal for all of these efforts. When I look at the IoT DDoS activity and the actual human beings doing this, the vast majority of Mirai attacks, attack infrastructure, malware variants and new exploits are coming from a vast minority of people doing this. That said, the way I perceive the underground ecosystem is probably different than the way most people perceive it.

BK: What’s the popular perception, do you think?

AN: It’s that, “Oh hey, one guy got arrested, great, but another guy will just take his place.” People compare it to a drug dealer on the street corner, but I don’t think that’s accurate in this case. The difference is when you’re looking at advanced criminal hacking campaigns, there’s not usually a replacement person waiting in the wings. These are incredibly deep skills developed over years. The people doing innovations in DDoS attacks and those who are driving the field forward are actually very few. So when you can ID them and attach behavior to the perpetrator, you realize there’s only a dozen people I need to care about and the world suddenly becomes a lot smaller.

BK: So do you think the efforts to force manufacturers to harden their products are a waste of time?

AN: I want to make it clear that all these different ways to tackle the problem…I don’t want to say one is more important than the other. I just happened to be working on one component of it. There’s definitely a lot of disagreement on this. I totally recognize this as a legitimate approach. A lot of people think the way forward is to focus on making sure the devices are secure. And there are efforts ongoing to help device manufacturers create more secure devices that are more resistant to these efforts.

And a lot is changing, although slowly. Do you remember way back when you bought a Wi-Fi router and it was open by default? Because the end user was obligated to change the default password, we had open Wi-Fi networks everywhere. As years passed, many manufacturers started making them more secure. For example, many of these devices now have customers refer to a sticker on the machine that has a unique Wi-Fi password. That type of shift may be an example of what we can see in the future of IoT security.

BK: In the wake of the huge attacks from Mirai in 2016 and 2017, several lawmakers have proposed solutions. What do you think of the idea that it doesn’t matter what laws we pass in the United States that might require more security by IoT makers, that those makers are just going to keep on ignoring best practices when it comes to security?

AN: It’s easy to get cynical about this and a lot of people definitely feel like these companies don’t sell directly to the U.S. and therefore don’t care about such efforts. Maybe in the short term that might be true, but in the long term I think it ends up biting them if they continue not to care.

Ultimately, these things just catch up with you if you have a reputation for making a poor product. What if you had a reputation for making a device that if you put it on the Internet it would reboot every five minutes because it’s getting attacked? Even if we did enact security requirements for IoT that manufacturers not in the U.S. wouldn’t have to follow, it would still be in their best interests to care, because they are going to care sooner or later.

BK: I was on a Justice Department conference call with other journalists on the day they announced the Mirai author arrests and guilty pleas, and someone asked why this case was prosecuted out of Alaska. The answer that came back was that a great many of the machines infected with Mirai were in Alaska. But it seems more likely that it was because there was an FBI agent there who decided this was an important case but who actually had a very difficult time finding enough infected systems to reach the threshold needed to prosecute the case. What’s your read on that?

AN: I think that this case is probably going to set precedent in terms of the procedures and processes used to go after cybercrime. I’m sure you finished reading The Wired article about the Alaska investigation into Mirai: It goes into detail about some of the difficult things that the Alaska FBI field office had to do to satisfy the legal requirements to take the case. Just to prove they had jurisdiction, they had to find a certain number of infected machines in Alaska.

Those were not easy to find, and in fact the FBI traveled far and wide in order to find these machines in Alaska. There are all kinds of barriers big and small that slow down the legal process for prosecuting cases like this, some of which are legitimate and some that I think are going to end up being streamlined after a case like this. And every time a successful case like this goes through [to a guilty plea], it makes it more possible for future cases to succeed.

This one group [that was the subject of the Mirai investigation] was the worst of the worst in this problem area. And right now it’s a huge victory for law enforcement to take down one group that is the worst of the worst in one problem area. Hopefully, it will lead to the takedown of many groups causing damage and harming people.

But the concept that in order for cybercriminals to get law enforcement attention they need to make international headlines and cause massive damage needs to change. Most cybercriminals probably think that what they’re doing nobody is going to notice, and in a sense they’re correct because there is so much obvious criminal activity blatantly connected to specific individuals. And that needs to change.

BK: Is there anything we didn’t talk about related to IoT security, the law enforcement investigations into Mirai, or anything else you’d like to add?

AN: I want to extend my gratitude to the people in the security industry and network operator community who recognized the gravity of this threat early on. There are a lot of people who were not named [in the stories and law enforcement press releases about the Mirai arrests], and I want to say thank you for all the help. This couldn’t have happened without you.

Worse Than FailureSponsor Post: Make Your Apps Better with Raygun

I once inherited an application which had a bug in it. Okay, I’ve inherited a lot of applications like that. In this case, though, I didn’t know that there was a bug, until months later, when I sat next to a user and was shocked to discover that they had evolved a complex work-around to bypass the bug which took about twice as long, but actually worked.

“Why didn’t you open a ticket? This shouldn’t be like this.”

“Enh… it’s fine. And I hate dealing with tickets.”

In their defense, our ticketing system at that office was a godawful nightmare, and nobody liked dealing with it.

The fact is, awful ticket tracking aside, 99% of users don’t report problems in software. Adding logging can only help so much- eventually you have a giant haystack filled with needles you don’t even know are there. You have no way to see what your users are experiencing out in the wild.

But what if you could? What if you could build, test and deploy software with a real-time feedback loop on any problems the users were experiencing?

Our new sponsor, Raygun, gives you a window into the real user-experience for your software. With a few minutes of setup, all the errors, crashes, and performance issues will be identified for you, all in one tool.

You're probably using software and services today that rely on Raygun to identify when users have a poor experience: Domino's Pizza, Coca-Cola, Microsoft and Unity all use it, along with many others.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integration, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] Otter allows you to easily create and configure 1,000's of servers, all while maintaining ease-of-use, and granular visibility down to a single server. Find out more and download today!

Worse Than FailureAll Saints' Day

Cathedral Antwerp July 2015-1

Oh, PHP. It's the butt of any number of jokes in the programming community. Those who do PHP often lie and pretend they don't, just to avoid the social stigma. Today's submitter not only works in PHP, but they also freelance: the bottom of the bottom of the development hierarchy.

Last year, Ilya was working on a Joomla upgrade as well as adjusting several components on a big, obscure website. As he was poking around in the custom code, he found today's submission. You see, the website is in Italian. At the top of the page, it shows not only the date, but also the saint of the day. This is a Catholic thing: every day of the year has a patron saint, and in certain cultures, you might even be named for the saint whose day you were born on. A full list can be found on this Italian Wikipedia page.

Every day, the website was supposed to display text like "18 luglio: santi Sinforosa e sette compagni" (July 18: Sinforosa and the Seven Companions). But the code that generated this string had broken. It wasn't Ilya's task to fix it, but he chose to do so anyway, because why not?

His first suspect for where this text came from was this mess of Javascript embedded in the head:

     function getDataGiorno(){
     data = new Date();
     ora =data.getHours();
     minuti=data.getMinutes();
     secondi=data.getSeconds();
     giorno = data.getDay();
     mese = data.getMonth();
     date= data.getDate();
     year= data.getYear();
     if(minuti< 10)minuti="0"+minuti;
     if(secondi< 10)secondi="0"+secondi;
     if(year<1900)year=year+1900;
     if(ora<10)ora="0"+ora;
     if(giorno == 0) giorno = " Domenica ";
     if(giorno == 1) giorno = " Lunedì ";
     if(giorno == 2) giorno = " Martedì ";
     if(giorno == 3) giorno = " Mercoledì ";
     if(giorno == 4) giorno = " Giovedì ";
     if(giorno == 5) giorno = " Venerdì ";
     if(giorno == 6) giorno = " Sabato ";
     if(mese == 0) mese = "gennaio ";
     if(mese ==1) mese = "febbraio ";
     if(mese ==2) mese = "marzo ";
     if(mese ==3) mese = "aprile ";
     if(mese ==4) mese = "maggio ";
     if(mese ==5) mese = "giugno ";
     if(mese ==6) mese = "luglio ";
     if(mese ==7) mese = "agosto ";
     if(mese ==8) mese = "settembre ";
     if(mese ==9) mese = "ottobre ";
     if(mese ==10) mese = "novembre ";
     if(mese ==11) mese = "dicembre";
     var dt=date+" "+mese+" "+year;
     var gm =date+"_"+mese;

     return gm.replace(/^\s+|\s+$/g,""); ;
     }

     function getXMLHttp() {
     var xmlhttp = null;
     if (window.ActiveXObject) {
       if (navigator.userAgent.toLowerCase().indexOf("msie 5") != -1) {
         xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
       } else {
           xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
       }
     }
     if (!xmlhttp && typeof(XMLHttpRequest) != 'undefined') {
       xmlhttp = new XMLHttpRequest()
     }
     return xmlhttp
     }

     function elaboraRisposta() {
      var dt=getDataGiorno();
      var data = dt.replace('_',' ');
      if (dt.indexOf('1_')==0){
          dt.replace('1_','%C2%BA');
      }
       // alert("*"+dt+"*");
     var temp = new Array();
     temp = objHTTP.responseText.split(dt);
     //alert(temp[1]);

     var temp1=new Array();
     temp1=temp[1].split(":");
     temp=temp1[1].split("");


      if (objHTTP.readyState == 4) {
      santi=temp[0].split(",");
      //var app = new Array();
      //app=santi[0].split(";");
      santo=santi[0];
      //alert(santo);

        // document.write(data+" - "+santo.replace(/\/wiki\//g,"http://it.wikipedia.org/wiki/"));
        document.write(data+" - "+santo);
      }else {

      }

     }
     function loadDati() {
      objHTTP = getXMLHttp();
      objHTTP.open("GET", "calendario.html" , false);

      objHTTP.onreadystatechange = function() {elaboraRisposta()}


     objHTTP.send(null)
     }

If you've never seen Joomla before, do note that most templates use jQuery. There's no need to use ActiveXObject here.

"calendario.html" contained very familiar text: a saved copy of the Wikipedia page linked above. This ought to be splitting the text with the date, avoiding parsing HTML with regex by using String.prototype.split(), and then parsing the HTML to get the saint for that date to inject into the HTML.

But what if a new saint gets canonized, and the calendar changes? By caching a local copy, you ensure that the calendar will get out of date unless meticulously maintained. Therefore, the code to call this Javascript was commented out entirely in the including page:

<div class="santoForm">
<!--  <?php echo "<script>loadDati()</script>" ;  ?> -->
<?php ...

Instead, it had been replaced with this:

       setlocale(LC_TIME, 'ita', 'it_IT.utf8');
       $gg = ltrim(strftime("%d"), '0');

       if ($gg=='1'){
        $dt=ltrim(strftime("1º %B"), '0');
       }else{
         $dt=ltrim(strftime("%d %B"), '0');
       }
       //$dt='4 febbraio';
     $html = file_get_contents('http://it.wikipedia.org/wiki/Calendario_dei_santi');
     $doc = new DOMDocument();
     $doc->loadHTML($html);
     $elements = $doc->getElementsByTagName('li');
     foreach ($elements as $element) {
        if (strpos($element->nodeValue,utf8_encode($dt))!== false){
         list($santo,$after)= split(';',$element->nodeValue);
         break ;
        }else {}
     }
       list($santo,$after)= split(',',utf8_decode($santo));
       if (strlen ( $santo) > 55) {
         $santo=substr($santo, 0, 55)."...";
       }

This migrates the logic to the backend—the One True Place for all such logic—and uses standard library routines, just as it should. Of course, this being PHP, it breaks horribly if you look at it cross-eyed, or—like Ilya—don't have the Italian locale installed on your development machine. And of course, it'll also break if the live Wikipedia page about the saints gets reformatted. But what's the likelihood of that? Plus, it's not cached in the least, letting every visitor see updates in real time. After all, next time they canonize a saint, everyone will rush right to this site to verify that the day changed. That's how the Internet works, right?
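
The uncached fetch is the easiest part to do better. As an illustration only (in Python rather than the site's PHP, with a hypothetical cache path), the idea is to refresh the local copy at most once a day and serve every visitor from disk otherwise:

    # Illustrative sketch, not the site's code: fetch the calendar page at
    # most once per day and serve all other requests from a local cache.
    import os
    import time
    import urllib.request

    URL = "https://it.wikipedia.org/wiki/Calendario_dei_santi"
    CACHE = "/tmp/calendario.html"  # hypothetical cache location
    TTL = 24 * 60 * 60             # refresh at most once a day

    def get_calendar_html():
        stale = (not os.path.exists(CACHE)
                 or time.time() - os.path.getmtime(CACHE) > TTL)
        if stale:
            req = urllib.request.Request(
                URL, headers={"User-Agent": "saint-of-the-day-cache/1.0"})
            with urllib.request.urlopen(req) as resp:
                with open(CACHE, "wb") as fh:
                    fh.write(resp.read())
        with open(CACHE, "rb") as fh:
            return fh.read().decode("utf-8")

Even this simple scheme removes both the per-visitor fetch and the hard runtime dependency on Wikipedia being up.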

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!

CryptogramDetecting Drone Surveillance with Traffic Analysis

This is clever:

Researchers at Ben Gurion University in Beer Sheva, Israel have built a proof-of-concept system for counter-surveillance against spy drones that demonstrates a clever, if not exactly simple, way to determine whether a certain person or object is under aerial surveillance. They first generate a recognizable pattern on whatever subject -- a window, say -- someone might want to guard from potential surveillance. Then they remotely intercept a drone's radio signals to look for that pattern in the streaming video the drone sends back to its operator. If they spot it, they can determine that the drone is looking at their subject.

In other words, they can see what the drone sees, pulling out their recognizable pattern from the radio signal, even without breaking the drone's encrypted video.

The details have to do with the way drone video is compressed:

The researchers' technique takes advantage of an efficiency feature streaming video has used for years, known as "delta frames." Instead of encoding video as a series of raw images, it's compressed into a series of changes from the previous image in the video. That means when a streaming video shows a still object, it transmits fewer bytes of data than when it shows one that moves or changes color.

That compression feature can reveal key information about the content of the video to someone who's intercepting the streaming data, security researchers have shown in recent research, even when the data is encrypted.
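
The side channel is simple enough to demonstrate with a toy model. The sketch below is my own illustration of the principle, not the researchers' code: simulate per-frame encrypted stream sizes where a flickering subject produces large delta frames, then check that a known on/off pattern shows up in the sizes alone.

    # Toy model of the delta-frame side channel: an observer who sees only
    # encrypted frame sizes can correlate them with a known flicker pattern.
    import random

    random.seed(1)
    pattern = [random.choice([0, 1]) for _ in range(200)]  # injected flicker

    def frame_size(bit):
        # Still scene -> small delta frames; changing scene -> large ones.
        base = 12000 if bit else 1500
        return base + random.gauss(0, 500)

    sizes = [frame_size(b) for b in pattern]

    def corr(xs, ys):
        # Pearson correlation, stdlib-only.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # A value near 1.0 suggests the flickered subject is in the video feed.
    print(round(corr(pattern, sizes), 3))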

Research paper and video.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 3 – Booting

Securing the Linux boot process Matthew Garrett

  • Without boot security there is no other security
  • MBR Attacks – previously common, still work sometimes
  • Bootloader attacks – Seen in the wild
  • Malicious initrd attacks
    • RAM disk, does stuff like decrypt hard drive
    • Attack captures the disk passphrase when it is typed in
  • How do we fix these?
    • UEFI Secure boot
    • Microsoft required it in machines shipped after mid-2012
    • Sign objects; firmware trusts some certs and boots correctly signed things
    • Problem solved! Nope
    • initrds are not signed
  • initrds
    • contain local changes
    • do a lot of security stuff
  • TPMs
    • devices on system motherboards
    • slow but inexpensive
    • Not under control of the CPU
    • Set of registers, “platform configuration registers” (PCRs): a list of hashes of the objects booted during the boot process, known as measurements (see the sketch after this list)
    • PCRs can enforce things, e.g. stop boots if measurements don’t match
    • But stuff changes all the time, e.g. firmware updates. Can brick the machine
  • Microsoft to the rescue
    • Tie Secure boot into measured boot
    • Measure signing keys rather than the actual files themselves
    • But initrds are not signed
  • Systemd to the rescue
    • systemd boot stub (not the systemd boot loader)
    • Embed initrd and the kernel into a single image with a single signature
    • But initrds contain local information
    • End users should not be signing stuff
  • Kernel can be handed multiple initramfs images (via cpio)
    • each unpacked in turn
    • Each will over-write the previous one
    • configuration can be over-written by the signed image, perhaps safely, so that if the config is tampered with, the boot fails
    • unpack config first, code second
  • Kernel command line is also security sensitive
    • e.g. turn off the IOMMU and dump RAM to extract keys
    • Have a secure command line turning on all security features, and append what the user sends onto it
  • Proof of device state
    • Can show you a number after boot based on the TPM. Compare it to a 2FA device to make sure the machine is securely booted. Then it is safe to type in passwords
  • Secure Provision of secrets
    • Know a remote machine has booted safely and has not been subverted before sending it secret stuff.
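
A simplified model of the PCR measurement mechanic mentioned above (my own sketch, ignoring TPM command encodings and hash agility): a PCR can never be set directly, only extended, so the final value commits to the entire ordered boot chain.

    # Simplified model of TPM PCR "extend": each measurement is folded into
    # a running hash, so the result depends on every component and its order.
    import hashlib

    def extend(pcr, measured_bytes):
        digest = hashlib.sha256(measured_bytes).digest()
        return hashlib.sha256(pcr + digest).digest()

    pcr = b"\x00" * 32  # PCRs start zeroed at power-on
    for component in (b"firmware", b"bootloader", b"kernel", b"initrd"):
        pcr = extend(pcr, component)

    # Change any component (or reorder them) and this value changes, which
    # is what sealed secrets and remote attestation are checked against.
    print(pcr.hex())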


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 2

Dealing with Contributor Overload Holden Karau

  • Developer Advocate at Google
  • Apache Spark, Contributor to BEAM

Some people from big projects, some from projects hoping to get big

  • Remember it’s okay to not fix it all
  • The fun of a small project
    • Simple communication
    • Aligned incentives
    • Easy to tell who knows what
    • Tight community
  • The fun of a large project
    • More people to do the work
    • More impact and people thanking you
    • Lots of ideas and experiences
    • If there’s money, then fun conferences
    • Get paid to work on it.
  • Is my project on fire? Or are there just lots of people on it?
    • Measurable
      • User questions spike
      • issue spike
    • Less measurable
      • Non-explicit stuff not being passed on
  • Classic Pipeline
    • Users -> contributors -> committers -> PMC
    • Each stage takes times
    • Very leaky pipeline, perhaps it leaks too much
  • With hyper-growth project can quickly go south
    • Committer:user ratio can’t get too far out.
  • Even without hyper-growth: sadness
    • Same thing happens, but slower
  • Overload – Mitigation
    • You don’t have to answer everyone, this can be hard
    • Stackoverflow
    • Are your answers easily searchable
    • Knowledge base – “do you mean”
    • Take time and look for patterns in questions
    • Find people who like writing and get them to write a book
      • Don’t go for core committers, they will have no time for anything else
  • Issue overload
    • Try and get rid of duplicate tickets
    • Autoclose tickets – mixed results
  • How to deal with a spike
    • Raise the bar
    • Make it easier
    • Get Perl to solve the problem
  • Raising the bar
    • Reject trivial changes – reduces the onramp
    • Add weird system – more posts on how to contribute
  • What can Perl solve
    • Style guide
    • Bots, bots, bots (see the sketch after this list)
    • make it faster to merge
    • Improve PR + reviewer notice
    • Can increase productivity
  • Add more committers
    • Takes time and effort
    • People can be shy
    • Make a guide for new folks to follow
    • Have a safe space for people to ask questions
  • Reduce overhead for contributing well
    • Have the doc on how to contribute next to the code, not somewhere else that people have to search for.
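
To make the “bots” point concrete, here is a minimal sketch of an auto-close bot against the GitHub REST API. The repository, token and 90-day cutoff are placeholders, and a real project would more likely reuse existing tooling such as the stale GitHub Action; as the notes above say, results are mixed if the bar is set too aggressively.

    # Sketch of an issue-triage bot: comment on and close open issues that
    # have not been updated in 90 days. Repo and token are placeholders.
    import datetime
    import requests

    TOKEN = "ghp_placeholder"
    REPO = "example-org/example-project"
    HEADERS = {"Authorization": "token " + TOKEN,
               "Accept": "application/vnd.github+json"}

    cutoff = (datetime.date.today() - datetime.timedelta(days=90)).isoformat()
    query = "repo:{} is:issue state:open updated:<{}".format(REPO, cutoff)
    stale = requests.get("https://api.github.com/search/issues",
                         headers=HEADERS, params={"q": query}).json()["items"]

    for issue in stale:
        base = "https://api.github.com/repos/{}/issues/{}".format(
            REPO, issue["number"])
        requests.post(base + "/comments", headers=HEADERS,
                      json={"body": "Closing as stale; comment to reopen."})
        requests.patch(base, headers=HEADERS, json={"state": "closed"})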

The Open Sourcing of Infrastructure Elizabeth K. Joseph

The recent history of infrastructure

  • 1998
    • To make a server, use Solaris or NT, bought off the shelf
    • Linux seen as Cheap Unix
    • Lots of FUD

Got a Junior Sysadmin Job

  • 2004
    • Had to tell people the basics “What is free software?”  , “Using Open Source Web Applications to Produce Business Results”
    • Turning point LAMP stack
    • Flood of changes in how customers interacted with software since then:
      • Reluctance to be locked-in by a vendor
      • Concerns of security
      • Ability to fix bugs ourselves
      • Innovation stifled when software developed in isolation

Last 10 years

  • Changes in how people interacted with software
    • Downtime un-acceptable
    • Reliance of scaling and automation
    • Servers as Pets -> cattle
    • Large focus on data

Open Source is now Ubiquitous

  • Even Microsoft is using it a lot and interacting with the community

Operations tools were not as Open Sourced

  • Configuration Management
    • Puppet modules, Chef cookbooks
  • Open application definitions – Juju charms, DC/OS Universe Catalog
  • Full disk images
    • Dockerhub

The Cloud

  • Cloud is the new proprietary
  • EC2-only infrastructure
  • Questions you should ask beforehand
    • Is your service adhering to open standards or am I locked in?
    • What recourse is there if the company goes out of business?
    • Does the vendor have a history of communicating about downtime and security problems?
    • Does the vendor respond to bugs and feature requests?
    • Will the vendor use data in a way I’m not comfortable with?
    • Initial costs may be low, but do you have a plan to handle long-term, growing costs?
  • Alternatives
    • Openstack, Kubernetes, Docker Swarm, DC/OS with Apache Mesos

Hybrid Cloud

  • Tooling can be platform agnostic
  • Hard but can be done


Planet Linux AustraliaFrancois Marier: LXC setup on Debian stretch

Here's how to setup LXC-based "chroots" on Debian stretch. While I wrote about this on Debian jessie, I had to make some networking changes for stretch and so here are the full steps that should work on stretch.

Start by installing (as root) the necessary packages:

apt install lxc libvirt-clients debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx

That configuration requires that the veth kernel module be loaded. If you have any kinds of module-loading restrictions enabled, you probably need to add the following to /etc/modules and reboot:

veth

Next, I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward=1
    
  2. and then applying it using:

    sysctl -p
    
  3. Restart the network bridge:

    systemctl restart lxc-net.service
    
  4. and ensure that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -s 10.0.3.0/24 -j ACCEPT
    -A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT
    
  5. and applying the rules using:

    iptables-apply
    

Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR=http://httpredir.debian.org/debian lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64 -F

Since the root password is randomly generated, you'll need to reset it before you can login as root:

sudo lxc-attach -n sid64 passwd

Then login as root and install a text editor inside the container because the root image doesn't have one by default:

apt install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy

Mounting your home directory inside a container

In order to have my home directory available within the container, I created a user account for myself inside the container and then added the following to the container config file (/var/lib/lxc/sid64/config):

lxc.mount.entry=/home/francois home/francois none bind 0 0

before restarting the container:

lxc-stop -n sid64
lxc-start -n sid64

Fixing locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.

If you see these errors while reconfiguring the locales package:

Generating locales (this might take a while)...
  en_US.UTF-8...cannot change mode of new locale archive: No such file or directory
 done
  fr_CA.UTF-8...cannot change mode of new locale archive: No such file or directory
 done
Generation complete.

and see the following dmesg output on the host:

[235350.947808] audit: type=1400 audit(1441664940.224:225): apparmor="DENIED" operation="chmod" info="Failed name lookup - deleted entry" error=-2 profile="/usr/bin/lxc-start" name="/usr/lib/locale/locale-archive.WVNevc" pid=21651 comm="localedef" requested_mask="w" denied_mask="w" fsuid=0 ouid=0

then AppArmor is interfering with the locale-gen binary and the work-around I found is to temporarily shut down AppArmor on the host:

lxc-stop -n sid64
systemctl stop apparmor
lxc-start -n sid64

and then start it up again later once the locales have been updated:

lxc-stop -n sid64
systemctl start apparmor
lxc-start -n sid64

AppArmor support

If you are running AppArmor, your container probably won't start until you add the following to the container config (/var/lib/lxc/sid64/config):

lxc.aa_allow_incomplete = 1

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 1 – k8s @ home and bad buses

How to run Kubernetes on your spare hardware at home, and save the world Angus Lees

  • Mainframe ->
  • PC ->
  • Rackmount PC
    • But the rackmount PC, even with built-in redundancy, will still fail. Or the location will go offline, or your data spreads across multiple machines
  • Since you need to have distributed/redundancy anyway. New model (2005). Grid computing. Clever software, dumb hardware. Loosely coupled servers
    • Libraries -> RPC / Microservices
    • Threadpool -> hadoop
    • SQL -> key/value store
    • NFS -> Object store
    • In-place upgrades -> “Immutable” image-based build from scratch
  • Computers in clouds
    • No cases, no redundant power, journaling on filesystems turned off, etc
  • Everything is in clouds – Secondary effects
    • Corporate-driven
    • Apache license over GPL
    • Centralised services rather than federated protocols
    • Profit-driven rather than scratching itches
  • Summary
    • Problem
      • Distributed Systems hard to configure
      • Solutions scale down poorly
      • Most homes don’t have racks of servers
    • Implication
      • Home Free Software “stuck” at single-machine architecture
  • Kubernetes (lots of stuff, but I use it already so just doing unique bits)
    • “Unix Process as a service”
    • Inverts the stack. Data is most important, then the app; kernel and hardware are unimportant.
    • Easy upgrades, everything is an upgrade
    • Declarative API , command line interface
  • “We’ve conducted this experiment for decades now, and I have news for you, Hardware fails”

Hardware at Home

  • RAID used to be “enterprise”, now it’s normal for home
  • Elastic compute for home too
  • Kubernetes for Home
    • Budget $100
      • ARM master nodes
      • Mixed architecture
    • Assume single layer-2 home ethernet
    • Worker nodes – old $500 laptops
      • x86-64
      • CoreOS
      • Broken screens, dead batteries
    • 3 * $30 Banana pis
      • Raspberry Pi2
      • armv7a
      • containOS
    • PersistentVolumes
      • NFS mount from RAID server
    • Service – keepalived-vip
    • Ingress
      • keepalived and nginx-ingress, Let’s Encrypt
      • Wildcard DNS
    • Status
      • Works!
      • Printing works
      • Install: PXE boot and run coreos-install
    • Status – ungood
      • Banana PIs a bit too slow.
    • github.com/anguslees/k8s-home

Is the 370 the worst bus route in Sydney? Katie Bell

  • The 370 bus
    • Goes to UNSW and Sydney University. Goes around the city
  • If bus runs every 15 minutes, you should not be able to see 3 at once
  • Newspaper articles and Facebook group about how bad it is.
  • Two Questions
    • Bus privatisation: better or worse?
    • Is the 370 really the worst
  • Data provided
    • Lots of stuff, but nothing on reliability
    • But they do have realtime data, e.g. for the Tripetime app (done via a 3rd party)
    • They have an API and key with a standard format via GTFS
  • But they only publish “realtime” data, not the old data
    • So collected the realtime data, once a minute for 4 months
    • 557 GB
  • Format
    • zipfile of csv files
    • IDs sometimes ephemeral
    • Had to match timetable data and realtime data
    • Data had to be tidied up – lots
  • Processing realtime data
    • Download 1 minute
    • Parse
    • Match each of around ~7000 trips in timetable (across all of NSW)
    • Write ~20000 realtime updates to the DB
    • Running 5 EC2 instances at peak
    • Writing up to 40MB/s to the DB
  • Is the 370 the worst?
    • Define “worst”
    • Found NSW definition of what an on-time bus is.
    • Not more than 5:59 late or 1:59 early. Measured at start/middle/end
    • Victoria’s definition is stricter
    • She defined (the classification is sketched in code after this list):
      • Early: more than 2 min early
      • On time: 2 min early to 5 min late
      • Late: more than 5 min late
      • Very late: more than 20 min late
    • Across all trips
      • 3.7 million trips
      • On time 31%
      • More than 20m late 2.86%
    • Best routes
      • Nightime buses
      • Outside of Sydney
      • Shorter routes
      • 86% – 97% or better
    • Worst
      • Less than 5% on time
      • Longer routes
      • 370 is the 22nd worst
        • 8.79% on time
    • Worst routes (percent > 20 min late)
      • 23% of 370 trips (6th worst)
      • Lots of Wollongong
    • Worst agencies
      • No obvious difference between agencies and private companies
    • Conclusion
      • Privatisation could go either way
      • 370 is close to the worst (277 could be worse) in Sydney
    • bus-shaming.com
    • github.com/katharosada/bus-shaming
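
The on-time classification above is straightforward to express in code. Here is a sketch of the idea using the GTFS-realtime protobuf bindings (the gtfs-realtime-bindings Python package); the feed URL and auth header are placeholders rather than the real NSW endpoint.

    # Sketch: tally bus delays from a GTFS-realtime TripUpdates feed using
    # the classification above. Feed URL and API key are placeholders.
    import requests
    from google.transit import gtfs_realtime_pb2

    FEED_URL = "https://example.transport.api/gtfs-rt/buses"  # placeholder
    HEADERS = {"Authorization": "apikey YOUR-KEY"}            # placeholder

    def classify(delay_seconds):
        if delay_seconds < -120:
            return "early"      # more than 2 min early
        if delay_seconds <= 300:
            return "on time"    # 2 min early to 5 min late
        if delay_seconds <= 1200:
            return "late"       # 5 to 20 min late
        return "very late"      # more than 20 min late

    feed = gtfs_realtime_pb2.FeedMessage()
    feed.ParseFromString(requests.get(FEED_URL, headers=HEADERS).content)

    counts = {}
    for entity in feed.entity:
        if not entity.HasField("trip_update"):
            continue
        for stop_update in entity.trip_update.stop_time_update:
            if stop_update.HasField("arrival"):
                label = classify(stop_update.arrival.delay)
                counts[label] = counts.get(label, 0) + 1
    print(counts)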

Questions

  • Used Spot instances to keep cost down
  • $200 month on AWS
  • Buses better/worse according to time of day? Not checked yet
  • Wanted to calculate the “wait time”, not done yet.
  • Another feed of bus locations and some other data out there too.
  • Lots of other questions


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Keynote – Karen Sandler

Executive director of Software Freedom Conservancy

Previously spoke at LCA 2012 about closed-source software on her heart implant. Since then she has pivoted her career toward more open-source advocacy.

  • DMCA exemption for medical device research
  • When you ask your doctor about safety of devices you sound like a conspiracy theorist
  • Various problems have been highlighted, some progress
  • Some companies addressing them

Initially published paper highlighting problem without saying she had the device

  • Got pushback from groups who thought she was scaremongering
  • Companies thinking about liability issues
  • After told story in 2012 things improved

Had to get new device recently.

  • Needed the wireless access disabled since her job pisses off hackers sometimes
  • All manufacturers said they could not disable wireless access
  • Finally found a single model that could be disabled made by a European manufacturer

 

Note: This is a quick summary; lots more was covered but it is hard to capture it all. The video should be good. Her slides were broken through much of the talk but she still delivered a great talk.


Google AdsenseLet machine learning create your In-feed ads


Last year we launched AdSense Native ads, a new family of ads created to match the look and feel of your site. If your site has an editorial feed (a list of articles or news) or a listings feed (a list of products or services), then Native In-feed ads are a great option to give your users a better experience.

Now we've brought the power of machine learning to In-feed ads, saving you time. If you're not quite sure what fonts, colors, and styles will work best for your site, you can let Google's machine learning help you decide. 

How it works: 

  1. Create a new Native In-Feed ad and select "Let Google suggest a style." 
  2. Enter the URL of a page with a feed you’d like to monetize. AdSense will scan your page to find the best placement. 
  3. Select which feed element you’d like your In-feed ad to match.
  4. Your ad is automatically created – simply place the piece of code into your feed, and you’re done! 

By the way, this method is optional, so if you prefer, you can create your ads manually. 

Posted by: 

Faris Zerdoudi, AdSense Tech Lead 
Violetta Kalathaki, AdSense Product Manager 


Sociological ImagesPod Panic & Social Problems

My gut reaction was that nobody is actually eating the freaking Tide Pods.

Despite the explosion of jokes—yes, mostly just jokes—about eating detergent packets, sociologists have long known about media-fueled cultural panics about problems that aren’t actually problems. Joel Best’s groundbreaking research on these cases is a classic example. Check out these short video interviews with Best on kids not really being poisoned by Halloween candy and the panic over “sex bracelets.”


In a tainted Halloween candy study, Best and Horiuchi followed up on media accounts to confirm cases of actual poisoning or serious injury, and they found many cases couldn’t be confirmed or were greatly exaggerated. So, I followed the data on detergent digestion.

It turns out, there is a small trend. According to a report from the American Association of Poison Control Centers,

…in 2016 and 2017, poison control centers handled thirty-nine and fifty-three cases of intentional exposures, respectively, among thirteen to nineteen year olds. In the first fifteen days of 2018 alone, centers have already handled thirty-nine such intentional cases among the same age demographic.

That said, this trend is only relative to previous years and cannot predict future incidents. The life cycle of internet jokes is fairly short, rotating quickly with an eye toward the attention economy. It wouldn’t be too surprising if people moved on from the pods long before the panic dies out.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main February 2018 Meeting: Linux.conf.au report

Location: 
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

PLEASE NOTE NEW LOCATION

Tuesday, February 6, 2018
6:30 PM to 8:30 PM

Speakers:

  • Russell Coker and others, LCA conference report

Russell Coker has done lots of Linux development over the years, mostly involved with Debian.


Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.


CryptogramNew Malware Hijacks Cryptocurrency Mining

This is a clever attack.

After gaining control of the coin-mining software, the malware replaces the wallet address the computer owner uses to collect newly minted currency with an address controlled by the attacker. From then on, the attacker receives all coins generated, and owners are none the wiser unless they take time to manually inspect their software configuration.

So far it hasn't been very profitable, but it -- or some later version -- eventually will be.
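
The defense is conceptually as simple as the attack. A hypothetical watchdog sketch follows; the config path, file format and key name are invented for illustration, since real miners vary.

    # Hypothetical watchdog: verify the miner's configured payout address
    # still matches the one you expect. Path and key name are illustrative.
    import json
    import sys

    EXPECTED_WALLET = "your-real-wallet-address"  # placeholder
    CONFIG_PATH = "/etc/miner/config.json"        # hypothetical path

    with open(CONFIG_PATH) as fh:
        config = json.load(fh)

    configured = config.get("wallet")             # hypothetical key name
    if configured != EXPECTED_WALLET:
        print("ALERT: payout address changed to", configured, file=sys.stderr)
        sys.exit(1)
    print("payout address OK")

Run periodically from cron, something like this turns a silent redirection of earnings into an alert.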

Worse Than FailureCoded Smorgasbord: Archive This

Michael W came into the office to a hair-on-fire freakout: the midnight jobs failed. The entire company ran on batch processes to integrate data across a dozen ERPs, mainframes, CRMs, PDQs, OMGWTFBBQs, etc.: each business unit ran its own choice of enterprise software, but then needed to share data. If they couldn’t share data, business ground to a halt.

Business had ground to a halt, and it was because the archiver job had failed to copy some files. Michael owned the archiver program, not by choice, but because he got the short end of that particular stick.

The original developer liked logging. Pretty much every method looked something like this:

public int execute(Map arg0, PrintWriter arg1) throws Exception {
    Logger=new Logger(Properties.getString("LOGGER_NAME"));
    Log=new Logger(arg1);
    .
    .
    .
catch (Exception e) {
    e.printStackTrace();
    Logger.error("Monitor: Incorrect arguments");
    Log.printError("Monitor: Incorrect arguments");
    arg1.write("In Correct Argument Passed to Method.Please Check the Arguments passed \n \r");
    System.out.println("Monitor: Incorrect arguments");
}

Sometimes, to make the logging even more thorough, the catch block might look more like this:

catch(Exception e){
    e.printStackTrace();
    Logger.error("An exception happened during SFTP movement/import. " + (String)e.getMessage());
}

Java added Generics in 2004. This code was written in 2014. Does it use generics? Of course not. Every Hashtable is stringly-typed:

Hashtable attributes;
.
.
.
if (((String) attributes.get(key)).compareTo("1") == 0 | ((String) attributes.get(key)).compareTo("0") == 0) { … }

And since everything is stringly-typed, you have to worry about case-sensitive comparisons, but don’t worry, the previous developer makes sure everything’s case-insensitive, even when comparing numbers:

if (flag.equalsIgnoreCase("1") ) { … }

And don’t forget to handle Booleans…

public boolean convertToBoolean(String data) {
    if (data.compareToIgnoreCase("1") == 0)
        return true;
    else
        return false;
}

And empty strings…

if(!TO.equalsIgnoreCase("") && TO !=null) { … }

Actually, since types are so confusing, let’s make sure we’re casting to known-safe types.

catch (Exception e) {
    Logger.error((Object)this, e.getStackTraceAsString(), null, null);
}

Yes, they really are casting this to Object.

Since everything is stringly typed, we need this code, which checks to see if a String parameter is really sure that it’s a string…

protected void moveFile(String strSourceFolder, Object strSourceObject,
                     String strDestFolder) {
    if (strSourceObject.getClass().getName().compareToIgnoreCase("java.lang.String") == 0) { … }
    …
}

Now, that all was enough to get Michael’s blood pressure up, but none of that had anything to do with his actual problem. Why did the copy fail? The logs were useless, as they were spammed with messages with no particular organization. The code was bad, sure, so it wasn’t surprising that it crashed. For a little while, Michael thought it might be the getFiles method, which was supposed to identify which files needed to be copied. It did a recursive directory search (with no depth checking, so one symlink could send it into an infinite loop), and it didn’t actually filter files that it didn’t care about. It just made an ArrayList of every file in the directory structure and then decided which ones to copy.

He spent some time really investigating the copy method, to see if that would help him understand what went wrong:

sourceFileLength = sourceFile.length();
newPath = sourceFile.getCanonicalPath();
newPath = newPath.replace(".lock", "");
newFile = new File(newPath);
sourceFile.renameTo(newFile);                    
destFileLength = newFile.length();
while(sourceFileLength!=destFileLength)
{
    //Copy In Progress
}
//Remy: I didn't elide any code from the inside of that while loop- that is exactly how it's written, as an empty loop.

Hey, out of curiosity, what does the JavaDoc have to say about renameTo?

Many aspects of the behavior of this method are inherently platform-dependent: The rename operation might not be able to move a file from one filesystem to another, it might not be atomic, and it might not succeed if a file with the destination abstract pathname already exists. The return value should always be checked to make sure that the rename operation was successful.

It only throws exceptions if you don’t supply a destination, or if you don’t have permissions to the files. Otherwise, it just returns false on a failure.

So… if the renameTo operation fails, the archiver program will drop into an infinite loop. Unlogged. Undetected. That might seem like the root cause of the failure, but it wasn’t.

As it turned out, the root cause was that someone in ops hit “Ok” on a security update, which triggered a reboot, disrupting all the scheduled jobs.

Michael still wanted to fix the archiver program, but there was another problem with that. He owned the InventoryArchiver.jar. There was also OrdersArchiver.jar, and HRArchiver.jar, and so on. They had all been “written” by the same developer. They all did basically the same job. So they were all mostly copy-and-paste jobs with different hard-coded strings to specify where they ran. But they weren’t exactly copy-and-paste jobs, so each one had to be analyzed, line by line, to see where the logic differences might possibly crop up.

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 2 – Keynote – Matthew Todd

Collaborating with Everybody: Open Source Drug Discovery

  • Term used is a bit undefined. Open Source, Free Drugs?
  • First Open Source Project – Praziquantel
    • Molecule has two mirror-image forms. One does the job, the other tastes awful. Pills were previously a mix
    • Project to just have pill with the single form
      • Created discussion
      • Online Lab Notebook
      • 75% of contributions were from private sector (especially Syncom)
      • Ended up finding an approach that worked, different from what was originally proposed, thanks to feedback
      • A similar method was found by a private company that was also doing the work
  • Conventional Drug discovery
    • Find a drug that kills something bad – a “hit”
    • Test it and see if it is suitable – a “lead”
    • 13,500 molecules in the public domain that kill the malaria parasite
  • Six Laws of Open Science
    • All data is open and all ideas are shared
    • Anyone can take part at any level of the project
  • Openness increasingly seen as key
  • Open Source Malaria
    • 4 campaigns
    • Work on a molecule, park it when doesn’t seem promising
    • But all data is still public
  • What it actually is
    • Electronic lab book (80% of scientists still use paper)
    • Using Labtrove, changing to labarchives
    • Everything you do goes up every day
    • Todo list
      • Tried stuff, ended up using issue list on github
      • Not using most other github stuff
    • Data on a Google Sheet
    • Light Website, twitter feed
  • Lab vs Code
  • Have a promising molecule – works well in mice
    • Would probably be a patentable state
    • Not sure yet exactly how it works
  • Competition – Predictive model
    • Lots of solutions submitted, not good enough to use
    • Hopeful a model will be created
  • Tried a known-working molecule from elsewhere, but couldn’t get it to work
    • This is out in the open. Lots of discussion
  • School group able to recreate Daraprim, a high-priced US drug
  • Public Domain science is now accepted for publications
  • Need to make computers understand molecule diagrams and convert them to a representative format that can then be searched
  • Missing
    • Automated links to databases in tickets
    • Basic web page stuff, auto-porting of data, newsletter, become non-profit, stickers
    • Stuff is not folded back into the Wiki
  • OS Mycetoma – New Project
    • Fungus with no treatment
    • Working on a possible molecule to treat it
  • Some ideas on how to get products created this way to market – eg “data exclusivity”

 


,

CryptogramSkygofree: New Government Malware for Android

Kaspersky Labs is reporting on a new piece of sophisticated malware:

We observed many web landing pages that mimic the sites of mobile operators and which are used to spread the Android implants. These domains have been registered by the attackers since 2015. According to our telemetry, that was the year the distribution campaign was at its most active. The activities continue: the most recently observed domain was registered on October 31, 2017. Based on our KSN statistics, there are several infected individuals, exclusively in Italy.

Moreover, as we dived deeper into the investigation, we discovered several spyware tools for Windows that form an implant for exfiltrating sensitive data on a targeted machine. The version we found was built at the beginning of 2017, and at the moment we are not sure whether this implant has been used in the wild.

It seems to be Italian. Ars Technica speculates that it is related to Hacking Team:

That's not to say the malware is perfect. The various versions examined by Kaspersky Lab contained several artifacts that provide valuable clues about the people who may have developed and maintained the code. Traces include the domain name h3g.co, which was registered by Italian IT firm Negg International. Negg officials didn't respond to an email requesting comment for this post. The malware may be filling a void left after the epic hack in 2015 of Hacking Team, another Italy-based developer of spyware.

BoingBoing post.

Cory DoctorowMy keynote from ConveyUX 2017: “I Can’t Let You Do That, Dave.”

“The Internet’s broken and that’s bad news, because everything we do today involves the Internet and everything we’ll do tomorrow will require it. But governments and corporations see the net, variously, as a perfect surveillance tool, a perfect pornography distribution tool, or a perfect video on demand tool—not as the nervous system of the 21st century. Time’s running out. Architecture is politics. The changes we’re making to the net today will prefigure the future our children and their children will thrive in—or suffer under.”

—Cory Doctorow

ConveyUX is pleased to feature author and activist Cory Doctorow to close out our 2017 event. Cory’s body of work includes fascinating science fiction and engaging non-fiction about the relationship between society and technology. His most recent book is Information Doesn’t Want to be Free: Laws for the Internet Age. Cory will delve into some of the issues expressed in that book and talk about issues that affect all of us now and in the future. Cory will be on hand for Q&A and a post-session book signing.

CryptogramDark Caracal: Global Espionage Malware from Lebanon

The EFF and Lookout are reporting on a new piece of spyware operating out of Lebanon. It primarily targets mobile devices compromised via fake versions of secure messaging clients like Signal and WhatsApp.

From the Lookout announcement:

Dark Caracal has operated a series of multi-platform campaigns starting from at least January 2012, according to our research. The campaigns span across 21+ countries and thousands of victims. Types of data stolen include documents, call records, audio recordings, secure messaging client content, contact information, text messages, photos, and account data. We believe this actor is operating their campaigns from a building belonging to the Lebanese General Security Directorate (GDGS) in Beirut.

It looks like a complex infrastructure that's been well-developed, and continually upgraded and maintained. It appears that a cyberweapons arms manufacturer is selling this tool to different countries. From the full report:

Dark Caracal is using the same infrastructure as was previously seen in the Operation Manul campaign, which targeted journalists, lawyers, and dissidents critical of the government of Kazakhstan.

There's a lot in the full report. It's worth reading.

Three news articles.

Worse Than FailureAlien Code Reuse

“Probably the best thing to do is try and reorganize the project some,” said Tim, “Alien”’s new boss. “It’s a bit of a mess, so a little refactoring will help you understand how the code all fits together.”

“Alien” grabbed the code from git and started walking through it. As promised, it was a bit of a mess, but partially that mess came from their business needs. There was a bunch of common functionality in a Common module, but for each region they did business in (Asia, North America, Europe, etc.) there was a region-specific deployable, each in its own module. Each region had its own build target that would include the Common module as part of the build process.

The region-specific modules were vaguely organized into sub-modules, and that’s where “Alien” settled in to start reorganizing. Since Asia was the largest, most complicated module, they started there, on a sub-module called InventoryManagement. They moved some files around, set up the module and sub-modules in Maven, and then rebuilt.

The Common library failed to build. This gave “Alien” some pause, as they hadn’t touched anything pertaining to the Common project. Specifically, Common failed to build because it was looking for some files in the Asia.InventoryManagement sub-module. Cue the dive into the error trace and the vagaries of the build process. Was there a dependency between Common and Asia that had gone unnoticed? No. Was there a build-order issue? No. Was Maven just being… well, Maven? Yes, but that wasn’t the problem.

After hunting around through all the obvious places, “Alien” eventually ran an ls -al.

~/messy-app/base/Common/src/com/mycompany > ls -al
lrwxrwxrwx 1 alien  alien    39 Jan  4 19:10 InventoryManagement -> ../../../../../Asia/InventoryManagement/src/com/mycompany/IM/
drwxr-x--- 3 alien  alien  4096 Jan  4 19:10 core/

Yes, that is a symbolic link. A long-ago predecessor discovered that the Asia.InventoryManagement sub-module contained some code that was useful across all modules. Actually moving that code into Common would have involved refactoring Asia, which was the largest, most complicated module. Presumably, that sounded like work, so instead they just added a sym-link. The files actually lived in Asia, but were compiled into Common.

“Alien” writes, “This is the first time in my over–20-year working life I see people reuse source code like this.”

They fixed this, and then went hunting, only to find a dozen more examples of this kind of code “reuse”.
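The idiomatic fix, sketched below with hypothetical Maven coordinates, is the refactoring the predecessor dodged: move the shared sources into Common and have each regional module declare an explicit dependency in its POM, so Maven enforces the build order instead of a symlink hiding it.

<dependency>
    <!-- Asia (and every other region) depends on Common explicitly. -->
    <groupId>com.mycompany</groupId>
    <artifactId>common</artifactId>
    <version>${project.version}</version>
</dependency>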


Planet Linux AustraliaJames Morris: LCA 2018 Kernel Miniconf – SELinux Namespacing Slides

I gave a short talk on SELinux namespacing today at the Linux.conf.au Kernel Miniconf in Sydney — the slides from the talk are here: http://namei.org/presentations/selinux_namespacing_lca2018.pdf

This is a work in progress to which I’ve been contributing, following on from initial discussions at Linux Plumbers 2017.

In brief, there’s a growing need to be able to provide SELinux confinement within containers: typically, SELinux appears disabled within a container on Fedora-based systems, as a workaround for a lack of container support.  Underlying this is a requirement to provide per-namespace SELinux instances,  where each container has its own SELinux policy and private kernel SELinux APIs.

A prototype for SELinux namespacing was developed by Stephen Smalley, who released the code via https://github.com/stephensmalley/selinux-kernel/tree/selinuxns.  There were and still are many TODO items.  I’ve since been working on providing namespacing support to on-disk inode labels, which are represented by security xattrs.  See the v0.2 patch post for more details.

Much of this work will be of interest to other LSMs such as Smack, and many architectural and technical issues remain to be solved.  For those interested in this work, please see the slides, which include a couple of overflow pages detailing some known but as yet unsolved issues (supplied by Stephen Smalley).

I anticipate discussions on this and related topics (LSM stacking, core namespaces) later in the year at Plumbers and the Linux Security Summit(s), at least.

The session was live streamed — I gather a standalone video will be available soon!

ETA: the video is up!

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 3 – Developers, Developers Miniconf

Beyond Web 2.0 – Russell Keith-Magee

  • Django guy
  • Back in 2005 when Django first came out
    • Web was fairly simple, click something and something happened
    • model, views, templates, forms, url routing
  • The web c 2016
    • Rich client
    • API
    • mobile clients, native apps
    • realtime channels
  • Rich client frameworks
    • A response to the increased complexity that is required
    • Complex client-side and complex server-side code
  • Isomorphic Javascript development
    • Same code on both client and server
    • Only really works with JavaScript
    • There are hacks to make it work with other languages, but they’re not great
  • Isomorphic JavaScript development
    • Requirements
    • Need something in-between server and browser
    • Was once done with Java based web clients
    • model, view, controller
  • API-first development
  • How does it work with high-latency or no-connection?
  • Part of the controller and some of the model needed in the client
    • If you have Python on the server you need Python on the client
    • brython, skulpt, pypy.js
    • <script type="text/python">
    • Note: this is not Python compiled into JavaScript – Python itself runs in the browser
    • Need to download the full Python interpreter though (500 KB – 15 MB)
    • Fairly fast
  • Do we need a full python interpreter?
    • Maybe something just to run the bytecode
    • Batavia
    • Javascript implementation of python virtual machine
    • 10KB
    • Downside – slower than CPython on the same machine
  • WASM
    • Like assembly but for the web
    • Benefits from 70 years of experience with assembly languages
    • Close to CPython speed
    • But
      • Not quite in browsers yet
      • No garbage collection
      • Cannot manipulate the DOM
      • But both are coming soon
  • Example: http://bit.ly/covered-in-bees
  • But “possible isn’t enough”
  • pybee.org
  • pybee.org/bee/join

Using “old skool” Free tools to easily publish API documentation – Alec Clews

  • https://github.com/alecthegeek/doc-api-old-skool
  • Your API is successful if people are using it
  • High Quality and easy to use
  • Provide great docs (might cut down on support tickets)
  • Who are you writing for?
    • Might not have English as their first language
    • New to the API
    • Might have different tech expertise (different languages)
    • Different tooling
  • Can be hard work
  • Make better docs
    • Use diagrams
    • Show real code (complete and working)
  • Keep your sentences simple
  • Keep the docs current
  • Treat documentation like code
    • Fix bugs
    • add features
    • refactor
    • automatic builds
    • Cross platform support
    • “Everything” is text and under version control
  • Demo using pandoc (see the sketch after these notes)
  • Tools
  • pandoc, plantuml, Graphviz, M4, make, bash/sed/python/etc
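As a sketch of the docs-as-code workflow with the tools named above (file names are hypothetical), two Makefile rules are enough to rebuild the published page whenever the Markdown source or a PlantUML diagram changes:

# Rebuild the published docs whenever a source file changes.
api.html: api.md sequence.png
	pandoc --standalone -o api.html api.md

# Render the PlantUML diagram referenced by the Markdown.
sequence.png: sequence.puml
	plantuml -tpng sequence.puml

Everything is text, so it all goes under version control, and the same make run works both locally and in an automated build.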

 

Lightning Talks

  • Nic – Alt attribute
    • Needs to be added to images
    • Alts are often missing when images are used as links
    • http://bit.ly/Nic-slides
  • Vaibhav Sager – Travis-CI
    • Builds codes
    • Can build websites
    • Uses it to build his resume
    • Build presentations
  • Steve Ellis
    • Openshift Origin Demo
  • Alec Clews
    • Python vs C vs PHP vs Java vs Go for a small case study
    • Implemented a simple XML-RPC client in 5 languages
    • Python and Go were straightforward; each had one simple trick (40–50 lines)
    • C was 100 lines and a lot harder – conversions etc were all manual
    • PHP wasn’t too hard; easier in modern than in older PHP
  • Daurn
    • Lua
    • Fengari.io – Lua in the browser
  • Alistair
    • How not to Docker (don’t trust the Internet)
    • Don’t run privileged
    • Don’t expose your Docker socket
    • Don’t use host network mode
    • Know where your code is FROM
    • Make sure the kernel on your host is secure
  • Daniel
    • Put a proxy in front of the Docker socket
    • You can use it to limit what unprivileged users with access to the Docker socket can do

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 2

Manage all your tasks with TaskWarrior – Paul ‘@pjf’ Fenwick

  • Lots of task management software out there
    • Tried lots
    • Doesn’t like proprietary ones, as he’s unable to add the features he wants
    • Likes command line
  • Disclaimer: “Most systems do not work for most people”
  • TaskWarrior
    • Lots of features
    • Learning cliff

Intro to TaskWarrior

  • Command line
  • Simple level can be just a todo list
  • Can add tags
    • unstructured many to many
    • Added by just putting “+whatever” on the command
    • Great for searching
    • Can group all people or all types of jobs together
  • Meta Tags
    • Automatic date-related tags (eg due this week or today)
  • Project
    • A bunch of tasks
    • Can be strung together
    • eg Travel project, projects for each trip inside them
  • Contexts (show only some projects and tasks)
    • Work tasks
    • Tasks for just a client
    • Home stuff
  • Annotation (Taking notes)
    • $ task 31 annotate “extra stuff”
    • Has an automatic timestamp
    • Shown by default, or you can show just a count of them
  • Tasks associated with dates
    • “wait”
    • Don’t show a task until a date (approximately)
    • Hide a task for an amount of time
    • A scheduled task’s urgency is boosted at a specific date
  • Until
    • delete a task after a certain date
  • Relative to other tasks
    • eg book flights 30 days before a conference
    • Good for scripting: create a whole bunch of related tasks for a project (see the example after these notes)
  • due dates
    • All sorts of things (see above) give tasks higher priority
    • Tasks can be manually changed
  • Tools and plugins
    • Taskopen – Opens resources in annotations (eg website, editor)
  • Working with others
    • Bugwarrior – interfaces with GitHub, Trello, Gmail, Jira, Trac, Bugzilla and lots of other things
    • Lots of settings
    • Keeps all in sync
  • Lots of extra stuff
    • Paul updates his shell prompt to remind him things are busy
  • Also has
    • Graphical reports: burndown, calendar
    • Hooks: Eg hooks to run all sort of stuff
    • Online Sync
    • Android client
    • Web client
  • Reminder: it has a steep learning curve.
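A hypothetical session showing how the pieces above combine on the command line (task numbers, dates and project names are made up):

$ task add "Book flights" project:lca2019 +travel due:2019-01-21
$ task add "Arrange visa" project:lca2019 +travel due:2019-01-21 wait:due-30d
$ task list +travel
$ task 1 annotate "booked with points"
$ task 1 done

The second task stays hidden until 30 days before its due date, which is the “wait” behaviour described above.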

Love thy future self: making your systems ops-friendly – Matt Palmer

  • Instrumentation
  • Instrumenting incoming requests
    • Count of the total number of requests (broken down by requestor)
    • Count of responses (broken down by success/error)
    • How long they took (broken down by success/error)
    • How many right now
  • Gives the number of in-progress requests, average time, etc (see the sketch after these notes)
  • Instrumenting outgoing requests
    • For each downstream component
    • Number of requests sent
    • How many responses we’ve received (broken down by success/error)
    • How long it took to get the response (broken down by success/error)
    • How many right now
  • Gives you
    • incoming/outgoing ratio
    • error rate = problem is downstream
  • Logs
    • The cost of logging tends to be more than instrumentation
  • Three Log priorities
    • Error
      • Need a full stack trace
      • Add info don’t replace it
      • Capture all the relevant variables
      • Structure
    • Information
      • Startup messages
      • Basic request info
      • Sampling
    • Debug
      • printf debugging at webscale
      • tag with module/method
      • unique id for each request
      • late-bind log data if possible.
      • Allow selective activation at runtime (feature flag, special url, signals)
    • Summary
      • Visibility required
      • Fault isolation

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 1 – Kernel Miniconf

Look out for what’s in the security pipeline – Casey Schaufler

Old Protocols

  • SELinux
    • Not much changing
  • Smack
    • Network configuration improvements and catch-up with how the netlabel code wants things to be done
  • AppArmor
    • Labeled objects
    • Networking
    • Policy stacking

New Security Modules

  • Some people think existing security modules don’t work well with what they are doing
  • Landlock
    • eBPF extension to SECMARK
    • Kills processes when they go outside of what they should be doing
  • PTAGS
    • General purpose process tags
    • For application use (the app can decide what it wants based on tags, rather than something external to the process enforcing things)
  • HardChroot
    • Limits on chroot jail
    • mount restrictions
  • Safename
    • Prevents creation of unsafe file names
    • start, middle or end characters
  • SimpleFlow
    • Tracks tainted data

Security Module Stacking

  • Problems with incompatibility of module labeling
  • People want different security policy and mechanism in containers than from the base OS
  • Netfilter problems between Smack and AppArmor

Container

  • Containers are a little bit undefined right now. Not a kernel construct
  • But while not kernel constructs, need to work with and support them

Hardening

  • Printing pointers (eg in syslog)
  • Usercopy

 


,

Planet Linux AustraliaBen Martin: 4cm thick wood cnc project: shelf

The lighter wood is about 4cm thick. Both of the sides are cut from a single plank of timber which left the feet with a slight weak point at the back. Given a larger bit of timber I would have tapered the legs outward from the back more gradually. But the design is restricted by the timber at hand.


The shelves are plywood, which turned out fairly well after a few coats of poly. I knocked the extremely sharp edges off the ply so it hurts a little rather than a lot if you accidentally poke an edge. This is a mixed machine-and-human build: the back of the plywood that meets the uprights was knocked off using a bandsaw.

Being able to CNC thick timber like this opens up bolder designs. Currently I have to use a 1/2 inch bit to get this reach. Stay tuned for more CNC timber fun!


,

Don MartiMore brand safety bullshit

There's enough bullshit on the Internet already, but I'm afraid I'm going to quote some more. This time from Ilyse Liffreing at IBM.

The reality is none of us can say with certainty that anywhere in the world, we are [brand] safe. Look what just happened with YouTube. They are working on fixing it, but even Facebook and Google themselves have said there’s not much they can do about it. I mean, it’s hard. It’s not black and white. We are putting a lot of money in it, and pull back on channels where we have concerns. We’ve had good talks with the YouTube teams.

Bullshit.

One important part of this decision is black and white.

Either you give money to Nazis.

Or you don't give money to Nazis.

If Nazis are better at "programmatic" than the resting-and-vesting chill bros at the programmatic ad firms (and, face it, Nazis kick ass at programmatic), then the choice to spend ad money in a we're-kind-of-not-sure-if-this-goes-to-Nazis-or-not way is a choice that puts your brand on the wrong side of a black and white line.

There are plenty of Nazi-free places for brands to run ads. They might not be the cheapest. But I know which side of the line I buy from.

,

CryptogramFriday Squid Blogging: Te Papa Colossal Squid Exhibition Is Being Renovated

The New Zealand home of the colossal squid exhibit is being renovated.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSecurity Breaches Don't Affect Stock Price

Interesting research: "Long-term market implications of data breaches, not," by Russell Lange and Eric W. Burger.

Abstract: This report assesses the impact disclosure of data breaches has on the total returns and volatility of the affected companies' stock, with a focus on the results relative to the performance of the firms' peer industries, as represented through selected indices rather than the market as a whole. Financial performance is considered over a range of dates from 3 days post-breach through 6 months post-breach, in order to provide a longer-term perspective on the impact of the breach announcement.

Key findings:

  • While the difference in stock price between the sampled breached companies and their peers was negative (1.13%) in the first 3 days following announcement of a breach, by the 14th day the return difference had rebounded to + 0.05%, and on average remained positive through the period assessed.

  • For the differences in the breached companies' betas and the beta of their peer sets, the differences in the means of 8 months pre-breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.

  • For the differences in the breached companies' beta correlations against the peer indices pre- and post-breach, the difference in the means of the rolling 60 day correlation 8 months pre- breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.

  • In regression analysis, use of the number of accessed records, date, data sensitivity, and malicious versus accidental leak as variables failed to yield an R2 greater than 16.15% for response variables of 3, 14, 60, and 90 day return differential, excess beta differential, and rolling beta correlation differential, indicating that the financial impact on breached companies was highly idiosyncratic.

  • Based on returns, the most impacted industries at the 3 day post-breach date were U.S. Financial Services, Transportation, and Global Telecom. At the 90 day post-breach date, the three most impacted industries were U.S. Financial Services, U.S. Healthcare, and Global Telecom.

The market isn't going to fix this. If we want better security, we need to regulate the market.

Note: The article is behind a paywall. An older version is here. A similar article is here.

Worse Than FailureError'd: Alphabetical Soup

"I appreciate that TIAA doesn't want to fully recognize that the country once known as Burma now calls itself Myanmar, but I don't think that this is the way to handle it," Bruce R. writes.

 

"MSI Installed an update - but I wonder what else it decided to update in the process? The status bar just kept going and going..." writes Jon T.

 

Paul J. wrote, "Apparently my occupation could be 'All Other Persons' on this credit card application!"

 

Geoff wrote, "So I need to commit the changes I didn't make, and my options are 'don't commit' or 'don't commit'?"

 

David writes, "This was after a 15 minute period where I watched a timer spin frantically."

 

"It's as if DealeXtreme says 'three stars, I think you meant to say FIVE stars'," writes Henry N.

 


,

Sociological ImagesBros and Beer Snobs

The rise of craft beer in the United States gives us more options than ever at happy hour. Choices in beer are closely tied to social class, and the market often veers into the world of pointlessly gendered products. Classic work in sociology has long studied how people use different cultural tastes to signal social status, but where once very particular tastes showed membership in the upper class—like a preference for fine wine and classical music—a world with more options offers status to people who consume a little bit of everything.

Photo Credit: Brian Gonzalez (Flickr CC)

But who gets to be an omnivore in the beer world? New research published in Social Currents by Helana Darwin shows how the new culture of craft beer still leans on old assumptions about gender and social status. In 2014, Darwin collected posts using gendered language from fifty beer blogs. She then visited four craft beer bars around New York City, surveying 93 patrons about the kinds of beer they would expect men and women to consume. Together, the results confirmed that customers tend to define “feminine” beer as light and fruity and “masculine” beer as strong, heavy, and darker.

Two interesting findings about what people do with these assumptions stand out. First, patrons admired women who drank masculine beer, but looked down on those who stuck to the feminine choices. Men, however, could have it both ways. Patrons described their choice to drink feminine beer as open-mindedness—the mark of a beer geek who could enjoy everything. Gender determined who got “credit” for having a broad range of taste.

Second, just like other exclusive markers of social status, the India Pale Ale held a hallowed place in craft brew culture to signify a select group of drinkers. Just like fancy wine, Darwin writes,

IPA constitutes an elite preference precisely because it is an acquired taste…inaccessible to those who lack the time, money, and desire to cultivate an appreciation for the taste.

Sociology can get a bad rap for being a buzzkill, and, if you’re going to partake, you should drink whatever you like. But this research provides an important look at how we build big assumptions about people into judgments about the smallest choices.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

TEDNew clues about the most mysterious star in the universe, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

New clues about the most mysterious star in the universe. KIC 8462852 (often called “Tabby’s star,” after the astronomer Tabetha Boyajian, who led the first study of the star) intermittently dims as much as 22% and then brightens again, for a reason no one has yet quite figured out. This bizarre occurrence led astronomers to propose over a dozen theories for why the star might be dimming, including the fringe theory that it was caused by an alien civilization harvesting the star’s energy. Now, new data shows that the dimming isn’t fully opaque; certain colors of light are blocked more than others. This suggests that what’s causing the star to dim is dust. After all, if an opaque object — like a planet or alien megastructure — were passing in front of the star, all of the light would be blocked equally. Tabby’s star is due to become visible again in late February or early March of 2018. (Watch Boyajian’s TED Talk)

TED’s new video series celebrates the genius design of everyday objects. What do the hoodie, the London Tube Map, the hyperlink, and the button have in common? They’re everyday objects, often overlooked, that have profoundly influenced the world around us. Each 3- to 4-minute episode of TED’s original video series Small Thing Big Idea celebrates one of these objects, with a well-known name in design explaining what exactly makes it so great. First up is Michael Bierut on the London Tube Map. (Watch the first episode here and tune in weekly on Tuesday for more.)

The science of black holes. In the new PBS special Black Hole Apocalypse, astrophysicist Janna Levin explores the science of black holes, what they are, why they are so powerful and destructive, and what they might tell us about the very origin of our existence. Dubbing them the world’s greatest mystery, Levin and her fellow scientists, including astronomer Andrea Ghez and experimental physicist Rainer Weiss, embark on a journey to portray the magnitude and importance of these voids that were long left unexplored and unexplained. (Watch Levin’s TED Talk, Ghez’s TED Talk, and read Weiss’ Ideas piece.)

An organized crime thriller with non-fiction roots. McMafia, a television show starring James Norton, premiered in the UK in early January. The show is a fictionalized account of Misha Glenny’s 2008 non-fiction book of the same name. The show focuses on Alex Goldman, the son of an exiled Mafia boss who wants to put his family’s history behind him. Unfortunately, a murder foils his plans and to protect his family, he must face up to various international crime syndicates. (Watch Glenny’s TED Talk)

Inside the African-American anti-abortion movement. In her new documentary for PBS’ Frontline, Yoruba Richen examines the complexities of the abortion debate as it relates to the US’ racial history. Richen speaks with African-American activists on both sides of the debate, as her short doc follows a group of anti-abortion activists working in the black community. (Watch Richen’s TED Talk.)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.