Planet SAGE-AU

September 30, 2013

Tom Limoncelli: Who are all these people?

LJ has become enough of a wasteland that I only check it once a week. Today I realized that I no longer remember half the real names of the people I read.

September 23, 2013

Russell Coker: Hive Bluetooth Stereo Speakers

picture of Hive bluetooth speakers

I’ve just been given a set of Hive Bluetooth speakers by MobileZap (see this link for all their Bluetooth speakers) [1].

The speakers charge via a micro-USB cable, so I started charging them in my car immediately after collecting them. To connect them to a phone or other Bluetooth device you just press the Bluetooth button on top and set the phone to scan for visible devices; they identify themselves as “Hive”, and after that they just work. My first test was playing Ingress, and the sound quality was impressive. I had thought that the Ingress recommendation to use headphones was due to the risk of annoying other people or alerting other players, but the sound is simply too good for the internal speakers of a phone to do it justice.

After getting home I did some tests listening to music. It didn’t work so well for watching music videos, as the sound lagged too far behind the video, but the audio quality was very good. I listened to “Vow” by Garbage (a good benchmark for stereo sound) and even though the Hive speakers are only 16.5cm wide I could still notice the stereo effect from about 1.5m away. The audio quality didn’t compare well with my Bose QC-15 headphones, but for affordable, portable speakers it was quite good and an obvious improvement over the built-in speakers of any phone I’ve used.

According to the Bluetooth Wikipedia page the range of a class 2 device is 10m and the range of a class 3 device is 1m. With my Samsung Galaxy Note 2 I get a reliable range of about 5m and a mostly working range of 6 or 7m (the sound randomly drops out and gets choppy). Other phones might support a longer range if they have a higher transmission power (either class 1 or closer to the limits of class 2) and a more sensitive receiver, but it doesn’t seem likely that a 5m range is going to be a problem.

Volume and Quality

The speakers are rated at 5 Watts. When running at maximum volume (both through the phone volume setting and the volume control on the speakers) the sound is reasonably distortion free, as good as can be expected from an MP3 that wasn’t encoded at the highest quality. Sound Meter [2] reports the volume as almost 85dB on a Galaxy S3 and almost 100dB on a Galaxy Note 2; that puts it somewhere between the volume of a “busy street” or “alarm clock” and that of a “subway train” or “blow dryer”, which seems like a reasonable description. I find it very unpleasant to be within a meter of the speakers at maximum volume. With the typical amount of background noise in my house I can play music on the Hive speakers at one end of my house and hear it clearly at the other end.

These speakers are more than capable of supplying the music for any party I’d want to host or attend. I’m not really into wild parties, but I think that anyone who hosts a one-room party would be more than satisfied with the Hive speakers. Obviously the sound quality of portable speakers in a box that’s 16.5cm wide and 6cm high isn’t going to equal that of a full size set of speakers, but hardly anyone who attends a party would expect better sound quality than the Hive speakers provide. The aim of such speakers is to be portable, reasonably priced, and to provide good sound quality within those constraints; I think they meet those aims well.

Over the years there have been many occasions when I have used a Thinkpad to provide the music for a party and found it to be quite loud enough. My current Thinkpad is a T420 which can produce 75dB according to my Galaxy S3 or 85dB according to my Galaxy Note 2. So it seems that I only really need about 10dB less than the maximum volume of the Hive speakers.

Appearance

The designers obviously made an effort on the appearance of the device. They have gone with the Hive concept and used hexagons everywhere. It really looks nice.

Unfortunately when I took the photo there was some dust on it which didn’t look bad to the eye but caught the camera flash. But with a matte black device there’s always the problem of light colored dust. Even with a bit of dust it still looks great as a set of speakers, the dust just detracts from the appearance in photos.

Line In

One of the features I looked for was an audio line input so I could connect it directly to a non-Bluetooth device. I’m assuming that this feature works as it’s something that’s difficult to stuff up when designing such a product, but I haven’t got around to testing it. Once I started using the device I just found that I didn’t have a real need for that feature.

One thing that it might be useful for is PC desktop speakers that are powered by a USB port on the monitor. Currently I have a bearable (but not great) set of speakers for each PC and I don’t need to change anything. But having the option of another set of speakers is very handy in case I suddenly need to make hardware changes.

Other People’s Reviews

When I review a product I generally try and get opinions from random other people if possible. My mother and my mother-in-law were both impressed by the Hive speakers and expressed interest in owning a set. My mother-in-law was particularly interested as she uses her phone to listen to radio stations from outside Australia (I’m going to get her onto Aldi for cheap 3G data ASAP so she can listen to Internet radio when travelling).

Generally the impression that other people have of this device seems to be very positive. It seems that Bluetooth speakers aren’t just a Geek toy.

Conclusion

While I’m very impressed by this product, if I were paying for it I’m not sure whether I would choose this one or something cheaper. MobileZap offers a range of other products that look appealing at lower price points. It really depends on how much I use it.

I’ve just got a Makerbot Replicator 3D printer working and I’ve found the Hive speakers very useful for drowning out its noise. If I keep doing that sort of thing then I’ll get enough use out of the speakers to justify the price.

September 20, 2013

Russell Coker: Qi Phone Charging

I have just bought a wireless phone charging system based on the Qi Inductive Power Standard: a charging mat which connects to a standard micro-USB cable, plus charging receivers for the Samsung Galaxy Note 2 and Samsung Galaxy S3 phones I own. Both those phones have contacts in the back of their case that are designed for wireless charging, so you can install a charging receiver inside them. The receiver makes the case fit a little tight, and it is stuck to the phone battery with contact adhesive; this makes it impractical to change the battery on a phone with such a device and makes it a little more difficult to swap out a battery case. One nice feature of the Nexus 4 is that it has Qi charging built in, which saved me $19 and was also more convenient.

I believe that the main advantage of a wireless charger is avoiding the risk of damage to the phone if it’s dropped while connected to a USB charger. It allows the phone to be charged in situations where you might need to quickly or regularly unplug it to go somewhere. One example is working at an office, where I could charge my phone at my desk and quickly take it with me to a meeting (sadly I have worked in many offices that have far too many meetings). Another example is sysadmin work where I have to frequently visit devices to fix them.

The wireless charging mat that I bought from Kogan connects to a standard micro-USB plug. The good thing about this is that it’s easy to find cables and it can take power from any PC; the bad thing is that the resistance of the USB cable limits the power a phone can receive, and with wireless charging you have the cable resistance plus some power loss from the wireless transmission. After any extended period of charging the mat feels warm to the touch and the phone resting on it feels warmer than usual. The warmth is an indication of energy loss, which means longer charging times. A longer charging time isn’t necessarily a problem, as the convenience of wireless charging can allow for it, but if you want to charge your phone in a hurry before you go somewhere then wireless isn’t a good choice.

In the past I’ve discovered that the battery in a Samsung Galaxy S3 can’t be charged if the phone is at 46C [1]. 46C might seem extremely hot to people in some parts of the world (e.g. northern Europe and Canada), but the temperature in even southern parts of mainland Australia can get that hot, and it can be hotter in central and northern parts, so phone temperature can be a real issue. Currently my house is at 21C according to a digital thermometer, and the Galaxy S3 and the Note 2 are being charged from USB and report temperatures of 27C and 23C respectively. While the thermometer in my house and those in the phones probably aren’t perfectly accurate, it seems reasonable to assume that the battery of a relatively idle smart-phone that’s being charged will be a few degrees warmer than the ambient temperature. The Qi charger makes things a lot worse, as it feels warm even to the touch. So a phone on a Qi charger might be 8 degrees or more above ambient, and with a 46C charging cutoff that suggests charging could start failing once the ambient temperature passes roughly 38C. That implies that in Australian summer weather a Qi charger won’t be useful outside or in any building that lacks air-conditioning. So I think we can give up on the idea of using Qi devices to charge phones at a BBQ.

Picture of Qi charger on top of Samsung Galaxy Note 2

The final problem I have is that the Qi device is quite small; I took the above picture with my phone face-down because no part of the charger is visible in normal use. At that size I can’t just dump a phone like a Note 2 on top of the charging mat and expect it to work. I have to place it carefully so that it balances and so that the wireless receiver inside the phone lines up with the transmitter in the mat; if the phone isn’t placed correctly then the Qi mat won’t detect it and won’t supply full power to the transmitter.

Conclusion

I’m fairly disappointed in this device. The waste heat makes it unsuitable for Australian summer conditions and slows charging. The difficulty of correctly placing the phone reduces the convenience which is one of the major features.

The price was $19 for each charging card for the Note 2 and the S3 and $29 for the charging mat, a total of $67. I think it’s worth the money for me to cover the risk of one of my phones having its USB port damaged. Using a Qi charger on occasion will decrease the probability of such damage and allow a phone to keep being used after receiving certain types of damage.

The prices of those phones nowadays are $389 for a Galaxy S3 (Kogan price), $250 for a Nexus 4 (when it was on sale in the Google store), and probably about $500 for a Galaxy Note 2 (last time Kogan offered them). So by paying $67 for Qi charging I believe that I’m getting some degree of damage insurance for just over $1100 worth of phones. It seems likely that the Nexus 5 will ship with Qi charging support and that the Galaxy Note 3 will also support an optional Qi charging card (which will probably also be $19 or some similar price) so the charging mat should be useful for a long time.

While I’m disappointed I don’t regret buying the device. But I would be hesitant to recommend it to other people and definitely wouldn’t recommend it to someone who doesn’t have a significant interest and investment in smart phones.

September 18, 2013

Russell Coker: Advice for Web Editing

A client has recently asked for my advice on web editing software. There are lots of programs out there for editing web sites and according to a quick Google search there are free Windows programs to do most things that you would want to do.

The first thing I’m wondering is whether the best option is to just get a Linux PC for web editing. PCs capable of running Linux are almost free nowadays (any system that’s too slow for the last couple of Windows versions will do nicely). Some time will have to be spent learning a new OS, but someone who uses Linux for such tasks gets fully-featured programs such as the GIMP installed as part of the OS. While it’s possible to configure a Windows system to run rsync to copy a development site to the production server and to have all the useful tools installed, it’s much easier to run a few apt-get or yum commands to install the software and then copy some scripts to the user’s home directory.
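
As a rough sketch of what that setup might look like (Debian/Ubuntu package names; bluefish is just one example of a Linux web editor, and the paths and hostname are placeholders):

# install a basic web-editing toolkit
sudo apt-get install gimp bluefish rsync openssh-client

# ~/bin/push-site: copy the local development site to the production server
rsync -avz --delete ~/sites/example.com/ editor@www.example.com:/var/www/example.com/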

The next issue is whether web editing is even the best idea. Sites that are manually edited tend to be very simple, inconsistent, or both; some sort of CMS seems the better option. WordPress is a CMS that I’m very familiar with, so it’s easy for me to install for a client. While I try to resist the temptation to force my favorite software on clients, being able to install WordPress quickly saves my clients money. WordPress supports installing different themes (and has a huge repository of free ones). The content it manages consists of “pages” and “posts”, two arbitrary types of document. Supporting two types of document with a common look and feel, and common important data in a side-bar, seems to describe the core functionality used by most web sites for small businesses.

Does anyone have any other ideas for ways of solving this problem? Note that it should be reasonably easy to use for someone who hasn’t had much experience at doing such things, and it shouldn’t take much sysadmin time to install or much money to run.

September 14, 2013

Russell Coker: Is Portslave Still Useful?

Portslave is a project that was started in the ’90s to listen to a serial port and launch a PPP or SLIP session after a user has been authenticated. I describe it as a “project” rather than a “program” because a large part of its operation is via a shared object that hooks into pppd: if you connect to a Portslave terminal server and just start sending PPP data then pppd will be launched and will use the Portslave shared object for authentication. This dual mode of operation makes it a little tricky to develop and maintain; every significant update to pppd requires at minimum that Portslave be recompiled, and sometimes code changes in Portslave have been required to match changes in pppd. CHAP authentication was broken by a pppd update in 2004 and I never fixed it; in fact the last significant code change I made was to disable CHAP support, so I haven’t been actively working on it for 9 years.

I took over the Portslave project in 2000; at the time there were three separate forks with different version numbering schemes. I used the release date as the only version number for my Portslave releases so that it would be easy for users to determine which version was the latest. Getting the latest version was very important given the ties to pppd.

When I started maintaining Portslave I had a couple of clients that maintained banks of modems for ISP service and for their staff to connect to the Internet. Multi-port serial devices were quite common then, and modems were the standard way of connecting to the Internet.

Since that time all my clients have ceased running modems. Most people connect to the Internet via ADSL or Cable, and when people travel they use 3G net access via their phone, which is usually cheaper, faster, and more convenient than using a modem. The last code changes I made to Portslave were in 2010; since then I’ve made one upload to Debian for the sole purpose of compiling against a new version of pppd.

I have no real interest in maintaining Portslave: it’s no longer a fun project for me, I don’t have enough spare time for such things, and no-one is paying me to work on it.

Currently Portslave has two Debian bugs. One is from a CMU project that scans programs for crashes that might indicate security flaws; it seems that Portslave crashes if standard input isn’t a terminal device [1]. That one shouldn’t be difficult to solve.

The other Debian bug is due to Portslave being compiled against an obsolete RADIUS client library [2]. It also shouldn’t be difficult to fix; converting it to libradius1 wasn’t a difficult task, and converting from one RADIUS library to another should be even easier.

But the question is whether it’s worth bothering. Is anyone using Portslave? Is anyone prepared to maintain it in Debian? Should I just file a bug report requesting that Portslave be removed from Debian?

September 07, 2013

Russell Coker: The 2013 Federal Election

picture of rubbish left after the federal election

Seven hours ago I was handing out how-to-vote cards for the Greens at the 2013 Australian Federal election. I was hoping for either a Labor/Greens coalition or an outright Labor majority. Unfortunately we got a Liberal majority in the lower house and it looks like some extreme right wing groups may get into the senate (replacements for “Family First” – the anti-Gay party).

For some reason the polling station where I was working only had volunteers from the three major parties (Greens, Labor, and Liberal) while other polling stations in the same electorate had volunteers from smaller parties such as the Sex Party and the Socialist Alliance.

The volunteers from the Liberal party ate McDonalds outside the polling station, and afterwards McDonalds rubbish was left on the ground; the above picture isn’t particularly clear because I took it after 6PM when the polls closed. The Liberals didn’t care enough to put their rubbish in a bin, it’s an externality for them; if they get enough seats in the senate they will surely take the same approach to governing Australia. The Labor people didn’t make the effort to clean up the Liberal mess even though it wasn’t particularly difficult to do, and I think that’s the type of attitude that led to this election defeat. In this case I put the McDonalds rubbish in the bin so that when the primary school kids return on Monday their school won’t be too messy after the election. But in the case of the mess that is being made in Australian politics it will take many more Greens votes to allow us to clean it up.

August 28, 2013

Edwin Groothuis: Political spam

Over the years, I have published various email addresses from the @mavetju.org domain in my weblog, and they have been harvested by spammers. In one article I published From and Reply-To fields with an address which doesn't exist: ryopdx@mavetju.org. That article also exposed a Message-ID: UHUh4a7dWj6_CpI3ZmfY@mavetju.org.

Imagine my surprise when I found two emails from Clive Palmer, the head of the Palmer United Party, in my mailbox:

From: clive.palmer@news1.palmerunitednews.com.au
Subject: A Message From Clive Palmer
To: ryopdx@mavetju.org

and:
From: clive.palmer@news1.palmerunitednews.com.au
Subject: A Message From Clive Palmer
To: uhuh4a7dwj6_cpi3zmfy@mavetju.org

Looks like he got his list of email addresses from a dubious source!

August 11, 2013

Edwin Groothuis: KCS - How not to do it...

Riverbed Support has redesigned its Knowledge Base system to conform to the Knowledge Centered Support methodology (KCS). While the earlier system was mostly crap, the current implementation isn't really my style either...

There are several issues which need to be looked at with regard to this KCS system:

  • The features of the software used.
  • The workflow in the organization.

The old system was based on the Salesforce Solutions software. It had an HTML-only editor, and had neither revision support nor built-in lifecycle enforcement. It didn't differentiate between the various sections of a KB article (issue, error, solution, notes), and metadata was not available. The data was, however, available via the Salesforce SQL interface.

The successful workflow was as follows: a TAC engineer enters a KB article, it gets approved by staff-engineering for technical review, and it gets published and announced. Sounds simple, but it often took a long time: staff-engineering was the bottleneck, and the system didn't allow other people to help out. So a lot of good data stayed hidden and unavailable.

Come spring 2011 (late 2011, considering I'm in the Southern hemisphere) I got a call from the current publishing team asking what I saw as the requirements for a good KB system. I explained to them the system features and the workflow:

The system needs to support revisions, needs a presentation-layer independent editor (like a Wiki), needs differentiation between the KB article sections, needs proper metadata like tags and validity-dates, and needs to enforce a lifecycle. And there should be an API to the data to enable further development (announcements, RSS feeds, off-line access etc).

Besides supporting the normal stages (new, working, ready, reviewed, published, announced), the workflow should not be linked to one's position within Riverbed Support.

After months of silence, we were introduced to the new KCS system. It is based on InQuira and has nearly all the workflow features (it was missing the announcement features), but oh boy, does it suck with regard to the editor...

The editor of the InQuira system is HTML based, which means that the various copy+pastes from other sources import a large amount of HTML code with styles not supported or not desired in the article. No abstraction is possible either: the locations of images, links to other KB articles and so on are embedded in the HTML code, making it impossible to make a change to the greater system because that would mean editing hundreds of KB articles. Revisioning is based on the HTML code only, not on the metadata. The API implemented in the InQuira system is suitable for searching, but not for data extraction. The user interface only supports the manipulation of a single KB article; if you open multiple articles, all changes will be applied to the one opened last. And two people editing the same article, one in the metadata and one in the text, gives a very interesting result.

In the new workflow the technical review has been removed; articles can go directly into the published state. Fixing issues (like markup, style and content) will need to be done afterwards. However, there are no announcements either, so you will not know if anything new has been added. Or anything duplicate. Or anything incorrect...

So, is it an improvement? Yes. No. Half. It is better than what it was. But there is still a lot to be improved before I would call it a success.

August 10, 2013

Edwin Groothuis: Organising a bridge tournament in a minefield

Earlier this year Naomi obtained her bridge director status and became involved in the New South Wales Bridge Association. She is also directing at the Southside Bridge Centre and plays at the Port Hacking club. As they say, never a dull moment!

One of her ideals is to organise a bridge tournament in Southern Sydney, and now that she is involved in the NSWBA this is something which might actually run! It's called the Inaugural Sydney South Trophy Day and if it's up to her, it will be the first of a yearly series. However, doing this without stepping on anybody's toes seems to be impossible.

Issue 1: The location. Naomi wants to run it at the Southside Bridge Centre because they can provide the room, cards and accessories for a price much lower than the rate of a commercial event venue company. Sounds reasonable... Well, not if you consider that a lot of owners of other bridge clubs wouldn't mind having it at their place too. And if it can't be at their place, then preferably not at anybody else's bridge club. The fact that this might become a yearly event, and thus that the location might be somewhere else next year, doesn't seem to come up in their minds.

Issue 2: The date. Naomi wants to run it on the Labour Day public holiday on Monday 7 October. That is a day on which the Hurstville Bridge club normally runs, so the Hurstville Bridge club is angry because this is going to cut into their attendance.

The bridge community in South Sydney consists in general of old people: in the Port Hacking, Hurstville and St George clubs Naomi is the youngest by far, and no new blood is coming in. It is just a matter of time before they are gone. The Southside, Ingleburn and St George Budapest bridge clubs, however, are actively promoting bridge and getting new people involved.

So... On 7 October there will be a bridge tournament, most likely visited by people who want to promote the game of bridge and see it continue in a healthy way!

August 09, 2013

Edwin Groothuis: Different kinds of networking people

In the first thirteen years of my working life I encountered a lot of different people in the field of networking, and for some reason they were all skilled, experienced and willing to learn. They understood their stuff; in case of issues a pointer in the right direction was enough to help them out.

In my time at the Riverbed TAC I have encountered several new kinds of networking people, although I wouldn't call them all "networking" people.

  • You have the ones described above: Call in with an issue and just need a clue on where to start looking. They know their network and they know how the WAN optimization works. When looking at packets together and explaining what can be seen, they already know where to look before I can ask them "Is there anything in the network which would strip these TCP options?".
  • Outsourced network managers and network architects: oh, how I pity you. Your company got this multi-million dollar contract for the network, but WAN optimization is more than the network. It is the servers and the clients too, but because you only got the network you are not allowed to talk to the company which is managing the servers or the company which is managing end-user support. The company which is managing the servers will not tell you that they upgraded to a newer version of whatever application, and will blame the fact that half of the company can't retrieve their email anymore on you! You will never be able to optimize encrypted MAPI traffic because the other company's team which manages Active Directory doesn't want your Steelhead appliance to be able to perform delegation, let alone be configured as a read-only Domain Controller...
  • You have people who have inherited a network and just found out that besides routers, switches and firewalls there is another kind of network equipment, and they have no idea what it is... These are people who can be saved, if introduced properly. Explain what WAN optimization is, give them documentation, explain what the issue is and follow up a week later. If it works out, they will end up in the first group. If not, they will end up in the next group...
  • Outsourced NOCs. They seem to look at a screen and when there is a colour other than green they identify the brand of the device and call the TAC. They refuse to do basic troubleshooting themselves or to learn from previously opened cases, and they need to be asked for the system dump every time they open a case. You will get emails from them every eight hours requesting an update, because that is when they hand over their cases, but they don't bother to forward the next steps to the original customers because those people are not online yet... Getting information from the device in question usually goes without a problem. However, getting more information about the issue has to go through the NOC, who rephrases it and forwards it to the user, who answers the wrong question and sends it back to the NOC, who forwards it back to you...

Did I miss a category? Most likely because I have repressed them, very deep...

August 08, 2013

Tom Limoncelli: NJ Senate Primary

Here are the campaign messages I'm getting:

Holt: I'm a freakin' scientist! Vote for me! Aren't you sick of lawyers fucking things up? I was great in the house, now let me be great in the Senate!

Cory Booker: I cleaned up Newark... kind of... oh hell just vote for me because I'm so awesome on Twitter!

Pallone: FRANK LAUTENBERG FUCKING PROMISED ME HIS FUCKING SEAT BEFORE HE FUCKING DIED. HOW DARE THAT FUCKING CORY BOOKER ANNOUNCE HE WOULD RUN WITHOUT ASKING FRANK'S PERMISSION!??! FUCK HIM AND FUCK ANYONE THAT VOTES FOR HIM!

And then there's that other person that's running... but I haven't seen any campaign materials from her so I don't know what to say... I don't even know her name.


P.S. If you aren't sure, I'm voting for Rush Holt! FOR SCIENCE!!!

August 05, 2013

Edwin Groothuis: Workplace strangeness

A couple of weeks ago the people in the Sydney TAC started to work from 08:00 till 16:00 to better deal with the overlap with the US based TACs. I think this is successful, considering the number of handovers I get between 08:30 and 08:45. And considering that most days I'm home a little bit after 17:30, I think it's a win for all!

Recently we got the request to change our lunch break from 12:30 till 13:00 to 12:30 till 13:30. The reason given? Because the other TACs have an hour lunch break, so we get an hour too: only fair! However, the time to go home will be delayed by half an hour as well. In the winter that would be around 18:00, in the summer around 19:00. That is not really my idea of a good time...

However, not all the TACs get a full hour; for example the New York and Dutch TACs only get half an hour. And just like in the Sydney TAC it is usually a "grab food and go back to the desk to continue working" event. So that cannot be the real reason...

My contract states a working week of 38 hours. Because the preparation of my lunch is "put butter and honey on four slices of bread and cut them in half" and then back to the desk, I spend about 40 hours at my desk. If I have to extend my lunch by 30 minutes, I end up spending 42.5 hours per week at work while only being paid for 38. That is 10% Riverbed gets for free...

While commuting between home and work by train, good for two hours per day, I spend time on the laptop doing the things I don't get time for during office hours: playing with Wireshark, as I am one of the people at Riverbed who manage the internal Wireshark version; making tools to extract data from system dumps; and continuing on the Steelhead Troubleshooting Book. I do this mostly to keep myself sane, because if I don't make these things nobody will, but it is a huge advantage for Riverbed.

A couple of weeks ago we had a discussion about after-hours calls: helping out during the evening or on weekends if the people working then need help. During the discussion we agreed that there wouldn't be an official roster for this (thank you very much!) but also that if people needed help and we were able to help out, we would do so. Of course we don't mind that; colleagues help each other out. The number of calls expected is minimal: I have only been called twice in the last half year.

So... What does management think they can get out of this? Are they willing to risk the hidden benefits for the "gift" of half an hour extra lunch time? We will see. Currently the idea is suspended until later.

August 01, 2013

Tom Limoncelli: Vote Rush Holt! Tue, Aug 13!

Hey, my fellow NJ-ians! Would you like to finally have a senator that is a GEEK? That understands SCIENCE? That makes decisions based on SCIENCE?

I remember doing politics in Trenton years ago. Any time I met with a politician I had to explain basic facts. I was the expert trying to educate the powers that be... sometimes just explaining the terminology to get to the point where I could explain my side of the issue. It got old fast.

When I see RUSH HOLT speak about issues I care about (the climate crisis, social security, internet policy, etc.) he knows what the fuck he's talking about! It's so refreshing! It's like talking with your geeky progressive friend at a science fiction convention that reads the same blogs you do.

I know it's August... if you are on vacation, get your ass to the polls anyway and vote.

I know he's number 2 in the polls. Not if we all SHOW UP!

Go to his website. Register. Donate. DO IT TODAY!

http://www.rushholt.com/

http://www.rushholt.com/

http://www.rushholt.com/

July 15, 2013

John Slee: PCF8574-based I2C LCD backpacks

So it seems that there are a lot of these about. A while back I picked up a bunch of cheap LCD backpacks on eBay; they look like this:



Like most of these cheap LCD backpack units, it utilises a GPIO expander chip, in this case a PCF8574. Most of the other I2C backpacks for Hitachi LCDs seem to use Microchip's MCP23008, which seems to be the little brother of the MCP23017 I've been playing with lately. So I had quite a hunt for a quick and easy code library to test the backpacks with. I found one on Bitbucket; I used version 1.2.1.


I then found this page which describes a number of the cheap eBay backpacks and clones. The third code example worked fine with the above library.


Unfortunately, after reducing the test sketch to merely the below, it was still gobbling 4768 bytes of flash. Unacceptably large, I think.


(test sketch: https://gist.github.com/indigoid/6000498)

At some point in the past I'd picked up a bunch of black-on-green 16x2 LCDs for next to nothing. The total cost per LCD+backpack combo comes to easily less than $10 (AUD), including shipping. Pretty happy with that. They even work properly!

Dave Hall: Automated Security Reviews of Drupal Sites

Most experienced Drupal developers have worked on sites with security issues. Keeping modules up to date is a pretty straightforward process, especially if you pay attention to the security advisories, and Coder can find security issues in your code.

Recently I needed to perform a mass security audit to ensure a collection of sites were properly configured. After searching for and failing to find a module that would do what I needed, I decided to write my own. The Security Check module for Drupal checks basic configuration options to ensure a site's configuration doesn't have any obvious security flaws. This module isn't designed to find all flaws in your site.

Security Check works by checking a list of installed modules, settings for variables and permission assignments. I hope that others in the community will have suggestions for other generic tests that can be implemented in the module. If you have any ideas (or patches), please submit them as issues in the queue.

This module isn't a substitute for a full security audit, which can be conducted in house or by a third party such as Acquia's Professional Service team. Security Check is designed to be run as part of an automated site audit to catch low hanging fruit.

To use Security Check, install it in ~/.drush and then cd to any docroot and run "drush security-check", or if you prefer to use an alias, run "drush @example secchk" from anywhere.
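
For a mass audit along the lines described above, a small wrapper should do; this is just a sketch, and the drush site aliases are placeholders:

# one-off check from inside a docroot
cd /var/www/example.com && drush security-check

# mass audit across a collection of sites via drush aliases
for site in @client1-prod @client2-prod @client3-prod; do
    echo "== $site =="
    drush $site secchk
done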

July 09, 2013

John Slee: Van Halen @ Tokyo Superdome 2013-06-21 set list

  1. Unchained
  2. Runnin' with the devil
  3. She's the woman
  4. Show your love
  5. Tattoo
  6. Everybody wants some
  7. Somebody get me a doctor
  8. Downtown
  9. Hear about it later
  10. Pretty woman
  11. Drum & weird stuff
  12. You really got me
  13. Dance the night away
  14. ??? Synth song
  15. And the cradle will rock
  16. Hot for teacher
  17. Women in love
  18. Tora Tora???
  19. Mean streets
  20. Beautiful girls
  21. --- Dave Lee Roth (as gaijin yakuza) short film
  22. DLR guitar acoustic intro to ice cream man
  23. Panama
  24. EVH solo thing
  25. Ain't talkin' 'bout love
  26. Encore: Jump

July 05, 2013

John Slee: How I compile Stellaris projects under OSX

So I don't really know much about the TI Stellaris yet, having mainly used AVR chips to date. But I have a couple of the Stellaris Launchpads, and have managed to get the toolchain working under Mac OS X. I'm not going to repeat all the steps required for that here; you can use this page as a good starting point, noting that you can save yourself a whole bunch of time/hassle by setting up MacPorts and doing this:


sudo port install arm-none-eabi-{gcc,binutils,gdb}


In my $HOME/Code/stellaris directory I have a single Makefile called Makefile.simple, and a separate subdirectory for each project. Makefile.simple's contents look like this:


CC=arm-none-eabi-gcc
LD=arm-none-eabi-ld
OBJCOPY=arm-none-eabi-objcopy
CPU=cortex-m4
STELLARIS_TOP=$(HOME)/build/stellaris/stellaris
CFLAGS_STELLARIS=-mthumb -mcpu=$(CPU) -mfpu=fpv4-sp-d16 -mfloat-abi=softfp -ffunction-sections -fdata-sections -MD -I$(STELLARIS_TOP)
CFLAGS=-std=c99 -Wall -Os $(CFLAGS_STELLARIS)
LDFLAGS_LEFT=-T $(PROG).ld --entry ResetISR
LDFLAGS_RIGHT=--gc-sections

$(PROG).bin: $(PROG).o startup_gcc.o $(PROG).ld
	$(LD) $(LDFLAGS_LEFT) -o $(PROG).bin $(PROG).o startup_gcc.o $(LDFLAGS_RIGHT)

$(PROG).out: $(PROG).bin
	$(OBJCOPY) -O binary $(PROG).bin $(PROG).out

clean:
	$(RM) $(PROG).bin $(PROG).out $(PROG).[od] startup_gcc.[od]

flash: $(PROG).out
	lm4flash $(PROG).out

You can see in the above that STELLARIS_TOP is set to the location in which I have unpacked (actually git cloned) StellarisWare. This is provided to gcc so that it can find C include files; adjust as necessary if you are using my Makefile.

In each project subdirectory I have a few files:


$ ls
Makefile blinky.c blinky.ld startup_gcc.c

The Makefile merely includes ../Makefile.simple and specifies the name of this project:


PROG=blinky
-include ../Makefile.simple

The other files (blinky.ld and startup_gcc.c) are, at least at this learning stage, the same for every project; I just copy them over from project to project.


To build my project, I just type make:


$ make
arm-none-eabi-gcc -std=c99 -Wall -Os -mthumb -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=softfp -ffunction-sections -fdata-sections -MD -I/Users/jslee/build/stellaris/stellaris -c -o blinky.o blinky.c
arm-none-eabi-gcc -std=c99 -Wall -Os -mthumb -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=softfp -ffunction-sections -fdata-sections -MD -I/Users/jslee/build/stellaris/stellaris -c -o startup_gcc.o startup_gcc.c
arm-none-eabi-ld -T blinky.ld --entry ResetISR -o blinky.bin blinky.o startup_gcc.o --gc-sections

My project compiled and linked OK, so it is ready to send to the Launchpad with lm4flash:


$ make flash
arm-none-eabi-objcopy -O binary blinky.bin blinky.out
lm4flash blinky.out
ICDI version: 9270

Please let me know if you found this useful! I also use a similar Makefile structure for building and flashing AVR and MSP430 projects.

June 30, 2013

John Slee: MCP23017 and the Bus Pirate

So I wired up an MCP23017 GPIO expander to my Bus Pirate and couldn't understand why it wasn't acknowledging commands. Upon closer inspection of the datasheet I discovered that it expects an extra bit set in the I2C device address byte; the datasheet refers to this byte as the "Device Opcode".


Thus, if your device address lines are all tied to ground, i.e. an address of 0x20, the address to write to is 0x40.
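
The arithmetic is just the standard I2C convention: the 7-bit address shifted left one bit, with the read/write flag in bit 0. In shell:

printf '0x%02X\n' $(( (0x20 << 1) | 0 ))   # write address: 0x40
printf '0x%02X\n' $(( (0x20 << 1) | 1 ))   # read address:  0x41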


Once you've connected everything, the first step is to put your Bus Pirate in I2C mode and turn on the power:


HiZ>m4
Set speed:
1. ~5KHz
2. ~50KHz
3. ~100KHz
4. ~400KHz

(1)>1
Ready
I2C>W
POWER SUPPLIES ON

This example assumes you've tied all three address lines from the MCP23017 to ground, making its I2C address 0x20. Now we'll configure both IO banks to be all outputs; the 0:3 notation tells the Bus Pirate to write the value three times. The first 0 selects the IODIRA register address. The second and third 0s clear all the bits in the IODIRA and IODIRB registers, thus configuring them as outputs.


I2C>[0x40,0:3]
I2C START BIT
WRITE: 0x40 ACK
WRITE: 0x00 ACK 0x00 ACK 0x00 ACK
I2C STOP BIT

Finally, we turn on all eight GPIOA pins by writing 0xFF to register 0x12.


I2C>[0x40,0x12,0xFF]
I2C START BIT
WRITE: 0x40 ACK
WRITE: 0x12 ACK
WRITE: 0xFF ACK
I2C STOP BIT

As always, read the fine datasheet!

John Slee: Quickstart guide: TI MSP430 on OSX Mountain Lion

I had all of this stuff working on my other OSX laptop, but for some reason it wasn't as trivial to set up on my new laptop as it was last time around, mainly due to the USB driver. So I thought I'd save some notes for Google's (and possibly my own!) future reference.


If this all fails on Mavericks when it arrives, I'll try to remember to update it.


  1. Install Xcode (I installed the commandline tools from inside Xcode too [Xcode menu => Preferences => Downloads tab]; these are probably not actually required)
  2. Install MacPorts
  3. Install the toolchain bits:
    sudo port install msp430-{binutils,gcc,gdb,libc}
  4. Get the fixed kernel extension source code - the TI download is broken under Mountain Lion and possibly Lion as well:
    git clone https://github.com/freespace/ez430rf2500.git
  5. Follow the README.md instructions included with the ez430rf2500 source to install the driver
  6. Plug in your Launchpad
  7. If everything is working, this should get you to an mspdebug shell: sudo mspdebug rf2500
[jslee@shamata Release] $ sudo mspdebug rf2500
MSPDebug version 0.21 - debugging tool for MSP430 MCUs
Copyright (C) 2009-2012 Daniel Beer <dlbeer>
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Trying to open interface 1 on 002
Initializing FET...
FET protocol version is 30066536
Set Vcc: 3000 mV
Configured for Spy-Bi-Wire
fet: FET returned error code 4 (Could not find device or device not supported)
fet: command C_IDENT1 failed
fet: identify failed
Trying again...
Initializing FET...
FET protocol version is 30066536
Set Vcc: 3000 mV
Configured for Spy-Bi-Wire
Sending reset...
Device ID: 0xf201
Code start address: 0xf800
Code size : 2048 byte = 2 kb
RAM start address: 0x200
RAM end address: 0x27f
RAM size : 128 byte = 0 kb
Device: MSP430F2012/G2231
Number of breakpoints: 2
fet: FET returned NAK
warning: device does not support power profiling
Chip ID data: f2 01 01

Available commands:
= erase isearch opt run setwatch_w
alias exit load power save_raw simio
break fill load_raw prog set step
cgraph gdb locka read setbreak sym
delbreak help md regs setwatch verify
dis hexout mw reset setwatch_r verify_raw

Available options:
color gdb_loop iradix
fet_block_size gdbc_xfer_size quiet

Type "help <topic>" for more information.
Press Ctrl+D to quit.

(mspdebug) regs
( PC: 0ffff) ( R4: 0dfde) ( R8: 0fbef) (R12: 0ffdf)
( SP: 0ffff) ( R5: 0f613) ( R9: 07ffc) (R13: 0feff)
( SR: 00000) ( R6: 0edff) (R10: 0ffff) (R14: 07bef)
( R3: 00000) ( R7: 0fbef) (R11: 0cff7) (R15: 0f5fc)
0xffff:
0ffff: ff
(mspdebug)

If all this works, you should be able to compile stuff and use mspdebug to load it into MSP430 flash via the Launchpad.
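
As a minimal sketch of that last step (assuming the MacPorts mspgcc toolchain from above and the G2231 that mspdebug detected; blinky.c is a hypothetical source file):

# compile for the detected chip, then flash via the Launchpad's FET
msp430-gcc -mmcu=msp430g2231 -Os -o blinky.elf blinky.c
sudo mspdebug rf2500 'prog blinky.elf'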

May 20, 2013

Gavin Carr: Checking for a Dell DRAC card on Linux

Note to self: this seems to be the most reliable way of checking whether a Dell machine has a DRAC card installed:

sudo ipmitool sdr elist mcloc

If there is, you'll see some kind of DRAC card:

iDRAC6           | 00h | ok  |  7.1 | Dynamic MC @ 20h

If there isn't, you'll see only a base management controller:

BMC              | 00h | ok  |  7.1 | Dynamic MC @ 20h

You need IPMI set up for this (if you haven't done so already):

# on RHEL/CentOS etc.
yum install OpenIPMI
service ipmi start
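
To script the check across a fleet, grepping that output seems to work; a sketch, assuming every DRAC variant reports a sensor name containing "DRAC":

if sudo ipmitool sdr elist mcloc | grep -qi drac; then
    echo "DRAC present"
else
    echo "BMC only"
fi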

March 27, 2013

Robert Mibus: Moving!

I’ve been [comparatively] quiet for a while now, and I can finally announce the results of my endeavours: I’ve resigned from Internode (i.e. iiNet) in order to accept a position at Google in Sydney.

This means that Missy and I and our kids are all packing our bags and getting ready to head eastwards, looking forward to new and exciting adventures.

FAQ:

  1. When? My last week in Adelaide is the week of the 22nd of April.
  2. What will you do at Google? I’ll be a Site Reliability Engineer (which is kinda like a sysadmin).
  3. Why leave? Because it’s the right time, and I’m looking for new challenges.
  4. Why Google? It seems like a good fit for my interests and for me personally. The folk I know there are generally pretty awesome, and I intend to learn as much as I can from them.

As a final note, I’d like to thank Simon Hackett and Adam Fox for their support and encouragement over the past few years; without them, I don’t think I’d be where I am now.

(FWIW, I started coming up with a list of people I’d like to thank, but it was unmanageably long — “Thank you” to the rest of you too; you should know who you are).

March 22, 2013

Gavin Carr: Subtracting Text Files

This has bitten me a couple of times now, and each time I've had to re-google the utility and figure out the appropriate incantation. So note to self: to subtract text files use comm(1).

Input files have to be sorted, but comm accepts a - argument for stdin, so you can sort on the fly if you like.
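
With bash process substitution you don't even need the - argument; for example, to sort both inputs on the fly:

# lines unique to one.txt, without pre-sorting the input files
comm -23 <(sort one.txt) <(sort two.txt)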

I also find the -1 -2 -3 options pretty counter-intuitive, as they indicate what you want to suppress, when I seem to want to indicate what I want to select. But whatever.

Here's the cheatsheet:

FILE1=one.txt
FILE2=two.txt

# FILE1 - FILE2 (lines unique to FILE1)
comm -23 $FILE1 $FILE2

# FILE2 - FILE1 (lines unique to FILE2)
comm -13 $FILE1 $FILE2

# intersection (common lines)
comm -12 $FILE1 $FILE2

# xor (non-common lines, either FILE)
comm -3 $FILE1 $FILE2
# or without the column delimiters:
comm -3 --output-delimiter=' ' $FILE1 $FILE2 | sed 's/^ *//'

# union (all lines)
comm $FILE1 $FILE2
# or without the column delimiters:
comm --output-delimiter=' ' $FILE1 $FILE2 | sed 's/^ *//'

January 28, 2013

Philip Yarra: Word games

So The Age publishes a word game - Target - in the weekend paper. The rules are simple: you get 9 letters and have to make as many words of 4 letters or more out of them as you can, and one of the letters must be included in every word. You also have to figure out the original 9-letter word that was scrambled to provide the letters. I quite like the challenge of making words out of the letters, but I rarely figure out the 9-letter word. It frustrates me. In fact, so much that I seem to spend more time figuring out ways to write a program to solve it for me. Yeah, really.

Anyway, I couldn't be bothered doing it "properly" so I decided to do a quick command-line hack to solve it. Today's quiz had the letters H K I A R T B R M. Witness the following ugliness:



grep  '^.........$' /usr/share/dict/words|grep 'k'|grep 'r'|grep 'm'| grep 'h' |grep -E '[i]{1}'|grep -E '[a]+'|grep -v '[eou]'

There must be more elegant ways to achieve this, but still, it got the job done in a couple of minutes.
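
For the record, a marginally less ugly variant is to let awk do all the tests in one pass; this sketch still doesn't check letter multiplicity, and assumes a lower-case word list:

awk 'length($0) == 9 && /k/ && /r/ && /m/ && /h/ && /i/ && /a/ && !/[^hkiartbm]/' /usr/share/dict/words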

Oh, and the word was "birthmark" in case you wondered :-)

Edit: I solved this properly in perl today.

Philip Yarra: Word games, part two

Sometimes I wake up in the morning, and manage to get back to sleep. Other times, my brain starts ticking over, and there's no point shouting at it to go back to sleep. This morning was an example of the latter.

Specifically, I started trying to work out the nine-letter word from the Sunday Age, from memory. From there, I started thinking about this earlier bit of hackery, and how I really should have tried to solve the problem properly. So I came up with this:

#!/usr/bin/perl
# Find dictionary words that are anagrams of the word given on the command line.

my $DICT="/etc/dictionaries-common/words";
my $WORD=shift();

# Normalise the input: lower-case it and sort its letters alphabetically.
my $LC_WORD=lc($WORD);
my $SORTED_WORD=(join '', sort { $a cmp $b } split(//, $LC_WORD));
my $LENGTH=length($SORTED_WORD);

open FILE, $DICT or die $!;

while(<FILE>) {
        chomp;
        # Anagrams have identical letters once sorted, so sort each
        # candidate's letters the same way and compare.
        $CANDIDATE=(join '', sort { $a cmp $b } split(//, lc($_) ));
        next if (length($CANDIDATE) != $LENGTH);
        print ("FOUND: $_\n") if($CANDIDATE eq $SORTED_WORD);
}

close FILE;
exit 0;


Perl is definitely not my favourite language (too many magic variables for my liking!) but it's quick, and it works.

Of course, plenty of other people have already solved this issue (and gone well beyond it too, in this case) but that's not the point - it was an interesting exercise. My wife did ask "isn't that cheating?" but I kinda think if you write a program to solve the problem, it's arguably not really cheating. Right?

January 11, 2013

Philip Yarra: Radio scanners

In a diversion from the usual nerdy stuff, let's talk about radio scanners. Um, well maybe not a complete diversion, but it's not strictly a computer, so you see what I'm saying...

We live in a high fire danger area. The CFA website, fire ready app, ABC news radio... they're all good, but the information is not always timely, and there have been cases where they haven't coped under the high load of a high fire danger day. When you really want to know what's happening nearby, the CFA radio communications give you the best picture. Messages between VicFire (the dispatch centre) and the crews on the ground tell you a great deal about what's happening.

Currently we use an analog scanner - a Uniden Bearcat UBC93XLT, which has served us very well. We replaced the stock aerial with a ScanDucky and it's been terrific. Our scanner came from Dick Smith, but they don't list it anymore. They still list the 92 - similar, but without the rechargeable batteries (I always regarded the extra few dollars for the 93 well worth it - the scanner stays attached to the charger in the kitchen, and when we go mobile it's ready to rock). It looks like Dick Smith and (of all places!) OfficeWorks offer what looks like the "small" version of our scanner - this might be a good "for now" analog scanner (note: I haven't tried one of these, but some casual googling indicates this should do roughly what the 93 does - but caveat emptor - don't buy this assuming it will pick up CFA radio based on my non-recommendation - check that it supports the CFA frequencies). Our scanner's limited keypad was cryptic enough that I was forced to read the manual (I know! I hope they don't cancel my nerd badge for reading the manual... it just wasn't intuitive) so don't expect to open the box and use a scanner straight away.

Eventually CFA VicFire channels will go digital. At this stage the project is implementing digital for country areas, with outer metro to follow "eventually". Check which region you're in here. I'm told the fireground channels will stay on analog (these are the channels used for communication between the trucks working on the same incident; I can generally hear these if the incident is nearby). Anyhoo, digital: it gives the users the option to encrypt their conversations (as Police channels in metro areas already do) but I have been told by CFA sources that the digital channels will remain unencrypted, meaning we can still listen with our scanners. So I plan to upgrade to the Bearcat Digital 396 XT or similar. I assume that it will also receive analog transmissions (needed for fireground channels) but please note that I cannot confirm this until I have one in my hot little hands. This guy has lots of useful information about these units. If it supports analog, we may have a UBC93XLT for sale... if not, we'll hang on to it.

January 09, 2013

Tom Limoncelli: Tax the Rich: An animated fairy tale


"Tax the Rich: An Animated Fairy Tale" is narrated by Ed Asner, with animation by Mike Konopacki, and was written and directed by Fred Glass for the California Federation of Teachers. It's an 8 minute video about how we arrived at this moment of poorly funded public services and widening economic inequality. Things go downhill in a happy and prosperous land after the rich decide they don't want to pay taxes anymore. They tell the people that there is no alternative, but the people aren't so sure. This land bears a startling resemblance to our land. For more info, www.cft.org. © 2012 California Federation of Teachers

January 02, 2013

Dave Hall: Visualising Drupal Development History with Gource

Over the Christmas break I came across gource, a software version control visualization tool. Gource produces really nice visual representations of software projects growing. About 2 years ago David Norman produced a gource video of the development of Drupal up to the Drupal 7 release. This is pretty cool, but it only shows who committed each patch, not who contributed to it.

After some searching I found the Drupal contribution analyzer sandbox project. This module allows you to produce contributor tag clouds and code swarm videos. This was closer to what I was after, but I had to patch drupal_log_generator.py to support the gource custom log format.
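
For reference, gource's custom log format is one pipe-delimited line per file touch: a unix timestamp, the author, A/M/D for add/modify/delete, and the path. The names here are invented:

echo '1356998400|example-contributor|M|/core/modules/node/node.module' >> /tmp/commit.log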

The result is a 5:23 minute video showing the growth of Drupal.

For the first few years things are pretty consistent and easy to follow. The Drupal 8 development cycle shows how much the community of contributors has grown; towards the end of last year things look really chaotic.

To produce the video I used a clone of the Drupal 8 branch as at some time on January 1, 2013. The gource command I used was:
gource --log-format custom -i 500 -s 0.0001 -a 0.01 -r 30 \
  --title "Drupal" --highlight-users --disable-progress \
  --hide filenames -1280x720 drupal \
  --bloom-intensity 0.2 --bloom-multiplier 0.2 \
  --stop-at-end /tmp/commit.log -o - \
  | ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
    -vcodec libvpx -b 10000K ~/Videos/drupal.webm

I considered writing a script to find and download user avatars from groups.drupal.org but after reviewing the video without them I decided it would be too cluttered.

Can you find your name in the video?

Note: I gave up on trying to embed the video.

December 02, 2012

Robert Mibus: ping -f[aux flood]

I’m having trouble admitting this publicly, but I learnt something new recently about ping, of all things.

To quote from the man page of ping:

-f Flood ping. For every ECHO_REQUEST sent a period ``.'' is
   printed, while for every ECHO_REPLY received a backspace is
   printed. This provides a rapid display of how many packets are
   being dropped.

Seems reasonable enough. I mean, “flood” is pretty clear, right? Except that during troubleshooting this week I found that pinging a responding host and a non-responding host resulted in different packet rates.

Reading on in the man page, it becomes apparent why:

If interval is not given, it sets interval to zero and outputs
packets as fast as they come back or one hundred times per second,
whichever is more. Only the super-user may use this option with
zero interval.

So what you really want, if you want a consistent number of packets per second, is to use the command like this:

ping -f -i 0.01 somehost.example.net

An interval of zero is actually an interval of “anything from zero to 10 milliseconds, depending on the RTT to the remote host”.

November 22, 2012

Robert Mibus: Multiple SSH port-forwards, all in a row…

Sometimes you really need to get from point “A” to point “B”, but you can’t. Restrictive firewalls, poor change control, infrastructure you don’t own, or maybe it’s just “temporary” and you don’t want to go to all that effort for a few days.

Struggle no more!

SSH Port Forwarding Primer

Let’s just say that you have an application on server “A”, and you want it to be able to reach a web service on your local desktop. You want to open a socket listening on the Remote end, which forwards the connection back over SSH to your local machine. (Maybe a NAT gateway is in the way).

Assuming the server is called “server-a” and your web service is on port 8080, the following may well suffice:

ssh -R8080:localhost:8080 server-a.example.com

“-R” for Remote listening socket, the port number for the designated [remote, in this case] side, then a host and port on the other [local] side.

If you want a local listening socket to forward the packets securely to the remote server – maybe a firewall is in the way this time – just change the Remote for Local:

ssh -L8080:localhost:8080 server-a.example.com

Now, connecting to your desktop’s port 8080 will land on the far-end’s localhost:8080. Again, the first 8080 is for the designated side [local, this time] and the localhost:8080 for the other [remote] side.

Linking port-forwards together

Here’s a hypothetical scenario, where Server D needs to be reached by Server A. For whatever reason, no single host has connectivity the whole distance.

In short, if an SSH port forward opens a remote socket and sends it locally, then that local port can be opened by a different SSH session and forwarded elsewhere too.

You can set this example up thusly:

Diagram of a complex SSH forwarding situation

On the desktop:

ssh -R8080:localhost:8080 server-a.example.com
ssh -L8080:localhost:8080 server-b.example.com

These two together get Server A connectivity to Server B (via port 8080). Right now, it ends there, because Server B isn’t listening on port 8080.

On Server B:

ssh -L8080:server-d.example.com:8080 server-c.example.com

This time, we don’t want to forward the local socket’s connections to localhost on the remote server; we want to pass them all the way to Server D. What happens is that Server B listens on port 8080, bundles all the data up and over SSH to Server C, then Server C unpacks it all and sends it to the nominated address (server-d.example.com:8080).

Now Server A can point to its localhost:8080, and end up on server-d.example.com:8080.

Spiffy, huh!
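
For completeness, the same chain works with ssh’s -N flag (no remote shell, just the forwards) and -f (drop into the background once connected) – handy if you’d rather script this than keep three terminals open. Same hostnames as the example above; on the desktop:

ssh -f -N -R8080:localhost:8080 server-a.example.com
ssh -f -N -L8080:localhost:8080 server-b.example.com

...and on Server B:

ssh -f -N -L8080:server-d.example.com:8080 server-c.example.com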

Please don’t do this in production, though – but if you do, please don’t tell anybody I told you how! :)

You might also be interested in my other SSH port-forwarding hacky trick, where I show how to make a remote server appear local without needing proxy support in your application.

November 17, 2012

Robert MibusGNOME-Shell multi-timezone clock

As many of you know, the company I work for (Internode) was recentlyish purchased by iiNet.

iiNet’s headquarters is in Perth, which is 1.5 or 2.5 hours “behind” Adelaide time.

There’s no longer a multi-timezone clock available for the panel, so… oh well, I just wrote one myself.

Screenshot of multi-timezone clock in GNOME-Shell top panel

Right now, it only supports GMT+8 as the “remote” timezone, but it’s easy enough to change the code.

The code is up on GitHub: https://github.com/mibus/MultiClock

Thanks to Marco Dallagiacoma for writing the “Fuzzy Clock” plugin, as I used it as a basis for my own code.

November 11, 2012

Robert MibusNetWorker – Random stalling

One of the things I’ve spent a lot of time with has been EMC NetWorker (previously Legato NetWorker).

A vaguely common issue is for a process of some kind – backups, staging to tape, restores, etc. – to just stop making any new progress for no reason.

Once you’ve checked off the common reasons – like making sure you haven’t run out of disk space or usable tapes – it seems like the only option is to restart NetWorker as a whole, losing any in-progress actions (even ones that are to devices that haven’t stalled).

I suspect that random underlying I/O issues can occasionally upset it, and it doesn’t quite recover. But, whatever. How do you make it recover a single device, without restarting the whole thing?

First up, get the PID of the main nsrd process. On Solaris, ps -ef | grep nsrd; on Linux, ps auxw | grep nsrd.

Assuming the PID is 1234, you next need to run: dbgcommand -p 1234 PrintDevInfo

It should pretty quickly spit out a whole stack of debugging info to /nsr/logs/daemon.raw. It’s moderately complicated, but you should see that it’s a dump of its internal state for each device, including d_device – the *nix device or directory – and mm_number – the unique ID for the nsrmmd process for that device.

So – find the device you’re interested in, and find the mm_number for that device.

Get a list of your nsrmmd processes, e.g. ps -ef | grep nsrmmd or ps auxw | grep nsrmmd. If your mm_number is 5, then there will be a process nsrmmd -n 5.

Kill the process, and it should re-spawn by itself on further access.
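
Putting it all together, a rough sketch of the whole procedure (untested, and assuming your mm_number turns out to be 5 – check the daemon.raw output and the nsrmmd PID before killing anything):

NSRD_PID=$(pgrep -x nsrd)                # PID of the main nsrd process
dbgcommand -p "$NSRD_PID" PrintDevInfo   # dump device state to /nsr/logs/daemon.raw
less /nsr/logs/daemon.raw                # find your d_device, note its mm_number
ps -ef | grep 'nsrmmd -n 5'              # confirm the PID of that device's nsrmmd
kill 5678                                # substitute the real PID; it should re-spawn on further access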

October 12, 2012

Tom LimoncelliThis graphic updates every day.

October 10, 2012

Dave HallWhat LEGO taught me about Community Building (Drupal Version)

This is a Drupalised version of a blog post by Jive's Deirdre Walsh entitled "What LEGO taught me about Community Building".

Like most kids, I loved LEGO. I would spend hours building everything from spaceships to crazy robots (true story).

As an adult, building a community has that same sense of awesomeness.

Here is a list of the top 7 things LEGO taught me about building a quality community.

Accessibility. You can find LEGO building blocks anywhere (especially in the cup holders in my car). Social business needs to be the same. A strong community should span internally and externally, across departments, geographies, and devices.

Usability. Unlike Ikea furniture, anybody can pick up a few LEGO blocks, stick them together, and build something amazing. A good community should make it easy for members to go from a newbie to expert in record time, with engaging tutorials and introductory tours.

Fun. LEGO allows people to spend hours being creative. Communities should engage users. Every week Drupal events are held which help make this a reality.

Beneficial. LEGOs are more than just an entertaining toy. By playing with LEGOs, kids learn things like simple mechanics. The same should ring true for your community - members should learn through building and sharing. Community members should be free to run, study, redistribute, modify and copy the code.

Next Generational. LEGO has evolved its product offerings. When I had more free time I used to play around with LEGO Mindstorms NXT. This flavor of LEGO allows you to build and program robots - a big advance on the standard building blocks. A good community will also adopt next-generation technologies, such as enterprise applications, deep webservices integration, HTML5 and responsive design.

Versatile. By buying a single set of LEGOs you can make several different creations. One day, you'll build a log cabin and the next day a castle. Building a community is similar. With an investment in one strong social business platform, like Drupal, you can build a variety of vibrant communities for areas like customer support, sales and marketing, social intranet, etc.

Leader. Most boxes of LEGOs come with one of those cool little plastic people. Like those minifigs, it's key to have a community manager, who can serve as the front-person. Altimeter Research’s Jeremiah Owyang studied community manager job descriptions from 16 different organizations and found four key elements: community advocacy, brand evangelism, savvy communication skills and editorial planning, and liaising between internal decision makers and community members. In the Drupal community we don't need to link to LinkedIn profiles of people who inspire us. All the cool people have accounts on drupal.org - including Dries.

While building a community might not feel like child's play, just remember that it can be fun and the hard work will pay off in the end. Communities are real things that involve people, they are more than a website built using a CMS. As an example look at recent DrupalCons or BADCamp.

Now, if I can only get my belly to be as flat as a LEGO minifig's....

October 08, 2012

Dave HallCoder Talks Wanted for DrupalCon Sydney

One of the many hats I wear these days is Development and Coding Track Chair for DrupalCon Sydney 2013. As outlined in the track description we are planning on showcasing what is awesome today in Drupal 7 and the cool stuff that is coming in Drupal 8. Given that there are no core conversations in Sydney we are trying to put together a more intermediate-to-advanced level track. I want people to come to these sessions and go away with their heads full of ideas about what they can do better in their next project.

If you have a session that you think fits that brief then please submit something. If you want to ask me anything before submitting your session, feel free to contact me. The decision on which sessions are accepted will be made in late October / early November by the track team, the global track chairs, the content chair and myself in a collaborative decision making process. The accepted sessions will be announced on 13 November.

Although the event won't be as big as a northern hemisphere DrupalCon, it is going to be full of great people. The initial 100 early bird tickets sold out in less than 8 hours!

Please be aware that there is no financial support available for speakers, and you will be required to buy a speaker's ticket at a cost of US$165.

Submissions close at 23:59 AEST (UTC+10) on 26 October so submit a session today!

September 26, 2012

Gavin CarrExtract - some devops yang

If you're a modern sysadmin you've probably been sipping at the devops koolaid and trying out one or more of the current system configuration management tools like puppet or chef.

These tools are awesome - particularly for homogenous large-scale deployments of identical nodes.

In practice in the enterprise, though, things get messier. You can have legacy nodes that can't be puppetised due to their sensitivity and importance; nodes that are sufficiently unusual that the payoff of putting them under configuration management doesn't justify the work; or just systems over which you don't have full control.

We've been using a simple tool called extract in these kinds of environments, which pulls a given set of files from remote hosts and stores them under version control in a set of local per-host trees.

You can think of it as the yang to puppet or chef's yin - instead of pushing configs onto remote nodes, it's about pulling configs off nodes, and storing them for tracking and change control.

We've been primarily using it in a RedHat/CentOS environment, so we use it in conjunction with rpm-find-changes, which identifies all the config files under /etc that have been changed from their deployment versions, or are custom files not belonging to a package.

Extract doesn't care where its list of files to extract comes from, so it should be easily customised for other environments.

It uses a simple extract.conf shell-variable-style config file, like this:

# Where extracted files are to be stored (in per-host trees)
EXTRACT_ROOT=/data/extract

# Hosts from which to extract (space separated)
EXTRACT_HOSTS="host1 host2 host3"

# File containing list of files to extract (on the remote host, not locally)
EXTRACT_FILES_REMOTE=/var/cache/rpm-find-changes/etc.txt

Extract also allows arbitrary scripts to be called at the beginning (setup) and end (teardown) of a run, and before and/or after each host. Extract ships with some example shell scripts for loading ssh keys, and checking extracted changes into git or bzr. These hooks are also configured in the extract.conf config, e.g.:

# Pre-process scripts
# PRE_EXTRACT_SETUP - run once only, before any extracts are done
PRE_EXTRACT_SETUP=pre_extract_load_ssh_keys
# PRE_EXTRACT_HOST - run before each host extraction
#PRE_EXTRACT_HOST=pre_extract_noop

# Post process scripts
# POST_EXTRACT_HOST - run after each host extraction
POST_EXTRACT_HOST=post_extract_git
# POST_EXTRACT_TEARDOWN - run once only, after all extracts are completed
#POST_EXTRACT_TEARDOWN=post_extract_touch
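
The bundled example scripts are the authoritative reference for what a hook should look like, but as a rough illustration only (I'm guessing at the interface here – assuming the hook runs with extract.conf sourced and the current host name in $HOST), a post-host git hook might be as simple as:

#!/bin/sh
# Hypothetical post_extract_git hook: commit whatever changed for this host
cd "$EXTRACT_ROOT/$HOST" || exit 1
git add -A
# Commit only if something was actually staged
git diff --cached --quiet || git commit -m "extract: $HOST $(date '+%Y-%m-%d %H:%M')"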

Extract is available on github, and packages for RHEL/CentOS 5 and 6 are available from my repository.

Feedback/pull requests always welcome.

September 19, 2012

Philip YarraNanoBSD 6.3 to 8.0 - what do I need to change?

Upgrading some NanoBSD boxes from FreeBSD 6.3 to 8.0, and adding BGP functionality along the way. A couple of config changes are required:

Enable BGP

  1. add /cfg/local/bgpd.conf and edit to suit (hint: AS and IP addresses ought to match what is assigned for the site)
  2. add openbgpd_enable="YES" to /cfg/rc.conf
  3. add _bgpd user account to /etc/passwd and /etc/group like this:

pw useradd "_bgpd" -u 130 -c "BGP Daemon" -d /var/empty -s /sbin/nologin
mount /cfg
cp /etc/group /cfg
cp /etc/passwd /cfg
cp /etc/pwd.db /cfg
cp /etc/spwd.db /cfg
mount -u -o ro / 

NTPD changes

On boot, ntpd fails to start with errors such as:

Starting ntpd.
ERROR:  only one configfile option allowed
ntpd - NTP daemon program - Ver. 4.2.4p5


In /cfg/rc.conf, change this:

ntpd_enable="YES"
ntpd_flags="-g -p /var/run/ntpd.pid -f /etc/ntpd.drift -c /etc/ntp.conf -t 3"


to this:

ntpd_enable="YES"
ntpd_config="/etc/ntp.conf"      # ntpd(8) configuration file
ntpd_flags="-p /var/run/ntpd.pid -f /etc/ntpd.drift -t 3"

Wireless access point

Change ath0 interface config from this:

ifconfig_ath0="ssid bsdbox media autoselect mode 11g mediaopt hostap up"

... to this...

wlans_ath0="wlan0"
create_args_wlan0="wlanmode hostap"
ifconfig_wlan0="ssid bsdbox media autoselect mode 11g mediaopt hostap up"


Edit /cfg/hostapd.conf and change interface=ath0 to interface=wlan0

Edit /cfg/rc.conf and change the bridge members so that ath0 is removed, and wlan0 added

Other stuff

  1. add "kern.maxfilesperproc=4096" to /cfg/sysctl.conf so that the newer version of bind can start
Also, you can ignore all this stuff in dmesg:
FAILURE - READ_DMA status=51<READY> error=10<NID_NOT_FOUND> LBA=15625215
ad0: FAILURE - READ_DMA status=51<READY> error=10<NID_NOT_FOUND> LBA=15625215


Apparently it's just FreeBSD's way of telling you to relax and have fun :-) There's pfSense info on it over here

You can also relax about this error:
Starting named.
named[1302]: the working directory is not writable

That's because at boot, /etc/namedb/ isn't writable, but it becomes so when the mfs (RAM disk) is mounted there. I think...

DMA and Ultra DMA

Transcend UDMA 16GB CF cards do not work reliably - they will not boot on power-up (this is in a Soekris net5501). I can get them to boot by letting the boot fail, dropping into COMBios over serial and issuing a reboot; then and only then will it boot. I suspect this is related to the DMA levels supported by the net5501. Obviously not reliable enough for our purposes, so I have ordered some DMA66 SanDisk 4GB cards.

September 12, 2012

Dave HallSwitching Installation Profiles on Existing Drupal Sites

In my last blog post I outlined how to use per project installation profiles. If you read that post and want to use installation profiles to take advantage of site wide content changes and centralised dependency management, this post will show you how to do it quickly and easily.

The easiest way to switch installation profiles is using the command line with drush. The following command will do it for you:

$ drush vset --exact -y install_profile my_profile

An alternative way of doing this is by directly manipulating the database. You can run the following SQL on your Drupal database to switch installation profiles:


UPDATE variable SET value = 'my_profile' WHERE name = 'install_profile';
-- Clear the cache using MySQL only syntax, when DB caching is used.
TRUNCATE cache;

Before you switch installation profiles, you should check that you have all the required modules enabled in your site. If you don't have all of the modules required by the new installation profile enabled, you are likely to have issues. The best way to ensure you have all the dependencies enabled is to run the following one liner:

drush en $(grep dependencies /path/to/my-site/profiles/my_profile/my_profile.info | sed -n 's/dependencies\[\] *= *\(.*\)/\1/p')

Even though it is pretty easy to switch installation profiles I would recommend starting your project with a project specific installation profile.

Edit: Fellow Technocrat, Jaime Schmidt picked up a missing step in the instructions above. You need to enable the installation profile in the system table. The easiest way to do that is with this drush one liner:

echo "UPDATE system SET schema_version = 0 WHERE name = 'my_profile';" | drush sqlc && drush cc all
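
Whichever method you use, you can confirm the new profile took effect with:

drush vget install_profile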

August 04, 2012

Tim KentA potential backup solution for small sites running VMware ESXi

Today, external consumer USB3 and/or eSATA drives can be a great low-cost alternative to tape. For most small outfits they fulfil the speed and capacity requirements for nightly backups. I use the same rotation scheme with these drives as I did with tape, with great success.

Unfortunately these drives can't easily be utilised by those running virtualised servers on top of ESXi. VMware offers SCSI pass-through as a supported option; however, tape drives and media are quite expensive by comparison.

VMware offered a glimpse of hope with the USB pass-through introduced in ESXi 4.1, but it proved to have extremely poor throughput (~7MB/sec), so it can realistically only shift a couple of hundred GB at most per night.

I have trialled some USB over IP devices; the best of these can lift the throughput from ~7MB/sec to ~25MB/sec, but the drivers can be problematic and are often only available for Windows platforms.

This got me thinking about presenting a USB3 controller via ESXi's VMDirectPath I/O feature.

VMDirectPath I/O requires a CPU and motherboard capable of Intel Virtualization Technology for Directed I/O (VT-d) or AMD IP Virtualization Technology (IOMMU). It also requires that your target VM is at a hardware level of 7 or greater. A full list of requirements can be found at http://kb.vmware.com/kb/1010789.

I tested pass-through on a card with the NEC/Renesas uPD720200A chipset (Lindy part # 51122) running firmware 4015. The test VM runs Windows Server 2003R2 with the Renesas 2.1.28.1 driver. I had to configure the VM with pciPassthru0.msiEnabled = "FALSE" as per http://www.vmware.com/pdf/vsp_4_vmdirectpath_host.pdf or the device would show up with a yellow bang in Device Manager and would not function.
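
For reference, that's a single line in the VM's .vmx file (edit it while the VM is powered off, or set it via the vSphere Client's advanced configuration parameters):

pciPassthru0.msiEnabled = "FALSE"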

The final result - over 80MB/sec throughput (both read and write) from a Seagate 2.5" USB3 drive!

July 24, 2012

Gavin CarrAoE on RHEL/CentOS

Updated 2012-07-24: packages updated to aoe-80 and aoetools-34, respectively.


I'm a big fan of Coraid and their relatively low-cost storage units. I've been using them for 5+ years now, and they've always been pretty well engineered, reliable, and performant.

They talk ATA-over-Ethernet (AoE), which is a very simple non-routable protocol for transmitting ATA commands directly via Ethernet frames, without the overhead of higher level layers like IP and TCP. So it's a lighter protocol than something like iSCSI, and theoretically offers higher performance.

One issue with them on linux is that the in-kernel 'aoe' driver is typically pretty old. Coraid's latest aoe driver is version 78, for instance, while the RHEL6 kernel (2.6.32) comes with aoe v47, and the RHEL5 kernel (2.6.18) comes with aoe v22. So updating to the latest version is highly recommended, but also a bit of a pain, because if you do it manually it has to be recompiled for each new kernel update.

The modern way to handle this is to use a kernel-ABI tracking kmod, which gives you a driver that will work across multiple kernel updates for a given EL generation, without having to recompile each time.

So I've created a kmod-aoe package that seems to work nicely here; you can install it from my yum repository. The kmod depends on the 'aoetools' package, which supplies the command line utilities for managing your AoE devices.


There's an init script in the aoetools package that loads the kernel module, activates any configured LVM volume groups, and mounts any filesystems. All configuration is done via /etc/sysconfig/aoe.
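
If you haven't used AoE before, basic usage once the packages are installed looks something like this (the interface, shelf/slot numbers and mount point here are made up for illustration):

modprobe aoe                # load the kmod-aoe driver
aoe-interfaces eth1         # optional: restrict AoE to your storage NIC
aoe-discover                # probe for AoE targets
aoe-stat                    # list discovered devices, e.g. e0.0
mount /dev/etherd/e0.0p1 /mnt/coraid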

July 11, 2012

Philip YarraFreeBSD: show outgoing SMTP connections

To show active NAT sessions: pfctl -s state
To show just those going to SMTP ports: pfctl -s state | awk '$7 ~ /:25$/'

Helpful to find outgoing NAT sessions that might be caused by a spambot like, oh, let's say maybe cutwail.

And to show all SMTP sessions, both directions:
 pfctl -s state | awk '$7 ~ /:25$/||$3 ~ /:25$/'
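
And, assuming the same field layout as above, a quick way to see which internal hosts have the most outgoing SMTP sessions (handy when hunting that spambot):

 pfctl -s state | awk '$7 ~ /:25$/ {split($3, a, ":"); print a[1]}' | sort | uniq -c | sort -rn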

July 03, 2012

Gavin CarrRHEL6 GDM Sessions Workaround

Update: ilaiho has provided a better solution in the comments: install the xorg-x11-xinit-session package, which adds a "User script" session option. This will invoke your (executable) ~/.xsession or ~/.Xclients configs, if selected, and works well, so I'd recommend you go that route instead of using this patch now.


The GDM Greeter in RHEL6 seems to have lost the ability to select 'session types' (or window managers), which apparently means you're stuck using Gnome, even if you have other better options installed. One workaround is to install KDM instead, and set DISPLAYMANAGER=KDE in your /etc/sysconfig/desktop config, as KDM does still support selectable session types.

Since I've become a big fan of tiling window managers in general, and ion in particular, this was pretty annoying, so I wasted a few hours today working through the /etc/X11 scripts and figuring out how they hung together on RHEL6.

So for any other gnome-haters out there who don't want to have to go to KDM, here's a patch to /etc/X11/xinit/Xsession that ignores the default 'gnome-session' set by GDM, which allows proper window manager selection either by user .xsession or .Xclients files, or by the /etc/sysconfig/desktop DISPLAY setting.

diff --git a/xinit/Xsession b/xinit/Xsession
index e12e0ee..ab94d28 100755
--- a/xinit/Xsession
+++ b/xinit/Xsession
@@ -30,6 +30,14 @@ SWITCHDESKPATH=/usr/share/switchdesk
 # Xsession and xinitrc scripts which has been factored out to avoid duplication
 . /etc/X11/xinit/xinitrc-common

+# RHEL6 GDM doesn't seem to support selectable sessions, and always requests a
+# gnome-session. So we unset this default here, to allow things like user
+# .xsession or .Xclients files to be checked, and /etc/sysconfig/desktop
+# settings (via /etc/X11/xinit/Xclients) honoured.
+if [ -n "$GDMSESSION" -a $# -eq 1 -a "$1" = gnome-session ]; then
+  shift
+fi
+
 # This Xsession.d implementation, is intended to obsolte and replace the
 # various mechanisms present in the 'case' statement which follows, and to
 # eventually be able to easily remove all hard coded window manager specific

Apply as root:

cd /etc/X11
patch -p1 < /tmp/xsession.patch

January 28, 2012

Mark UnwinEinstein quote

“Any fool can make things bigger, more complex, and more violent. It takes a touch of genius – and a lot of courage – to move in the opposite direction.”
Albert Einstein.

January 25, 2012

Mark UnwinIt's Australia Day!

w00t!!!
Best country in the world (OK, so I'm slightly biased).

November 20, 2011

Mark UnwinWhat would you like to see next in OAv2?

Go here and vote for your preferred feature.
If you don't see your feature, let me know!

http://www.open-audit.org/phpBB3/viewtopic.php?f=20&t=5796

September 26, 2011

Mark UnwinOAv2 beta3 released


Go grab it.
To upgrade your database (for an existing beta1 or beta2 install), copy the OAv2 files over the old ones, then fire up OAv2 and go to Help -> about (as an Admin).
Then click the red upgrade text. Done.

Make sure you use the new audit script, too.

FWIW - I would back up your database before doing this and also copy your original OAv2 files somewhere else. That way, if the worst happens, you can always revert...

Please submit some statistics (Help -> Statistics) so I have some idea of how many people are using OAv2 (and how many systems they are auditing with it). This submission cannot be linked back to your organisation.

Also - I am off camping with the family from tomorrow night (Tue, Brisbane time). I will have limited internet access and no access to debug issues. I will check the forums, but fixes won't be forthcoming until next week. Apologies if this causes any inconvenience.


Mark UnwinAlpha 7 is out

Printers and Monitors now audited.
Many bugs squashed.
Get it now!

http://launchpad.net/oav2/trunk/alpha7/+download/OAv2.zip

June 17, 2011

Alex JurkiewiczNon-interactive database migration of Kayako 3 to 4

The Kayako 3 -> 4 upgrade process is a little convoluted. You have to install a fresh copy of 4, then run a script to import your Kayako 3 data. For large installs you need to run the script multiple times to fully migrate your database, which is a problem because the script interactively asks for your database credentials every time it's run. You don't really want to babysit the multi-hour migration process, do you? Fear not, just patch the code:

--- __swift/modules/base/console/class.Controller_Import.php.orig    2011-06-16 12:09:40.000000000 +1000
+++ __swift/modules/base/console/class.Controller_Import.php    2011-06-16 12:11:06.000000000 +1000
@@ -75,12 +75,12 @@
        $this->Console->WriteLine('====================', false, SWIFT_Console::COLOR_GREEN);
        $this->Console->WriteLine();
 
-       $_databaseHost = $this->Console->Prompt('Database Host:');
-       $_databaseName = $this->Console->Prompt('Database Name:');
-       $_databasePort = $this->Console->Prompt('Database Port (enter for default port):');
-       $_databaseSocket = $this->Console->Prompt('Database Socket (enter for default socket):');
-       $_databaseUsername = $this->Console->Prompt('Database Username:');
-       $_databasePassword = $this->Console->Prompt('Database Password:');
+       $_databaseHost = 'localhost';
+       $_databaseName = 'kayako3database';
+       $_databasePort = '3306';
+       $_databaseSocket = '';
+       $_databaseUsername = 'kayako3user';
+       $_databasePassword = 'sekret';
 
        if (empty($_databasePort))
        {

This post brought to you by too long spent trying to automate this with an expect script before I discovered that Kayako don't encode all their PHP.

May 18, 2011

Alex JurkiewiczBackslash in username or CWD breaks Bash prompt in CentOS

Something I just ran into. If your username or current directory has an escape sequence in it (say, because your username is from Active Directory, like "DOMAIN\alex.jurkiewicz"), the default Bash shell on CentOS 5 has problems. Depending on the escape code you might get a broken prompt or even no terminal output at all!

The problem is that the PROMPT_COMMAND set in /etc/bashrc interprets escape codes in the username and current directory:

PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}\007"'

PROMPT_COMMAND is run each time before the prompt is printed. Here it is used to set the xterm or GNU screen window title.
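
You can reproduce the problem without an Active Directory account – any path containing a backslash sequence that echo -ne understands will do. A contrived illustration (the directory name is made up):

mkdir '/tmp/DOMAIN\alex'
cd '/tmp/DOMAIN\alex'
# PROMPT_COMMAND now runs echo -ne with '\a' embedded in $PWD,
# which echo -ne happily interprets as a BEL control character.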

There are two ways to fix this:

  1. Add 'unset PROMPT_COMMAND' to your .bashrc. This will stop your xterm / screen title from being updated but is a simple fix.
  2. Set PROMPT_COMMAND properly using override files in /etc/sysconfig, so $USER and $PWD are echoed literally without escape code interpretation. Create the following two files with +x permissions:

/etc/sysconfig/bash-prompt-xterm:

# Duplicate of default PROMPT_COMMAND, but using a single command to stop race conditions and without escape code interpretation for USER, HOSTNAME and PWD
printf "\033]0;%s@%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"

/etc/sysconfig/bash-prompt-screen:

# Duplicate of default PROMPT_COMMAND, but using a single command to stop race conditions and without escape code interpretation for USER, HOSTNAME and PWD
printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"

(The printf statement was taken from this RH bug.)

Logging out and back in again should result in a fixed terminal.

January 10, 2011

Matt BottrellA fathers poem to his unborn child

I sit down now to pen this note,
Of how I feel, and of love denote.
For in a few weeks you shall appear,
A fulfilment of love sincere.

I look forward to cradling you in my arm,
Able to protect you from any harm.
A tender kiss and soothing word,
A gentle stroke, nurtured.

As you grow from baby to child,
Learning from experiences you have compiled.
Always remember I am close by,
A guiding hand you can rely.

I look forward to many an embrace,
My arms open when you need their place.
My knee is yours for a horsey ride,
My ears listening to your story side.

As you migrate from child to adult,
Remember I am here to consult,
I promise to be there until I die,
For you are the apple of my eye.

August 22, 2010

Alex JurkiewiczSplashID Sucks

After an evaluation of SplashID (made by SplashData) as a new password manager for my workplace I've come to the conclusion it's snake oil rather than secure. And not just snake oil, but poorly designed snake oil. Here's why.

The architecture of SplashID is simple. The backend is a plain MySQL database. The user interface is SplashID's app, available on Windows and Mac. When you start your client and log in, it communicates directly in MySQL-speak to the database backend. The connection to MySQL is SSLified (yay!), although bizarrely SplashData call this encryption "IPSec". Not having an actual server process between the clients and database is an unusual design, but it's possible to build something secure this way, so we press on.

In SplashID's world, each user's access credentials are made up of three parts. Since each user has a MySQL account, the first two are a username and password. The third part is a "master password". What's a master password? I'm glad you asked. You see, every cell of data in the SplashID database is encrypted with the same key. (Encrypted with AES-256 and Blowfish. Why use two ciphers? Why not!) The encryption key is, of course, the "master password". Because all data is encrypted with this key, every user has to have access to it. Most programs do this by storing the "master password" in the database, one copy per user, encrypted with that user's password. Unlike these programs, SplashID just makes every user remember an extra piece of secret information. Why SplashID does this is another mystery, and a strike against them for poor UI design.
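
To make that concrete, here is what per-user key wrapping looks like in openssl terms – purely illustrative, since this is exactly what SplashID doesn't do (the file names and passwords are made up):

# One random data-encryption key for the whole database
openssl rand -hex 32 > master.key

# Store one copy per user, encrypted under that user's own password
openssl enc -aes-256-cbc -salt -in master.key -out master.key.alice -pass pass:alices-password
openssl enc -aes-256-cbc -salt -in master.key -out master.key.bob -pass pass:bobs-password

# At login, each user unwraps their copy with the password they already know
openssl enc -d -aes-256-cbc -in master.key.alice -pass pass:alices-password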

Let's investigate this "every user is a MySQL user" concept. I've created a limited user for myself in SplashID with no access to any passwords. The Splash client app obviously lets me see nothing, but how about a generic MySQL client?

Inappropriate syntax highlighting turn on!

$ mysql -u user1 -p -h splashtest
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 92 to server version: 5.1.47-community
 
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| splashiddb         |
+--------------------+
3 rows in set (0.02 sec)
 
mysql> use splashiddb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
 
Database changed
mysql> show tables;
+-----------------------+
| Tables_in_splashiddb  |
+-----------------------+
| apppreftable          |
| attachmenttable       |
| columninfotable       |
| customicontable       |
| customtypetable       |
| databaseinfotable     |
| eventloggertable      |
| groupsubgrouptable    |
| grouptable            |
| mostviewedtable       |
| recentlyaddedtable    |
| recentlymodifiedtable |
| recentlyviewedtable   |
| recordtable           |
| typetable             |
| usergrouptable        |
| usertable             |
+-----------------------+
17 rows in set (0.02 sec)

It looks like there's only one table that all passwords are stored in. MySQL doesn't offer per-row access controls, but surely I can't view every password in the database with my limited user???

mysql> select * from recordtable\G
*************************** 1. row ***************************
RECORDID: 1F6642A0C892BC76
TYPEUID: 0000000000000013
GROUPUID: 1F66429EC892BBDB
FIELD1: <blob>
FIELD2: <blob>
FIELD3: 
FIELD4: 
FIELD5:
FIELD6:
FIELD7:
FIELD8:
FIELD9: <blob> 
FIELD10: <blob>
NOTE:
HASATTACHMENT: 0
HASCUSTOMFIELD: 0
VIEWCOUNT: 6
*************************** 2. row ***************************
[snip]
10 rows in set (0.03 sec)

Oh dear. Oh dear oh dear.

mysql> Bye

So there you go. Every user in SplashID, no matter how limited, can view every password in the database, all encrypted with a key they know. Another strike against SplashID. They need a miracle now.

And hark! Here comes the explanation from SplashData. I emailed them specifically regarding my findings, wanting to make sure this wasn't some huge mistake. I asked:

...every cell is encrypted using the same process, right? From that it follows that if a user can decrypt one cell, they can decrypt every cell. The only protection is that your encryption routine is not published. Or am I missing something?

The reply:

That’s right Alex.

That’s why I mentioned-
> Actually, they don't use the same key. AES key is a hash function of the
> Blowfish key. I'm sorry I cannot give you more details on the algorithms we use.

So, if the user knows the Blowfish key, it is not enough. They still need to decrypt using SplashID Enterprise application.

Even though every user can download the entire encrypted database, even though the master password decryption routine is stored in the client side application, it's all fine because nobody has ever reverse engineered an application to extract a single hash function before! In the end, the previous security missteps hardly matter compared to this blunder. All it will take is one enterprising security researcher or blackhat to figure it out and put their findings on the web, and suddenly every password in every SplashID install is wide open for the taking by its users.

We won't be using SplashID at my workplace, and my advice to you, dear reader, is to avoid them too.

July 25, 2010

Matt BottrellWhen customer profiling and targeted advertising goes wrong

Don't get me wrong... I love a bargain as much as the next guy or girl.

What I don't like however is when a computer system is implemented with little regard and isn't actively checked by a human.
It's one way to make your company look like a jack-ass.

Sorry Woolworths - you've landed yourself in such a category.

Most Australian supermarket shoppers are aware of the fuel discounts offered by Coles and Woolworths, which can slice anything from $0.03 to $0.20 per litre off the cost of your fuel – something that's always welcome to motorists.

It's the only reason I have an Everyday Rewards card. Fuel discounts add up over time, even more so for me, as I drive on LPG the majority of the time, so $0.20 off per litre on LPG is quite substantial.

During the months of April and May, Woolies decided that for 8 weeks straight I would like to buy wine. I'm not talking 1 bottle either – most 'deals' require a purchase of 6 or more bottles at a time.

A sample of the Email contents is included below:
Sample Email from Woolworths

To be honest, I love a good drop of red – probably more so than the average punter (we normally have a few dozen on hand in the house). At the end of March I had let our stocks run down over a period of time, so I restocked. This seems to have triggered their rewards system to pester me for the next 8 weeks straight. No fuel offers (which were the main selling point of the card), nor any other offer... just grog.

At 6 bottles minimum per Email over 8 weeks, anyone reading my Email from Woolworths would think I'm an alcoholic!
Email listing from Woolworths

The crazy thing... It backfired.
I didn't buy any wine during that period (as I had just restocked my levels). This form of marketing happens 'after the fact', and as such it fails. If I have already made a bulk purchase, why would I wish to repeat it shortly after, and every week for a period of 8 weeks?

Woolworths' rewards system needs looking at (as does Coles', for that matter). It would be more beneficial to flag such bulk purchases by your customers, then look at sending out 'specials' every 3, 6 or 12 months... you're likely to have a bigger uptake. I can't see my car dealership sending out "buy a brand new car" Emails if I had just taken delivery of a new vehicle.

Certainly for everyday staples it would be nice to have these offers filter through regularly – say 25% off Meat, Fruit or Vegies for a week – but we never see them. It always seems to be items like Coca-Cola, alcohol and other non-essentials. I'm not surprised though... the supermarkets know we need staples, and are trying to increase our trolley sizes by teasing us into buying these non-basic items.

Certainly I do hope that Woolworths and Coles both learn that their average shopper has intelligence above that of a broken trolley wheel, as the current marketing strategies to date are quite insulting.

May 11, 2010

Alex JurkiewiczWordPress's WP-Super-Cache's Super Cache with nginx

(Apologies for the triple-layer title, but it's a specific subject involving a badly named plugin.)

This has been explained before (the progenitor for most other examples on the net seems to be this forum post), but the solution was ugly and slightly incomplete. nginx's lack of a one-line RewriteCond equivalent means there will never be an elegant solution, but I think I've come up with something clearer.

First, background. WP Super Cache has two levels of caching:

  1. "WP Cache". Whenever WordPress's index.php renders a page, a copy of the page output is stored in /blog/wp-content/cache (and the meta subfolder). For future requests for the same page, this cached copy is served by index.php. The good: subsequent requests don't hit the database or re-run your badly coded widgets for every visitor. The bad: PHP still runs for every request.
  2. "Super Cache". As well as a copy of page output being stored as per above, a copy is also stored in /blog/wp-content/supercache, in a structure that mirrors your blog's URL hierarchy. With clever use of rewrite rules at your webserver layer, you can entirely skip loading PHP & WordPress for any request that a cached file has been created for.

The WP Cache layer always works. The rest of this post is about making use of the Super Cached files with your shiny nginx server. For reference, the Apache rules are here. This nginx code follows the same order and structure, but has some differences. Read:

location /blog {
    gzip_static on;

Aside: gzip_static requires an nginx configured with --with-http_gzip_static_module. If your build isn't, and you don't want to compile your own, just remove this directive. Instead of serving pre-compressed Super Cache files to clients that support compression, nginx will compress them on the fly (like normal).

    set $supercache "";
    if ($request_method = GET) {
        set $supercache "${supercache}G";
    }
    if ($args = "") {
        set $supercache "${supercache}A";
    }
    if ($http_cookie !~ (comment_author_|wordpress_logged_in|wp-postpass_)) {
        set $supercache "${supercache}C";
    }
    if (-f $document_root/blog/wp-content/cache/supercache/$http_host$request_uri/index.html) {
        set $supercache "${supercache}F";
    }
    # If we met all the conditions, serve the supercached file
    if ($supercache = GACF) {
        rewrite ^ /blog/wp-content/cache/supercache/$http_host$request_uri/index.html break;
    }
    # Otherwise pass to wordpress as normal
    if (!-e $request_filename) {
        rewrite ^ /blog/index.php last;
    }
}

# The cache files should not be directly accessible to clients
location /blog/wp-content/cache { internal; }

# Configure the PHP backend as per normal
location ~ (\.php$) {
    include fastcgi_params;
    if (-e $request_filename) {
        fastcgi_pass unix:/tmp/nginx-php-fastcgi.sock;
    }
}

Done! If you have problems, three pointers:

  1. WP Super Cache has a very big settings page. You can set them as you like mostly, but make sure you set this and this (if you're using gzip_static).
  2. Check the bottom of the source of your pages to see if a page was served from the cache, and if so, whether it was served from the Super Cache.
  3. If you need to troubleshoot, make liberal use of the logging facility that WP Super Cache implements.

March 29, 2010

Alex JurkiewiczCross-compiling x264 for win32 on Ubuntu Linux

The total lack of documentation on compiling x264 (and dependencies) for win32 on a linux32 system is henceforth rectified. This guide assumes you are using Ubuntu 9.10 and the packaged version of mingw32. Newer versions of the below packages might require more or less wrangling.

Required packages in the base system:

sudo apt-get install pkg-config yasm subversion cvs git-core mingw32

Create the basic tree for installing win32-compatible dependencies to:

mkdir -p ~/win32-x264/{src,lib,include,share,bin}

Place this helper script at ~/win32-x264/mingw and chmod +x it:

#!/bin/sh
export CC=i586-mingw32msvc-gcc
export CXX=i586-mingw32msvc-g++
export CPP=i586-mingw32msvc-cpp
export AR=i586-mingw32msvc-ar
export RANLIB=i586-mingw32msvc-ranlib
export ADD2LINE=i586-mingw32msvc-addr2line
export AS=i586-mingw32msvc-as
export LD=i586-mingw32msvc-ld
export NM=i586-mingw32msvc-nm
export STRIP=i586-mingw32msvc-strip
 
export PATH="/usr/i586-mingw32msvc/bin:$PATH"
export PKG_CONFIG_PATH="$HOME/win32-x264/lib/pkgconfig/"
exec "$@"

Now to install pthread & zlib:

cd ~/win32-x264/src
wget -qO - ftp://sourceware.org/pub/pthreads-win32/pthreads-w32-2-8-0-release.tar.gz | tar xzvf -
cd pthreads-w32-2-8-0-release
make GC-static CROSS=i586-mingw32msvc-
cp libpthreadGC2.a ../../lib
cp *.h ../../include
cd ~/win32-x264/src
wget -qO - http://zlib.net/zlib-1.2.4.tar.gz | tar xzvf -
cd zlib-1.2.4
../../mingw ./configure
# Remove references to "-lc" from the Makefile (tells GCC to link output with libc, which is implied anyway, and explicit declaration causes a script error)
sed -i"" -e 's/-lc//' Makefile
make
DESTDIR=../.. make install prefix=

Installing FFmpeg:

cd ~/win32-x264/src
svn checkout svn://svn.ffmpeg.org/ffmpeg/trunk ffmpeg
cd ffmpeg
# Delete references to -Wmissing-prototypes, a GCC warning that fails when cross-compiling
sed -i"" -e '/missing-prototypes/d' configure
./configure \
    --target-os=mingw32 --cross-prefix=i586-mingw32msvc- --arch=x86 --prefix=../.. \
    --enable-memalign-hack --enable-gpl --enable-avisynth --enable-postproc --enable-runtime-cpudetect \
    --disable-encoders --disable-muxers --disable-network --disable-devices
make
make install

Installing FFmpegsource:

cd ~/win32-x264/src
svn checkout http://ffmpegsource.googlecode.com/svn/trunk/ ffms
cd ffms
../../mingw ./configure --host=mingw32 --with-zlib=../.. --prefix=$HOME/win32-x264
../../mingw make
make install

Installing GPAC:
Special thanks to the GPAC dev who kindly assisted me in beating the terrible configure/Makefile scripts into shape.

cd $HOME/win32-x264/src
# Create a CVS auth file on your machine
cvs -d:pserver:anonymous@gpac.cvs.sourceforge.net:/cvsroot/gpac login
cvs -z3 -d:pserver:anonymous@gpac.cvs.sourceforge.net:/cvsroot/gpac co -P gpac
cd gpac
chmod +rwx configure src/Makefile
# Hardcode cross-prefix
sed -i'' -e 's/cross_prefix=""/cross_prefix="i586-mingw32msvc-"/' configure
../../mingw ./configure --static --use-js=no --use-ft=no --use-jpeg=no --use-png=no --use-faad=no --use-mad=no --use-xvid=no --use-ffmpeg=no --use-ogg=no --use-vorbis=no --use-theora=no --use-openjpeg=no --disable-ssl --disable-opengl --disable-wx --disable-oss-audio --disable-x11-shm --disable-x11-xv --disable-fragments --use-a52=no --disable-xmlrpc --disable-dvb --disable-alsa --static-mp4box --extra-cflags="-I$HOME/win32-x264/include -I/usr/i586-mingw32msvc/include" --extra-ldflags="-L$HOME/win32-x264/lib -L/usr/i586-mingw32msvc/lib"
# Fix pthread lib name
sed -i"" -e 's/pthread/pthreadGC2/' config.mak
# Add extra libs that are required but not included
sed -i"" -e 's/-lpthreadGC2/-lpthreadGC2 -lwinmm -lwsock32 -lopengl32 -lglu32/' config.mak
make
# Make will fail a few commands after building libgpac_static.a (i586-mingw32msvc-ar cr ../bin/gcc/libgpac_static.a ...). That's fine, we just need libgpac_static.a
cp bin/gcc/libgpac_static.a ../../lib/
cp -r include/gpac ../../include/

Building x264:

cd ~/win32-x264/src
git clone git://git.videolan.org/x264.git
cd x264
./configure --cross-prefix=i586-mingw32msvc- --host=i586-pc-mingw32 --extra-cflags="-I$HOME/win32-x264/include" --extra-ldflags="-L$HOME/win32-x264/lib"
make

Leave to cool for 15 minutes. Serves four.

Changelog:

  • 20100518: Updated ffmpeg configure args. ffms build needs mingw wrapper. Add cvs to required packages.

December 09, 2009

Matt BottrellCan't beat 'em, join 'em

Well I ranted in my previous post about being held hostage to Farmville.

It was in jest, poking fun at my adorable wife.... :-)

I bit the bullet in the end, and finally joined Facebook and even bloody Farmville.
There should be a law against that game, it's far too addictive. :-P

So dear reader, I'm still wiping egg off my face... I held out for years and didn't see the point of it... but it looks like I've slipped and fallen on it.

Though, I gotta admit this whole FB thing is great for keeping in touch with long lost friends.... it's really quite scary.
Pity one can't always seem to shake those people you'd rather forget. :-|

I'm not yet on Twitter, but who knows what 2010 will hold in store.

November 15, 2009

Matt BottrellInnocent Farmville hostage.

It would appear that even though I don't use either Twitter or Facebook -- I am often held hostage to Farmville.

I have elected not to join either community for several reasons:

  • I seriously spend far too many hours a day on a computer (12-18 hours a day). I don't need something else to add to the hours.
  • I like keeping some level of personal privacy. I really don't have a need to post what I ate for breakfast, what my favourite book/movie/music/clothing is. (You really want to know my favourite music is -- follow me on last.fm). I also have a blog where I can write down my thoughts/opinions/frustrations already.
  • I have multiple methods to keep in touch with those I elect to already. (Email, IM, Telephone, SMS). I seriously couldn't give a flying razoo about people I went to primary/high school/Uni with. I haven't seen them for over 20 years, and I don't have the desire to kindle the relationship due to the mere fact we attended the same education institution (and for the majority of that time -- compulsory; I'm sure neither of us wanted to be there!)

Having said that -- I don't object to others that do use the services. Each to their own I say. 8-) (But don't expect me to accept invites for either -- both are duly ignored!) :-P

Pauline is a Facebook user and she enjoys it... she catches up with a lot of old friends via it. She put off joining Farmville for months, but finally caved to the constant barrage of invites and joined.

It now seems our daily life revolves around 'harvest time'... a classic case of the Farmville Alarm coming into effect. An often quoted phrase at present is
'Ohh, I have to go harvest X .... gimme 10 mins.'
This can happen at the most inconvenient times. :-|

So at present, I feel I'm effectively a Farmville hostage. I'm wanting a virtual world war to break out so that bombers blow up the fields. I might get a bit of normality back in my life. :-P

October 19, 2009

Matt BottrellTeddy bear moments

I think we can all attest to the phenomenon known as Teddy Bear troubleshooting.

I think we all probably need our own Teddy Bears in our office cubicles or on our work desks.

So next time you need to do some serious troubleshooting or some heavy lifting when debugging - try pulling out the Teddy Bear.
Even better, you can hug something after it's solved! 8-)

October 08, 2009

Tim KentBlackBerry MDS proxy pain

I'm just having a rant about MDS SSL connections through a proxy. Non-SSL traffic works fine; however, SSL traffic appears to go direct even when proxy settings have been defined as per KB11028. My regular expression matches the addresses fine.

Surely people out there want/need to proxy all their BES MDS traffic?

March 24, 2009

Mark [Cueball] GlossopHumour of the Day: AFL Grand Final

Received this in an email a couple of years ago. Seems appropriate to repost given my impending excursion to Melbourne for Round 1 of the AFL Premiership Season 2009…

Ah – 4 games of live footy in a weekend [plus the Eagles v Lions game on a big screen of course]…

It’s the AFL grand final and a man makes his way to his seat right on the wing. He sits down, noticing that the seat next to him is empty.

He leans over and asks his neighbour if someone will be sitting there.

“No,” says the neighbour. “The seat is empty.”

“This is incredible,” said the man. “Who in their right mind would have a seat like this for the AFL Grand Final and not use it?”

The neighbour says “Well, actually, the seat belongs to me. I was supposed to come with my wife, but she passed away. This is the first AFL Grand final we haven’t been to together since we got married in 1967.”

“Oh … I’m sorry to hear that. That’s terrible. But couldn’t you find someone else – a friend or relative, or even a neighbour – to take the seat?”

The man shakes his head “No, they’re all at her funeral.”

March 20, 2009

Mark [Cueball] GlossopSome Thoughts on IT jobs and Working Conditions

Had an interview recently. Overall the interview itself was relatively positive, and I think the challenge that was offered was something that I’d have been quite up for, but I had some reservations about the work environment – more than just “passing reservations”, so I thought I’d put some thoughts onto digital paper, so to speak.

I do have fairly strong feelings about the inadequacies of “open plan offices” for IT workers [or more generally, “knowledge-based workers”.] To give you a better idea of what I am referring to:

  • Peopleware – possibly the single-most important reference on working conditions for tech workers. It shows comprehensively how people with fewer distractions get more productive work done than those who are constantly interrupted[1][2]:
    “The people who brought us open-plan seating simply weren’t up to the task. But they talked a good game. They sidestepped the issue of whether productivity might go down by asserting very loudly that the new office arrangement would cause productivity to go up, and up a lot, by as much as three hundred percent. …The only method we have ever seen used to confirm claims that the open plan improves productivity is proof by repeated assertion.”
  • Joel’s [Original] ‘Bionic Office’
  • Joel’s Updated Offices – keep in mind this is Manhattan office space, so getting the best people on board requires the best environment. Contrariwise, you may not get the worst people in the worst environments — but the “best” IT people will usually move on to better, more productive environments fairly quickly.
  • Open plans make establishing “Mutual Interruption Shields”[3] almost impossible.
  • Tom Limoncelli also makes the following quote here:
    The biggest time management problem for system administrators is interruptions.
    I tend to think that the same problem applies to software developers – it’s sometimes referred to as a “mental context switch”, and can cut the productivity of your IT workers in half – or worse. Open plan offices are, generally speaking, the epitome of evil when it comes to protecting your IT employees from interruptions.
  • A Field Guide to Developers – some interesting observations about what things are [and aren’t] important to IT workers [the article was written with software developers in mind, but in my experience systems administrators are quite similar in their expectations and ideas about “good workplaces”.] From the Field Guide:
    “One thing that programmers don’t care about – They don’t care about money, actually, unless you’re screwing up on the other things. If you start to hear complaints about salaries where you never heard them before, that’s usually a sign that people aren’t really loving their job. If potential new hires just won’t back down on their demands for outlandish salaries, you’re probably dealing with a case of people who are thinking, ‘Well, if it’s going to have to suck to go to work, at least I should be getting paid well.’

    “That doesn’t mean you can underpay people, because they do care about justice, and they will get infuriated if they find out that different people are getting different salaries for the same work, or that everyone in your shop is making 20% less than an otherwise identical shop down the road, and suddenly money will be a big issue. You do have to pay competitively, but all said, of all the things that programmers look at in deciding where to work, as long as the salaries are basically fair, they will be surprisingly low on their list of considerations, and offering high salaries is a surprisingly ineffective tool in overcoming problems like the fact that programmers get 15” monitors and salespeople yell at them all the time and the job involves making nuclear weapons out of baby seals.”
  • From The Practice of System and Network Administration, Chapter 35.1[4]:
    The hiring process can be simplified into two stages. The first stage is to identify the people whom you want to hire. The second stage is to persuade them that they want to work for you.
    Making a persuasive argument with a poor workplace environment is always going to be difficult, regardless of salary or any other factors. Many people in the IT industry can be “unique” in this respect – they find roles that keep them interested and excited about each day at work – and that aspect is far more important than work that pays a top-dollar salary but is rote and monotonous.

Some things I noted about the place where I interviewed (either from observation while I was waiting, or during the interview):

  • Almost all IT staff in one open plan area. Think of a 1950’s newspaper bullpen, and you get the idea. There was one area to the side where some of the more senior staff seemed to have their own bullpen.
  • Not even cubicles for some semblance of privacy. I’ve worked in a place where even the telephone operators in the call centre had more privacy and insulation from distractions.
  • Apparently this “extreme open plan” was a deliberate decision — part of an ongoing attempt to fix some ingrained cultural deficiencies. [How exactly this was expected to achieve their goals is still unclear to me… the actual problems weren’t fully disclosed.]
  • Some people were trying to work while others carried on a fairly noisy discussion in one corner of the room – from what I could see, and from the information I was provided in the interview, there was no separate meeting area or room for ideas to be brainstormed. How anyone could fail to see the impact of that on overall worker productivity escapes me.
  • The interview itself was conducted in one of the few private offices [presumably because privacy is important for an interview, and without a private meeting room, what else will you do?]
  • From what I could tell, only very senior management were allocated the few private offices. Apparently parking was allocated on a similar theme…only for the very senior.[5]
  • No space for individual whiteboards or reference libraries. No, Google doesn’t answer all questions, and the two whiteboards I saw seemed to be shared by all staff.
  • Two excessively noisy airconditioners — not a ducted or even split A/C system, and the compressors were completely underspecified for the office space/volume [making them run at or above capacity by the sound they were making – and it wasn’t even a hot day.]
  • Very large space with large windows, but using overhead lights instead of lots of natural light — opening the blinds and letting more light in seemed like an easy fix, but one that seemed to be overlooked by a lot of intelligent people.
  • The space could actually quite easily be converted into a two-storey, split or lofted area, providing significantly more workspace area and worker privacy. But I expect that would be too much money spent on IT workers [hmm, wait, apparently that’s the thinking that caused many of these problems initially! Meh.]
  • Non-ergonomic chairs and desks. If you’re putting people in chairs for 8 hrs per day, those desks and chairs had better be comfortable and compliant with occupational health and safety regulations.
  • Multiple monitors – if you’ve got 4 different 19” monitors attached to a single machine, maybe, just maybe, you should consider using those monitors elsewhere and buying two 24” monitors. You get about 13% fewer pixels in a typical scenario, but only two screens, each with more actual pixel real estate. Two monitors use less power, are easier to manage, and there’s only one break in your overall screen real estate. You’re also less likely to waste time juggling windows from one screen to the next – which is another productivity win. Sure it’s a small detail, but lots of small things over a long time add up pretty quickly.[6]

Recruiting new staff for such poor environments is going to be difficult. Not impossible, but definitely difficult:

  • If you’re planning to build a team – changing the environment to attract good candidates is critical to your prospects of building a top-notch technical team.
  • In a place where salaries aren’t really competitive and office working conditions are given a low priority, people are going to want you to offer other remuneration options.
  • Options you ask? Such as a subsidised mobile phone, PDA and broadband[7], telecommuting, higher than standard superannuation, salary packaging/salary sacrifice options, free or subsidised parking, regular technical training, flexible working hours, less restrictive dress codes, and of course the aforementioned things like private offices and quiet work environments.
  • Given the current economic climate, and the tight budgets most businesses presently have, flexibility on “alternative” remuneration options seems like an easy option to consider, yet seemed like “a bridge too far” for this place.
  • The poor economy isn’t going to last forever. When it recovers, employers are going to find themselves on the back foot due to staff attrition: “the grass is always greener”, and when you’ve put up with poor conditions for long enough, it doesn’t take much, perhaps only a slight salary bump, to say “hey, I can do better, I’m out of here.” Conversely, if you provide a great work environment, a better salary elsewhere isn’t always going to lure people away. [If I tell you that you can work in a great IT job with a great team for $70k p.a., then offer a crap, boring job with lousy conditions for $95k p.a., how many IT people will take the latter? The number is a lot lower than you might think.]
  • So – bad conditions, non-competitive salaries and lack of alternative remuneration options all add up to “don’t work here unless things change”.

I’m led to understand that the role I interviewed for is a new role, paying OK[8] with significant responsibilities and strong prospects for advancement, yet it has gone unfilled for some time. I’m not completely surprised. If something were to change and I was offered the role, I’d still feel “80% positive, 20% negative” about it, but that 20% could easily make the difference between a 9-12 month stop-gap tenure and a 3+ year team-building role. It would simply depend on how committed they proved to be about making real change and providing a top-notch workplace experience.

The Nutshell Version For Employers:
  • The current economic climate will not last forever. Signs of recovery are already present in Australia.[9] If you’re reading this from the US, expect similar changes as the ARRA stimulus kicks in on all the huge IT projects Obama has approved.
  • Despite the climate, quality IT staff are still in demand. That demand will only increase as the economy recovers.[10]
  • Treat your staff well.
  • Pay them at least comparable salaries to other people doing the same work at other companies/organisations/institutions.
  • Offer alternative remuneration options.
  • IT workers almost always have backup plans[11] – poor timing may be the only problem for them. At the moment.[12]
  • If you don’t make them feel secure when times are bad, the first chance that comes along for better conditions and better pay may well leave you in the lurch.
  • IT workers talk to other IT workers. Information will and does travel.
  • Perth isn’t a very large place. Information definitely travels quite easily in the IT industry here.[13]
  • Information, good and bad, travels easily through mechanisms that may not always be known to you.[14] “IT Networking” isn’t always about Cat5 cable :-)
  • Don’t think that people will stay out of loyalty when you’ve treated them like crap.[15]
  • If you’re an employer and this is all news to you – you really need to do your homework better.

My $0.02 for today.

P.S. If you’re going to comment, please refrain from mentioning names, if only to protect the guilty :-D

  1. Peopleware pp. 52-3.
  2. And yes, I’m aware that the authors of Peopleware aren’t against all shared workspaces – but those who share workspaces should be working on similar tasks or projects
  3. From Time Management for System Administrators
  4. The Bible for System Administrators IMHO
  5. Everyone else was expected to battle for the limited public parking available in the precinct. No subsidisation. I got the impression that since there was a train station very close by that there was an expectation staff would use that option. Never mind that public transport would actually cost me the same or more than driving and parking.
  6. Kudos where due – actually having multiple monitors for tech workers is almost a given these days, but I’ve still seen places where it’s not done, despite being de rigueur for programmers/sysadmins.
  7. Yes, accessing systems from home is important, even if you’re not offering any sort of telecommuting.
  8. Although I was offered $5-$10k less than what I would expect for a comparable role at a similar employer
  9. The stock market may take 2-3 years to regain lost ground, but that doesn’t reflect the health of an economy – continued growth does.
  10. Yes, I see the irony between my statement and the fact that I’m still looking for work. Am I a quality IT worker? Yes. Am I selling myself properly? Maybe not. Am I possibly overqualified for some roles? Maybe. I’ve really never been out of work for long enough to care, so maybe job hunting is one area where I need to learn a few more things. I’d much rather be improving my tech skills and working on interesting things however.
  11. Sometimes multiple backup plans.
  12. And yes, I’m being deliberately cryptic.
  13. Some might say it doesn’t matter where you are.
  14. Say, for instance, a lunch with former colleagues who happen to know a lot more than you ever expected about the environment you were considering.
  15. As a historical reference, I was only earning about $63k (all up) when I was working at $JOB-2 — money wasn’t everything. I left mainly because of two things:
    • The offer of a more challenging position with better conditions
    • The prospect of the existing working conditions at $JOB-2 being sharply compromised was becoming very real. [After I left, that “prospect” did in fact become a reality.]
    There were other, less significant factors – but those were the two main ones.

March 18, 2009

Mark [Cueball] GlossopHumour of the Day: Bill Payment FTW

I wonder how long it will take for the bank to cash this cheque…
Bill Payment Win » FAIL Blog:
[Image: FailBlog - Cheque Payment WIN]

March 04, 2009

Mark [Cueball] GlossopWWDC 2009

From a source that I can verify as being accurate for (at least) the last two years, WWDC 2009 will be held in San Francisco at Moscone West. The dates?

Monday June 8 – Friday June 12

I wouldn’t go booking flights or hotels just yet; nothing’s definite until Apple makes the announcement. But that’s when I’m planning on being in SF again, based on the info I have from a previously reliable source.

Will update info if I get any more news.

Belated Update: I was in Melbourne for the footy when the announcement was made last week, so I forgot about updating this post. Dates above are confirmed. See WWDC site for more info.

Mark [Cueball] GlossopFreeview TV – The Real Advert

OK, so it’s no secret that I’ve got a fair bit of pent-up animosity towards Australian network TV…it shouldn’t be any surprise, then, that I found this little gem on YouTube quite in line with my sense of humour.

Note: if you’re not reading from Oz, then you probably won’t have seen the Freeview ads – but you should still be able to get a laugh out of it…network TV programmers worldwide pull the same crap whatever country you’re in.

Edit: Turns out Freeview didn’t like this being on YouTube. I believe there’s another way to get the video; I’ll update when I find out more…but for now the link below doesn’t work.

Freeview – The Real Advert

Edit 2:
Updated TV Tonight article about the video.
Reposted: YouTube – repost.
Downloadable movie version available from DownWind Media.

January 07, 2009

Tim KentDNS resolution on iPhone

I've been playing with a few iPhones lately and have had trouble getting WiFi working through our proxy. After much hair-pulling, the problem turned out to be a feature of the iPhone DNS resolver that refuses to look up any hostname ending in ".local". This also appears to be a problem on Mac OS X:

http://support.apple.com/kb/HT2385?viewlocale=en_US

With OS X you can add "local" to the Search Domains field to disable this behaviour; unfortunately that doesn't work on the iPhone.
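
A quick way to confirm it's the resolver rather than the DNS server at fault is to compare a direct query against a normal lookup. A rough sketch only, using a hypothetical internal host and DNS server address; substitute your own:

# Query the DNS server directly, bypassing the system resolver
host intranet.local 192.168.0.1

# Resolve through the system resolver, which treats .local as multicast DNS
ping -c 1 intranet.local

If the first command returns a record while the second fails to resolve, the resolver's special-casing of .local is the culprit.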

October 26, 2008

Tim KentVoIP headaches

I've recently signed up with PennyTel to get better prices on phone calls. This was after two relatives of mine both recommended PennyTel and said how easy the whole thing was to set up when using a Linksys SPA-3102.

OK, so I signed up and purchased the Linksys device. I set up the networking through the phone, then followed the guide on the PennyTel website to configure SIP (the VoIP connectivity settings). I was feeling pretty good about the whole thing, that is, until I made the first phone call!

I thought I'd try to impress a mate, so I called up one of my tech-savvy friends and told them I was using VoIP to talk to them. The quality sounded quite good, then after 32 seconds the call dropped out! I had called a mobile, so I thought it might just be a glitch. The next two calls resulted in the same drop-out after 32 seconds. By this stage my friend thought it was quite amusing that my new phone service was so unreliable after I had been boasting about the cheap call rates!

After hours of Googling and messages back and forth with PennyTel support, I still hadn't managed to avoid the call drop-out, or another intermittent problem where the SIP registration was randomly failing. The settings looked fine, and PennyTel didn't appear to have any outages, as I tested things with a soft phone from another DSL connection. I was really regretting the whole thing, and getting pretty pissed off.

I had a think about the whole scenario, and the only thing I hadn't eliminated was my DrayTek Vigor 2600We ADSL router. I had already set the port forwards required for the Linksys SPA (UDP 5060-5061 and 16384-16482), so thought nothing more of router configuration. As a last resort, I searched the Internet for people running VoIP through their DrayTek to see if any incompatibilities existed. I came across a site with someone experiencing my exact problem, and they had a workaround! It turns out the 2600We has a SIP application-layer gateway (ALG) enabled by default. This really confuses the Linksys and has to be disabled. After telnetting to the device and entering the following command, things were working great:

sys sip_alg 0

Note that you may need to upgrade your DrayTek firmware for this command to be available.
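
If you ever need to reapply the setting (say, after a factory reset or firmware upgrade), it can be scripted rather than typed interactively. This is a rough sketch only, assuming the router answers telnet on 192.168.1.1 with the default admin account; adjust the address, credentials and delays to suit your setup:

# Feed the login sequence and command into telnet; the sleeps give the
# router time to present each prompt before the next line arrives.
{ sleep 2; echo admin; sleep 2; echo password; sleep 2; echo "sys sip_alg 0"; sleep 2; } | telnet 192.168.1.1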

After the changes I made some calls and no longer got disconnected after 32 seconds! Woohoo! At the end of the day I'm glad I chose VoIP for the cost savings, even though it caused me grief the first few days.

Update: One other setting I found needed a bit of tweaking was the dial plan. Here is my current Brisbane dial plan as an example:

(000S0<:@gw0>|<:07>[3-5]xxxxxxxS0|0[23478]xxxxxxxxS0|1[38]xx.<:@gw0>|19xx.!|xx.)
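
For anyone adapting this, here's my rough reading of each alternative in that plan (my interpretation of the Linksys dial plan syntax, so double-check it against the SPA documentation):

000S0<:@gw0>          emergency 000, dialled immediately (S0 = no timeout) and routed via the PSTN line (gateway 0)
<:07>[3-5]xxxxxxxS0   8-digit local numbers starting with 3-5, with the 07 Brisbane area code prepended
0[23478]xxxxxxxxS0    full 10-digit interstate and mobile numbers (02/03/04/07/08 prefixes)
1[38]xx.<:@gw0>       13xx/18xx-style numbers, routed via the PSTN line
19xx.!                19xx premium numbers barred (the ! rejects the call)
xx.                   anything else, passed through to PennyTel as dialled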

August 31, 2008

Tim KentData destruction

After cleaning my home office I was left with some old hard drives to dispose of, which got me thinking about data destruction. In the past I cleared my drives with a couple of passes of random data using dd, but is that thorough enough?
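
For reference, the dd approach I mean is along these lines. A minimal sketch, assuming the target drive appears as /dev/sdb; triple-check the device name before running anything like this, as dd will happily destroy the wrong disk:

# Two passes of pseudo-random data over the whole drive
dd if=/dev/urandom of=/dev/sdb bs=1M
dd if=/dev/urandom of=/dev/sdb bs=1M
# Optional final pass of zeroes so a casual inspection shows a blank drive
dd if=/dev/zero of=/dev/sdb bs=1M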

This time round I used a free bootable CD called CopyWipe (a great utility; BootIt NG is also worth a mention). Each drive was given 5 passes, and then taken to with a hammer just to be sure. I've linked a picture of the "after" shot.

I can see data destruction being a larger problem as time goes on. I'd be interested to know the techniques others use for this problem.