Planet Bozo

January 21, 2020

Worse Than Failure: CodeSOD: An Enterprise API

There’s a simple rule about “enterprise” software: if the word “enterprise” is used in any way to describe the product, it’s a terrible product. Enterprise products are usually special-purpose, serving a well-capitalized but relatively small market. You aren’t going to sell licenses to millions of companies; maybe tens of thousands, often just hundreds. There are some extremely niche enterprise products that have only two or three customers.

Lots of money, combined with an actually small market in terms of customers, guarantees no real opportunity for competition. Couple that with the fact that each of your customers wants the off-the-shelf product you’re selling them to have every feature they need for their business case, and you’re on a fast track to bloated software, inner platforms, and just general awfulness.

Today, we’re not talking about Oracle, though. Jacek is in the healthcare industry. Years ago, his company decided to shove off their patient scheduling and billing to a Software-as-a-Service enterprise product. Integrating that into all of their other systems has kept the development teams busy ever since, and each successive revision of the API has created a boatload of new work.

Currently, the API is on version six. V6 is notable because this is the first version which offers a REST/JSON API. V5 was a SOAP-based API.

Here’s an example message you might send to their scheduling services:

{
    "Id": 12,               
    "StartDateTime": "2018-06-16T11:00:00",
    "EndDateTime": "2018-06-16T11:30:00",             
    "Description": "Lunch"
}

That schedules a blackout window during which a service provider might be unavailable. Nothing about that seems too bad, does it?

What if, on the other hand, we wanted to schedule a wellness class, on say proper nutrition?

{
  "siteId": 7,
  "locationId": 2,
  "classScheduleId": 9,
  "classDescriptionId": 15,
  "resources": [
    {
      "id": 2,
      "name": "Conference Room 3"
    }
  ],
  "maxCapacity": 24,
  "webCapacity": 20,
  "staffId": 5,
  "staffName": "Sallybob Bobsen",
  "isActive": true,
  "startDate": "2018-07-22",
  "endDate": "2020-07-22",
  "startTime": "10:30:00",
  "endTime": "11:30:00",
  "daysOfWeek": [
    "Tuesday",
    "Thursday"
  ]
}

Well, that message on its own doesn’t look terrible either. But when you combine the two, you’ll notice that whether you use PascalCase or camelCase varies based on which endpoint you’re invoking.

If you want to create a client, you’ll post a document like:

{"FirstName": "Suebob", "LastName": "Bobsen"…}

I’ve skipped over a giant pile of fields- address, relationship to other clients, emergency contacts, etc. They’re not relevant, because I simply want to compare creating a client to updating a client:

{
  "Client": {
    "Id": "123456789",
    "LastName": "Bobsdottir",
    "CrossRegionalUpdate": false
  }
}

So we send different shaped documents to perform different operations on the same resource.

And it’s important to note, none of these resources have their own endpoints. In a RESTful webservice, I’d want to send my updates to https://theservicehost/clients/123456789 (as a PUT or PATCH, or at least a POST), while a create would just be a POST to https://theservicehost/clients.

Here, we have different routes for addclient and updateclient. So this isn’t a RESTful API at all- it’s a shoddily assembled RPC API, probably just a JSON wrapper around their SOAP API. It has no concept of resources. Fine, we can accept that and move on, and hey, the documentation at least lays out the correct shape of the documents, and what the fields mean, right?
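
For illustration, here is roughly how the two styles differ from the client's side, sketched in Python with the requests library. The exact routes and host are my guesses based on the operation names mentioned above, not the vendor's real API:

import requests

BASE = "https://theservicehost"

# RESTful style: the URL identifies the resource, the verb carries the intent
requests.post(f"{BASE}/clients", json={"FirstName": "Suebob", "LastName": "Bobsen"})
requests.put(f"{BASE}/clients/123456789", json={"LastName": "Bobsdottir"})

# RPC style: a separate route per operation, each expecting its own document shape
requests.post(f"{BASE}/addclient", json={"FirstName": "Suebob", "LastName": "Bobsen"})
requests.post(
    f"{BASE}/updateclient",
    json={"Client": {"Id": "123456789", "LastName": "Bobsdottir", "CrossRegionalUpdate": False}},
)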

Well, look back at that update message. See the CrossRegionalUpdate field? Well, per the documentation:

When CrossRegionalUpdate is omitted or set to true, the client’s updated information is propagated to all of the region’s sites.

So, it sounds like if you have a region that contains multiple sites, you want to set that to true (or omit it), right? Wrong. Most customers should set the flag explicitly to false. When Jacek talked to a technical support rep from the company, they said: “You would know if this should be set to true. Just set it to false.”

With all that, you can imagine how their error handling works, which is to say, it doesn’t. There are two things which happen when an error occurs: it either returns a 400 status code with no body, or it returns a 302 status code which redirects to another endpoint which serves up a plain-text version of your request. Your request, not the response to the request.

This nearly created a serious problem. The requests contain personally identifiable information about patients. Their initial naive logging approach was just to dump error responses to a log file, assuming that the API would never return an error with PII in it. At first, it looked like they should just treat a 302 as an error, and log the response, but they discovered in testing that would basically be dumping PII into an insecure store reviewed by IT support personnel.
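
If you're stuck integrating with something like this, about all you can do is wrap the calls defensively. Here's a minimal sketch (Python with the requests library; the function and endpoint names are mine, not from the article) that treats both failure modes as errors while never writing request or response bodies to a log:

import logging
import requests

log = logging.getLogger("vendor_api")

def call_vendor(url, payload):
    # Don't follow redirects: the 302 target echoes back the original request,
    # PII and all, and that must never end up in a log file.
    resp = requests.post(url, json=payload, allow_redirects=False, timeout=30)
    if resp.status_code in (302, 400):
        # Log metadata only, never bodies.
        log.error("vendor API error: status=%s endpoint=%s", resp.status_code, url)
        raise RuntimeError(f"vendor API call failed with HTTP {resp.status_code}")
    resp.raise_for_status()
    return resp.json()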

Jacek is “eagerly” looking forward to v7 of this API, which is certain to contain more “improvements”. The real question is: if this is their sixth attempt to get it right, what did v1 look like, and did anyone survive attempting to use it?


January 20, 2020

Worse Than Failure: Painful Self-Development

View my timesheet page

Daniel didn't believe the rumor at first. Whenever his company chased after the hottest new business trends, they usually pursued the worst trends imaginable. But word was that this time, they'd seen fit to mimic Google’s fabled "20% Time."

The invitation for a company-wide meeting soon landed in everyone's inbox, turning rumor into reality. For a moment, Daniel dared to dream about the various time-saving scripts he'd always wanted to write, and the applications he'd always wanted to re-implement from the ground up—all those things he'd been prevented from even thinking about as he'd been flung from one urgent project to the next. With 20% of his work time dedicated toward personal pursuits, all of that was set to change ...

He forced himself not to get excited. As a hardened corporate veteran, Daniel had been down this road before. It wasn't a question of whether this would go wrong, but of how it would go wrong.

At the company meeting, which Daniel attended via conference call, a parade of high-level executive voices gushed about the new corporate policy, dubbed "Take-12." "From now on, everyone gets one hour each month to put toward personal work and development!"

Twelve hours a year, out of the 2,000 that most people were expected to work. Daniel supposed "0.6% Time" didn't have quite the same ring to it.

And how would the new policy be reconciled with the company's already Byzantine timesheet and internal billing system? Well, the company didn't even attempt such a thing; there was simply no room in anyone's budget. Instead, someone had the bright idea of co-opting a different internal system for curating video playlists for online training courses. The idea was that each employee would create a new playlist of 12 hour-long videos, one for each month. The videos wouldn't actually be videos, just placeholders. By "watching" a "video," an employee would be logging that they'd spent one hour of time on personal development.

The premise started off sketchy, and only got sketchier as difficulties were encountered during implementation. Daniel could only shake his head in astonishment as he read the User Guide detailing what he had to do:

Set up your playlist each year:
1. Click “Technical Academy” on the home screen
2. The “Take 12: Development Time” modal will open
3. Click “Add to Playlist” in the title bar
4. The “Add to Playlist” menu will appear
5. Click to expand the menu options
6. Click on “Create a Playlist” option
7. Text entry boxes will appear for your new playlist
8. Type in a playlist title. E.g. “Take 12 Development Time 2019”
9. Type in a description
10. Click the “Add Playlist” button located under the text fields
11. Select your new playlist from the playlist dropdown
12. Click the “Add to Playlist” button
13. The word “Added” will appear
14. Click the home button in the screen
15. Your newly created “Take 12: Development Time” playlist will show in the “My Playlists” section of the home page (You may need to refresh the screen for it to show first time)

Using your Take 12 Development Time each month:
1. Click on your Take 12 playlist to open it
2. Expand the “Outstanding Content” section of your playlist
3. Click on the month you wish to register time for
4. A page will open which has some text on it, there’s nothing you need to do here.
5. Exit this window by using the ‘home’ button
6. When you’ve finished your hour of learning, click on your Take 12 playlist again to open it
7. Expand the “Outstanding Content” section of your playlist
8. The current month has a “Completed?” button next to it.
9. Click the “Completed?” button
10. The selected month will move from the “Outstanding Content” section to the “Completed Content” section
11. Close your playlist until next month

People ended up spending more time troubleshooting their buggy playlists than on personal development. With little by way of profit or benefit to justify its continued existence, "Take-12" was eventually done away with altogether. Daniel congratulated himself for having lowered his expectations.


XKCD: Unsubscribe Message

January 17, 2020

Worse Than Failure: Error'd: Text Should Go Here

"The fact that Microsoft values my PC's health over copyediting is why I thumbed up this window," Eric wrote.

 

"Great! Thanks Steam! How am I going to contact Paypal now?"

 

"Now serving order number Not a Number. A totally normal order number, don't question it," Pierre-Luc wrote.

 

"Sure, I had a pretty rough start getting my cheapo smart power outlet up and running, but hey, on the plus side it does look like 2 indeed got changed to 'two'" writes Bob.

 

Andy writes, "I'm not sure how much processing Dreamhost needs to do when making a password with the complexity of YXmztnS5vxA6, but this screenshot was taken after 45 seconds of loading."

 

"I depend heavily on Microsoft Null, but I can't really imagine what kind of updates it might require," Matthew F. wrote.

 


XKCD: Bad Map Projection: South America

January 16, 2020

Worse Than Failure: CodeSOD: Switch Off

There are certain things which you see in code that, at first glance, if you haven’t already learned better, look like they might almost be clever. One of those is any construct that starts with:

switch(true) {…}

It seems tempting at various points. Your cases can be boolean conditions now, but you can also collapse cases together, getting tricky with breaks to build complex logic. It’s more compact than a chain of ifs. It’s also almost always the wrong thing to do.

Kasha stumbled across this while tracking down a bug:

    // The variable names in this code have been anonymized to protect the guilty. In the original they were actually ok.
    private function foo($a, $b)
    {
        switch (true){
            case ($a&&$b): return 'b';
                break;
            case (!$a&&!$b): return 'c';
                break;
            case $a: return 'a';
                break;
            casedefault: return 'unknown';
                break;
        }
    }

As Kasha’s comment tells us, we won’t judge by the variable names in use here. Even so, the awkward switch also contains awkward logic, and seems designed for unreadability. It’s not the most challenging logic to trace. Even Kasha writes “I comprehended the piece just fine at first while looking at it, and only later it hit me how awkward it is.” And that’s the real issue: it’s awkward. It’s not eye-bleedingly bad. It’s not cringe-worthy. It’s just the sort of thing that you see in your codebase and grumble about each time you see it.

And that’s the real WTF in this case. The developer responsible for the code produces a lot of this kind of code. They never use an if if they can compact it into a switch. They favor while(true) with breaks over sensible while loops.

And in this case, they left a crunch-induced typo which created the bug: casedefault is not part of the PHP switch syntax. Like most languages with a switch, the last condition should simply be default:.


January 15, 2020

XKCD: Tattoo Ideas

January 13, 2020

XKCD: JPEG2000

December 09, 2019

etbe: systemd-nspawn and Private Networking

Currently there are two things I want to do with my PC at the same time: one is watching streaming services like ABC iView (which won’t run from non-Australian IP addresses) and the other is torrenting over a VPN. I had considered doing something ugly with iptables to try and get routing done on a per-UID basis but that seemed too difficult. At the time I wasn’t aware of the ip rule add uidrange [1] option. So setting up a private networking namespace with a systemd-nspawn container seemed like a good idea.

Chroot Setup

For the chroot (which I use as a slang term for a copy of a Linux installation in a subdirectory) I used a btrfs subvol that’s a snapshot of the root subvol. The idea is that when I upgrade the root system I can just recreate the chroot with a new snapshot.

To get this working I created files in the root subvol which are used for the container.

I created a script like the following named /usr/local/sbin/container-sshd to launch the container. It sets up the networking and executes sshd. The systemd-nspawn program is designed to launch init, but that’s not required; I prefer to just launch sshd so there’s only one running process in a container that isn’t being actively used.

#!/bin/bash

# restorecon commands only needed for SE Linux
/sbin/restorecon -R /dev
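# sshd needs a writable /run with a /run/sshd privilege separation directory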
/bin/mount none -t tmpfs /run
/bin/mkdir -p /run/sshd
/sbin/restorecon -R /run /tmp
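# configure the container end of the private network link and its default route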
/sbin/ifconfig host0 10.3.0.2 netmask 255.255.0.0
/sbin/route add default gw 10.2.0.1
exec /usr/sbin/sshd -D -f /etc/ssh/sshd_torrent_config

How to Launch It

To set up the container I used a command like "/usr/bin/systemd-nspawn -D /subvols/torrent -M torrent --bind=/home -n /usr/local/sbin/container-sshd".

First I had tried the --network-ipvlan option which creates a new IP address on the same MAC address. That gave me an interface iv-br0 on the container that I could use normally (br0 being the bridge used in my workstation as its primary network interface). The IP address I assigned to that was in the same subnet as br0, but for some reason that’s unknown to me (maybe an interaction between bridging and network namespaces) I couldn’t access it from the host; I could only access it from other hosts on the network. I then tried the --network-macvlan option (to create a new MAC address for virtual networking), but that had the same problem with accessing the IP address from the local host outside the container, as well as problems with MAC redirection to the primary MAC of the host (again maybe an interaction with bridging).

Then I tried just the “-n” option which gave it a private network interface. That created an interface named ve-torrent on the host side and one named host0 in the container. Using ifconfig and route to configure the interface in the container before launching sshd is easy. I haven’t yet determined a good way of configuring the host side of the private network interface automatically.

I had to use a bind for /home because /home is a subvol and therefore doesn’t get included in the container by default.

How it Works

Now when it’s running I can just “ssh -X” to the container and then run graphical programs that use the VPN while at the same time running graphical programs on the main host that don’t use the VPN.

Things To Do

Find out why --network-ipvlan and --network-macvlan don’t work with communication from the same host.

Find out why --network-macvlan gives errors about MAC redirection when pinging.

Determine a good way of setting up the host side after the systemd-nspawn program has run.

Find out if there are better ways of solving this problem; this way works but might not be ideal. Comments welcome.

November 18, 2019

etbe: 4K Monitors

A couple of years ago a relative who uses a Linux workstation I support bought a 4K (4096*2160 resolution) monitor. That meant that I had to get 4K working, which was 2 years of pain for me and probably not enough benefit for them to justify it. Recently I had the opportunity to buy some 4K monitors at a low enough price that it didn’t make sense to refuse so I got to experience it myself.

The Need for 4K

I’m getting older and my vision is decreasing as expected. I recently got new glasses and got a pair of reading glasses as a reduced ability to change focus is common as you get older. Unfortunately I made a mistake when requesting the focus distance for the reading glasses and they work well for phones, tablets, and books but not for laptops and desktop computers. Now I have the option of either spending a moderate amount of money to buy a new pair of reading glasses or just dealing with the fact that laptop/desktop use isn’t going to be as good until the next time I need new glasses (sometime 2021).

I like having lots of terminal windows on my desktop. For common tasks I might need a few terminals open at a time and if I get interrupted in a task I like to leave the terminal windows for it open so I can easily go back to it. Having more 80*25 terminal windows on screen increases my productivity. My previous monitor was 2560*1440 which for years had allowed me to have a 4*4 array of non-overlapping terminal windows as well as another 8 or 9 overlapping ones if I needed more. 16 terminals allows me to ssh to lots of systems and edit lots of files in vi. Earlier this year I had found it difficult to read the font size that previously worked well for me so I had to use a larger font that meant that only 3*3 terminals would fit on my screen. Going from 16 non-overlapping windows and an optional 8 overlapping to 9 non-overlapping and an optional 6 overlapping is a significant difference. I could get a second monitor, and I won’t rule out doing so at some future time. But it’s not ideal.

When I got a 4K monitor working properly I found that I could go back to a smaller font that allowed 16 non overlapping windows. So I got a real benefit from a 4K monitor!

Video Hardware

Version 1.0 of HDMI released in 2002 only supports 1920*1080 (FullHD) resolution. Version 1.3 released in 2006 supported 2560*1440. Most of my collection of PCIe video cards have a maximum resolution of 1920*1080 over HDMI, so it seems that they only support HDMI 1.2 or earlier. When investigating this I wondered what version of PCIe they were using; the command “dmidecode |grep PCI” gives that information. It seems that at least one PCIe video card supports PCIe 2 (released in 2007) but not HDMI 1.3 (released in 2006).

Many video cards in my collection support 2560*1440 with DVI but only 1920*1080 with HDMI. As 4K monitors don’t support DVI input that meant that when initially using a 4K monitor I was running in 1920*1080 instead of 2560*1440 with my old monitor.

I found that one of my old video cards supported 4K resolution: it has an NVidia GT630 chipset (here’s the page with specifications for that chipset [1]). It seems that because I have a video card with 2G of RAM I have the “Kepler” variant which supports 4K resolution. I got the video card in question because it uses PCIe*8 and I had a workstation that only had PCIe*8 slots and I didn’t feel like cutting a card down to size (which is apparently possible but not recommended); it is also fanless (quiet), which is handy if you don’t need a lot of GPU power.

A couple of months ago I checked the cheap video cards at my favourite computer store (MSY) and none of the cheap ones supported 4K resolution. Now it seems that all the video cards they sell could support 4K; by “could” I mean that a Google search of the chipset says it’s possible, but of course some surrounding chips could fail to support it.

The GT630 card is great for text, but the combination of it with an i5-2500 CPU (rating 6353 according to cpubenchmark.net [3]) doesn’t allow playing Netflix full-screen, and 1920*1080 videos scaled to full-screen sometimes get mplayer messages about the CPU being too slow. I don’t know how much of this is due to the CPU and how much is due to the graphics hardware.

When trying the same system with an ATI Radeon R7 260X/360 graphics card (PCIe*16, and it draws enough power to need a separate connection to the PSU) the Netflix playback appears better, but mplayer seems no better.

I guess I need a new PC to play 1920*1080 video scaled to full-screen on a 4K monitor. No idea what hardware will be needed to play actual 4K video. Comments offering suggestions in this regard will be appreciated.

Software Configuration

For GNOME apps (which you will probably run even if like me you use KDE for your desktop) you need to run commands like the following to scale menus etc:

gsettings set org.gnome.settings-daemon.plugins.xsettings overrides "[{'Gdk/WindowScalingFactor', <2>}]"
gsettings set org.gnome.desktop.interface scaling-factor 2

For KDE run the System Settings app, go to Display and Monitor, then go to Displays and Scale Display to scale things.

The Arch Linux Wiki page on HiDPI [2] is good for information on how to make apps work with high DPI (or regular screens for people with poor vision).

Conclusion

4K displays are still rather painful, both in hardware and software configuration. For serious computer use it’s worth the hassle, but it doesn’t seem to be good for general use yet. 2560*1440 is pretty good and works with much more hardware and requires hardly any software configuration.

November 16, 2019

Dave Hall: DrupalSouth Diversity Scholarship Winner Announced

A few weeks ago we announced our diversity scholarship for DrupalSouth. Before announcing the winner I want to talk a bit about our experience doing this for the first time.

DrupalSouth is the largest Drupal event held in Oceania every year. It provides a great marketing opportunity for businesses wanting to promote their products and services to the Drupal community. Dave Hall Consulting planned to sponsor DrupalSouth to promote our new training business - Getting It Live training. By the time we got organised all of the (affordable) sponsorship opportunities had gone. After considering various opportunities around the event we felt the best way of investing a similar amount of money and giving something back to the community was through a diversity scholarship.

The community provided positive feedback about the initiative. However despite the enthusiasm and working our networks to get a range of applicants, we only ended up with 7 applicants. They were all guys. One applicant was from Australia, the rest were from overseas. About half the applicants dropped out when contacted to confirm that they could cover their own travel and visa expenses.

We are likely to offer other scholarships in the future. We will start earlier and explore other channels for promoting the program.

The scholarship has been awarded to Yogesh Ingale, from Mumbai, India. Over the last 3 years Yogesh has been employed by Tata Consultancy Services’ digital operations team as a DevOps Engineer. During this time he has worked with Drupal, Cloud Computing, Python and Web Technologies. Yogesh is interested in automating processes. When he’s not working, Yogesh likes to travel, automate things and write blog posts. Disclaimer: I know Yogesh through my work with one of my clients. Sometimes the Drupal community feels pretty small.

Congratulations Yogesh! I am looking forward to seeing you in Hobart.

If you want to meet Yogesh before DrupalSouth, we still have some seats available for our 2 day git training course that’s running on 25-26 November. If you won’t be in Hobart, contact us to discuss your training needs.

November 03, 2019

etbe: KMail Crashing and LIBGL

One problem I’ve had recently on two systems with NVidia video cards is KMail crashing (SEGV) while reading mail. Sometimes it goes for months without having problems, and then it gets into a state where reading a few messages (or sometimes reading one particular message) causes a crash. The crash happens somewhere in the Mesa library stack.

In an attempt to investigate this I tried running KMail via ssh (as that precludes a lot of the GL stuff), but that crashed in a different way (I filed an upstream bug report [1]).

I have discovered a workaround for this issue: setting the environment variable LIBGL_ALWAYS_SOFTWARE=1 makes things work. At this stage I can’t be sure exactly where the problems are. As it’s certain KMail operations that trigger it I think that’s evidence of problems originating in KMail, but the end result when it happens often includes a kernel error log, so there’s probably also a problem in the Nouveau driver. I spent quite a lot of time investigating this, including recompiling most of the library stack with debugging mode, and didn’t get much of a positive result. Hopefully putting it out there will help the next person who has such issues.

Here is a list of environment variables that can be set to debug LIBGL issues (strangely I couldn’t find documentation on this when Googling it). If you are stuck with a problem related to LIBGL you can try setting each of these to “1” in turn and see if it makes a difference. That can either be for the purpose of debugging a problem or creating a workaround that allows you to run the programs you need to run. I don’t know why GL is required to read email.

LIBGL_DIAGNOSTIC
LIBGL_ALWAYS_INDIRECT
LIBGL_ALWAYS_SOFTWARE
LIBGL_DRI3_DISABLE
LIBGL_NO_DRAWARRAYS
LIBGL_DEBUG
LIBGL_DRIVERS_PATH
LIBGL_DRIVERS_DIR
LIBGL_SHOW_FPS

October 29, 2019

Dave Hall: Buying an Apple Watch for 7USD

For DrupalCon Amsterdam, Srijan ran a competition with the prize being an Apple Watch 5. It was a fun idea. Try to get a screenshot of an animated GIF slot machine showing 3 matching logos and tweet it.

Try your luck at @DrupalConEur Catch 3 in a row and win an #AppleWatchSeries5. To participate, get 3 of the same logos in a series, grab a screenshot and share it with us in the comment section below. See you in Amsterdam! #SrijanJackpot #ContestAlert #DrupalCon

I entered the competition.

I managed to score 3 of the no logo logos. That's gotta be worth something, right? #srijanJackpot

The competition had a flaw. The winner was selected based on likes.

After a week I realised that I wasn’t going to win. Others were able to garner more likes than I could. Then my hacker mindset kicked in.

I thought I’d find out how much 100 likes would cost. A quick search revealed likes cost pennies apiece. At this point I decided that instead of buying an easy win, I’d buy a ridiculous number of likes. 500 likes only cost 7USD. Having a blog post about gaming the system was a good enough prize for me.

Receipt: 500 likes for 7USD

I was unsure how things would go. I was supposed to get my 500 likes across 10 days. For the first 12 hours I got nothing. I thought I’d lost my money on a scam. Then the trickle of likes started. Every hour I’d get 2-3 likes, mostly from Eastern Europe. Every so often I’d get a retweet or a bonus like on a follow up comment. All up I got over 600 fake likes. Great value for money.

Today Srijan awarded me the watch. I waited until after they’d finished taking photos before coming clean. Pics or it didn’t happen and all that. They insisted that I still won the competition without the bought likes.

The prize being handed over

Think very carefully before launching a competition that involves social media engagement. There’s a whole fake engagement economy.

October 04, 2019

Dave Hall: Announcing the DrupalSouth Diversity Scholarship

Over the years I have benefited greatly from the generosity of the Drupal Community. In 2011 people sponsored me to write lines of code to get me to DrupalCon Chicago.

Today Dave Hall Consulting is a very successful small business. We have contributed code, time and content to Drupal. It is time for us to give back in more concrete terms.

We want to help someone from an underrepresented group take their career to the next level. This year we will provide a Diversity Scholarship for one person to attend DrupalSouth, our 2 day Gettin’ Git training course and 5 nights at the conference hotel. This will allow this person to attend the premier Drupal event in the region while also learning everything there is to know about git.

To apply for the scholarship, fill out the form by 23:59 AEST 19 October 2019 to be considered. (Extended from 12 October)

The winner has been announced.

June 29, 2019

etbe: Long-term Device Use

It seems to me that Android phones have recently passed the stage where hardware advances are well ahead of software bloat. This is the point that desktop PCs passed about 15 years ago and laptops passed about 8 years ago. For just over 15 years I’ve been avoiding buying desktop PCs, the hardware that organisations I work for throw out is good enough that I don’t need to. For the last 8 years I’ve been avoiding buying new laptops, instead buying refurbished or second hand ones which are more than adequate for my needs. Now it seems that Android phones have reached the same stage of development.

3 years ago I purchased my last phone, a Nexus 6P [1]. Then 18 months ago I got a Huawei Mate 9 as a warranty replacement [2] (I had swapped phones with my wife so the phone I was using which broke was less than a year old). The Nexus 6P had been working quite well for me until it stopped booting, but I was happy to have something a little newer and faster to replace it at no extra cost.

Prior to the Nexus 6P I had a Samsung Galaxy Note 3 for 1 year 9 months which was a personal record for owning a phone and not wanting to replace it. I was quite happy with the Note 3 until the day I fell on top of it and cracked the screen (it would have been ok if I had just dropped it). While the Note 3 still has my personal record for continuous phone use, the Nexus 6P/Huawei Mate 9 have the record for going without paying for a new phone.

A few days ago when browsing the Kogan web site I saw a refurbished Mate 10 Pro on sale for about $380. That’s not much money (I usually have spent $500+ on each phone) and while the Mate 9 is still going strong the Mate 10 is a little faster and has more RAM. The extra RAM is important to me as I have problems with Android killing apps when I don’t want it to. Also the IP67 protection will be a handy feature. So that phone should be delivered to me soon.

Some phones are getting ridiculously expensive nowadays (who wants to walk around with a $1000+ Pixel?) but it seems that the slightly lower end models are more than adequate and the older versions are still good.

Cost Summary

If I can buy a refurbished or old model phone every 2 years for under $400 that will make using a phone cost about $0.50 per day. The Nexus 6P cost me $704 in June 2016 which means that for the past 3 years my phone cost was about $0.62 per day.

It seems that laptops tend to last me about 4 years [3], and I don’t need high-end models (I even used one from a rubbish pile for a while). The last laptops I bought cost me $289 for a Thinkpad X1 Carbon [4] and $306 for the Thinkpad T420 [5]. That makes laptops about $0.20 per day.

In May 2014 I bought a Samsung Galaxy Note 10.1 2014 edition tablet for $579. That is still working very well for me today; apart from only having 32G of internal storage space and an OS update preventing Android apps from writing to the micro SD card (so I have to use USB to copy TV shows onto it), there’s nothing more I need from a tablet. Strangely I even get good battery life out of it: I can use it for a couple of hours without the battery running out. Battery life isn’t nearly as good as when it was new, but it’s still OK for my needs. As Samsung stopped providing security updates I can’t use the tablet as a SSH client, but now that my primary laptop is a small and light model that’s less of an issue. Currently that tablet has cost me just over $0.30 per day and it’s still working well.

Currently it seems that my hardware expense for the foreseeable future is likely to be about $1 per day: 20 cents for the laptop, 30 cents for the tablet, and 50 cents for the phone. The overall expense is about $1.66 per day as I’m on a $20 per month pre-paid plan with Aldi Mobile.

Saving Money

A laptop is very important to me, but the amount of money I’m spending doesn’t reflect that. It seems that I don’t have any option for spending more on a laptop (the Thinkpad X1 Carbon I have now is just great and there’s no real option for getting more utility by spending more). I also don’t have any option to spend less on a tablet; 5 years is a great lifetime for a device that is practically impossible to repair (repair will cost a significant portion of the replacement cost).

I hope that the Mate 10 can last at least 2 years which will make it a new record for low cost of ownership of a phone for me. If app vendors can refrain from making their bloated software take 50% more RAM in the next 2 years that should be achievable.

The surprising thing I learned while writing this post is that my mobile phone expense is the largest of all my expenses related to mobile computing. Given that I want to get good reception in remote areas (needs to be Telstra or another company that uses their network) and that I need at least 3GB of data transfer per month it doesn’t seem that I have any options for reducing that cost.

August 25, 2018

Dave Hall: AWS Parameter Store

Anyone with a moderate level of AWS experience will have learned that Amazon offers more than one way of doing something. Storing secrets is no exception. 

It is possible to spin up Hashicorp Vault on AWS using an official Amazon quick start guide. The down side of this approach is that you have to maintain it.

If you want an "AWS native" approach, you have 2 services to choose from. As the name suggests, Secrets Manager provides some secrets management tools on top of the store. This includes automagic rotation of AWS RDS credentials on a regular schedule. For the first 30 days the service is free, then you start paying per secret per month, plus API calls.

There is a free option, Amazon's Systems Manager Parameter Store. This is what I'll be covering today.

Structure

It is easy when you first start out to store all your secrets at the top level. After a while you will regret this decision. 

Parameter Store supports hierarchies. I recommend using them from day one. Today I generally use /[appname]-[env]/[KEY]. After some time with this scheme I am finding that /[appname]/[env]/[KEY] feels like it will be easier to manage. IAM permissions support paths and wildcards, so either scheme will work.

If you need to migrate your secrets, use the Parameter Store namespace migration script.

Access Controls

As with most Amazon services, IAM controls access to Parameter Store.

Parameter Store allows you to store your values as plain text or encrypted with a KMS key. For encrypted values the user must have grants on both the Parameter Store value and the KMS key. For consistency I recommend encrypting all your parameters.

If you have a monolith, a key per application per environment is likely to work well. If you have a collection of microservices, having a key per service per environment becomes difficult to manage. In this case share a key between several services in the same environment.

Here is an IAM policy for a Lambda function to access a hierarchy of values in Parameter Store:
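
The policy itself didn't survive syndication, but it would look something like the following sketch. The account ID, region, parameter path and key ID are placeholders of mine, not values from the original post:

import json

lambda_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ssm:GetParameter", "ssm:GetParameters", "ssm:GetParametersByPath"],
            "Resource": "arn:aws:ssm:us-east-1:111111111111:parameter/my-app-dev/*",
        },
        {
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": "arn:aws:kms:us-east-1:111111111111:key/REPLACE-WITH-KEY-ID",
        },
    ],
}
print(json.dumps(lambda_read_policy, indent=2))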

To allow your developers to manage the parameters in dev you will need a policy that looks like this:
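
Again the original policy is missing here; a sketch of the developer version would add the write actions on the same placeholder resources:

developer_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ssm:GetParameter", "ssm:GetParametersByPath",
                       "ssm:PutParameter", "ssm:DeleteParameter"],
            "Resource": "arn:aws:ssm:us-east-1:111111111111:parameter/my-app-dev/*",
        },
        {
            # DescribeParameters does not support resource-level restrictions
            "Effect": "Allow",
            "Action": ["ssm:DescribeParameters"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": ["kms:Encrypt", "kms:Decrypt"],
            "Resource": "arn:aws:kms:us-east-1:111111111111:key/REPLACE-WITH-KEY-ID",
        },
    ],
}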

Amazon has great documentation on controlling access to Parameter Store and KMS.

Adding Parameters

Amazon allows you to store almost any string up to 4KB in length in the Parameter Store. This gives you a lot of flexibility.

Parameter Store supports deep hierarchies. You will find this becomes annoying to manage. Use hierarchies to group your values by application and environment. Within the hierarchy use a flat structure. I recommend using lower case letters with dashes between words for your paths. For the parameter keys use upper case letters with underscores. This makes it easy to differentiate the two when searching for parameters.

Parameter Store encodes everything as strings. There may be cases where you want to store an integer as an integer, or a more complex data structure. You could use a naming convention to differentiate your types. I found it easiest to encode everything as JSON. When pulling values from the store I JSON decode them. The downside is that strings must be wrapped in double quotes. This is offset by the flexibility of being able to encode objects and use numbers.
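
For example (my illustration, not code from the post):

import json

json.dumps("s3cret")           # '"s3cret"' - plain strings gain surrounding quotes
json.dumps({"pool_size": 10})  # '{"pool_size": 10}' - objects and numbers survive intact
json.loads('"s3cret"')         # 's3cret' - decoding reverses it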

It is possible to add parameters to the store using 3 different methods. I generally find the AWS web console easiest when adding a small number of entries. Rather than walking you through this, Amazon have good documentation on adding values. Remember to always use "secure string" to encrypt your values.

Adding parameters via boto3 is straightforward. Once again it is well documented by Amazon.

Finally, you can maintain parameters with a little bit of code. In this example I do it with Python.
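
The code block didn't make it into this copy of the post, but a minimal boto3 sketch along the lines described (the helper name and KMS alias are my assumptions) would be:

import json
import boto3

ssm = boto3.client("ssm")

def put_secret(app, env, key, value, kms_key="alias/my-app-dev"):
    """Store a JSON-encoded value as an encrypted parameter under /[app]-[env]/[KEY]."""
    ssm.put_parameter(
        Name=f"/{app}-{env}/{key}",
        Value=json.dumps(value),
        Type="SecureString",
        KeyId=kms_key,
        Overwrite=True,
    )

put_secret("my-app", "dev", "DB_PASSWORD", "s3cret")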

Using Parameters

I have used Parameter Store from Python and the command line. It is easier to use it from Python.

My example assumes that it is a Lambda function running with the policy from earlier. The function is called my-app-dev. This is what my code looks like:
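
The code itself is missing from this syndicated copy; a sketch of what it presumably looks like, using boto3's get_parameters_by_path paginator (the path and helper names are assumptions), follows:

import json
import boto3

ssm = boto3.client("ssm")

def load_config(path="/my-app-dev/"):
    """Fetch every parameter under `path` and JSON-decode the values."""
    config = {}
    paginator = ssm.get_paginator("get_parameters_by_path")
    for page in paginator.paginate(Path=path, Recursive=True, WithDecryption=True):
        for param in page["Parameters"]:
            key = param["Name"].rsplit("/", 1)[-1]
            config[key] = json.loads(param["Value"])
    return config

def lambda_handler(event, context):
    config = load_config()
    # ... use config["DB_PASSWORD"] and friends here ...
    return {"statusCode": 200}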

If you want to avoid loading your config each time your Lambda function is called you can store the results in a global variable. This leverages Amazon's feature that doesn't clear global variables between function invocations. The catch is that your function won't pick up parameter changes without a code deployment. Another option is to put in place logic for periodic purging of the cache.
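
A sketch of that approach, building on the load_config function above (the TTL-based refresh is my addition to illustrate the periodic purge idea):

import time

_CACHE = {"config": None, "loaded_at": 0.0}
CACHE_TTL = 300  # seconds; refresh periodically so parameter changes are eventually picked up

def cached_config(path="/my-app-dev/"):
    if _CACHE["config"] is None or time.time() - _CACHE["loaded_at"] > CACHE_TTL:
        _CACHE["config"] = load_config(path)  # from the sketch above
        _CACHE["loaded_at"] = time.time()
    return _CACHE["config"]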

On the command line things are a little harder to manage if you have more than 10 parameters. To export a small number of entries as environment variables, you can use this one-liner:

Make sure you have jq installed and the AWS cli installed and configured.

Conclusion

Amazon's System Manager Parameter Store provides a secure way of storing and managing secrets for your AWS based apps. Unlike Hashicorp Vault, Amazon manages everything for you. If you don't need the more advanced features of Secrets Manager you don't have to pay for them. For most users Parameter Store will be adequate.