Planet Javasummit

August 18, 2018

Tim Bray Diversity “Goals”

Many of us (speaking from the tech sector where I work) think the sector’s workplace diversity isn’t very good. Specifically, there aren’t enough women. Large companies — all the ones I’ve worked for, anyhow — have goals, and generally work hard at meeting them. Many companies now say they care about diversity, and have goals around improving it. But improvement is painfully slow; why? Maybe part of it is that those aren’t the same kind of “goals”.

How business goals work

When I say “large companies have goals”, I mean that in a very specific way. Each planning cycle, company groups and their managers take on a set of explicitly written-down goals for that planning cycle. Goals are tracked in a simple database and at the end of the year, each group/manager gets a pass/fail on each. The way that goals are defined and refined and agreed to and recorded and structured differs from place to place; at Google and several other big high-techs, they’re called OKRs.

The percentage of goal completion that’s regarded as “good” also varies, but it’s never 100%. The idea is that your reach should exceed your grasp, and if you score 100 you might have been sandbagging, choosing insufficiently ambitious goals to make yourself look good.

Goal completion is deadly serious business among most management types I’ve known, and the number has a real effect on career trajectory and thus compensation. I don’t think it’s controversial to say that in business, those things matter a whole lot.

Goals are sorted into “output goals” (example: $100M in sales for a product) and “input goals” (example: five customer visits per week by every salesperson). They can be technical too, around things like uptime, latency, and trouble tickets.

Input and output are not mutually exclusive. Input goals are at some level more “reasonable” because they are things that an organization controls directly. Output goals are more aggressive, but also liberating because they turn teams loose to figure out what the best path is to getting that sales number or uptime or whatever.

Generally, I like this management practice: Setting goals and measuring performance against them. It drives clarity about what you’re trying to achieve and how well you’re doing.

Diversity goal questions

Here’s a question: For any given company, do its diversity goals work like regular company goals? That is to say, do they go into the percentage completion number? The number that managers get judged on and rewarded for meeting?

I actually don’t know what the answers would be for most high-techs, but I suspect it’s “Not often enough.” I suspect that because the diversity numbers across the high-tech landscape are universally pretty bad, and because the people in management are generally, you know, pretty smart, and will come up with remarkably clever ways to meet the goals they’re getting judged on.

I’ve also observed that while the numbers are unsatisfying in the large, there are teams who consistently manage to do better than others at hiring and retaining women. And by the way, anecdotally, those are good teams (with good managers); the kind who get things done and have low attrition rates and happy customers.

Here’s another question: For diversity, should we be talking input or output goals? I say: Why not both? I’m not expert on the state of the art in building diversity, but wherever we know what the equivalent of “five customer visits per week” is, let’s sign teams up for a few of those. And yeah, output goals. Let’s ask managers to double the proportion of women engineers, measure whether they do it or not, and leave the details to them. The good ones will figure out a way to get there.

It’s like this: If you claim you have diversity goals, but your managers’ careers don’t depend on their performance against those goals, you don’t really.

August 17, 2018

Worse Than Failure Error'd: The Illusion of Choice

"So I can keep my current language setting or switch to Pakistani English. THERE IS NO IN-BETWEEN," Robert K. writes.

 

"I guess robot bears aren't allowed to have the honey, or register the warranty on their trailer hitch" wrote Charles R.

 

"Not to be outdone by King's Cross's platform 0 [and fictional platform 9 3/4], it looks like Marylebone is jumping on the weird band-wagon," David L. writes.

 

Alex wrote, "If the percentage is to be believed, I'm downloading Notepad+++++++++++++++."

 

"Hmm, so many choices?" writes Dave A.

 

Ergin S. writes, "My card number starts with 36 and is 14 digits long so it might take me a little while to get there, but thanks to the dev for at least trying to make things more convenient."

 


XKCD Equations

August 16, 2018

Worse Than Failure Representative Line: Tern This Statement Around and Go Home

When looking for representative lines, ternaries are almost easy mode. While there’s nothing wrong with a good ternary expression, they have a bad reputation because they can quickly drift out towards “utterly unreadable”.

Or, sometimes, they can drift towards “incredibly stupid”. This anonymous submission is a pretty brazen example of the latter:

return (accounts == 1 ? 1 : accounts)

Presumably, once upon a time, this was a different expression. The code changed. Nobody thought about what was changing or why. They just changed it and moved on. Or, maybe, they did think about it, and thought, “someday this might go back to being complicated again, so I’ll leave the ternary in place”, which is arguably a worse approach.

We’ll never know which it was.

Since that was so simple, let’s look at something a little uglier, as a bonus. “WDPS” sends along a second ternary violation; this one has the added bonus of being in Objective-C. This code was written by a contractor (whitespace added to keep the article readable; the original is all on one line):

    NSMutableArray *buttonItems = [NSMutableArray array];
    buttonItems = !negSpacer && !self.buttonCog
            ? @[] : (!negSpacer && self.buttonCog 
            ? @[self.buttonCog] : (!self.buttonCog && negSpacer 
            ? @[negSpacer] : @[negSpacer,self.buttonCog]));

This is a perfect example of a ternary which simply got out of control while someone tried to play code golf. Either this block adds no items to buttonItems, or it adds a buttonCog, or it adds a negSpacer, or it adds both. Which means it could more simply be written as:

    NSMutableArray *buttonItems = [NSMutableArray array];
    if (negSpacer) {
        [buttonItems addObject:negSpacer];
    }
    if (self.buttonCog) {
        [buttonItems addObject:self.buttonCog];
    }

August 15, 2018

Worse Than Failure CodeSOD: Isn't There a Vaccine For MUMPS?

Alex F is suffering from a disease. No, it’s not disfiguring, it’s not fatal. It’s something much worse than that.

It’s MUMPS.

MUMPS is a little bit infamous. MUMPS is its own WTF.

Alex is a support tech, which in their organization means that they sometimes write up tickets, or for simple problems even fix the code themselves. For this issue, Alex wrote up a ticket, explaining that the user was submitting a background job to run a report, but instead got an error.

Alex sent it to the developer, and the developer replied with a one line code fix:

 i $$zRunAsBkgUser(desc_$H,"runReportBkg^KHUTILLOCMAP",$na(%ZeData)) d
 . w !,"Search has been started in the background."
 e  w !,"Search failed to start in the background."

Alex tested it, and… it didn’t work. So, fully aware of the risks they were taking, Alex dug into the code, starting with the global function $$zRunAsBkgUser.

Before I post any more code, I am legally required to offer a content warning: the rest of this article is going to be full of MUMPS code. This is not for the faint of heart, and TDWTF accepts no responsibility for your mental health if you continue. Don’t read the rest of this article if you have eaten any solid food in the past twenty minutes. If you experience a rash, this may be a sign of a life threatening condition, and you should seek immediate treatment. Do not consume alcohol while reading this article. Save that for after, you’ll need it.

 ;---------
  ; NAME:         zRunAsBkgUser
  ; SCOPE:        PUBLIC
  ; DESCRIPTION:  Run the specified tag as the correct OS-level background user. The process will always start in the system default time zone.
  ; PARAMETERS:
  ;  %uJobID (I,REQ)      - Free text string uniquely identifying the request
  ;                         If null, the tag will be used instead but -- as this is not guaranteed unique -- this ID should be considered required
  ;  %uBkgTag (I,REQ)     - The tag to run
  ;  %uVarList (I,OPT)    - Variables to be passed from the current process' symbol table
  ;  %uJobParams (I,OPT)  - An array of additional parameters to be passed to %ZdUJOB
  ;                         Should be passed with the names of the parameters in %ZdUJOB, e.g. arr("%ZeDIR")="MGR"
  ;                         Currently supports only: %ZeDIR, %ZeNODE, %ZeBkOv
  ;  %uError (O,OPT)      - Error message in case of failure
  ;  %uForceBkg (I,OPT)   - If true, will force the request to be submitted to %ZeUMON
  ;  %uVerifyCond (I,OPT) - If null, this tag will return immediately after submitting the request
  ;                         If non-null, should contain code that will be evaluated to determine the success or failure of the request
  ;                         Will be executed as s @("result=("_%uVerifyCond_")")
  ;  %uVerifyTmo (I,OPT)  - Length of time, in seconds, to try to verify the success of the request
  ;                         Defaults to 1 second
  ; RETURNS:      If %uVerifyCond is not set: 1 if it's acceptable to run, 0 otherwise
  ;               If %uVerifyCond is set: 1 if the condition is verified after the specified timeout, 0 otherwise
zRunAsBkgUser(%uJobID,%uBkgTag,%uVarList,%uJobParams,%uError,%uForceBkg,%uVerifyCond,%uVerifyTmo) q $$RunBkgJob^%ZeUMON($$zCurrRou(),%uJobID,%uBkgTag,%uVarList,.%uJobParams,.%uError,%uForceBkg,%uVerifyCond,%uVerifyTmo) ;;#eof#  ;;#inline#

Thank the gods for comments, I guess. Alex’s eyes locked upon the sixth parameter: %uForceBkg. That seems a bit odd for a function which is supposed to be submitting a background job. The zRunAsBkgUser function is otherwise quite short; it’s a wrapper around RunBkgJob.

Let’s just look at the comments:

 ;---------
  ; NAME:         RunBkgJob
  ; SCOPE:        INTERNAL
  ; DESCRIPTION:  Submit request to monitor daemon to run the specified tag as a background process
  ;               Used to ensure the correct OS-level user in the child process
  ;               Will fork off from the current process if the correct OS-level user is already specified,
  ;               unless the %uForceBkg flag is set. It will always start in the system default time zone.
  ; KEYWORDS:     run,background,job,submit,%ZeUMON,correct,user
  ; CALLED BY:    ($$)zRunAsBkgUser
  ; PARAMETERS:
  ;  %uOrigRou (I,REQ)    - The routine submitting the request
  ;  %uJobID (I,REQ)      - Free text string uniquely identifying the request
  ;                         If null, the tag will be used instead but -- as this is not guaranteed unique -- this ID should be considered required
  ;  %uBkgTag (I,REQ)     - The tag to run
  ;  %uVarList (I,OPT)    - Variables to be passed from the current process' symbol table
  ;                         If "", pass nothing; if 1, pass everything
  ;  %uJobParams (I,OPT)  - An array of additional parameters to be passed to %ZdUJOB
  ;                         Should be passed with the names of the parameters in %ZdUJOB, e.g. arr("%ZeDIR")="MGR"
  ;                         Currently supports only: %ZeDIR, %ZeNODE, %ZeBkOv
  ;  %uError (O,OPT)      - Error message in case of failure
  ;  %uForceBkg (I,OPT)   - If true, will force the request to be submitted to %ZeUMON
  ;  %uVerifyCond (I,OPT) - If null, this tag will return immediately after submitting the request
  ;                         If non-null, should contain code that will be evaluated to determine the success or failure of the request
  ;                         Will be executed as s @("result=("_%uVerifyCond_")")
  ;  %uVerifyTmo (I,OPT)  - Length of time, in seconds, to try to verify the success of the request
  ;                         Defaults to 1 second
  ; RETURNS:      If %uVerifyCond is not set: 1 if it's acceptable to run, 0 otherwise
  ;               If %uVerifyCond is set: 1 if the condition is verified after the specified timeout, 0 otherwise

Once again, the suspicious uForceBkg parameter is getting passed in. The comments claim that this only controls the timezone, which implies either the parameter is horribly misnamed, or the comments are wrong. Or, possibly, both. Wait, no, it's talking about ZeUMON. My brain wants it to be timezones. MUMPS is getting to me. Since the zRunAsBkgUser has different comments, I suspect it’s both, but it’s MUMPS. I have no idea what could happen. Let’s look at the code.

  RunBkgJob(%uOrigRou,%uJobID,%uBkgTag,%uVarList,%uJobParams,%uError,%uForceBkg,%uVerifyCond,%uVerifyTmo) ;
  n %uSecCount,%uIsStarted,%uCondCode,%uVarCnt,%uVar,%uRet,%uTempFeat
  k %uError
  i %uBkgTag="" s %uError="Need to pass a tag" q 0
  i '$$validrou(%uBkgTag) s %uError="Tag does not exist" q 0
  ;if we're already the right user, just fork off directly
  i '%uForceBkg,$$zValidBkgOSUser() d  q %uRet
  . d inheritOff^%ZdDEBUG()
  . s %uRet=$$^%ZdUJOB(%uBkgTag,"",%uVarList,%uJobParams("%ZeDIR"),%uJobParams("%ZeNODE"),$$zTZSystem(1),"","","","",%uJobParams("%ZeOvBk"))
  . d inheritOn^%ZdDEBUG()
  ;
  s:%uJobID="" %uJobID=%uBkgTag   ;this *should* be uniquely identifying, though it might not be...
  s ^%ZeUMON("START","J",%uJobID,"TAG")=%uBkgTag
  s ^%ZeUMON("START","J",%uJobID,"CALLER")=%uOrigRou
  i $$zFeatureCanUseTempFeatGlobal() s %uTempFeat=$$zFeatureSerializeTempGlo() s:%uTempFeat'="" ^%ZeUMON("START","J",%uJobID,"FEAT")=%uTempFeat
  m:$D(%uJobParams) ^%ZeUMON("START","J",%uJobID,"PARAMS")=%uJobParams
  i %uVarList]"" d
  . s ^%ZeUMON("START","J",%uJobID,"VARS")=%uVarList
  . d inheritOff^%ZdDEBUG()
  . i %uVarList=1 d %zSavVbl($name(^%ZeUMON("START","J",%uJobID,"VARS"))) i 1   ;Save whole symbol table if %uVarList is 1
  . e  f %uVarCnt=1:1:$L(%uVarList,",") s %uVar=$p(%uVarList,",",%uVarCnt) m:%uVar]"" ^%ZeUMON("START","J",%uJobID,"VARS",%uVar)=@%uVar
  . d inheritOn^%ZdDEBUG()
  s ^%ZeUMON("START","G",%uJobID)=""   ;avoid race conditions by setting pointer only after the data is complete
  d log("BKG","Request to launch tag "_%uBkgTag_" from "_%uOrigRou)
  q:%uVerifyCond="" 1   ;don't hang around if there's no need
  d
  . s %uError="Verification tag crashed"
  . d SetTrap^%ZeERRTRAP("","","Error verifying launch of background tag "_%uBkgTag)
  . s:%uVerifyTmo<1 %uVerifyTmo=1
  . s %uIsStarted=0
  . s %uCondCode="%uIsStarted=("_%uVerifyCond_")"
  . f %uSecCount=1:1:%uVerifyTmo h 1 s @%uCondCode q:%uIsStarted
  . d ClearTrap^%ZeERRTRAP
  . k %uError
  i %uError="",'%uIsStarted s %uError="Could not verify that job started successfully"
  q %uIsStarted
  ;
  q  ;;#eor#

Well, there you have it, the bug is so simple to spot, I’ll leave it as an exercise to the readers.

I’m kidding. The smoking gun, as Alex calls it, is the block:

  i '%uForceBkg,$$zValidBkgOSUser() d  q %uRet
  . d inheritOff^%ZdDEBUG()
  . s %uRet=$$^%ZdUJOB(%uBkgTag,"",%uVarList,%uJobParams("%ZeDIR"),%uJobParams("%ZeNODE"),$$zTZSystem(1),"","","","",%uJobParams("%ZeOvBk"))
  . d inheritOn^%ZdDEBUG()
  ;

This is what passes for an “if” statement in MUMPS; the leading apostrophe is a negation. Specifically, if the %uForceBkg parameter isn’t set and the zValidBkgOSUser function returns true, then we fork off and run the job directly. Otherwise, we don’t run the job ourselves, we just hand the request off to the monitor daemon, and thus get errors when we check on whether or not it’s done.

So, the underlying bug, such as it is, is a confusing parameter with an unreasonable default. This is not all that much of a WTF, I admit, but I really, really wanted you all to see this much MUMPS code in a single sitting, and I wanted to remind you: there are people who work with this every day.


XKCD Repair or Replace

August 14, 2018

Worse Than Failure A Shell Game

When the big banks and brokerages on Wall Street first got the idea that UNIX systems could replace mainframes, one of them decided to take the plunge - Big Bang style. They had hundreds of programmers cranking out as much of the mainframe functionality as they could. Copy-paste was all the rage; anything to save time. It could be fixed later.

Nyst 1878 - Cerastoderma parkinsoni R-klep

Senior management decreed that the plan was to get all the software as ready as it could be by the deadline, then turn off and remove the mainframe terminals on Friday night, swap in the pre-configured UNIX boxes over the weekend, and turn it all on for Monday morning. Everyone was to be there 24 hours a day from Friday forward, for as long as it took. Air mattresses, munchies, etc. were brought in for when people would inevitably need to crash.

While the first few hours were rough, the plan worked. Come Monday, all hands were in place on the production floor and whatever didn't work caused a flurry of activity to get the issue fixed in very short order. All bureaucracy was abandoned in favor of: everyone has root in order to do whatever it takes on-the-fly, no approvals required. Business was conducted. There was a huge sigh of relief.

Then began the inevitable onslaught of add this and that for all the features that couldn't be implemented by the hard cutoff. This went on for 3-4 years until the software was relatively complete, but in desperate need of a full rewrite. The tech people reminded management of their warning about all the shortcuts to save time up front, and that it was time to pay the bill.

To their credit, management gave them the time and money to do it. Unfortunately, copy-paste was still ingrained in the culture, so nine different trading systems had about 90% of their code identical to their peers, but all in separate repositories, each with slightly different modification histories to the core code.

It was about this time that I joined one of the teams. The first thing they had me do was learn how to verify that all 87 (yes, eighty seven) of the nightly batch jobs had completed correctly. For this task, both the team manager and lead dev worked non-stop from 6AM to 10AM - every single day - to verify the results of the nightly jobs. I made a list of all of the jobs to check, and what to verify for each job. It took me from 6AM to 3:00PM, which was kind of pointless as the markets close at 4PM.

After doing it for one day, I said no way and asked them to continue doing it so as to give me time to automate it. They graciously agreed.

It took a while, but I wound up with a rude-n-crude 5K LOC ksh script that reduced the task to checking a text file for a list of OK/NG statuses. But this still didn't help if something had failed. I kept scripting more sub-checks for each task to implement what to do on failure: look up which document had the name of the fix-it job to run, figure out what arguments to pass, get the status of the fix-it job, notify someone on the upstream system if it still failed, and so on. Either way, the result was recorded.

In the end, the ksh script had grown to more than 15K LOC, but it reduced the entire 8+ hour task to checking a 20 digit (bit-mask) page once a day. Some jobs failed every day for known reasons, but that was OK. As long as the bit-mask of the page was the expected value, you could ignore it; you only had to get involved if an automated repair of something was attempted but failed (this only happened about once every six months).
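
To make the shape of that concrete, here is a minimal sketch of the idea, nowhere near the real 15K-line script; the job names, log paths, and the "COMPLETED" marker are invented for illustration:

#!/bin/bash
# Hypothetical sketch: check each nightly batch job, record OK/NG in a status
# file, and condense the results into a single bit-mask string for the pager.
STATUS_FILE="/tmp/batch_status.$(date +%Y%m%d)"
: > "$STATUS_FILE"
BITMASK=""
for job in load_trades price_feed settle_positions; do
    # Treat a job as passed if its nightly log ends with a COMPLETED marker.
    if tail -n 1 "/var/log/batch/${job}.log" 2>/dev/null | grep -q COMPLETED; then
        echo "${job} OK" >> "$STATUS_FILE"
        BITMASK="${BITMASK}0"
    else
        echo "${job} NG" >> "$STATUS_FILE"
        BITMASK="${BITMASK}1"   # a 1 means: read the status file and follow the fix-it procedure
    fi
done
echo "Batch status bit-mask: ${BITMASK}"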

In retrospect, there were better ways to write that shell script, but it worked. Not only did all that nightly batch job validation and repair logic get encoded in the script (with lots of documentation of the what/how/why variety), but having rid ourselves of the need to deal with this daily mess freed up one man-day per day, and more importantly, allowed my boss to sleep later.

One day, my boss was bragging to the managers of the other trading systems (that were 90% copy-pasted) that he no longer had to deal with this issue. Since they were still dealing with the daily batch-check, they wanted my script. Helping peer teams was considered a Good Thing™, so we gave them the script and showed them how it worked, along with a detailed list of things to change so that it would work with the specifics of their individual systems.

About a week later, the support people on my team (including my boss) started getting nine different status pages in the morning - within seconds of each other - all with different status codes.

It turns out the other teams only modified the program and data file paths for the monitored batch jobs that were relevant to their teams, but didn't bother to delete the sections for the batch jobs they didn't need, and didn't update the notification pager list with info for their own teams. Not only did we get the pages for all of them, but this happened on the one day in six months that something in our system really broke and required manual intervention. Unfortunately, all of the shell scripts attempted to auto correct our failed job. Without. Any. Synchronization. By the time we cleared the confusion of the multiple pages, figured out the status of our own system, realized something required manual fixing and started to fix the mess created by the multiple parallel repair attempts, there wasn't enough time to get it running before the start of business. The financial users were not amused that they couldn't conduct business for several hours.
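
For what it's worth, a rough sketch of one way to serialize those repair attempts is to wrap the fix-it step in flock(1); the lock path and repair command below are placeholders:

(
    # Only the instance that wins the lock attempts the repair; the rest bail out.
    flock -n 9 || { echo "another instance is already repairing this job"; exit 0; }
    /usr/local/sbin/rerun_failed_job   # placeholder for the real fix-it job
) 9>/var/lock/batch-repair.lock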

Once everyone changed the notification lists and deleted all the sections that didn't apply to their specific systems, the problems ceased and those batch-check scripts ran daily until the systems they monitored were finally retired.


August 13, 2018

XKCD Word Puzzles

August 10, 2018

XKCD Pie Charts

August 04, 2018

Tim Bray Bye-bye Haida Gwaii

I’m down to my last few pictures and stories from our July vacation in Haida Gwaii and Gwaii Haanas.

Two eagles in Gwaii Haanas

Fuji X-T2, XF55-200mmF3.5-4.8, 200mm, 1/680 sec at f/5.6, ISO 200

Most of our stops were at old Haida village sites. One of the highlights, aside from the totem poles, were the sites of the large houses; here’s a sample:

Site of large house at an old Haida Village

Pixel 2, 1/350 sec at f/1.8, ISO 50

The idea was, they dug down into the earth, then they put up a fair-size house on top. The steps down to the floor would provide living and sleeping space; the fire would be in the middle. The interior space was really impressive; there are cool old photos at the Canadian Museum of History.

Skedans and the Hot Springs

Our first Haida-village-site stop was at Skedans; a couple of the locals told us it should be called Q’una, but the local Watchman (who was a woman) Carol Duck, called it “Skedans”, so I guess either works. Carol was absolutely great, and here’s a story. While we were there, the weekly supply boat pulled up, and there was a lot of chaos while they were unloading their stuff. Carol climbed on the boat to visit with someone; when it was pulling out, I noticed she hadn’t come back and mentioned that to one of the locals. He laughed and yelled at them and the boat turned around and brought her back. I guess one of the guys was her partner, because another man hollered out “he’s trying to take your woman away, just like one of those old Haida stories!” and there was a general outburst of hilarity; Carol (everyone called her “Duck”) wasn’t totally amused.

Another stop worth mentioning is Hotspring Island, a totally ordinary little place except for the geothermal hot spring, downstream of which they’ve built a bunch of hot tubs, just above a sandy beach, so you can do the “chill your ass in the North Pacific, then bake it in the hot sulphurous water” thing. A totally relaxing place to stop for lunch.

While we were there, an RCMP police boat pulled up, with a couple of personable young officers; they’d been on a training patrol up and down the remote, stormblown west coast of Haida Gwaii, and broken their boat in a couple of places. In distant communities like Haida Gwaii, the RCMP usually sends in ignorant junior white boys for four-year postings, then rotates them out as they begin to grow a clue or two.

After they left, I was shooting the shit with two Haida tour-guide guys, and it was a little tense until we discovered that we all mostly hold the RCMP in contempt. There’s the part where too many of their arrestees die in captivity, the part where they systematically harass their female employees, the part where the leadership was embezzling the retirement funds, the part where they taser ignorant confused immigrants to death in Vancouver airport, and — particularly relevant in that context — the part where our indigenous people have come to regard them, generally, as the enemy.

Back to our vacation. One thing I want to highlight is the general wonderfulness of cruising from island to island in a small boat, every moment a feast for the eyes; I’m talking about the quality of light, and the textures of the trees and the stones and the water. Here are a few random snaps from in the boat.

Little water cave in Haida Gwaii

Pixel 2, 1/750 sec at f/1.8, ISO 51

Water and cave in Haida Gwaii

Pixel 2, 1/530 sec at f/1.8, ISO 51

Gwaii Haanas waterfront

Fuji X-T2, XF55-200mmF3.5-4.8, 55mm, 1/680 sec at f/5.6, ISO 200;
processed with Silver Efex

Gwaii Haanas seawater, very clear

Pixel 2, 1/1900 sec at f/1.8, ISO 51

Trees at the edge of an island, Gwaii Haanas

Fuji X-T2, XF55-200mmF3.5-4.8, 55mm, 1/200 sec at f/4.5, ISO 200

Marine life in Gwaii Haanas

Fuji X-T2, XF55-200mmF3.5-4.8, 67mm, 1/200 sec at f/7.1, ISO 2500

It’s worth noting that those last two pictures are from the exact same spot, beside a random tiny island, looking up at the trees then down at the urchins and anemones. It helps that a Zodiac can float right up to the edge of a rocky island, that this is basically a fjord so there’s lots of water right up to the edge, and that our guide Marilyn was an awesome boat pilot.

Windy Bay

It’s another old Haida village site, but here’s the view coming in; there’s a new big house and totem pole.

Windy Bay on Lyell Island

Fuji X-T2, XF55-200mmF3.5-4.8, 67mm, 1/210 sec at f/5.6, ISO 200

This is on Lyell Island, Athlii Gwaii, a special place. It’s beautiful, like many islands on Gwaii Haanas, but it was the site, starting in 1985, of a pitched battle between the Haida and some supporting greens on one side, and the logging industry, which was hell-bent on monetizing every old-growth tree in the hemisphere. 72 citizens were arrested but they won, and launched the process that led to the creation of Gwaii Haanas. I’m in awe, full of gratitude for those people and their work, and you should be too.

Here are a couple of shots of the totem pole.

Totem pole at Windy Bay on Lyell Island

Pixel 2, 1/3900 sec at f/1.8, ISO 55

Totem pole at Windy Bay at Lyell Island

Fuji X-T2, XF55-200mmF3.5-4.8, 78mm, 1/750 sec at f/5.6, ISO 200

The Haida Watchman there at Windy Bay was Henry Tyler (everyone calls him “Tyler”) and when we went into the big house, he told us stories of the Athlii Gwaii protest action (they sent in Haida RCMP officers to arrest their own elders, thinking that would help) and then took out a drum and sang the Athlii Gwaii song which, he said, is becoming the Haida national anthem. It was a moment of wrenching beauty. Thank you Tyler, and good luck to you and your people.

Time moves on, and the old totems that haven’t fallen yet will, but as you can see, the world has new totems too. And then, this is happening.

Tree on fallen totem pole, Haida Gwaii

Pixel 2, 1/1500 sec at f/1.8, ISO 50

August 03, 2018

Tim Bray Why Serverless?

We were arguing at work about different modes of computing, and it dawned on me that the big arguments for going serverless are business arguments, not really technology-centric at all. Maybe everyone else already noticed.

[Disclosure: Not only do I work at AWS, but as of earlier this year I’m actually part of the Serverless group. I still spend most of my time working on messaging and eventing and workflows, but that’s serverless too.]

Now, here are a few compelling (to me, anyhow) arguments for serverless computing:

  1. Capacity Planning. It’s hard. It’s easy to get wrong. The penalties for being wrong on the high side are wasted investment, and on the low side abused customers. Serverless says: “Don’t do that.”

  2. Exploit Avoidance. There are no sure bets in this world, but one very decent weapon against next week’s Spectre or Heartbleed or whatever is: Keep your hosts up to date on their patch levels. Serverless says: “Run functions on hosts that get recycled all the time and don’t linger unpatched.”

  3. Elastic Billing. There are a few servers out there, not that many, running apps that keep their hardware busy doing useful work all the time. But whether it’s on-prem or in the cloud, you’re normally paying even when the app’s not working. Serverless says “Bill by the tenth of a second.”

Technology still matters

Now, when we get into an argument about whether some app or service should be built serverlessly or using traditional hosts, the trade-offs get very technical very fast. How much caching do you need to do? How do you manage your database connections? Do you need shard affinity? What’s the idempotency story?

But some of the big reasons why you want to go serverless, whenever you can, aren’t subtle and at the end of the day they’re not really technical.

This is a new thing

I’m a greybeard and have seen a lot of technology waves roll through. By and large, what’s driven the big changes are technical advantages: PCs let you recompute huge spreadsheets at a keystroke, in seconds. Java came with a pretty big, pretty good library, so your code crashed less. The Web let you deliver a rich GUI without having to write client-side software.

But Serverless isn’t entirely alone. The other big IT wave I’ve seen that was in large part economics driven was the public cloud. You could, given sufficient time and resources, build whatever you needed to on-prem; but on the Cloud you could do it without making big capital bets or fighting legacy IT administrators.

Serverless, cloud, it all goes together.

July 27, 2018

Tim Bray Ninstints and Koyah

On the second day of our Haida Gwaii excursion, our long morning Zodiac stage started just outside the park (the green zone on this map), headed through interior channels and then out into the Hecate Strait around the bottom right of Moresby Island, where we saw the seals and whales pictured previously here, then turning west along the bottom of Moresby through the Houston Stewart Channel and ending up at the place you can see marked “Ninstints” near the bottom center of the map. It has several other names but to the locals it’s SG̱ang Gwaay Llanagaay; they drop the third word so it sounds like Sgangway. The place is among the most amazing I’ve visited.

Cartographers call this “Anthony Island”; here’s a zoomed-in map. This is not on the scary but somewhat sheltered mainland-facing coast, it’s the last land on the Western fringe before you’re on the broad open Pacific, next stop Japan. Marilyn beached the Zodiac in the little islet-sheltered bay wedged into the north corner facing northwest; here’s a picture looking back out that bay.

Little bay on Anthony Island, Gwaii Haanas

Fuji X-T2, XF35mmF1.4R, 1/420 sec at f/8, ISO 200

We started with lunch; it’d been a long ride. What a picnic spot! Then we strolled across the island to the Watchmen’s cottage, the place marked on the map linked above as a UNESCO World Heritage site.

Walking across SG̱ang Gwaay in Gwaii Haanas

Fuji X-T2, XF35mmF1.4R, 1/60 sec at f/8, ISO 200

That walk was totally out of Tolkien; words cannot begin to describe the savage beauty of those big weathered trees and the mossy forest floor between them, the quality of light and of air.

The Watchmen were not on their best form; one of them had had to be helicoptered out the night before, probably gallstones. But still, welcoming. The watch house faces east, away from the Pacific, and is on a bay nearly 100% sheltered by an islet whose trees have been miniaturized by the winds and exposure, natural bonsai.

Natural Bonsai at SG̱ang Gwaay in Gwaii Haanas

Fuji X-T2, XF35mmF1.4R, 1/480 sec at f/8, ISO 200

Then we visited the old village site; the path down there is another walk through fantasy.

Path to the SG̱ang Gwaay village site in Gwaii Haanas

Pixel 2, 1/60 sec at f/1.8, ISO 173

Many of the totem poles are still standing, deeply weathered of course. I’m betting they’ll be upright maybe another decade, maybe less; so if you want to see them, get on it.

Totem pole at the SG̱ang Gwaay village site in Gwaii Haanas

Pixel 2, 1/600 sec at f/1.8, ISO 51

Standing totems at the SG̱ang Gwaay village site in Gwaii Haanas

Fuji X-T2, XF35mmF1.4R, 1/350 sec at f/5.6, ISO 200

“Ninstints”

Back in the day, gringos like my ancestors tended to name each village they visited after its chief. And therein lies a tale. I’m going to give it to you as I got it from Marilyn and then from James, of James and James, guides for another touring party we met at another site; Haidas both of them. It seems roughly congruent with what Wikipedia and its sources say:

Koyah was the chief at SG̱ang Gwaay; he was a famous war leader and trader. He was trading with an English ship captain when one of his followers stole items from the ship. The captain was enraged, seized Koyah, abused him, and eventually released him from the ship with his hair cut off. After that, he had no status in the village — the women rejected him — and they brought in Ninstints to be the chief.

But Koyah was enraged at his loss of status and wanted to win it back. He went back to war, raiding here and there, over and over again, and finally, an old man, managed to sink one American and one British ship. After that, his status was considered restored.

For details, see Wikipedia and the Dictionary of Canadian Biography.

I’m not going to expand on Haida culture, except that it featured trading, war, slavery, and especially Potlatches, a thing that it’s worth reading about. One wonders how much of a fight they might have offered against the British had not smallpox wiped 90% of them out, emptying the villages; nobody but the Watchmen are there now.

Below, the remains of one of the big houses at the village site.

At the SG̱ang Gwaay village site, Gwaii Haanas

Pixel 2, 1/1250 sec at f/1.8, ISO 50

After, we left the village site and scrambled around the north part of the island to a point where there was a view west, out toward the open Pacific.

View west from Anthony Island, Gwaii Haanas

Pixel 2, 1/7800 sec at f/1.8, ISO 63

We had to climb up on a big rock outcropping for the view, and it was another dose of magic, maritime in flavor this time. In a crack, under water, were shells smashed on the rocks by gulls preparing their dinner.

Seashells in a pool in Gwaii Haanas

Fuji X-T2, XF35mmF1.4R, 1/220 sec at f/3.6, ISO 200

Of course Marilyn knew the name of the snail species, but I’ve forgotten it. I’ll never forget standing on that rock, the never-logged forest behind, the Pacific in front; a very pure place.

Our time on the island was too short; my thanks once again to the Haida Nation in general for co-management of the park, and to the watchmen at SG̱ang Gwaay for having us.

Rose Harbour

After, the boat ride back to our night’s lodging was a short double-back to Rose Harbour. [Side-note: That’s just the second Wikipedia entry that I’ve created.]

It’s the only enclave of privately-owned land in the vast park, originally set up as a whaling station around 1910, then vacated in the Forties. Now, it’s the one place in Gwaii Haanas where visitors can sleep in a bed under a roof, eat food that someone else cooked, and have a hot shower, its water heated by a wood fire.

As we passed earlier in the day, we went by a little old aluminium skiff going the other way; Marilyn said “That’s the girls, heading out after supper.” Later at the communal table we ate those ling cod with vegetables out of the Rose Harbour gardens. It was spicy and fresh and totally excellent, as were the pancakes the next morning. Here’s the guest-house.

Guest-house at Rose Harbour, Haida Gwaii

Pixel 2, 1/11800 sec at f/1.8, ISO 103

The rooms were tiny but comfy, the stairs up to them like ladders; I’m sure that’s how it is in Elven residences. There was no electricity. There were immense whale-bones on the beach. The wood-heated shower was delightful. The outdoor loos were not the best.

Rose Harbour’s most visible inhabitant (and our host), Tassilo Götz Hanisch, a voluble white-maned patriarch, is a musician. He and the other residents of Rose Harbour have a strained relationship with Parks Canada, who’d like them gone and the park, from their point of view, made whole. Götz says millions have been offered. He informed me at considerable length about the malignant but inept turpitude of his adversaries.

I didn’t get to hear their side. I guess, at one level, I can see the argument. But I have to say that I think it’s a good thing that Gwaii Haanas has a place that offers a bed and a meal to travelers neither athletic and accomplished enough to kayak, nor rich enough to have a cruising yacht. And the hospitality (excepting the loos) is damn fine.

Here’s a sunset from Rose Harbour.

Sunset at Rose Harbour, Haida Gwaii

Fuji X-T2, XF35mmF1.4R, 1/210 sec at f/3.6, ISO 200

July 23, 2018

etbe Passwords Used by Daemons

There’s a lot of advice about how to create and manage user passwords, and some of it is even good. But there doesn’t seem to be much advice about passwords for daemons, scripts, and other system processes.

I’m writing this post with some rough ideas about the topic; please let me know if you have any better ideas. Also I’m considering passwords and keys in a fairly broad sense: a private key for an HTTPS certificate has more in common with a password to access another server than with most other data that a server might use. This also applies to SSH host secret keys, keys that are in ssh authorized_keys files, and other services too.

Passwords in Memory

When SSL support for Apache was first released, the standard practice was to have the SSL private key encrypted and require the sysadmin to enter a password to start the daemon. This practice has mostly gone away. I would hope that is because people realised it offers little value, but it’s more likely just because it’s really annoying and doesn’t scale for cloud deployments.

If there was a benefit to having the password only in RAM (IE no readable file on disk) then there are options such as granting read access to the private key file only during startup. I have seen a web page recommending running “chmod 0” on the private key file after the daemon starts up.
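
As a rough illustration of that pattern (the key path and service name here are made up), it amounts to something like:

# Key is readable while the daemon starts, then locked away on disk.
chmod 400 /etc/ssl/private/example.key   # readable by root so the daemon can load it
systemctl start nginx                    # daemon reads the key into memory
chmod 0 /etc/ssl/private/example.key     # afterwards no-one can read the file from disk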

I don’t believe that there is a real benefit to having a password exist only in RAM. Many exploits target the address space of the server process; Heartbleed, one well-known bug that is still shipping in new products today, reads server memory for encryption keys. If you run a program that is vulnerable to Heartbleed then its SSL private key (and probably a lot of other application data) is vulnerable to attackers regardless of whether you needed to enter a password at daemon startup.

If you have an application or daemon that might need a password at any time then there’s usually no way of securely storing that password such that a compromise of that application or daemon can’t get the password. In theory you could have a proxy for the service in question which runs as a different user and manages the passwords.

Password Lifecycle

Ideally you would be able to replace passwords at any time. Any time a password is suspected to have been leaked it should be replaced. That requires that you know where the password is used (both which applications use it and which of their configuration files it appears in) and that you are able to change all programs that use it in a reasonable amount of time.

The first thing to do to achieve this is to have one password per application, not one per use. For example, if you have a database storing accounts used for a mail server, you might be tempted to have an outbound mail server such as Postfix and an IMAP server such as Dovecot both use the same password to access the database. The correct thing to do is to have one database account for Dovecot and another for Postfix, so if you need to change the password for one of them you don’t need to change passwords in two locations and restart two daemons at the same time. Another good option is to have Postfix talk to Dovecot for authenticating outbound mail; that means you only have a single configuration location for storing the password and also means that a security flaw in Postfix (or more likely a misconfiguration) couldn’t give access to the database server.
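
A sketch of what that separation could look like on a MySQL server; the database name, host, and passwords are placeholders:

# One database account per daemon, each with only the access it needs,
# so either password can be rotated without touching the other daemon.
mysql -u root -p <<'EOF'
CREATE USER 'dovecot'@'localhost' IDENTIFIED BY 'dovecot-only-password';
CREATE USER 'postfix'@'localhost' IDENTIFIED BY 'postfix-only-password';
GRANT SELECT ON mailauth.* TO 'dovecot'@'localhost';
GRANT SELECT ON mailauth.* TO 'postfix'@'localhost';
EOF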

Passwords Used By Web Services

It’s very common to run web sites on Apache backed by database servers, so common that the acronym LAMP is widely used for Linux, Apache, Mysql, and PHP. In a typical LAMP installation you have multiple web sites running as the same user which by default can read each other’s configuration files. There are some solutions to this.

There is an Apache module mod_apparmor to use the Apparmor security system [1]. This allows changing to a specified Apparmor “hat” based on the URI or a specified hat for the virtual server. Each Apparmor hat is granted access to different files and therefore files that contain passwords for MySQL (or any other service) can be restricted on a per vhost basis. This only works with the prefork MPM.

There is also an Apache module mpm-itk which runs each vhost under a specified UID and GID [2]. This also allows protecting sites on the same server from each other. The ITK MPM is also based on the prefork MPM.

I’ve been thinking of writing a SE Linux MPM for Apache to do similar things. It would have to be based on prefork too. Maybe a change to mpm-itk to support SE Linux context as well as UID and GID.

Managing It All

Once the passwords are separated such that each service runs with minimum privileges you need to track and manage it all. At the simplest that needs a document listing where all of the passwords are used and how to change them. If you use a configuration management tool then that could manage the passwords. Here’s a list of tools to manage service passwords in tools like Ansible [3].
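
For example, if the configuration management tool is Ansible, individual secrets can be kept encrypted in the repository with ansible-vault; the variable name and secret below are placeholders:

# Prompts for a vault password and prints an encrypted value that can be
# pasted into group_vars/host_vars; the playbook decrypts it at run time.
ansible-vault encrypt_string 'dovecot-only-password' --name 'dovecot_db_password'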

June 18, 2018

etbe Cooperative Learning

This post is about my latest idea for learning about computers. I posted it to my local LUG mailing list and received no responses. But I still think it’s a great idea and that I just need to find the right way to launch it.

I think it would be good to try cooperative learning about Computer Science online. The idea is that everyone would join an IRC channel at a suitable time with virtual machine software configured, try out new FOSS software at the same time, and exchange ideas about it via IRC. It would be fairly informal and people could come and go as they wish; the session would probably go for about 4 hours, but if people want to go on longer then no-one would stop them.

I’ve got some under-utilised KVM servers that I could use to provide test VMs for network software; my original idea was to use those for members of my local LUG. But that doesn’t scale well. If a larger group of people is to be involved they would have to run their own virtual machines, use physical hardware, or use trial accounts from VM companies.

The general idea would be for two broad categories of sessions, ones where an expert provides a training session (assigning tasks to students and providing suggestions when they get stuck) and ones where the coordinator has no particular expertise and everyone just learns together (like “let’s all download a random BSD Unix and see how it compares to Linux”).

As this would be IRC based there would be no impediment to people from other regions being involved, apart from the fact that it might start at 1AM their time (IE 6PM on the east coast of Australia is 1AM on the west coast of the US). For most people the best times for such education would be evenings on week nights, which greatly limits the geographic spread.

While the aims of this would mostly be things that relate to Linux, I would be happy to coordinate a session on ReactOS as well. I’m thinking of running training sessions on etbemon, DNS, Postfix, BTRFS, ZFS, and SE Linux.

I’m thinking of coordinating learning sessions about DragonflyBSD (particularly HAMMER2), ReactOS, Haiku, and Ceph. If people are interested in DragonflyBSD then we should do that one first as in a week or so I’ll probably have learned what I want to learn and moved on (but not become enough of an expert to run a training session).

One of the benefits of this idea is to help in motivation. If you are on your own playing with something new like a different Unix OS in a VM you will be tempted to take a break and watch YouTube or something when you get stuck. If there are a dozen other people also working on it then you will have help in solving problems and an incentive to keep at it while help is available.

So the issues to be discussed are:

  1. What communication method to use? IRC? What server?
  2. What time/date for the first session?
  3. What topic for the first session? DragonflyBSD?
  4. How do we announce recurring meetings? A mailing list?
  5. What else should we setup to facilitate training? A wiki for notes?

Finally, while I list things I’m interested in learning and teaching, this isn’t just about me. If this becomes successful then I expect that there will be some topics that don’t interest me and some sessions at times when I have other things to do (like work). I’m sure people can have fun without me. If anyone has already established something like this then I’d be happy to join that instead of starting my own; my aim is not to run another hobbyist/professional group but to learn things and teach things.

There is a Wikipedia page about Cooperative Learning. While that’s interesting, I don’t think it has much relevance to what I’m trying to do. The Wikipedia article has some good information on the benefits of cooperative education and situations where it doesn’t work well. My idea is to have a self-selecting group of people who choose it because of their own personal goals in terms of fun and learning. So it doesn’t have to work for everyone, just for enough people to have a good group.

June 06, 2018

etbe BTRFS and SE Linux

I’ve had problems with systems running SE Linux on BTRFS losing the XATTRs used for storing the SE Linux file labels after a power outage.

Here is the link to the patch that fixes this [1]. Thanks to Hans van Kranenburg and Holger Hoffstätte for the information about this patch which was already included in kernel 4.16.11. That was uploaded to Debian on the 27th of May and got into testing about the time that my message about this issue got to the SE Linux list (which was a couple of days before I sent it to the BTRFS developers).

The kernel from Debian/Stable still has the issue. So using a testing kernel might be a good option to deal with this problem at the moment.

Below is the information on reproducing this problem. It may be useful for people who want to reproduce similar problems. Also, all sysadmins should know about “reboot -nffd”: if something really goes wrong with your kernel you may need to do that immediately to prevent corrupted data being written to your disks.

The command “reboot -nffd” (kernel reboot without flushing kernel buffers or writing status) when run on a BTRFS system with SE Linux will often result in /var/log/audit/audit.log being unlabeled. It also results in some systemd-journald files like /var/log/journal/c195779d29154ed8bcb4e8444c4a1728/system.journal being unlabeled but that is rarer. I think that the same problem afflicts both systemd-journald and auditd but it’s a race condition that on my systems (both production and test) is more likely to affect auditd.

root@stretch:/# xattr -l /var/log/audit/audit.log 
security.selinux: 
0000   73 79 73 74 65 6D 5F 75 3A 6F 62 6A 65 63 74 5F    system_u:object_ 
0010   72 3A 61 75 64 69 74 64 5F 6C 6F 67 5F 74 3A 73    r:auditd_log_t:s 
0020   30 00                                              0.

SE Linux uses the xattr “security.selinux”; you can see what it’s doing with xattr(1), but generally using “ls -Z” is easiest.

If this issue just affected “reboot -nffd” then a solution might be to just not run that command. However this affects systems after a power outage.

I have reproduced this bug with kernel 4.9.0-6-amd64 (the latest security update for Debian/Stretch which is the latest supported release of Debian). I have also reproduced it in an identical manner with kernel 4.16.0-1-amd64 (the latest from Debian/Unstable). For testing I reproduced this with a 4G filesystem in a VM, but in production it has happened on BTRFS RAID-1 arrays, both SSD and HDD.

#!/bin/bash 
set -e 
COUNT=$(ps aux|grep [s]bin/auditd|wc -l) 
date 
if [ "$COUNT" = "1" ]; then 
 echo "all good" 
else 
 echo "failed" 
 exit 1 
fi

Firstly, the above is the script /usr/local/sbin/testit; I test for auditd running because it aborts if the context on its log file is wrong. When SE Linux is in enforcing mode an incorrect/missing label on the audit.log file causes auditd to abort.

root@stretch:~# ls -liZ /var/log/audit/audit.log 
37952 -rw-------. 1 root root system_u:object_r:auditd_log_t:s0 4385230 Jun  1 
12:23 /var/log/audit/audit.log

Above is before I do the tests.

while ssh stretch /usr/local/sbin/testit ; do 
 ssh stretch "reboot -nffd" > /dev/null 2>&1 & 
 sleep 20 
done

Above is the shell code I run to do the tests. Note that the VM in question runs on SSD storage which is why it can consistently boot in less than 20 seconds.

Fri  1 Jun 12:26:13 UTC 2018 
all good 
Fri  1 Jun 12:26:33 UTC 2018 
failed

Above is the output from the shell code in question. After the first reboot it fails. The probability of failure on my test system is greater than 50%.

root@stretch:~# ls -liZ /var/log/audit/audit.log  
37952 -rw-------. 1 root root system_u:object_r:unlabeled_t:s0 4396803 Jun  1 12:26 /var/log/audit/audit.log

Now the result. Note that the Inode has not changed. I could understand a newly created file missing an xattr, but this is an existing file which shouldn’t have had its xattr changed. But somehow it gets corrupted.

The first possibility I considered was that SE Linux code might be at fault. I asked on the SE Linux mailing list (I haven’t been involved in SE Linux kernel code for about 15 years) and was informed that this isn’t likely at all. There have been no problems like this reported with other filesystems.

March 16, 2018

etbe Racism in the Office

Today I was at an office party and the conversation turned to race, specifically the incidence of unarmed Afro-American men and boys who are shot by police. Apparently the idea that white people (even in other countries) might treat non-white people badly offends some people, so we had a man try to explain that Afro-Americans commit more crime and therefore are more likely to get shot. This part of the discussion isn’t even noteworthy, it’s the sort of thing that happens all the time.

I and another man pointed out that crime is correlated with poverty and racism causes non-white people to be disproportionately poor. We also pointed out that US police seem capable of arresting proven violent white criminals without shooting them (he cited arrests of Mafia members; I cited mass murderers like the one who shot up the cinema). This part of the discussion isn’t particularly noteworthy either. Usually when someone tries explaining some racist ideas and gets firm disagreement they back down. But not this time.

The next step was the issue of whether black people are inherently violent. He cited all of Africa as evidence. There’s a meme that you shouldn’t accuse someone of being racist, it’s apparently very offensive. I find racism very offensive and speak the truth about it. So all the following discussion was peppered with him complaining about how offended he was and me not caring (stop saying racist things if you don’t want me to call you racist).

Next was an appeal to “statistics” and “facts”. He said that he was only citing statistics and facts, clearly not understanding that saying “Africans are violent” is not a statistic. I told him to get his phone and Google for some statistics as he hadn’t cited any. I thought that might make him just go away, it was clear that we were long past the possibility of agreeing on these issues. I don’t go to parties seeking out such arguments, in fact I’d rather avoid such people altogether if possible.

So he found an article about recent immigrants from Somalia in Melbourne (not about the US or Africa, the previous topics of discussion). We are having ongoing discussions in Australia about violent crime, mainly due to conservatives who want to break international agreements regarding the treatment of refugees. For the record I support stronger jail sentences for violent crime, but this is an idea that is not well accepted by conservatives presumably because the vast majority of violent criminals are white (due to the vast majority of the Australian population being white).

His next claim was that Africans are genetically violent due to DNA changes from violence in the past. He specifically said that if someone was a witness to violence it would change their DNA to make them and their children more violent. He also specifically said that this was due to thousands of years of violence in Africa (he mentioned two thousand and three thousand years on different occasions). I pointed out that European history has plenty of violence that is well documented and also that DNA just doesn’t work the way he thinks it does.

Of course he tried to shout me down about the issue of DNA, telling me that he studied Psychology at a university in London and knows how DNA works, demanding to know my qualifications, and asserting that any scientist would support him. I don’t have a medical degree, but I have spent quite a lot of time attending lectures on medical research including from researchers who deliberately change DNA to study how this changes the biological processes of the organism in question.

I offered him the opportunity to star in a Youtube video about this, I’d record everything he wants to say about DNA. But he regarded that offer as an attempt to “shame” him because of his “controversial” views. It was a strange and sudden change from “any scientist will support me” to “it’s controversial”. Unfortunately he didn’t give up on his attempts to convince me that he wasn’t racist and that black people are lesser.

The next odd thing was when he asked me “what do you call them” (black people), “do you call them Afro-Americans when they are here”. I explained that if an American of African ancestry visits Australia then you would call them Afro-American, otherwise not. It’s strange that someone goes from being so certain of so many things to not knowing the basics. In retrospect I should have asked whether he was aware that there are black people who aren’t African.

Then I sought opinions from other people at the party regarding DNA modifications. While I didn’t expect to immediately convince him of the error of his ways it should at least demonstrate that I’m not the one who’s in a minority regarding this issue. As expected there was no support for the ideas of DNA modifying. During that discussion I mentioned radiation as a cause of DNA changes. He then came up with the idea that radiation from someone’s mouth when they shout at you could change your DNA. This was the subject of some jokes, one man said something like “my parents shouted at me a lot but didn’t make me a mutant”.

The other people had some sensible things to say, pointing out that psychological trauma changes the way people raise children and can have multi-generational effects. But the idea of events 3000 years ago having such effects was ridiculed.

By this time people were starting to leave; a heated discussion of racism tends to kill the party atmosphere. There might be some people who think I should have just avoided the discussion to keep the party going (really, I didn’t want the argument and tried to end it). But I’m not going to allow a racist to think that I agree with them, and if having a party requires any form of agreement with racism then it’s not a party I care about.

As I was getting ready to leave, the man said that he thought he hadn’t explained things well because he was tipsy. I disagree; I think he explained some things very well. When someone goes to such extraordinary lengths to criticise all black people after a discussion of white cops killing unarmed black people, I think it shows their character. But I did offer some friendly advice: “don’t drink with people you work with or for, or any other people you want to impress”. I suggested that maybe quitting alcohol altogether is the right thing to do if this is what it causes. But he still thought it was wrong of me to call him racist, and I still don’t care. Alcohol doesn’t make anyone suddenly think that black people are inherently dangerous (even when unarmed) and therefore deserving of being shot by police (disregarding the fact that police manage to take members of the Mafia alive). But it does make people less inhibited about sharing such views, even when it’s clear that they don’t have an accepting audience.

Some Final Notes

I was not looking for an argument or trying to entrap him in any way. I refrained from asking him about other races who have experienced violence in the past; maybe he would have made similar claims about other non-white races and maybe he wouldn’t, but I didn’t try to broaden the scope of the dispute.

I am not going to do anything that might be taken as agreement with or support for racism unless faced with the threat of violence. He did not threaten me, so I wasn’t going to back down from the debate.

I gave him multiple opportunities to leave the debate. When I insisted that he find statistics to support his cause, I hoped and expected that he would depart. Instead he came back with a page about the latest racist dog-whistle in Australian politics, which had no connection to anything we had previously discussed.

I think the fact that this debate happened says something about Australian and British culture. This man apparently hadn’t had people push back on such ideas before.

November 18, 2014

Kelvin Lawrence - personal: 25 Years of the World Wide Web

I have been so busy that I am a few days late putting this post together, but hopefully better late than never!

A few days ago, hard though it is to believe, the World Wide Web, which so many of us take for granted these days, celebrated its 25th anniversary. Created in 1989 by Sir Tim Berners-Lee, the Web has, for many of us, become as essential in our daily lives as electricity or natural gas. Built from its earliest days upon the notion of open standards, the Web has become the information backbone of our society. My first exposure to the concept of the Web, that I can remember, was in the early 1990s when I was part of the OS/2 team at IBM and we put one of the earliest browsers, Web Explorer, into the operating system and shipped it. Back then, an HTML web page was little more than text, images, animated GIFs and, most importantly of all, hyperlinks. I was also involved with the team that did some of the early ports of Netscape Navigator to OS/2, and I still recall being blown away by some of what I saw that team doing on my many visits to Netscape in California, what seems like a lifetime ago now!

From those modest but still highly effective beginnings, the Web and, perhaps most importantly, the Web browser have evolved into the complete business and entertainment platform they are today.


The Web, and open standards, have been part of my personal and work life ever since. I am honored to have been a small part of the evolution of the Web myself. I have worked on a number of different projects with great people from all over the world under the auspices of the W3C for longer than I care to remember! I have done a lot of fun things in my career, but one of the highlights was definitely working with so many talented people on the original Scalable Vector Graphics (SVG) specification, which is now supported by most of the major browsers, and of course you can find my library of SVG samples here on my site.

It is also fitting that the latest evolution of Web technology, the finished HTML5 specification, was announced to coincide with the 25th anniversary of the Web.

I could write so much more about what the Web has meant to me, but most of all I think my fondest memory will always be all of the great friends I have made and the large number of very talented people I have had the good fortune to work with through our joint passion to make the Web a better and even more open place.

Happy (slightly belated) Birthday, World Wide Web, and here's to the next 25!

November 13, 2014

Kelvin Lawrence - personal: Asian Tiger Mosquitoes

The weather has been unusually cold for the time of year over the last day or so. I was actually hoping that a hard freeze would kill off, for now, the Asian Tiger mosquitoes that we have been overrun with this year. However I have my doubts, as apparently, unlike other mosquitoes, their eggs, which they lay in vegetation and standing water, can survive a harsh winter. They apparently got into the USA in a shipment of waterlogged tires (tyres for my UK friends) some time ago and are now spreading more broadly. They are covered in black and white stripes and look quite different from the regular "brown" colored mosquitoes we are used to seeing here. They are also a lot more aggressive. They bite all day long (not just at dusk) and even bite animals, but definitely prefer humans. It has gotten so bad that we have had to pay to have our yard sprayed regularly almost all year just to have a chance of sitting outside and enjoying it. These nasty little guys also transmit the chikungunya virus, for which I believe there is currently no vaccine. It's not usually fatal but does have some nasty symptoms if you are unlucky enough to catch it. Here's a link to a WebMD write-up on these little nasties.

November 12, 2014

Kelvin Lawrence - personal: Pink Floyd's Endless River - The End of an Era

I just purchased the new Pink Floyd CD from Amazon, which includes a free digital download as well, and I have been listening to it while I work today. Given the way the album was put together (using material the late Richard Wright recorded almost 20 years ago during the making of The Division Bell), much of the music is immediately familiar. I definitely also hear flashbacks to Wish You Were Here, Dark Side of the Moon and many other albums. It's mostly instrumental and there is a lot of it - four sides if you buy the vinyl version!! A lot of the music has an almost eerie tone to it - definitely a good one for the headphones with the lights off. It's a really good listen, but it left me feeling sad in a way, in a good way I guess, as much of their music has been the backdrop to the last 40 years or so of my life, and this is definitely the end of a musical era, as supposedly this is the last album the band plan to release. It has a bit of everything for Pink Floyd fans, especially those who like some of the "more recent" albums. Don't expect a bunch of rocking songs that you will be humming along to all day, but as a complete work, listened to end to end, I found it very moving. Very much not your modern-day pop tune, and thank goodness for that!

October 26, 2014

Kelvin Lawrence - personal: Seven years post cancer surgery

Today marks another big milestone for me. It has now been seven years since my cancer surgery. As always, I am grateful for all of my family, friends and doctors and every minute that I get to spend with them.

June 03, 2009

Software Summit June 3, 2009: The Finale of Colorado Software Summit

To Our Friends and Supporters,

In these challenging economic times, business has slowed, many companies have had to resort to layoffs and/or closures, and everyone has been tightening their belts. Unfortunately, Colorado Software Summit has not been immune to this downturn. As have so many companies and individuals, we too have experienced a severe decline in our business, and as a result we are not able to continue producing this annual conference.

This year would have been our 18th conference, and we had planned to continue through our 20th in 2011, but instead we must end it now.

Producing this conference has been a wonderful experience for us, truly a labor of love, and we have been extremely privileged to have been able to do well by doing good.  We are very proud of the many people whose careers flourished through what they learned here, of the extensive community we built via the conference, and of the several businesses that were begun through friendships made here. We treasure the friends we made, and we consider them to be part of our extended family. Just as in any family, we celebrated with them through joyous life events and grieved with them through tragic ones.

This is a sad time for us, of course, but not overwhelmingly so. It's sort of the feeling you have when your son leaves for college, or your daughter gets married. You knew it was coming someday, but it is here much sooner than you imagined, and the sadness is sweetened with the joy you had in all that has come before.

We have been privileged to have created a thriving community of friends who met for the first time at the conference, and we want that community to continue. We hope that all of you will stay in touch with us and with each other, and that the Colorado Software Summit community will continue as a source of wisdom and friendship to all of you. If you have ever attended one of our conferences, we hope you will consider joining the Colorado Software Summit LinkedIn group as one means of keeping in touch.

With our very best wishes for your future, and with unbounded gratitude for your support,

- Wayne and Peggy Kovsky -

All presentations from Colorado Software Summit 2008 have been posted.

May 18, 2009

Software Summit May 17, 2009: Additions to Preliminary Agenda for Colorado Software Summit 2009

We have posted additions to the preliminary agenda for Colorado Software Summit 2009, in two formats:

We will continue to post additions to this agenda during the coming weeks. Please check back here from time to time for additions and/or changes to the agenda, or subscribe to our RSS feed to receive notifications of updates automatically.

Presentations from the 2008 Conference

We have posted presentations for these speakers from Colorado Software Summit 2008:

Presentations from Colorado Software Summit 2008 will be posted periodically throughout the year.

May 03, 2009

Software Summit May 3, 2009: Additions to Preliminary Agenda for Colorado Software Summit 2009

We have posted additions to the preliminary agenda for Colorado Software Summit 2009, in two formats:

We will continue to post additions to this agenda during the coming weeks. Please check back here from time to time for additions and/or changes to the agenda, or subscribe to our RSS feed to receive notifications of updates automatically.

Presentations from the 2008 Conference

We have posted presentations for these speakers from Colorado Software Summit 2008:

Presentations from Colorado Software Summit 2008 will be posted periodically throughout the year.

April 26, 2009

Software Summit April 25, 2009: Preliminary Agenda for Colorado Software Summit 2009

We have posted the preliminary agenda for Colorado Software Summit 2009, in two formats:

We will continue to post additions to this agenda during the coming weeks. Please check back here from time to time for additions and/or changes to the agenda, or subscribe to our RSS feed to receive notifications of updates automatically.

Presentations from the 2008 Conference

We have posted presentations for these speakers from Colorado Software Summit 2008:

Presentations from Colorado Software Summit 2008 will be posted periodically throughout the year.