Planet Ubuntu California

June 26, 2016

Akkana Peck

How to un-deny a host blocked by denyhosts

We had a little crisis Friday when our server suddenly stopped accepting ssh connections.

The problem turned out to be denyhosts, a program that looks for things like failed login attempts and blacklists IP addresses.

But why was our own IP blacklisted? It was apparently because I'd been experimenting with a program called mailsync, which used to be a useful program for synchronizing IMAP folders with local mail folders. But at least on Debian, it has broken in a fairly serious way, so that it makes three or four tries with the wrong password before it actually uses the right one that you've configured in .mailsync. These failed logins are a good way to get yourself blacklisted, and there doesn't seem to be any way to fix mailsync or the c-client library it uses under the covers.

Okay, so first, stop using mailsync. But then how to get our IP off the server's blacklist? Just editing /etc/hosts.deny didn't do it -- the IP reappeared there a few minutes later.

A web search found lots of solutions -- you have to edit a long list of files, but no two articles had the same file list. It appears that it's safest to remove the IP from every file in /var/lib/denyhosts.

So here are the step-by-step instructions.

First, shut off the denyhosts service:

service denyhosts stop

Go to /var/lib/denyhosts/ and grep for any file that includes your IP:

grep your.ip.address *

(If you aren't sure what your IP is as far as the outside world is concerned, Googling what's my IP will helpfully tell you, as well as giving you a list of other sites that will also tell you.)

Then edit each of these files in turn, removing your IP from them (it will probably be at the end of the file).
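Since no two articles agree on the file list, the grep-then-edit step can also be sketched as a small shell function. This is just a sketch, not anything denyhosts ships: undeny_ip is an invented name, it assumes GNU sed's -i flag, and denyhosts must already be stopped before you run it.

```shell
#!/bin/sh
# Sketch: strip one IP address from every file in a denyhosts data
# directory. Invented helper -- run it against /var/lib/denyhosts
# with the denyhosts service stopped.
undeny_ip() {
    ip="$1"
    dir="$2"
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        # Delete every line mentioning the IP. The dots in the IP are
        # regex wildcards; escape them if you want to be strict.
        if grep -q "$ip" "$f"; then
            sed -i "/$ip/d" "$f"   # GNU sed's in-place edit
        fi
    done
}
```

You would call it as undeny_ip your.ip.address /var/lib/denyhosts, then still clean up /etc/hosts.deny by hand.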

When you're done with that, you have one more file to edit: remove your IP from the end of /etc/hosts.deny.

You may also want to add your IP to /etc/hosts.allow, but it may not make much difference, and if you're on a dynamic IP it might be a bad idea since that IP will eventually be used by someone else.

Finally, you're ready to re-start denyhosts:

service denyhosts start

Whew, un-blocked. And stay away from mailsync. I wish I knew of a program that actually worked to keep IMAP and mbox mailboxes in sync.

June 26, 2016 06:59 PM

June 24, 2016

Elizabeth Krumbach

Trains in Maine

I grew up just outside of Portland, Maine. About 45 minutes south of there is the Seashore Trolley Museum. I went several times as a kid, having been quite the little rail fan. But it wasn’t until I moved to San Francisco that I really picked up my love for rails again, with all the historic transit here in the city. With my new love for San Francisco streetcars, I made plans during our last trip back to visit the beloved trolley museum of my youth.

I’ll pause for a moment now to talk about terminology. Here in San Francisco we call that colorful fleet of cars that ride down Market and along the Embarcadero “streetcars,” but in Maine, and in various other parts of the world, they’re known as “trolleys” instead. I don’t know why this distinction exists, and both terms are pretty broad, so a dictionary is no help here. Since I was visiting the trolley museum, I’ll be referring to the ones I saw there as trolleys.

Before my trip I became a member of the museum, which gave us free entrance to the museum and a discount at the gift shop. We had originally intended to go to the museum upon arrival in Maine on the 26th of May, but learned when we showed up that they hadn’t opened on weekdays yet since it was still before Memorial Day. Whoops! We adjusted our plans and went back on Saturday.

Saturday was a hot day, but not intolerable. We had a little time to kill before the next trolley left, so we made our way over to the Burton B Shaw South Boston Car House to start checking out some of the trolleys they had on display. These were pretty far into rust territory, and it was the smallest barn of them all, but I was delighted to find one of their double deckers inside. The streetcar lines in San Francisco don’t have the electric overhead infrastructure to support these cars, so it was a real treat for me. Later in the day we also saw another double decker that I was actually able to go up inside!

It was then time to board! With the windows open on the Boston 5821 trolley we enjoyed a nice loop around the property. The car itself was unfamiliar to me, but here in San Francisco we have the 1059, a PCC that is painted in honor of the Boston Elevated Railway so I was familiar with the transit company and livery. During the ride around the loop we had a pair of very New England tour guides who enjoyed bantering (think Car Talk). I caught a video of a segment of our trolley car ride. Riding through the beautiful green woods of Maine is certainly a different experience than the downtown streets of San Francisco that I’m used to!

On this ride I learned that many of the early amusement parks were created by the rail companies in an effort to increase ridership on Sundays, and transit companies in Maine were no exception. We also stopped by a rock formation that showed how they would split rocks to make way for the railroad tracks, using water that froze and expanded in the winter; the rocks were then crushed and used to help build the foundation of the tracks. The route from Biddeford to Kennebunkport, which included the tracks we rode on, slants downhill toward the south, so we also heard tales of the electricity being shut off at midnight and the last train of the day sometimes speeding up just before the cutoff and coasting the rest of the way to the final station. I think the jury is out on how much exaggeration is to be expected in stories like this.

5821, Boston Elevated Railway

After the loop, we were met by a tour guide who took us around the other two transit barns that they have on the property. For most of the tour I popped ahead of the tour group to take photos, while staying within auditory range to hear what he had to say. I think this explains the 250+ pictures I took throughout the day. The barns had trolleys going at least 4 deep, in 3-4 rows. They had cars from all over the world, ranging from a stunning open top car from Montreal to that double decker from Glasgow that I got to go up to the top of. Some of the trolleys had really stunning interiors, like the Liberty Bell Limited from Philadelphia; I wouldn’t mind riding in one of those! They also had a handful of other trains that weren’t passenger trolleys, like a snow sweeper from Ottawa and a very familiar cable car from San Francisco.

Our walk around the property concluded with a visit to the restoration shop where they do work on the trolleys. Inside we saw some of the trolley skeletons and a bunch of the tools and machines they use to do work on the cars.

As you may expect, I had a blast. They have an impressive assortment of trolleys, and I enjoyed learning about them and taking pictures. The museum also has a small assortment of vintage buses and train cars from various transit agencies, with a strong bias toward Boston. It was fun to see some trains that looked eerily similar to the BART trains that we still run here in the Bay Area, along with some of Philadelphia’s SEPTA trains. I even caught a glimpse of a SEPTA PCC trolley with livery that was somewhat modern, but it was under a cover and likely not yet restored.

The icing on the cake was their gift shop. I picked up a book for my nephew, along with my standard “tourist stuff” shot glass and magnet. The real gems were the model trains. I selected a couple toys that will accompany the others that I have from Philadelphia and San Francisco on the standard wooden track that many children have. The adult model trains were where my heart was, though: I was able to get one of the F-Line train models (1063 Baltimore) that I didn’t have yet, along with a much larger (1:48 scale) and more impressive 2352 Connecticut Company, Destination Middletown Birney Safety Car. I’ll be happy when I finally have a place to display all of these, but for now my little F-Line cars are hanging out on top of my second monitor.

As I mentioned, I took a lot of photos during our adventure, a whole bunch more can be browsed in an album on Flickr, and I do recommend it if you’re interested!

My visit to Maine was also to visit family and as I was making plans I tried to figure out things that would be fun, but not too tiring for my nearly four year old nephew. The Seashore Trolley Museum will be great when he’s a bit older, but could I sneak in a different train trip that would be more his speed? Absolutely! The Narrow Gauge Railroad Museum in Portland, Maine was perfect.

The train ride itself takes about 40 minutes total, taking you on a 1.5 mile (3 miles round trip) voyage along Portland Harbor. That meant about 15 minutes each way, with a stop at the end of the line for about 10 minutes for the engine to detach and re-attach to the other side of the train. I took a video of the reattachment, which took a few tries that day. The timing was perfect for someone so young, and I was delighted to see how much he enjoyed the ride.

I enjoyed it too; it was a beautiful spring day, and Portland Harbor is a lovely place for a train ride.

We spent about a half hour in the small accompanying museum. Narrow gauge is a broad term for a variety of gauges, and I learned the one that ran there in Portland had a 2 foot gauge. As I understand it, wider gauges tend to make for a smoother ride, and though these trains were very clearly passenger trains (and vintage ones at that), the ride was a bumpy one. They had a couple other passenger and freight cars in the museum, and my nephew enjoyed playing with some of the train toys.

I hadn’t really intended for this trip to Maine to be so train-heavy, but I’m glad we were able to take advantage of the stunning weather and make it so! More photos from the Narrow Gauge Railroad, including things like the telegraph and inside of the cars they had on display are here:

by pleia2 at June 24, 2016 03:57 AM

June 20, 2016

Jono Bacon

Announcing Jono Bacon Consulting

A little while back I shared that I decided to leave GitHub. Firstly, thanks to all of you for your incredible support. I am blessed to have such wonderful people in my life.

Since that post I have been rather quiet about what my next adventure is going to be, and some of the speculation has been rather amusing. Now I am finally ready to share more details.

In a nutshell, I have started a new consultancy practice to provide community management, innersourcing, developer workflow/relations, and other related services. To keep things simple right now, this new practice is called Jono Bacon Consulting (original, eh?).

As some of you know, I have actually been providing community strategy and management consultancy for quite some time. Previously I have worked with organizations such as Deutsche Bank, Sony Mobile, ON.LAB, Open Networking Foundation, Intel and others. I am also an active advisor for organizations such as AlienVault, Open Networking Foundation, Open Cloud Consortium, Mycroft AI and I also advise some startup accelerators.

I have always loved this kind of work. My wider career ambitions have always been to help organizations build great communities and to further the wider art and science of collaboration and community development. I love the experience and insight I gain with each new client.

When I made the decision to move on from GitHub I was fortunate to have some compelling options on the table for new roles. After spending some time thinking about what I love doing and these wider ambitions, it became clear that consulting was the right step forward. I would have shared this news earlier but I have already been busy traveling and working with clients. 😉

I am really excited about this new chapter. While I feel I have a lot I can offer my clients today, I am looking forward to continuing to broaden my knowledge, expertise, and diversity of community strategy and leadership. I am also excited to share these learnings with you all in my writing, presentations, and elsewhere. This has always been a journey, and each new road opens up interesting new questions and potential, and I am thirsty to discover and explore more.

So, if you are interested in building a community, either inside or outside (or both) your organization, feel free to discover more and get in touch and we can talk more.

by Jono Bacon at June 20, 2016 02:45 PM

June 18, 2016

Akkana Peck

Cave 6" as a Quick-Look Scope

I haven't had a chance to do much astronomy since moving to New Mexico, despite the stunning dark skies. For one thing, those stunning dark skies are often covered with clouds -- New Mexico's dramatic skyscapes can go from clear to windy to cloudy to hail or thunderstorms and back to clear and hot over the course of a few hours. Gorgeous to watch, but distracting for astronomy, and particularly bad if you want to plan ahead and observe on a particular night. The Pajarito Astronomers' monthly star parties are often clouded or rained out, as was the PEEC Nature Center's moon-and-planets star party last week.

That sort of uncertainty means that the best bet is a so-called "quick-look scope": one that sits by the door, ready to be hauled out if the sky is clear and you have the urge. Usually that means some kind of tiny refractor; but it can also mean leaving a heavy mount permanently set up (with a cover to protect it from those thunderstorms) so it's easy to carry out a telescope tube and plunk it on the mount.

I have just that sort of scope sitting in our shed: an old, dusty Cave Astrola 6" Newtonian on an equatorial mount. My father got it for me on my 12th birthday. Where he got the money for such a princely gift -- we didn't have much in those days -- I never knew, but I cherished that telescope, and for years spent most of my nights in the backyard peering through the Los Angeles smog.

Eventually I hooked up with older astronomers (alas, my father had passed away) and cadged rides to star parties out in the Mojave desert. Fortunately for me, parenting standards back then allowed a lot more freedom, and my mother was a good judge of character and let me go. I wonder if there are any parents today who would let their daughter go off to the desert with a bunch of strange men? Even back then, she told me later, some of her friends ribbed her -- "Oh, 'astronomy'. Suuuuuure. They're probably all off doing drugs in the desert." I'm so lucky that my mom trusted me (and her own sense of the guys in the local astronomy club) more than her friends.

The Cave has followed me through quite a few moves, heavy, bulky and old fashioned as it is; even when I had scopes that were bigger, or more portable, I kept it for the sentimental value. But I hadn't actually set it up in years. Last week, I assembled the heavy mount and set it up on a clear spot in the yard. I dusted off the scope, cleaned the primary mirror and collimated everything, replaced the finder which had fallen out somewhere along the way, set it up ... and waited for a break in the clouds.

[Hyginus Rille by Michael Karrer] I'm happy to say that the optics are still excellent. As I write this (to be posted later), I just came in from beautiful views of Hyginus Rille and the Alpine Valley on the moon. On Jupiter the Great Red Spot was just rotating out. Mars, a couple of weeks before opposition, is still behind a cloud (yes, there are plenty of clouds). And now the clouds have covered the moon and Jupiter as well. Meanwhile, while I wait for a clear view of Mars, a bat makes frenetic passes overhead, and something in the junipers next to my observing spot is making rhythmic crunch, crunch, crunch sounds. A rabbit chewing something tough? Or just something rustling in the bushes?

I just went out again, and now the clouds have briefly uncovered Mars. It's the first good look I've had at the Red Planet in years. (Tiny achromatic refractors really don't do justice to tiny, bright objects.) Mars is the most difficult planet to observe: Dave likes to talk about needing to get your "Mars eyes" trained for each Mars opposition, since they only come every two years. But even without my "Mars eyes", I had no trouble seeing the North pole with dark Acidalia enveloping it, and, in the south, the sinuous chain of Sinus Sabaeus, Meridiani, Margaritifer, and Mare Erythraeum. (I didn't identify any of these at the time; instead, I dusted off my sketch pad and sketched what I saw, then compared it with XEphem's Mars view afterward.)

I'm liking this new quick-look telescope -- not to mention the childhood memories it brings back.

June 18, 2016 02:53 PM

June 14, 2016

Elizabeth Krumbach

Spike, Dino and José

In the fall of 2014 we attended a wedding for one of MJ’s cousins and guests got to bring home their own little succulent plant wedding favor. At the time we didn’t even know what a succulent was, but we dutifully carted it home on the flight.

For the first few months we kept it in the temporary container it came in and I didn’t have a lot of faith in my ability to keep it alive. We managed though, MJ did some research to learn what it was and how to move it into a pot, and it’s been growing ever since.

One day, Caligula wasn’t feeling well. After months of ignoring the plant, he decided that a plant was just the thing to soothe his upset stomach. He tried to bite it, but couldn’t find a good spot because the leaves have a spike at the end. We named the plant Spike.

Simcoe and Spike

In December of last year our dear Spike had a brush with fame! I snapped a picture of the rain one afternoon, and a glimpse of Spike was included in an article.

In April I attended an OpenStack Summit in Austin, Texas. At one of the parties Canonical was giving out succulents in dinosaur planters. How could I resist that? Plus, I’d continue the trend of free plants traveling home in carry on luggage. Having a succulent in a dinosaur planter sticking out of my purse was quite the conversation starter as I traveled home.

Simcoe sits with Dino and Spike

Spike has grown since it was that little wedding favor plant, and it never grew straight. Perhaps because we didn’t turn it enough and it grew towards the sun, or because succulents just keep growing and that just happens. We weren’t sure what to do though, as it eventually got to the point where it was too top-heavy to properly support its own weight! Spike now has scaffolding.

We’d grown quite fond of our little plant, and wanted to see how we could save him and not do the same with Dino. MJ did some research and found the Cactus & Succulent Society of San Jose that appeared to be very welcoming to folks like us looking for help and to identify what the plants are. We went last Sunday, upon my return from Maine and brought Spike and Dino along.

The society meets at a meetinghouse in a park in San Jose and the society members were just as welcoming as their website led us to believe! We were welcomed as we walked in and immediately had a few of our questions answered. As the presentation began they gave us chairs and raffle tickets for later in the meeting. The meeting had a presentation from a woman who sells a lot of succulents and also does a lot of craft projects that use the plants, putting them in living wreathes and various types of cages. I had worried that Dino living in a plastic dinosaur planter would offend them (what are you doing to your precious plant?!), but it turns out that putting succulents into interesting planters is quite a popular hobby. We learned that Dino is perfectly happy in that planter for now.

From the presentation, a succulent box that hangs on the wall, and some cages and various types of succulents

After the meeting they gave out awards for the mini-show that they had for members who brought in plants they wanted to share. We were able to get all the rest of our questions answered as well. We learned a bunch.

  • Both Spike and Dino are of the Echeveria genus. We can do our own research online or at plant shows to compare ours to others to figure out exactly what kind of Echeveria they are. Spike has a purple tint to the leaves and Dino has red.
  • We didn’t actually destroy Spike, in spite of the lop-sidedness. Succulents grow and grow and grow. One method of reproduction: when a leaf drops off, it can grow into another plant! Spike is ready to become lots of mini-Spikes!
  • One of the reasons societies like this exist is so people can give away and sell their ever-growing population of succulents.
  • We picked up some Miracle-Gro soil for cacti and succulents at a home improvement store. That’s fine, but our plant likely doesn’t actually need fertilizer. Something to think about.
  • We should be watering these succulents every week or two, but we need to keep an eye on how moist the soil is, since root rot is one of the only things that does kill these hardy plants.
  • It’s pretty hard to kill a succulent, so they do use them for all kinds of craft projects and inventive ways.

They gave us some advice about how to handle Spike. They recommended cutting off the top(!), drying it out for a few weeks and replanting that. The bottom of the plant will also grow a new top of the plant. Assuming all goes well, we’ll at least end up with two Spikes that will hopefully grow straight this time, plus as many of the leaves as we want to grow into new plants. We have four leaves drying out now. We haven’t done the scary cutting and replanting yet, but it may be a project for this upcoming weekend, along with picking up a few more pots.

As the meeting wound down they did a raffle. The final ticket called was mine! We ended up going home with a Notocactus Roseoluteus, a flowering cactus. We certainly hadn’t planned on adding to our plant family at this meeting, but it’s a nice plant, and hopefully as a cactus it’ll be another plant that we can keep alive. Since we got this cactus in San Jose, we named it José.

José, the Notocactus Roseoluteus

So far José is doing OK; we watered it yesterday morning and it’s now sitting on the windowsill with the other plants.

by pleia2 at June 14, 2016 12:11 AM

June 10, 2016

Akkana Peck

Visual diffs and file merges with vimdiff

I needed to merge some changes from a development file into the file on the real website, and discovered that the program I most often use for that, meld, is in one of its all too frequent periods where its developers break it in ways that make it unusable for a few months. (Some of this is related to GTK, which is a whole separate rant.)

That led me to explore some other diff/merge alternatives. I've used tkdiff quite a bit for viewing diffs, but when I tried to use it to merge one file into another I found its merge just too hard to use. Likewise for emacs: it's a wonderful editor but I never did figure out how to get ediff to show diffs reliably, let alone merge from one file to another.

But vimdiff looked a lot easier and had a lot more documentation available, and actually works pretty well.

I normally run vim in an xterm window, but for a diff/merge tool, I want a very wide window which will show the diffs side by side. So I used gvimdiff instead of regular vimdiff:

gvimdiff docs.production/filename

Configuring gvimdiff to see diffs

gvimdiff initially pops up a tiny little window, and it ignores Xdefaults. Of course you can resize it, but who wants to do that every time? You can control the initial size by setting the lines and columns variables in .vimrc. About 180 columns by 60 lines worked pretty well for my fonts on my monitor, showing two 80-column files side by side. But clearly I don't want to set that in .vimrc so that it runs every time I run vim; I only want that super-wide size when I'm running a side-by-side diff.

You can control that by checking the &diff variable in .vimrc:

if &diff
    set lines=58
    set columns=180
endif

If you do decide to resize the window, you'll notice that the separator between the two files doesn't stay in the center: it gives you lots of space for the right file and hardly any for the left. Inside that same &diff clause, this somewhat arcane incantation tells vim to keep the separator centered:

    autocmd VimResized * exec "normal \<C-w>="

I also found that the colors, in the vim scheme I was using, made it impossible to see highlighted text. You can go in and edit the color scheme and make your own, of course, but an easy quick fix is to set all highlighting to one color, like yellow, inside that same if &diff section:

    highlight DiffAdd    cterm=bold gui=none guibg=Yellow
    highlight DiffDelete cterm=bold gui=none guibg=Yellow
    highlight DiffChange cterm=bold gui=none guibg=Yellow
    highlight DiffText   cterm=bold gui=none guibg=Yellow
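For reference, here is the whole diff-only section of .vimrc assembled in one place. The sizes and the single yellow color are just the values that happened to work for my fonts and monitor, so treat them as starting points:

```vim
" Settings that apply only when vim is started in diff mode
if &diff
    " Wide enough for two 80-column files side by side
    set lines=58
    set columns=180
    " Keep the separator centered when the window is resized
    autocmd VimResized * exec "normal \<C-w>="
    " Flatten all diff highlighting to one readable color
    highlight DiffAdd    cterm=bold gui=none guibg=Yellow
    highlight DiffDelete cterm=bold gui=none guibg=Yellow
    highlight DiffChange cterm=bold gui=none guibg=Yellow
    highlight DiffText   cterm=bold gui=none guibg=Yellow
endif
```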

Merging changes

Okay, once you can view the differences between the two files, how do you merge from one to the other? Most online sources are quite vague on that, but it's actually fairly easy:

]c jumps to the next difference
[c jumps to the previous difference
dp makes them both look like the left side (apparently it stands for diff put)
do makes them both look like the right side (apparently it stands for diff obtain)

The only difficult part is that it's not really undoable. u (the normal vim undo keystroke) works inconsistently after dp: the focus is generally in the left window, so u applies to that window, while dp modified the right window and the undo doesn't apply there. If you put this in your .vimrc:

nmap du :wincmd w<cr>:normal u<cr>:wincmd w<cr>

then you can use du to undo changes in the right window, while u still undoes in the left window. So you still have to keep track of which direction your changes are going.

Worse, neither undo nor this du command restores the highlighting showing there's a difference between the two files. So, really, undoing should be reserved for emergencies; if you try to rely on it much you'll end up being unsure what has and hasn't changed.

In the end, vimdiff probably works best for straightforward diffs, and it's probably best to get in the habit of always merging from right to left, using do. In other words, run vimdiff file-to-merge-to file-to-merge-from, and think about each change before making it, so it's less likely that you'll need to undo.

And hope that whatever silly transient bug in meld drove you to use vimdiff gets fixed quickly.

June 10, 2016 02:10 AM

June 06, 2016

Elizabeth Krumbach

Hashtag FirstJob

Back in February Gareth Rushgrove started the fantastic Twitter hashtag, #FirstTechJob. The responses were inspiring for many people, from those starting out to people like me who “fell into” a tech career. I had a natural love for computers, held various junior tech jobs, and volunteered in open source for years, but I had no formal education in computer science. While my story is not uncommon in tech, it can still be isolating and embarrassing at academic conferences I participate in.

Lots of IT/software jobs ask for experience, but everyone starts somewhere. Lets encourage new folks with a tweet about our #FirstTechJob

I chimed in myself.

Contract web developer at a web development firm. Turned static designs into layouts on sites! Browser compatibility issues... #FirstTechJob

I knew when I posted it that 140 characters was not enough to provide context. For me and so many others it wasn’t just about having that first job and going through the prerequisite grunt work, but the long journey I had before getting said first tech job.

This week, two things inspired me to write more about this. First, I visited my old hometown. Second, several folks I know went to AlterConf and I saw a bunch of tweets about how tech workers should be more compassionate toward support/building/cleaning staff working around them. Don’t disrupt their work, but learn their names, engage them in conversation, treat them with respect.

I’ll begin by setting my privilege stage:

  1. I’m a white woman.
  2. Though we weren’t wealthy, I grew up in an affluent town with great public schools.
  3. I always had clothes, healthy food and a house to live in.
  4. Even though it was 10 years old (and so was I!) when it came to our house in 1991, I had a desktop computer at home and I could use it as much as I wanted. We got online at home in 1998.
  5. In addition to a supportive Linux User Group community in Philadelphia, my white 20-something boyfriend referred and recommended me to the employer who gave me my first tech job.
  6. I had, and continue to have, time to learn, hack and experiment outside of work hours.

In spite of any of the other challenges I encountered as a child, youth and young adult, my life was a lot better than many others then and now. I had a lot going for me.

The same age as I am, I found the first computer we had in a museum

So what did my visit back home do?

My husband and I stayed in one of the nicest hotels in Portland, Maine. It was at the top of the highest hill in the city and had a beautiful view of the harbor and the Portland Art Museum.

My most vivid memory from that art museum was not of visiting it, though I’m sure I did on a school trip, but of when I was a teen and worked as catering staff for a wedding there. Looking out from our hotel room I remembered the 16 hour day that left me dead on my feet and vowing never to do it again (though of course I did). I woke up early to help cart everything to the museum, helped to make sure the chefs and servers had everything they needed behind the scenes, and washed the fancy champagne soup dishes, watching most of the soup go down the drain. We rushed around the venue after the event concluded to clean and pack everything back into the van.

It brought back memories of other catering jobs I did too. At one of these jobs in my home town I served hors d’oeuvres to the extended family of people I went to school with. Being friendly and outgoing enough to offer food and carry around those trays while handing out the little napkins is a skill that I still have a lot of respect for.

The Portland Art Museum is in the center of my photo

It wasn’t just catering that I did as soon as I was old enough to work. As we drove through my home town in Cape Elizabeth my verbal tour to my husband included actual historical landmarks that make the town a tourist destination and “I babysat there!” and “I used to clean that house!” It turns out I worked a lot during high school and over those summer vacations.

All these hashtags and discussions really hit home. A formidable amount of my youth was spent as “the help” and I know what it’s like to be invisible to and disrespected by people I serve.

If nothing else, I’ll add my voice to those imploring my fellow techies to make an effort to be more compassionate to the support staff around them. After all, you know me, you can relate to me, and I spent time in their hard-working, worn out shoes.

by pleia2 at June 06, 2016 09:05 PM

June 04, 2016

Akkana Peck

Walking your Goat at the Summer Concert

I love this place. We just got back from this week's free Friday concert at Ashley Pond. Not a great band this time (the previous two were both excellent). But that's okay -- it's still fun to sit on the grass on a summer evening and watch the swallows wheeling over the pond and the old folks dancing up near the stage and the little kids and dogs dashing pell-mell through the crowd, while Dave, dredging up his rock-star past, explains why this band's sound is so muddy (too many stacked effects pedals).

And then on the way out, I'm watching appreciatively as the teen group, who were earlier walking a slack line strung between two trees, has now switched to juggling clubs. (I know old people are supposed to complain about "kids today", but honestly, the kids here seem smart and fit and into all kinds of cool activities.) One of the jugglers has just thrown three clubs and a ball, and is mostly keeping them all in the air, when I hear a bleat to my right -- it's a girl walking by with a goat on a leash.

Just another ordinary Friday evening in Los Alamos.

June 04, 2016 02:45 AM

May 29, 2016

Elizabeth Krumbach

Toys and Cats in Austin

It’s been a month since returning from my trip to Austin for the OpenStack Summit, but I’ve been overwhelmed with work and finishing my book, more on that in another post. Not much time for writing here in my blog! I had some side adventures in Austin that I’d hate to see go unmentioned.

The OpenStack Summits are pretty exhausting, so what better way to unwind than to snuggle up with some kitties? As we wrapped up our work on Friday afternoon I gathered a crew to join me at the Blue Cat Cafe, which was just under a mile from the conference venue. A bit after 5PM we made our way over there.

Along the way, we discovered the Austin Toy Museum. It was a small place, but it was a fun detour. I got my picture taken with R2-D2.

They had a relatively big Star Wars exhibit with a bunch of toys that my colleagues and I enjoyed pointing to and saying we had as kids. The museum definitely skewed toward toys from the 1980s, and the fellow who sold us our tickets waxed poetic about how the 1980s were the golden age of toys. Who am I to argue? I sure enjoyed my toys as a kid in the 1980s.

Hoth toys have always been a favorite of mine

The museum distinguishes itself by the video games, which you get to play as much as you want for the price of admission. They have a whole wall of consoles, plus several arcade games. I enjoyed getting smashed to pieces in Asteroids and playing a bit of Pac-Man, both on arcade games. Plus, my 1980s flashback journey was completed by seeing a couple Popples hanging out on top of the Q*bert game.

From there we finally made our way over to the cat cafe! Cat cafes have been popping up in major cities, including one in San Francisco, but this was the first time I’d made it to one. Like many of them, their focus is on adoption and care for cats that don’t have homes. They’re also great for cat lovers who can’t have one at home, or are traveling for a conference and missing their own kitties!

The inside of this cafe was definitely the domain of kitties. An old drum set was transformed into kitty sleeping areas. An old furniture-style CRT TV had the mechanical components removed to make way for a nice cat bed. There were also plenty of places to climb!

There were also some unintentional cat toys. When someone left the bathroom door open we learned why you don’t leave the bathroom door open.

The cafe component of this establishment was served by a food truck in front of the building. You can order from inside with the kitties, but they take your order out to the food truck to be prepared and then you pick it up at a window inside, or they bring it to you. I enjoyed some hot cider while we petted the cats that wandered through where we were sitting on some couches.

Our adventure to the cat cafe was my perfect relaxing activity post-conference. Next time I’m in Austin I plan on checking out the Museum of the Weird and Austin Books & Comics, which I had planned on visiting but didn’t make it to.

A few more photos from the cat cafe here:

by pleia2 at May 29, 2016 03:59 PM

May 25, 2016

Elizabeth Krumbach

Sharks and Giants

Six years ago sports weren’t on my radar. I’d been to a couple minor league baseball games (Sea Dogs in Portland when I was young, and the Reading Phillies a few years earlier) but it wasn’t until 2010 that I went to a major sporting event.

I’m not sure if it was the stunning AT&T Park or I was just at a point in my life where I could chill out and enjoy a game, but I fell in love that night in 2010 when we watched the Philadelphia Phillies play the San Francisco Giants. Since then I’ve attended a bunch more San Francisco Giants games, several Oakland A’s games, and MJ and I have branched out into hockey too by going to San Jose Sharks games. Back in December I went to my first football game. Baseball still holds my heart, and so does AT&T Park, but I do enjoy a good hockey game.

A couple weeks ago, when we learned that the Sharks were going into the 7th game of the second round of the playoffs, we snapped up tickets. On May 12th we took Caltrain down to San Jose to see them play against Nashville.

It was the first time I’d ever been to a playoff game for any sport. Going to a sold out game with the energy that a playoff brings was quite the experience. It was a really enjoyable game for Sharks fans.

Nashville had lots of great passes, but the Sharks won 4-0, sealing their spot in the conference finals. Nice! This week will determine how far they continue to go; as I write this they hold a 3-2 series lead in the conference finals.

More pictures from the evening and the game:

The only downside to the evening was the trek home. I’d love for Caltrain to be a good option both ways. Going down is pretty easy and quick on a bullet train during rush hour, but coming home is pretty rough. The game ended around 8:30, and we were on the train platform by 9 to catch a 9:30 train. By the time we got home it was 11:30PM. Three hours from the end of the game to getting home was a bit much, especially since I was also recovering from a nasty cold that sapped my energy pretty severely.

I hadn’t planned on going to another game this month, but a friend and colleague who is staying in town for a few weeks contacted me to see if I’d be interested in catching a baseball game this week. Count me in. Last night MJ and I met up with my buddy Spencer and we caught a Giants game down at my beloved AT&T Park.

The weather was a bit gloomy, but we only had a bit of misting toward the end of the game. The Giants were in their first game against the San Diego Padres, and the Padres put up a fight. The game was 0-0 until the bottom of the 9th. It was actually a little painful, but I had good company… who I dragged halfway across the stadium so we could get decent beer during the game. Happy to report that I enjoyed a Mango Wheat and Go West! IPA by Anchor Brewing Company along with my obligatory ball game hot dogs.

It was in the bottom of the 9th inning, just as we were all getting ready for extra innings, that the Giants scored a run. It sure made for an exciting final inning!

More photos here:

No complaints about the commute home from AT&T Park. We live less than a mile from the stadium so just needed to use our feet to get home, along with dozens of other fans headed in the same direction.

by pleia2 at May 25, 2016 07:45 AM

May 23, 2016

Jono Bacon

Moving on From GitHub

Last year I joined GitHub as Director Of Community. My role has been to champion and manage GitHub’s global, scalable community development initiatives. Friday was my last day as a hubber and I wanted to share a few words about why I have decided to move on.

My passion has always been about building productive, engaging communities, particularly focused on open source and technology. I have devoted my career to understanding the nuances of this work and which workflow, technical, psychological, and leadership ingredients can deliver the most effective and rewarding results.

As part of this body of work I wrote The Art of Community, founded the annual Community Leadership Summit, and I have led the development of community at Canonical, XPRIZE, OpenAdvantage, and for a range of organizations as a consultant and advisor.

I was attracted to GitHub because I was already a fan and was excited by the potential within such a large ecosystem. GitHub’s story has been a remarkable one and it is such a core component in modern software development. I also love the creativity and elegance at the core of GitHub and the spirit and tone in which the company operates.

Like any growing organization though, GitHub will from time to time need to make adjustments in strategy and organization. One component in some recent adjustments sadly resulted in the Director of Community role going away.

The company was enthusiastic about my contributions and encouraged me to explore some other roles that included positions in product marketing, professional services, and elsewhere. So, I met with these different teams to explore some new and existing positions and see what might be a good fit. Thanks to everyone in those conversations for your time and energy.

Unfortunately, I ultimately didn’t feel they matched my passion and skills for building powerful, productive, engaging communities, as I mentioned above. As such, I decided it was time to part ways with GitHub.

Of course, I am sad to leave. Working at GitHub was a blast. GitHub is a great company and is working on some valuable and important areas that strike right at the center of how we build great software. I worked with some wonderful people and I have many fond memories. I am looking forward to staying in touch with my former colleagues and executives and I will continue to be an ardent supporter, fan, and user of both GitHub and Atom.

So, what is next? Well, I have a few things in the pipeline that I am not quite ready to share yet, so stay tuned and I will share this soon. In the meantime, to my fellow hubbers, live long and prosper!

by Jono Bacon at May 23, 2016 03:20 PM

May 18, 2016

Jono Bacon

Kindness and Community

On Friday last week I flew out to Austin to run the Community Leadership Summit and join OSCON. When I arrived in Austin, I called home and our son, Jack, was rather upset. It was clear he wasn’t just missing daddy, he also wasn’t feeling very well.

As the week unfolded he developed strep throat. While a fairly benign issue in the scheme of things, it is clearly uncomfortable for him and pretty scary for a 3 year-old. With my wife, Erica, flying out today to also join OSCON and perform one of the keynotes, it was clear that I needed to head home to take care of him. So, I packed my bag, wrestled to keep the OSCON FOMO at bay, and headed to the airport.

Coordinating the logistics was no simple feat, and stressful. We both feel awful when Jack is sick, and we had to coordinate new flights, reschedule meetings, notify colleagues and hand over work, coordinate coverage for the few hours in-between her leaving and me landing, and other things. As I write this I am on the flight heading home, and at some point she will zoom past me on another flight heading to Austin.

Now, none of this is unusual. Shit happens. People face challenges every day, and many far worse than this. What struck me so notably today though was the sheer level of kindness from our friends, family, and colleagues.

People wrapped around us like a glove. Countless people offered to take care of responsibilities, help us with travel and airport runs, share tips for helping Jack feel better, provide sympathy and support, and more.

This was all after a weekend of running the Community Leadership Summit, an event that solicited similar levels of kindness. There were volunteers who got out of bed at 5am to help us set up, people who offered to prepare and deliver keynotes and sessions, coordinate evening events, equipment, sponsorship contributions, and help run the event itself. Then, to top things off, there were remarkably generous words and appreciation for the event as a whole when it drew to a close.

This is the core of what makes community so special, and so important. While at times it can seem the world has been overrun with cynicism, narcissism, negativity, and selfishness, we are instead surrounded by an abundance of kindness. What helps this kindness bubble to the surface are great relationships, trust, respect, and clear ways in which people can play a participatory role and support each other. Whether it is something small like helping Erica and me to take care of our little man or something more involved such as an open source project, it never ceases to inspire and amaze me how innately kind and collaborative we are.

This is another example of why I have devoted my life to understanding every nuance I can of how we can tap into and foster these fundamental human instincts. This is how we innovate, how we make the world a better place, and how we build opportunity for everyone, no matter what their background is.

When we harness these instincts, understand the subtleties of how we think and operate, and wrap them in effective collaborative workflows and environments, we create the ability to build and disrupt things more effectively than ever.

It is an exciting journey, and I am thankful every day to be joined on it by so many remarkable people. We are going to build an exciting future together and have a rocking great time doing so.

by Jono Bacon at May 18, 2016 07:48 PM

May 12, 2016

Elizabeth Krumbach

My Yakkety Yak has arrived!

I like toys, but I’m an adult who lives in a small condo, so I need to behave myself when it comes to bringing new friends into our home. I made an agreement with myself to try and limit my stuffed toy purchases to two per year, one for each Ubuntu release.

Even so, I now have quite the collection.

These toys serve the purpose of brightening up our events with some fun, and I enjoy the search for a new animal to match Mark Shuttleworth’s latest animal announcement. Truth be told, my tahr is a goat that I found that kind of looks like a tahr. The same goes for my xerus. My pangolin ended up having to be a plastic toy, though awareness about the animal (and conservation efforts) has grown since 2012, so I’d likely be able to find one now. The quetzal was the trickiest; I had to admit defeat and bought an ornament instead, but I did find and buy some quetzal earrings during our honeymoon in Mexico.

I’ve had fun as well and learned more about animals, which I love anyway. For the salamander I bought a $55 Hellbender Salamander Adoption Kit from the World Wildlife Fund, an organization my husband and I now donate to annually. Learning about pangolins led me to visit one in San Diego and made me aware of the Save Pangolins organization.

It is now time for a Yakkety Yak! After some indecisiveness, I went with an adorable NICI yak, which I found on Amazon and shipped from Shijiazhuang, China. He arrived today.

Here he is!

…though I did also enjoy the first photo I took, where trusty photobombed us.

by pleia2 at May 12, 2016 01:38 AM

May 10, 2016

Elizabeth Krumbach

Newton OpenStack Summit Days 3-5

On Monday and Tuesday I was pretty focused on the conference side of the OpenStack Summit, but with all the keynotes behind us, when Wednesday rolled around I found myself much more focused on the Design Summit side.

Our first session of the day was on Community Task Tracking, which we jokingly called the “task tracking bake-off.” As background, a couple of years ago the OpenStack Infrastructure team placed our bets on an in-project developed task tracker called StoryBoard. The hope had been that the intention to move off of Launchpad and onto this new platform would bring support from companies looking to help with development. Unfortunately this didn’t pan out, and development landed on the shoulders of a single poor, overworked soul. At this point we started looking at the Maniphest component of Phabricator. Simultaneously we ended up with a contributor putting together configuration management for Maniphest, and a team popped up to continue support of StoryBoard for a downstream that had begun using it. A few weeks ago I organized a bug day where the team got together to do a serious once-through of outstanding bugs and provide feedback to the StoryBoard team about what we need in order to use it; we went from 571 active bugs down to 414.

This set the stage for our session. We could stand up a Maniphest server or place our bets with StoryBoard again. We had a lot to consider.

StoryBoard
  Pros: Strong influence over direction; already running and being used in our infra; good API.
  Cons: We need to invest in development ourselves; little support for design/UI folks (though we could run a standalone Pholio).

Maniphest
  Pros: Investment is made by a large existing development team; feature-rich, with pluggable components like Pholio for design folks.
  Cons: Little influence over direction (like with Gerrit); still have to stand it up and migrate to it; weak API.

Both had a few things lacking that we’d need before we go full steam into use by all of OpenStack, so there seemed to be consensus that they were similar in terms of work and time needed to get to that point. Of all the considerations, the need to develop our own vs. depending on upstream is the one that weighed most heavily upon me. Will companies really step up and help with development once we move everyone into production? What happens if our excellent current StoryBoard developers are reassigned to other projects? Having an active upstream certainly is a benefit. The session didn’t end with a formal selection, but we will be discussing it more over the next couple weeks so we can move toward making a recommendation to the Technical Committee (TC). Read-only session etherpad here.

The next session I attended was in the QA track, for the DevStack Roadmap. The session centered around finally making DevStack use Neutron by default. It’s been some time since nova-networking was deprecated, so this switch was a long time in coming. In addition to the technical components of this switch, the documentation needs to be updated around the networking decisions. Since I’ve just recently done some deep dives into OpenStack networking, somehow I ended up volunteering to help with this bit! Read-only session etherpad here.
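For reference, opting in to Neutron in a DevStack local.conf at the time looked roughly like this. This is a sketch using DevStack’s standard Neutron service names, not an excerpt from the session; exact service lists and defaults vary by release:

```ini
[[local|localrc]]
# Drop the legacy nova-network service...
disable_service n-net
# ...and enable the core Neutron services:
# q-svc (API server), q-agt (L2 agent), q-dhcp (DHCP agent),
# q-l3 (L3 agent), q-meta (metadata agent)
enable_service q-svc q-agt q-dhcp q-l3 q-meta
```

Making this the out-of-the-box behavior, so new users no longer need these lines at all, was the point of the roadmap discussion.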

Before the very busy lunch I had coming up, there was one more morning session, on Landing Page for Contributors. The current pages we have, like the Main page on the wiki itself and the How To Contribute wiki, aren’t the most welcoming of pages; they’re more walls of text that a new contributor has to sift through. This session talked through a lot of the tooling that could be used to make a more inviting, approachable page, drawing from other projects that have forged this path in the past. Of course it is also important that the content is reviewed and maintainable from the project perspective too, so something that can be held in revision control is key. Read-only session etherpad here.

As lunch rolled around I rushed upstairs to assist with the Git and Gerrit – Lunch and Learn. The event began by separating out the roughly 1/3 of the folks in the room who hadn’t completed the prerequisites. It was the job of myself and the other helpers to start working with these folks to get their accounts set up and git-review installed. This wasn’t a trivial task, in spite of my intimate knowledge of how our system works and years of using it: almost all the attendees used Windows or Mac, while I use Linux full time, and we don’t maintain good (or any) documentation of the OpenStack development workflow for these other operating systems.

A lot of folks did make it through configuration, and it was a nice reminder that our community is growing and our tools need to grow with it. A patch was submitted several months back to add a video of how to set things up on Windows, but that’s inconsistent with the rest of our documentation and has not been accepted. It would be great to see some folks using these other operating systems help us get the written documentation into better shape. Beyond the prerequisites, session leaders Amy Marrich and Tamara Johnston walked folks through setting up their environment, submitting a patch to the sandbox repo, submitting a test bug, reviewing a change and more. The slide deck they used has been uploaded to Amy’s AustinSummit GitHub project. I also took a few minutes to explain the Zuul Status page and a bit about each of the pipelines that a change may go through on the way to being merged.

Git and Gerrit – Lunch and Learn

Directly after lunch I was in another infrastructure session, this time to talk about Launch-Node, Ansible and Puppet. Launching new, long-lived servers in our infrastructure is one of those tasks that has remained frustratingly hands-on. This manual work has been a time sink and a lot of it can be automated, so we as a team consider this situation a bug. Our Launch-Node script has been developed to start tackling this, and the session went through some of the things we need to be careful of, including handling of DNS and duplicate hostnames (what if we’re spinning up a replacement server?) and deciding when to unmount and disassociate cinder volumes and trove databases from the old server and bring them up on the new one. Lots of great discussion around all of this was had. Fixes were already coming in by the end of this session and we have a good path moving forward. Read-only session etherpad here.

The next infrastructure session focused on Wiki upgrades. We’ve been struggling with spam problems for several months. We need to do an upgrade to get some of the latest anti-spam tooling, which also requires upgrading the operating system in order to get a newer version of PHP. The people-power we have for this is limited, as we all have a lot of other projects on our plates. The session began with outlining what we need to do to get this done, and wound down with the proposal to shut down the wiki in a year. The OpenStack project has great, collaborative tooling for publishing documentation, and we also use etherpads a lot for notes and to-do lists, so is there really still an active need for a wiki? Thierry Carrez sent an email today that started work on socializing our options, whether to carry on with the wiki or not. As the discussions continue on list, I hope to help find tooling for teams whose needs the current tools don’t satisfy. While we do that over the next year, Paul Belanger has bravely stepped forward to lead the ongoing maintenance of the wiki until its possible retirement. Read-only session etherpad here.

Thursday morning kicked off bright and early with a session on Proposal jobs. As some quick background, proposal jobs are run on a privileged server in the OpenStack infrastructure that has the credentials to publish to a few places, like translations files up to Zanata. With this in mind, and as general good policy, we like to keep the jobs we run here to a minimum, using non-privileged servers as much as possible to complete tasks. The session walked through several of the existing jobs and new ones that were being proposed to sort through how they could be done differently, and to make sure we’re all on the same page as a team when it comes to approving new jobs on these servers. Read-only session etherpad here.

It was then on to a session to “Robustify” Ansible-Puppet. Several months back we switched over to a system of triggering Puppet runs with Ansible instead of using the Puppetmaster software. This process quickly became complicated, so much so that even I struggled to trace the whole path of how everything works. Thankfully Monty Taylor and Spencer Krum started off the session by whiteboarding how everything works together, or doesn’t, as the case may be. It was a huge help to see it sketched out so that the pain points could be identified, one of those rare times when it was super valuable to be together in a room as a team rather than trying to explain things over IRC. We learned that inventory creation for Ansible is one of our pain points, but the complexity of the whole system has made fixing problems tricky: you pull one thread and something else gets undone! We also discussed the status of logging, and how we can better prepare for edge cases where things Really Go Wrong and we can’t access the server to see the logs to find out what happened. There’s also some Puppetboard debugging to do, as folks rely on the data from that and it hasn’t been entirely accurate in reporting failures lately. In all, a great session; read-only session etherpad here.

Monty and Spencer explain our Ansible-Puppet setup

Next up for Infrastructure was a fishbowl session about OpenID/SSO for Community Systems. The OpenStack Foundation invested in the development of OpenStackID when few other options that fit our need were mature in this space. Today we have the option of using ipsilon, which has a bigger development community and is already in use by another major open source project (Fedora). The session outlined the benefits of consuming an upstream tool instead, covering their development model, security considerations, and the resources that have been spent rolling our own solution. The session also outlined exactly what our needs are to move all of our authentication away from Launchpad hosted by Canonical. I think it was a good session with some healthy discussion about where we are with our tooling; read-only session etherpad here.

I spent my time after lunch with the translations/internationalization (i18n) folks in a 90 minute work session on Translation Processes and Tools (read-only session etherpad here). My role in this session, along with Steve Kowalik and Andreas Jaeger, was to represent the infrastructure team and the tooling we could provide to help the i18n team get their work done. Of particular focus were the translations check site that we need to work toward bringing online and our plan to upgrade Zanata, and the underlying operating system it’s running on. We also discussed some of the other requirements of the team, like automated polling of Active Technical Contributor (ATC) status for translators and improved statistics on Stackalytics for translations. Andreas was also able to take time to show off the new translations procedure for reno-driven release notes, which allows for translations throughout the cycle as they’re committed to the repositories rather than a mad rush to complete them at the end. It was also nice to catch up with Alex Eng from the Zanata team and former i18n PTL Daisy (Ying Chun Guo), who I had such a great time with in Tokyo; I wish I’d had more time to grab a meal with them.

In our final Infrastructure-focused session of the day, we met to discuss operating system upgrades. With the release of the latest Ubuntu LTS (16.04) the week prior to the summit, we find ourselves in a world of three Ubuntu LTS releases in the mix. We decided to first carve out some time to get all of our 12.04 systems upgraded to 14.04. From there we’ll work to get our Puppet modules updated and services prepared for running on 16.04. Of particular interest to me is getting the Zanata server on 16.04 soon so we can upgrade the version of Zanata it’s running, which requires a newer version of Java than 14.04 provides. We also spent a little time splitting out the easier servers to upgrade from the more difficult ones, especially since some systems have very little data and don’t actually need an in-place upgrade; we can simply redeploy those workers. We will do a more thorough evaluation when we’re closer to upgrade time, which we’re scheduling for some time this month. Read-only session etherpad here.

Thursday evening meant it was time for our Infrastructure Team dinner! Over 20 self-proclaimed infrastructure team members piled into cars to make it across town to Freedmans to enjoy piles of BBQ. I had to pass on all things bready (including beer) but later in the evening we made our way inside to the bar where we found agave tequila that was not forbidden for me. The rest was history. Lots of fun and great chats with everyone, including a bunch of non-infra people who had been clued into our late night shenanigans and decided to join us.

Infra evening gathering, photo by Monty Taylor

Friday was our day for team work session gatherings. Infrastructure ended up in room 404 (which, in fact, was difficult to find). Jeremy Stanley (fungi) kicked the day off by outlining topics for Infra and QA that we may find valuable to work on together while we were in the room. I worked on a few things with folks for about an hour before switching tracks to join my translations friends again over in their work session.

Steve, Andreas and I made our way over to the i18n session to chat with them about the ability to translate more things (like DevStack documentation) and to give them an update from our upgrades session for an idea of when they could expect the Zanata upgrade. Perhaps the most exciting part of the morning was their request for us to finally shut down the OpenStack Transifex project. We switched to Zanata when Transifex went closed source, but our hosted account had lingered around for the year since, kept “just in case” we needed something from it. With two OpenStack cycles on Zanata behind us, it was time to shut it down. We were all delighted when we saw the email: [Transifex] The organization OpenStack has been deleted by the user jaegerandi.

Cheerful crowd of i18n contributors!

After one more lunch at Cooper’s BBQ, I made it back to the Infrastructure room for more afternoon work, but I could feel the cloud of exhaustion hitting me by then. Most of what I managed was informally chatting with my fellow contributors and sketching out work to be done rather than actually getting much done. There’d be plenty of time for that once I returned home!

I concluded my time in Austin with a few colleagues with a visit to the Austin Toy Museum, some leisurely time at the Blue Cat Cafe (my first cat cafe!) and a quiet sushi dinner. With that, another great OpenStack Summit was behind me. My flight home left at 6AM Saturday morning.

Edit: Infrastructure PTL Jeremy Stanley has also written summaries of sessions here: Newton Summit Infra Sessions Recap

by pleia2 at May 10, 2016 07:39 PM

May 08, 2016

Akkana Peck

Setting "Emacs" key theme in gtk3 (and Firefox 46)

I recently let Firefox upgrade itself to 46.0.1, and suddenly I couldn't type anything any more. The emacs/readline editing bindings, which I use probably thousands of times a day, no longer worked. So every time I typed a Ctrl-H to delete the previous character, or Ctrl-B to move back one character, a sidebar popped up. When I typed Ctrl-W to delete the last word, it closed the tab. Ctrl-U, to erase the contents of the urlbar, opened a new View Source tab, while Ctrl-N, to go to the next line, opened a new window. Argh!

(I know that people who don't use these bindings are rolling their eyes and wondering "What's the big deal?" But if you're a touch typist, once you've gotten used to being able to edit text without moving your hands from the home position, it's hard to imagine why everyone else seems content with key bindings that require you to move your hands and eyes way over to keys like Backspace or Home/End that aren't even in the same position on every keyboard. I map CapsLock to Ctrl for the same reason, since my hands are too small to hit the PC-positioned Ctrl key without moving my whole hand. Ctrl was to the left of the "A" key on nearly all computer keyboards until IBM's 1986 "101 Enhanced Keyboard", and it made a lot more sense than IBM's redesign since few people use Caps Lock very often.)

I found a bug filed on the broken bindings, and lots of people commenting online, but it wasn't until I found out that Firefox 46 had switched to GTK3 that I understood what had actually happened. And adding gtk3 to my web searches finally put me on the track to finding the solution, after trying several other supposed fixes that weren't.

Here's what actually worked: edit ~/.config/gtk-3.0/settings.ini and add, inside the [Settings] section, this line:

gtk-key-theme-name = Emacs

I think that's all that was needed. But in case that doesn't do it, here's something I had already tried, unsuccessfully, and it's possible that you actually need it in addition to the settings.ini change (I don't know how to undo magic Gnome settings so I can't test it):

gsettings set org.gnome.desktop.interface gtk-key-theme "Emacs"
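If you'd rather script the settings.ini change than edit the file by hand, something along these lines should work. This is just my sketch of the same fix, not from the original post, and it assumes that if a settings.ini already exists, its [Settings] section is the only (or last) section, so an appended key lands in the right place:

```shell
# Location of the per-user GTK3 settings file
CONF="$HOME/.config/gtk-3.0/settings.ini"

# Make sure the directory and file exist; start a [Settings] section if new
mkdir -p "$(dirname "$CONF")"
[ -f "$CONF" ] || printf '[Settings]\n' > "$CONF"

# Append the key theme only if it isn't already set
grep -q '^gtk-key-theme-name' "$CONF" || \
  printf 'gtk-key-theme-name = Emacs\n' >> "$CONF"
```

Either way, restart the GTK3 application (Firefox, in this case) so it re-reads the settings.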

May 08, 2016 12:11 AM

May 01, 2016

Elizabeth Krumbach

Newton OpenStack Summit Days 1-2

This past week I attended my sixth OpenStack Summit. This one took us to Austin, Texas. I was last in Austin in 2014 when I quickly stopped by to give a talk at the Texas LinuxFest, but I wasn’t able to stay long during that trip. This trip gave me a chance (well, several) to finally have some local BBQ!

I arrived Sunday afternoon and took the opportunity to meet up with Chris Aedo and Paul Belanger, who I’d be on the stage with on Monday morning. We were able to have our first meetup together and do a final run-through of our slides to make sure they had all the updates we wanted and that we were clear on where the transitions were. Gathering at the convention center also allowed us to pick up our badges before the mad rush that would come with the opening of the conference itself on Monday morning.

With Austin being the Live Music Capital of the World, we were greeted in the morning by live music from the band Soul Track Mind. I really enjoyed the vibe it brought to the morning, and we had a show to watch as we settled in and waited for the keynotes.

Jonathan Bryce and Lauren Sell of the OpenStack Foundation opened the conference and gave us a tour of numbers. The first OpenStack summit was held in Austin just under six years ago with 75 people, and they were proud to announce that this summit had over 7,500. It’s been quite the ride, one that I’m proud to have been part of since the beginning of 2013. In Jonathan’s keynote we were able to get a glimpse into the real users of OpenStack, with highlights including the fact that 65% of respondents to the recent OpenStack User Survey are using OpenStack in production and that half of the Fortune 100 companies are using OpenStack in some capacity. It was also interesting to learn how important the standard APIs for interacting with clouds were for companies, a fact that I always hoped would shine through as this open source cloud was being adopted. The video from his keynote is here: Embracing Datacenter Diversity.

As the keynotes continued, the ones that really stood out for me were by AT&T (video: AT&T’s Cloud Journey with OpenStack) and Volkswagen Group (video: Driving the Future of IT Infrastructure at Volkswagen Group).

The AT&T keynote was interesting from a technical perspective. It’s clear that the rise of mobile devices and the internet of things has put pressure on telecoms to grow much more quickly than they have in the past to handle this new mobile infrastructure. Their keynote shared that they expect this to grow an additional ten times by 2020. To meet this need, the networking aspects of technologies like OpenStack are important to their strategy as they move away from “black box” hardware from networking vendors and toward more software-driven infrastructure that can grow more quickly to fit their needs. We learned that they’re currently using 10 OpenStack projects in their infrastructure, with plans to add 3 more in the near future, and heard about their in-house AT&T Integrated Cloud (AIC) tooling for managing OpenStack. When the morning concluded, all their work was rewarded with a Super User award, which they wrote about here.

The Volkswagen Group keynote was a lot of fun. As the world of electric and automated cars quickly approaches they have recognized the need to innovate more quickly and use technology to get there. They still seem to be in the early days of OpenStack deployments, but have committed a portion of one of their new data centers to just OpenStack. Ultimately they see a hybrid cloud future, leveraging both public and private hosting.

The keynote sessions concluded with the announcement of the 2017 OpenStack Summit locations: Boston and Sydney!

Directly after the keynote I had to meet Paul and Chris for our talk on OpenStack Infrastructure for Beginners (video, slides). We had a packed room. I led off the presentation by covering an overview of our work and giving a high level tour of the OpenStack project infrastructure. Chris picked up by speaking to how things work from a developer perspective, tying that back into how and why we set things up the way we did. Paul rounded out the presentation by diving into more of the specifics around Zuul and Jenkins, including how our testing jobs are defined and run. I think the talk went well; we certainly had a lot of fun as we went into lunch chatting with folks about specific components that they were looking either to get involved with or to replicate in their own continuous integration systems.

Chris Aedo presenting, photo by Donnie Ham (source)

After a delicious lunch at Cooper’s BBQ, I went over to a talk on “OpenStack Stable: What It Actually Means to Maintain Stable Branches” by Matt Riedemann, Matthew Treinish and Ihar Hrachyshka in the Upstream Development track of the conference. This was a new track for this summit, and it was great to see how well-attended the sessions ended up being. The goal of this talk was to inform members of the community what exactly is involved in managing stable releases, which has a lot more moving pieces than most people tend to expect. Video from the session is up here. It was then over to “From Upstream Documentation To Downstream Product Knowledge Base” by Stefano Maffulli and Caleb Boylan of DreamHost. They’ve been taking OpenStack documentation and adapting it to be easier and more targeted for consumption by their customers. They talked about the toolchain that gets it from raw upstream OpenStack source into the proprietary knowledge base at DreamHost. It’ll be interesting to see how this scales long term through releases and documentation changes; video here.

My day concluded by participating in a series of Lightning Talks. Mine was first: I spent 5 minutes giving a tour of our status dashboards. I was inspired to give this talk after realizing that even though the links are right there, most people are completely unaware of what things like Reviewday (the “Reviews” link) are. It also gave me the opportunity to take a closer, current look at OpenStack Health prior to my presentation; I had intended to go to “OpenStack-Health Dashboard and Dealing with Data from the Gate” (video), but it conflicted with the talk we gave in the morning. The lightning talks continued with Paul Belanger on Grafyaml, James E. Blair on Gertty, and Andreas Jaeger on the steps for adding a project to OpenStack. From there the lightning talks drifted away from Infrastructure and into more general upstream development. Video of all the lightning talks is here.

Day two of the summit began with live music again! It was nice to see that it wasn’t a single day event. This time Mark Collier of the OpenStack Foundation kicked things off by talking about the explosion of growth in infrastructure needed to support the growing Internet of Things. Of particular interest was learning about how operators are particularly seeking seamless integration of virtual machines, containers and bare metal, and how OpenStack meets that need today as a sort of integration engine, video here.

The highlights of the morning for me included a presentation from tcp cloud in the Czech Republic. They’re developing a Smart City in the small Czech city of Písek. Their presenter gave an overview of the devices they were using and presented a diagram demonstrating how all the data they collect from around the city gets piped into an OpenStack cloud that they run. He concluded his presentation by revealing that they’d turned the summit itself into a mini city by placing devices around the venue to track temperature and CO2 levels throughout the rooms; very cool. Video of the presentation here.

tcp cloud presentation

I also enjoyed seeing Dean Troyer on stage to talk about improving user experience (UX) with OpenStackClient (OSC). As someone who has put a lot of work into converting the documented commands in my book to use OSC rather than the individual project clients, I certainly appreciate his dedication to this project. The video from the talk is here. It was also great to hear from OVH, an ISP and cloud hosting provider who currently donates OpenStack instances to our infrastructure team for running CI testing.

Tuesday also marked the beginning of the Design Summit. This is when I split off from the user conference and spent the rest of my time in the development space. This time the Design Summit was held across the street from the convention center, in the Hilton where I was staying. This area of the summit takes us away from presentation-style sessions and into discussions and work sessions. This first day focused on cross-project sessions.

This was the lightest day of the week for me, as I had a much stronger commitment to the infrastructure sessions happening later in the week. Still, I went to several sessions, starting off with one led by Doug Hellmann about how to improve the situation around global requirements. The session was largely an attempt to define the issues around requirements, to get more contributors to help with requirements project review, and to chat about improvements to tests. We’d really like requirements changes to have a lower chance of breaking things, so finding folks to sign up for this test-writing work is really important.

I had lunch with my book-writing co-conspirator Matt Fischer to chat about some of the final touches we’re working on before it’s all turned in. We ended up with a meaty lunch again, at Moonshine Grill just across the street from the convention center, after which I went into a “Stable Branch End of Life Policy” session led by Thierry Carrez and Matt Riedemann. The stable situation is a tough one. Many operators want stable releases with longer lifespans, but the commitment from companies to put engineers on it is extremely limited. This session explored the resources required to continue supporting releases for longer (infra, QA, etc.), and there were musings about extending the support period, for projects meeting certain requirements, to up to 24 months (from 18). Ultimately, by the end of the summit it did seem that 18 months will remain the release lifespan for them all.

I then went over to the Textile building across from the conference center where my employer, HPE, had set up their headquarters. I had a great on-camera chat with Stephen Spector about how open source has evolved from hobbyist to corporate since I became involved in 2001. I then followed some of the marketing folks outside to shoot some snippets for video later.

The day of sessions continued with a “Brainstorm format for design summit split event” session that talked a lot about dates. As a starting point, Thierry Carrez wrote a couple blog posts about the proposal to split the design summit from the user summit:

With these insightful blog posts in mind, the discussion moved forward on the assumption that the events would be split, and on how to handle that timing-wise. When in the cycle would each event happen for maximum benefit for our entire community? In the first blog post he had a graphic with a proposed timeline, which the discussions mostly stuck to, but we dove deeper into what goes on during each week of the release cycle and when the best time would be for developers to gather together to start planning the next release. While there was good discussion on the topic, it was clear that there continues to be apprehension around travel for some contributors. There are fears that they would struggle to attend multiple events funding-wise, especially when questions arose around whether mid-cycle events would still be needed. Change is tough, but I’m on board with the plan to split out these events. Even as I write this blog post, I notice the themes and feel for the different parts of our current summit are very different.

My session day concluded with a session about cross-project specifications for work, led by Shamail Tahir and Carol Barrett from the Product Working Group. I didn’t know much about OpenStack user stories, so this session was informative for seeing how those should be used in specs. In general, planning work in a collaborative way, especially across different projects that have diverse communities, is tricky. Having some standards in place for these specs, so teams are on the same page and have the same expectations for format, seems like a good idea.

Tuesday evening meant it was time for the StackCity Community Party. Instead of individual companies throwing big, expensive parties, a street was rented out and companies were able to sponsor the bars and eateries in order to throw their branded events in them. Given my dietary restrictions this week, I wasn’t able to partake in much of the food being offered, so I only spent about an hour there before joining a similarly restricted diet friend over at Iron Works BBQ. But not before I picked up a dinosaur with a succulent in it from Canonical.

I called it an early night after dinner, and I’m glad I did. Wednesday through Friday were some busy days! But those days are for another post.

More photos from the summit here:

by pleia2 at May 01, 2016 04:28 PM

April 29, 2016

Akkana Peck

Vermillion Cliffs trip, and other distractions

[Red Toadstool, in the Paria Rimrocks] [Cobra Arch, in the Vermillion Cliffs] I haven't posted in a while. Partly I was busy preparing for, enjoying, then recovering from, a hiking trip to the Vermillion Cliffs, on the Colorado River near the Arizona/Utah border. We had no internet access there (no wi-fi at the hotel, and no data on the cellphone). But we had some great hikes, and I saw my first California Condors (they have a site where they release captive-bred birds). Photos (from the hikes, not the condors, which were too far away): Vermillion Cliffs trip.

I've also been having fun welding more critters, including a roadrunner, a puppy and a rattlesnake. I'm learning how to weld small items, like nail legs on spark plug dragonflies and scorpions, which tend to melt at the MIG welder's lowest setting.

[ Welded puppy ] [ Welded Roadrunner ] [ Welded rattlesnake ]

New Mexico's weather is being charmingly erratic (which is fairly usual): we went for a hike exploring some unmapped cavate ruins, shivering in the cold wind and occasionally getting lightly snowed upon. Then the next day was a gloriously sunny hike out Deer Trap Mesa with clear long-distance views of the mountains and mesas in all directions. Today we had graupel -- someone recently introduced me to that term for what Dave and I have been calling "snail" or "how" since it's a combination of snow and hail, soft balls of hail like tiny snowballs. They turned the back yard white for ten or fifteen minutes, but then the sun came out for a bit and melted all the little snowballs.

But since it looks like much of today will be cloudy, it's a perfect day to use up that leftover pork roast and fill the house with good smells by making a batch of slow-cooker green chile posole.

April 29, 2016 06:28 PM

April 23, 2016

Elizabeth Krumbach


A few weeks ago I had the pleasure of flying to Singapore to participate in FOSSASIA 2016, which is billed as Asia’s Premier Open Technology Event. I was able to spend a little time prior to the event doing some touristing but Friday morning came quickly and I met up with a colleague to make our way to the conference. We took the Singapore MRT (Mass Rapid Transit, rails!) from the station near our hotel to the Science Centre Singapore where the conference was being held. I was really pleased with how fast, frequent, clean and easy to navigate the MRT is during rush hour. Though the trains did tend to fill up, we had very easy rides to and from the venue each day.

This was my second open source conference in a science museum, and I really like the association. As conference attendees we were free to visit the museum (photos here). It was quite an honor to be welcomed to the center by Lim Tit Meng, the museum’s Chief Executive, during the keynotes on Friday morning. That morning I also had the pleasure of meeting FOSSASIA founder Hong Phuc, with whom I had been exchanging emails leading up to the event. It was very clear that she’s remained very hands-on with the organization of the conference since its founding.

The theme of the conference this year centered around the Internet of Things, so the Friday morning keynotes drew from a diverse group of people and organizations. I was particularly impressed that they didn’t just call upon open source developers to give presentations. Keynotes came from folks working on hardware, design and fascinating programs that used IoT devices.

Highlights of the morning included a talk by Bunnie Huang who made electronic, lighted badges for Burning Man that changed their light patterns based on how they “mated” with other badges to change their blinkome (think genome). Talks continued with a really fun one from Bernard Leong of the Singapore Post who explained how they’ve been experimenting with drones for small package delivery, particularly to remote areas, using Pulau Ubin as an example in the demonstration run.

I was then really delighted to hear about UNESCO’s YouthMobile program from Davide Storti and ITO Misako. YouthMobile encourages children to shift from being mere users of mobile devices to actually developing applications for them. I find this project to be particularly important, as I know I wouldn’t be the technologist I am today without being able to fiddle with my early computers. We need to grow that next generation of tinkerers, but increasingly kids tend to only have access to mobile devices rather than the big old desktops that I grew up on. I believe projects aimed at inspiring the tinkerer in children on these new devices will grow in importance as we move into the future. It was also nice to hear that the project hasn’t just been creating all its own curriculum to accomplish its goals; they’ve been partnering with existing initiatives and programs. Kudos to them for doing it right.

Davide Storti and ITO Misako on YouthMobile

Cat Allman continued keynotes as she talked about the work Google has started to do in the Maker and Science space. Their work includes Google Summer of Code accepting more science-focused programs, support of Maker events and “road trips” with students to science museums. The final keynote came from Jan Nikolai Nelles who spoke on the The Other Nefertiti, where a team visited a German museum and created a not-strictly-authorized 3D rendering of a famous Nefertiti bust. It was a valuable thing unto itself, and interesting for raising awareness about how museums share data about artifacts, or don’t, as the case may be.

The conference continued as I went to a talk titled “Why are we (Still) wasting food? How technology can help”, which sounded interesting, but the presenter didn’t seem to understand his audience or what the conference was about. The talk was pretty much a sales pitch about the success of their product in saving food in restaurant and other industry kitchens. A noble effort, and it was fun to brainstorm how some of the components he talked about could be used in other open source projects. I visited their website during the talk and was perplexed to be unable to find a link to their source code. During the Q&A I specifically asked whether the software was actually open source. The presenter struggled to answer my question; he claimed that it was, but that he was not a developer, so he wasn’t sure which parts or where I could find it. He gave me his business card so I could send him an email about it after the conference. My email follow-up received this response:

“We are not using any open source code. Everything is developed in house.”

How disappointing! I’m not sure how their talk ended up at a Free and Open Source Software conference, though their selection of a non-technical presenter who couldn’t answer a simple question that strikes at the core of what the conference is about does hint at their obliviousness. I certainly didn’t appreciate being tricked into attending a sales talk about a suite of proprietary software. Thankfully, the conference improved after this.

I attended a talk by U-Zyn Chua about how he reverse engineered an API in a taxi app for his Singapore Taxi data project. His talk was fascinating for two reasons. First, he walked us through the work that had to be done to use an undocumented API. Second, the data about taxis that he collected was fascinating: high-traffic areas, times of day when taxis were busy. Plus, between this talk and the Singapore Post talk I learned a lot about the geography and population centers of Singapore.

Official Group Photograph - FOSSASIA 2016
Official Group Photograph – FOSSASIA 2016 by Michael Cannon

The conference continued the next day and I made sure I made time to attend Sayan Chowdhury’s “Dive deep into Fedora Infra” talk. Fedora was an early project on my open source infra list and it’s always exciting to chat with their engineers and swap running infra in the open stories. Sayan’s talk gave an overview of several of the key services that they’ve developed and deployed, including projects like Fedora Infrastructure Message Bus (fedmsg) which was also deployed by the open source infra team for the Debian project. Unfortunately I had to quickly depart from that talk in order to make it over to my own just after.

I gave a talk on “Code Review for DevOps” which I had a lot of fun modifying for the 20 minute slot and for a devops rather than systems administration audience. I put a firmer emphasis on the development of tooling in our team and was able to tighten up the presentation a lot to deliver a whirlwind tour of how we do almost everything through a code review system and with testing. Slides from the presentation are here (PDF).

Photo of my presentation by Dong (Vincent) Ma (source)

I mentioned that my talk was 20 minutes long, and that makes this a good time to pause and reflect on that format. Almost all the talks at this conference had 20 minute slots, which is about half the length I’m accustomed to. I really like this length. If a talk is not interesting, at least it’s short. If it is interesting, 20 minutes does actually give enough time for a good presentation. The schedule also allowed for 10 minutes between sessions so that people could get to their next room. In reality, all this timing could have used a bit more policing. Q&As, and even talks themselves by speakers used to longer slots, frequently overflowed beyond their 20 minute window and made it difficult to finish seeing one talk and get to the next. For a volunteer-run event, they did do a good job overall of sticking to at least the schedule of when talks started in each room, so if I planned accordingly I rarely missed the beginning of a talk in an alternate track because the schedule had drifted.

Saturday afternoon I spent some time going to lightning talks, including one about “Continuous Integration and Continuous Deployment (CI/CD) for Open Source and Free Software Development” by my colleague Dong Ma. With only 5 minutes, he was quickly able to contrast some of the features of the FOSSology open source CI/CD workflow with that of the model the OpenStack community has developed.

Dong Ma on open source CI/CD

I was then off to Sundeep Anand’s presentation, “Using Python Client to talk with Zanata Server.” Last autumn we launched our translations site running on Zanata, and we have been using the Java client along with a series of scripts to handle manipulation of the translations in the OpenStack project. It was interesting to learn about his strides with the Python client, which is making its way up to feature parity with the Java one. Since OpenStack itself is written in Python, switching to this Python client may make sense for us at some point, as it would make it easier for developers on our project to contribute to it. During his talk he also gave a demonstration of Zanata itself as he walked through the use of the client.

These talks were all very practical for me and applicable to my work, but that doesn’t mean I didn’t go off and have fun too. Later that afternoon I attended a talk on “A trip to Pluto with OpenSpace” where the team developing OpenSpace took public images of the Pluto flyby and gave us a demonstration of how their software worked to provide such a fascinating, animated demonstration. I also got to learn about the New Palmyra project where people are getting together to create 3D models of famous monuments in Syria that have been or are at risk of being destroyed by ongoing military conflict in the region. I also enjoyed learning about the passion that everyone on that team is bringing to the project, and I have a lot of respect for and interest in their goals of preserving history.

On Sunday the first talk I attended was by François Cartegnie on the newest features of the popular, cross-platform VLC software project. As a user of multiple platforms (Linux and Android) it was nice to hear that with the 3.0 release they’re aiming to standardize on that release number across platforms, as the differing version numbers have been confusing. He also spent a great deal of time explaining the challenges they continually overcome to be the best player on the market, including not just supporting encoding standards, but also handling cases where those standards are poorly or improperly implemented. This can’t be an easy task. I was also interested to learn that the UPnP support has been revamped and should be working better these days.

My colleague and tourism buddy for the week Matthew Treinish spoke next, on “QA in the Open.” Drawing from his experience as the QA project lead for OpenStack for several cycles, he talked about the plugin-driven model that OpenStack QA has adopted. This model has helped individual projects take ownership of their testing requirements and has helped scale the very small core QA team, which now spans over a thousand repositories and dozens of projects that make up OpenStack.

Matthew Treinish on QA in the Open

Sunday afternoon had a talk that was one of the conference highlights for me: “Reproducible Builds – fulfilling the original promise of free software” by Chris Lamb. I had an interest in the topic before joining the session, but it was one of those talks where I was really pulled in and became even more interested. The idea on the surface seems pretty simple: you want to be able to exactly replicate builds over time and space. But there are a number of challenges when it comes to actually doing it, which he outlined:

  • Timestamps
  • Timezones and locales
  • Different versions of libraries
  • Non-deterministic file ordering
  • Users, groups, umask, environment variables
  • Random behavior (e.g. hash ordering)
  • Build paths

Chris Lamb on Reproducible Builds

As soon as he enumerated these things it was obvious that they all would be problems, yet it was still surprising that it would be so difficult. From this talk I learned about the project which seeks to document and discuss these issues and find solutions for all of them. Additionally, Chris himself is a participant in Debian, and he was able to share statistics showing that most Debian packages are now being built in a way that adheres to the reproducible model. Very cool stuff, and I hope to learn more about it.
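The timestamp problem from the list above, for instance, is commonly handled with the SOURCE_DATE_EPOCH environment variable, a convention from the reproducible-builds effort: instead of embedding the current time, a build tool honors a pinned epoch when one is provided. A minimal sketch (the build_timestamp function is my own, for illustration):

```python
import os
import time


def build_timestamp() -> int:
    """Return the timestamp to embed in build artifacts.

    Honors SOURCE_DATE_EPOCH (the reproducible-builds convention) so
    that rebuilds embed the same timestamp; otherwise falls back to
    the non-deterministic current time.
    """
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    if epoch is not None:
        return int(epoch)
    return int(time.time())


# With the variable pinned, every rebuild embeds an identical timestamp:
os.environ["SOURCE_DATE_EPOCH"] = "1461196800"
print(build_timestamp())  # -> 1461196800
```

The other items on the list (locales, file ordering, build paths) each need similar care: pin or normalize the source of nondeterminism before it reaches the artifact.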

My afternoon continued with a talk about btrfs by Anand Jain. His focus was on the basics, and then on upcoming features in development. The talk may have convinced me to start using it in a basic way on one of my systems soon, as the support for the core components is actually quite stable these days. I then went to an Asciidoc talk, where the presenter, George Goh, compiled his presentation from Asciidoc just before he began presenting; nicely done! He stressed the importance of documentation and of making it easy to keep updated, with automated updates of references to things like figures that live inline in the text. He also explored the use of template systems in Asciidoc to easily export portions of your document to different projects and organizations while preserving the appropriate branding for each.

In what seemed much too soon, the conference conclusion came on Sunday evening. There were thanks and words from several of the organizers. Words from the audience and various attendees were also spoken; my favorite came from young (middle school, by my US reckoning) students visiting from Saudi Arabia. Several had feared that the conference would be boring and too technical for the level they were at, but they expressed excitement about how much fun they had and how many presenters had succeeded in presenting topics that they could understand. It was thrilling to hear this from these students. I want the architects of our future to start young, be exposed to free and open source software, and be excited by the possibilities.

More of my photos from the event here:

Thanks to all the organizers and volunteers for putting this conference together. I had a wonderful time and hope to participate again in the future!

by pleia2 at April 23, 2016 05:50 PM

April 21, 2016

Jono Bacon

Dan Ariely on Building More Human Technology, Data, Artificial Intelligence, and More

Behavioral economics is an exciting skeleton on which to build human systems such as technology and communities.

One of the leading minds in behavioral economics is Dan Ariely, New York Times best-selling author of Predictably Irrational, The Upside Of Irrationality, and frequent TED speaker.

I recently interviewed Dan for my Forbes column to explore how behavioral economics is playing a role in technology, data, artificial intelligence, and preventing online abuse. Predictably, his insight was irrationally interesting. OK, that was a stretch.

Read the piece here

by Jono Bacon at April 21, 2016 08:59 PM

Nathan Haines

Ubuntu 16.04 LTS FAQ

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Ubuntu 16.04 LTS is here! Let's take a look at some of the most exciting features and common questions around this new operating system.

Ubuntu 16.04 LTS

  1. When does Ubuntu 16.04 LTS come out?

    • Ubuntu 16.04 LTS will reach general release on April 21st, 2016.
  2. I meant at what time will the release happen?

    • Ubuntu is actively being developed until the actual release happens, minus a small delay to help the mirrors propagate first. The release will be announced on the ubuntu-announce mailing list. (This page will not exist until the release.)
  3. What does "16.04 LTS" mean?

    • Ubuntu is released on a regular schedule every six months. The first release was in October 2004, and was named Ubuntu 4.10. For Ubuntu, the major version number is the year of release and the minor version number is the month of release. Ubuntu 16.04 is released on 2016-04-21, so the version number is 16.04.
    • Ubuntu releases are supported for 9 months, but many computing activities require stability. Every two years, an Ubuntu release is developed with long term support in mind. These releases, designated with "LTS" after the version number, are supported for 5 years on the server and desktop.
  4. What does "Xenial Xerus" mean?

    • Every version of Ubuntu has an alliterative development codename. After Ubuntu 6.06 LTS was released, the decision was made to choose new codenames in alphabetical order. Ubuntu 16.04 LTS is codenamed the Xenial Xerus release, or xenial for short.
    • "Xenial" is an adjective that means "friendly to others, especially foreigners, guests, or strangers." With lxd being perfect for "guest" containers, Snappy Ubuntu Core being perfect for IoT developers, snap packages being perfect for third-party software developers, and Ubuntu on Windows perfect for Windows developers who use Ubuntu in the cloud (or Ubuntu developers who are forced to use Windows at work!), xenial is a perfect description of Ubuntu 16.04!
    • "Xerus" is the genus name of the African ground squirrel. They collaborate and are not aggressive to other mammals, so they fit the description of xenial. It also makes for an adorable mascot!
  5. How long will Ubuntu 16.04 LTS be supported?

    • Ubuntu 16.04 LTS will be supported on desktops, servers, and in the cloud for 5 years, until April 2021. After this time, 16.04 LTS will enter end-of-life and no more security updates will be released.

Getting Ubuntu 16.04 LTS

  1. Where can I download Ubuntu 16.04 LTS?

    • Once released, Ubuntu 16.04 LTS will be available from the official download page, which will help you select the right architecture and will automatically link you to a mirror for the download. Please don't constantly refresh the direct download site!
  2. What if I find ubuntu-16.04-desktop-amd64.iso on an Ubuntu server before the official release is announced?

    • Then you've found a final release candidate that is being used to seed the mirrors before release. Downloading or linking to it will interfere with the mirrors and delay the release.
  3. What if I post a link to it anyway?

    • If you do it on /r/Ubuntu, your post or comment will be removed and you will be banned for a day. The release team works hard enough as it is!
  4. What if I want to help others get Ubuntu 16.04 LTS faster?

    • Thank you for your help! Consider using BitTorrent (Ubuntu comes with Transmission) and seeding the final release.
  5. What if I'm already running Ubuntu 14.04.4 LTS or Ubuntu 15.10?

    • Then you can simply upgrade to Ubuntu 16.04 LTS using Software Updater.

Upgrading to Ubuntu 16.04 LTS

  1. Is upgrading to a new version of Ubuntu easy?

    • Yes, the upgrade process is supported and automated. However, you should always back up your files and data before upgrading Ubuntu. Actually, you should always keep recent backups even when you're not upgrading Ubuntu.
    • Ubuntu checks for software updates once a day, and Software Updater will inform you once a new version of Ubuntu is available. The upgrade will download a large amount of data--anywhere from 0.5 to 1.5 GB depending on the packages you have installed--and the upgrade process can take some time. Don't do any serious work on your computer during the upgrade process. Light web browsing or a simple game such as Aisleriot, Mahjongg, or Mines is safe.
  2. Should I upgrade to Ubuntu 16.04 LTS right away or wait?

    • It should be safe to upgrade immediately, and as long as you back up your home folder and have install media for your current version of Ubuntu in case you want to reinstall, there's very little risk involved.
  3. Is it better to wait until later?

    • Not necessarily, but waiting does have benefits. Ubuntu 16.04 will receive newer release images with bug fixes about 3 months after its initial release. In addition, downloading updates can be much faster after release week. (Be sure to set up your Ubuntu mirror in Software & Updates!) Ubuntu 14.04 LTS is supported until April 2019 and Ubuntu 15.10 is supported until July 2016, so you have nothing to lose by waiting a couple of weeks.
  4. I'm running Ubuntu 15.10. How do I upgrade to Ubuntu 16.04 LTS?

    • After Ubuntu 16.04 LTS is released, Software Updater will inform you that a new version of Ubuntu is available. Make sure that all available updates for Ubuntu 15.10 have been installed first, then click the "Upgrade..." button.
  5. I'm running Ubuntu 14.04.4 LTS. How do I upgrade to Ubuntu 16.04 LTS?

    • After Ubuntu 16.04.1 LTS is released in July 2016, Software Updater will inform you that a new version of Ubuntu is available. Make sure that all available updates for Ubuntu 14.04 LTS have been installed first, then click the "Upgrade..." button.
  6. I'm running Ubuntu 12.04 LTS. How do I upgrade to Ubuntu 16.04 LTS?

    • You can't upgrade directly to Ubuntu 16.04 LTS, so you have two options:
      • Use Update Manager to upgrade to Ubuntu 14.04 LTS, then reboot and use Software Updater to upgrade again to Ubuntu 16.04 LTS.
      • Back up your computer and install Ubuntu 16.04 LTS from scratch.
  7. What is Ubuntu 16.04.1 and why can't I upgrade Ubuntu 14.04 LTS immediately?

    • A new version of Ubuntu is released every six months, but LTS releases are used for years. So Ubuntu offers "point releases" of LTS versions. Starting 3 months after the release and then every 6 months thereafter, new install images are created that include the latest updates to all of the default software. This allows new installations to run the latest software immediately and decreases the time it takes to download updates after a new install.
    • Because LTS users depend on stability, Ubuntu 14.04 LTS will not automatically offer an update to Ubuntu 16.04 LTS until the first point release. After three months, any show-stopper bugs should be solved and the upgrade process will have been tested by many others and improved if necessary.
  8. What if I want to upgrade right now?

    • Upgrading from Ubuntu 14.04 LTS to Ubuntu 16.04 LTS should be safe and easy. If you have a recent backup of your files and data, simply open Terminal and type update-manager -d. This will tell Ubuntu to upgrade to the next release early.
  9. What if I already ran update-manager -d and upgraded to a beta or pre-release version of Ubuntu 16.04 LTS?

    • If you run Software Updater after the release of Ubuntu 16.04 LTS, your version of xenial will be the same as the released version of Ubuntu.
  10. What if I don't believe that?

    • When xenial is being developed, it is constantly being improved. Milestones such as Alpha 1, Beta 2, and so on are simply points in time where developers can check progress. If you install Ubuntu from a Beta 2 image (for example), the moment you apply updates, you are no longer running Beta 2. Updates to xenial continue until release, when the Ubuntu archive is locked, images are spun, and the xenial archive is finalized and released as Ubuntu 16.04 LTS. After the release of Ubuntu 16.04 LTS, all further updates come from the xenial-updates and xenial-security repositories and the xenial repository remains unchanged. Updating from the Ubuntu repositories during and after the xenial development and release brings you along through these moments in time.
      • TRIVIA: As implied above, this means that Ubuntu 16.04 LTS doesn't exist until the Release Team names the final product. Until then, the release is simply Xenial Xerus or xenial for short.
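For the command-line inclined, the upgrade paths described above boil down to a few commands. This is only a hedged sketch, wrapped in a shell function so nothing runs until you call it yourself, and it assumes the stock Ubuntu upgrade tools:

```shell
# Sketch of the command-line equivalents of the upgrade steps above.
# Wrapped in a function so nothing runs until you call it yourself.
upgrade_to_xenial() {
    # First, install all pending updates for the current release:
    sudo apt-get update && sudo apt-get dist-upgrade

    # Desktop: launch the graphical upgrader early, before the first
    # point release (the "upgrade right now" answer above):
    update-manager -d

    # Server/headless equivalent:
    # sudo do-release-upgrade -d
}
```

As always, back up first; the -d flag asks for the next release before the automatic upgrade offer appears.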

Coming next:

Details on new features!

  • How do snap packages and deb packages work together?
  • DAE Unity 8?
  • Y U NO AMD fglrx drivers?
  • And other questions you ask in the /r/Ubuntu comments!

April 21, 2016 10:17 AM

April 16, 2016

Elizabeth Krumbach

Color an Ubuntu Xenial Xerus

Last cycle I reached out to artist and creator of Full Circle Magazine Ronnie Tucker to see if he’d create a coloring page of a werewolf for some upcoming events. He came through and we had a lot of fun with it (blog post here).

With the LTS release coming up, I reached out to him again.

He quickly turned my request around, and now we have a xerus to color!

Xerus coloring page
Click the image or here to download the full size version for printing.

Huge thanks to Ronnie for coming through with this, it’s shared with a CC-SA license, so I encourage people to print and share them at their release events and beyond!

While we’re on the topic of our African ground squirrel friend, thanks to Tom Macfarlane of the Canonical Design Team I was able to update the Animal SVGs section of the Official Artwork page on the Ubuntu wiki. For those of you who haven’t seen the mascot image, it’s a real treat.

Xerus official mascot

It’s a great accompaniment to your release party. Download the SVG version for printing from the wiki page or directly here.

by pleia2 at April 16, 2016 05:03 PM

April 14, 2016

Jono Bacon

Mycroft and Building a Future of Open Artificial Intelligence

Last year a new project hit Kickstarter called Mycroft that promised to build an artificial intelligence assistant. The campaign set out to raise $99,000 and raised just shy of $128,000.

Now, artificial intelligence assistants are nothing particularly new. There are talking phones and tablets such as Apple’s Siri and Google Now, and of course the talking trash can, the Amazon Echo. Mycroft is different though and I have been pretty supportive of the project, so much so that I serve as an advisor to the team. Let me tell you why.

Here is a recent build in action, demoed by Ryan Sipes, Mycroft CTO and all round nice chap:

Mycroft is interesting both for the product it is designed to be and the way the team are building it.

For the former, artificial intelligence assistants are going to be a prevalent part of our future. Where these devices will be judged though is in the sheer scope of the functions, information, and data they can interact with. They won’t be judged by what they can do, but instead by what they can’t do.

This is where the latter piece, how Mycroft is being built, really interests me.

Firstly, Mycroft is open source not just in the software, but also in the hardware and the service it connects to. You can buy a Mycroft, open it up, and peek into every facet of what it is, how it works, and how information is shared and communicated. Now, for most consumers this might not be very interesting, but from a product development perspective it offers some distinctive benefits:

  • A community can be formed that can play a role in the future development and success of the product. This means that developers, data scientists, advocates, and more can play a part in Mycroft.
  • Capabilities can be crowdsourced to radically expand what Mycroft can do. In much the same way OpenStreetMap has been able to map the world, developers can scratch their own itch and create capabilities to extend Mycroft.
  • The technology can be integrated far beyond the white box sitting on your kitchen counter and into Operating Systems, devices, connected home units, and beyond.
  • The hardware can be iterated by people building support for Mycroft on additional boards. This could potentially lower costs for future units with the integration work reduced.
  • Improved security for users with a wider developer community wrapped around the project.
  • A partner ecosystem can be developed where companies can use and invest in the core Mycroft open source projects to reduce their costs and expand the technology.

There is, though, a far wider set of implications with Mycroft. Much has been written about the concerns of people such as Elon Musk and Stephen Hawking about the risks of artificial intelligence, primarily if it is owned by a single company, or a small set of companies.

While I don’t think Skynet is taking over anytime soon, these concerns are valid, and they underline how important it is that artificial intelligence be open, not proprietary. I think Mycroft can play a credible role in building a core set of AI services that are part of an open commons that companies can invest in. Think of this as the OpenStack of AI, if you will.

Hacking on Mycroft

So, it would be remiss of me not to share a few details of how the curious among you can get involved. Mycroft currently has three core projects:

  • The Adapt Intent Parser converts natural language into machine readable data structures.
  • Mimic takes in text and reads it out loud to create a high quality voice.
  • OpenSTT is aimed at creating an open source speech-to-text model that can be used by individuals and companies to allow for high-accuracy, low-latency conversion of speech into text.

You can also find the various projects here on GitHub and find a thriving user and developer community here.

Mycroft are also participating in the IBM Watson AI XPRIZE, where the goal is to create an artificial intelligence platform that interacts with people so naturally that when people speak to it they’ll be unable to tell if they’re talking to a machine or to a person. You can find out more about how Mycroft is participating here.

I know the team are very interested in attracting developers, docs writers, translators, advocates, and more to play a role across these different parts of the project. If this all sounds very exciting to you, be sure to get started by posting to the forum.

by Jono Bacon at April 14, 2016 05:01 AM

April 13, 2016

Jono Bacon

Going Large on Medium

I just wanted to share a quick note to let you know that I will be sharing future posts both here and on my Medium site.

I would love to hear what kind of content you would find interesting for me to share. Feel free to share in the comments!


by Jono Bacon at April 13, 2016 07:19 PM

April 12, 2016

Jono Bacon

Upcoming Speaking at Interop and Abstractions

I just wanted to share a couple of upcoming speaking engagements going on:

  • Interop in Las Vegas – 5th May 2016 – I will be participating in the keynote panel at Interop this year. The panel is called How Open-Source Changes the IT Equation and I am looking forward to participating with Colin McNamara, Greg Ferro, and Sean Roberts.
  • Abstractions in Pittsburgh – 18-20 Aug 2016 – I will be delivering one of the headlining talks at Abstractions. This looks like an exciting new conference and my first time in Pittsburgh. Looking forward to getting out there!

Some more speaking gigs are in the works. More details soon.

by Jono Bacon at April 12, 2016 03:30 PM

April 10, 2016

Jono Bacon

Community Leadership Summit 2016

On 14th – 15th May 2016 in Austin, Texas the Community Leadership Summit 2016 will be taking place. For the 8th year now, community leaders and managers from a range of different industries, professions, and backgrounds will meet together to share ideas and best practice. See our incredible registered attendee list that is shaping up for this year’s event.

This year we also have many incredible keynotes that will cover topics such as building developer communities, tackling imposter syndrome, gamification, governance, and more. Of course CLS will incorporate the popular unconference format where the audience determine the sessions in the schedule.

We are also delighted to host the FLOSS Community Metrics event as part of CLS this year too!

The event is entirely free and everyone is welcome! CLS takes place the weekend before OSCON in the same venue in Austin. Be sure to go and register to join us and we hope to see you in Austin in May!

Many thanks to O’Reilly, Autodesk, and the Linux Foundation for their sponsorship of the event!

by Jono Bacon at April 10, 2016 09:35 PM

April 05, 2016

Akkana Peck

Modifying a git repo so you can pull without a password

There's been a discussion in the GIMP community about setting up git repos to host contributed assets like scripts, plug-ins and brushes, to replace the long-stagnant GIMP Plug-in Repository. One of the suggestions involves having lots of tiny git repos rather than one that holds all the assets.

That got me to thinking about one annoyance I always have when setting up a new git repository on github: the repository is initially configured with an ssh URL, so I can push to it; but that means I can't pull from the repo without typing my ssh password (more accurately, the password to my ssh key).

Fortunately, there's a way to fix that: a git configuration can have one url for pulling source, and a different pushurl for pushing changes.

These are defined in the file .git/config inside each repository. So edit that file and take a look at the [remote "origin"] section.

For instance, for the GIMP source repositories hosted on GNOME's git server, instead of the default ssh:// url I can set

pushurl = ssh://
url = git://
(disclaimer: I'm not sure this is still correct; my gnome git access stopped working -- I think it was during the Heartbleed security fire drill, or one of those -- and never got fixed.)

For GitHub the syntax is a little different. When I initially set up a repository, the url comes out something like url = (sometimes the git@ part isn't included), and the password-free pull URL is something you can get from github's website. So you'll end up with something like this:

pushurl =
url =

Automating it

That's helpful, and I've made that change on all of my repos. But I just forked another repo on github, and as I went to edit .git/config I remembered what a pain this had been to do en masse on all my repos; and how it would be a much bigger pain to do it on a gazillion tiny GIMP asset repos if they end up going with that model and I ever want to help with the development. It's just the thing that should be scriptable.

However, the rules for what constitutes a valid git passwordless pull URL, and what constitutes a valid ssh writable URL, seem to encompass a lot of territory. So the quickie Python script I whipped up to modify .git/config doesn't claim to handle everything; it only handles the URLs I've encountered personally on Gnome and GitHub. Still, that should be useful if I ever have to add multiple repos at once. The script: repo-pullpush (yes, I know it's a terrible name) on GitHub.

April 05, 2016 06:28 PM

March 29, 2016

Elizabeth Krumbach

Tourist in Singapore

Time flies, and my recent trip to Singapore to speak at FOSSASIA snuck up on me. I wasn’t able to make time to research local attractions, so I found myself there the day before the conference with only one thing on my agenda: the Night Safari. MJ told me about it years ago when he visited Singapore, and he thought I’d enjoy it, given my love for animals and zoos.

I flew Singapore Air, frequently ranked the best airline in the world, and for good reason. Even in coach, the service is top notch and the food is edible, sometimes even good. My itinerary took me through Seoul on the way out, which felt the long way of doing things but my layover was short and I had a contiguous flight number, so passengers were mostly just shuffled through security and loaded onto the next plane. I seem to have cashed in all my travel karma this trip and ended up with an entire center row to myself, which meant I could lie down and get some sleep during the flights even though I was in coach. Heavenly! I arrived in Singapore at the bright and early time of 2AM and caught a taxi to my hotel. Thankfully I was able to get some sleep there too so I was ready for my jet lag adjustment day on Wednesday.

In the morning I met up with a colleague who was also in town for the conference. With neither of us having plans, I dragged him along with me as we bought tickets for the Night Safari that evening, including transport from a tour company that included priority boarding inside the park once we arrived. And then on to a touristy hop on/hop off bus to give us an overview of the city.

On the tourist bus!

The first thing I’ll say about Singapore: It’s hot and humid. I’m not built for this kind of weather. As much as I enjoyed my adventures, it was a struggle each day to keep up with my “I went to school in Georgia, this is fine!” colleague and to stay hydrated.

Then there’s their love for greenery. As a city-state there is a prevalence of what they refer to as the “concrete jungle” but they also seem keen on striking a balance. Many buildings have green gardens, and even full trees, on various balconies and roofs of their tall buildings. Even throughout areas of the city you could find larger green spaces than I’m accustomed to seeing, and bigger trees that they’ve clearly made an effort to make sure could still thrive. It was nice to see in a city.

The tourist bus took us through the heart of downtown where we were staying, then down to Chinatown, where we saw the Sri Mariamman Temple (which is actually a Hindu temple). The financial district was next, and then we left the bus for a while at the Gardens by the Bay. This was a huge complex. There were several outdoor gardens with various themes, which surround the main area that has a couple of indoor complexes as well as the outdoor tree-like structures that loom large; I got some great pictures of them.

We decided to go into the Cloud Forest, seeing as we were in town to speak about our work on cloud software. I was worried it would be even hotter inside, but it was amusing to discover that it was actually cooler, quite the welcome break for me. The massive dome structure enclosed what I would compare to the rain forest dome inside the California Academy of Sciences building in San Francisco, but much bigger and with a strong focus on flora rather than fauna. You enter the building at ground level and take the elevator to the top to walk down several stories through exhibits showing plant life of all sorts. It made for some nice views of the whole complex, and outside too.

After the dome, it was back out in the heat. We walked through some of the outdoor gardens before hopping on the tourist bus again. We took it through the Indian neighborhood, where we saw the Sri Veeramakaliamman Temple, and the Arab section, which included getting to see the beautiful Masjid Sultan (mosque), near where we had dinner later in the week at an Indian place that advertised being Halal.

By the time the bus got back to the stop near our hotel it was time for me to take a break before the Night Safari. We were picked up at 6PM and taken by van to meet the bus that took us up to the part of the island where the Night Safari is. The tour guide gave an interesting take on history and the social benefits of living in Singapore on our journey up. It did make me reflect upon the fact that while there was traffic, the congestion was nothing like I’d expect for a city of Singapore’s size. I hadn’t yet experienced the public transit, but as I’d learn later in my trip it was quite good for the southern parts of the island.

The Night Safari! First impression: tourist trap. But once you make your way past the crowds, shops and food places, and beyond the goofy welcome show that has various animals doing tricks, things get better. The adventure begins on a tram through the park. With the tour we didn’t have to wait in line, which, combined with the bus ride there, made the tour worth the extra fee. The tram takes you through various habitats from around the world where nocturnal animals dwell. Big cats, various types of deer, wolves and hippos were among the star attractions. I was delighted to finally get to see some tahrs, which the last Ubuntu LTS release was named after.

After the tram tour I was feeling pretty tired, heat and jet lag hitting me hard. But I decided to go on a couple of the walking trails anyway. It was worth it. The walking trails are by far the best part of the park! More animals, and you can take as much time as you want to see them. Exhaustion started hitting me when we completed half the trails, but I got to see fishing cats, otters, bats, a sleeping pangolin (another Ubuntu LTS animal!) and my favorite of the night, the binturong, otherwise known as a bear cat. I didn’t take any pictures of the animals, because night safari. By the end of our walking I was pretty tired and just wanted to get back to my bed, so we forwent the tour bus back to the hotel and just got a taxi.

Thursday evening the first conference events kicked off with a hot pot dinner, but prior to that we had more time for touristing. During our city tour the day before I saw the Mint – Museum of Toys. Casting away thoughts of Toy Story 2’s plot line of being sold to a Japanese toy museum, I was delighted to visit an actual toy museum. Sadly, their floor on Space and Sci-Fi toys was closed, but the rest of the museum mostly made up for it. The open parts of the museum had 5 floors of toy displays spanning about one hundred years. Most of the toys were cartoon-related, with Popeye, super heroes, various popular Anime and Disney characters all making a respectable showing. Some of the toys packed into displays had surprisingly high appraisals attached to them, and there were notes here and there about their rarity. I had a lot of fun!

After toys, we decided to find lunch. It turns out that a number of places aren’t open for lunch, so we wandered around for a bit until around noon, when we found ourselves in the Raffles Hotel courtyard in front of a menu that looked lovely for lunch. It was outdoors, so no escaping the heat, but the shade made things a bit more tolerable. It didn’t take long for us to eye the list of Slings on the cocktail menu and learn via a Google search that we were sitting where Singapore Slings were invented. How cool! Hydration took a back seat; I had to have a Singapore Sling where they were invented.

After lunch we continued our walk to the newly opened National Gallery. I had actually read about this one incidentally before arriving in Singapore, as it just opened in November and the opening was briefly covered in a travel magazine I read. This new gallery is housed in the historical former Supreme Court and City Hall buildings, and they didn’t do anything to hide this. Particularly in the Supreme Court building, it was very obvious that it was a courthouse, with what looked like original benches throughout and rooms that still looked like courtrooms, with big wooden chairs and (jury?) boxes. In all, they were amazing buildings. The contents within made it that much better: these were some of the most impressive galleries I’ve ever had the pleasure of walking through. Art spanned centuries and styles of southern Asian talent, as well as art from colonials. I do admit enjoying the older, more realistic art rather than the modern and abstract, but there was something for everyone there. I’ll definitely go again the next time I’m in Singapore.

The National Gallery visit concluded my tourist adventures. That evening we met up with fellow FOSSASIA speakers at a hot pot restaurant not far from our hotel. It was my first time having hot pot; collecting raw meats, vegetables and fish from a buffet and dumping them in various boiling pots with seasonings was an experience I’m glad I had, but the weather got me there too. Sitting over a boiling pot in the evening heat and humidity certainly took its toll on me. Later in the week I had the opposite culinary adventure when I ended up at Swensen’s, an ice cream chain that started in San Francisco. I’d never been to the one in San Francisco, but apparently they’ve been a big hit in south Asia. It was fascinating to be in a San Francisco-themed restaurant and order a Golden Gate Bridge sundae while sitting halfway around the world from my city by the bay. Maybe I should visit the one in San Francisco now.

More photos from my tour around Singapore here:

Two days isn’t nearly enough in Singapore. Even though I don’t shop (and shopping is BIG there!) I only got a small taste of what the city had to offer.

Next stop was on to the conference at the Singapore Science Centre, which was quite the inspired venue selection for an open source conference, especially one that attracted a number of younger attendees, but that’s a story for another day.

by pleia2 at March 29, 2016 02:31 AM

March 27, 2016

Elizabeth Krumbach

Wine and dine in Napa Valley

In 2008, when I was visiting MJ on my first trip to San Francisco, we had plans to go up to Napa Valley. Given the distance and crowds, the driver MJ hired for the day made an alternate suggestion: “How about Sonoma Valley instead?” That day was the beginning of us being Sonoma Valley fans. Tastings weren’t over-crowded, the wine was excellent, and at the time traffic was tolerable even coming back to the city. We visited a winery with a wine cave, where we’d get engaged three years later. Last year we joined a wine club, sealing our fate to visit regularly.

We never did make it to Napa, until a couple weeks ago.

For MJ’s birthday last year I promised him a meal at the most coveted restaurant in California, The French Laundry. I worked with a concierge to complete the herculean effort to get reservations, and then rescheduled a couple of times to work around our shifting travel schedules. Finally they were firmed up for Sunday, March 13th. The timing worked out, with all our travel lately we hadn’t seen much of each other, so it was a nice excuse to get out of town and spend the weekend together. We drove up Friday night and checked into the Harvest Inn, catching a late dinner at the lovely restaurant there, Harvest Table.

Dinner at Harvest Table

Saturday morning we began our wine trail. We didn’t have a lot of time to plan this trip, so we depended upon the recommendations of my recent house guest, George Mulak (and remotely, his wife Vicki), who supplied us with a list of their favorites. Their recommendations were spot on. Our first stop was Heitz Cellar, which was conveniently almost across the street from where we were staying. They have a relatively small tasting area, and sadly when we arrived the skies had opened up to give us piles of rain, so there was no enjoying the grounds. They did have a couple of things I really liked though. The first was a bit of a surprise: I don’t typically care for Zinfandels, but we bought a bottle of theirs; it was very good. Two bottles of their port also came home with us. Next on our list was one of several Rutherford <Noun> wineries, and we ended up at the wrong one, in what was a lovely mistake. We found ourselves at Rutherford Hill, a famous winery known for their Merlots, and I love Merlot. They also had wine caves and did tours! On the rainy day that it ended up being, a wine cave tour was a fantastic shelter from the weather. Our bartender and tour guide was super friendly and inviting, and there’s a reason they are world-renowned: their wines are wonderful. We even joined their club.

Drinking wine in the Rutherford Hills wine caves

For lunch we went to Rutherford Grill, which we quickly noticed looked a lot like one of our Silicon Valley favorites, Los Altos Grill, and San Francisco haunt Hillstone. Turns out they’re all related. The familiarity was a welcome surprise, and an enjoyable lunch.

Wine adventures continued in the same parking lot as the grill when we made our way across to Beaulieu Vineyard (BV). I think planning ahead would have served us better here; we just did the basic tasting, which was pretty run of the mill. A day with better weather and a planned historic wine tour would have been a better experience, maybe next time. From there we made our final stop of the day back near our hotel at Franciscan Estate Winery. We had a lovely time chatting with the Philadelphia native pouring our wines and did a couple of flights covering their range of types and qualities. A fine way to round out our afternoon. We picked up some snacks and water (time to hydrate!) at the lovely Dean & DeLuca shop (purveyors of fine food) and went back to the hotel to spend some time relaxing before dinner.

Final tasting of the day at Franciscan

In preparation for our exciting French Laundry reservation the following day, we booked late (9:45PM) dinner reservations at a related restaurant, Bouchon. Another French restaurant by Thomas Keller, the meal was delicious and the atmosphere was both fancy and casual, a lovely mix of how at home a really nice Napa Valley restaurant can make you feel. Highly recommended, and quite a bit easier to get reservations at than The French Laundry, though I still did need to plan a couple weeks ahead.

Appetizers at Bouchon

Sunday morning concluded our stay at the Harvest Inn. In spite of the rainy weekend, I did get to enjoy walking through their grounds a bit and appreciated the spacious room we had and the real wood fireplace. The location was great too, giving us a nice home base for the loop of wineries we visited. We’d stay here again. Check out was quick and then we were dressed up and on our way to the gem of our Napa adventure: Tasting menu lunch at The French Laundry!

In case I haven’t drilled this home enough, The French Laundry has been named the Best Restaurant in the World multiple times. Even when it’s not at the top, pretty much any top 10 list from the past decade will have it somewhere as well. Going here was a really big, once-in-a-lifetime kind of deal.

The rainy weekend continued as we were seated downstairs and settled in with a glass of champagne to start our meal. A half bottle of red wine later joined us mid-meal. What struck me first about the meal there was the environment. French restaurants I’ve been to are either very modern or very stuffy, neither of which I’m a huge fan of. The French Laundry was a lovely mix of the two, much like Bouchon of the previous night, it seemed to reflect its home in Napa Valley. The restaurant was truly laundry themed in a very classy way, with a clothes pin as their logo and the lamps on the walls tastefully boasting clothes laundry symbols. The staff was professional, charming and witty. The food was spectacular, quickly making it into one of the top three meals I’ve ever had. The meal took about three hours, with small plates coming at a nice pace to keep us satisfied but also relaxed so we could enjoy the time there. I was definitely full at the end, especially after the stream of beautiful and delicious desserts that filled our table at the end. At the conclusion of the meal we were given a copy of the menu and gifted the wooden clothes pins that were at our table upon arrival. In all, it was an exceptional experience.

Meal at The French Laundry

With some time on our hands following our long lunch at The French Laundry we decided to add one more winery to our itinerary before driving home, Hagafen Cellars. Their wines are Kosher, even for Passover, which makes them great for us during that no-bread time and a star at the White House during major Jewish and Israeli-focused events. Best of all, their wines are wonderful. Having not grown up Jewish, I was not aware of the disappointment found with the standard Manischewitz wine until a couple years ago, so it was refreshing to learn we have other options during Passover! We were pretty close to joining their wine club, but in the end preferred making our own selections, and with a trunk full of wine we figured we’d had enough for now.

Final stop, Hagafen Cellars

With that, our fairy tale weekend together in Napa Valley came to a close. MJ flew out to Seattle that night for work. My trip to Singapore had me leaving the next morning.

More photos from our weekend in Napa Valley here:

by pleia2 at March 27, 2016 06:26 PM

March 26, 2016

Akkana Peck

Debian: Holding packages you build from source, and rebuilding them easily

Recently I wrote about building the Debian hexchat package to correct a key binding bug.

I built my own version of the hexchat packages, then installed the ones I needed:

dpkg -i hexchat_2.10.2-1_i386.deb hexchat-common_2.10.2-1_all.deb hexchat-python_2.10.2-1_i386.deb hexchat-perl_2.10.2-1_i386.deb

That's fine, but of course, a few days later Debian had an update to the hexchat package that wiped out my changes.

The solution to that is to hold the packages so they won't be overwritten on the next apt-get upgrade:

aptitude hold hexchat hexchat-common hexchat-perl hexchat-python

If you forget which packages you've held, you can find out with aptitude:

aptitude search '~ahold'

Simplifying the rebuilding process

But now I wanted an easier way to build the package. I didn't want to have to search for my old blog post and paste the lines one by one every time there was an update -- then I'd get lazy and never update the package, and I'd never get security fixes.

I solved that with a zsh function:

newhexchat() {
    # Can't set errreturn yet, because that will cause mv and rm
    # (even with -f) to exit if there's nothing to remove.
    cd ~/outsrc/hexchat
    echo "Removing what was in old previously"
    rm -rf old
    echo "Moving everything here to old/"
    mkdir old
    mv *.* old/

    # Make sure this exits on errors from here on!
    setopt localoptions errreturn

    echo "Getting source ..."
    apt-get source hexchat
    cd hexchat-2*
    echo "Patching ..."
    patch -p0 < ~/outsrc/hexchat-2.10.2.patch
    echo "Building ..."
    debuild -b -uc -us
    echo 'Installing' ../hexchat{,-python,-perl}_2*.deb
    sudo dpkg -i ../hexchat{,-python,-perl}_2*.deb
}

Now I can type newhexchat and pull a new version of the source, build it, and install the new packages.

How do you know if you need to rebuild?

One more thing. How can I find out when there's a new version of hexchat, so I know I need to build new source in case there's a security fix?

One way is the Debian Package Tracking System. You can subscribe to a package and get emails when a new version is released. There's supposed to be a package tracker web interface, e.g. package tracker: hexchat with a form you can fill out to subscribe to updates -- but for some packages, including hexchat, there's no form. Clicking on the link for the new package tracker goes to a similar page that also doesn't have a form.

So I guess the only option is to subscribe by email. Send mail to containing this line:

subscribe hexchat [your-email-address]

You'll get a reply asking for confirmation.

This may turn out to generate too much mail: I've only just subscribed, so I don't know yet. There are supposedly keywords you can use to limit the subscription, such as upload-binary and upload-source, but the instructions aren't at all clear on how to include them in your subscription mail -- they say keyword, or keyword your-email, but where do you put the actual keywords you want to accept? They offer no examples.

Use apt to check whether your version is current

If you can't get the email interface to work or suspect it'll be too much email, you can use apt to check whether the current version in the repository is higher than the one you're running:

apt-cache policy hexchat

You might want to automate that, to make it easy to check on every package you've held to see if there's a new version. Here's a little shell function to do that:

# Check on status of all held packages:
check_holds() {
    for pkg in $( aptitude search '~ahold' | awk '{print $2}' ); do
        policy=$(apt-cache policy $pkg)
        installed=$(echo "$policy" | grep Installed: | awk '{print $2}')
        candidate=$(echo "$policy" | grep Candidate: | awk '{print $2}')
        if [[ "$installed" == "$candidate" ]]; then
            echo $pkg : nothing new
        else
            echo $pkg : new version $candidate available
        fi
    done
}

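For anyone who would rather do this from Python, the same check can be sketched with subprocess. This is a hedged sketch, not part of the original post: the parsing just looks for the Installed: and Candidate: lines, and the sample text at the end is illustrative, not real apt-cache output.

```python
import subprocess

def parse_policy(policy_text):
    """Extract the Installed: and Candidate: versions from apt-cache policy output."""
    versions = {}
    for line in policy_text.splitlines():
        line = line.strip()
        for key in ("Installed:", "Candidate:"):
            if line.startswith(key):
                versions[key.rstrip(":").lower()] = line.split()[1]
    return versions

def check_hold(pkg):
    """Return True if the installed version already matches the repository candidate."""
    out = subprocess.run(["apt-cache", "policy", pkg],
                         capture_output=True, text=True).stdout
    v = parse_policy(out)
    return v.get("installed") == v.get("candidate")

# The format this expects (sample text for illustration, not real output):
sample = """hexchat:
  Installed: 2.10.2-1
  Candidate: 2.10.2-2
"""
print(parse_policy(sample))
```

Note that a plain string comparison only detects that the versions differ; deciding which one is newer is what dpkg --compare-versions is for.
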
March 26, 2016 05:11 PM

Elizabeth Krumbach

Six years in San Francisco

February 2016 marked six years of me living here in San Francisco. It’s hard to believe that much time has passed, but at the same time I feel so at home in my latest adopted city. I sometimes find myself struggling to remember what it was like to live in the suburbs, drive every day, and not be able to just walk to the dentist or take in the beautiful sights along the Embarcadero as I go for a run. I’ve grown accustomed to the weather and seasons (or lack thereof), and barely think twice when making plans. Of course the weather will be beautiful!

I love you, California, I adore spending my time on The Dock of the Bay.

Our travel schedules this year have been a bit crazy though. I just returned from my second overseas conference of the year on Monday and MJ has been spending almost half his time traveling for work. We’ve tried to plan things so that we’re not out of town at the same time, but haven’t always succeeded. Plus, being out of town at the same time is great for the cats and our need for a pet sitter, but it’s less great for getting time together. We ended up celebrating Valentine’s Day a day early, on February 13th, in order to work around these schedules and MJ’s plan to leave for a trip on Sunday.

It was a fabulous Valentine’s Day dinner though. We went to Jardinière over in Hayes Valley and both ordered the tasting menu, and I went with the wine pairing since I didn’t have a flight to catch the next day. Everything was exceptional, from the sea urchin to the beautifully prepared, marbled steak that melts in your mouth. I hope we can make it back at some point.

With MJ out of town I’ve had to fight the temptation to slip into workaholic mode. I definitely have a lot of work to do, especially as my for-real-this-time book deadline approaches. But I’ve grown appreciative of the need to take a break, and how it untangles the mind to be fresh again the next day and more effective at solving problems. On Presidents’ Day I treated myself to an afternoon at the zoo.

More photos from the zoo here:

I’ve also gotten to make time to spend with friends here and there, recently making it out to the cinema with a friend to see the Oscar Nominated Animation Shorts. I grew to appreciate these shorts years ago after learning my beloved Wallace & Gromit films had been nominated and won in the past, but it had been some time since I’d gone to a theater to enjoy them.

While MJ has been in town, I’ve reflected on my six years here in the city and realized there were still things I’ve wanted to do in the area but haven’t had the opportunity to, so I’ve been slowly checking them off my list. Even small changes to accommodate new things have been worth it. One afternoon we took a slight detour from going to the Beach Chalet and instead went downstairs to the Park Chalet where we had never been before.

High Tide Hefeweizen at the Park Chalet

While on the topic of food, we also finally made it over to Zachary’s Chicago Pizza in Oakland, near the Rockridge BART station. I’m definitely a New York pizza girl, but I hear so many good things about Zachary’s every time I moan about the state of California pizza. We went around 2:30 on a Saturday afternoon and were seated immediately. Eating there is a bit of an event: you order and then wait a half hour for your giant wall of deep dish pizza to cook. I had the Spinach & Mushroom. The toppings and cheese are buried inside the pizza, with the sauce covering the top. It was really good, even if I could barely finish two pieces (leftovers!).

After Zachary’s I had planned to take BART up to downtown Berkeley to hit up a comic book store, since the one I used to go to here in San Francisco has closed due to increasing rent. I was delighted to learn that there was a comic book store within walking distance of where we already were. That’s how I was introduced to The Escapist in Berkeley, just over the Oakland/Berkeley border. I picked up most of the backlog of comics I was looking for, and then hit up Dark Carnival next door, a great Sci-Fi and Fantasy book store that I’d been to in the past. I’ll be returning to both stores in the near future.

And now it’s time to take an aforementioned break. Saturday off, here I come!

by pleia2 at March 26, 2016 02:32 AM

March 25, 2016

Jono Bacon

Suggestions for Donating a Speaker fee

In August I am speaking at Abstractions and the conference organizers very kindly offered to provide a speaker fee.

Thing is, I have a job and so I don’t need the fee as much as some other folks in the world. As such, I would like to donate the speaker fee to an open source / free software / social good organization and would love suggestions in the comments.

I probably won’t donate to the Free Software Foundation, EFF, or Software Freedom Conservancy as I have already financially contributed to them this year.

Let me know your suggestions in the comments!

by Jono Bacon at March 25, 2016 04:30 PM

March 17, 2016

Akkana Peck

Changing X brightness and gamma with xrandr

I switched a few weeks ago from unstable ("Sid") to testing ("Stretch") in the hope that my system, particularly X, would break less often. The very next day, I updated and discovered I couldn't use my system at night any more, because the program I use to reduce the screen brightness by tweaking X gamma no longer worked. Neither did other related programs, such as xgamma and xcalib.

The Dell monitor I use doesn't have reasonable hardware brightness controls: strangely, the brightness button works when the monitor is connected over VGA, but if I want to use the sharper HDMI connection, brightness adjustment no longer works. So I depend on software brightness adjustment in order to use my computer at night when the room is dim.

Fortunately, it turns out there's a workaround. xrandr has options for both brightness and gamma:

xrandr --output HDMI1 --brightness .5
xrandr --output HDMI1 --gamma .5:.5:.5

I've always put xbrightness on a key, so I can use a function key to adjust brightness interactively up and down according to conditions. So a command that sets brightness to .5 or .8 isn't what I need; I need to get the current brightness and set it a little brighter or a little dimmer. xrandr doesn't offer that, so I needed to script it.

You can get the current brightness with

xrandr --verbose | grep -i brightness

But I was hoping there would be a more straightforward way to get brightness from a program. I looked into Python bindings for xrandr; there are some, but with no documentation and no examples. After an hour of fiddling around, I concluded that I could waste the rest of the day poring through the source code and trying things hoping something would work; or I could spend fifteen minutes using subprocess to wrap the command-line xrandr.

So subprocesses it was. It made for a nice short script, much simpler than the old xbrightness C program that used <X11/extensions/xf86vmode.h> and XF86VidModeGetGammaRampSize(): xbright on github.
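
That subprocess wrapper can be sketched roughly like this. This is a hedged sketch, not the actual xbright script from GitHub: the HDMI1 output name and the 0.1 step size are assumptions you would adjust for your own setup, and the regex matches the "Brightness: 0.5" lines that xrandr --verbose prints.

```python
import re
import subprocess

OUTPUT = "HDMI1"   # assumption: change to your output name (see `xrandr --query`)
STEP = 0.1         # assumption: how much each keypress changes brightness

def parse_brightness(verbose_text):
    """Pull the current Brightness: value out of `xrandr --verbose` output."""
    m = re.search(r"Brightness:\s*([\d.]+)", verbose_text)
    return float(m.group(1)) if m else 1.0

def set_brightness(value):
    """Clamp to a usable range, then hand the value to xrandr."""
    value = min(max(value, 0.1), 1.0)
    subprocess.run(["xrandr", "--output", OUTPUT, "--brightness", "%.2f" % value])

def bump(direction="down"):
    """Read the current brightness and step it up or down by STEP."""
    out = subprocess.run(["xrandr", "--verbose"],
                         capture_output=True, text=True).stdout
    delta = STEP if direction == "up" else -STEP
    set_brightness(parse_brightness(out) + delta)
```

If this were saved as, say, xbright.py (the filename is just for illustration), a window manager key binding could run something like python3 -c 'from xbright import bump; bump("up")' to get the interactive up/down adjustment the old C program provided.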

March 17, 2016 05:01 PM

March 09, 2016

Akkana Peck

Juniper allergy season

It's spring, and that means it's the windy season in New Mexico -- and juniper allergy season.

When we were house-hunting here, talking to our realtor about things like local weather, she mentioned that spring tended to be windy and a lot of people got allergic. I shrugged it off -- oh, sure, people get allergic in spring in California too. Little did I know.

A month or two after we moved, I experienced the worst allergies of my life. (Just to be clear, by allergies I mean hay fever, sneezing, itchy eyes ... not anaphylaxis or anything life threatening, just misery and a morbid fear of ever opening a window no matter how nice the temperature outside might be.)

[Female (left) and male junipers in spring]
I was out checking the mail one morning, sneezing nonstop, when a couple of locals passed by on their morning walk. I introduced myself and we chatted a bit. They noticed my sneezing. "It's the junipers," they explained. "See how a lot of them are orange now? Those are the males, and that's the pollen."

I had read that juniper plants were either male or female, unlike most plants which have both male and female parts on every plant. I had never thought of junipers as something that could cause allergies -- they're a common ornamental plant in California, and also commonly encountered on trails throughout the southwest -- nor had I noticed the recent color change of half the junipers in our neighborhood.

But once it's pointed out, the color difference is striking. These two trees, growing right next to each other, are the same color most of the year, and it's hard to tell which is male and which is female. But in spring, suddenly one turns orange while the other remains its usual bright green. (The other season when it's easy to tell the difference is late fall, when the female will be covered with berries.)

Close up, the difference is even more striking. The male is dense with tiny orange pollen-laden cones.

[Female juniper closeup] [male juniper closeup showing pollen cones]

A few weeks after learning the source of my allergies, I happened to be looking out the window on a typically windy spring day when I saw an alarming sight -- it looked like the yard was on fire! There were dense clouds of smoke billowing up out of the trees. I grabbed binoculars and discovered that what looked like fire smoke was actually clouds of pollen blowing from a few junipers. Since then I've gotten used to seeing juniper "smoke" blowing through the canyons on windy spring days. Touching a juniper that's ready to go will produce similar clouds.

The good news is that there are treatments for juniper allergies. Flonase helps a lot, and a lot of people have told me that allergy shots are effective. My first spring here was a bit miserable, but I'm doing much better now, and can appreciate the fascinating biology of junipers and the amazing spectacle of the smoking junipers (not to mention the nice spring temperatures) without having to hide inside with the windows shut.

March 09, 2016 03:20 AM

March 08, 2016


Mir and Vulkan Demo

This week the Mir team got a Vulkan demo working on Mir! (youtube link to demo)

I’ve been working on replumbing mir’s internals a bit to give more fine grained control over buffers, and my tech lead Cemil has been working on hooking that API into the Vulkan/Mir WSI.

The tl;dr on Vulkan is that it’s a recently finalized hardware-accelerated graphics API from Khronos (who also provide the OpenGL APIs). It doesn’t supplant OpenGL, but can give better performance (especially in multithreaded environments) and better debugging in exchange for more explicit control of the GPU.

Some links:
Khronos Vulkan page

Wikipedia Vulkan entry

short video from Intel at SIGGRAPH with a quick explanation

longer video from NVIDIA at SIGGRAPH on Vulkan


If you’re wondering when this will appear in a repository near you, probably right after the Ubuntu Y series opens up (we’re in a feature freeze for xenial/16.04 LTS at the moment).

by Kevin at March 08, 2016 07:31 PM

March 06, 2016

Elizabeth Krumbach

Xubuntu 16.04 ISO testing tips

As we get closer to the 16.04 LTS release, it’s becoming increasingly important for people to be testing the daily ISOs to catch any problems. This past week we had the landing of GNOME Software to replace the Ubuntu Software Center, and this will definitely need folks looking at it and reporting bugs (current ones are tracked here).

In light of this, I thought I’d quickly share a few of my own tips and stumbling points. My focus is typically on Xubuntu testing, but things I talk about are applicable to Ubuntu too.

ISO testing on a rainy day

1. Downloading the ISO

Downloading an ISO every day, or even once a week, can be tedious. Helpfully, the team provides the images via zsync, which will only download the differences in the ISO between days, saving you a lot of time and bandwidth. Always use this option when you’re downloading ISOs; you can even use it the first time you download one, as it will notice that none exists.

The zsync URL is right alongside all the others when you choose “Link to the download information” in the ISO tracker:

You then use a terminal to cd into the directory where you want the ISO to be (or where it already is) and copy the zsync line into the terminal and hit enter. It will begin by examining the current ISO and then give you a progress bar for what it needs to download.

2. Putting the image on a USB stick

I have struggled with this for several releases. At first I was using UNetbootin (unetbootin), then usb-creator (usb-creator-gtk). Then I’d switch off between the two per release when one or the other wasn’t behaving properly. What a mess! How can we expect people to test if they can’t even get the ISO on a USB stick with simple instructions?

The other day flocculant, the Xubuntu QA Lead, clued me into using GNOME Disks to put an ISO on a USB stick for testing. You pop in the USB stick, launch gnome-disks (you’ll need to install the gnome-disk-utility package in Xubuntu), select your USB stick in the list on the left and choose the “Restore Disk Image…” option in the top right to select the ISO image you want to use:

I thought about doing a quick screencast of it, but Paul W. Frields over at Fedora Magazine beat me to it by more than a year: How to make a Live USB stick using GNOME Disks

This has worked beautifully with both the Xubuntu and Ubuntu ISOs.

3. Reporting bugs

The ISO tracker, where you report testing results, is easy enough to log into, but a fair number of people quit the testing process when it gets to actually reporting bugs. How do I report bugs? What package do I report them against? What if I do it wrong?

I’ve been doing ISO testing for several years, and have even run multiple events with a focus on ISO testing, and STILL struggle with this.

How did I get over it?

First, I know it’s a really long page, but this will get you familiar with the basics of reporting a bug using the ubuntu-bug tool: Ubuntu ReportingBugs

Oftentimes being familiar with the basic tooling isn’t enough. It’s pretty common to run into a bug that’s manifesting in the desktop environment rather than in a specific application. A wallpaper is gone, a theme looks wrong, you’re struggling to log in. Where do those get submitted? And is this bad enough for me to classify it as “Critical” in the ISO tracker? This is when I ask. For Xubuntu I ask in #xubuntu-devel and for Ubuntu I ask in #ubuntu-quality. Note: people don’t hover over their keyboards on IRC; explain what you’re doing, ask your question and be patient.

This isn’t just for bugs; we want to see more people testing, and it’s great when new testers come into our IRC channels to share their experiences and where they’re getting stuck. You’re part of our community :)

Simcoe thinks USB sticks are cat toys


I hope you’ll join us.

by pleia2 at March 06, 2016 05:54 PM

March 04, 2016

Akkana Peck

Recipe: Easy beef (or whatever) jerky

You don't need a special smoker or dehydrator to make great beef jerky.

Winter is the time to make beef jerky -- hopefully enough to last all summer, because in summer we try to avoid using the oven, cooking everything outside so as not to heat up the house. In winter, having the oven on for five hours is a good thing.

It took some tuning to get the flavor and the amount of saltiness right, but I'm happy with my recipe now.

Beef jerky


  • thinly sliced beef or pork: about a pound or two
  • 1-1/2 cups water
  • 1/4 cup soy sauce
  • 3/4 tbsp salt
  • Any additional seasonings you desire: pepper, chile powder, sage, ginger, sugar, etc.


Heat water slightly (30-40 sec in microwave) to help dissolve salt. Mix all ingredients except beef.

Cut meat into small pieces, trimming fat as much as possible.

Marinate in warm salt solution for 15 min, stirring occasionally. (For pork, you might want a shorter marinating time. I haven't tried other meats.)

Set the oven on its lowest temperature (170F here).

Lay out beef on a rack, with pieces not touching or overlapping.
Nobody seems to sell actual cooking racks, but you can buy "cooling racks" for cooling cookies, which seem to work fine for jerky. They're small so you probably need two racks for a pound of beef.

Ideally, put the rack on one oven shelf with a layer of foil on the rack below to catch the drips.
You want as much air space as possible under the meat. You can put the rack on a cookie sheet, but it'll take longer to cook and you'll have to turn the meat halfway through. Don't lay the beef directly on cookie sheet or foil unless you absolutely can't find a rack.

Cook until sufficiently dry and getting hard, about 4 to 4-1/2 hours at 170F depending on how dry you like your jerky. Drier jerky will keep longer unrefrigerated, but it's not as tasty. I cook mine a little less and store it in the fridge when I'm not actually carrying it hiking or traveling.

If you're using a cookie sheet, turn the pieces once at around 2-3 hours when the tops start to look dry and dark.

Tip: if you're using a rack without a cookie sheet, a fork wedged between the bars of the rack makes it easy to remove a rack from the oven.

March 04, 2016 07:24 PM

March 01, 2016

Elizabeth Krumbach

OpenStack infra-cloud sprint

Last week at the HPE offices in Fort Collins, members of the OpenStack Infrastructure team focused on getting an infra-cloud into production met from Monday through Thursday.

The infra-cloud is an important project for our team, so important that it has a Mission!

The infra-cloud’s mission is to turn donated raw hardware resources into expanded capacity for the OpenStack infrastructure nodepool.

This means that in addition to the companies who Contribute Cloud Test Resources in the form of OpenStack instances, we’ll be running our own OpenStack-driven cloud that will provide additional instances to our pool of servers we run tests on. We’re using the OpenStack Puppet Modules (since the rest of our infra uses Puppet) and bifrost, which is a series of Ansible playbooks that use Ironic to automate the task of deploying a base image onto a set of known hardware.

Our target for infra-cloud was a few racks of HPE hardware provided to the team by HPE that resides in a couple HPE data centers. When the idea for a sprint came together, we thought it might be nice to have the sprint itself hosted at an HPE site where we could meet some of the folks who handle servers. That’s how we ended up in Fort Collins, at an HPE office that had hosted several mid-cycle and sprint events for OpenStack in the past.

Our event kicked off with an overview by Colleen Murphy of work that’s been done to date. The infra-cloud team that Colleen is part of has been architecting and deploying the infra-cloud over the past several months with an eye toward formalizing the process and landing it in our git repositories. Part of the aim of this sprint was to get everyone on the broader OpenStack Infrastructure team up to speed with how everything works so that the infra cores could intelligently review and provide feedback on the patches being deployed. Colleen’s slides (available here) also gave us an overview of the baremetal workflow with bifrost, the characteristics of the controller and compute nodes, networking (and differences found between the US East and US West regions) and her strategy for deploying locally for a development environment (GitHub repo here). She also spent time getting us up to speed with the HPE iLO management interfaces that we’ll have to use if we’re having trouble with provisioning.

This introduction took up our morning. After lunch it was time to talk about our plan for the rest of our time together. We discussed the version of OpenStack we wanted to focus on and broadly how and if we planned to do upgrades, along with goals of this project. Of great importance was also that we built something that could be redeployed if we changed something, we don’t want this infrastructure to bit rot and cause a major hassle if we need to rebuild the cloud for some reason. We then went through the architecture section of the infra-cloud documentation to confirm that the assumptions there continued to be accurate, and made notes accordingly on our etherpad when they were not.

Our discussion then shifted into broad goals for our week, out came the whiteboard! It was decided that we’d focus on getting all the patches landed to support US West so that by the end of the sprint we’d have at least one working cloud. It was during this discussion that we learned how valuable hosting our sprint at an HPE facility was. An attendee at our sprint, Phil Jensen, works in the Fort Collins data center and updated us on the plans for moving systems out of US West. The timeline that he was aware of was considerably closer than we’d been planning on. A call was scheduled for Thursday to sort out those details, and we’re thankful we did since it turns out we had to effectively be ready to shut down the systems by the end of our sprint.

Goals continued for various sub-tasks in what coalesced in the main goal of the sprint: Get a region added to Nodepool so I could run a test on it.

Tuesday morning we began tackling our tasks, and at 11:30 Phil came by to give us a tour of the local data center there in Fort Collins. Now, if we’re honest, there was no technical reason for this tour. All the systems engineers on our team have been in data centers before, most of us have even worked in them. But there’s a reason we got into this: we like computers. Even if we mostly interact with clouds these days, a tour through a data center is always a lot of fun for us. Plus it got us out of the conference room for a half hour, so it was a nice pause in our day. Huge thanks to Phil for showing us around.

The data center also had one of the server types we’re using in infra-cloud, the HP SL390. While we didn’t get to see the exact servers we’re using, it was fun to get to see the size and form factor of the servers in person.

Spencer Krum checks out a rack of HP SL390s

Tuesday was spent heads down, landing patches. People moved around the room as we huddled in groups, and there was some collaborative debugging on the projector as we learned more about the deployment, learned a whole lot more about OpenStack itself and worked through some unfortunate issues with Puppet and Ansible.

Not so much glamour, sprints are mostly spent working on our laptops

Wednesday was the big day for us. The morning was spent landing more patches and in the afternoon we added our cloud to the list of clouds in Nodepool. We then eagerly hovered over the Jenkins dashboard and waited for a job to need a trusty node to run a test…

Slave ubuntu-trusty-infracloud-west-8281553 Building swift-coverage-bindep #2

The test ran! And completed successfully! Colleen grabbed a couple screenshots.

We watch on Clark Boylan’s laptop as the test runs

Alas, it was not all roses. Our cloud struggled to obey the deletion command and the test itself ran considerably slower than we would have expected. We spent some quality time looking at disk configurations and settings together to see if we could track down the issue and do some tuning. We still have more work to do here to get everything running well on this hardware once it has moved to the new facility.

Thursday we spent some time getting US East patches to land before the data center moves. We also had a call mid-day to firm up the timing of the move. Our timing for the sprint ended up working out well for the move schedule, we were able to complete a considerable amount of work at the sprint before the machines had to be shut down. The call was also valuable in getting to chat with some of the key parties involved and learn what we needed to hand off to them with regard to our requirements for the new home the servers will have, in an HPE POD (cool!) in Houston. This allowed us to kick off a Network Requirements for Infracloud Relocation Deployment thread and Cody A.W. Somerville captured notes from the rest of the conversation here.

The day concluded with a chat about how the sprint went. The feedback was pretty positive, we all got a lot of work done, Spencer summarized our feedback on list here.

Personally, I liked that the HPE campus in Fort Collins has wild rabbits. Also, it snowed a little and I like snow.

I could have done without the geese.

It was also enjoyable to visit downtown Fort Collins in the evenings and meet up with some of the OpenStack locals. Plus, at Coopersmith’s I got a beer with a hop pillow on top. I love hops.

More photos from the week here:

David F. Flanders also Tweeted some photos:

by pleia2 at March 01, 2016 02:01 AM

February 27, 2016

Akkana Peck

Learning to Weld

I'm learning to weld metal junk into art!

I've wanted to learn to weld since I was a teen-ager at an LAAS star party, lusting after somebody's beautiful homebuilt 10" telescope on a compact metal fork mount. But building something like that was utterly out of reach for a high school kid. (This was before John Dobson showed the world how to build excellent alt-azimuth mounts out of wood and cheap materials ... or at least before Dobsonians made it to my corner of LA.)

Later the welding bug cropped up again as I worked on modified suspension designs for my X1/9 autocross car, or fiddled with bicycles, or built telescopes. But it still seemed out of reach, too expensive and I had no idea how to get started, so I always found some other way of doing what I needed.

But recently I had the good fortune to hook up with Los Alamos's two excellent metal sculptors, David Trujillo and Richard Swenson. Mr. Trujillo was kind enough to offer to mentor me and let me use his equipment to learn to make sculptures like his. (Richard has also given me some pointers.)

[My first metal art piece] MIG welding is both easier and harder than I expected. David Trujillo showed me the basics and got me going welding a little face out of a gear and chain on my very first day. What a fun start!

In a lot of ways, MIG welding is actually easier than soldering. For one thing, you don't need three or four hands to hold everything together while also holding the iron and the solder. On the other hand, the craft of getting a good weld is something that's going to require a lot more practice.

Setting up a home workshop

I knew I wanted my own welder, so I could work at home on my own schedule without needing to pester my long-suffering mentors. I bought a MIG welder and a bottle of gas (and, of course, safety equipment like a helmet, leather apron and gloves), plus a small welding table. But then I found that was only the beginning.

[Metal art: Spoon cobra] Before you can weld a piece of steel you have to clean it. Rust, dirt, paint, oil and anti-rust coatings all get in the way of making a good weld. David and Richard use a sandblasting cabinet, but that requires a big air compressor, making it as big an investment as the welder itself.

At first I thought I could make do with a wire brush wheel on a drill. But it turned out to be remarkably difficult to hold the drill firmly enough while brushing a piece of steel -- that works for small areas but not for cleaning a large piece or for removing a thick coating of rust or paint.

A bench grinder worked much better, with a wire brush wheel on one side for easy cleaning jobs and a regular grinding stone on the other side for grinding off thick coats of paint or rust. The first bench grinder I bought at Harbor Freight had a crazy amount of vibration that made it unusable, and their wire brush wheel didn't center properly and added to the wobble problem. I returned both, and bought a Ryobi from Home Depot and a better wire brush wheel from the local Metzger's Hardware. The Ryobi has a lot of vibration too, but not so much that I can't use it, and it does a great job of getting rust and paint off.

[Metal art: grease-gun goony bird] Then I had to find a place to put the equipment. I tried a couple of different spots before finally settling on the garage. Pro tip: welding on a south-facing patio doesn't work: sunlight glints off the metal and makes the auto-darkening helmet flash frenetically, and any breeze from the south disrupts everything. And it's hard to get motivated to go outside and weld when it's snowing. The garage is working well, though it's a little cramped and I have to move the Miata out whenever I want to weld if I don't want to risk my baby's nice paint job to welding fumes. I can live with that for now.

All told, it was over a month after I bought the welder before I could make any progress on welding. But I'm having fun now. Finding good junk to use as raw materials is turning out to be challenging, but with the junk I've collected so far I've made some pieces I'm pretty happy with, I'm learning, and my welds are getting better all the time.

Earlier this week I made a goony bird out of a grease gun. Yesterday I picked up some chairs, a lawnmower and an old exercise bike from a friend, and just came in from disassembling them. I think I see some roadrunner, cow, and triceratops parts in there.

Photos of everything I've made so far: Metal art.

February 27, 2016 09:02 PM

February 25, 2016

Akkana Peck

Migrating from xchat: a couple of hexchat fixes

I decided recently to clean up my Debian "Sid" system, using apt-get autoclean, apt-get purge `deborphan`, aptitude purge ~c, and aptitude purge ~o. It gained me almost two gigabytes of space. On the other hand, it deleted several packages I had long depended on. One of them was xchat.

I installed hexchat, the fully open replacement for xchat. Mostly, it's the same program ... but a few things didn't work right.

Script fixes

The two xchat scripts I use weren't loading. It turns out hexchat wants to find its scripts in .config/hexchat/addons, so I moved them there. But they still didn't work: the script was looking for a widget called "xchat-inputbox". That was fairly easy to patch: I added a line to print the name of each widget it saw, determined the name had changed in the obvious way, and changed

    if( $child->get( "name" ) eq 'xchat-inputbox' ) {

to

    if( $child->get( "name" ) eq 'xchat-inputbox' ||
        $child->get( "name" ) eq 'hexchat-inputbox' ) {

That solved the problem.

Notifying me if someone calls me

The next problem: when someone mentioned my nick in a channel, the channel tab highlighted; but when I switched to the channel, there was no highlight on the actual line of conversation so I could find out who was talking to me. (It was turning the nick of the person addressing me to a specific color, but since every nick is a different color anyway, that doesn't make the line stand out when you're scanning for it.)

The highlighting for message lines is set in a dialog you can configure: Settings→Text events...
Scroll down to Channel Msg Hilight and click on that elaborate code on the right: %C2<%C8%B$1%B%C2>%O$t$2%O
That's the code that controls how the line will be displayed.

Some of these codes are described in Hexchat: Appearance/Theming, and most of the rest are described in the dialog itself. $t is an exception: I'm not sure what it means (maybe I just missed it in the list).

I wanted hexchat to show the nick of whoever called my name in inverse video. (Xchat always made it bold, but sometimes that's subtle; inverse video would be a lot easier to find when scrolling through a busy channel.) %R is reverse video, %B is bold, and %O removes any decorations and sets the text back to normal, so I set the code to: %R%B<$1>%O $t$2 That seemed to work, though after I exited hexchat and started it up the next morning it had magically changed to %R%B<$1>%O$t$2%O.

Hacking hexchat source to remove hardwired keybindings

But the big problem was the hardwired keybindings. In particular, Ctrl-F -- the longstanding key sequence that moves forward one character -- brings up a search window in hexchat. (Xchat had this problem for a little while, many years ago, but they fixed it, or at least made it sensitive to whether the GTK key theme is "Emacs".)

Ctrl-F doesn't appear in the list under Settings→Keyboard shortcuts, so I couldn't fix it that way. I guess they should rename that dialog to Some keyboard shortcuts. Turns out Ctrl-F is compiled in. So the only solution is to rebuild from source.

I decided to use the Debian package source:

apt-get source hexchat

The search for the Ctrl-F binding turned out to be harder than it had been back in the xchat days. I was confident the binding would be in one of the files in src/fe-gtk, but grepping for key, find and search all gave way too many hits. Combining them was the key:

egrep -i 'find|search' *.c | grep -i key

That gave a bunch of spurious hits in fkeys.c -- I had already examined that file and determined that it had to do with the Settings→Keyboard shortcuts dialog, not the compiled-in key bindings. But it also gave some lines from menu.c including the one I needed:

    {N_("Search Text..."), menu_search, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_f},

Inspection of nearby lines showed that the last GDK_KEY_ argument is optional -- there were quite a few lines that didn't have a key binding specified. So all I needed to do was remove that GDK_KEY_f. Here's my patch:

--- src/fe-gtk/menu.c.orig      2016-02-23 12:13:55.910549105 -0700
+++ src/fe-gtk/menu.c   2016-02-23 12:07:21.670540110 -0700
@@ -1829,7 +1829,7 @@
        {N_("Save Text..."), menu_savebuffer, GTK_STOCK_SAVE, M_MENUSTOCK, 0, 0,
 #define SEARCH_OFFSET (70)
        {N_("Search"), 0, GTK_STOCK_JUSTIFY_LEFT, M_MENUSUB, 0, 0, 1},
-               {N_("Search Text..."), menu_search, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_f},
+               {N_("Search Text..."), menu_search, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1},
                {N_("Search Next"   ), menu_search_next, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_g},
                {N_("Search Previous"   ), menu_search_prev, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_G},
                {0, 0, 0, M_END, 0, 0, 0},

After making that change, I rebuilt the hexchat package and installed it:

sudo apt-get build-dep hexchat
sudo apt-get install devscripts
cd hexchat-2.10.2/
debuild -b -uc -us
sudo dpkg -i ../hexchat_2.10.2-1_i386.deb

Update: I later wrote about how to automate this here: Debian: Holding packages you build from source, and rebuilding them easily.

And the hardwired Ctrl-F key binding was gone, and the normal forward-character binding from my GTK key theme took over.

I still have a couple of minor things I'd like to fix, like the too-large font hexchat uses for its channel tabs, but those are minor. At least I'm back to where I was before foolishly deciding to clean up my system.

February 25, 2016 02:00 AM

February 19, 2016

Akkana Peck

GIMP ditty: change font size and face on every text layer

A silly little GIMP ditty:
I had a Google map page showing locations of lots of metal recycling places in Albuquerque. The Google map shows stars for each location, but to find out the name and address of each location, you have to mouse over each star. I wanted a printable version to carry in the car with me.

I made a screenshot in GIMP, then added text for the stars over the places that looked most promising. But I was doing this quickly, and as I added text for more locations, I realized that it was getting crowded and I wished I'd used a smaller font. How do you change the font size for ALL font layers in an image, all at once?

Of course GIMP has no built-in method for this -- it's not something that comes up very often, and there's no reason it would have a filter like that. But the GIMP PDB (Procedural DataBase, part of the GIMP API) lets you change font size and face, so it's an easy script to write.

In the past I would have written something like this in script-fu, but now that Python is available on all GIMP platforms, there's no reason not to use it for everything.

Changing font face is just as easy as changing size, so I added that as well.

I won't bother to break it down line by line, since it's so simple. Here's the script: Mass change font face and size in all GIMP text layers.
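For anyone curious what the core of such a script looks like, it really is just a loop over the image's layers. Here's a minimal Python-fu sketch, assuming GIMP 2.8's PDB calls; the function name and example values are mine, not from the linked script:

```python
# Minimal GIMP Python-fu sketch: set the font face and size on every
# text layer in an image. Paste into GIMP's Python-Fu console
# (Filters->Python-Fu->Console); it will not run outside GIMP.
from gimpfu import *

def change_all_fonts(img, font, size):
    # Group everything into one undo step so a single Ctrl-Z reverts it
    pdb.gimp_image_undo_group_start(img)
    for layer in img.layers:
        if pdb.gimp_item_is_text_layer(layer):
            if font:
                pdb.gimp_text_layer_set_font(layer, font)
            pdb.gimp_text_layer_set_font_size(layer, size, UNIT_PIXEL)
    pdb.gimp_image_undo_group_end(img)

# For example, shrink all the labels on the first open image:
# change_all_fonts(gimp.image_list()[0], "Sans", 14)
```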

February 19, 2016 06:11 PM

February 18, 2016

Jono Bacon

Supporting Beep Beep Yarr!

Some of you may be familiar with LinuxVoice magazine. They put an enormous amount of effort in creating a high quality, feature-packed magazine with a small team. They are led by Graham Morrison who I have known for many years and who is one of the most thoughtful, passionate, and decent human beings I have ever met.

Well, the same team are starting an important new project called Beep Beep Yarr!. It is essentially a Kickstarter crowd-funded children’s book that is designed to teach core principles of programming to kids. The project not just involves the creation of the book, but also a parent’s guide and an interactive app to help kids engage with the principles in the book.

They are not looking to raise a tremendous amount of money ($28,684 is the goal converted to mad dollar) and they have already raised $15,952 at the time of writing. I just went and added my support – I can’t wait to read this to our 3 year-old, Jack.

I think this campaign is important for a few reasons. Firstly, I am convinced that learning to program and all the associated pieces (logic flow, breaking problems down into smaller pieces, maths, collaboration) is going to be a critical skill in the future. Programming is not just important for teaching people how to control computers but it also helps people to fundamentally understand and synthesize logic which has knock-on benefits in other types of thinking and problem-solving too.

Beep Beep Yarr! is setting out to provide an important first introduction to these principles for kids. It could conceivably play an essential role in jumpstarting this journey for lots of kids, our own included.

So, go and support the campaign, not just because it is a valuable project, but also because the team behind it are good people who do great work.

by Jono Bacon at February 18, 2016 06:05 PM

February 15, 2016

Elizabeth Krumbach

Simcoe’s January 2016 Checkups

First up: as I wrote back in August, since July Simcoe has been struggling with some sores and scabbing around her eyes and inside her ear. This typically goes away after a few weeks, but it keeps coming back. Over the winter holidays she started developing more scabbing, this time, in addition to her eyes and ears, showing up near her tail and back legs. She was also grooming excessively. What could be going on?

We went through some rounds of antibiotics and then some Neo-Poly-Dex Ophthalmic for treatment of bacterial infections around her eyes throughout the fall. Unfortunately this didn't help much, so at the beginning of January we scheduled an appointment with a dermatologist at SFVS, where she has mostly been transferred for more specialized care as her renal failure progresses. The dermatologist determined that she's actually suffering from allergies, which are causing the breakouts. She's now on a daily anti-allergy pill, Atopica. The outbreaks haven't returned, but now she seems to be suffering from increasing constipation, which we're currently trying to treat by supplementing her diet with pumpkin mixed with a renal diet wet food she likes. It's pretty clear that it causes her distress every time it happens. It's unclear whether the two are related, but I have a call with the dermatologist and possibly the vet this week to find out.

As for her renal failure, we had an appointment on January 16th with the specialist to look at her levels and see how she's doing. Due to the constipation we're reluctant to put her on appetite stimulants just yet, but she is continuing to lose weight, which is a real concern. Since November she was down from 8.9 to 8.8.

Simcoe weight

Her BUN and CRE levels also are on the increase, so we’re keeping a close eye on her.


Her next formal appointment is scheduled for April, so we’ll see how things go over the next month and a half. Behavior-wise she’s still the active and happy kitty we’re accustomed to, aside from the constipation.

Simcoe on Laundry
Simcoe on Suitcase

Still getting into my freshly folded laundry and claiming my suitcases every time I dare bring them out for a trip away from her!

by pleia2 at February 15, 2016 03:58 AM

February 12, 2016

Elizabeth Krumbach

Highlights from LCA 2016 in Geelong

Last week I had the pleasure of attending my second linux.conf.au. This year it took place in Geelong, a port city about an hour's train ride southwest of Melbourne. After my Melbourne-area adventures earlier in the week, I made my way to Geelong via train on Sunday afternoon. That evening I met up with a whole bunch of my HPE colleagues for dinner at a restaurant next to my hotel.

Monday morning the conference began! Every day, the 1km walk from my hotel to the conference venue at Deakin University's Waterfront Campus and back was a pleasure, as it took me along the shoreline. I passed a beach, a marina, and even a Ferris wheel and a carousel.

I didn’t make time to enjoy the beach (complete with part of Geelong’s interesting post-people art installation), but I know many conference attendees did.

With that backdrop, it was time to dive into some Linux! I spent much of Monday in the Open Cloud Symposium miniconf run by my OpenStack Infra colleague over at Rackspace, Joshua Hesketh. I really enjoyed the pair of talks by Casey West, The Twelve-Factor Container (video) and Cloud Anti-Patterns (video). In both talks he gave engaging overviews of best practices and common gotchas with each technology. With containers, it's a temptation during the initial adoption phase to treat them like "tiny VMs" rather than compute-centric, storage-free containers for horizontally-scalable applications. He also stressed the importance of a consolidated code base for development and production, of keeping any persistent storage out of containers, and more generally of Repeatability, Reliability and Resiliency. The second talk focused on how to bring applications into a cloud-native environment by using the 5 stages of grief repurposed for cloud-native. Key themes in this talk walked you through beginning with a legacy application being crammed into a container and the eventual modernization of that software into a series of microservices, including an automated build pipeline and continuous delivery with automated testing.

Unfortunately I was ill on Tuesday, so my conferencing picked up on Wednesday with a keynote by Catarina Mota who spoke on open hardware and materials, with a strong focus on 3D printing. It’s a topic that I’m already well-versed in, so the talk was mostly review for me, but I did enjoy one of the videos that she shared during her talk: Full Printed by nueveojos.

The day continued with a couple of talks that were some of my favorites of the conference. The first was Going Faster: Continuous Delivery for Firefox by Laura Thomson. Continuous Delivery (CD) has become increasingly popular for server-side applications that are served up to users, but this talk was an interesting take: delivering a client in a CD model. She didn’t offer a full solution for a CD browser, but instead walked through the problem space, design decisions and rationale behind the tooling they are using to get closer to a CD model for client-side software. Firefox is in an interesting space for this, as it already has add-ons that are released outside of the Firefox release model. What they decided to do was leverage this add-on tooling to create system add-ons, which are core to Firefox and to deliver microchanges, improvements and updates to the browser online. They’re also working to separate the browser code itself from the data that ships with it, under the premise that things like policy blacklists, dictionaries and fonts should be able to be updated and shipped independent of a browser version release. Indeed! This data would instead be shipped as downloadable content, and could also be tuned to only ship certain features upon request, like specific language support.

Laura Thomson, Director of Engineering, Cloud Services Engineering and Operations at Mozilla

The next talk that I got a lot out of was Wait, ?tahW: The Twisted Road to Right-to-Left Language Support (video) by Moriel Schottlender. Much like the first accessibility and internationalization talks I attended in the past, this is one of those talks that sticks with me because it opened my eyes to an area I'd never thought much about, as an English-only speaking citizen of the United States. She was also a great speaker who delivered the talk with humor and intrigue: "can you guess the behavior of this right-to-left feature?" The talk began by making the case for more UIs supporting right-to-left (RTL) languages, citing that there are 800 million RTL speakers in the world who we should be supporting. She walked us through the concepts of Visual and Logical Rendering, how "obvious" solutions like flipping all content are flawed, and considerations regarding the relationship of content and the interface itself when designing for RTL. She also gave us a glimpse into the behavior of the Unicode Bidirectional Algorithm and the fascinating ways it behaves when mixing LTR and RTL languages. She concluded by sharing that the expectations of RTL language users are pretty low since most software gets it wrong, but this means there's a great opportunity for projects that do support it to get it right. Her website on the topic has everything she covered in her talk, and more.

Moriel Schottlender, Software Engineer at Wikimedia

Wednesday night was the Penguin Dinner, which is the major, all attendees welcome conference dinner of the event. The venue was The Pier, which was a restaurant appropriately perched on the end of a very long pier. It was a bit loud, but I had some interesting discussions with my fellow attendees and there was a lovely patio where we were able to get some fresh air and take pictures of the bay.

On Thursday a whole bunch of us enjoyed a talk about a Linux-driven Microwave (video) by David Tulloh. What I liked most about his talk was that while he definitely was giving a talk about tinkering with a microwave to give it more features and make it more accessible, he was also “encouraging other people to do crazy things.” Hack a microwave, hack all kinds of devices and change the world! Manufacturing one-off costs are coming down…

In the afternoon I gave my own talk, Open Source Tools for Distributed Systems Administration (video, slides). I was a bit worried that attendance wouldn’t be good because of who I was scheduled against, but I was mistaken, the room was quite full! After the talk I was able to chat with some folks who are also working on distributed systems teams, and with someone from another major project who was seeking to put more of their infrastructure work into open source. In all, a very effective gathering. Plus, my colleague Masayuki Igawa took a great photo during the talk!

Photo by Masayuki Igawa (source)

The afternoon continued with a talk by Rikki Endsley on Speaking their language: How to write for technical and non-technical audiences (video). Helpfully, she wrote an article on the topic so I didn’t need to take notes! The talk walked through various audiences, lay, managerial and experts and gave examples of how to craft posting for each. The announcement of a development change, for instance, will look very different when presenting it to existing developers than it may look to newcomers (perhaps “X process changed, here’s how” vs. “dev process made easier for new contributors!”), and completely differently when you’re approaching a media outlet to provide coverage for a change in your project. The article dives deep into her key points, but I will say that she delivered the talk with such humor that it was fun to learn directly from hearing her speak on the topic.

Also got my picture with Rikki! (source)

Thursday night was the Speakers' dinner, which took place at a lovely little restaurant about 15 minutes from the venue by bus. I'm shy, so it's always a bit intimidating to rub shoulders with some of the high-profile speakers they have at LCA. Helpfully, I'm terrible with names, so I managed to chat away with a few people without realizing they were A Big Deal until later. Hah! So the dinner was nice, but it having been a long week, I was somewhat thankful when the buses came at 10PM to bring us back.

Friday began with my favorite keynote of the conference! It was by Genevieve Bell (video), an Intel fellow with a background in cultural anthropology. Like all of my favorite talks, hers was full of humor and wit, particularly around the fact that she’s an anthropologist who was hired to work for a major technology company without much idea of what that would mean. In reality, her job turned out to be explaining humans to engineers and technologists, and using their combined insight to explore potential future innovations. Her insights were fascinating! A key point was that traditional “future predictions” tend to be a bit near-sighted and very rooted in problems of the present. In reality our present is “messy and myriad” and that technology and society are complicated topics, particularly when taken together. Her expertise brought insight to human behavior that helps engineers realize that while devices work better when connected, humans work better while disconnected (to the point of seeking “disconnection” from the internet on our vacations and weekends).

Additionally, many devices and technologies aim to provide a “seamless” experience, but that humans actually prefer seamful interactions so we can split up our lives into contexts. Finally, she spent a fair amount of time talking about our lives in the world of Internet of Things, and how some serious rules will need to be put in place to make us feel safe and supported by our devices rather than vulnerable and spied upon. Ultimately, technology has to be designed with the human element in mind, and her plea to us, as the architects of the future, is to be optimistic about the future and make sure we’re getting it right.

After her talk I now believe every technology company should have a staff cultural anthropologist.

Intel Fellow and cultural anthropologist Genevieve Bell

My day continued with a talk by Andrew Tridgell on Helicopters and rocket-planes (video), one on Copyleft For the Next Decade: A Comprehensive Plan (video) by Bradley Kuhn, a talk by Matthew Garrett on Troublesome Privacy Measures: using TPMs to protect users (video) and an interesting dive into handling secret data with Tollef Fog Heen’s talk on secretd – another take on securely storing credentials (video).

With that, the conference came to a close with a closing session that included raffle prizes, thanks to everyone and the hand-off to the team running LCA 2017 in Hobart next year.

I went to more talks than highlighted in this post, but with a whole week of conferencing it would have been a lot to cover. I am also typically not the biggest fan of the "hallway track" (introvert, shy) and long breaks, but I knew enough people at this conference to find company during breaks and meals. I could also get a bit of work done during the longer breaks without skipping too many sessions, and it was easy to switch rooms between sessions without disruption. Plus, all the room moderators I saw did an excellent job of keeping things on schedule.

Huge thanks to all the organizers and everyone who made me feel so welcome this year. It was a wonderful experience and I hope to do it again next year!

More photos from the conference and beautiful Geelong here:

by pleia2 at February 12, 2016 09:20 PM

February 10, 2016


OpenShot 2.0.6 (Beta 3) Released!

The third beta of OpenShot 2.0 has been officially released! To install it, add the PPA by using the Terminal commands below:

sudo add-apt-repository ppa:openshot.developers/ppa
sudo apt-get update
sudo apt-get install openshot openshot-doc

Now that OpenShot is installed, you should be able to launch it from your Applications menu, or from the terminal ($ openshot-qt). Every time OpenShot has an update, you will be prompted to update to the newest version. It's a great way to test our latest features.

Smoother Animation
Animations are now silky smooth because of improved anti-aliasing support in the libopenshot compositing engine. Zooming, panning, and rotation all benefit from this change.

Audio Quality Improvements
Audio support in this new version is vastly superior to previous versions. Popping, crackling, and other related audio issues have been fixed.

New Autosave Engine
A new autosave engine has been built for OpenShot 2.0, and it's fast, simple to configure, and will automatically save your project at a specific interval (if it needs saving). Check the Preferences to be sure it's enabled (it will default to enabled for new users).

Automatic Backup and Recovery
Along with our new autosave engine, a new automatic backup and recovery feature has also been integrated into the autosave flow. If your project is not yet saved… have no fear, the autosave engine will make a backup of your unsaved project (as often as autosave is configured for), and if OpenShot crashes, it will recover your most recent backup on launch.

Project File Improvements
Many improvements have been made to project file handling, including relative paths for built-in transitions and improvements to temp files being copied to project folders (i.e. animated titles). Projects should be completely portable now, between different versions of OpenShot and on different Operating Systems. This was a key design goal of OpenShot 2.0, and it works really well now.

Improved Exception Handling
Integration between libopenshot (our video editing library) and openshot-qt (our PyQt5 user interface) has been improved. Exceptions generated by libopenshot are now passed to the user interface, and no longer crash the application. Users are now presented with a friendly error message with some details of what happened. Of course, there is still the occasional "hard crash" which kills everything, but many, many crashes will now be avoided, and users will be better informed about what has happened.

Preferences Improvements
There are more preferences available now (audio preview settings - sample rate, channel layout, debug mode, etc…), including a new feature to prompt users when the application will “require a restart” for an option to take effect.

Improved Stability on Windows
A couple of pretty nasty bugs were fixed for Windows, although in theory they should have crashed on other platforms as well. But for whatever reason, certain types of crashes relating to threading only seem to happen on Windows, and many of those are now fixed.

New Version Detection
OpenShot will now check the most recent released version on launch (from the website) and discreetly prompt the user by showing an icon in the top right of the main window. This has been a requested feature for a really long time, and it's finally here. It will also quietly give up if no Internet connection is available, and it runs in a separate thread, so it doesn't slow anything down.

Metrics and Anonymous Error Reporting
A new anonymous metric and error reporting module has been added to OpenShot. It can be enabled / disabled in the Preferences, and it will occasionally send out anonymous metrics and error reports, which will help me identify where crashes are happening. It’s very basic data, such as “WEBM encoding error - Windows 8, version 2.0.6, libopenshot-version: 0.1.0”, and all IP addresses are anonymized, but will be critical to help improve OpenShot over time.

Improved Precision when Dragging
Dragging multiple clips around the timeline has been improved. There were many small issues that would sometimes occur, such as extra spacing being added between clips, or transitions being slightly out of place. These issues have been fixed, and moving multiple clips now works very well.

Debug Mode
In the preferences, one of the new options is “Debug Mode”, which outputs a ton of extra info into the logs. This might only work on Linux at the moment, because it requires the capturing of standard output, which is blocked in the Windows and Mac versions (due to cx_Freeze). I hope to enable this feature for all OSes soon, or at least to provide a “Debug” version for Windows and Mac, that would also pop open a terminal/command prompt with the standard output visible.

Updated Translations
Updates to 78 supported languages have been made. A huge thanks to the translators who have been hard at work helping with OpenShot translations. There are over 1000 phrases which require translation, and seeing OpenShot run so seamlessly in different languages is just awesome! I love it!

Lots of Bug fixes

In addition to all the above improvements and fixes, many other smaller bugs and issues have been addressed in this version:
  • Prompt before overwriting a video on export
  • Fixed regression while previewing videos (causing playhead to hop around)
  • Default export format set to MP4 (regardless of language)
  • Fixed regression with Cutting / Split video dialog
  • Fixed Undo / Redo bug with new project
  • Backspace key now deletes clips (useful with certain keyboards and laptop keyboards)
  • Fixed bug on Animated Title dialog not updating progress while rendering
  • Added multi-line and unicode support to Animated Titles
  • Improved launcher to use distutils entry_points

Get Involved
Please report bugs and suggestions here:
Please contribute language translations here (if you are a non-English speaking user):

by iheartubuntu ( at February 10, 2016 01:23 PM

Elizabeth Krumbach

Kangaroos, Penguins, a Koala and a Platypus

On the evening of January 27th I began my journey to visit Australia for the second time in my life. My first visit to the land down under was in 2014 when I spoke at and attended my first linux.conf.au, in Perth. Perth was beautiful, in addition to the conference (which I wrote about here, here and here), I took some time to see the beach and visit the zoo during my tourist adventures.

This time I was headed for Melbourne to once again attend and speak at linux.conf.au, this time hosted in the port city of Geelong. I arrived the morning of Friday the 29th to spend a couple days adjusting to the time zone and visiting some animals. However, I was surprised at the unexpected discovery of something else I love in Melbourne: historic street cars. Called trams there, they run a free City Circle Tram that uses the historic cars! There’s even The Colonial Tramcar Restaurant, which allows you to dine inside one as you make your way along the city rails. Unfortunately my trip was not long enough to ride in a tram or enjoy a meal, but this alone puts Melbourne right on my list of cities to visit again.

At the Perth Zoo I got my first glimpse of a wombat (they are BIG!) and enjoyed walking through an enclosure where the kangaroos roamed freely. This time I had some more animals on my checklist, and wanted to get a bit closer to some others. After checking into my hotel in Melbourne, I went straight to the Melbourne Zoo.

I love zoos. I’ve visited zoos in countries all over the world. But there’s something special you should know about the Melbourne Zoo: they have a platypus. Everything I’ve read indicates that they don’t do very well in captivity and captive breeding is very rare. As a result, no zoos outside of Australia have platypuses, so if I wanted to see one it had to be in Australia. I bought my zoo ticket and immediately asked “where can I find the platypus?” With that, I got to see a platypus! The platypus was swimming in its enclosure and I wasn’t able to get a photo of it (moving too fast), but I did get a lovely video. They are funny creatures, and very cute!

The rest of the zoo was very nice. I didn’t see everything, but I spent a couple hours visiting the local animals and checking out some of their bigger exhibits. I almost skipped their seals (seals live at home!) and penguins (I’d see wild ones the next day!), but I’m glad I didn’t since it was a very nice setup. Plus, I wasn’t able to take pictures of the wild fairy penguins so as not to disturb them in their natural habitat, but the ones at the zoo were fine.

I also got a video of the penguins!

More photos from the Melbourne Zoo here:

When I got into a cab to return to my hotel it began to rain. I was able to pick up an early dinner and spend the evening catching up on some work and getting to bed early.

Saturday was animal tour day! I booked an AAT Kings full day Phillip Island – Penguins, Kangaroos & Koalas tour that had a tour bus picking me up right at my hotel. I selected the Viewing Platform Upgrade and it was well worth it.

Phillip Island is about two hours from Melbourne, and it’s where the penguins live. They come out onto the beach at sunset and all rush back to their homes. The rest of the tour was a series of activities leading up to this grand event, beginning with a stop at MARU Koala & Animal Park. We were on the bus for nearly two hours to get to the small park, during which the tour guide told us about the history of Melbourne and about the penguins we’d see later in the evening.

The tour included entrance fees, but I paid an extra $20 to pet a koala and get some food for the kangaroos and other animals. First up, koala! The koala I got to pet was an active critter. It sat still during my photo, but between people it could be seen reaching toward the keepers to get back the stem of eucalyptus that it got to munch on during the tourist photos. It was fun to learn that instead of being really soft like they look, their fur feels a lot more like wool.

The rest of my time at the park was spent with the kangaroos. Not only are they hopping around for everyone to see, as in the Perth Zoo, but when you have a container of food you get to feed them! And pet them! In case you’re wondering, it’s one of the best things ever. They’re all very used to being around human tourists all day, and when you lay your hand flat as instructed to have them eat from your hand, they don’t bite.

I got to feed and pet lots of kangaroos!

The rest of the afternoon was spent visiting a couple of scenic outlooks and a beach before stopping for dinner in the town of Cowes on Phillip Island, where I enjoyed a lovely fish dinner with a stunning view at Harry’s on the Esplanade. The weather was so nice!

Selfies were made for the solo tourist

As we approached the “skinny tip of the island” the tour guide told us a bit about the history of the island and the nature preserve where the penguins live. The area had once been heavily populated with vacation homes, but with the accidental introduction of foxes, which kill penguins, and an increasing human population, the island quickly saw its penguin (and other local wildlife) populations drop. We learned that a program was put in place to buy back all the private property and turn it into a preserve, and work was also done to rid the island of foxes. The program seems to have worked: the preserve no longer has private homes, and we saw dozens of wild wallabies as well as some of the large native geese that were also targets of the foxes. Most exciting for me was that the penguin population was preserved for us to enjoy.

As the bus made its way through the park, we could see little penguin homes throughout the landscape. Some were natural holes built by the penguins, and others were man-made houses put in place when they tore down a private home and discovered penguins had been using it for a burrow and required some kind of replacement. The hills were also covered in deep trails that we learned were little penguin highways, used for centuries (millennia?) by the little penguins to make their way from the ocean, where they hunted throughout the day, to their nests, where they spend the nights. The bus then stopped at the top of a hill that looked down onto the beach where we’d spend the evening watching the penguins come ashore. I took the picture from inside the bus, but if you look closely at this picture you can see the big rows of stadium seating, and then to the left, and closer, some benches that are curvy. The stadium-like seating was general admission and the curvy ones are the viewing platform upgrade I paid for.

The penguins come ashore when it gets dark (just before 9PM while I was there), so we had about an hour before then to visit the gift shop and get settled in to our seats. I took the opportunity to send post cards to my family, featuring penguins and sent out right there from the island. I also picked up a blanket, because in spite of the warm day and my rain jacket, the wind had picked up to make it a bit chilly and it was threatening rain by the time dusk came around.

It was then time for the penguins. With the viewing platform upgrade the penguins were still a bit far when they came out of the ocean, but we got a nice view of them as they approached up the beach, walking right past our seating area! They come out of the ocean in big clumps of a couple dozen, so each time we saw another grouping the human crowd would pipe up and notice. I think for the general admission it would be a lot harder to see them come up on the beach. The rest of the penguin parade is fun for everyone though, they waddle and scuttle up the island to their little homes, and they pass all the trails, regardless of where you were seated. Along the pathways the penguins get so close to you that you could reach out and touch them (of course, you don’t!). Photos are strictly prohibited since the risk is too high that someone would accidentally use a flash and disturb them, but it was kind of refreshing to just soak in the time with the penguins without a camera/phone. All told, I understand there are nearly 1,500 penguins each night that come out of the ocean at that spot.

The hills then come alive with penguin noises as they enjoy their evenings, chatting away and settling in with their chicks. Apparently this parade lasts well into the night, though most of them do come out of the ocean during the hour or so that I spent there with the tour group. At 10PM it was time to meet back at the bus to take us back to Melbourne. The timing was very good, about 10 minutes after getting in the bus it started raining. We got to watch the film Oddball on our journey home, about another island of penguins in Victoria that was at risk from foxes but was saved.

In all, the day was pretty overwhelming for me. In a good way. Petting some of these incredibly cute Australian animals! Seeing adorable penguins in the wild! A day that I’ll cherish for a lifetime.

More photos from the tour here:

The next day it was time to take a train to Geelong for the Linux conference. An event with a whole different type of penguins!

by pleia2 at February 10, 2016 08:58 AM

February 08, 2016

Akkana Peck

Attack of the Killer Titmouse!

[Juniper titmouse attacking my window] For the last several days, when I go upstairs in mid-morning I often hear a strange sound coming from the bedroom. It's a juniper titmouse energetically attacking the east-facing window.

He calls, most often in threes, as he flutters around the windowsill, sometimes scratching or pecking the window. He'll attack the bottom for a while, moving from one side to the other, then fly up to the top of the window to attack the top corners, then back to the bottom.

For several days I've run down to grab the camera as soon as I saw him, but by the time I get back and get focused, he becomes camera-shy and flies away, and I hear EEE EEE EEE from a nearby tree instead. Later in the day I'll sometimes see him down at the office windows, though never as persistently as upstairs in the morning.

I've suspected he's attacking his reflection (and also assumed he's a "he"), partly because I see him at the east-facing bedroom window in the morning and at the south-facing office window in the early afternoon. But I'm not sure about it, and certainly I hear his call from trees scattered around the yard.

Something I was never sure of, but am now: titmice definitely can raise and lower their crests. I'd never seen one with its crest lowered, but this one flattens his crest while he's in attack mode.

His EEE EEE EEE call isn't very similar to any of the calls listed for juniper titmouse in the Stokes CD set or the Audubon Android app. So when he briefly attacked the window next to my computer yesterday afternoon while I was sitting there, I grabbed a camera and shot a video, hoping to capture the sound. The titmouse didn't exactly cooperate: he chirped a few times, not always in the group of three he uses so persistently in the morning, and the sound in the video came out terribly noisy; but after some processing in Audacity I managed to edit out some of the noise. And then this morning as I was brushing my teeth, I heard him again and he was more obliging, giving me a long video of him attacking and yelling at the bedroom window. Here's the Juniper titmouse call as he attacks my window this morning, and the Juniper titmouse call at the office window yesterday. Today's video is on youtube: Titmouse attacking the window, but that's without the sound edits, so it's tough to hear him.

(Incidentally, since Audacity has a super confusing user interface and I'm sure I'll need this again, what seemed to work best was to highlight sections that weren't titmouse and use Edit→Delete; then use Effects→Amplify, checking the box for Allow clipping and using Preview to amplify it to the point where the bird is audible. Then find a section that's just noise, no titmouse, select it, run Effects→Noise Reduction and click Get Noise Profile. The window goes away, so click somewhere to un-select, call up Effects→Noise Reduction again and this time click OK.)

I feel a bit sorry for the little titmouse, attacking windows so frenetically. Titmice are cute, excellent birds to have around, and I hope he's saving some energy for attracting a mate who will build a nest here this spring. Meanwhile, he's certainly providing entertainment for me.

February 08, 2016 06:10 PM

February 05, 2016

Akkana Peck

Updating Debian under a chroot

Debian's Unstable ("Sid") distribution has been terrible lately. They're switching to a version of X that doesn't require root, and apparently the X transition has broken all sorts of things in ways that are hard to fix and there's no ETA for when things might get any better.

And, being Debian, there's no real bug system so you can't just CC yourself on the bug to see when new fixes might be available to try. You just have to wait, try every few days and see if the system has been fixed.

That's hard when the system doesn't work at all. Last week, I was booting into a shell but X wouldn't run, so at least I could pull updates. This week, X starts but the keyboard and mouse don't work at all, making it hard to run an upgrade.

Fortunately, I have an install of Debian stable ("Jessie") on this system as well. When I partition a large disk I always reserve several root partitions so I can try out other Linux distros, and when running the more experimental versions, like Sid, sometimes that's a life saver. So I've been running Jessie while I wait for Sid to get fixed. The only trick is: how can I upgrade my Sid partition while running Jessie, since Sid isn't usable at all?

I have an entry in /etc/fstab that lets me mount my Sid partition easily:

/dev/sda6 /sid ext4 defaults,user,noauto,exec 0 0
So I can type mount /sid as myself, without even needing to be root.

But Debian's apt upgrade tools assume everything will be on /, not on /sid. So I'll need to use chroot /sid (as root) to change the root of the filesystem to /sid. That only affects the shell where I type that command; the rest of my system will still be happily running Jessie.

Mount the special filesystems

That mostly works, but not quite, because I get a lot of errors like permission denied: /dev/null.

/dev/null is a device: you can write to it and the bytes disappear, as if into a black hole except without Hawking radiation. Since /dev is implemented by the kernel and udev, in the chroot it's just an empty directory. And if a program opens /dev/null in the chroot, it might create a regular file there and actually write to it. You wouldn't want that: it eats up disk space and can slow things down a lot.

The way to fix that is before you chroot: mount --bind /dev /sid/dev which will make /sid/dev a mirror of the real /dev. It has to be done before the chroot because inside the chroot, you no longer have access to the running system's /dev.

But there is a different syntax you can use after chrooting, with paths relative to the new root:

mount -t proc proc proc/
mount --rbind /sys sys/
mount --rbind /dev dev/

It's a good idea to do this for /proc and /sys as well, and Debian recommends adding /dev/pts (which must be done after you've mounted /dev), even though most of these probably won't come into play during your upgrade.

Mount /boot

Finally, on my multi-boot system, I have one shared /boot partition with kernels for Jessie, Sid and any other distros I have installed on this system. (That's somewhat hard to do using grub2 but easy on Debian, though you may need to turn off auto-update, and Debian is making it harder to use extlinux now.) Anyway, if you have a separate /boot partition, you'll want it mounted in the chroot, in case the update needs to add a new kernel. Since you presumably already have the same /boot mounted on the running system, use mount --bind for that as well.

So here's the final set of commands to run, as root:

mount /sid
mount --bind /proc /sid/proc
mount --bind /sys /sid/sys
mount --bind /dev /sid/dev
mount --bind /dev/pts /sid/dev/pts
mount --bind /boot /sid/boot
chroot /sid

And then you can proceed with your apt-get update, apt-get dist-upgrade etc. When you're finished, you can unmount everything with one command:

umount --recursive /sid
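The whole procedure can be wrapped in a small script. This is only a sketch of the commands above, assuming the same /sid mount point from the fstab entry; the run helper merely prints each command, so replace its body with "$@" (and run as root) to actually execute them.

```shell
#!/bin/sh
# A sketch of the whole upgrade-under-chroot procedure, assuming the /sid
# mount point used above.  run() only prints each command; change its body
# to "$@" and run the script as root to actually execute them.
SID=/sid
run() { echo "$@"; }

upgrade_chroot() {
    run mount "$SID"
    # Bind-mount the special filesystems (order matters: /dev before /dev/pts):
    for fs in /proc /sys /dev /dev/pts /boot; do
        run mount --bind "$fs" "$SID$fs"
    done
    # Run the upgrade inside the chroot, then tear everything down:
    run chroot "$SID" sh -c 'apt-get update && apt-get dist-upgrade'
    run umount --recursive "$SID"
}

upgrade_chroot
```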

Some helpful background reading:

February 05, 2016 06:43 PM

February 02, 2016

Nathan Haines

Ubuntu Free Culture Showcase submissions are now open again!

It’s time once again for the Ubuntu Free Culture Showcase!

The Ubuntu Free Culture Showcase is a way to celebrate the Free Culture movement, where talented artists across the globe create media and release it under licenses that encourage sharing and adaptation. We're looking for content which shows off the skill and talent of these amazing artists and will greet Ubuntu 16.04 LTS users.

Not only will the chosen content be featured on the next set of pressed Ubuntu discs shared worldwide over the next two years, but it will serve the joint purposes of providing a perfect test for new users trying out Ubuntu’s live session or a new installation, and celebrating the fantastic talents of artists who embrace Free content licenses.

While we hope to see contributions from the video, audio, and photographic realms, I also want to thank the artists who have provided wallpapers for Ubuntu release after release. Ubuntu 15.10 shipped with wallpapers from the following contributors:

I'm looking forward to seeing the next round of entrants, and I anticipate a difficult time picking final choices to ship with Ubuntu 16.04 LTS.

For more information, please visit the Ubuntu Free Culture Showcase page on the Ubuntu wiki.

February 02, 2016 11:33 AM

February 01, 2016

Jono Bacon

The Hybrid Desktop

OK, folks, I want to share a random idea that cropped up after a long conversation with Stuart Langridge a few weeks back. This is merely food for thought, designed to trigger some discussion.

Today my computing experience is comprised of Ubuntu and Mac OS X. On Ubuntu I am still playing with GNOME Shell and on Mac I am using the standard desktop experience.

I like both. Both have benefits and disadvantages. My Mac has beautiful hardware and anything I plug into it just works out of the box (or has drivers). While I spend most of my life in Chrome and Atom, I use some apps that are not available on Ubuntu (e.g. the Bluejeans and Evernote clients). I also find multimedia is just easier and more reliable on my Mac.

My heart will always be with Linux though. I love how slick and simple Shell is and I depend on the huge developer toolchain available to me in Ubuntu. I like how customizable my desktop is and that I can be part of a community that makes the software I use. There is something hugely fulfilling about hanging out with the people who make the tools you use.

So, I have two platforms and use the best of both. The problem is, they feel like two different boxes of things sat on the same shelf. I want to jumble the contents of those boxes together and spread them across the very same shelf.

The Idea

So, imagine this (this is total fantasy, I have no idea if this would be technically feasible.)

You want the very best computing experience, so you first go out and buy a Mac. They have arguably the nicest overall hardware combo (looks, usability, battery etc) out there.

You then download a distribution from the Internet. This is shipped as a .dmg and you install it. It then proceeds to install a bunch of software on your computer. This includes things such as:

  • GNOME Shell
  • All the GNOME 3 apps
  • Various command line tools commonly used on Linux
  • An ability to install Linux packages (e.g. Debian packages, RPMs, snaps) natively

When you fire up the distribution, GNOME Shell appears (or Unity, KDE, Elementary etc) and it is running natively on the Mac, full screen like you would see on Linux. For all intents and purposes it looks and feels like a Linux box, but it is running on top of Mac OS X. This means hardware issues (particularly hardware that needs specific drivers) go away.

Because shell is native it integrates with the Mac side of the fence. All the Mac applications can be browsed and started from Shell. Nautilus shows your Mac filesystem.

If you want to install more software you can use something such as apt-get, snappy, or another service. Everything is pulled in and available natively.

Of course, there will be some integration points where this may not work (e.g. alt-tab might not be able to display Shell apps as well as Mac apps), but importantly you can use your favorite Linux desktop as your main desktop yet still use your favorite Mac apps and features.

I think this could bring a number of benefits:

  • It would open up a huge userbase as a potential audience. Switching to Linux is a big deal for most people. Why not bring the goodness to the Mac userbase?
  • It could be a great opportunity for smaller desktops to differentiate (e.g. Elementary).
  • It could be a great way to introduce people to open source in a more accessible way (it doesn’t require a new OS).
  • It could potentially bring lots of new developers to projects such as GNOME, Unity, KDE, or Elementary.
  • It could significantly increase the level of testing, translations and other supplemental services due to more people being able to play with it.

Of course, from a purely Free Software perspective it could be seen as a step back. Then again, with Darwin being open source and the desktop and apps you install in the distribution being open source, it would be a mostly free platform. It wouldn’t be free in the eyes of the FSF, but then again, neither is Ubuntu. 😉

So, again, just wanted to throw the idea out there to spur some discussion. I think it could be a great project to see. It wouldn’t replace any of the existing Linux distros, but I think it could bring an influx of additional folks over to the open source desktops.

So, two questions for you all to respond to:

  1. What do you think? Could it be an interesting project?
  2. If so, technically how do you think this could be accomplished?

by Jono Bacon at February 01, 2016 03:17 AM

January 31, 2016

Elizabeth Krumbach


I have already written about the UbuCon Summit and Ubuntu booth at SCALE14x (14th annual Southern California Linux Expo), but the conference went far beyond Ubuntu for me!

First of all, I love this new venue. SCALE had previously been held at hotels near LAX, with all the ones I’d attended being at the Hilton LAX. It was a fine venue itself, but the conference was clearly outgrowing it even when I last attended in 2014 and there weren’t many food options around, particularly if you wanted a more formal meal. The Pasadena Convention Center was the opposite of this. Lots of space, lots of great food of all kinds and price ranges within walking distance! A whole plaza across from the venue made a quick lunch at a nice place quite doable.

It’s also worth mentioning that with over 3000 attendees this year, the conference has matured well. My first SCALE was 9x back in 2011, and with every year the growth and professionalism has continued, but without losing the feel of a community-run, regional conference that I love so much. The expo hall has continued to show a strong contingent of open source project and organization booths among the flashy company-driven ones, and even the company booths weren’t overdone. Kudos to the SCALE crew for their work and efforts that make SCALE continue to be one of my favorite open source conferences.

As for the conference itself, MJ and I were both able to attend for work, which was a nice change for us. Plus, given how much conference travel I’ve done on my own, it’s nice to travel and enjoy an event together.

Thursday was taken up pretty much exclusively by the UbuCon Summit, but Friday we started to transition into more general conference activities. The first conference-wide keynote was on Friday morning with Cory Doctorow presenting No Matter Who’s Winning the War on General Purpose Computing, You’re Losing where he explored security and Digital rights management (DRM) in the exploding field of the Internet of Things. His premise was that we did largely win the open source vs. proprietary battle, but now we’re in a whole different space where DRM is threatening our safety and stifling innovation. Security vulnerabilities in devices are going undisclosed when discovered by third parties under threat of prosecution for violating DRM-focused laws which have popped up worldwide. Depending on the device, this fear of disclosure could actually result in vulnerabilities causing physical harm to someone if compromised in a malicious way. He also dove into a more dystopian future where smart devices are given away for free/cheap but then are phoning home and can be controlled remotely by an entity that doesn’t have your personal best interest in mind. The talk certainly gave me a lot to think about. He concluded by presenting the Apollo 1201 Project, “a mission to eradicate DRM in our lifetime,” that he’s working on at the EFF; article here.

Later that morning I made my way over to the DevOpsDayLA track to present on Open Source tools for distributed systems administration. Unfortunately, the projectors in the room weren’t working. Thankfully my slides were not essential to the talk, so even though I did feel a bit unsettled to present without slides, I made it through. People even said nice things afterwards, so I think it went pretty well in spite of the technology snafu. The slides that should have been seen during the talk are available here (PDF) and since I am always asked, I do maintain a list of other open source infras. Thanks to @scalexphotos for capturing a photo during my talk.

In the afternoon I spent some time in the expo hall, where I was able to see many more familiar faces! Again, the community booths are the major draw for me, so it was great visiting with participants of projects and groups there. It was nice to swing by the Ubuntu booth to see how polished everything was looking. I also got to see Emma of System76, who I hadn’t seen in quite some time.

Friday evening had a series of Birds of a Feather (BoF) sessions. I was able to make my way over to one on OpenStack before wrapping up my evening.

Saturday morning began with a welcome from Pasadena City Council member Andy Wilson who was enthusiastic about SCALE14x coming to Pasadena and quickly dove into his technical projects and the work being done in Pasadena around tech. I love this trend of city officials welcoming open source conferences to their area, it means a lot that the work we’re doing is being taken seriously by the cities we’re in. Then it moved into a keynote by Mark Shuttleworth on Open Source in the World of App Stores which had many similarities to his talk at the UbuCon Summit, but focused more generally on how distributions can help keep pace with today’s computing, which deploys “at the speed of git.”

I then went to Akkana Peck’s talk on Stupid GIMP tricks (and smart ones, too). It was a very visual talk, so I’m struggling to do it justice in written form, but she demonstrated various tools for photo editing in GIMP that I knew nothing about; I learned a lot. She concluded by talking about the features that came out in the 2.8 release and then the features planned and being worked on in the upcoming 2.9 release. Video of the talk is here. In the afternoon I attended a Kubernetes talk, noting quickly that the containers track was pretty packed throughout the conference.

Akkana Peck on GIMP

Between “hallway track” chats about everything from the Ubuntu project to the OpenStack project infrastructure tooling, Saturday afternoon also gave me the opportunity to do a bit more wandering through the expo hall. I visited my colleagues at the HPE booth and was able to see their cloud in a box. It was amusing to see the suitcase version and the Ubuntu booth with an Orange Box. Putting OpenStack clouds in a single demonstration deployment for a conference is a popular thing!

My last talk of the day was by OpenStack Magnum Project Technical Lead Adrian Otto on Docker, Kubernetes, and Mesos: Compared. He walked us through some of the basics of Magnum first, then dove into each technology. Docker Swarm is good for simple tooling that you’re comfortable with and doing exactly what you tell it (imperative) and have 100s-1000s machines in the cluster. Kubernetes is more declarative (you tell it what you want, it figures out how to do it) and currently has some scaling concerns that make it better suited for a cluster of up to 200 nodes. Mesos is a more complicated system that he recommended using if you have a dedicated infrastructure team and can effectively scale to over 10k nodes. Video of the talk here

Sunday began with a keynote by Sarah Sharp on Improving Diversity with Maslow’s Hierarchy of Needs. She spoke about diversity across various angles, from income and internet bandwidth restrictions to gender and race, and the intersection of these things. There are many things that open source projects assume: unlimited ability to download software, ability for contributors to have uninterrupted “deep hack mode” time, access to fast systems to do development on. These assumptions fall apart when a contributor is paying for the bandwidth they use, is a caretaker who doesn’t get long periods without interruptions, or doesn’t have access to a fast, new system. Additionally, there are opportunities that are simply denied to many genders, as studies have shown that mothers and daughters don’t have as many opportunities or as much access to technology as the fathers and sons in their household. She also explored safety in a community, demonstrating how even a single sexist or racist contributor can single-handedly destroy diversity for your project by driving away potential contributors. Having a well-written code of conduct with a clear enforcement plan is also important, and she cited resources for organizations and people who could help you with that, warning that you shouldn’t roll your own. She concluded by asking audience members to recognize the problem and take action in their communities to help improve diversity. Her excellent slides (with notes) are here and a video of the talk here.

I then made my way to the Sysadmin track to see Jonah Horowitz and Albert Tobey on From Sys Admin to Netflix SRE. First off, their slides were hilarious. Lots of 80s references to things that were outdated as they made their way through how they’re doing Site Reliability Engineering (SRE) at Netflix and inside their CORE (Cloud Operations Reliability Engineering) team. In their work, they’ve moved past configuration management, preferring to deploy baked AMIs (essentially, golden images). They also don’t see themselves as “running applications for the developers” and instead empower developers to do their own releases and application-level monitoring. In this new world of managing fleets of servers rather than individual systems, they’ve worked to develop a blameless culture where they do postmortems so that anything that is found to be done manually or otherwise error-prone can be fixed so the issue doesn’t happen again. They also shared the open source tooling that they use to bypass traditional monitoring systems and provide SREs with a high level view of how their system is working, noting that no one in the organization “knows everything” about the infrastructure. This tooling includes Spinnaker, Atlas and Vector, along with their well-known Simian Army which services within Netflix must run (unless they have a good reason not to) to test tolerance of random instance failures. Video of the talk can be found here and slides here.

After lunch I made my way to A fresh look at SELinux… by Daniel Walsh. I’d seen him speak on SELinux before, and found his talk valuable then too. This time I was particularly interested in how it’s progressed in RHEL7/CentOS7, like the new default rules for a file type, such as knowing what permissions /home/user/.ssh should have, and a semanage command to set those permissions to that default instead of doing it manually. I also learned about semanage -e (equivalency) to copy permissions from one place to another, and the new mv -Z which moves things while retaining the SELinux properties. Finally, I somehow didn’t have a good grasp on the improvements to the man pages: things like `man httpd_selinux` now work and are very helpful! I was also amused to learn about (especially since our team does not turn it off, and that took some work on my part!). In closing, there’s also an SELinux Coloring Book (which I’ve written about before), and though I didn’t get to the session in time to get one, MJ picked one up for me in the expo hall. Video of the talk here.
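For reference, the commands mentioned above look roughly like this (a sketch only: the paths are my own illustrative choices, not from the talk, and since these commands need root on an SELinux-enabled system, the script just prints what it would run):

```shell
#!/bin/sh
# Sketch of the SELinux management commands from the talk. Paths are
# illustrative; we only echo the commands since they need root on an
# SELinux-enabled system.
run() { echo "+ $*"; }

# Reset a directory tree to its policy-default labels instead of
# setting contexts by hand:
run restorecon -Rv /home/user/.ssh

# semanage equivalency: label files under /srv/web as if they lived
# in /var/www:
run semanage fcontext -a -e /var/www /srv/web

# mv -Z relabels the file to match the default context of its new
# location:
run mv -Z index.html /srv/web/

# And the improved per-domain man pages:
run man httpd_selinux
```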

With that, we were at the last talk of the conference. I went over to Dustin Kirkland’s talk on “adapt install [anything]” on your Ubuntu LTS server/desktop! Adapt is a wrapper around LXD containers that allows you, as an unprivileged user, to install Ubuntu software from various releases and run it locally on your system. The script handles provisioning the container, many default settings, and keeping it updated automatically, so you really can “adapt install” and then run a series of adapt commands to interact with the software as if it were installed locally. It all reminded me of the pile of chroot-building scripts I had back when I was doing Debian packaging, but more polished than mine ever were! He wrote a blog post following up his talk here: adapt install [anything], which includes a link to his slides. Video from the talk here (link at 4 hours 42 minutes).

With the conference complete, it was sad to leave, but I had an evening flight out of Burbank. Amusingly, even my flight was full of SCALE folks, so there were some fun chats in the boarding area before our departure.

Huge thanks to everyone who made SCALE possible, I’m looking forward to next year!

More photos from SCALE14x here:

by pleia2 at January 31, 2016 09:15 PM

Akkana Peck

Setting mouse speed in X

My mouse died recently: the middle button started bouncing, so a middle button click would show up as two clicks instead of one. What a piece of junk -- I only bought that Logitech some ten years ago! (Seriously, I'm pretty amazed how long it lasted, considering it wasn't anything fancy.)

I replaced it with another Logitech, which turned out to be quite difficult to find. Turns out most stores only sell cordless mice these days. Why would I want something that depends on batteries to use every day at my desktop?

But I finally found another basic corded Logitech mouse (at Office Depot). Brought it home and it worked fine, except that the speed was way too fast, much faster than my old mouse. So I needed to find out how to change mouse speed.

X11 has traditionally made it easy to change mouse acceleration, but that wasn't what I wanted. I like my mouse to be fairly linear, not slow to start and then suddenly zippy. There's no X11 property for mouse speed; it turns out that to set mouse speed, you need to look for a property called Deceleration.

But first, you need to get the ID for your mouse.

$ xinput list| grep -i mouse
⎜   ↳ Logitech USB Optical Mouse                id=11   [slave  pointer  (2)]

Armed with the ID of 11, we can find the current speed (deceleration) and its ID:

$ xinput list-props 11 | grep Deceleration
        Device Accel Constant Deceleration (259):       3.500000
        Device Accel Adaptive Deceleration (260):       1.000000

Constant deceleration is what I want to set, so I'll use that ID of 259 and set the new deceleration to 2:

$ xinput set-prop 11 259 2

That's fine for doing it once. But what if you want it to happen automatically when you start X? Those constants might all stay the same, but what if they don't?

So let's build a shell pipeline that should work even if the constants aren't the same.

First, let's get the mouse ID out of xinput list. We want to pull out the digits immediately following "id=", and nothing else.

$ xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/'

Save that in a variable (because we'll need to use it more than once) and feed it in to list-props to get the deceleration ID. Then use sed again, in the same way, to pull out just the thing in parentheses following "Deceleration":

$ mouseid=$(xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/')
$ xinput list-props $mouseid | grep 'Constant Deceleration'
        Device Accel Constant Deceleration (262):       2.000000
$ xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/'

Whew! Now we have a way of getting both the mouse ID and the ID for the "Constant Deceleration" parameter, and we can pass them in to set-prop with our desired value (I'm using 2) tacked onto the end:

$ xinput set-prop $mouseid $(xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/') 2

Add those two lines (setting the mouseid, then the final xinput line) wherever your window manager will run them when you start X. For me, using Openbox, they go in .config/openbox/autostart. And now my mouse will automatically be the speed I want it to be.
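If you'd rather keep it all in one file, here's a sketch of the same thing as a small standalone script. The function names and the guard around the xinput calls are my own additions, and it assumes, as above, that your device name contains "Mouse" and the property line contains "Constant Deceleration":

```shell
#!/bin/sh
# Set constant mouse deceleration at X startup, looking up both IDs
# dynamically. Based on the pipelines above.

# Pull the digits immediately following "id=" out of an `xinput list` line.
extract_device_id() {
    sed 's/.*id=\([0-9]*\).*/\1/'
}

# Pull the number in parentheses following "Deceleration" out of an
# `xinput list-props` line.
extract_prop_id() {
    sed 's/.* Deceleration (\([0-9]*\)).*/\1/'
}

# Only talk to the X server if we actually have one and xinput exists.
if [ -n "$DISPLAY" ] && command -v xinput >/dev/null 2>&1; then
    mouseid=$(xinput list | grep Mouse | extract_device_id)
    decelid=$(xinput list-props "$mouseid" | grep 'Constant Deceleration' \
              | extract_prop_id)
    xinput set-prop "$mouseid" "$decelid" 2
fi
```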

January 31, 2016 08:42 PM

January 30, 2016

Elizabeth Krumbach

Ubuntu at SCALE14x

I spent a long weekend in Pasadena from January 21-24th to participate in the 14th Annual Southern California Linux Expo (SCALE14x). As I mentioned previously, a major part of my attendance was focused on the Ubuntu-related activities. Wednesday evening I joined a whole crowd of my Ubuntu friends at a pre-UbuCon meet-and-greet at a wine bar (all ages were welcome) near the venue.

It was at this meet-and-greet where I first got to see several folks I hadn’t seen since the last Ubuntu Developer Summit (UDS) back in Copenhagen in 2012. Others I had seen recently at other open source conferences and still more I was meeting for the first time, amazing contributors to our community who I’d only had the opportunity to get to know online. It was at that event that the excitement and energy I used to get from UDS came rushing back to me. I knew this was going to be a great event.

The official start of this first UbuCon Summit began Thursday morning. I arrived bright and early to say hello to everyone, and finally got to meet Scarlett Clark of the Kubuntu development team. If you aren’t familiar with her blog and are interested in the latest updates to Kubuntu, I highly recommend it. She’s also one of the newly elected members of the Ubuntu Community Council.

Me and Scarlett Clark

After morning introductions, we filed into the ballroom where the keynote and plenaries would take place. It was the biggest ballroom of the conference venue! The SCALE crew really came through in support of this event; it was quite impressive. Plus, the room was quite full for the opening and Mark Shuttleworth’s keynote, particularly when you consider that it was a Thursday morning. Richard Gaskin and Nathan Haines, familiar names to anyone who has been to previous UbuCon events at SCALE, opened the conference with a welcome and details about how the event had grown this year. After handling logistics and other housekeeping details, they quickly went through how the event would work, with a keynote, a series of plenaries, and then split User and Developer tracks in the afternoon. They concluded by thanking sponsors and the various volunteers and Canonical staff who made the UbuCon Summit a reality.

UbuCon Summit introduction by Richard Gaskin and Nathan Haines

The welcome, Mark’s keynote and the morning plenaries are available on YouTube, starting here and continuing here.

Mark’s keynote began by acknowledging the technical and preference diversity in our community, from desktop environments to devices. He then reflected upon his own history in Linux and open source, starting in university when he first installed Linux from a pile of floppies. It’s been an interesting progression to see where things were twenty years ago, and how many of the major tech headlines today are driven by Linux and Ubuntu, from advancements in cloud technology to self-driving cars. He continued by talking about success on a variety of platforms, from the tiny Raspberry Pi 2 to supercomputers and the cloud, Ubuntu has really made it.

With this success story, he leapt into the theme of the rest of his talk: “Great, let’s change.” He dove into the idea that today’s complex, multi-system infrastructure software is “too big for apt-get” as you consider relationships and dependencies between services. Juju is what he called “apt-get for the cloud/cluster” and explained how LXD, the next evolution of LXC running as a daemon, gives developers the ability to run a series of containers to test deployments of some of these complex systems. This means that just like the developers and systems engineers of the 90s and 00s were able to use open source software to deploy demonstrations of standalone software on our laptops, containers allow the students of today to deploy complex systems locally.

He then talked about Snappy, the new software packaging tooling. His premise was that even a six month release cycle is too long, as many people are continuously delivering software from sources like GitHub. Many places have a solid foundation of packages we rely upon and then a handful of newer tools that can be packaged quickly with Snappy rather than going through the traditional Debian packaging route, which is considerably more complicated. It was interesting to listen to this; as a former Debian package maintainer myself, I always wanted to believe that we could teach everyone to do software packaging. However, watching the community’s work with app developers play out, it became clear that between their reluctance and the backlog felt by the App Review Board, it really wasn’t working. Snappy moves us away from PyPI, PPAs and such into an easier, but still packaged and managed, way to handle software on our systems. It’ll be fascinating to see how this goes.

Mark Shuttleworth on Snappy

He concluded by talking about the popular Internet of Things (IoT) and how Ubuntu Core with Snappy is so important here. DJI, “the market leader in easy-to-fly drones and aerial photography systems,” now offers an Ubuntu-driven drone. The Open Source Robotics Foundation uses Ubuntu. GE is designing smart kitchen appliances powered by Ubuntu, and many (all?) of the publicly known self-driving cars use Ubuntu somewhere inside them. There was also a business model here: a company produces the hardware and the minimal feature set that comes with it, also sells a more advanced version, and industry-expert third parties further build upon it to sell industry-specific software.

After Mark’s talk there were a series of plenaries that took place in the same room.

First up was Sergio Schvezov, who followed Mark’s keynote nicely as he gave a demo of Snapcraft, the tool used to turn software into a .snap package for Ubuntu Core. Next up was Jorge Castro, who gave a great talk about the state of gaming on Ubuntu, which he said was “Not bad.” Having just had this discussion with my sister, the timing was great for me. On the day of his talk, there were 1,516 games on Steam that would natively run on Linux, a nice selection of which are modern games that are new and exciting across multiple platforms today. He acknowledged the pre-made Steam Boxes but also made the case for homebrewed Steam systems with graphics card recommendations, explaining that Intel did fine, that AMD is still lagging behind high performance with their open source drivers, and giving several models of NVIDIA cards that do very well today (from low to high quality, and cost: 750Ti, 950, 960, 970, 980, 980Ti). He also passed around a Linux-compatible controller to the audience. He concluded by talking about some issues remaining with Linux gaming, including regressions in drivers that cause degraded performance, the general performance gap when compared to some other gaming systems, and the remaining stigma that there are “no games” on Linux, which talks like this seek to reverse.

Plenaries continued with Didier Roche introducing Ubuntu Make, a project which makes creating a developer platform out of Ubuntu with several SDKs much easier, so that developers can reduce their bootstrapping time. His blog has a lot of great posts on the tooling. The last talk of the morning was by Scarlett Clark, who gave us a quick update on Kubuntu development, explaining that the team had recently joined forces with KDE packagers in Debian to more effectively share resources in their work.

It was then time for group photo! Which included my xerus, and where I had a nice chat (and selfie!) with Carla Sella as we settled in for the picture.

Me and Carla Sella

In the afternoon I attended the User track, starting off with Nathan Haines on The Future of Ubuntu. In it, he discussed what convergence of devices meant for Ubuntu and warded off concerns that the work on the phone was done in isolation and wouldn’t help the traditional (desktop, server) Ubuntu products. With Ubuntu Core and Snappy, he explained, all the work done on phones is being rolled back into progress made on the other systems, and even IoT devices, that will use them in the future. Following Nathan was the Ubuntu Redux talk by Jono Bacon. His talk could largely be divided into two parts: the history of Ubuntu and how we got here, and 5 recommendations for the Ubuntu community. He had lots of great stories and photos, including one of a very young Mark, and moved right along to today with Unity 8 and the convergence story. His 5 recommendations were interesting, so I’ll repeat them here:

  1. Focus on core opportunities. Ubuntu can run anywhere, but should it? We have finite resources, focus efforts accordingly.
  2. Rethink what community in Ubuntu is. We didn’t always have Juju charmers and app developers, but they are now a major part of our community. Understand that our community has changed and adjust our vision as to where we can find new contributors.
  3. Get together more in person. The Ubuntu Online Summit works for technical work, but we’ve missed out on the human component. In person interactions are not just a “nice to have” in communities, they’re essential.
  4. Reduce ambiguity. In a trend that would continue in our leadership panel the next day, some folks (including Jono) argue that there is still ambiguity around intellectual property and licensing in the Ubuntu community (Mark disagrees).
  5. Understand people who are not us.

Nathan Haines on The Future of Ubuntu

The next presentation was my own, on Building a career with Ubuntu and FOSS, where I drew upon examples from my own career and those of others I’ve worked with in the Ubuntu community to share recommendations for folks looking to contribute to Ubuntu and FOSS as a way to develop skills and tools for their career. Slides here (PDF). David Planella on The Ubuntu phone and the road to convergence followed my talk. He walked audience members through the launch plan for the phone, going through the device launch with BQ for Ubuntu enthusiasts, the second phase for “innovators and early adopters” where they released the Meizu devices in Europe and China, and how they’re tackling phase three: general customer availability. He talked about the Ubuntu Phone Insiders, a group of 30 early-access individuals who came from a diverse crowd to provide early feedback and share details (via blog posts, social media) with others. He then gave a tour of the phones themselves, including how scopes (“like mini search engines on your phone”) change how people interact with their device. He concluded with a note about the availability of the SDK for phones, and that they’re working to make it easy for developers to upload and distribute their applications.

Video from the User track can be found here. The Developer track was also happening, video for that can be found here. If you’re scanning through these to find a specific talk, note that each is 1 hour long.

Presentations for the first day concluded with a Q&A with Richard Gaskin and Nathan Haines back in the main ballroom. Then it was off to the Thursday evening drinks and appetizers at Porto Alegre Churrascaria! Once again, a great opportunity to catch up with friends old and new in the community. It was great running into Amber Graner and getting to talk about our respective paid roles these days, and even touched upon key things we worked on in the Ubuntu community that helped us get there.

The UbuCon Summit activities continued after a SCALE keynote with an Ubuntu Leadership panel which I participated in along with Oliver Ries, David Planella, Daniel Holbach, Michael Hall, Nathan Haines and José Antonio Rey with Jono Bacon as a moderator. Jono had prepared a great set of questions, exploring the strengths and weaknesses in our community, things we’re excited about and eager to work on and more. We also took questions from the audience. Video for this panel and the plenaries that followed, which I had to miss in order to give a talk elsewhere, are available here. The link takes you to 1hr 50min in, where the Leadership panel begins.

The afternoon took us off into unconference mode, which allowed us to direct our own conference setup. Due to the aforementioned talk I was giving elsewhere, I wasn’t able to participate in scheduling, but I did attend a couple sessions in the afternoon. The first, proposed by Brendan Perrine, was a discussion of strategies for keeping the Ubuntu documentation up to date; we also talked about the status of the Community Help wiki, which has been locked down due to spam for nearly a month(!). I then joined cm-t arudy to chat about an idea the French team is floating around to have people quickly share stories and photos about Ubuntu in some kind of community forum. The conversation was a bit tool-heavy, but everyone was also conscious of how it would need to be moderated. I hope I see something come of this, it sounds like a great project.

With the UbuCon Summit coming to a close, the booth was the next great task for the team. I couldn’t make time to participate this year, but the booth featured lots of great goodies and a fleet of contributors working the booth who were doing a fantastic job of talking to people as the crowds continued to flow through each day.

Huge thanks to everyone who spent months preparing for the UbuCon Summit and booth on the SCALE14x expo hall. It was a really amazing event that I was proud to be a part of. I’m already looking forward to the next one!

Finally, I took responsibility for the @ubuntu_us_ca Twitter account throughout the weekend. It was the first time I’ve done such a comprehensive live-tweeting of an event from a team/project account. I recommend a browse through the tweets if you’re interested in hearing more from other great people live-tweeting the event. It was a lot of fun, but also surprisingly exhausting!

More photos from my time at SCALE14x (including lots of Ubuntu ones!) here:

by pleia2 at January 30, 2016 11:40 PM

Jono Bacon

Happy Birthday, Stuart

About 15 years ago I met Stuart ‘Aq’ Langridge when he walked, with his trademark bombastic personality and humor, into the new Wolverhampton Linux Users Group I had just started. In the years since those first interactions we have become really close friends.

Today Stuart turns 40 and I just wanted to share a few words about how remarkable a human being he is.

Many of you who have listened to Stuart on Bad Voltage, seen him speak, worked with him, or socialized with him will know him for his larger than life personality. He is funny, warm, and passionate about his family, friends, and technology. He is opinionated, and many of you will know him for the amusing, insightful, and tremendously articulate way in which he expresses his views.

He is remarkably talented and has an incredible level of insight and perspective. He is not just a brilliant programmer and software architect, but he has a deft knowledge and understanding of people, how they work together, and the driving forces behind human interaction. What I have always admired is that while bombastic in his views, he is always open to fresh ideas and new perspectives. For him life is a journey and new ways of looking at the road are truly thrilling for him.

As I have grown as a person in my career, with my family, and particularly when moving to America, he has always supported yet challenged me. He is one of those rare friends that can enthusiastically validate great steps forward yet, with the same enthusiasm, illustrate mistakes too. I love the fact that we have a relationship that can be so open and honest, yet underlined with respect. It is his personality, understanding, humor, thoughtfulness, care, and mentorship that will always make him one of my favorite people in the world.

Stuart, I love you, pal. Have an awesome birthday, and may we all continue to cherish your friendship for many years to come.

by Jono Bacon at January 30, 2016 09:05 PM

January 29, 2016

Jono Bacon

Heading Out To

On Saturday I will be flying out to the conference taking place in Geelong. Because it is outrageously far away from where I live, I will arrive on Monday morning. 🙂

I am excited to be joining the conference. The last time I made the trip was sadly way back in 2007 and I had an absolutely tremendous time. Wonderful people, great topics, and well worth the trip. Typically I have struggled to get out with my schedule, but I am delighted to be joining this year.

I will also be delivering one of the keynotes this year. My keynote will be on Thu 4th Feb 2016 at 9am. I will be delving into how we are at potentially the most exciting time ever for building strong, collaborative communities, and sharing some perspectives on how we empower a new generation of open source contributors.

So, I hope to see many of you there. If you want to get together for a meeting, don’t hesitate to get in touch. See you there!

by Jono Bacon at January 29, 2016 07:10 AM

January 23, 2016


Brave Browser on Ubuntu

Brendan Eich, one of the co-founders of the Mozilla project (the Firefox browser), is developing a new web browser promising to block intrusive ads and third-party trackers. Enter Brave, built for Linux, Windows, OS X, iOS and Android...

Brave's browser, still in early development, speeds up web pages by stripping out not just ads but also other page elements that track online behavior and web elements used to deliver ads. By removing advertisements and trackers, Brave's browser speeds up page loading considerably: it loads pages 2x to 4x faster than other smartphone browsers and up to 2x faster than other browsers for personal computers.

Blocking ads, however, could be a challenge for the Brave team, as advertising helps fund websites and bloggers' content. The browser's workaround is to eventually display new ads from its own pool of advertisers and connect Bitcoin as a method for users to pay website owners directly for the content they are viewing for free. It's a new way of doing things for sure, and could disrupt Google's ad network, which is Google's biggest source of revenue.

Brave has been built out of the open source Chromium browser, which is the foundation for Google's Chrome browser. An interesting choice, considering Brave is essentially trying to take market share away from Google. Have Eich and his Brave team averted the ad blocking war, or created a new style of war?

All of Brave's source packages have been made available on GitHub. We attempted to compile it on Ubuntu 16.04, but ran into problems. The GitHub page includes a readme file for installation, but it's really incomplete right now as of 1/22/16. We also ran into problems with the newest version of Node.js 5.x not being supported by our newest version of Ubuntu. However, installation of Node.js 5.x may work fine on Ubuntu 15.10 or older, thus getting the Brave browser installed on Ubuntu.

Keep your eyes open on their GitHub for updated installation information such as DEB files or PPAs for an easier way to install Brave.

by iheartubuntu ( at January 23, 2016 04:56 PM

January 20, 2016

Elizabeth Krumbach

December events and a pair of tapestries

In my last post I talked some about the early December tourist stuff that I did. I also partook in several events that gave me a nice, fun distraction when I was looking for some down time after work and book writing.

It’s no secret that I like good food, so when a spot opened up with some friends to check out Lazy Bear here in San Francisco, I was pretty eager to go. They had two seatings per night; everyone sat together at long tables and was served each course at the same time. We had to skip the pork selections, but I was happy with the substitutions they provided for us. They also gave us pencils and notebooks to take notes about the dishes. An overall excellent dinner.

On December 2nd MJ and I met up with my friend Amanda to see Randall Munroe of XKCD fame talk about his new book, Thing Explainer. In this book he talks about complicated concepts using only the 1,000 most common words. He shared stories about the process of writing the book and some things he had a lot of fun with. It was particularly amusing to hear how much he used the word “bag” when explaining the human body. We waited around pretty late for what ended up being some marathon signing, huge thanks to him for staying around so we could get our copy signed!

The very next day I scored a ticket to a local Geek Girl Dinner here in SOMA. I’d only been to one before, and going alone always means I’m a bit on edge nervousness-wise. But it was a Star Wars themed dinner and I do enjoy hearing stories from other women in tech, so I donned my R2-D2 hoodie and made my way over. Turns out, not many people were there to celebrate Star Wars, but they did have R2-D2 cupcakes and some cardboard cutouts of the new characters, so they pulled it off. The highlight of the night for me was a technical career panel of women who were able to talk about their varied entry points into tech. As someone with a non-traditional background myself, it’s always inspiring to hear from other women who made major career changes after being inspired by technology in some way or another.

Twilio tech careers panel

I mentioned in an earlier post that our friend Danita was in town recently. The evening she arrived I was neck deep in book work… and the tail end of the Bring Back MST3K Kickstarter campaign. They hosted five hours of a telethon-style variety show with magicians, musicians, comedians and various cameos by past and future MST3K actors, writers and robots. I’m pretty excited about this reboot, MST3K was an oddly important show when I was a youth. A game based on riffing is what first brought me on to an IRC network and introduced me to a whole host of people who made major impacts in my life. We all loved MST3K. Today I still enjoy Rifftrax (including the live show I went to last week). In spite of technical difficulties it was fun to prop up my tablet while working and watch the stream of their final fundraising push as they broke the record for biggest TV kickstarter campaign ever. Congratulations everyone, I am delighted to have donated to the campaign and look forward to the new episodes!

Hanukkah was also in December. Unfortunately MJ had to be out of town for the first few days, so we did a Google Hangout video call each evening. I set the tablet up on the counter as I lit the lights. I also took pictures each night so I could share the experience further.

At the end of the month MJ had a couple of his cousins in town to visit over the Christmas holiday. I didn’t take much time off, but I did tag along on select adventures, enjoying several great meals together and snapping a bunch of tourist photos of the Golden Gate Bridge (album here). We also made our way to Pier 39 one afternoon to visit sea lions and MJ and I made a detour to the Aquarium of the Bay while the girls did some shopping. The octopus and sea otters were particularly lively that evening (album here) and I snapped a couple videos: Giant Pacific Octopus and River otters going away for the night. Gotta love the winter clothes the human family was wearing in the otter video, we had a brisk December!

To conclude, I’ll leave you with a pair of Peruvian tapestries that we picked up in Cusco in August. Peru was one of my favorite adventures to date, and it’s nice that we were able to bring home some woven keepsakes from the Center for Traditional Textiles. We bundled them together in a carry on to bring them home and then brought them to our local framing shop and art gallery for framing. It took a few months, but I think it was worth it, they did a very nice job.

And now that I’ve taken a breather, it’s time to pack for SCALE14x, which we’re leaving for tomorrow morning. I also need to see if I can tie off some loose ends with this chapter I’m working on before we go.

by pleia2 at January 20, 2016 02:04 AM

January 17, 2016

Elizabeth Krumbach

Local tourist: A mansion, some wine and the 49ers

Some will say that there are tourists and there are travelers. The distinction tends to be that tourists visit the common places and take selfies, while travelers wander off the beaten path and take a more peaceful and thoughtful approach to enjoying their chosen destination.

I’m a happy tourist. Even when I’m at home.

Back in December our friend Danita was in town and I took advantage of this fact by going full on Bay Area tourist with her.

Our first adventure was going down to the Winchester Mystery House in San Jose. Built continuously for decades by the widow Sarah Winchester (of Winchester rifle fame), the house is a maze of uneven floors, staircases that go nowhere and doors that could drop you a story or two if you don’t watch when stepping through them. It’s said that the spiritualist movement heavily influenced Mrs. Winchester’s decisions, from moving to California after her husband’s death to the need to continuously be doing construction. She had a private seance room and after the house survived the 1906 earthquake that destroyed the tower that used to be a key feature in the house, she followed spirit-driven guidance. This caused her to stop work on the main, highly decorated front part of the house and only work on the back half, not even fixing up the sections damaged in the earthquake.

Door to nowhere
A “door to nowhere” in the Winchester House

There certainly are bits about this place that remind me of a tourist trap, including the massive gift shop and ghost stories. But it wasn’t shopping, spiritualism or ghosts that brought me here. As an armchair history and documentary geek, I’ve known about the Winchester House for years. When I moved to the Bay Area almost six years ago, it immediately went on my “to visit” list. The beautiful Victorian architecture, the oddity that was how she built it and her interest in the latest turn-of-the-20th-century innovations in the house are what interested me. She had three elevators in the house, of varying types as the technology was developed, providing a fascinating snapshot into approximately 20 years of early elevator innovation history. She was an early adopter of electricity, and various types of the latest time- and energy-saving gadgets and tools were installed to help her staff get their work done. Plus, in addition to having a car (with a chauffeur, obviously), the garage where it was kept had a car wash apparatus built in! We went on a behind-the-scenes tour to visit many of these things. The estate originally covered many acres, allowing for a large fruit orchard; fruit was actually processed on site, so we got to see the massive on-site evaporator used for preparing the fruit for distribution.

Fruit evaporator at Winchester House

When Mrs. Winchester died, her belongings were carefully distributed among her heirs, but no arrangements were made for the house. Instead, curious neighbors got together and made sure it was saved from demolition, effectively turning it into a tourist attraction just a few years after her passing. Still privately-owned, today it’s listed on the U.S. National Register of Historic Places.

Photos weren’t allowed inside the house, but I snapped away outside:

My next round of local touristing took us north, to Sonoma county for some wine tasting! We’re members of a winery up there, so we had our shipment to pick up too, but it’s also always fun bringing our friends to our favorite stops in wine country. We started at Imagery Winery, where we picked up our wine and enjoyed tastings of several of their sweeter wines, including their port. From there we picked up fresh sandwiches at a deli and grocery store before making our way to Benziger Family Winery, where MJ and I got engaged back in 2011. We ate lunch before the rain began and then went inside to do some more wine tastings. Thankfully, the weather cleared up before our 3PM tour, where we got to see the vineyards, their processing area and the inside of the wine caves. It was cold though, in the 40s with a stiff breeze throughout the day. Our adventure concluded with a stop at Jacuzzi Family Vineyards, where we tasted some olive oils, vinegar and mustard.

More photos from our Sonoma adventure here:

In slightly less tourism and more local experience, the last adventure I went on with Danita was a trip down the bay (via Amtrak) to the brand new NFL stadium for the 49ers on Sunday, December 20th. I’m not into football, but going to an NFL game was something I wanted to experience, particularly since this brand new stadium is the one the Super Bowl will be played in just a few weeks from now. A nice experience to have! The forecast called for rain, but we lucked out and it was merely cold (40s and 50s). I picked up a winter hat there at the stadium, and they appeared to be doing brisk business with us Californians who are not accustomed to the chilly weather. We got to our seats before all the pre-game activities began, of which there are many; I had no idea the kind of pomp that accompanies a football game! We had really nice seats right next to the field, so close that Danita was able to find us when watching game footage later:

The game itself? I am still no football fan. As someone who doesn’t watch much, I’ll admit that it was a bit hard for me to follow. Thankfully Danita is a big fan, so she was able to explain things to me when I had questions. And regardless of the sport, it is fun to be piled into a stadium with fans. Hot dogs and pretzels, cheering and excitement, all good for the human spirit. I also found the cheerleaders to be a lot of fun: for all the stopping and starting the football players did, the cheerleaders were active throughout the game. I also learned that the stadium is near the San Jose airport, so I may have taken a lot of pictures of planes flying over the stadium. The halftime break featured some 49ers from the Super Bowl teams of the 80s; Joe Montana was among them. Even as someone who doesn’t pay attention to football, I recognized him!

Airplane, cheerleaders and probably some football happening ;)

The Amtrak trip home was also an adventure, but not the good kind. Our train broke down and we had to be rescued by the next train, an hour behind us. There were high spirits among our fellow passengers though… and lots of spirits: the train bar ran out of champagne. It was raining by the time we got on the next train, so we had a bit of a late and soggy trip back. Still, all in all I’m glad I went.

More photos from the game here:

by pleia2 at January 17, 2016 08:01 PM

Color me Ubuntu at UbuCon Summit & SCALE14x

This week I’ll be flying down to Pasadena, California to attend the first UbuCon Summit, which is taking place at the Fourteenth Annual Southern California Linux Expo (SCALE14x). The UbuCon Summit was the brainchild of meetings we had over the summer, where concern was expressed over the lack of in-person collaboration and connection in the Ubuntu community since the last Ubuntu Developer Summit back in 2012. Instead of creating a whole new event, we looked at the community-run UbuCon events around the world and worked with the organizers of the one at SCALE14x to bring in funding and planning help from Canonical, plus travel assistance for project members and speakers, to provide two full days of conference and unconference content.

UbuCon Summit

As an attendee of and speaker at these SCALE UbuCons for several years, I’m proud to see the work that Richard Gaskin and Nathan Haines have put into this event over the years turn into something bigger and more broadly supported. The event will feature two tracks on Thursday, one for Users and one for Developers. Friday will begin with a panel and then lead into an unconference all afternoon with attendee-driven content (don’t worry if you’ve never done an unconference before, a full introduction on how to participate will be provided after the panel).

As we lead up to the UbuCon Summit (you can still register here, it’s free!) on Thursday and Friday, I keep learning that more people from the Ubuntu community will be attending, several of whom I haven’t seen since that last Developer Summit in 2012. Mark Shuttleworth will be coming in to give a keynote for the event, along with various other speakers. On Thursday at 3PM, I’ll be giving a talk on Building a Career with Ubuntu and FOSS in the User track, and on Friday I’ll be one of several panelists participating in an Ubuntu Leadership Panel at 10:30AM, following the morning SCALE keynote by Cory Doctorow. Check out the full UbuCon schedule here:

Over the past few months I’ve been able to hop on some of the weekly UbuCon Summit planning calls to provide feedback from folks preparing to participate and attend. During one of our calls, Abi Birrell of Canonical held up an origami werewolf and promised to send along instructions for making one. It turns out that back in October the design team had held a competition, complete with origami instructions, with an award for creating an origami werewolf. I joked that I didn’t listen to the rest of the call after seeing the origami werewolf; I had already gone into planning mode!

With instructions in hand, I brought them along to an Ubuntu Hour I hosted in San Francisco last week, figuring I’d use the Ubuntu Hour as a testing ground for UbuCon and SCALE14x. Good news: we had a lot of fun, it broke the ice with new attendees and we laughed a lot. Bad news: we’re not very good at origami. There were no completed animals at the end of the Ubuntu Hour!

Origami werewolf attempt
The xerus helps at werewolf origami

With 40 steps to create the werewolf, only an hour, and a crowd inexperienced with origami, it was probably not the best activity if we wanted finished animals at the end, but it did give me a set of expectations. Still, how much fun we had trying (and even failing) got me thinking: what other creative things could we do at Ubuntu events? Then I read an article about adult coloring books. That’s it! I shot an email off to Ronnie Tucker to see if he could come up with a coloring page. Most people in the Ubuntu community know Ronnie as the creator of Full Circle Magazine, the independent magazine for the Ubuntu Linux community, but he’s also a talented artist whose skills were a perfect match for this task. Lucky for me, yesterday was a stay-home snowy day in Glasgow, and within a couple of hours he had sent me a werewolf draft. By this morning a final version ready for printing was in my inbox.

Werewolf coloring page

You can download the Creative Commons licensed original here to print your own. I have printed off several (and ordered some packets of crayons) to bring along to the UbuCon Summit and the Ubuntu booth in the SCALE14x expo hall. I’m also bringing along a bunch of origami paper, so people can try their hand at the werewolf… and the unicorn too.

Finally, lest we forget that my actual paid job is as a systems administrator on the OpenStack Infrastructure team, I’m also doing a talk at DevOpsDayLA on Open Source tools for distributed systems administration. If you think I geek out about Ubuntu and coloring werewolves, you should see how I act when I’m talking about the awesome systems work I get to do at my day job.

by pleia2 at January 17, 2016 06:32 PM

January 14, 2016

Akkana Peck

Snow hiking

[Akk on snowshoes crossing the Jemez East Fork]

It's been snowing quite a bit! Radical, and fun, for a California ex-pat. But it doesn't slow down the weekly hiking group I'm in. When the weather turns white, the group switches to cross-country skiing and snowshoeing.

A few weeks ago, I tried cross-country skiing for the first time. (I've downhill skied a handful of times, so I know how, more or less, but never got very good at it. Ski areas are way too far away and way too expensive in California.) It was fun, but I have a chronic rotator cuff problem, probably left over from an old motorcycle injury, and found my shoulder didn't deal well with skiing. Well, the skiing was probably fine. It was probably more the falling and trying to get back up again that it didn't like.

So for the past two weeks I've tried snowshoes instead. That went just fine. It doesn't take much learning: it's just like hiking, except it's a little more work remembering not to step on your own big feet. "Bozo goes hiking!" Dave called it, but it isn't nearly as Bozo-esque as I thought it would be.

Last week we snowshoed from a campground out to the edge of Frijoles Canyon, in a snowstorm most of the way, and ice fog -- sounds harsh when described like that, but it was lovely, and we were plenty warm when we were moving. This week, we followed the prettiest trail in the area, the East Fork of the Jemez River. In summer, it's a vibrantly green meadow with the sparkling creek snaking through it. In winter, it turns into a green and sparkling white forest. Someone took a photo of me snowshoeing across one of the many log bridges spanning the East Fork. You can't see any hint of the river itself -- it's buried in snow.

But if you hike in far enough, there's a warm spring: we're on the edge of the Valles Caldera, an old supervolcano that still has plenty of low-level geothermal activity left. The river is warm enough here that it's still running even in midwinter ... and there was a dipper there. American dippers are little birds that dive into creeks and fly under the water in search of food. They're in constant motion, diving, re-emerging, bathing, shaking off, and this dipper went about its business fifteen feet from where we were standing watching it. Someone had told me that he saw two dippers at this spot yesterday, but we were happy to get such a good look at even one.

We had lunch in a sunny spot downstream from the dipper, then headed back to the trailhead. A lovely way to spend a winter day.

January 14, 2016 02:01 AM

January 11, 2016

Jono Bacon

SCALE14x Plans

In a week and a half I am flying out to Pasadena to the SCALE14x conference. I will be there from the evening of Wed 20th Jan 2016 to Sun 24th Jan 2016.

SCALE is a tremendous conference, as I have mentioned many times before. This is a busy year for me, so I wanted to share what I will be up to:

  • Thurs 21st Jan 2016 at 2pm – in Ballroom A – Ubuntu Redux – as part of the UbuCon Summit I will be delivering a presentation about the key patterns that have led Ubuntu to where it is today and my unvarnished perspective on where Ubuntu is going and what success looks like.
  • Thurs 21st Jan 2016 at 7pm – in Ballroom DE – FLOSS Reflections – I am delighted to be a part of a session that looks into the past, present, and future of Open Source. The past will be covered by the venerable Jon ‘Maddog’ Hall, the present by myself, and the future by Keila Banks.
  • Fri 22nd Jan 2016 at 10.30am – in Ballroom DE – Ubuntu Panel – I will be hosting a panel where Mark Shuttleworth (Ubuntu Founder), David Planella (Ubuntu Community Manager), Olli Ries (Engineering Manager), and Community Council and community members will be put under the spotlight to illustrate where the future of Ubuntu is going. This is a wonderful opportunity to come along and get your questions answered!
  • Fri 22nd Jan 2016 at 8pm – in Ballroom DE – Bad Voltage: Live – join us for a fun, informative, and irreverent live Bad Voltage performance. There will be free beer, lots of prizes (including a $2200 Pogo Linux workstation, Zareason Strata laptop, Amazon Fire Stick, Mycroft, Raspberry Pi 2 kit, plenty of swag and more), and plenty of audience participation and surprises. Be sure to join us!
  • Sat 23rd Jan 2016 at 4.30pm – in Ballroom H – Building Awesome Communities On GitHub – this will be my first presentation in my new role as Director of Community at GitHub. In it I will be delving into how you can build great communities with GitHub, and I will talk about some of the work I will be focused on in my new role and how this will empower communities around the world.

I am looking forward to seeing you all there, and if you would like to have a meeting while I am there, please drop me an email.

by Jono Bacon at January 11, 2016 05:48 PM