Planet Ubuntu California

November 27, 2015

Akkana Peck

Getting around make clean or make distclean aclocal failures

Keeping up with source trees for open source projects, it often happens that you pull the latest source, type make, and get an error like this (edited for brevity):

$ make
cd . && /bin/sh ./missing --run aclocal-1.14
missing: line 52: aclocal-1.14: command not found
WARNING: 'aclocal-1.14' is missing on your system. You should only need it if you modified 'acinclude.m4' or 'configure.ac'. You might want to install the 'Automake' and 'Perl' packages. Grab them from any GNU archive site.

What's happening is that make is set up to run ./autogen.sh (similar to running ./configure, except that it does some other stuff tailored to people who build from the most current source tree) automatically if anything has changed in the tree. But if the version of aclocal has changed since the last time you ran autogen.sh or configure, then running configure with the same arguments won't work.

Often, running a make distclean, to clean out all local configuration in your tree and start from scratch, will fix the problem. A simpler make clean might even be enough. But when you try it, you get the same aclocal error.

Whoops! make clean runs make, which triggers the rule that configure has to run before make, which fails.

It would be nice if the make rules were smart enough to notice this and not require configure or autogen if the make target is something simple like clean or distclean. Alas, in most projects, they aren't.

But it turns out that even if you can't run autogen.sh with your usual arguments -- e.g. ./autogen.sh --prefix=/usr/local/gimp-git -- running ./autogen.sh by itself with no extra arguments will often fix the problem.

This happens to me often enough with the GIMP source tree that I made a shell alias for it:

alias distclean="./autogen.sh && ./configure && make clean"

Saving your configure arguments

Of course, this wipes out any arguments you've previously passed to autogen and configure. So assuming this succeeds, your very next action should be to run autogen again with the arguments you actually want to use, e.g.:

./autogen.sh --prefix=/usr/local/gimp-git

Before you ran the distclean, you could get those arguments by looking at the first few lines of config.log. But after you've run distclean, config.log is gone -- what if you forgot to save the arguments first? Or what if you just forget that you need to re-run autogen.sh again after your distclean?

To guard against that, I wrote a somewhat more complicated shell function to use instead of the simple alias I listed above.

The first trick is to get the arguments you previously passed to configure. You can parse them out of config.log:

$ egrep '^  \$ ./configure' config.log
  $ ./configure --prefix=/usr/local/gimp-git --enable-foo --disable-bar

Adding a bit of sed to strip off the beginning of the command, you could save the previously used arguments like this:

    args=$(egrep '^  \$ ./configure' config.log | sed 's_^  \$ ./configure __')

(There's a better place for getting those arguments, config.status -- but parsing them from there is a bit more complicated, so I'll follow up with a separate article on that, chock-full of zsh goodness.)
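As a quick preview, recent autoconf versions can hand you those arguments straight from config.status; this is a minimal sketch, assuming your configure script was generated with autoconf 2.65 or newer, so check what your tree actually produces:

# If configure was generated with autoconf 2.65 or later, config.status
# can print the saved configuration directly:
./config.status --config

# The same information lives in the ac_cs_config variable near the top
# of the script, which a rough grep can pull out:
grep '^ac_cs_config=' config.status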

So here's the distclean shell function, written for zsh:

distclean() {
    setopt localoptions errreturn

    args=$(egrep '^  \$ ./configure' config.log | sed 's_^  \$ ./configure __')
    echo "Saved args:" $args

    ./autogen.sh
    ./configure
    make clean

    echo "==========================="
    echo "Running ./autogen.sh $args"
    # Brief pause so there's a chance to ctrl-C before autogen reruns.
    sleep 3
    ./autogen.sh $args
}

The setopt localoptions errreturn at the beginning is a zsh-ism that tells the shell to exit if there's an error. You don't want to forge ahead and run configure and make clean if your autogen.sh didn't work right. errreturn does much the same thing as the && between the commands in the simpler shell alias above, but with cleaner syntax.

If you're using bash, you could string all the commands together on one line instead, with && between them, something like this: ./autogen.sh && ./configure && make clean && ./autogen.sh $args. Or perhaps some bash user will tell me of a better way.
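For the curious, a rough bash function version of the same idea might look like this; it's only a sketch, under the same assumptions as the zsh version above (an ./autogen.sh script and a standard config.log):

distclean() {
    # Save the previously used configure arguments from config.log.
    local args
    args=$(egrep '^  \$ ./configure' config.log | sed 's_^  \$ ./configure __') || return

    echo "Saved args:" $args

    # The && chain stands in for zsh's errreturn: stop at the first failure.
    ./autogen.sh && ./configure && make clean && ./autogen.sh $args
}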

November 27, 2015 08:33 PM

November 25, 2015

Jono Bacon

Supporting Software Freedom Conservancy

There are a number of important organizations in the Open Source and Free Software world that do tremendously valuable work. This includes groups such as the Linux Foundation, Free Software Foundation, Electronic Frontier Foundation, Apache Software Foundation, and others.

One such group is the Software Freedom Conservancy. To get a sense of what they do, their own description explains it best:

Software Freedom Conservancy is a not-for-profit organization that helps promote, improve, develop, and defend Free, Libre, and Open Source Software (FLOSS) projects. Conservancy provides a non-profit home and infrastructure for FLOSS projects. This allows FLOSS developers to focus on what they do best — writing and improving FLOSS for the general public — while Conservancy takes care of the projects’ needs that do not relate directly to software development and documentation.

Conservancy performs some important work. Examples include bringing projects under their protection, providing input on and driving policy that relates to open software/standards, funding developers to do work, helping refine IP policies, protecting GPL compliance, and more.

This work comes at a cost. The team needs to hire staff, cover travel/expenses, and more. I support their work by contributing, and I would like to encourage you to do so too. It isn’t a lot of money, but it goes a long way.

They just kicked off a fundraiser and I would like to recommend you all take a look. They provide an important public service, they operate in a financially responsible way, and their work is well intended and executed.

by Jono Bacon at November 25, 2015 03:09 AM

November 23, 2015

Elizabeth Krumbach

LISA15 wrap-up

From November 11th through 13th I attended and spoke at Usenix’s LISA15 (Large Installation Systems Administration) conference. I participated in a women in tech panel back in 2012, so I’d been to the conference once before, but this was the first time I submitted a talk. A huge thanks goes to Tom Limoncelli for reaching out to me to encourage me to submit, and I was amused to see my response to his encouragement ended up being the introduction to a blog post earlier this year. LISA has changed!

The event program outlines two main sections of LISA, tutorials and conference. I flew in on Tuesday in order to attend the three conference days from Wednesday through Friday. I picked up my badge Tuesday night and was all ready for the conference come Wednesday morning.

Wednesday began with a keynote from Mikey Dickerson of the U.S. Digital Service. It was one of the best talks I’ve seen all year, and I go to a lot of conferences. Launched just over a year ago (August 2014), the USDS is a part of the US executive office tasked with advising federal agencies about technology. His talk centered around the work he did after the launch of HealthCare.gov. He was working at Google at the time and was brought in as one of the experts to help rescue the website after the catastrophic failed launch. Long hours, a critical 24-hour news cycle that made sure they stayed under pressure to fix it, and work to convince everyone to use best practices refined by the industry made for an amusing and familiar tale. The reasons for the failure were painfully easy to predict: no monitoring, no incident response plan or post-mortems, no formal testing and release process. These things are fundamental to software development in the industry today, and for whatever reason (time? money?) were left off this critical launch. The happy ending was that the site now works (though he wouldn’t go as far as saying it was “completely fixed”) and their success could be measured by the lack of news about the website during the 2014-2015 enrollment cycle. He also discussed some of the other work the USDS was up to, including putting together Requirements for Federal Websites and Digital Services, improvements to VA disability processing and the creation of the College Scorecard.

A talk by Mikey Dickerson of the USDS opens up LISA15

I then went to see Supercomputing for Healthcare: A Collaborative Approach to Accelerating Scientific Discovery (slides linked on that page) presented by Patricia Kovatch of the Icahn School of Medicine at Mount Sinai. She started off by talking about the vast amounts of data collected by facilities like Mount Sinai and how important it is to have that data accessible and mineable by researchers who are looking for cures to health problems. Then she dove into collaboration, the keystone of her talk, bringing up several important social points. Even as a technologist, you should understand the goals of everyone you work with, from the mission statement of your organization to yourself, your management, your clients and the clients (or patients!) served by the organization. Communication is key, and she recommended making non-tech friendly visualizations (that track metrics which are important – and re-evaluate those often), monthly reports and open meetings where interested parties can participate and build trust in your organization. She also covered some things that can be done to influence user behavior, like creating a “free” compute queue that’s lower priority but that a department doesn’t need to pay for, to encourage its use rather than taking over the high priority queue for everything (because everyone’s job is high priority when it’s all the same to them…). In case it’s not obvious, there was a lot of information in this talk squeezed into her time slot! I can’t imagine any team realistically going from having a poorly communicating department to adopting all of these suggestions, but she does present a fantastic array of helpful ideas that can be implemented slowly over time, each of which would help an organization. The slides are definitely worth a browse.

Next up was my OpenStack colleague Devananda van der Veen, who was talking about Ironic: A Modern Approach to Hardware Provisioning. Largely divorcing Ironic from OpenStack, he spent this talk covering how to use it as a stand-alone tool for hardware provisioning. He did begin by talking about how tools like OpenStack have handled VMs, which are themselves abstractions of computers, and how Ironic takes that one step further: instead of a VM, the abstraction is the hardware itself, putting bare metal and VMs on a similar footing in OpenStack’s tooling. He spent a fair amount of time talking about how much effort hardware manufacturers have put into writing drivers, and how quickly adoption in production has taken off, with companies like Rackspace and Yahoo! being very public about their usage.

The hallway track was strong at this conference! The next talk I attended was in the afternoon, The Latest from Kubernetes by Tim Hockin. As an open source project, Kubernetes feels like it has moved very quickly since I first heard about it, so this was a really valuable talk that skipped over introductory details and went straight to talking about new features and improvements in version 1.1. There’s iptables kube-proxy (yay kernel!), support for a layer 7 load balancer (Ingress), namespaces, resource isolation, quota and limits, network plugins, persistent volumes, secrets handling and an alpha release of daemon sets. And his talk ran long, so he wasn’t able to get to everything! Slides, all 85 of them, are linked on the talk page and are valuable even without the accompanying talk.

My day wrapped up with My First Year at Chef: Measuring All the Things by Nicole Forsgren, the Director of Organizational Performance & Analytics at Chef. Nicole presented a situation where she joined a company that wanted to do better tracking of metrics within a devops organization and outlined how she made this happen at Chef. The first step was just talking about metrics: do you have them? What should you measure? She encouraged making sure both dev and ops were included in the metrics discussions so you’re always on the same page and talking about the same things. In starting these talks, she also suggested the free ~20 page book Data Driven: Creating a Data Culture for framing the discussions. She then walked through creating a single page scorecard for the organization about key things they want to see happen or improve: pick a few key things and then work out how to set targets and measure progress and success. Benchmarks were also cited as important, so you can see how you’re doing compared to where you began and more generally in the industry. Advice was also given about what kinds of measurement numbers to look at: internal, external, cultural and whether subjective or objective makes the most sense for each metric, and how to go about subjective measuring.

Nicole Forsgren on “Measuring All the Things”

I had dinner with my local friend Mackenzie Morgan. I hadn’t seen her since my wedding 2.5 years ago, so it was fun to finally spend time catching up in person, and offered a stress-free conclusion to my first conference day.

The high-quality lineup of keynote speakers continued on Thursday morning with Christopher Soghoian of the ACLU, who came to talk about Sysadmins and Their Role in Cyberwar: Why Several Governments Want to Spy on and Hack You, Even If You Have Nothing to Hide. He led with the fact that many systems administrators are smart enough to know how to secure themselves, but many don’t take precautions at home: we use poor passwords, don’t encrypt our hard drives, etc. I’m proud to say that I’m paranoid enough that I actually am pretty cautious personally, but I think that stems from being a hobbyist first; it’s always been natural for my personal stuff to be just as secure as what I happen to be paid to work on. With that premise, he dove into the government spying that was made clear by Snowden’s documents and high profile cases of systems administrators and NOC workers being targeted personally to gain control of the systems they manage, either through technical means (say, sloppy ssh key handling), social engineering or stalking and blackmail. Known targets have been people working for the government and sysadmins at energy and antivirus companies, but he noted any of us could be a target if the data we’re responsible for administering is valuable in any way. I can’t say any of the information in the talk was new to me, but it was presented in a way that was entertaining and makes me realize that I probably should pay more attention in my day to day work. Bottom line: Even if you’re just an innocent, self-proclaimed boring geek who goes home and watches SciFi after work, you need to be vigilant. See, I have a reason to be paranoid!

I picked up talks in the afternoon by attending one on fwunit: Unit Testing and Monitoring Your Network Flows with Fwunit by Dustin J. Mitchell. The tool was specifically designed for workflows at Mozilla so only a limited set of routers and switches are supported right now (Juniper SRX, AWS, patches welcome for others), but the goal was to be able to do flow monitoring on their network in order to have a good view into where and how traffic moved through their network. They also wanted to be able to do this without inflexible proprietary tooling and in a way that could be scripted into their testing infrastructure. Did a change they make just cut off a bunch of traffic that is needed by one of their teams? Alert and revert! Future work includes improvements to tracking ACLs, optimized statistic gathering and exploring options to test prior to production so reverts aren’t needed.

Keeping with the networking thread, Dinesh G Dutt of Cumulus Networks spoke next on The Consilience Of Networking and Computing. The premise of his talk was that the networking world is stuck in a sea of proprietary tooling that isn’t trivial to use, and the industry there is losing out on a lot of the promises of devops since it’s difficult to automate everything in an effective manner. He calls for a more infrastructure-as-code-driven plan forward for networking and cited places where progress is being made, like in the Open Compute Project. His talk reminded me of the OpenConfig working group that an acquaintance has been involved with, so it does sound like there is some consensus among network operators about where they want to see the future go.

The final talk I went to on Thursday was Vulnerability Scanning’s Not Good Enough: Enforcing Security and Compliance at Velocity Using Infrastructure As Code by Julian Dunn. He was preaching to the choir a bit as he introduced how useless standard vulnerability scanning is to us sysadmins (“I scanned for your version of Apache, and that version number is vulnerable” “…do you not understand how distro patches work?”) and expressed how challenging they are to keep up with. His proposal was two-fold. First, that companies get more in the habit of prioritizing security in general rather than passing arbitrary compliance tests. Second, to consolidate the tooling used by everyone and integrate it into the development and deployment pipeline to make sure security standards are adhered to in the long run (not just when the folks testing for compliance are in the building). To this end, he promoted use of the Chef InSpec project.

Thursday evening was the LISA social, but I skipped that in favor of a small dinner I was invited to at a local Ethiopian restaurant. Fun fact: I’ve only ever eaten Ethiopian food when I’m traveling, and the first time I had it was in 2012 when I was in San Diego, following my first LISA conference!

The final day of the conference began with a talk by Jez Humble on Lean Configuration Management. He spent some time reflecting on modern methodologies for product development (agile, change management, scrum), and discussed how today with the rapid pace of releases (and sometimes continuous delivery) there is an increasing need to make sure quality is built in at the source and bugs are addressed quickly. He then went into the list of very useful indicators for a successful devops team:

  • Use of revision control
  • Failure alerts from properly configured logging and monitoring
  • Developers who merge code into trunk (not feature branches! small changes!) daily
  • Peer review driven change approval (not non-peer change review boards)
  • Culture that exhibits the Generative organizational structure as defined by R Westrum in his A typology of organisational cultures

He also talked a fair amount about team structures and the risks when not only dev and ops are segregated, but also product development and others in the organization. He proposed bringing them closer together, even putting an ops person on a dev team and making sure business interests and goals in the product are also clearly communicated to everyone involved.

It was a pleasure to have my talk following this one, as our team strives to tick off most of the boxes when it comes to having a successful team (though we don’t really do active, alerting monitoring). I spoke on Tools for Distributed, Open Source Systems Administration (slides linked on that page), where I walked through the key strategies and open source tools we’re using as a team that’s distributed geographically and across time zones. I talked about our Continuous Integration system (the heart of our work together), various IRC channels we use for different purposes (day to day sync-up, meetings, sprints, incidents), use of etherpads for collaborative editing and work, and how we have started to address hand-offs between time zones (mostly our answer is “hire more people in that time zone so they have someone to work with”). After my talk I had some great chats with folks either doing similar work, or trying to nudge their organization into being productive across offices. The talk was also well attended, so huge thanks to everyone who came out to it.

At lunch time I had a quick meal with Ben Cotton before sneaking off to the nearby zoo to see if I could get a glimpse of the pandas. I saw a sleeping panda. I was back in time for the first talk after lunch, Thomas A. Limoncelli on Transactional System Administration Is Killing Us and Must be Stopped. Many systems administrators live in a world of tickets. Tickets come in, they are processed, and we’re always stressed because we have too many tickets and are always running around to get them done with poor tooling for priority (everything is important!). It also leads to a very reaction-driven workflow instead of fixing fundamental long term issues, and it makes long term planning very hard. It also creates a bad power dynamic: sysadmins begin to see users as a nuisance, and users are always waiting on those sysadmins in order to get their work done. Plus, users hate opening tickets and sysadmins hate reading tickets opened by users. Perhaps worst of all, we created this problem by insisting upon usage of ticketing systems in the 90s. Whoops. In order to solve this, his recommendations are very much in line with what I’d been hearing at the conference all week: embed ops with dev, build self-service tooling so repeatable things are no longer manually done by sysadmins (automate, automate, automate!), and have developers write their own monitors for their software (ops don’t know how it works, the devs do, so they can write better monitoring than just pinging a server!). He also promoted the usage of Kanban and building your team schedule so that there is a rotating role for emergencies and others are able to focus on long term project work.

The final talk of the main conference I attended was The Care and Feeding of a Community by Jessica Hilt. I’ve been working with communities for a long time, even holding some major leadership positions, but I really envy the experience that Jessica brought to her talk, particularly since she’s considerably more outgoing and willing to confront conflict than I am. She began with an overview of different types of communities and how their goals matter so you can collect the right group of people for the community you’re building. She stressed that goals like cooperative learning (educational, tech communities, beyond) is a valuable use of a group’s time and helps build expertise and encourages retention when members are getting value. Continuing on a similar theme, networking and socialization are important, so that people have a bond with each other and provide a positive feedback loop that keeps the community healthy. During a particularly amusing part of her talk, she also mentioned that you want to include people who complain, since it’s often that the complainers are passionate about the group topic, but are just grumpy and they can be a valuable asset. Once you have ideas and potential members identified, you can work on organizing. What are the best tools to serve this community? What rules need to be in place to make sure people are treated fairly and with respect? She concluded by talking about long term sustainability, which includes re-evaluating the purpose of the group from time to time, making sure it’s still attracting new members, confirming that the tooling is still effective and that the rules in place are being enforced.

During the break before the closing talks of the conference I had the opportunity to meet the current Fedora Project Lead, Matthew Miller. Incidentally, it was the same day that my tenure on the Ubuntu Community Council officially expired, so we were able to have an interesting chat about leadership and community dynamics in our respective Linux distributions. We have more in common than we tend to believe.

The conference concluded with a conference report from the LISA Build team that handled the network infrastructure for the conference. They presented all kinds of stats about traffic and devices and stories of their adventures throughout the conference. I was particularly amused when they talked about some of the devices connecting, including an iPod. I couldn’t have been the only one in the audience brainstorming what wireless devices I could bring next year to spark amusement in their final report. They then handed it off to a tech-leaning comedian who gave us a very unusual, meandering talk that kept the room laughing.

This is my last conference of the year and likely my last talk, unless someone local ropes me into something else. It was a wonderful note to land on in spite of being tired from so much travel this past month. Huge thanks to everyone who took time to say hello and invite me out, it went a long way to making me feel welcome.

More photos from the conference here:

by pleia2 at November 23, 2015 04:35 AM

November 20, 2015

Elizabeth Krumbach

Ubuntu Community Appreciation Day

Often times, Ubuntu Community Appreciation Day sneaks up on me and I don’t have an opportunity to do a full blog post. This time I was able to spend several days reflecting on who has had an impact on my experience this year, and while the list is longer than I can include here (thanks everyone), there are some key people who I do need to thank.

José Antonio Rey

If you’ve been involved with Ubuntu for any length of time, you know José. He’s done extraordinary work as a volunteer across various areas in Ubuntu, but this year I got to know him just a little bit better. He and his father picked me up from the airport in Lima, Peru when I visited his home country for UbuCon Latinoamérica back in August. In the midst of preparing for a conference, he also played tour guide my first day as we traveled the city to pick up shirts for the conference and then took time to have lunch at one of the best ceviche places in town. I felt incredibly welcome as he introduced me to staff and volunteers and checked on me throughout the conference to make sure I had what I needed. Excellent conference with incredible support, thank you José!

Naudy Urquiola

I met Naudy at UbuCon Latinoamérica, and I’m so glad I did. He made the trip from Venezuela to join us all, and I quickly learned how passionate and dedicated to Ubuntu he was. When he introduced himself he handed me a Venezuelan flag, which hung off my backpack for the rest of the conference. Throughout the event he took photos and has been sharing them since, along with other great Ubuntu tidbits that he’s excited about, a constant reminder of the great time we all had. Thanks for being such an inspirational volunteer, Naudy!

Naudy, me, Jose

Richard Gaskin

For the past several years Richard has led UbuCon at the Southern California Linux Expo, rounding up a great list of speakers for each event and making sure everything goes smoothly. This year I’m proud to say it’s turning into an even bigger event, as the UbuCon Summit. He’s also got a great Google+ feed. But for this post, I want to call out that he reminds me why we’re all here. It can become easy to get burnt out as a volunteer on open source, feel uninspired and tired. During my last one-on-one call with Richard, his enthusiasm around Ubuntu for enabling us to accomplish great things brought back my energy. Thanks to Ubuntu I’m able to work with Partimus and Computer Reach to bring computers to people at home and around the world. Passion for bringing technology to people who lack access is one of the reasons I wake up in the morning. Thanks to Richard for reminding me of this.

Laura Czajkowski, Michael Hall, David Planella and Jono Bacon

What happens when you lock 5 community managers in a convention center for three days to discuss hard problems in our community? We laugh, we cry, we come up with solid plans moving forward! I wrote about the outcome of our discussions from the Community Leadership Summit in July here, but beyond the raw data dump provided there, I was able to connect on a very personal level with each of them. Whether it was over a conference table or over a beer, we were able to be honest with each other to discuss hard problems and still come out friends. No blame, no accusations, just listening, talking and more listening. Thank you all, it’s an honor to work with you.

Laura, David, Michael and me (Jono took the picture!)

Paul White

For the past several years, Paul White has been my right hand man with the Ubuntu Weekly Newsletter. If you enjoy reading the newsletter, you should thank him as well. As I’ve traveled a lot this year and worked on my next book, he’s been keeping the newsletter going, from writing summaries to collecting links, with me just swinging in to review, make sure all the ducks are lined up and that the release goes out on time. It’s often thankless work with only a small team (obligatory reminder that we always need more help, see here and/or email to learn more). Thank you Paul for your work this year.

Matthew Miller

Matthew Miller is the Fedora Project Lead; we were introduced last week at LISA15 by Ben Cotton in an amusing Twitter exchange. He may seem like an interesting choice for an Ubuntu appreciation blog post, but this is your annual reminder that as members of Linux distribution communities, we’re all in this together. In the 20 or so minutes we spoke during a break between sessions, we were able to dive right into discussing leadership and community, understanding each other’s jokes and pain points. I appreciate him today because his ability to listen and his insights have enriched my experience in Ubuntu by bringing in a valuable outside perspective and making me feel like we’re not in this alone. Thanks mattdm!

Matt holds my very X/Ubuntu laptop, I hold a Fedora sticker


If you’re reading this, you probably care about Ubuntu. Thank you for caring. I’d like to send you a holiday card!

by pleia2 at November 20, 2015 05:15 PM

November 18, 2015

Elizabeth Krumbach

Holiday cards 2015!

Every year I send out a big batch of winter-themed holiday cards to friends and acquaintances online.

Holiday cable car

Reading this? That means you! Even if you’re outside the United States!

Send me an email at with your postal mailing address. Please put “Holiday Card” in the subject so I can filter it appropriately. Please do this even if I’ve sent you a card in the past, I won’t be reusing the list from last year.

If you’re an Ubuntu fan, let me know and I’ll send along some stickers too :)

Typical disclaimer: My husband is Jewish and we celebrate Hanukkah, but the cards are non-religious, with some variation of “Happy holidays” or “Season’s greetings” on them.

by pleia2 at November 18, 2015 07:04 PM

November 16, 2015

Eric Hammond

Using AWS CodeCommit With Git Repositories In Multiple AWS Accounts

set up each local CodeCommit repository clone to use a specific cross-account IAM role with git clone --config and aws codecommit credential-helper

When I started testing AWS CodeCommit, I used the Git ssh protocol with uploaded ssh keys to provide access, because this is the Git access mode I’m most familiar with. However, using ssh keys requires each person to have an IAM user in the same AWS account as the CodeCommit Git repository.

In my personal and work AWS usage, each individual has a single IAM user in a master AWS account, and those users are granted permission to assume cross-account IAM roles to perform operations in other AWS accounts. We cannot use the ssh method to access Git repositories in other AWS accounts, as there are no IAM users in those accounts.

AWS CodeCommit comes to our rescue with an alternative https access method that supports Git Smart HTTP, and the aws-cli offers a credential-helper feature that integrates with the git client to authenticate Git requests to the CodeCommit service.

In my tests, this works perfectly with cross-account IAM roles. After the initial git clone command, there is no difference in how git is used compared to the ssh access method.

Most of the aws codecommit credential-helper examples I’ve seen suggest you set up a git config --global setting before cloning a CodeCommit repository. A couple even show how to restrict the config to AWS CodeCommit repositories only so as to not interfere with GitHub and other repositories. (See “Resources” below)
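For reference, that global setup looks roughly like the following; the region-specific endpoint in the second variant is only an example, so adjust it for your own region:

# Apply the credential helper to every repository (the global approach):
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Or scope it to a single CodeCommit endpoint so GitHub and other remotes
# are left alone (shown here for the us-east-1 endpoint):
git config --global credential.https://git-codecommit.us-east-1.amazonaws.com.helper '!aws codecommit credential-helper $@'
git config --global credential.https://git-codecommit.us-east-1.amazonaws.com.useHttpPath true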

I prefer to have the configuration associated with the specific Git repositories that need it, not in the global setting file. This is possible by passing in a couple --config parameters to the git clone command.

Create/Get CodeCommit Repository

The first step in this demo is to create a CodeCommit repository, or to query the https endpoint of an existing CodeCommit repo you might already have.

Set up parameters:

region=...            # Your AWS region, e.g., us-east-1
repository_name=...   # Your repository name
repository_description=$repository_name   # Or more descriptive

If you don’t already have a CodeCommit repository, you can create one using a command like:

repository_endpoint=$(aws codecommit create-repository \
  --region "$region" \
  --repository-name "$repository_name" \
  --repository-description "$repository_description" \
  --output text \
  --query 'repositoryMetadata.cloneUrlHttp')
echo repository_endpoint=$repository_endpoint

If you already have a CodeCommit repository set up, you can query the https endpoint using a command like:

repository_endpoint=$(aws codecommit get-repository \
  --region "$region" \
  --repository-name "$repository_name" \
  --output text \
  --query 'repositoryMetadata.cloneUrlHttp')
echo repository_endpoint=$repository_endpoint

Now, let’s clone the repository locally, using our IAM credentials. With this method, there’s no need to upload ssh keys or modify the local ssh config file.

git clone

The git command line client allows us to specify specific config options to use for a clone operation and will add those config settings to the repository for future git commands to use.

Each repository can have a specific aws-cli profile that you want to use when interacting with the remote CodeCommit repository through the local Git clone. The profile can specify a cross-account IAM role to assume, as I mentioned at the beginning of this article. Or, it could be a profile that specifies AWS credentials for an IAM user in a different account. Or, it could simply be "default" for the main profile in your aws-cli configuration file.
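As an illustration, a cross-account profile of the sort described above could be created with aws configure set; the profile name, account ID, and role name below are made up for the example:

# Create a hypothetical profile that assumes a cross-account IAM role,
# using the default profile's credentials as the source.
aws configure set profile.codecommit-other-account.role_arn \
  arn:aws:iam::123456789012:role/CodeCommitAccess
aws configure set profile.codecommit-other-account.source_profile default
aws configure set profile.codecommit-other-account.region us-east-1

Passing that profile name to the clone command below would then route every CodeCommit request through the assumed role.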

Here’s the command to clone a Git repository from CodeCommit, and for authorized access, associate it with a specific aws-cli profile:

profile=$AWS_DEFAULT_PROFILE   # Or your aws-cli profile name

git clone \
  --config 'credential.helper=!aws codecommit --profile '$profile' --region '$region' credential-helper $@' \
  --config 'credential.UseHttpPath=true' \
  $repository_endpoint
cd $repository_name

At this point, you can interact with the local repository, pull, push, and do all the normal Git operations. When git talks to CodeCommit, it will use aws-cli to authenticate each request transparently, using the profile you specified in the clone command above.
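For example, a first commit and push through the new clone might look like this; the credential helper settings recorded by git clone handle authentication, and your default branch name may differ:

# Confirm the per-repository settings recorded by the clone:
git config --local --get credential.helper
git config --local --get credential.useHttpPath

# Normal Git workflow; each request to CodeCommit is authenticated
# transparently through the aws-cli profile configured above.
echo "# $repository_name" > README.md
git add README.md
git commit -m 'Initial commit'
git push origin master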

Clean up

If you created a CodeCommit repository to follow the example in this article, and you no longer need it, you can wipe it out of existence with this command:

aws codecommit delete-repository \
  --region "$region" \
  --repository-name $repository_name

You might also want to delete the local Git repository.

With the https access method in CodeCommit, there is no need to upload ssh keys to IAM or to delete them later, as all access control is performed seamlessly through standard AWS authentication and authorization controls.


Resources

Here are some other articles that talk about CodeCommit and the aws-cli credential-helper.

In Setup Steps for HTTPS Connections to AWS CodeCommit Repositories on Linux, AWS explains how to set up the aws-cli credential-helper globally so that it applies to all repositories you clone locally. This is the simplistic setting that I started with before learning how to apply the config rules on a per-repository basis.

In Using CodeCommit and GitHub Credential Helpers, James Wing shows how Amazon’s instructions cause problems if you have some CodeCommit repos and some GitHub repos locally and how to fix them (globally). He also solves problems with Git credential caches for Windows and Mac users.

In CodeCommit with EC2 Role Credentials, James Wing shows how to set up the credential-helper system wide in cloud-init, and uses CodeCommit with an IAM EC2 instance role.

Original article and comments:

November 16, 2015 06:06 PM

Jono Bacon

Atom: My New Favorite Code Editor

As a hobbyist Python programmer, over the years I have tried a variety of different editors. Back in the day I used to use Eclipse with the PyDev plugin. I then moved on to use GEdit with a few extensions switched on. After that I moved to Geany. I have to admit, much to the shock of some of you, I never really stuck with Sublime, despite a few attempts.

As some of you will know, this coming week I start at GitHub as Director of Community. Like many, when I was exploring GitHub as a potential next career step, I did some research into what the company has been focusing their efforts on. While I had heard of the Atom editor, I didn’t realize it came from GitHub. So, I thought I would give it a whirl.

Now, before I go on, I rather like Atom, and some of you may think that I am only saying this because of my new job at GitHub. I assure you that this is not the case. I almost certainly would have loved Atom if I had discovered it without the possibility of a role at GitHub, but you will have to take my word for that. Irrespective, you should try it yourself and make your own mind up.

My Requirements

Going into this I had a set of things I look for in an editor that tends to work well with my peanut-sized brain. These include:

  • Support for multiple languages.
  • A simple, uncluttered, editor with comprehensive key-bindings.
  • Syntax highlighting and auto-completion for the things I care about (Python, JSON, HTML, CSS, etc).
  • Support for multiple files, line numbers, and core search/replace.
  • A class/function view for easily jumping around large source files.
  • High performance in operation and reliable.
  • Cross-platform (I use a mixture of Ubuntu and Mac OS X).
  • Nice to have but not required: integrated terminal, version control tools.

Now, some of you will think that this mixture of ingredients sounds an awful lot like an IDE. This is a reasonable point, but what I wanted was a simple text editor, just with a certain set of key features…the ones above…built in. I wanted to avoid the IDE weight and clutter.

This is when I discovered Atom, and this is when it freaking rocked my world.

The Basics

Atom is an open source cross-platform editor. There are builds available for Mac, Windows, and Linux. The source is of course available on GitHub too. As a side point, and as an Ubuntu fan, I am hoping Atom is brought into Ubuntu Make and I am delighted to see didrocks is on it.

Atom is simple and uncluttered.

As a core editor it seems to deliver everything you might need. Auto-completion, multiple panes, line numbers, multiple file support, search/replace features etc. It has the uncluttered and simple user interface I have been looking for and it seems wicked fast.

Stock Atom also includes little niceties such as markdown preview, handy for editing files on GitHub:

Editing Markdown is simple with the preview pane.

So, in stock form it ticks off most of the requirements listed above.

A Hackable Editor

Where it gets really neat is that Atom is a self-described hackable text editor. Essentially what this means is that Atom is a desktop application built with JavaScript, CSS, and Node.js. It uses another GitHub project called Electron that provides the ability to build cross-platform desktop apps with web technologies.

Consequently, basically everything in Atom can be customized. Now, there are core exposed customizations such as look and feel, keybindings, wrapping, invisibles, tabs/spaces etc, but then Atom provides an extensive level of customization via themes and packages. This means that if the requirements I identified above (or anything else) are not in the core of the editor, they can be switched on if there are suitable Atom packages available.

Now, for a long time text editors have been able to be tuned and tweaked like this, but Atom has taken it to a new level.

Firstly, the interface for discovering, installing, enabling, and updating plugins is incredibly simple. This is built right into Atom and there are thankfully over 3,000 packages available for expanding Atom in different ways.

Searching for and installing plugins is built right into Atom.

Thus, Atom at the core is a simple, uncluttered editor that provides the features the vast majority of programmers would want. If something is missing you can invariably find a package or theme that implements it, and if you can't, Atom is extensively hackable so you can create that missing piece and share it with the world. This arguably provides the ability for Atom to satisfy pretty much everyone while always retaining a core that is simple, sleek, and efficient.

My Packages

To give you a sense of how I have expanded Atom, and some examples of how it can be used beyond the default core that is shipped, here are the packages I have installed.

Please note: many of the screenshots below are taken from the respective plugin pages, so the credit is owned by those pages.

Symbols Tree View

Search for symbols-tree-view in the Atom package installer.

This package simply provides a symbols/class view on the right side of the editor. I find this invaluable for jumping around large source files.


Merge Conflicts

Search for merge-conflicts in the Atom package installer.

A comprehensive tool for unpicking merge conflicts that you may see when merging in pull requests or other branches. This makes handling these kinds of conflicts much easier.



Pigments

Search for pigments in the Atom package installer.

A neat little package for displaying color codes inline in your code. This makes it simple to get a sense of what color that random stream of characters actually relates to.


Color Picker

Search for color-picker in the Atom package installer.

Another neat color-related package. Essentially, it makes picking a specific color as easy as navigating a color picker. Handy for when you need a slightly different shade of a color you already have.


Terminal Plus

Search for terminal-plus in the Atom package installer.

An integrated terminal inside Atom. I have to admit, I don’t use this all the time (I often just use the system terminal), but this adds a nice level of completeness for those who may need it.



Linter

Search for linter in the Atom package installer.

This is a powerful base Linter for ensuring you are writing, y’know, code that works. Apparently it has “cow powers” whatever that means.
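As an aside, packages like the ones above can also be managed from a terminal with apm, the package manager that ships with Atom; here is a quick sketch using the package names mentioned in this post:

# Install the packages covered above in one go:
apm install symbols-tree-view merge-conflicts pigments color-picker terminal-plus linter

# Search the registry and list what is currently installed:
apm search markdown
apm list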


In Conclusion

As I said earlier, editor choice is a very personal thing. Some of you will be looking at this and won’t be convinced about Atom. That is totally cool. Live long and edit in whatever tool you prefer.

Speaking personally though, I love the simplicity, extensibility, and innovation that is going into Atom. It is an editor that lets me focus on writing code and doesn’t try to force me into a mindset that doesn’t feel natural. Give it a shot, you may quite like it.

Let me know what you think in the comments below!

by Jono Bacon at November 16, 2015 06:12 AM

November 13, 2015

Jono Bacon

Blogging, Podcasting, or Video?

Over the course of my career I have been fortunate to meet some incredible people and learn some interesting things. These have included both dramatic new approaches to my work and small insights that provide a different lens to look at a problem through.

When I learn these new insights I like to share them. This is the way we push knowledge forward: we share, discuss, and remix it in different ways. I have benefited from the sharing of others, so I feel I should do the same.

Therein lies a dilemma though: what is the best medium for transmitting thoughts? Do we blog? Use social media? Podcasting? Video? Presentations? How do we best present content for (a) wider consumption, (b) effectively delivering the message, and (c) simple sharing?

Back of the Napkin

In exploring this I did a little back of the napkin research. I asked a range of people where they generally like to consume media and what kind of media formats they are most likely to actually use.

The response was fairly consistent. Most of us seem to discover material on social media these days and while video is considered an enjoyable experience if done well, most people tend to consume content by reading. There were various reasons shared for this:

  • It is quicker to read a blog post than watch a video.
  • I can’t watch video at work, on my commute, etc.
  • It is easier to recap key points in an article.
  • I can’t share salient points in a video very easily.

While I was initially attracted to the notion of sharing some of these thoughts in an audio format, I have decided to focus instead more on writing. This was partially informed by my back of the napkin research, but also in thinking about how we best present thoughts.

Doing Your Thinking

I recently read online (my apologies, I forget the source) an argument that social media is making us lazy: essentially, that we tend to blast out thoughts on Twitter as it is quick and easy, as opposed to sitting down and presenting a cogent articulation of a position or idea.

This resonated with me. Yesterday at a conference, Jeff Atwood shared an interesting point:

“The best way to learn is to teach.”

This is a subtle but important point. The articulation and presentation of information is not just important for the reader, but for the author as well.

While I want to share the things I have learned, I also (rather selfishly) want to get better at those things and how I articulate and evolve those ideas in the future.

As such, it became clear that blogging is the best solution for me. It provides the best user interface for me to articulate and structure my thoughts (a text editor), it is easily consumable, easily shareable, and easily searchable on Google.

So, regular readers may notice that my site has been spruced up a little. Specifically, my blog has been tuned quite a bit to be more readable, easier to participate in, and easier to share content from.

I am not finished with the changes, but my goal is to regularly write and share content that may be useful for my readers. You can keep up to date with new articles by following me on either Twitter, Facebook, or Google+. As with anything in life, the cadence of this will vary, but I hope you will hop into the articles, share your thoughts, and join the conversation.

by Jono Bacon at November 13, 2015 10:02 PM

November 11, 2015

Elizabeth Krumbach

Grace Hopper Celebration of Women in Computing 2015

After a quick trip to Las Vegas in October, I was off to Houston for my first Grace Hopper Celebration of Women in Computing! I sometimes struggle with women in computing events, and as a preamble to this post I wrote about it here. But I was excited to finally attend a Grace Hopper conference and honored to have my talk about the Continuous Integration system we use in the OpenStack project accepted in the open source track.

Since I’m an ops person and not a programmer, the agenda I was looking at leaned very much toward the keynotes, plenaries and open source, with a few talks just for fun thrown in. Internet of Things! Astronauts!

Wednesday kicked off with a series of keynotes. The introduction by Telle Whitney, CEO and President of the Anita Borg Institute for Women and Technology (ABI), included statistics about attendees, of which there were 12,000 from over 60 countries and over 1,000 organizations. She then introduced the president of the ACM, Alexander L. Wolf, who talked about the Association for Computing Machinery (ACM) and encouraged attendees to join professional organizations like the ACM in order to bring voice to our profession. I’ve been a member since 2007.

The big keynote for the morning was by Hilary Mason, a data scientist and Founder at Fast Forward Labs. She dove into the pace of computer technology, the progress of Artificial Intelligence and how data is driving an increasing amount of innovation. She explained that various mechanisms that make data available, along with the drop in computing prices, have helped drive this, and that what makes a machine intelligence technology interesting tends to follow four steps:

  1. A theoretical breakthrough
  2. A change in economics
  3. A capability to build a commodity
  4. New data is available

Slides from her talk are on slideshare.

From the keynotes I went to the first series of open source presentations, which began with a talk by Jen Wike Huger on contributing to Opensource.com. As a contributor already, it was fun to hear her talk and I was particularly delighted to see her highlight three of my favorite stories as examples of how telling your open source story can make a difference:

Jen Wike Huger on Opensource.com

The next presentation was by Robin J. Goldstone, a Solutions Architect at Lawrence Livermore National Laboratory (LLNL) where they work on supercomputers! Her talk centered around the extreme growth of open source in the High Performance Computing (HPC) space by giving a bit of a history of supercomputing at LLNL and beyond, and how the introduction of open source into their ecosystem has changed things. She talked about their work on the CHAOS Linux clustering operating system that they’ve developed which allows them to make changes without consulting a vendor, many of whom aren’t authorized to access the data stored on the clusters anyway. It was fascinating to hear her speak to how it’s been working in production and she expressed excitement about the ability to share their work with other organizations.

From there, it was great to listen to Sreeranjani (Jini) Ramprakash of Argonne National Laboratory, where they’re using Jenkins, the open source Continuous Integration system, in their supercomputer infrastructure. Most of her talk centered less around the nuts and bolts of how they’re using it, and more on why they chose to adopt it, including the importance of testing changes in a distributed team (you can’t just tap on a shoulder to ask why and when something broke), richer reports when something does break and shorter debug time since all changes are tested. When talking about Jenkins specifically, we learned that a hosted version had been used elsewhere in their organization, so adopting that was at first a no-brainer, but they eventually learned that they really had to run their own. The low bar created by it being open source software allowed them to run it themselves without too much of an issue.

That afternoon I attended the plenaries, kicked off by Clara Shih, the CEO and Founder at Hearsay Social. Her talk began by talking about how involvement with the Grace Hopper conference and ABI helped prepare her early for success in her career, and quickly launched into 5 major points when working in and succeeding with technology:

  1. Listen carefully (to everyone: customers, employees)
  2. Be OK with being different (and you have to be yourself to truly be accepted, don’t fake it)
  3. Cherish relationships above all else (both personal and professional, especially as a minority)
  4. There is no failure, only learning
  5. Who? If not us. When? If not now. (And do your part to encourage other women in tech)

Clara Shih keynote

Her plenary was followed by a surprising one from Blake Irving, the CEO of GoDaddy. GoDaddy has a poor reputation when it comes to women, particularly with respect to their objectifying ad campaigns that made the company famous early on. In his talk, I felt a genuine commitment from him personally and from the company to change this, from the retirement of those advertisements to making sure the female employees within GoDaddy are being paid fairly. Reflecting on company culture, he also said they wanted advertising to reflect the passion and great work that happens within the company, in spite of poor public opinion due to their ads. They’re taking diversity seriously and he shared various statistics about demographics and pay within the company to show gender pay parity in various roles, which is a step I hadn’t seen a company take before (there are diversity stats from several companies, but not very detailed or broken up by role in a useful way). The major take-away was that if a company with a reputation like GoDaddy can work toward turning things around, anyone can.

The final plenary of Wednesday was from Megan Smith, the Chief Technology Officer of the United States. The office was created by President Obama in 2009 and Smith is the third person to hold the post, and the first woman. Her talk covered the efforts being made by the US government to embrace the digital world, from White House Tech Meetups to the TechHire Initiative, White House Demo Days and Maker work. Even more exciting, she brought a whole crew of women from various areas of the government to speak on various projects. One spoke on simplifying access to Veteran Medical records through digital access, another on healthcare more broadly as they worked to fix HealthCare.gov after it was launched. A technology-driven modernization effort for the immigration system was particularly memorable, with work to make it easier and cheaper for potential citizens to get the resources they need without the mountain of confusing and expensive forms that they often have to go through today to become citizens and bring family members to the United States. It was also interesting to learn about the government’s open data initiatives as well as how citizens can help bring more records online through the Citizen Archivist program. I was also really impressed with their commitment to open source throughout all of their talks. It seems obvious to me that any software developed with my tax dollars should be made available to me in an open source manner, but it’s only recently that this has actually started to gain traction, and this administration seems committed to making sure we continue to go in this direction.

Technologists in US Government!

A quick walk through the opening of the career fair and exposition hall finished up my day. It was a pretty overwhelming space, with so many companies seeking to interview and hire from the incredible pool of talent that GHC brings together.

My Thursday was very busy. It began with an HP networking breakfast, where the 70 or so people from HP who came to the conference as attendees (not booth and interview staff) could meet up. I got a surprise at the breakfast by being passed the microphone after several VPs spoke, as I was one of the two speakers from HP attending the conference and the only one at the breakfast. No pressure! From there, it was off to the keynotes.

I really enjoyed hearing from Hadi Partovi, the Founder of Code.org, about his take on the importance of humans being taught about coding in the world today and how the work of Code.org is helping to make that happen on a massive scale. The statistics on growing demand versus the slower production of computer science professionals were grim, and he stressed the importance of computer science as a component of primary education. It was impressive to learn about some of the statistics from their mere two years of existence; going into their third year they’re hoping to reach hundreds of thousands more students.

It was a real pleasure to hear from Susan Wojcicki, the CEO of YouTube. She touched upon several important topics, including myths in computing that keep school age girls (even her own daughter!) away: Computer Science is boring, girls aren’t good at it and discomfort with associating with the stereotypical people who are portrayed in the field. She talked about the trouble with retention of women in CS, citing improvements to paid maternity leave as a huge factor in helping retention at Google.

Following the keynotes I attended the next round of open source sessions. Becka Morgan, a professor at Western Oregon University began the morning with a very interesting talk about building mentorship programs for her students in partnership with open source projects. I learned that she initially had worked with the local Ubuntu team, even having some of her students attend an Ubuntu Global Jam in Portland, an experience she hoped to repeat in the future. She walked us through various iterations of her class structure and different open source projects she worked with in order to get the right balance of structure, support from project volunteers and clear expectations on all sides. It was great to hear about how she was then able to take her work and merge it with that of others in POSSE (Professors’ Open Source Summer Experience) so they could help build programs and curriculum together. Key take-aways for success in her classroom included:

  • Make sure students have concrete tasks to work on
  • Find a way to interact with open source project participants in the classroom, whether they visit or attend virtually through a video call or similar (Google Hangouts were often used)
  • Tell students to ask specific, concrete questions when they need help, never assume the mentors will stop their work to reach out and ask them if they need help (they’re busy, and often doing the mentoring as a volunteer!)
  • Seek out community opportunities for students to attend, like the Ubuntu Global Jam

Her talk was followed by one from Gina Likins of Red Hat, who talked about her career experience moving from a very proprietary company to one that is open and actually develops open source software. As someone who is familiar with the structures of open organizations from my own work and open source experience, it was mostly information I was familiar with, but one interesting point she made was that in some companies people hoard information in an effort to make sure they have an edge over other teams. This stifles innovation and is very short-sighted; placing importance on sharing knowledge so that everyone can grow is a valuable cultural trait for an organization. Billie Rinaldi followed Gina’s talk with one about working on an Apache Software Foundation project, sharing how a solid structure and well-defined paths for getting involved are important to open source projects and something the foundation supports.

Prior to a partner lunch that I was invited to, I went to a final morning talk by Dr. Nadya Fouad, who published the famous Leaning In, but Getting Pushed Back (and Out) study, in which culture, including failure to provide clear and fair advancement opportunities, was cited as a reason women leave engineering. I’d read articles about her work, as it was widely covered when it first came out as one of the best studies on the retention problem. Of particular note was that about $3.4 billion in US federal funds are spent on the engineering “pipeline problem” each year, while very little attention is paid to the nearly 50% of women who complete an engineering degree and don’t continue with an engineering career. I’ve known that culture was to blame for some time, so it was satisfying to see someone do a study on the topic to gather data beyond the unscientific anecdotal stories I had piled up in my own experience with female friends and acquaintances who have left or lost their passion for the tech industry. She helpfully outlined indicators of a successful career path, noting of course that these things are good for everyone: good workload management, a psychologically safe environment, supportive leadership, a promotion path, equitable development opportunities and an actively supported work/life balance policy.

Dr. Nadya Fouad on retention in engineering

After lunch began the trio of open source presentations that included my own! The afternoon started with a talk by Irene Ros on Best Practices for Releasing and Choosing Open Source Software. The talk gave her an opportunity to approach evaluation of open source from both sides: what to look for in a project before adopting it, and what you need to provide users and a community before you release your own open source project – predictably, these are the same things! She stressed the importance of using a revision control system, writing documentation, version tracking (semantic versioning is one popular method), publishing release notes and changelogs, proper licensing, support and issue tracking, and in general paying attention to the feedback and needs of the community. I loved her presentation because it packed a lot of valuable information into her short talk slot, not all of which is obvious to new projects.

My talk came next, where I spoke about our Open Source Continuous Integration System. In 20 minutes I gave a whirlwind tour of our CI system, including our custom components (Zuul, Nodepool, Jenkins Job Builder) along with the most popular open source offerings for code review (Gerrit) and CI (Jenkins). I included a lot of links in my talk so that folks who were interested could dive deeper into whichever component of my quick overview interested them. I was delighted to conclude my talk with several minutes of engaging Q&A before turning the microphone over to my OpenStack colleague Anne Gentle. Slides from my talk are here: 2015-ghc-Open-Source-Continuous-Integration-System.pdf

Thanks to Terri Oda for the photo! (source)

Anne’s talk was a great one on documentation. She stressed the importance of treating open source documentation just like you would code: use revision control, track versions, keep the source format simple (like reStructuredText) and use the same tooling as developers so it’s easy for developers to contribute to documentation. She also spoke about the automated test and build tools we use in OpenStack (part of our CI system, yay!) and how they help the team continue to publish quickly and stay on top of the progress of documentation. It was also worth noting that writing documentation in OpenStack grants one Active Technical Contributor status, which gives you prestige in the community as a contributor (just like a developer) and a free ticket to the OpenStack summits that happen twice a year. That’s how documentation writers should be treated!

Since our trio of talks followed each other immediately, I spent the break after Anne’s talk following up with folks in the audience who were interested in learning more about our CI system and generally geeking out about various components. It was a whole lot of fun to chat with other Gerrit operators and to talk through the challenges that our Nodepool system solves when it comes to test node handling. It’s always great when these conversations follow me for the rest of the conference, like they did at GHC.

The next session I attended was the Women Lead in Open Source panel. A fantastic lineup of women in open source explored several of the popular open source organizations and opportunities for women and others, including Systers, Google Summer of Code, Outreachy and OpenHatch. The panel then spent a lot of time answering great questions about breaking into open source, how to select a first project and searching for ways to contribute based on various skills, like knowledge of specific programming languages.

The plenary that wrapped up our day was a popular one by Sheryl Sandberg, which caused the main keynote and plenary room to fill up quickly. For all the criticism, I found myself to be the target audience of her book Lean In, and I found tremendous value in not holding back my career while waiting for other parts of my life to happen (or not). Various topics were covered in her plenary, from salary parity across genders and the related topic of negotiation, to bringing back the word “feminism” and banning the word “bossy”, equal marriages, unconscious bias and the much too gradual progress on C-suite gender parity. She has placed a lot of work and hope into Lean In Circles and how they help build and grow the necessary professional networks for women. She advised us to undertake a positive mindfulness exercise before bed, writing down three things you did well during the day (“even if it’s something simple”). She concluded strongly by telling us to stay in technology, because these are the best jobs out there.

With the plenary concluded, I went back to my hotel to “rest for a few minutes before the evening events” and promptly fell asleep for 2.5 hours. I guess I had some sleep debt! In spite of missing out on some fun evening events, it was probably a wise move to just take it easy that evening.

Friday’s keynote could be summed up concisely with one word: Robots! Valerie Fenwick wrote a great post about the keynote by Manuela Veloso of Carnegie Mellon University here: GHC15: Keynote: Robotics as a Part of Society.

As we shuffled out of the last keynote, I was on my way back to the open source track for a star-studded panel (ok, two of them are my friends, too!) of brilliant security experts. The premise of the panel was exploring some of the recent high profile open source vulnerabilities and the role that companies now play in making sure this widely used tooling is safe, a task that all of the panelists work on. I found a lot of value in hearing from security experts about the struggles they have when interacting with open source projects, like how to be diplomatic about reporting vulnerabilities and figuring out how to do it securely when a mechanism isn’t in place. They explored the fact that most open source projects simply don’t have security in mind, and they suggested some simple tooling and tips that can be used to evaluate the security of various types of software, from Nmap and AFL to the OWASP Top 10 of 2013, a rundown of common security issues with software, many of which are still relevant today, and the Mozilla wiki, which has a surprising amount of security information (I knew about it from their SSL pages; lots of good information there). They also recommended the book The Art of Software Security Assessment and concluded by mentioning that security is a valuable skill to learn, since there are a lot of jobs!

I had a bit of fun after the security panel and went to one of the much larger rooms to attend a panel about Data Science at NASA. Space is cool, and astronaut Catherine Coleman was on the panel to talk about her work on the International Space Station (ISS)! It was also really fun to see photos of several women she’s worked with on the ISS and in the program, as female *nauts are still a minority (though there are a lot of female technologists working at NASA). I enjoyed hearing her talk about knowing your strengths and those of the people you’re working with, since your life could depend upon it; teams are vital at NASA. Annette Moore, CIO of the Johnson Space Center, then spoke about the incredible amount of data being sent from the ISS, from the results of experiments to the more human communications that the astronauts need to keep in contact with those of us back on Earth. I have to admit that it did sound pretty cool to be the leader of the team providing IT support for a space station. Dorothy Rasco, CFO at Johnson Space Center, then spoke about some of the challenges of a manned mission to Mars, including handling the larger, more protected lander required, making sure it gets there fast enough, and various questions about living in a different atmosphere and about food (most doesn’t have a shelf life beyond 3 years, which isn’t long enough!). Panel moderator and NASA CTO-IT Deborah Diaz then took time to talk more broadly about the public policy of data at NASA, which brings interesting and ongoing big data challenges around making sure it’s all made available effectively. She shared a link to a site with various projects for the public, including thousands of data sets, open source code repositories and almost 50 APIs to work with. Very cool stuff! She also touched upon managing the wearables (our new “Internet of Things”) that astronauts have been wearing for years, and how to manage all the devices on a technological and practical level, to record and store the important scientific data collected, all without overburdening those wearing them.

Later in the afternoon I went to a fun Internet of Things workshop where we split into groups and tried to brainstorm an IoT product while paying careful attention to security and privacy around identity and authentication mechanisms for these devices. Our team invented a smart pillow. I think we were all getting pretty tired from conferencing!

The conference concluded with an inspiring talk from Miral Kotb, the founder of iLuminate. A brilliant software engineer, she followed her passions for dance and technology to dream up and build her company, and I loved hearing about it. I’d never heard of iLuminate before, but for the other uninitiated: their performances are done in the dark, in full body suits covered in lights synced up with their proprietary hardware and software to give the audience a light, music and dance show. Following her talk she brought out the dancers to close the conference with a show. Nice!

I met up with some friends and acquaintances for dinner before going over to the closing party, which was held in the Houston Astros ballpark! I had fun, and made it back to the hotel around 10:30 so I could collect my bags and make my move to a hotel closer to the airport so I could just take a quick shuttle in the early AM to catch my flight to Tokyo the next day.

More photos from the conference and after party here:

It was quite a conference, and I’m thankful that I was able to participate. The venue in Houston was somewhat disruptively under construction, but it’s otherwise a great space and it was great to learn that they’ll be holding the conference there again next year. I’d encourage the women in tech I know to go if they’re feeling isolated or looking for tips on succeeding. If you’re thinking of submitting a talk, I’d also be happy to proofread and make recommendations about your proposal, as it’s one of the more complicated submission processes I’ve been through and competition for speaking slots is strong.

by pleia2 at November 11, 2015 02:09 AM

November 10, 2015


Small Run Fab Services

For quite a while I’ve been just using protoboards, or trying toner transfer to make PCBs, with limited success.

A botched toner transfer attempt

A Hackaday article (Why are you still making PCBs?) turned me on to low-cost prototyping PCB runs. Cutting my own boards via toner transfer had lots of drawbacks:

  • I’d botch my transfer (as seen above), and have to clean the board and start over again. Chemicals are no fun either.
  • Drilling is tedious.
  • I never really got to the point where I’d say it was easy to do a one-sided board.
  • I would always route one-sided boards, as I never got good enough to want to try a 2 layer board.
  • There was no solder mask layer, so you’d get oxidation, and you had to be very careful while soldering.
  • Adding silkscreen was just not worth the effort.

I seem to remember trying to find small run services like this a while ago and coming up short. I might be coming late to the party of small-run PCB fabs, but I was excited to find that services like OSHpark are out there. They’ll cut you three 2-layer PCBs with all the fixins’ for $5 per square inch! That’s a much nicer board, probably at a lower cost, than I am able to make myself.

Here’s the same board design (rerouted for 2 layers) as the botched one above:

The same board design as above, uploaded into OSHpark

You can upload an Eagle BRD file directly, or submit the normal Gerber files. Once uploaded, you can easily share the project on OSHpark (this project’s download). You have to wait 12 days for the boards, but if I’m being honest with myself, this is a quicker turnaround time than my basement fab could manage! I’m sure I’ll be cutting my own boards way less in the future.

by Kevin at November 10, 2015 02:45 PM

November 09, 2015

Eric Hammond

Creating An Amazon API Gateway With aws-cli For Domain Redirect

Ten commands to launch a minimal, functioning API Gateway

As of this publication date, the Amazon API Gateway is pretty new and the aws-cli interface for it is even newer. The API and aws-cli documentation at the moment is a bit rough, but this article outlines steps to create a functioning API Gateway with the aws-cli. Hopefully, this can help others who are trying to get it to work.


I regularly have a need to redirect browsers from one domain to another, whether it’s a vanity domain, legacy domain, “www” to base domain, misspelling, or other reasons.

I usually do this with an S3 bucket in website mode with a CloudFront distribution in front to support https. This works, performs well, and costs next to nothing.

Now that the Amazon API Gateway has aws-cli support, I was looking for simple projects to test out so I worked to reproduce the domain redirect. I found I can create an API Gateway that will redirect a hostname to a target URL, without any back end for the API (not even a Lambda function).

I’m not saying the API Gateway method is better than using S3 plus CloudFront for simple hostname redirection. In fact, it costs more (though still cheap), takes more commands to set up, and isn’t quite as flexible in what URL paths get redirected from the source domain. It does, however, work and may be useful as an API Gateway aws-cli example.


The following steps assume that you already own and have set up the source domain (to be redirected). Specifically:

  • You have already created a Route53 Hosted Zone for the source domain in your AWS account.

  • You have the source domain SSL key, certificate, chain certificate in local files.

Now here are the steps for setting up the domain to redirect to another URL using the aws-cli to create an API Gateway.

1. Create an Amazon API Gateway with aws-cli

Set up the parameters for your redirection. Adjust values to suit:

base_domain=... # Replace with your domain
target_url=...  # Replace with your URL

api_description="Redirect $base_domain to $target_url"

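The later commands also reference a few more shell variables that aren't shown above. Here is a minimal, hypothetical set of assignments to make the rest of the walkthrough self-contained; the values are placeholders I've chosen, not anything mandated by the API Gateway:

api_name="$base_domain"   # A convenient name for the API; any string works
region=us-east-1          # Replace with the AWS region you want to use
stage_name=prod           # Deployment stage name created in step 2
resource_path=/           # The root resource path queried below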

Create a new API Gateway:

api_id=$(aws apigateway create-rest-api \
  --region "$region" \
  --name "$api_name" \
  --description "$api_description" \
  --output text \
  --query 'id')
echo api_id=$api_id

Get the resource id of the root path (/):

resource_id=$(aws apigateway get-resources \
  --region "$region" \
  --rest-api-id "$api_id" \
  --output text \
  --query 'items[?path==`'$resource_path'`].[id]')
echo resource_id=$resource_id

Create a GET method on the root resource:

aws apigateway put-method \
  --region "$region" \
  --rest-api-id "$api_id" \
  --resource-id "$resource_id" \
  --http-method GET \
  --authorization-type NONE \
  --no-api-key-required \
  --request-parameters '{}'

Add a Method Response for status 301 with a required Location HTTP header:

aws apigateway put-method-response \
  --region "$region" \
  --rest-api-id "$api_id" \
  --resource-id "$resource_id" \
  --http-method "GET" \
  --status-code 301 \
  --response-models '{"application/json":"Empty"}' \
  --response-parameters '{"method.response.header.Location":true}'

Set the GET method integration to MOCK with a default 301 status code. By using a mock integration, we don’t need a back end.

aws apigateway put-integration \
  --region "$region" \
  --rest-api-id "$api_id" \
  --resource-id "$resource_id" \
  --http-method GET \
  --type MOCK \
  --request-templates '{"application/json":"{\"statusCode\": 301}"}'

Add an Integration Response for GET method status 301. Set the Location header to the redirect target URL.

aws apigateway put-integration-response \
  --region "$region" \
  --rest-api-id "$api_id" \
  --resource-id "$resource_id" \
  --http-method GET \
  --status-code 301 \
  --response-templates '{"application/json":"redirect"}' \
  --response-parameters \
    '{"method.response.header.Location":"'\'"$target_url"\''"}'

2. Create API Gateway Deployment and Stage using aws-cli

The deployment and its first stage are created with one command:

deployment_id=$(aws apigateway create-deployment \
  --region "$region" \
  --rest-api-id "$api_id" \
  --description "$api_name deployment" \
  --stage-name "$stage_name" \
  --stage-description "$api_name $stage_name" \
  --no-cache-cluster-enabled \
  --output text \
  --query 'id')
echo deployment_id=$deployment_id

If you want to add more stages for the deployment, you can do it with the create-stage sub-command.

At this point, we can actually test the redirect using the endpoint URL that is printed by this command:

echo "https://$api_id.execute-api.$$stage_name$resource_path"

3. Create API Gateway Domain Name using aws-cli

The API Gateway Domain Name seems to be a CloudFront distribution with an SSL Certificate, though it won’t show up in your normal CloudFront queries in the AWS account.
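
The next command also expects several certificate-related variables that aren't defined above. A minimal sketch, assuming the key, certificate, and chain files mentioned in the prerequisites are in the current directory (the file names here are placeholders of my own choosing):

certificate_name="$base_domain"      # Label for the certificate in API Gateway
certificate_body=cert.pem            # The domain's SSL certificate
certificate_private_key=privkey.pem  # The matching private key
certificate_chain=chain.pem          # The CA chain certificate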

distribution_domain=$(aws apigateway create-domain-name \
  --region "$region" \
  --domain-name "$base_domain" \
  --certificate-name "$certificate_name" \
  --certificate-body "file://$certificate_body" \
  --certificate-private-key "file://$certificate_private_key" \
  --certificate-chain "file://$certificate_chain" \
  --output text \
  --query distributionDomainName)
echo distribution_domain=$distribution_domain

aws apigateway create-base-path-mapping \
  --region "$region" \
  --rest-api-id "$api_id" \
  --domain-name "$base_domain" \
  --stage "$stage_name"

4. Set up DNS

All that’s left is to update Route53 so that we can use our preferred hostname for the CloudFront distribution in front of the API Gateway. You can do this with your own DNS if you aren’t managing the domain’s DNS in Route53.

Get the hosted zone id for the source domain:

hosted_zone_id=$(aws route53 list-hosted-zones \
  --region "$region" \
  --output text \
  --query 'HostedZones[?Name==`'$base_domain'.`].Id')
echo hosted_zone_id=$hosted_zone_id

Add an Alias record for the source domain, pointing to the CloudFront distribution associated with the API Gateway Domain Name.
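
The command below references $cloudfront_hosted_zone_id, which isn't set anywhere above. Alias records that point at CloudFront distributions all use a single global hosted zone id documented by AWS, so this assignment should be all that's needed:

# Fixed hosted zone id used for every CloudFront alias target
cloudfront_hosted_zone_id=Z2FDTNDATAQYW2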

change_id=$(aws route53 change-resource-record-sets \
  --region "$region" \
  --hosted-zone-id $hosted_zone_id \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'$base_domain'",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "'$cloudfront_hosted_zone_id'",
          "DNSName": "'$distribution_domain'",
          "EvaluateTargetHealth": false
  }}}]}' \
  --output text \
  --query 'ChangeInfo.Id')
echo change_id=$change_id

This could be a CNAME if you are setting up a hostname that is not a bare apex domain, but the Alias approach works in all Route53 cases.

Once this is all done, you may still need to wait 10-20 minutes while the CloudFront distribution is deployed to all edge nodes, and for the Route53 updates to complete.
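
If you would rather poll than guess at the Route53 side of that wait, the change id captured above can be checked until it reports INSYNC. A small sketch, assuming the earlier variables are still set:

# Route53 reports PENDING until the record change has propagated
aws route53 get-change \
  --id "$change_id" \
  --output text \
  --query 'ChangeInfo.Status'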

Eventually, however, hitting the source domain in your browser should automatically redirect to the target URL. Here is my example in action:

Using the above as a starting point, we can now expand into more advanced setups with the API Gateway and the aws-cli.

Original article and comments:

November 09, 2015 10:10 AM

November 08, 2015

Elizabeth Krumbach

Preamble to Grace Hopper Celebration of Women in Computing 2015

Prior to the OpenStack Summit last week, I attended the Grace Hopper Celebration of Women in Computing in Houston.

But it’s important to recognize a few things before I write about my experience at the conference in a subsequent post.

I have experienced sexism and even serious threats throughout my work in open source software. This became particularly acute as I worked to increase my network of female peers and boost participation of women in open source with my work in Ubuntu Women and LinuxChix.

This is not to say open source work has been bad. The vast majority of my experiences have been positive and I’ve built life-long friendships with many of the people I’ve volunteered with over the years. My passion for open source software as a movement, a community and a career is very much intact.

I have been exceptionally fortunate in my paid technical (mostly Linux Systems Administration) career. I have been a part of organizations that have not only supported and promoted my work, but have shown a real commitment to diversity in the talent they hire. At my first junior systems administration job in Philadelphia, my boss ran a small business where he constantly defied the technical stereotypes regarding race, age and gender with his hires, allowing me to work with a small, but diverse group of people. In my work now in Hewlett Packard Enterprise I’m delighted to work with many brilliant women, from my management chain to my peers, as well as people from all over the world.

My experience was not just luck. I’ve been very fortunate to have the career flexibility and financial stability, through a working partner, to select jobs that fit my criteria for a satisfying work environment. When I needed to be frugal, living on my own in a small, inexpensive apartment far from the city on a very limited budget, I made it through. Early in my career, when I couldn’t find the permanent work I wanted, I called up a temp agency and did everything from data entry to accounting work. I also spent time working as a technical consultant: at one job I did back end web development, at another I helped make choices around enterprise open source platforms for a pharmaceutical company. While there certainly were micro-aggressions to deal with (clients regularly asking to speak with a “real developer” or directing design-oriented questions to me rather than my male designer colleague), my passion for technology and the work I was doing kept me above water through these routine frustrations.

When it comes to succeeding in my technical career I’ve also had the benefit of being a pretty hard core nerd. Every summer in high school I worked odd neighborhood jobs to save up money to buy computer parts. I had extended family members who gave us our first computer in 1991 (I was 10), the only gaming console I ever owned as a youth (the NES) and when we needed a better computer, grandparents who gave us a 486 for Christmas in 1994 (I was 13). Subsequent computers I bought with my precious summer work savings from classified ads, dragging my poor mother to the doorstep of more than one unusual fellow who was selling some old computer equipment. Both my parents had a love for SciFi, my father making the Lord of the Rings series a more familiar story than those from the Christian Bible, and my mother with her love of terribly amusing giant monster horror movies that I still hold close to this day. One look at my domain name here shows that I also grew up with the Star Wars trilogy. I’ve been playing video games since we got that first NES and I still carry around a Nintendo DS pretty much everywhere I go. I’ve participated in Magic:The Gathering tournaments. I wear geek t-shirts and never learned how to put on make-up. I have a passion for beer. I fit in with the “guys” in tech.

So far, I’m one of the women in tech who has stayed.

In spite of my work trying to get more women involved, like the two mentorship programs I participated in this year for women, I’ve spent a lot of time these past few years actively ignoring some of the bigger issues regarding women in tech. I love technology. I love open source. I’ve built my life and hobbies around my technical work and expertise. When I leave home and volunteer, it’s not spooning soup into bowls at a soup kitchen, it’s using my technical skills to deploy computers to disadvantaged communities. Trying to ignore the issues that most women face has been a survival tactic. It’s depressing and discouraging to learn how far behind we still are with pay, career advancement and both overt and subtle sexism in the workplace. I know that people (not just women!) who aren’t geeky or don’t drink like me are often ostracized or feel like they have to fake it to succeed, but I’ve pushed that aside to succeed and contribute in the way I have found is most valuable to my career and my community.

At the Grace Hopper Celebration of Women in Computing there was a lot of focus on all the things I’ve tried to ignore. All that discrimination in the form of lower pay for women, fewer opportunities for advancement, maternity penalties to the careers of women and lack of paternity leave for men in the US, praise for “cowboy” computing (jumping in at 3AM to save the day rather than spending time making sure things are stable and 3AM saves aren’t ever required) and direct discrimination. The conference did an exceptional job of addressing how we can handle these things, whether it be strategies in the workplace or seeking out a new job when things can’t be fixed. But it did depress and exhaust me. I couldn’t ignore the issues anymore during the three days that I attended.

It’s a very valuable conference and I’m really proud that I had the opportunity to speak there. I have the deepest respect and gratefulness for those who run the conference and make efforts every day to improve our industry for women and minorities. My next post will be my typical conference summary of what I learned while there and the opportunities that presented themselves. Just keep this post in mind as you make your way through the next one.

by pleia2 at November 08, 2015 03:41 AM

November 06, 2015

Elizabeth Krumbach

A werewolf SVG and a xerus

The release of Ubuntu 15.10, code named Wily Werewolf, came out last month. With this release came requests for the SVG file used in all the release information. Thanks to a ping from +HEXcube on G+, I was reminded to reach out to Tom Macfarlane of the Canonical Design Team and he quickly sent it over!

It has been added to the Animal SVGs section of the official artwork page on the Ubuntu wiki.

And following Mark Shuttleworth’s announcement that the next release is code named Xenial Xerus, I added a xerus to my collection of critters to bring along to Ubuntu Hours and other events.


Finally, in case you were wondering how Xerus is pronounced (I was!): zeer-uh s.

by pleia2 at November 06, 2015 07:49 PM

Eric Hammond

Pause/Resume AWS Lambda Reading Kinesis Stream

use the aws-cli to suspend an AWS Lambda function processing an Amazon Kinesis stream, then resume it again

At Campus Explorer we are using AWS Lambda extensively, with sources including Kinesis, DynamoDB, S3, SNS, CloudFormation, API Gateway, custom events, and schedules.

This week, Steve Caldwell (CTO and prolific developer) encountered a situation which required pausing an AWS Lambda function with a Kinesis stream source, and later resuming it, preferably from the same point at which it had been reading in each Kinesis shard.

We brainstormed a half dozen different ways to accomplish this with varying levels of difficulty, varying levels of cost, and varying levels of not-quite-what-we-wanted-ness.

A few hours later, Steve shared that he had discovered the answer (and suggested I pass on the answer to you).

Buried in the AWS Lambda documentation for update-event-source-mapping in the aws-cli (and UpdateEventSourceMapping in the API) is the mention of --enabled and --no-enabled, with this description:

Specifies whether AWS Lambda should actively poll the stream or not. If disabled, AWS Lambda will not poll the stream.

As it turns out, this does exactly what we need. These options can be specified to change the processing enabled state without changing anything else about the AWS Lambda function or how it reads from the stream.

The big benefit that isn’t documented (but verified by Amazon) is that this saves the place in each Kinesis shard. On resume, AWS Lambda continues reading from the same shard iterators without missing or duplicating records in the stream.


To pause an AWS Lambda function reading an Amazon Kinesis stream:

event_source_mapping_uuid=... # (see below)

aws lambda update-event-source-mapping \
  --region "$region" \
  --uuid "$event_source_mapping_uuid" \

And to resume the AWS Lambda function right where it was suspended without losing place in any of the Kinesis shards:

aws lambda update-event-source-mapping \
  --region "$region" \
  --uuid "$event_source_mapping_uuid" \

You can find the current state of the event source mapping (e.g., whether it is enabled/unpaused or disabled/paused) with this command:

aws lambda get-event-source-mapping \
  --region "$region" \
  --uuid "$event_source_mapping_uuid" \
  --output text \
  --query 'State'

Here are the possible states: Creating, Enabling, Enabled, Disabling, Disabled, Updating, Deleting. I’m not sure how long it can spend in the Disabling state before transitioning to full Disabled, but you might want to monitor the state and wait if you want to make sure it is fully paused before taking some other action.
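
For example, a small polling loop along these lines (a sketch, assuming the same variables used above are set) could wait until the mapping is fully paused before you take that other action:

# Poll the event source mapping until it leaves the Disabling state
while true; do
  state=$(aws lambda get-event-source-mapping \
    --region "$region" \
    --uuid "$event_source_mapping_uuid" \
    --output text \
    --query 'State')
  echo "state=$state"
  [ "$state" = "Disabled" ] && break
  sleep 5
done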

If you’re not sure what $event_source_mapping_uuid should be set to in all the above commands, keep reading.


Here’s an aws-cli incantation that will return the event source mapping UUID given a Kinesis stream and connected AWS Lambda function.


function_name=... # Name of your AWS Lambda function
source_arn=...    # ARN of the source Kinesis stream

event_source_mapping_uuid=$(aws lambda list-event-source-mappings \
  --region "$region" \
  --function-name "$function_name" \
  --output text \
  --query 'EventSourceMappings[?EventSourceArn==`'$source_arn'`].UUID')
echo event_source_mapping_uuid=$event_source_mapping_uuid

If your AWS Lambda function has multiple Kinesis event sources, you will need to pause each one of them separately.
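
One way to do that in bulk (an untested sketch, assuming $region and $function_name are set, and noting that it disables every event source mapping on the function, not just the Kinesis ones) is to loop over the mapping UUIDs:

for uuid in $(aws lambda list-event-source-mappings \
    --region "$region" \
    --function-name "$function_name" \
    --output text \
    --query 'EventSourceMappings[].UUID'); do
  aws lambda update-event-source-mapping \
    --region "$region" \
    --uuid "$uuid" \
    --no-enabled
done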

Other Event Sources

The same process described above should be usable to pause/resume an AWS Lambda function reading from a DynamoDB Stream, though I have not tested it.

Other types of AWS Lambda function event sources are not currently possible to pause and resume without missing events (e.g., S3, SNS). However, if pause/resume is something you’d like to make easy for those sources, you could use AWS Lambda, the glue of AWS.

For example, suppose you currently have events flowing like this:

S3 -> SNS -> Lambda

and you want to be able to pause the Lambda function, without losing S3 events.

Insert a trivial new Lambda(pipe) function that reposts the S3/SNS events to a new Kinesis stream like so:

S3 -> SNS -> Lambda(pipe) -> Kinesis -> Lambda

and now you can pause the last Kinesis->Lambda mapping while saving S3/SNS events in the Kinesis stream for up to 7 days, then resume where you left off.

I still like my “pause Lambda” brainstorming idea of updating the AWS Lambda function code to simply sleep forever, triggering a timeout error after 5 minutes and causing the Kinesis/Lambda framework to retry the function call with the same data over and over until we are ready to resume by uploading the real code again, but Steve’s discovery is going to end up being somewhat simpler, safer, and cheaper.

Original article and comments:

November 06, 2015 12:00 AM

November 02, 2015

Eric Hammond

Alestic Git Sunset

retiring “Git with gitolite by Alestic” on AWS Marketplace

Back in 2011 when the AWS Marketplace launched, Amazon was interested in having some examples of open source software listed in the marketplace, so I created and published Git with gitolite by Alestic.

This was a free AWS Marketplace product that endeavored to simplify the process of launching an EC2 instance running Git for private repositories, with ssh access managed through the open source gitolite software.

Though maintaining releases of this product has not been overly burdensome, I am planning to discontinue this work and spend time on other projects that would likely be more beneficial to the community.

Current Plan

Unless I receive some strong and convincing feedback from users about why this product’s life should be extended, I currently plan to ask Amazon to sunset Git with gitolite by Alestic in the coming months.

When this happens, AWS users will not be able to subscribe and launch new instances of the product, unless they already had an active AWS Marketplace subscription for it.


Folks who want to use private Git repositories have a number of options:

  • Amazon has released CodeCommit, “a fully-managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories”.

  • The AWS Marketplace has other Git related products, some of them free, in the Source Control software section.

  • At the bottom of the original Alestic Git page, I have always listed a number of services that will host private Git repositories for a fee. The obvious and most popular choice is GitHub.

  • The code I use to build the Git with gitolite AMI is open source, and publicly available on GitHub. You are welcome to use and adapt this to build your own updated AMI.

Existing Customers

AWS Marketplace customers who currently have a subscription to Git with gitolite by Alestic may continue running the product and should be able to start new instances of it if needed.

Note, however, that the AMIs will not be updated and the Ubuntu LTS operating systems do eventually reach end of life where they do not receive security updates.

In the 4.5 years this product has been publicly available, I think one person asked for help (a client ssh issue), but I’ll continue to be available if there are issues running the AMI itself.

Transfer of Control

If you already host software on the AWS Marketplace, and you would be willing to assume maintenance of the Git with gitolite product, please get in touch with me to discuss a possible transition.

Original article and comments:

November 02, 2015 09:42 AM

November 01, 2015

Elizabeth Krumbach

Mitaka OpenStack Summit Days 3 & 4

With the keynotes behind us and the conference side of the OpenStack Summit on its final day, I spent Thursday focused on the OpenStack Design Summit from 9AM onward.

The day began with a work session surrounding Gerrit. We tried several months back to do an upgrade from version 2.8, but had to roll back when we noticed some problems. Khai Do has spent the past few months tracking down the issues and working on an upgrade plan, which we hashed through during this session. We talked through the upgrade process, which includes a pre-upgrade database cleanup and the upgrade itself. Then we chatted about scheduling, pretty much settling on a mid-week day prior to Thanksgiving here in the US to complete the upgrade. Read-only etherpad from the session here. The next session centered around work that Greg Haynes has done with Nodepool image workers. Through discussion and some heads down work, several patches in the proposed stack were brought in during this work session. Read-only notes from the session are in the etherpad here.

Some Infrastructure in the morning

Anita Kuno then led a session about scaling new project creation. The OpenStack project is now made up of over 800 git repositories that all use Gerrit, and has 18,948 accounts. She also shared the following statistics that she pulled on October 21st:

Numbers for Liberty: (May 25 2015 – Oct 18 2015)
Patches merged: 32435
Patchsets created: 138517
Comments added: 863869

The session had the expertise of some folks who were running very large Gerrit installations internally at their companies, and we benefited from that as they talked about the tooling they have to create projects and provided some feedback to scaling issues there. Of particular concern to us were making sure that Gerrit performance continues to be sufficient for the project, including making sure the event stream isn’t overloaded, that we continue to have enough server and database space, scaling of git itself (leverage more directly for Gerrit?) and in general making sure we’re doing all the appropriate tuning. Read-only etherpad for this session here.

Anita Kuno leads the Scaling New Project Creation session

Next up was the Task Tracking session. During the OpenStack Summit in Vancouver the decision was made to stop active development on StoryBoard, in spite of the tireless efforts of the 1.5 developers working on it. Since then, another company has come along to pick up development, so the session centered around whether we should reconsider usage of StoryBoard and ask for development efforts again or move forward with our Phabricator Maniphest plans. After much discussion, the conclusion of the session was that we’d stick with deploying Maniphest and working to see if that satisfies the needs of our workflow. Plus, a nod of thanks to the current StoryBoard maintainers, one of whom was able to join us for the summit (thanks Zara!). Read-only etherpad notes here.

For lunch I met up with my HP colleagues as we secured two tables in the hot buffet lunch area and had some great chats about life, the universe and probably OpenStack. Our team has changed a lot over the past year, so it was nice to meet a few new people and put the IRC nick to a face in several cases. With lunch in another building, I was able to once again take advantage of the Japanese garden that exists as pathways between the hotels. It was such a peaceful space to walk through in the midst of the chaos that is the summit.

I spent my afternoon in the Developer Lounge, mostly chatting with OpenStack folks I usually don’t get to catch up with. In spite of my general shyness and typical dislike for hallway tracks, it actually was a useful and enjoyable afternoon. The day rounded out with a Release Management session where they dug into the internals of processes, scripts, tagging and everything related to the releases of each component of OpenStack. For details, check out the read-only etherpad here.

Thursday evening, Alex Eng invited Steve Kowalik and me out for dinner with Carlos and Diana Munoz. Steve and I have been working with Carlos and Alex on the Zanata migration for some time now, so it was great to spend time together catching up on a more personal level and to meet Carlos’ wife. Plus, the tempura that we had was delicious.

Friday was contributor meetup day! We had Infrastructure sessions spanning both the morning and afternoon. After collecting several agenda items, several of us core/root members of the Infrastructure team made our way to a tatami mat hut in the Japanese garden to evaluate team priorities for the cycle. Our focus will be continuing projects like Zuulv3, but also showing support for, and prioritizing reviews related to, the infra-cloud, which we anticipate will expand our test pool significantly.

Infrastructure team contributor meetup/work session

After the core/root meeting, I headed down to the i18n contributors meetup where they were discussing Stackalytics integration when I arrived. We chatted some about OpenStackID and ways to collect user data for statistics, as well as having an opportunity to fiddle with the API some and give feedback as to improvements that would help us out.

i18n contributor meetup

I had lunch with Clint Adams and Steve before heading back to the Infrastructure afternoon session. During the afternoon session I was able to chat with Jeremy Stanley, Clark Boylan and Jim Blair about how to handle the proposed translations check site. I also made some last minute changes so we could finally deploy the project that our Outreachy intern Emma Barber worked on over the summer, and then beyond the internship period, finishing just a few days ago. It was pretty exciting to get that finally launched, and it has already shown itself to be a useful addition to our infrastructure!

With that, the summit came to a close. I had one last dinner with several of my OpenStack Infrastructure colleagues, a delicious teppanyaki dinner at Steak House Hama in Rappongi. Saturday it was time to finally go home!

More photos from the summit here:

by pleia2 at November 01, 2015 08:20 PM

Mitaka OpenStack Summit Day 2

The opening keynotes on Wednesday had a pretty common theme: Neutron. Mark Collier shared that in the Liberty cycle, for the first time Neutron was the most active project in OpenStack. This is really inspiring news for the community, as just two years ago there was concern about the commitment to the project and adoption statistics. Mark went on to share that 89% of respondents to the latest OpenStack User Survey said they’re using Neutron in production. Keynotes continued with former Neutron PTL Kyle Mestery, who gave a quick history of Neutron and spoke about design goals and the new Kuryr project, which focuses on networking for containers. Company-wise, there were keynotes from NTT Resonant, Rackspace, SK Telecom, Rakuten, CyberAgent and IBM about their use of OpenStack. Many of these companies shared basic details of their deployments and stressed the win for their customers both in terms of cost and deployment and feature release time (fast!). Full video of the keynotes here.

Directly after the keynotes we had the OpenStack Infrastructure work sessions on our transition to Masterless Puppet. There are several moving parts to this transition, including changes to how we use hiera, changes to and additions of several scripts and exploring how we handle PuppetDB and PuppetBoard in the Puppet Masterless world. I admit to not being as productive during this session as I would have liked, but I did manage to catch up on the problem space and hope I can help with future improvements. I hope our very public experience in the move away from having a Puppetmaster is valuable to other teams looking to do the same. Read-only link to the etherpad here.

Infra team assembled for our work sessions!

At lunch I was finally able to meet Christian Berendt, who has been very helpful with the technical review of the book I’m working on. He met Matt Fischer and me at one of the many on-site restaurants and we all talked through the current status of the book and the path forward to completion. The book continues to be a challenging project, but it’s always energizing to meet up with the other folks who are spending time on it to brainstorm and push through the difficult parts.

I went to a session about Ironic third party plugin testing after lunch. Mike Perez, former Cinder PTL, shared his experience with requiring testing in the third party space for Cinder and had a lot of valuable feedback. There was mention of the Third Party Team, which has regular meetings, the openstack-ci module to aid in deployment of the CI, discussion of potential milestone deadlines over the next couple of cycles, and speculation as to possible hardware requirements. Read-only etherpad here.

It was then off to a QA session about the new health dashboard to track problems and failures in our CI system so they can be checked by anyone when a disruption occurs. An initial prototype has been launched, so the discussion centered around the future of scaling the dashboard so it can be introduced to the wider community. This included concerns like backups and bottlenecks like subunit2sql performance fixes. I volunteered to fix up the UI for the health dashboard so that it matches the rest of our pages, and should have that done next week. Read-only link to the Etherpad from the session here.

My final session of the day was on a proposal for Nodepool plugins. The proposal sought to address the needs of testing on bare metal and in containers directly. The consensus from the Infrastructure team tended to be that we really want to use the support for containers and bare metal that already exists in, or is being developed for, OpenStack itself. This will still require changes to Nodepool, but the hope is that it won’t require the re-architecture that a plugin system may require.

Yolanda Robla leads the Nodepool Plugins session

With that, the day wound down. The official evening party was put on by HP, Scality, Cisco and Bit-isle and took place at the beautiful Happo-En park. It was a shuttle ride or a 10 minute walk from the venue, we went with the latter. The event had several indoor spaces with refreshments (sushi! tempura! beer! sake!) and entertainment like drumming, various types of dance and sumo.

It also had a beautiful outdoor space, with a park to walk through, water features, bridges, and a whole fleet of bonsai trees, one of which was 520 years old. Given my tolerance for crowded parties, having a space where I could escape to and get some fresh air in a quieter space is important for my enjoyment of a party.

I wrapped up the evening chatting with some colleagues in one of the quiet outdoor spaces and managed to get back to the hotel not too late. It was time to get some rest for Thursday!

More photos from the evening here:

by pleia2 at November 01, 2015 01:25 PM

October 30, 2015

Akkana Peck

HDMI presentation setup on Linux, Part II: problems and tips

In Part I of HDMI Presentation Setup on Linux, I covered the basics of getting video and audio working over HDMI. Now I want to cover some finer-grained details: some problems I had, and ways to make it easier to enable HDMI when you need it.

Testing follies, delays, and screen blinking/flashing woes

While I was initially trying to get this working, I was using my own short sound clip (one of the alerts I use for IRC) and it wasn't working. Then I tried the test I showed in Part I, $ aplay -D plughw:0,3 /usr/share/sounds/alsa/Front_Center.wav, and that worked fine. I tried my sound clip again -- nothing. I noticed that my clip was mono and 8-bit while the ALSA sample was stereo and 16-bit, and I wasted a lot of time in web searches on why HDMI would play one and not the other.

Eventually I figured out that the reason my short clip wasn't playing was that there's a delay when switching on HDMI sound, and the first second or two of any audio may be skipped. I found lots of complaints about people missing the first few seconds of sound over HDMI, so this problem is quite common, and I haven't found a solution.

So if you're giving a talk where you need to play short clips -- for instance, a talk on bird calls -- be aware of this. I'm probably going to make a clip of a few seconds of silence, so I can play silence before every short clip to make sure I'm fully switched over to HDMI before the clip starts: aplay -D plughw:0,3 silence.wav osprey.wav
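
If you don't already have a suitable silence.wav, sox can generate one (assuming sox is installed; the three-second length is an arbitrary choice):

# Generate 3 seconds of 48kHz stereo silence
sox -n -r 48000 -c 2 silence.wav trim 0.0 3.0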

Another problem, probably related, when first starting an audio file: the screen blinks briefly off then on again, then blinks again a little while after the clip ends. ("Flicker" turns out to be a better term to use when web searching, though I just see a single blink, not continued flickering.) It's possible this is something about my home TV, and I will have to try it with another monitor somewhere to see if it's universal. It sounds like kernel bug 51421: Enabling HDMI sound makes HDMI video flicker, but that bug was marked resolved in 2012 and I'm seeing this in 2015 on Debian Jessie.

Making HDMI the sound default

What a pain, to have to remember to add -D plughw:0,3 every time you play a sound. And what do you do for other programs that don't have that argument?

Fortunately, you can make HDMI your default sound output. Create a file in your home directory called .asoundrc with this in it (you may be able to edit this down -- I didn't try) and then all audio will go to HDMI:

pcm.dmixer {
  type dmix
  ipc_key 1024
  ipc_key_add_uid false
  ipc_perm 0660
  slave {
    pcm "hw:0,3"
    rate 48000
    channels 2
    period_time 0
    period_size 1024
    buffer_time 0
    buffer_size 4096
  }
}

pcm.!default {
  type plug
  slave.pcm "dmixer"
}

Great! But what about after you disconnect? Audio will still be going to HDMI ... in other words, nowhere. So rename that file:

$ mv .asoundrc asoundrc-hdmi
Then when you connect to HDMI, you can copy it back:
$ cp asoundrc-hdmi .asoundrc 

What a pain, you say again! This should happen automatically!

That's possible, but tricky: you have to set up udev rules and scripts. See this Arch Linux discussion on HDMI audio output switching automatically for the gory details. I haven't bothered, since this is something I'll do only rarely, when I want to give one of those multimedia presentations I sometimes contemplate but never actually give. So for me, it's not worth fighting with udev when, by the time I actually need HDMI audio, the udev syntax probably will have changed again.

Aliases to make switching easy

But when I finally do break down and design a multimedia presentation, I'm not going to be wanting to do all this fiddling in the presentation room right before the talk. I want to set up aliases to make it easy.

There are two things that need to be done in that case: make HDMI output the default, and make sure it's unmuted.

Muting can be done automatically with amixer. First run amixer with no arguments to find out the channel name (it gives a lot of output, but look through the "Simple mixer control" lines, or speed that up with amixer | grep control).

Once you know the channel name (IEC958 on my laptop), you can run: amixer sset IEC958 unmute The rest of the alias is just shell hackery to create a file called .asoundrc with the right stuff in it, and saving .asoundrc before overwriting it. My alias in .zshrc is set up so that I can say hdmisound on or hdmisound off (with no arguments, it assumes on), and it looks like this:

# Send all audio output to HDMI.
# Usage: hdmisound [on|off], default is on.
hdmisound() {
    if [[ $1 == 'off' ]]; then
        if [[ -f ~/.asoundrc ]]; then
            mv ~/.asoundrc ~/.asoundrc.hdmi
        fi
        amixer sset IEC958 mute
    else
        if [[ -f ~/.asoundrc ]]; then
            mv ~/.asoundrc ~/.asoundrc.nohdmi
        fi
        cat >> ~/.asoundrc <<EOF
pcm.dmixer {
  type dmix
  ipc_key 1024
  ipc_key_add_uid false
  ipc_perm 0660
  slave {
    pcm "hw:0,3"
    rate 48000
    channels 2
    period_time 0
    period_size 1024
    buffer_time 0
    buffer_size 4096
  }
}
pcm.!default {
  type plug
  slave.pcm "dmixer"
}
EOF
        amixer sset IEC958 unmute
    fi
}

Of course, I could put all that .asoundrc content into a file and just copy/rename it each time. But then I have another file I need to make sure is in place on every laptop; I decided I'd rather make the alias self-contained in my .zshrc.

October 30, 2015 05:57 PM

October 29, 2015


How to Encrypt Folders

For those of you who are concerned about three letter agencies or anyone else... CryptKeeper works great. You can install it by finding it in the Ubuntu Software Center or via the terminal window using:

sudo apt-get install cryptkeeper

To launch CryptKeeper, find it by clicking the Ubuntu icon at the top left and searching for it. Once you open the app it will put a little key symbol on your top panel.

To create an encrypted protected folder, click on the Cryptkeeper key applet and select "New Encrypted Folder"

Input a folder name and where to save the folder (maybe in your Home folder? or on your Desktop?) and then click the "Forward" button.

The next screen will ask you to input the password you will use to unlock the folder each time you mount it. Then, click the "Forward" button.

Your new encrypted folder will be created and will be ready to use!


Any time you want to access your encrypted folder, click on the CryptKeeper key applet on the top panel and select the folder you want.

It will ask you to type your password to mount it.


You can unmount the folder by going to the key applet and unchecking the folder.

There are also a few options in the applet, such as deleting the folder altogether or changing its password.

The program is a little tricky in that your encrypted folder will auto-unmount after a few minutes. Once it does, the folder will still look mounted but appear blank if you go into it. It might be confusing at first, but you'll get used to it. Personally, I like to go into the CryptKeeper preferences and change the "unmount after idle" setting to 60 minutes. This way I effectively mount and unmount manually.

This is a great program if you don't want to encrypt your entire /home folder when installing Ubuntu.

by iheartubuntu ( at October 29, 2015 12:05 AM

October 27, 2015

Akkana Peck

HDMI presentation setup on Linux, video and audio: Part I

For about a decade now I've been happily connecting to projectors to give talks. xrandr --output VGA1 --mode 1024x768 switches on the laptop's external VGA port, and xrandr --auto turns it off again after I'm disconnected. No fuss.

But increasingly, local venues are eschewing video projectors and instead using big-screen TVs, some of which offer only an HDMI port, no VGA. I thought I'd better figure out how to present a talk over HDMI now, so I'll be ready when I need to know.

Fortunately, my newest laptop does have an HDMI port. But in case it ever goes on the fritz and I have to use an older laptop, I discovered you can buy VGA to HDMI adaptors rather cheaply (about $10) on ebay. I bought one of those, tested it on my TV at home and it at least worked there. Be careful when shopping: you want to make sure you're getting something that takes VGA in and outputs HDMI, rather than the reverse. Ebay descriptions aren't always 100% clear on that, but if you check the gender of the connector in the photo and make sure it's right to plug into the socket on your laptop, you should be all right.

Once you're plugged in (whether via an adaptor, or native HDMI built into your laptop), connecting is easy, just like connecting with VGA:

xrandr --output HDMI1 --mode 1024x768

Of course, you can modify the resolution as you see fit. I plan to continue to design my presentations for a 1024x768 resolution for the forseeable future. Since my laptop is 1366x768, I can use the remaining 342-pixel-wide swath for my speaker notes and leave them invisible to the audience.
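
One way to get that layout, assuming the laptop panel shows up as LVDS1 (check xrandr's output for the real name on your machine), is to leave the panel at its native mode and anchor both outputs at the same origin, so the external screen only ever sees the leftmost 1024x768 region:

# Laptop panel at full width, HDMI showing only the left 1024x768 region
xrandr --output LVDS1 --mode 1366x768 --pos 0x0 \
       --output HDMI1 --mode 1024x768 --pos 0x0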

But for GIMP presentations, I'll probably want to use the full width of my laptop screen. --mode 1366x768 didn't work -- that resolution wasn't available -- but running xrandr with no arguments got me a list of available resolutions, which included 1360x768. That worked fine and is what I'll use for GIMP talks and other live demos where I want more screen space.
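To give a concrete idea of the layout I'm describing, here's a sketch of the xrandr calls involved. The output names (HDMI1 for the TV, LVDS1 for the laptop panel) are whatever xrandr reports on your machine, so treat them as examples:

# List the outputs and the modes each one supports.
xrandr

# TV gets 1024x768; the laptop panel stays at its native 1366x768.
# With both outputs positioned at 0x0, the TV shows only the left
# 1024x768 of the desktop, leaving the rightmost 342 pixels for notes.
xrandr --output HDMI1 --mode 1024x768 --pos 0x0 \
       --output LVDS1 --mode 1366x768 --pos 0x0

# When the talk is over, let xrandr pick sensible defaults again.
xrandr --auto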

Sound over HDMI

My Toastmasters club had a tech session where a few of us tried out the new monitor in our meeting room to make sure we could use it. One person was playing a video with sound. I've never used sound in a talk, but I've always wanted to find an excuse to try it. Alas, it didn't "just work" -- xrandr's video settings have nothing to do with ALSA's audio settings. So I had to wait until I got home so I could do web searches and piece together the answer.

First, run aplay -l, which should show something like this:

$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: Intel [HDA Intel], device 0: STAC92xx Analog [STAC92xx Analog]
  Subdevices: 0/1
  Subdevice #0: subdevice #0
card 0: Intel [HDA Intel], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Find the device number for the HDMI device in that output: in this case, it's 3 (which seems to be common on Intel chipsets).

Now you can run a test:

$ aplay -D plughw:0,3 /usr/share/sounds/alsa/Front_Center.wav
Playing WAVE '/usr/share/sounds/alsa/Front_Center.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Mono

If you don't hear anything, don't worry: the HDMI channel is probably muted if you've never used it before. Run either alsamixer or alsamixergui.

[alsamixergui with HDMI muted] [alsamixer]

Now find the channel representing your HDMI connection. (HDMI must be plugged in for this to work.) In alsamixer, it's called S/PDIF; in alsamixergui, it's called IEC958. If you look up either of those terms, Wikipedia will tell you that S/PDIF is the Sony/Philips Digital Interconnect Format, a data protocol and a set of physical specifications. Those physical specifications appear to have nothing to do with video, and use connectors that are nothing like HDMI. So it doesn't make much sense. Just remember that if you see IEC958 or S/PDIF in ALSA, that's probably your HDMI channel.

In the alsamixergui screenshot, IEC958 is muted: you can tell because the little speaker icon at the top of the column is bright white. If it were unmuted, the speaker icon would be grey like most of the others. Yes, this seems backward. It's Linux audio: get used to obscure user interfaces.

In the alsamixer screenshot, the mutes are at the bottom of each column, and MM indicates a channel is muted (like the Beep channel in the screenshot). S/PDIF is not muted here, though it appears to be at zero volume. (The 00 doesn't tell you it's at zero volume; 00 means it's not muted. What did I say about Linux audio?) ALSA apparently doesn't let you adjust the volume of HDMI output: presumably they expect that your HDMI monitor will have its own volume control. If your S/PDIF is muted, you can use your right-arrow key to arrow over to the S/PDIF channel, then type m to toggle muting. You can exit alsamixer with Ctrl-C (Q and Ctrl-Q don't work).

Now try that aplay -D command again and see if it works. With any luck, it will (loudly).

A couple of other tests you might want to try:

speaker-test -t sine -f 440 -c 2 -s 1 -D hw:0,3
plays a sine wave, and

speaker-test -c 2 -r 48000 -D hw:0,3
runs a general speaker test sequence.
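If you end up using HDMI audio a lot, one way to avoid typing -D plughw:0,3 everywhere is to point ALSA's default device at the HDMI output. A minimal sketch, assuming the card/device numbers from the aplay -l listing above (0,3); adjust them to match your hardware:

cat > ~/.asoundrc <<'EOF'
# Send default ALSA output to HDMI (card 0, device 3 in aplay -l).
# Delete this file to go back to the analog output.
pcm.!default {
    type plug
    slave.pcm "hw:0,3"
}
EOF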

In Part II of Linux HDMI Presentations, I'll cover some problems I had, and how to write an alias to make it easy to turn HDMI audio on and off.

October 27, 2015 08:36 PM

Elizabeth Krumbach

Mitaka OpenStack Summit Days 0 & 1

For Mitaka, the OpenStack Summit is only 4 days long, lasting from Tuesday through Friday. That didn't prevent Monday from being the kick-off of festivities! I had two events Monday evening, starting with the Women of OpenStack networking event at Aoyama Laputa Garden. I was happy to be around many familiar faces who all made the evening an enjoyable one. From there, I returned to the summit venue and then walked over to the HP employee party at a nearby restaurant, which rounded out my evening. Thankfully I was back at the hotel by 10PM; we have a long week ahead of us.

Loving the custom signage and napkins at the Women of OpenStack networking event!

Tuesday was the actual launch of the summit. The initial stats coming out say that this summit has just over 5,000 attendees from 56 countries, which makes it the biggest non-North American summit to date. My day began by attending several keynotes (all of which are in the video here: OpenStack Tokyo Summit Full Keynote). It continues to be inspiring to watch such diverse companies embracing not only the usage of open source and OpenStack, but also the contributions back to the community. It's noteworthy that contributing back is always a highlight of many of the keynotes given by companies at this event. Jonathan Bryce also talked about the cool new feature on the OpenStack website that takes various metrics and creates a table tracking age, maturity and adoption to help operators evaluate each component. He also announced an OpenStack certification that was built and developed with some of the training partners and will start being available to take in 2016.

The keynote from Lithium Technologies talked about their use of containers in their OpenStack deployment and how they’ve allowed the company to do some really interesting scaling and high availability work. Plus, the presenter did a live demo of updating a site in production as he replaced lasers with fish in the Croc Hunter game, I guess making it Croc Feeder! The keynote from Yahoo! Japan had the most interesting statistics, sharing that they have 50k instances across their OpenStack deployment, 20 petabytes of data storage for it and over 20 OpenStack clusters. It was also interesting to hear from Erica Brescia, COO and co-founder of Bitnami, who spoke to making cloud platforms as easy to contribute to as possible, which touched upon the commoditization of cloud and how companies manage to distinguish themselves in an ecosystem that has so much choice.

After Erica’s keynote I had to run out to meet up with some of my HP colleagues for a quick interview with Stephen Spector about the work I’m doing on the Infrastructure team. I’m admittedly not one for videos or live interviews in general, but I really love talking about my work, so I hope at least that was reflected in the video (which will be posted soon).

Interview with Stephen Spector, thanks to @hphelioncloud for tweeting, source

The rest of my Summit day was consumed with translations/i18n sessions. The first was a summit presentation on Get OpenStack to speak your language – OpenStack I18n Team Introduction by Ying Chun (Daisy) Guo of IBM, Carlos Munoz of Red Hat and KATO Tomoyuki of Fujitsu. Daisy began by talking about the short history of the i18n team, the current statistics of 40 language teams working on various languages, the 12 languages that fully translated Horizon for the Liberty cycle, and an overview of team organization and high priority targets for translations, including Horizon and the user guides. Carlos went on to talk about Zanata and the script workflow that we've developed in OpenStack for syncing between git, our code review system and Zanata. He also covered pending improvements, including glossary support, per-project permissions, statistics (including for Stackalytics), and better request management for people wishing to join teams who must be approved by translations coordinators. KATO then concluded the session by doing a demo of the OpenStack Zanata instance to demonstrate how to get started with using it and shared some best practices put in place by the Japanese team, noting that other teams would also have similar practices that potential translators should look into. He also mentioned the dashboard translation check site being worked on for Mitaka, which we created a spec for last cycle: Provide a translation check site for translators. Video of the presentation is available here.

Daisy, Carlos and KATO giving the i18n talk

I did some wandering around the Marketplace and then had lunch with my friend and colleague Steve Kowalik who I’ve most recently been doing a lot of translations infrastructure work with. Events of my afternoon continued with the session I was nominally leading on Translation Tool Support: What Do We Need to Improve? The discussion mostly centered around the experience and improvements needed for Zanata, which went live for the Liberty cycle translations in September. Aside from a few pain points, the major discussion happened around the need for statistics to be fed into Stackalytics and the barriers that exist to making that happen. We also touched upon the translation check site I mentioned earlier, and I think we now have a path forward to getting the tooling that had been used in the past for a privately hosted instance shared so we can see how to replicate it in OpenStack’s Infrastructure. Read-only Etherpad with notes from the session here.

Translations/i18n roundtable sessions attendees

Daisy ran the next session on Translation and Development: How Do They Work Together? where we spent a lot of time talking about how the freezing of branches and importing for the Liberty release worked and speculating as to what changes needed to be made so that the development cycle isn’t held up and the translators still have enough time to do their work. We also dove into some discussions around how the scripts currently work and preferences around handling stable branches that translators may want to go back to and update and have included in a stable release update. Read-only Etherpad with notes from the session here.

Jumping right back in with parties, this evening I attended the Core Reviewers party put on by HP. This party is usually a highlight of the summits for me, and this time was no exception. It was hosted at Sengakuji Temple which houses a small museum related to the on site cemetery of the famous Forty-seven Ronin who avenged their master’s death and then committed seppuku (ritual suicide) in lieu of other punishment. The event itself had a story of these Ronin, complete with actors who we were able to pose with:

The event had an assortment of Japanese food and drink, I went with sake and okonomiyaki, yum! Other entertainment included some live calligraphy, a short kabuki dance and various musical performances. We were also able to bring incense to the graves of the Ronin warriors to pay respects.

Off to a good start to the summit! Looking forward to the next three days.

by pleia2 at October 27, 2015 02:41 PM

October 26, 2015

Jono Bacon

An Experiment In Reviving Dead Open Source Projects

Earlier this week I did a keynote at All Things Open. While the topic covered the opportunity of us building effective community collaboration and speeding up the development of Open Source and innovation, I also touched on some of the challenges.

One of these challenges is sustainability. There are too many great Open Source projects out there that are dead.

My view, although some may consider it rather romantic, is that there is a good maintainer out there for the vast majority of these projects, but the project and the new maintainer just haven’t met yet. So, this got me thinking…I wonder if this theory is actually true, and if it is, how do we connect these people and projects together?

While on the flight home I started thinking of what this could look like. I then had an idea of how this could work and I have written a little code to play with it. This is almost certainly the wrong solution to this problem, but I figured it could be an interesting start to a wider discussion for how we solve the issue of dead projects.

The Idea

The basic crux of my idea is that we provide a simple way for projects to indicate that a project needs a new maintainer. The easiest way to do this is to add a file into the source tree of the project.

This file is an .adopt file which basically includes some details about the project and indicates if it is currently maintained:

For example:

maintained = no
name = Jokosher
description = An audio multitracker built for the GNOME desktop.
category = Audio
repo =
discussion =
languages = Python

name = Bob Smith
email =

Now, this is a crude method of specifying the main bits of a project and much of this format will need tuning (e.g. we could pull languages and frameworks out into a new block). You get the drift though: this is metadata about a project that also indicates (a) whether it is maintained, (b) what the key resources are for someone to get a good feel for the project, and (c) who the contact would be to help a new potential maintainer come in.

With this file available in the source tree, it should be publicly available (e.g. the raw file on GitHub). A link to this file would then be pasted into a web service that adds it to a queue.

This queue is essentially a big list of .adopt files from around the web. A script then inspects each of these .adopt files and parses the data out into a database.
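As a rough illustration of what that scan involves (this is not the code in the repository, just a sketch using the field names from the example above), the queue pass could be as simple as:

#!/bin/sh
# Fetch each .adopt URL listed in queue.list and report the projects
# that are still marked as unmaintained.
while read -r url; do
    adopt=$(curl -fsSL "$url") || continue
    maintained=$(printf '%s\n' "$adopt" | sed -n 's/^maintained *= *//p')
    project=$(printf '%s\n' "$adopt" | sed -n 's/^name *= *//p' | head -n 1)
    if [ "$maintained" = "no" ]; then
        echo "Needs a maintainer: $project ($url)"
    fi
done < queue.list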

This database is then used to make this list of unmaintained projects searchable in some way. For example, you could search by category or programming languages. While maintained continues to be set to no the project will remain on the list.

When a suitable maintainer steps up and the project is alive again, all the maintainer needs to do is set this maintained line to yes. On the next scan of the queue, that particular .adopt file will be identified as now maintained and it will be removed, thus not appearing in the database.

A First Step

To provide a sense of how this could work I threw some Python together at

It is built using CherryPy to keep it simple. I wanted to avoid a full-fledged Django-type framework until the core premise of how this works is fleshed out. A caveat here: this is a really quick, thrown-together prototype designed to encourage some discussion and ideation.

It works like this:

  • Run the server script to spin up a local webserver that will display the empty queue. You can then add some remotely or locally hosted .adopt files by clicking the button at the top of the page. I have included three examples on GitHub 1 2 3. These are added to queue.list.
  • You then run the script that scans the queue and creates a sqlite3 database with the data.
  • The website then includes a really simple and crude list of projects and the links to the relevant resources (e.g. code, discussion).

Now, as you can tell, I have only spent a few hours knocking this together and there are many things missing. For example:

  • It doesn’t include the ability to search for projects or search by language.
  • The schema is a first cut and needs a lot of care and attention.
  • The UI is very simplistic.
  • There is barely any error-checking.

Topics For Discussion

So, this is a start. I think there are a lot of interesting topics for discussion here though:

  • Is this core concept a good idea? There is a reasonable likelihood it isn’t, but that is the goal of all of this…let’s discuss it. :-)
  • If it is the core of a good idea, how can the overall approach be improved and refined?
  • What kind of fields should be in an .adopt file? How do we include the most important pieces of information while keeping the barrier to entry low for projects?
  • What should be the hand-off to encourage someone to explore and ultimately maintain a project? A list of dead projects is one thing but there could be instructions, guides, and other material to help people get a sense of how they maintain a project.
  • Maintaining a project is a great way for students to build strong skills and develop a resume – could this be a carrot and stick for encouraging people to revive dead projects?
  • What kind of metrics would need to be tracked in this work?

To keep things simple and consistent I would like to encourage this discussion over on the project’s issue tracker. Share your comments, thoughts, and methods of improvement there.


by Jono Bacon at October 26, 2015 05:11 AM

October 22, 2015

Akkana Peck

Non-free software can mean unexpected surprises

I went to a night sky photography talk on Tuesday. The presenter talked a bit about tips on camera lenses and exposures, then showed a raw image and prepared to demonstrate how to process it to bring out the details.

His slides disappeared, the screen went blank, and then ... nothing. He wrestled with his laptop for a while. Finally he said "Looks like I'm going to need a network connection", left the podium and headed out the door to find someone to help him with that.

I'm not sure what the networking issue was: the nature center has open wi-fi, but you know how it is during talks: if anything can possibly go wrong with networking, it will, which is why a good speaker tries not to rely on it. And I'm not blaming this speaker, who had clearly done plenty of preparation and thought he had everything lined up.

Eventually they got the network connection, and he connected to Adobe. It turns out the problem was that Adobe Photoshop is now cloud-based. Even if you have a local copy of the software, it insists on checking in with Adobe at least every 30 days. At least, that's the theory. But he had used the software on that laptop earlier that same day, and thought he was safe. But that wasn't good enough, and Photoshop picked the worst possible time -- a talk in front of a large audience -- to decide it needed to check in before letting him do anything.

Someone sitting near me muttered "I'd been thinking about buying that, but now I don't think I will." Someone else told me afterward that all Photoshop is now cloud-based; older versions still work, but if you buy Photoshop now, your only option is this cloud version that may decide ... at the least opportune moment ... that you can't use your software any more.

I'm so glad I use Free software like GIMP. Not that things can't go wrong giving a GIMP talk, of course. Unexpected problems or bugs can arise with any software, and you take that risk any time you give a live demo.

But at least with Free, open source software like GIMP, you know you own the software and it's not suddenly going to refuse to run without a license check. That sort of freedom is what makes the difference between free as in beer, and Free as in speech.

You can practice your demo carefully before the talk to guard against most bugs and glitches; but all the practice in the world won't guard against software that won't start.

I talked to the club president afterward and offered to give a GIMP talk to the club some time soon, when their schedule allows.

October 22, 2015 04:24 PM

October 19, 2015

Eric Hammond

Protecting Critical SNS Topics From Deletion

stops even all-powerful IAM admin users in their tracks

I run some SNS topics as a public service where anybody can subscribe their own AWS Lambda functions, SQS queues, and email addresses.

If one of these SNS topics were to be accidentally deleted, I could recreate it with the same name and ARN. However, all of the existing user subscriptions would be lost and I would not be able to restore them myself. Each of the hundreds of users would have to figure out what happened and re-subscribe the appropriate targets with the correct permissions.

I don’t want these SNS topics to be deleted. Ever.

Retention Policy

Where the SNS Topics are created using CloudFormation, I put in a basic level of protection by specifying the following attribute on both the AWS::SNS::Topic and AWS::SNS::TopicPolicy resources:

"DeletionPolicy" : "Retain"

This tells CloudFormation to leave the SNS topic and topic policy untouched if the CloudFormation template itself is deleted. I’ve tested this and it works well.

However, I also want to protect the SNS Topic from accidental deletion in the AWS console, aws-cli, or other tools.

Deny Delete

I found that I can extend the SNS topic policy with a Deny statement for sns:DeleteTopic and this blocks delete-topic attempts even with an IAM user that has full admin privileges.

Simply add a statement to the list in the policy document as demonstrated here:

"PolicyDocument" : {
  "Version": "2008-10-17",
  "Statement": [
    ...existing policy statements...,
      "Sid": "DenyTopicDelete",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      "Action": [
      "Resource": "*"

You can specify the specific SNS Topic ARN as the "Resource" value if you like being explicit.

To reduce confusion, you may also remove "sns:DeleteTopic" from the "Allow" policy statement where it appears by default, though the "Deny" overrides it either way.

Delete Override

Unfortunately (or fortunately) this doesn’t completely prevent a determined admin from being able to take appropriate action to remove an SNS topic that has been set up this way. It is primarily intended to prevent accidental deletion and to ensure that any deletion is confirmed by explicit and conscious steps.

If you create an SNS topic policy with this statement that denies all delete attempts, and you really need to delete the SNS topic, then you should first modify the topic policy. Remove the deletion protection statement, then delete the topic.
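With the aws-cli, that two-step delete might look something like this (the topic ARN and the statement Sid are placeholders; use your own):

topic_arn="arn:aws:sns:us-east-1:123456789012:example-topic"

# Pull down the current topic policy so the Deny statement can be edited out.
aws sns get-topic-attributes \
  --topic-arn "$topic_arn" \
  --query 'Attributes.Policy' \
  --output text > topic-policy.json

# ... edit topic-policy.json and remove the "DenyTopicDelete" statement ...

# Push the edited policy back, then the delete will succeed.
aws sns set-topic-attributes \
  --topic-arn "$topic_arn" \
  --attribute-name Policy \
  --attribute-value file://topic-policy.json

aws sns delete-topic --topic-arn "$topic_arn"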

I haven’t tried to find a way to prevent an admin from modifying the SNS topic policy. I’m a little afraid of what would happen if that was successful, though I guess we always have the AWS root account as a safety backup.

Speaking of which…

Root Account

It is important to note that this approach does nothing to stop the AWS root account from deleting the SNS topic. If you use the email address and password to sign in on the console, or use root account credentials in the aws-cli, then you are unprotected. That account cannot be restricted.

Amazon recommends that you lock away the root account keys. I recommend that you go further and throw away the password for the root account.

Original article and comments:

October 19, 2015 11:00 PM

October 17, 2015

Eric Hammond

AWS IAM "ReadOnlyAccess" Managed Policy Is Too Permissive (For Us)

taking away some read-only permissions that Amazon allows

Amazon has created an IAM Managed Policy named ReadOnlyAccess, which grants read-only access to active resources on most AWS services. At Campus Explorer, we depend on this convenient managed policy for our read-only roles.

However, our use of “read-only” doesn’t quite match up with the choices that Amazon made when creating this policy. This isn’t to say that Amazon is wrong, just that our concept of “read-only” differs slightly from Amazon’s.

The biggest difference is that we want our read-only roles to be able to see the architecture of our AWS systems and what resources are active, but we would prefer that the role not be able to read sensitive data from DynamoDB, S3, Kinesis, SQS queue messages, CloudFormation template parameters, and the like.

For example, third-party services often ask for a ReadOnlyAccess role to allow them to analyze your AWS account and provide helpful tips on how to improve security or cost control. This sounds good, but do you really want them to be reading messages from Kinesis streams and SQS queues or downloading the contents of S3 objects?

To better protect our data when creating read-only roles, we not only attach the ReadOnlyAccess managed policy from Amazon, but we also attach our own ReadOnlyAccessDenyData managed policy that uses Deny statements to take away a number of the previously allowed permissions.
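For reference, attaching Amazon's managed policy itself to a role looks like this with the aws-cli (the role name is just an example); the ReadOnlyAccessDenyData policy created below then gets attached to the same role:

aws iam attach-role-policy \
  --role-name "readonly" \
  --policy-arn "arn:aws:iam::aws:policy/ReadOnlyAccess"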


These are our business’ current rules as compiled by sysadmin extraordinaire, Jennine Townsend. Your business needs may differ. Feel free to tighten or loosen as you see fit. The goal here is to let Amazon do most of the work in managing the ReadOnlyAccess policy, but tweak it a bit to fit your particular situation.

Using the aws-cli, you can create a supplemental managed policy with a command like:

policy_arn=$(aws iam create-policy \
  --policy-name "$policy_name" \
  --description 'Use in combination with Amazon managed ReadOnlyAccess policy.' \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ReadOnlyAccessDenyData",
        "Effect": "Deny",
        "Action": [
          ...data-reading actions to deny (DynamoDB, S3, Kinesis, SQS, etc.)...
        ],
        "Resource": "*"
    }]
  }' \
  --output text \
  --query 'Policy.Arn')
echo policy_arn=$policy_arn

Attach the managed policy to your existing role, group, or user.

# role
role_name="readonly" # Replace with your role name
aws iam attach-role-policy \
  --role-name "$role_name" \
  --policy-arn "$policy_arn"

# group
group_name="readonly" # Replace with your group name
aws iam attach-group-policy \
  --group-name "$group_name" \
  --policy-arn "$policy_arn"

# user
user_name="bilbo" # Replace with your user name
aws iam attach-user-policy \
  --user-name "$user_name" \
  --policy-arn "$policy_arn"


If you created the above managed policy and wish to remove it, then detach it from any users, groups, or roles you had attached it to:

aws iam detach-role-policy \
  --role-name "$role_name" \
  --policy-arn "$policy_arn"

aws iam detach-group-policy \
  --group-name "$group_name" \
  --policy-arn "$policy_arn"

aws iam detach-user-policy \
  --user-name "$user_name" \
  --policy-arn "$policy_arn"

and delete the managed policy itself:

aws iam delete-policy \
  --policy-arn "$policy_arn"

Original article and comments:

October 17, 2015 04:06 AM

October 16, 2015

Jono Bacon

All Things Open Keynote, Talks, Book Signing and More

Tomorrow morning at the ungodly hour of 6am I board a flight to Raleigh for the All Things Open conference. The conference starts on Monday, but I am flying out earlier for a bunch of meetings with folks from

This is my first time at All Things Open but it looks like they have a stunning line up of speakers and some great folks attending.

I just wanted to share some of the things I will be doing there:

  • Tues 20th Oct at 9.05am – Keynote – I will be delivering a keynote about the opportunity for Open Source and effective collaboration and community leadership to solve problems and innovate. The presentation will delve into the evolution of technology, where Open Source plays a role, the challenges we need to solve, and the opportunity everyone in the room can participate in.
  • Tues 20th Oct at 12.15pm – Lightning Talk – I will be giving one of the lightning talks. It will be an introduction to the fascinating science of behavioral economics and how it can provide a scaffolding for building effective teams and communities.
  • Tues 20th Oct at 2.15pm – Presentation – I will be delivering a presentation called A Crash Course in Bacon Flavored Community Management. In it I will be discussing the key components of building strong and empowered communities, how we execute on those elements, how we manage politics and conflict, and how we track success and growth.
  • Tues 20th Oct at 3.00pm – Book Signing – I will be signing free copies of The Art of Community at the booth (booth #17). Come and say hi, get a free book, and have a natter. Books are limited, so get there early.

As ever, if you would like to have a meeting with me, drop me an email to and we can coordinate a time.

I hope to see you there!

by Jono Bacon at October 16, 2015 08:39 PM

Eric Hammond

AWS IAM "ReadOnlyAccess" Managed Policy Is Missing Features

adding in some read-only permissions that Amazon missed

Amazon has created a Managed Policy in IAM named ReadOnlyAccess, which grants read-only access to AWS resources and API calls that make no changes to the account. At Campus Explorer, we depend on this convenient managed policy for our read-only roles–though we add a few Deny statements as we don’t believe, for example, that pulling messages off of an SQS queue really belongs in a read-only role.

In theory, and mostly in practice, Amazon manages this managed policy so that we don’t have to keep up with all of the changing API calls from new services and new features in existing services.

My colleague, Jennine Townsend, practices security-conscious living and therefore spends most of her time using the AWS console and AWS CLI with an IAM role that has read-only access to our AWS accounts. She switches to roles that have permission to make changes only when necessary (and then uses code that has been tested and added to revision control).

Last week, Jennine was streaming the AWS re:Invent keynotes where Amazon announced some great new services and new features for existing services. Naturally, she went to check them out using the AWS console and aws-cli.

However, even where these services were available in the console and CLI, she ran into permission problems. It turns out that Amazon had not (and still has not) updated the ReadOnlyAccess managed policy in IAM.

This is exactly what a managed policy is for. We attach it to our own roles and let Amazon manage what the specific rules are that make the most sense for that policy without every customer having to make the updates themselves.
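If you want to check whether Amazon has picked up a new service in ReadOnlyAccess yet, you can dump the current default version of the managed policy with the aws-cli; a quick sketch:

readonly_arn="arn:aws:iam::aws:policy/ReadOnlyAccess"

# Look up which version of the managed policy is currently the default...
version=$(aws iam get-policy \
  --policy-arn "$readonly_arn" \
  --query 'Policy.DefaultVersionId' \
  --output text)

# ...then print that version's policy document.
aws iam get-policy-version \
  --policy-arn "$readonly_arn" \
  --version-id "$version" \
  --query 'PolicyVersion.Document'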

Note to Amazon: Please add “Update Managed Policies” to the checklist for launching APIs for new services and features.


Jennine put together the following managed policy that you can add to your account so that you can access the new features that AWS is making available in Elasticsearch and Config Rules.

This also provides some read-only features that are missing for other services like CloudTrail. For example, CloudTrail recommends including “cloudtrail:LookupEvents” in a read-only policy, but that is missing in the managed policy provided by Amazon.

  "Version": "2012-10-17",
  "Statement": [
      "Sid": "ReadOnlyAccessSupplemental",
      "Action": [
      "Effect": "Allow",
      "Resource": "*",
      "Condition": { "Bool": { "aws:SecureTransport": "true" } }


Using the aws-cli, you can create this managed policy with a command like:

policy_arn=$(aws iam create-policy \
  --policy-name "$policy_name" \
  --description 'Use in combination with Amazon managed ReadOnlyAccess policy.' \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ReadOnlyAccessSupplemental",
        "Action": [
          "cloudtrail:LookupEvents",
          ...read-only actions for the new services...
        ],
        "Effect": "Allow",
        "Resource": "*",
        "Condition": { "Bool": { "aws:SecureTransport": "true" } }
    }]
  }' \
  --output text \
  --query 'Policy.Arn')
echo policy_arn=$policy_arn

Then attach the managed policy to your existing role, group, or user.

# role
role_name="readonly" # Replace with your role name
aws iam attach-role-policy \
  --role-name "$role_name" \
  --policy-arn "$policy_arn"

# group
group_name="readonly" # Replace with your group name
aws iam attach-group-policy \
  --group-name "$group_name" \
  --policy-arn "$policy_arn" 

# user
user_name="bilbo" # Replace with your user name
aws iam attach-user-policy \
  --user-name "$user_name" \
  --policy-arn "$policy_arn"


If you created the above managed policy and wish to remove it, then detach it from any users, groups, or roles you had attached it to:

aws iam detach-role-policy \
  --role-name "$role_name" \
  --policy-arn "$policy_arn"

aws iam detach-group-policy \
  --group-name "$group_name" \
  --policy-arn "$policy_arn" 

aws iam detach-user-policy \
  --user-name "$user_name" \
  --policy-arn "$policy_arn"

and delete the managed policy itself:

aws iam delete-policy \
  --policy-arn "$policy_arn"


The recommended readonly rules for Amazon Elasticsearch, AWS Config, and CloudTrail were found in these documents:

Original article and comments:

October 16, 2015 05:16 PM

October 15, 2015

Akkana Peck

Viewer for email attachments in Office formats

I seem to have fallen into a nest of Mac users whose idea of email is a text part, an HTML part, plus two or three or seven attachments (no exaggeration!) in an unholy combination of .DOC, .DOCX, .PPT and other Microsoft Office formats, plus .PDF.

Converting to text in mutt

As a mutt user who generally reads all email as plaintext, normally my reaction to a mess like that would be "Thanks, but no thanks". But this is an organization that does a lot of good work despite their file format habits, and I want to help.

In mutt, HTML mail attachments are easy. This pair of entries in ~/.mailcap takes care of them:

text/html; firefox 'file://%s'; nametemplate=%s.html
text/html; lynx -dump %s; nametemplate=%s.html; copiousoutput

Then in .muttrc, I have

auto_view text/html
alternative_order text/plain text

If a message has a text/plain part, mutt shows that. If it has text/html but no text/plain, it looks for the "copiousoutput" mailcap entry, runs the HTML part through lynx (or I could use links or w3m) and displays that automatically. If, reading the message in lynx, it looks to me like the message has complex formatting that really needs a browser, I can go to mutt's attachments screen and display the attachment in firefox using the other mailcap entry.

Word attachments are not quite so easy, especially when there are a lot of them. The straightforward way is to save each one to a file, then run LibreOffice on each file, but that's slow and tedious and leaves a lot of temporary files behind. For simple documents, converting to plaintext is usually good enough to get the gist of the attachments. These .mailcap entries can do that:

application/msword; catdoc %s; copiousoutput
application/vnd.openxmlformats-officedocument.wordprocessingml.document; docx2txt %s -; copiousoutput

Alternatives to catdoc include wvText and antiword.

But none of them work so well when you're cross-referencing five different attachments, or for documents where color and formatting make a difference, like mail from someone who doesn't know how to get their mailer to include quoted text, and instead distinguishes their comments from the text they're replying to by making their new comments green (ugh!). For those, you really do need a graphical window.

I decided what I really wanted (aside from people not sending me these crazy emails in the first place!) was to view all the attachments as tabs in a new window. And the obvious way to do that is to convert them to formats Firefox can read.

Converting to HTML

I'd used wvHtml to convert .doc files to HTML, and it does a decent job and is fairly fast, but it can't handle .docx. (People who send Office formats seem to distribute their files fairly evenly between DOC and DOCX. You'd think they'd use the same format for everything they wrote, but apparently not.) It turns out LibreOffice has a command-line conversion program, unoconv, that can handle any format LibreOffice can handle. It's a lot slower than wvHtml but it does a pretty good job, and it can handle .ppt (PowerPoint) files too.
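For example, converting a Word or PowerPoint attachment to HTML with unoconv looks something like this (it writes the output next to the input file by default):

unoconv -f html report.docx    # produces report.html
unoconv -f html slides.ppt     # works on PowerPoint files too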

For PDF files, I tried using pdftohtml, but it doesn't always do so well, and it's hard to get it to produce a single HTML file rather than a directory of separate page files. And about three quarters of PDF files sent through email turn out to be PDF in name only: they're actually collections of images of single pages, wrapped together as a PDF file. (Mostly, when I see a PDF like that I just skip it and try to get the information elsewhere. But I wanted my program at least to be able to show what's in the document, and let the user choose whether to skip it.) In the end, I decided to open a firefox tab and let Firefox's built-in PDF reader show the file, though popping up separate mupdf windows is also an option.

I wanted to show the HTML part of the email, too. Sometimes there's formatting there (like the aforementioned people whose idea of quoting messages is to type their replies in a different color), but there can also be embedded images. Extracting the images and showing them in a browser window is a bit tricky, but it's a problem I'd already solved a couple of years ago: Viewing HTML mail messages from Mutt (or other command-line mailers).

Showing it all in a new Firefox window

So that accounted for all the formats I needed to handle. The final trick was the firefox window. Since some of these conversions, especially unoconv, are quite slow, I wanted to pop up a window right away with a "converting, please wait..." message. Initially, I used a javascript: URL, running the command:

firefox -new-window "javascript:document.writeln('<br><h1>Translating documents, please wait ...</h1>');"

I didn't want to rely on Javascript, though. A data: URL, which I hadn't used before, can do the same thing without javascript:

firefox -new-window "data:text/html,<br><br><h1>Translating documents, please wait ...</h1>"

But I wanted the first attachment to replace the contents of that same window as soon as it was ready, and then have subsequent attachments open as new tabs in that window. It turned out that firefox is inconsistent about what -new-window and -new-tab do; there's no guarantee that -new-tab will show up in the same window you recently popped up with -new-window, and running just firefox URL might open in either the new window or the old, in a new tab or not, or might not open at all. And things got even more complicated after I decided that I should use -private-window to open these attachments in private browsing mode.

In the end, the only way firefox would behave in a repeatable, predictable way was to use -private-window for everything. The first call pops up the private window, and each new call opens a new tab in the private window. If you want two separate windows for two different mail messages, you're out of luck: you can't have two different private windows. I decided I could live with that; if it eventually starts to bother me, I can always give up on Firefox and write a little python-webkit wrapper to do what I need.

Using a file redirect instead

But that still left me with no way to replace the contents of the "Please wait..." window with useful content. Someone on #firefox came up with a clever idea: write the content to a page with a meta redirect.

So initially, I create a file pleasewait.html that includes the header:

<meta http-equiv="refresh" content="2;URL=pleasewait.html">

(Other HTML, charset information, etc. as needed.) The meta refresh means Firefox will reload the file every two seconds. When the first converted file is ready, I just change the header to redirect to URL=first_converted_file.html. Meanwhile, I can be opening the other documents in additional tabs.
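Outside the context of the real script, a shell sketch of that trick looks like this (the directory and file names here are made up):

mkdir -p /tmp/viewmail
cat > /tmp/viewmail/pleasewait.html <<'EOF'
<html><head><meta http-equiv="refresh" content="2;URL=pleasewait.html"></head>
<body><h1>Translating documents, please wait ...</h1></body></html>
EOF

firefox -private-window "file:///tmp/viewmail/pleasewait.html"

# ... run the slow conversions here ...

# Once the first converted document exists, retarget the refresh at it;
# Firefox picks up the change on its next two-second reload.
sed -i 's|URL=pleasewait.html|URL=first-converted.html|' /tmp/viewmail/pleasewait.html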

Finally, I added the command to my .muttrc. When I'm viewing a message either in the index or pager screens, F10 will call the script and decode all the attachments.

macro index <F10> "<pipe-message>~/bin/viewmailattachments\n" "View all attachments in browser"
macro pager <F10> "<pipe-message>~/bin/viewmailattachments\n" "View all attachments in browser"

Whew! It was trickier than I thought it would be. But I find I'm using it quite a bit, and it takes a lot of the pain out of those attachment-full emails.

The script is available at: viewmailattachments on GitHub.

October 15, 2015 09:18 PM

Jono Bacon

Goodbye XPRIZE, Hello GitHub

Some time ago I was introduced to Peter Diamandis, co-founder of XPRIZE, Human Longevity Inc, Planetary Resources, and some other organizations. We hit it off and he invited me to come and build community at the XPRIZE Foundation. His vision and mine were aligned: to provide a way in which anyone with passion and talent can play a role in XPRIZE’s ambitious mission to build a brighter future.

The ride at XPRIZE has been thrilling. When I started we really had no community outside of some fans on Twitter and Facebook. Today we have a community website, forum, wiki, documentation, and other infrastructure. We created the XPRIZE Think Tanks programme of community-driven local groups and now have groups across the United States, India, Asia, Europe, South America, Australia, and beyond. We have a passionate collaborative community working together to explore how they can innovate to solve major problems that face humanity.

Some of our earliest community members, at a sprint at the XPRIZE office

I am proud of my work at XPRIZE but even prouder of the tremendous work in the community. I am also proud of my colleagues at the foundation who were open to this new concept of community percolating into everything we do.

Although my experience at XPRIZE has been wonderful, I have missed the technology and Open Source world.

Something Jim Whitehurst, CEO of Red Hat, said to me a while back was that coming from Delta to Red Hat, and thus from outside of Open Source into Open Source, helped him to realize how special the Open Source culture and mindset is.

Likewise, while I never left Open Source, moving to XPRIZE was stepping back from the flame somewhat, and it helped me to see the kindness, creativity, agility, and energy that so many of us in the Open Source world take for granted.

As such, despite the rewarding nature of my work at XPRIZE, I decided that I wanted to get back closer to technology. There was a caveat though: I still wanted to be able to play a role in furthering the efficacy and impact of how we as human beings collaborate and build communities to do incredible things.

A New Journey

With this in mind, I am delighted to share that on the 14th November 2015 I will be joining GitHub as Director of Community.

GitHub are a remarkable organization. In recent years they have captured the mindshare of developers and provided the go-to place where people can create, share, and collaborate around code and other content. GitHub is doing great today but I think there is huge potential for what it could be in the future for building powerful, effective communities.

My role will be to lead GitHub’s community development initiatives, best practice, product development, and engagement.

My work will be an interesting mix of community engagement, policy, awareness, and developer relations, but also product management to enhance GitHub for the needs of existing and future communities.

I am also going to work to continue to ensure that GitHub is a safe, collaborative, and inclusive environment. I want everyone to have the opportunity to enjoy GitHub and be the best they can be, either within the communities they are part of on GitHub, or as part of the wider GitHub userbase.

Over the next few weeks I will be taking care of the handoff of my responsibilities at XPRIZE and my last day will be on Fri 30th Oct 2015. I will then be flying to Bangalore in India to keynote the Joomla World Conference, taking a little time off, and then starting my new position at GitHub on the 17th November 2015.

Thanks, everyone!

by Jono Bacon at October 15, 2015 06:10 PM

October 11, 2015

Elizabeth Krumbach

22 hours in Las Vegas

Faced with complicated routing options on the journey back from a conference, MJ decided to go with one that would put him on the ground in Las Vegas for 22 hours. 22 hours? That’s enough time for me to join him!

The flight from San Francisco to Las Vegas is quick and cheap. Flying into Las Vegas is always a treat, as flights tend to take you over the strip for a glitzy introduction to all the fun to be had. My Friday night flight landed around the same time as MJ’s (10PM) so we were able to meet in baggage claim to get a cab to our hotel. We typically stay in the fancy, new hotels on the strip in Las Vegas, but since it was only for one night I made the case for staying at Luxor, the giant Pyramid at the south end of the strip. We had a nice room and the slanted walls were not as troubling as I had feared. The rest of the night was spent in Luxor grabbing a late night burger at Backstage Deli, a couple drinks at the nice and quiet High Bar and a giant fruity vodka smoothie in a crazy plastic refillable cup before retiring to our room. Apply water and a few hours of sleep.

With our room on the 12th floor (the pyramid is 15 stories high), stepping just outside our room we had a nice view down into the center of the building, which was pretty cool.

Sleep was fitfully elusive, so I managed to catch the sun rise around 6:30AM and by the time the pools opened at 9AM I was ready to head down before too many people came down. With pools behind the pyramid, at this time of year the pyramid offered some shade in the first hour I was down there, so I was able to swim and relax in the shade before heading back up to the hotel to shower and pack before checking out.

After checking out we made our way down to the Bellagio for their well-rated weekend brunch buffet. The inside of the dining room was nothing to write home about, which was surprising given how lavish the rest of the Bellagio is. But the food was top notch. Spicy tuna hand rolls, cocktail shrimp, waffles, various fruits and desserts, and the obligatory Las Vegas buffet giant crab legs made for an enjoyable mid-day meal (and turned out to be my only meal of the day).

The end of the buffet marked MJ and I parting ways for a few hours. He was off to play cards and I hopped in an UberX (it just came to Vegas a few weeks ago!) and went to The National Atomic Testing Museum. I heard about the museum in an episode of Mysteries at the Museum, which has fleshed out my domestic museum-visiting plans for the next century; they go to so many fascinating little museums. The partnership with the Smithsonian gave me high hopes that I wasn’t walking into a tourist trap. My UberX driver whetted my appetite further as she mentioned that she had brought her students there before and highly recommended it (my sadness upon learning that a district teacher is driving an UberX on weekends to make some extra cash is a whole different blog post).

The museum was well worth the ride. It took me just under 2 hours to properly enjoy as I walked through the galleries and inspected various exhibits and videos. I almost missed the Ground Zero Theatre, but thankfully had my map so I got to enjoy the mildly shaking benches, wind and sound projected as you “experience” what it’s like to be one of the observers of a nuclear test in the desert of Nevada. I also learned a lot. Upon learning that there were dangerous atmospheric changes resulting from testing in the air and sea, the entire program went underground for decades. As in, literally, nuclear testing done under the ground. There were videos and whole galleries devoted to the development of technology, from drills to build the underground testing areas to monitors designed to track the blasts, survive them, and report data. It may not have been as impressive or iconic as a mushroom cloud, but the underground testing was pretty fascinating.

Most of my photos from Las Vegas yesterday are from the museum, you can see them here:

And since a friend of mine asked, there is indeed an Area 51 special exhibit. If I had to guess, this was not done in collaboration with the Smithsonian and instead just capitalizes on the popularity of things like Ancient Aliens. Now, I’m one of those skeptics who is a lifetime …fan(?) of Coast to Coast (many late nights listening to Art Bell during high school) and I totally intend on going on an Area 51 tour from Vegas some day. I watch the unbelievable alien documentaries and the X-Files is still one of my favorite shows of all time (I have an “I want to believe” necklace). As a skeptic, I don’t believe much of it, but I kind of wish I did. Maybe I’m just searching for non-conspiracy, real evidence. This exhibit was a rehash of everything I already knew about; it was pretty cheesy and leaned heavily in the direction of tourist trap. There were some solid bits about military testing in the desert and I spent more time in those sections, but all in all I can’t say I’d recommend it unless you’re into that kind of thing. Also, I learned that I have way too much of this stuff in my brain, haha!

I took another UberX back to Luxor, my driver this time was a local who was surprised to learn about the museum and is now considering bringing his STEM-inclined son there, woo! Getting back around 4:30PM I found myself with a couple options: therapeutic shopping or Titanic: The Artifact Exhibition. I’m not much of a shopper, but it was oddly tempting, but being me I went with the exhibit. The Titanic story is a compelling one, and I was watching documentaries about the discovery efforts pre-blockbuster. Still, I had avoided this exhibit in the past because there is contention in the scientific world about the value and morality of for-profit (not non-profit), high entertainment value exhibits like this. I was also worried it would be awful. I was finally swayed by really wanting to see the “Big Piece” which I’d recently seen in a documentary. I was pleasantly surprised. The artifacts and whole exhibit were very tastefully done. They did some great work with lighting and climate too, the exhibit getting darker and colder as we went through it to reflect the accident and sinking of the ship. No photography was allowed inside.

It was then time to meet up with MJ and make our way to the airport to conclude our visit.

I had fun, but admittedly things didn’t go perfectly. I also have a lot going on. Arguably this trip was a waste of time and I should really have stayed home to take care of things before all my travel coming up. On the other hand, I wanted to celebrate a promotion at work being confirmed this week (yay!) and with all that “a lot going on” I appreciated the break. I’m back to business today, it’s time to grab some lunch and then pack, pack, pack! My three week trip begins Tuesday morning.

by pleia2 at October 11, 2015 09:34 PM

Akkana Peck

How to get X output redirection back

X stopped working after my last Debian update.

Rather than run a login manager, I typically log in on the console. Then in my .zlogin file, I have:

if [[ $(tty) == /dev/tty1 ]]; then
  # do various things first, then:
  startx -- -dumbSched >& $HOME/.xsession-errors
fi

Ignore -dumbSched for now; it's a fix for a timing problem openbox has when bringing up initial windows. The relevant part here is that I redirect both standard output and standard error to a file named .xsession-errors. That means that if I run GIMP or firefox or any other program from a menu, and later decide I need to see their output to look for error messages, all I have to do is check that file.

But as of my last update, that no longer works. Plain startx, without the output redirection, works fine. But with the redirect, X pauses for five or ten seconds, then exits, giving me my prompt back but with messed-up terminal settings, so I have to type reset before I do anything else.

Of course, I checked that .xsession-errors file for errors, and also the ~/.local/share/xorg/Xorg.0.log file it referred me to (which is where X stores its log now that it's no longer running as root). It seems the problem is this:

Fatal server error:
(EE) xf86OpenConsole: VT_ACTIVATE failed: Operation not permitted

Which wasn't illuminating, but at least gave me a useful search keyword.

I found a fair number of people on the web having the same problem. It's related to the recent Xorg change that makes it possible to run Xorg as a regular user, not root. Not that running as a user should have anything to do with capturing standard output and error. But apparently Xorg running as a user is dependent on what sort of virtual terminal it was run from; and the way it determines the controlling terminal is by checking stderr (and maybe also stdout).

Here's a slightly longer description of what it's doing, from the ever useful Arch Linux forums.

I'm fairly sure there are better ways of determining a process's controlling terminal than using stderr. For instance, a casual web search turned up ctermid; or you could do checks on /dev/tty. There are probably other ways.

The Arch Linux thread linked above, and quite a few others, suggest adding the server option -keeptty when starting X. The Xorg manual isn't encouraging about this as a solution:

Prevent the server from detaching its initial controlling terminal. This option is only useful when debugging the server. Not all platforms support (or can use) this option.

But it does work.
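In my case that just means adding -keeptty to the server arguments in the .zlogin snippet above, something like:

startx -- -dumbSched -keeptty >& $HOME/.xsession-errors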

I found several bugs filed already on the redirection problem. Freedesktop has a bug report on it, but it's more than a year old and has no comments or activity: Freedesktop bug 82732: rootless X doesn't start if stderr redirected.

Redhat has a bug report: Xorg without root rights breaks by streams redirection, and supposedly added a fix way back in January in their package version xorg-x11-xinit-1.3.4-3.fc21 ... though it looks like their fix is simply to enable -keeptty automatically, which is better than nothing but doesn't seem ideal. Still, it does suggest that it's probably not harmful to use that workaround and ignore what the Xorg man page says.

Debian didn't seem to have a bug filed on the problem yet (not terribly surprising, since they only enabled it in unstable a few days ago), so I used reportbug to attempt to file one. I would link to it here if Debian had an actual bug system that allowed searching for bugs (they do have a page entitled "BTS Search" but it gives "Internal Server Error", and the alternate google groups bug search doesn't find my bug), or if their bug reporting system acknowledged new bugs by emailing the submitter the bug number. In truth I strongly suspect that reportbug is actually a no-op and doesn't actually do anything with the emailed report.

But I'm not sure the Debian bug matters since the real bug is Xorg's, and it doesn't look like they're very interested in the problem. So anyone who wants to be able to access output of programs running under X probably needs to use -keeptty for the foreseeable future.

Update: the bug acknowledgement came in six hours later. It's bug 801529.

October 11, 2015 06:31 PM

October 10, 2015

Jono Bacon

What LibreOffice Impress Needs To Rock

Across the course of my career I have given, and continue to give, a lot of presentations at conferences all over the world. In the vast majority of them I have used LibreOffice because I like and support the project and I like my presentations being in an open format that can be used across different Operating Systems.

At times I have also used Keynote and Powerpoint and there are a few small things that LibreOffice is missing to be the perfect presentation tool. I thought I would share these here with a hope that these features will be built and thus turn LibreOffice Impress into the most perfect presentation tool on the planet. Naturally, if these features do get built, I will write a follow up post lavishing praise on the LibreOffice team. If anyone from the LibreOffice team wants to focus on these I am more than happy to provide feedback and input!

Smart Guides

One of the most fantastic elements of both Keynote and Powerpoint is the smart guides. These are guidelines that appear when you move an object around to help you align things (such as centering an object or making sure multiple objects are the same width/height from each other).

This feature is invaluable and the absence of it in Impress is notable and at times frustrating. I think a lot of people would move over to LibreOffice if this was available and switched on by default.


Moving objects is slow and clunky in LibreOffice. An object doesn't move smoothly, pixel by pixel, as I drag my mouse; instead it moves jerkily, in what seem to be 5 or 10 pixel increments. This means positioning objects is less precise and feels slow and clunky.

Likewise, selections (e.g. selecting multiple objects) and reordering slides have the same chunkiness.

If this was refined it would make the whole app feel far more pleasurable to use.

Embeddable Windows

There have been times when giving a presentation that I have wanted to embed a window in a slide, to save me breaking out of the presentation to show the audience something. Breaking out of a presentation ruins the magic…we want to stay in full presentation mode where possible!

As an example, I might want to show the audience a web page. I would like to therefore embed Chrome/Firefox into my presentation.

I might also want to show a feature using a command line tool. I would like to embed the terminal into my presentation, potentially on the left side of the slide with some content to the right of it. This would be invaluable for teaching programming for example. I might also want to embed a text editor.

Importantly, embedded windows would preferably have no window borders and an option to remove the menu so they look fully integrated. This would be a tremendous feature that neither Keynote nor Powerpoint has.

Nested Section Slides

Many presentations have multiple sections. If you have a lot of slides like I do, it can be handy to break them into sections (with the appropriate slides nested under a main slide for each section). This is a standard feature in Keynote, and it makes it easy to jump to different sections when editing. What would be really ideal is if there were also a hotkey to jump between the different sections – this provides a great opportunity to jump between different logical pieces of a presentation.

Media Events

When putting together a deck for Bad Voltage Live I wanted to play a slide with an embedded audio clip in it and configure what happens before or after the audio plays. For example, I would like the audio to play and then automatically transition to the next slide when the audio is finished. Or, I want to load a slide with an embedded audio clip and then require another click to start playing the audio. From what I can tell, these features are missing in LibreOffice.

Those are the main things for me. So, LibreOffice community, think you can get these integrated into the app? Kudos can be yours!

by Jono Bacon at October 10, 2015 09:39 PM

October 08, 2015

Eric Hammond

Unreliable Town Clock Is Now Using AWS Lambda Scheduled Functions

Today at AWS re:Invent, Amazon has announced the ability to schedule AWS Lambda function invocations using cron syntax. Yay!

I’m happy to announce that the Unreliable Town Clock is now using this functionality behind the scenes to send the chime messages to the public SNS topic every quarter hour, in both us-east-1 and us-west-1.

No significant changes should be perceived by the hundreds of subscribers to the Unreliable Town Clock public SNS topic.

If you are already using the Unreliable Town Clock, what should you do?

The Unreliable Town Clock is a community-published service that was intended as a stop-gap measure to fill some common types of AWS Lambda scheduling needs while we waited for Amazon to produce the official, reliable Lambda cron scheduling.

The Unreliable Town Clock was originally built using simple, mostly-but-not-entirely reliable AWS services, and is monitored and supported by one individual (me).

The Unreliable Town Clock should be much less unreliable now that it is using AWS Lambda Scheduled Functions. However, there is still the matter of one individual managing the AWS account and the code between Lambda scheduling and the SNS topic.

I encourage everybody to start using the AWS Lambda Scheduled Functions directly. It’s easy to set up and has Amazon’s reliability and support behind it.
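
For anyone wondering what using the scheduled functions directly might look like from code, here is a rough boto3 sketch of my own (not from the original post); the function name, rule name, and ARN are placeholders, and the CloudWatch Events calls shown are the API that today backs these schedules, which may differ from the console flow described in the announcement:

import boto3

events = boto3.client("events")
awslambda = boto3.client("lambda")

# Placeholder values: substitute your own function name and ARN.
function_name = "my-chime-handler"
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-chime-handler"

# A rule that fires every quarter hour.
rule = events.put_rule(
    Name="every-15-minutes",
    ScheduleExpression="rate(15 minutes)",
    State="ENABLED",
)

# Let CloudWatch Events invoke the function, then point the rule at it.
awslambda.add_permission(
    FunctionName=function_name,
    StatementId="allow-every-15-minutes",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
events.put_targets(
    Rule="every-15-minutes",
    Targets=[{"Id": "1", "Arn": function_arn}],
)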

I intend to keep the Unreliable Town Clock running indefinitely since folks are already using it and depending on it, but I would again encourage you to move from the Unreliable Town Clock to direct AWS functionality at your convenience.

Original article and comments:

October 08, 2015 04:45 PM

Elizabeth Krumbach

Star Wars baseball for my 34th birthday

I turned 34 this year. 33 was a good year, full of accomplishments and exciting travel. MJ made sure 34 began well too.

On my actual birthday we were both slammed with work, but we were able to meet for dinner down on the peninsula at The Melting Pot in San Mateo. It’s actually at the Caltrain station, so it’s easy for me to get to (and, cool, a train station). Plus, fondue is awesome.

The big present came the weekend following my birthday. MJ bought us a package of tickets to Star Wars day at the Giants’ AT&T Park! You arrive 3 hours before the game to eat, drink and mingle with other fans at the edge of the field. I got pictures taken with folks who went all out with getting dressed up, and with R2-D2.

The gathering then had a raffle and we were walked along the edge of the field to get to our seats.

And amazing seats they were! The weather also played its typically agreeable role and gave us a sunny and slightly breezy afternoon. Perfect for a game.

The game itself was Star Wars themed throughout, with Darth Vader and Storm Troopers accompanying the entrance of the umpires (empire, umpire, haha!), videos throughout the game, and Chewbacca bobbleheads, of which we each got two since a special MVP version was also given to us during the welcoming gathering.

And to make things even better, the Giants won over the Rockies 3-2.

More photos from the day here:

by pleia2 at October 08, 2015 06:28 AM

CloudNow Awards, Perl 6 and the PPIE’s role in SF public transit

September felt a bit quiet for me event-wise. I had to cancel a speaking engagement I was looking forward to when I realized it landed on Yom Kippur (oops) and the only other event I had on my schedule related to work was the award ceremony for CloudNOW‘s top women in Cloud award.

There I was able to visit with my HP friends who were hosting a booth and giving away HP cloud goodies. They were also promoting a scholarship program for women in college who want to work on an open source project, and I was able to chime in as a former mentor for the program.

After networking, the event had several talks, including one by friend and now colleague at HP, Allison Randal. She gave a great talk about the value and history of software and where we’re going with cloud and open source.

Allison Randal on the evolution of the value of software

One of the hosts also took time to do a short interview with Isis Anchalee, the engineer who started the #ILookLikeAnEngineer hashtag that went viral promoting people who don’t traditionally “look like” engineers, and highlighting the fact that assumptions are often wrong (here’s mine). I was really impressed with the talent and accomplishments of all the women I met throughout the event. People say I juggle a lot; they should have a chat with some of the women who won these awards!

I kicked off October this week by going to a Perl 6 talk by Larry Wall. I was recovering from a migraine and a workout with my trainer earlier in the day, but I forced myself to go out to this event anyway. I’m glad I did. I strategically wore my FOSDEM shirt, figuring that even though I’d be too shy, there might be someone who would find it interesting enough to strike up a conversation. Success! I had a great chat with an open sourcey systems fellow who was greatly interested in the surge of money being poured into the open source ecosystem. I could talk about that for hours.

The presentation itself was full of wit and humor, and I learned a lot about Perl 6 that I never bothered to look into. As the alpha and beta releases have been trickling out this year, it was nice to learn that they hope to have their 6.Christmas release ready, well, by Christmas.

Taking a bit of a turn away from technology on computers, tonight I spent the evening at the California Historical Society, which is just a block or so away from where we live. They were hosting a lecture on City Rising for the 21st Century: San Francisco Public Transit 1915, now, tomorrow. The “City Rising” bit comes from the celebration of the Panama-Pacific International Exposition (PPIE) that happened 100 years ago, in 1915, here in San Francisco. As a technology and history lover I’ve always been fascinated by these World’s Fairs, so getting to learn about the one here has been fun. Several months back we bought Laura Ackley’s San Francisco’s Jewel City: The Panama-Pacific International Exposition of 1915 and I finished reading it a few weeks ago. I just recently picked up the giant Jewel City: Art from San Francisco’s Panama-Pacific International Exposition which has several contributors and pages of full color reproductions of art that was showcased at the fair. And I was excited to learn that the de Young museum is opening an exhibit of the same name as that giant book on October 17th that will have several of the actual pieces that were at the expo 100 years ago.

The lecture and panel tonight drew from both my fascination with the PPIE AND general interest in local transit. I went to the Fair, Please! exhibit at the Market Street Railway museum and picked up a copy of the Bay Area Electric Railroad Association Journal from Spring 2007 that had an article by Grant Ute on transit at the fair. So it was a delight to see Grant tonight and have him do the introductory talk for the event. I should have brought the Journal and my copy of San Francisco’s Municipal Railway as he was signing things, alas!

The talk and panel were thoroughly enjoyable. Once the panel moved on from the progress made (and made possible) by the transit changes surrounding the PPIE, topics ranged from the removal of (or refusal to build) elevated highways in San Francisco and how that has created a beautiful, transit- and walk-friendly city, to policies around the promotion of public transit and how funding has changed over the years.

I love things on rails; it was a good evening.

This concludes local events for a while. I’m doing a quick jaunt to Las Vegas to spend a day with MJ on Friday-Saturday. Then on Tuesday I’m flying off to the Grace Hopper Celebration of Women in Computing where I’ll be talking about the open source continuous integration system we use in OpenStack (talk details on this page). Directly from Houston I’m flying to Tokyo where I’ll meet MJ for a week of touristing in Tokyo, Osaka and Kyoto before the OpenStack Summit in Tokyo. I’m finally back home on October 31st, for a week, and then I’m off to speak at LISA’15. Phew!

by pleia2 at October 08, 2015 06:05 AM

October 07, 2015

Elizabeth Krumbach

September in San Francisco

Having spent much of September here at home in San Francisco, I’ve split my time between work, writing my book and getting out to enjoy this beautiful city we live in. Going out has certainly taken time away from writing, but I’d probably go bonkers and would likely be unproductive anyway if I stayed home, so here we are!

A couple weeks ago I had my friend Steve over and he brought along his BB-8. I had snagged my own following our trip to Philadelphia so we had a fun evening of chatting, playing with our BB-8s and enjoying a nice east coast seafood dinner at one of my favorite restaurants.

Simcoe was suitably amused by dual BB-8s

One of the first enjoy-our-city outings MJ and I did together last month was something we’d never done in San Francisco before: go to the theater. MJ had heard good things about Between Riverside and Crazy, so he got us tickets and we went one Sunday afternoon. It was being performed at the beautiful A.C.T. Geary Theater near Union Square, an easy walk from home. With seats in the uppermost balcony we had a nice view of the stage and everything went beautifully. I think this was the first time I’d been to a non-musical play and I found myself quickly lost in the story and characters. I’d recommend the play; it has finished its run in San Francisco, but this was the west coast debut so I’m sure it’ll pop up somewhere else. I think we’ll be doing this again.

September also means the Jewish High Holy days of Rosh Hashanah and Yom Kippur. We attend services at the synagogue where we are members and on Yom Kippur we spent the whole day there. Having only celebrated these holidays for a few years, I’m still learning a lot and trying to bring it into my own life as a tradition. I’m getting there and it was nice to spend the time with MJ away from work and hectic events.

The next weekend my friends Jon and Crissi were in town. I had fun with the fact that their visit synced up with Muni Heritage Week and on Saturday morning I met up with them briefly to check out the historic cars and buses that they had out for the event.

Crissi and me exploring a pier near Ferry Building

Unfortunately due to time constraints I couldn’t ride on any of the special buses or street cars on the free routes they were running, seeing them had to be enough! And I was fortunate that they didn’t do the weekend in October when I’m typically traveling.

More photos from Muni Heritage Weekend here:

As Jon and Crissi went to meet friends for lunch, I headed home to meet MJ so we could go to a Giants vs. A’s baseball game over in Oakland. It had been a couple years since I’d been to an A’s game and so it was fun to visit the Coliseum again. We were joined by a friend (and sushi chef) who we’d been meaning to see socially and found a game to be the perfect opportunity. I was certainly conflicted as I dressed for the game, having an unconventional love for both teams. But I ended up dressing to cheer for the Giants, and with a score of 14 to 10, the Giants did prevail.

More photos from the game here:

The weekend concluded on Sunday as we met Jon and Crissi for brunch. I met them at their hotel at Fisherman’s Wharf in the morning in order to introduce them to the cable car. In the long line we got to see cars turned around a couple of times, and had lots of time to chat before finally getting on the car. The cable car ends its trip at Powell and Market, which was then a quick walk back home to meet MJ and hop in the car.

We took them across the city to see the ocean and have brunch at the edge of Golden Gate park. After brunch we made our way over to the Queen Wilhelmina Tulip Garden and then to see the Bison living in the park. Back in the car we drove up to and over the Golden Gate Bridge to take pictures down at Fort Mason. Then it was back over the bridge to Crissy Field where we got to take even more pictures (and so Crissi could visit Crissy Field, of course). Our journey then took us back toward their hotel, where our car conveniently broke down about three blocks from where we were planning on parking. Fortunately we were able to ease into a street parking spot, which gave us the ability to come back later to handle getting the poor thing towed. So then we were off to Pier 39 to get a nice dose of tourism and visit with the sea lions and wrap up our day!

I love doing the tourist things when friends and family are in town. We live in such a beautiful city and getting to enjoy it in tourist mode while also showing it off is a whole lot of fun. Naturally it was also fun to catch up with Jon and Crissi, as we missed them the last time we were in Philadelphia. They had just successfully completed another year of running the annual FOSSCON conference so I got to hear all about that, and it made me really want to go again next year.

More photos from our adventures across the city here:

by pleia2 at October 07, 2015 05:19 AM

October 06, 2015

Elizabeth Krumbach

Ending my 6 year tenure on the Ubuntu Community Council

On September 16th, Michael Hall sent out a call for nominations for the Ubuntu Community Council. I will not be seeking re-election this time around.

My journey with Ubuntu has been a long one. I can actually pinpoint the day it began, because it was also the day I created my account: March 12th, 2005. That day I installed Ubuntu on one of my old laptops to play with this crazy new Debian derivative and was delighted to learn that the PCMCIA card I had for WiFi actually worked out of the box. No kidding. In 2006 I submitted my first package to Debian and following earlier involvement with Debian Women, I sent my first message to the Ubuntu-Women mailing list offering to help with consolidating team resources. In 2007 a LoCo in my area (Pennsylvania) started up, and my message was the third one in the archives!

As the years went by, Ubuntu empowered me to help people and build my career.

In 2007 I worked with the Pennsylvania LoCo to provide 10 Ubuntu computers to girls in Philadelphia without access to computers (details). In 2010 I joined the board of Partimus, a non-profit which uses Ubuntu (and the flavors) to provide schools and other education-focused programs in the San Francisco Bay Area with donated computers (work continues, details on the Partimus blog). In 2012 I took a short sabbatical from work and joined other volunteers from Computer Reach to deploy computers in Ghana (details). Today I maintain a series of articles for the Xubuntu team called Xubuntu at… where we profile organizations using Ubuntu, many of which do so in a way that serves their local community. Most people also know me as the curator for the Ubuntu Weekly Newsletter, a project I started contributing to in 2010.

Throughout this time, I have worked as a Linux Systems Administrator, a role that’s allowed me to build up my expertise around Linux and continue to spend volunteer time on the projects I love. I’ve also been fortunate to have employers who not only allow me to continue my work on open source, but actively encourage and celebrate it. In 2014 I had the honor of working with Matthew Helmke and others on the 8th edition of The Official Ubuntu Book. Today I’m working on my second open source book for the same publisher.

I share all of this to demonstrate that I have made a serious investment in Ubuntu. Ubuntu has long been deeply intertwined in both my personal and professional goals.

Unfortunately this year has been a difficult one for me. As I find success growing in my day job (working as a systems administrator on the OpenStack project infrastructure for HP), I’ve been witness to numerous struggles within the Ubuntu community and those struggles have really hit home for me. Many discussions on community mailing lists have felt increasingly strained and I don’t feel like my responses have been effective or helpful. They’ve also come home to me in the form of a pile of emails harshly accusing me of not doing enough for the community and in breaches of trust during important conversations that have caused me serious personal pain.

I’ve also struggled to come to terms with Canonical’s position on Intellectual Property (Jono Bacon’s post here echoes my feelings and struggle). I am not a lawyer and considering both sides I still don’t know where I stand. People on both sides have accused me of not caring or understanding the issue because I sympathize with everyone involved and have taken their concerns and motivations to heart.

It’s also very difficult to be a volunteer community advocate in a project that’s controlled by a company. Not only that, but we continually have to teach some of its employees how to properly engage with an open source community. I have met many exceptional Canonical employees, I work with them regularly and I had a blast at UbuCon Latin America this year with several others. In nearly every interaction with Canonical and every discussion with Mark about community issues, we’ve eventually had positive results and found a successful path forward. But I’m exhausted by it. It sometimes feels like a game of Whac-A-Mole where we are continually being confronted with the same problems, but with different people, and it’s our job to explain to the Marketing/Development/Design/Web/whatever team at Canonical that they’ve made a mistake with regard to the community and help them move forward effectively.

We had some really great conversations when a few members of the Community Council and the Community Team at Canonical met up at the Community Leadership Summit back in July (I wrote about it here). But I was already feeling tired then and I had trouble feeling hopeful. I realized during a recent call with an incredibly helpful and engaged Canonical employee that I’d actually given up. He was making assurances to us about improvements that could be made and really listening to our concerns, and I could tell that he honestly cared. I should have been happy, hopeful and encouraged, but inside I was full of sarcasm, bitterness and snark. This is very out of character for me. I don’t want to be that person. I can no longer effectively be an advocate for the community while feeling this way.

It’s time for me to step down and step back. I will continue to be involved with Xubuntu, the Ubuntu News Team and Ubuntu California, but I need to spend time away from leadership and community building roles before I actually burn out.

I strongly encourage people who care about Ubuntu and the community to apply for a position on the Ubuntu Community Council. We need people who care. I need people who care. While it’s sometimes not the easiest council to be on, it’s been rewarding in so many ways. Mark seriously listens to feedback from the Community Council, and I’m incredibly thankful for his leadership and guidance over the years. Deep down I do continue to have hope and encouragement and I still love Ubuntu. Some day I hope to come back.

I also love you all. Please come talk to me at any time (IRC: pleia2, email: If you’re interested in a role on the Ubuntu Community Council, I’m happy to chat about duties, expectations and goals. But know that I don’t need gripe buddies; sympathy is fine, but anger and negativity are what brought me here and I can’t handle more. I also don’t have the energy to fix anything else right now. Bring discussions about how to fix things to the ubuntu-community-team mailing list and see my Community Leadership post from July mentioned earlier to learn more about some of the issues the community and the Community Council are working on.

by pleia2 at October 06, 2015 04:36 PM

October 05, 2015


Bjarne on C++11


I saw this keynote quite a while ago, and I still refer to it sometimes, even though it's almost 3 years old now. It's a good whirlwind tour of the advances in C++11.

by Kevin at October 05, 2015 01:20 PM

October 04, 2015

Akkana Peck

Aligning images to make an animation (or an image stack)

For the animations I made from the lunar eclipse last week, the hard part was aligning all the images so the moon (or, in the case of the moonrise image, the hillside) was in the same position in every image.

This is a problem that comes up a lot with astrophotography, where multiple images are stacked for a variety of reasons: to increase contrast, to increase detail, or to take an average of a series of images, as well as animations like I was making this time. And of course animations can be fun in any context, not just astrophotography.

In the tutorial that follows, clicking on the images will show a full sized screenshot with more detail.

Load all the images as layers in a single GIMP image

The first thing I did was load up all the images as layers in a single image: File->Open as Layers..., then navigate to where the images are and use shift-click to select all the filenames I wanted.

[Upper layer 50% opaque to align two layers]

Work on two layers at once

By clicking on the "eyeball" icon in the Layers dialog, I could adjust which layers were visible. For each pair of layers, I made the top layer about 50% opaque by dragging the opacity slider (it's not important that it be exactly at 50%, as long as you can see both images).

Then use the Move tool to drag the top image on top of the bottom image.

But it's hard to tell when they're exactly aligned

"Drag the top image on top of the bottom image": easy to say, hard to do. When the images are dim and red like that, and half of the image is nearly invisible, it's very hard to tell when they're exactly aligned.


Use a Contrast display filter

What helped was a Contrast filter. View->Display Filters... and in the dialog that pops up, click on Contrast, and click on the right arrow to move it to Active Filters.

The Contrast filter changes the colors so that dim red moon is fully visible, and it's much easier to tell when the layers are approximately on top of each other.


Use Difference mode for the final fine-tuning

Even with the Contrast filter, though, it's hard to see when the images are exactly on top of each other. When you have them within a few pixels, get rid of the contrast filter (you can keep the dialog up but disable the filter by un-checking its checkbox in Active Filters). Then, in the Layers dialog, slide the top layer's Opacity back to 100%, go to the Mode selector and set the layer's mode to Difference.

In Difference mode, you only see differences between the two layers. So if your alignment is off by a few pixels, it'll be much easier to see. Even in a case like an eclipse where the moon's appearance is changing from frame to frame as the earth's shadow moves across it, you can still get the best alignment by making the Difference between the two layers as small as you can.

Use the Move tool and the keyboard: left, right, up and down arrows move your layer by one pixel at a time. Pick a direction, hit the arrow key a couple of times and see how the difference changes. If it got bigger, use the opposite arrow key to go back the other way.

When you get to where there's almost no difference between the two layers, you're done. Change Mode back to Normal, make sure Opacity is at 100%, then move on to the next layer in the stack.

It's still a lot of work. I'd love to find a program that looks for circular or partially-circular shapes in successive images and does the alignment automatically. Someone in the GIMP community suggested I might be able to write something using OpenCV, which has circle-finding primitives (I've written briefly before about SimpleCV, a wrapper that makes OpenCV easy to use from Python). But doing the alignment by hand in GIMP, while somewhat tedious, didn't take as long as I expected once I got the hang of using the Contrast display filter along with Opacity and Difference mode.
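
Just to sketch the idea (this is my addition, not code from the post), OpenCV's Hough circle transform could estimate how far the moon shifted between two frames; the filenames and detection parameters below are placeholders that would need tuning for real eclipse images.

import cv2

def moon_center(filename):
    '''Return the (x, y) center of the strongest circle OpenCV finds.'''
    img = cv2.imread(filename)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)              # smooth out noise and grain
    # cv2.HOUGH_GRADIENT is the OpenCV 3 name (cv2.cv.CV_HOUGH_GRADIENT in 2.x)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=2, minDist=200,
                               param1=50, param2=30,
                               minRadius=50, maxRadius=300)
    if circles is None:
        raise ValueError("no circle found in " + filename)
    x, y, r = circles[0][0]                     # strongest detection first
    return x, y

x1, y1 = moon_center("frame001.jpg")
x2, y2 = moon_center("frame002.jpg")
print("shift layer 2 by dx=%d, dy=%d to line up" % (int(x1 - x2), int(y1 - y2)))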

Creating the animation

Once you have your layers, how do you turn them into an animation?

The obvious solution, which I originally intended to use, is to save as GIF and check the "animated" box. I tried that -- and discovered that the color errors you get when converting an image to indexed make a beautiful red lunar eclipse look absolutely awful.

So I threw together a Javascript script to animate images by loading a series of JPEGs. That meant that I needed to export all the layers from my GIMP image to separate JPG files.

GIMP doesn't have a built-in way to export all of an image's layers to separate new images. But that's an easy plug-in to write, and a web search found lots of plug-ins already written to do that job.

The one I ended up using was Lie Ryan's Python script in How to save different layers of a design in separate files; though a couple of others looked promising (I didn't try them), such as gimp-plugin-export-layers and save_all_layers.scm.
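
If you're curious what such a plug-in boils down to, here is a minimal sketch of my own (not Lie Ryan's script) that could be pasted into GIMP's Python-Fu console; the output directory is a placeholder, and depending on your image you may need to flip the layer ordering or flatten first.

import os

# In GIMP's Python-Fu console the gimp and pdb objects are already defined.
image = gimp.image_list()[0]            # the stack of aligned frames
outdir = "/tmp/eclipse-frames"          # placeholder output directory

for i, layer in enumerate(reversed(image.layers)):   # bottom layer first
    filename = os.path.join(outdir, "frame%03d.jpg" % i)
    # gimp_file_save picks the file format from the extension
    pdb.gimp_file_save(image, layer, filename, filename)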

You can see the final animation here: Lunar eclipse of September 27, 2015: Animations.

October 04, 2015 03:44 PM

October 01, 2015

Nathan Haines

Beginning Ubuntu for Windows and Mac Users

Where do I begin? That’s the challenge ahead of anyone who tries something new. And the first step of any new experience. Sometimes this can be exciting, like when you sit down to try food at a new restaurant. Other times the question is paralyzing. Taking the first step is difficult when the path is unclear or unmarked.

Ubuntu is the world’s third most popular operating system. It powers twenty million desktop computers, and untold servers. But for even more people who grew up using Windows or OS X, their operating system is the computer. Ubuntu’s Linux and Unix heritage is no longer its greatest strength, but its biggest drawback. But it doesn’t have to be.

For new Ubuntu users, the first challenge to surmount is familiarity. Ubuntu thinks and behaves in different ways from the computing experience they’ve gained over the years. And those years of experience are an enemy at first. But using a new operating system is much like visiting a foreign country. Everything’s different, but after a chance to acclimate, it’s not that different. The trick is finding your way around until you know what’s the same. The differences aren’t that vast and soon everything is manageable.

book cover

My new book, Beginning Ubuntu for Windows and Mac Users was written to help speed that process along. Ubuntu is the perfect operating system for every day business, casual, and entertainment use. The book explains key concepts and helps users adapt to their new operating system. It’s a reference guide to the best software in Ubuntu that can get tasks done. And it teaches how to use Ubuntu so that any computer user can get started and learn from there.

Beginning Ubuntu for Windows and Mac Users expects readers to want to use Ubuntu graphically, and prefers this over command line shortcuts. When the command line is introduced in Chapter 5, it’s from the perspective of a window into an older period of computing history, and after a short overview, it walks the user through specific tasks that demonstrate exactly why one would use the command line over the graphical tools. Simple information lookup, text-based browsing, and even games give the command line a practical purpose and make the chapter a handy reference.

The book finishes up with power user advice that shows simple yet effective ways to make an Ubuntu system even more powerful, from enabling multiple workspaces to installing VirtualBox and working with virtual machines.

If you’ve been wanting to try Ubuntu but don’t know where to begin, this book is for you. It explains the origins of Ubuntu and walks you through the install process step by step. It talks about dual-booting and installing graphics drivers. It even helps you find the right “translation” as you learn the Ubuntu desktop. Looking for the Start Menu or Spotlight? The Dash icon provides the same functionality.

If you’re already an Ubuntu user, you may benefit from the clear instructions and format of the book. But you can also buy the book for friends. It’s a friendly, gentle introduction to Ubuntu that any Windows or Mac user will enjoy, and the perfect gift for anyone who could benefit from using Ubuntu.

Beginning Ubuntu for Windows and Mac Users is available today from Amazon, Barnes & Noble, and other fine booksellers around the world. Best of all, the companion ebook is only $5 through Apress when you buy the print version (even if you didn't buy it from the publisher), and the ebook is available DRM-free in PDF, EPUB, and MOBI (Kindle) formats. Not only is that an incredible bargain that offers all 150+ screenshots in full color, but the DRM-free files respect you and your investment.

Whether you’ve already taken the first steps into experiencing Ubuntu for yourself, or you’ve hesitated because you don’t know where to begin, this book is for you. We’ll walk through the first steps together, and your existing Windows and Mac experience will help you take the next steps as you explore the endless possibilities offered by Ubuntu.

October 01, 2015 08:51 PM

Akkana Peck

Lunar eclipse animations

[Eclipsed moon rising] The lunar eclipse on Sunday was gorgeous. The moon rose already in eclipse, and was high in the sky by the time totality turned the moon a nice satisfying deep red.

I took my usual slipshod approach to astrophotography. I had my 90mm f/5.6 Maksutov lens set up on the patio with the camera attached, and I made a shot whenever it seemed like things had changed significantly, adjusting the exposure if the review image looked like it might be under- or overexposed, occasionally attempting to refocus. The rest of the time I spent socializing with friends, trading views through other telescopes and binoculars, and enjoying an apple tart a la mode.

So the images I ended up with aren't all they could be -- not as sharply focused as I'd like (I never have figured out a good way of focusing the Rebel on astronomy images) and rather grainy.

Still, I took enough images to be able to put together a couple of animations: one of the lovely moonrise over the mountains, and one of the sequence of the eclipse through totality.

Since the 90mm Mak was on a fixed tripod, the moon drifted through the field and I had to adjust it periodically as it drifted out. So the main trick to making animations was aligning all the moon images. I haven't found an automated way of doing that, alas, but I did come up with some useful GIMP techniques, which I'm in the process of writing up as a tutorial.

Once I got the images all aligned as layers in a GIMP image, I saved them as an animated GIF -- and immediately discovered that the color error you get when converting to an indexed GIF image loses all the beauty of those red colors. Ick!

So instead, I wrote a little Javascript animation function that loads images one by one at fixed intervals. That worked a lot better than the GIF animation, plus it lets me add a Start/Stop button.

You can view the animations (or the source for the javascript animation function) here: Lunar eclipse animations

October 01, 2015 06:55 PM

September 30, 2015

Jono Bacon

Free Beer, Prizes, and Bad Voltage in Fulda Tonight!

Tonight, Wed 30th September 2015 at 7pm there are five important reasons why you should be in Fulda in Germany:

  1. A live Bad Voltage show that will feature technology discussion, competitions, and plenty of fun.
  2. Free beer.
  3. The chance to win an awesome Samsung Galaxy Tab S2.
  4. Free entry (including the beer!).
  5. A chance to meet some awesome people.

It is going to be a blast and we hope you can make it out here tonight.

Just remember, you might leave with one of these:

Doors open tonight at 7pm, show starts at 7.30pm at:

Hall 8
University of Applied Science Fulda,
Leipziger Str. 123, 36037
Fulda, Germany

We hope to see you there!

by Jono Bacon at September 30, 2015 08:16 AM

September 27, 2015

Akkana Peck

Make a series of contrasting colors with Python

[PyTopo with contrasting color track logs] Every now and then I need to create a series of contrasting colors. For instance, in my mapping app PyTopo, when displaying several track logs at once, I want them to be different colors so it's easy to tell which track is which.

Of course, I could make a list of five or ten different colors and cycle through the list. But I hate doing work that a computer could do for me.

Choosing random RGB (red, green and blue) values for the colors, though, doesn't work so well. Sometimes you end up getting two similar colors together. Other times, you get colors that just don't work well, because they're so light they look white, or so dark they look black, or so unsaturated they look like shades of grey.

What does work well is converting to the HSV color space: hue, saturation and value. Hue is a measure of the color -- that it's red, or blue, or yellow green, or orangeish, or a reddish purple. Saturation measures how intense the color is: is it a bright, vivid red or a washed-out red? Value tells you how light or dark it is: is it so pale it's almost white, so dark it's almost black, or somewhere in between? (A similar model, called HSL, substitutes Lightness for Value, but is similar enough in concept.)

[GIMP color chooser] If you're not familiar with HSV, you can get a good feel for it by playing with GIMP's color chooser (which pops up when you click the black Foreground or white Background color swatch in GIMP's toolbox). The vertical rainbow bar selects Hue. Once you have a hue, dragging up or down in the square changes Saturation; dragging right or left changes Value. You can also change one at a time by dragging the H, S or V sliders at the upper right of the dialog.

Why does this matter? Because once you've chosen a saturation and value, or at least ensured that saturation is fairly high and value is somewhere in the middle of its range, you can cycle through hues and be assured that you'll get colors that are fairly different each time. If you had a red last time, this time it'll be a green, or yellow, or blue, depending on how much you change the hue.

How does this work programmatically?

PyTopo uses Python-GTK, so I need a function that takes a gtk.gdk.Color and chooses a new, contrasting Color. Fortunately, gtk.gdk.Color already has hue, saturation and value built in. Color.hue is a floating-point number between 0 and 1, so I just have to choose how much to jump. Like this:

def contrasting_color(color):
    '''Returns a gtk.gdk.Color of similar saturation and value
       to the color passed in, but a contrasting hue.
       gtk.gdk.Color objects have a hue between 0 and 1.
    '''
    if not color:
        return self.first_track_color

    # How much to jump in hue:
    jump = .37

    return gtk.gdk.color_from_hsv(color.hue + jump,
                                  color.saturation,
                                  color.value)

What if you're not using Python-GTK?

No problem. The first time I used this technique, I was generating Javascript code for a company's analytics web page. Python's colorsys module works fine for converting red, green, blue triples to HSV (or a variety of other colorspaces) which you can then use in whatever graphics package you prefer.
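
As a small illustration of that approach (my example, not code from PyTopo), here's how you might generate a list of visibly different colors with just the standard library, hopping around the hue circle while keeping saturation and value fixed:

import colorsys

def contrasting_colors(n, saturation=0.9, value=0.85, jump=0.37):
    '''Return n hex color strings with clearly different hues.'''
    colors = []
    hue = 0.0
    for _ in range(n):
        r, g, b = colorsys.hsv_to_rgb(hue % 1.0, saturation, value)
        colors.append("#%02x%02x%02x" % (int(r * 255), int(g * 255), int(b * 255)))
        hue += jump
    return colors

print(contrasting_colors(5))    # five saturated, medium-bright, distinct colors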

September 27, 2015 07:27 PM

September 23, 2015

Nathan Haines

Writing and Publishing a Book with Free Software

I’ve been a technology enthusiast since I was very little. I’ve always been fascinated by electronics and computers, and from the time I got my first computer when I was 10, I’ve loved computers for their own sake. That’s served me very well as a computer technician, but it can lead to narrow-sightedness, too. The one thing that doing computer support at my college campus drove home is that for most computer users, the computer is simply a tool.

Over the last year, I’ve been thinking a lot about Ubuntu in terms of getting specific tasks done. Not only because I was writing a book that would help Windows and Mac users get started with Ubuntu quickly, but also because Ubuntu development and documentation work best when they address clear user stories.

Ubuntu is exciting for many reasons. What stands out for me is how Ubuntu excels at providing the tools needed for so many different roles. For any hobbyist or professional, Ubuntu can be the foundation of a workflow that creates amazing results.

Ubuntu integrates seamlessly into my routine as an author, from planning, to writing, to revision and editing, to layout and design, all the way to the final step of publishing. Ubuntu gives me the tools I need whether my book is traditionally or self-published.

In this presentation, I talk about the process of writing and publishing a book, and although the presentation focuses on the steps involved in publishing, it also illustrates where the Free Software available in Ubuntu can be utilized along the way.

book cover

For a more comprehensive look at how Ubuntu can work for you as you come from Windows or OS X, take a look at my book, Beginning Ubuntu for Windows and Mac Users, available today on Amazon or from your local book retailer.

September 23, 2015 11:39 PM

Elizabeth Krumbach

Simcoe’s September 2015 Checkup

A few weeks ago I wrote about Simcoe’s lab work from July and some other medical issues that cropped up. I’m happy to report that the scabbing around her eyes has cleared up and we were able to get the ultrasound done last Thursday.

The bad news is that her kidneys are very small and deformed. Her vet seemed surprised that they were working at all. Fortunately she doesn’t seem to have anything else going on, no sign of infections from the tests they ran (UTIs are common at this stage). Her calcium levels have also remained low thanks to a weekly pill we’ve been giving her.

Her CRE levels do continue to creep up into a worrying range, which the vet warned could also lead to more vomiting:


But her BUN levels have dropped slightly since last time:


Her weight also continues to be lower than where it was trending for the past couple years:


All of this means it’s time to escalate her care beyond the subcutaneous fluids and calcium lowering pills. We have a few options, but the first step is making an appointment with the hospital veterinarian who has provided wise counsel in the past.

Simcoe melts

Otherwise, Simcoe has been joining us in melting during our typical late onset of summer here in San Francisco. Heat aside, her energy levels, appetite and general behavior have been normal. It’s pretty clear she’s not at all happy about our travel schedules though; I think we’ll all be relieved when I conclude my travel for the year in November.

by pleia2 at September 23, 2015 12:25 AM

September 21, 2015

Akkana Peck

The meaning of "fetid"; Albireo; and musings on variations in sensory perception

[Fetid marigold, which actually smells wonderfully minty] The street for a substantial radius around my mailbox has a wonderful, strong minty smell. The smell is coming from a clump of modest little yellow flowers.

They're apparently Dyssodia papposa, whose common name is "fetid marigold". It's in the sunflower family, Asteraceae, not related to Lamiaceae, the mints.

"Fetid", of course, means "Having an offensive smell; stinking". When I google for fetid marigold, I find quotes like "This plant is so abundant, and exhales an odor so unpleasant as to sicken the traveler over the western prairies of Illinois, in autumn." And nobody says it smells like mint -- at least, googling for the plant and "mint" or "minty" gets nothing.

But Dave and I both find the smell very minty and pleasant, and so do most of the other local people I queried. What's going on?

[Fetid goosefoot] Another local plant which turns strikingly red in autumn has an even worse name: fetid goosefoot. On a recent hike, several of us made a point of smelling it. Sure enough: everybody except one found it minty and pleasant. But one person on the hike said "Eeeeew!"

It's amazing how people's sensory perception can vary. Everybody knows how people's taste varies: some people perceive broccoli and cabbage as bitter while others love the taste. Some people can't taste lobster and crab at all and find Parmesan cheese unpleasant.

And then there's color vision. Every amateur astronomer who's worked public star parties knows about Albireo. Also known as beta Cygni, Albireo is the head of the constellation of the swan, or the foot of the Northern Cross. In a telescope, it's a double star, and a special type of double: what's known as a "color double", two stars which are very different colors from each other.

Most non-astronomers probably don't think of stars having colors. Mostly, color isn't obvious when you're looking at things at night: you're using your rods, the cells in your retina that are sensitive to dim light, not your cones, which provide color vision but need a fair amount of light to work right.

But when you have two things right next to each other that are different colors, the contrast becomes more obvious. Sort of.

[Albireo, from Jefffisher10 on Wikimedia Commons] Point a telescope at Albireo at a public star party and ask the next ten people what two colors they see. You'll get at least six, more likely eight, different answers. I've heard blue and red, blue and gold, red and gold, red and white, pink and blue ... and white and white (some people can't see the colors at all).

Officially, the bright component is actually a close binary, too close to resolve as separate stars. The components are Aa (magnitude 3.18, spectral type K2II) and Ac (magnitude 5.82, spectral type B8). (There doesn't seem to be an Albireo Ab.) Officially that makes Albireo A's combined color yellow or amber. The dimmer component, Albireo B, is magnitude 5.09 and spectral type B8Ve: officially it's blue.

But that doesn't make the rest of the observers wrong. Color vision is a funny thing, and it's a lot more individual than most people think. Especially in dim light, at the limits of perception. I'm sure I'll continue to ask that question when I show Albireo in my telescope, fascinated with the range of answers.

In case you're wondering, I see Albireo's components as salmon-pink and pale blue. I enjoy broccoli and lobster but find bell peppers bitter. And I love the minty smell of plants that a few people, apparently, find "fetid".

September 21, 2015 10:09 PM

September 18, 2015

Elizabeth Krumbach

The Migration of OpenStack Translations to Zanata

The OpenStack infrastructure team that I’m part of provides tooling for OpenStack developers, translators, documentation writers and more. One of the commitments the OpenStack Infrastructure team has to the project, as outlined in our scope, is:

All of the software that we run is open source, and its configuration is public.

Like the rest of the project, we’ve committed ourselves to being Open. As a result, the infrastructure has become a mature open source project itself that we hope to see replicated by other projects.

With this in mind, the decision by Transifex to cease development on their open source platform meant that we needed to find a different solution that would meet the needs of our community and still be open source.

We were aware of the popular Pootle software, so we started there with evaluations. At the OpenStack Summit in Atlanta the i18n team first met up with Carlos Munoz and were given a demo of Zanata. As our need for a new solution increased in urgency, we worked with Pootle developers (thank you Dwayne Bailey!) and Zanata developers to find what was right for our community, setting up development servers for testing both and hosting demos through 2014. At the summit in Paris I had a great meeting with Andreas Jaeger of the OpenStack i18n team (and so much more!) and Carlos about Zanata.

Me, Carlos and Andreas in Paris

That summit was where we firmed up our plans to move forward with Zanata and wrote up a spec so we could get to work.

Ying Chun Guo (Daisy) and I began by working closely with the Zanata team to identify requirements and file bugs that the team then made a priority. I worked closely with Stephanie Miller on our Puppet module for Zanata using Wildfly (an open source JBoss Application Server) and then later Steve Kowalik who worked on migrating our scripts from Transifex to Zanata. It was no small task, as we explored the behavior of the Zanata client that our scripts needed to use and worked to replicate what we had been doing previously.

As we worked on the scripts and the rest of the infrastructure to support the team, the translators spent the summer on a formal trial of our final version of Zanata in preparation for the Liberty translations work. Final issues were worked out through this trial and the ever-responsive team from Zanata was able to work with us to fix a few more issues. I was thoroughly thankful for my infrastructure colleague Clark Boylan’s work keeping infrastructure things chugging along as I had some end of summer travel come up.


On September 10th Daisy announced that we had gone into production for Liberty translations in her email Liberty translation, go! In the past week the rest of us have worked to support all the moving parts that make our translations system work in the infrastructure side of production, with Wednesday being the day we switched to having Zanata propose all changes to Gerrit. Huge thanks to Alex Eng, Sean Flanigan and everyone else on the Zanata team who helped Steve, Andreas and me during the key parts of this switch.

I’m just now finishing up work on the documentation to call our project complete and Andreas has done a great job updating the documentation on the wiki.

Huge thanks to everyone who participated in this project, I’m really proud of the work we got done and so far the i18n team seems satisfied with the change. At the summit in Tokyo I will be leading the Translation tool support: What do we need to improve? session on Tuesday at 4:40pm where we’ll talk about the move to Zanata and other improvements that can be made to translations tooling. If you can’t attend the summit, please provide feedback on the openstack-i18n mailing list so it can be collected and considered for the session.

by pleia2 at September 18, 2015 09:43 PM

September 17, 2015

Elizabeth Krumbach

The OpenStack Ops mid-cycle, PLUG and Ubuntu & Debian gatherings

In the tail end of August I made my way down to Palo Alto for a day to attend the OpenStack Operators Mid-cycle event. I missed the first day because I wasn’t feeling well post-travel, but the second day gave me plenty of time to attend a few discussions and sync up with colleagues. My reason for going was largely to support the OpenStack Infrastructure work on running our own instance of OpenStack, the infra-cloud.

The event had about 200 people, and sessions were structured so they would have a moderator but were actually discussions to share knowledge between operators. It was also valuable to see several OpenStack project leads there trying to gain insight into how people are using their projects and to make themselves available for feedback. The day began with a large session covering the popularity and usage of configuration management databases (CMDBs) in order to track resources, notes here: PAO-ops-cmdb. Then there was a session covering OpenStack deployment tips, which included a nice chunk about preferred networking models (the room was about split when it came to OVS vs. LinuxBridge), notes from this session: PAO-ops-deployment-tips.

After lunch I attended a tools and monitoring session, and learned that they have a working group and an IRC meeting every other week. The session was meant to build upon a previous session from the summit, but the amount of overlap between that session and this seemed to be quite low and it ended up being a general session about sharing common tools. Notes from the session here: PAO-ops-tools-mon.

In all, an enjoyable event and I was impressed with how well-organized it all felt as an event with such a loose agenda going in. Participants seemed really engaged, not just expecting presentations, and it was great to see them all collaborating so openly.

My next event took me across the country, but only incidentally. Our recent trip back east happened to coincide with a PLUG meeting in downtown Philadelphia. The meetings are a great way for me to visit a bunch of my Philadelphia friends at once and I always have a good time. The presentation itself was by Debian Maintainer Guo Yixuan on “Debian: The community and the package management system” where he outlined the fundamentals regarding Debian community structure and organization and then did several demos of working with .deb packages, including unpacking, patching and rebuilding. After the meeting we adjourned to a local pizzeria where I got my ceremonial buffalo chicken cheese steak (fun fact: you can actually find a solid Philly cheese steak in San Francisco, but not one with chicken!).

Guo Yixuan prepares for his presentation, as Eric Lucas and CJ Fearnley host Q&A with attendees

Back home in San Francisco I hosted a couple events back to back last week. First up was the Ubuntu California Ubuntu Hour at a Starbucks downtown. One of the attendees was able to fill us in on his plans to ask his employer for space for a Wily Werewolf (15.10) release party in October. Unfortunately I’ll be out of town for this release, so I can’t really participate, but I’ll do what I can to support them from afar. After the Ubuntu Hour we all walked down the street to Henry’s Hunan in SOMA for a Bay Area Debian Dinner. There, talk continued about our work, upgrades and various bits of tech about Debian and not. We wrapped up the meeting with a GPG keysigning, which we hadn’t done in quite some time. I was also reminded post-meeting to upload my latest UID to a key server.

Next week rounds up my local-ish event schedule for the month by attending the CloudNOW Top Women in Cloud Awards in Menlo Park where my colleague Allison Randal is giving a keynote. Looking forward to it!

by pleia2 at September 17, 2015 10:42 PM

End of Summer Trip Back East

MJ and I spent the first week of September in New Jersey and Pennsylvania. During our trip we visited with an ailing close relative and spent additional time with other family. I’m really thankful for the support of friends, family and colleagues, even if I’ve been cagey about details. It made the best of what was a difficult trip and what continues to be a tough month.

It was also hot. A heat wave hit the northeast for the entire time we were there, each day the temperatures soaring into the 90s. Fortunately we spent our days ducking from the air-conditioned car to various air-conditioned buildings. Disappointingly there was also no rain, which is one of the things I miss the most, particularly now as California is suffering from such a severe drought.

We made time for a couple enjoyable evenings with friends. Our friend Danita met us downtown at The Continental in Philadelphia before we spent some time chatting and walking around Penn’s Landing. Later in the week we had dinner with our friend Tim at another of our favorites, CinCin in Chestnut Hill. On the New Jersey side we were able to have lunch with our friends Mike and Jess and their young one, David, at a typical local pizzeria near where we were staying. These are pretty common stops on our trips back east, but when you can only make it into town a couple times a year, you want to visit your favorites! Plus, any random pizzeria in New Jersey is often better and cheaper than what you find here in California. Sorry California, you just don’t do pizza right.

Much like our trip in the spring we also had a lot of work to do with storage to sort, consolidate and determine what we’ll be bringing out west. It’s a tedious and exhausting process, but we made good progress, all things considered. And there were moments where it was fun, like when we found MJ’s NES and all his games, then got to play our real world version of Tetris as we documented and packed it up into a plastic tote. We also got to assemble one of those two-wheeled hand trucks that we had delivered to the hotel (you should have seen their faces!). No one died in the process of building the hand truck. We also made a trip to the local scrap metal yard to get rid of an ancient, ridiculously heavy trash compactor that’s been taking up space in storage for years. We got a whopping $5.75 for it. Truthfully, I’m just glad we didn’t need to pay someone to haul it away. We also managed to get rid of some 1990s era x86 machines (sans harddrives) by bringing them to Best Buy for recycling, a service that I learned they offer nationwide for various computers and electronics.

Our trip also landed during the week of Force Friday, the official kickoff of the Star Wars Episode 7 merchandise blitz. Coming home late one evening anyway, we made it out to Toys”R”Us at midnight on Friday the 4th to check out the latest goodies. I picked up three Chewbacca toys, including the Chewbacca Furby, Furbacca. Upon returning to our hotel MJ managed to place an order for a BB-8 by Sphero for me, which I’m having a lot of fun with (and so have the cats!).

The midnight line at Toys”R”Us on Force Friday

And I also worked. One of my big projects at work this past year had deadlines coming up quickly and so I did what I could to squeeze in time to send emails and sync up with my team mates as needed to make sure everything was prepared for the launch into production that happened upon my return. I’m happy to report that it all worked out.

We flew home on Sunday, just before Labor Day. Unfortunately, we seemed to have brought the heat along with us, with San Francisco plunging into a heat wave upon our return!

Some more photos from the trip:

by pleia2 at September 17, 2015 04:01 AM

September 16, 2015

Akkana Peck

Hacking / Customizing a Kobo Touch ebook reader: Part II, Python

I wrote last week about tweaking a Kobo e-reader's sqlite database by hand.

But who wants to remember all the table names and type out those queries? I sure don't. So I wrote a Python wrapper that makes it much easier to interact with the Kobo databases.

Happily, Python already has a module called sqlite3. So all I had to do was come up with an API that included the calls I typically wanted -- list all the books, list all the shelves, figure out which books are on which shelves, and so forth.

The result was a module I called kobo_utils, which includes a main function that can list books, shelves, or shelf contents.

You can initialize kobo_utils like this:

import kobo_utils

koboDB = kobo_utils.KoboDB("/path/where/your/kobo/is/mounted")
koboDB.connect()

connect() throws an exception if it can't find the .sqlite file.

Then you can list books, list shelf names, or use print_shelf to see which books are on which shelves. For example, listing shelf names looks like this:

shelves = koboDB.get_dlist("Shelf", selectors=[ "Name" ])
for shelf in shelves:
    print shelf["Name"]
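
For comparison, listing books presumably works the same way; I haven't verified this against the actual kobo_utils code, and the table and column names below ("content", "Title", "Attribution") come from the Kobo database layout rather than from this post, so treat it as an educated guess:

# Educated guess at listing books with the same get_dlist() helper;
# "content" is the Kobo table holding books, "Title"/"Attribution"
# its title and author columns.
books = koboDB.get_dlist("content", selectors=[ "Title", "Attribution" ])
for book in books:
    print book["Title"], "by", book["Attribution"]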

What I really wanted, though, was a way to organize my library, taking the tags in each of my epub books and assigning them to an appropriate shelf on the Kobo, creating new shelves as needed. Using kobo_utils plus the Python epub library I'd already written, that ended up being quite straightforward: shelves_by_tag.

September 16, 2015 02:38 AM

September 15, 2015

Jono Bacon

Bad Voltage Live in Germany: 30th Sep 2015

Some of you may know that I do a podcast called Bad Voltage with some friends: Stuart Langridge, Bryan Lunduke, and Jeremy Garcia.

The show covers Open Source, technology, politics, and more, and features interviews, reviews, and plenty of loose, fun, and at times argumentative discussion.

On Wed 30th Sep 2015, the Bad Voltage team will be doing a live show as part of the OpenNMS Users Conference. The show will be packed with discussion, surprises, contests, and give-aways.

The show takes place at the University Of Applied Sciences in Fulda, Germany. The address:

University of Applied Science Fulda, Leipziger Str. 123, 36037 Fulda, Germany Tel: +49 661 96400

For travel details of how to get there see this page.

Everyone is welcome to join and you don’t have to be joining the OpenNMS Users Conference to see the live Bad Voltage show. There will be a bunch of Ubuntu folks, SuSE folks, Linux folks, and more joining us. Also, after the show we plan on keeping the party going – it is going to be a huge amount of fun.

To watch the show, we have a small registration fee of €5. You can register here. While this is a nominal fee, we will also have some free beer and giveaways, so you will get your five euros worth.

So, be sure to come join us. You can watch a fun show and meet some great people.

REGISTER FOR THE SHOW NOW; space is limited, so register ASAP.

by Jono Bacon at September 15, 2015 07:00 PM

Nathan Haines

Last call for Free Culture Showcase submissions!

In just a few hours, the Ubuntu Free Culture Showcase submission period will wrap up, and we'll begin the judging process.  You have until 23:59 UTC tonight, the 15th, to submit to the Flickr group, the Vimeo group, or the SoundCloud group and have a chance to see your Creative Commons-licensed media included in the Ubuntu 15.10 release which will be enjoyed worldwide!

So if you've been waiting until the last second, it's arrived!  View the wiki page at the link above for more information about the rules for submission and links to the submission pools.

September 15, 2015 09:44 AM

September 12, 2015


More Usable Code By Avoiding Two Step Objects

Two step initialization is harmful to the objects that you write because it obfuscates the dependencies of the object, and makes the object harder to use.

Harder to use

Consider a header and some usage code:

struct Monkey
{
    void set_banana(std::shared_ptr<Banana> const& banana);
    void munch_banana();
    std::shared_ptr<Banana> banana;
};

int main(int argc, char** argv)
{
    Monkey jim;
}

Now jim.munch_banana(); could be a valid line to call, but the reader of the interface isn’t really assured that it is if the writer wrote the object with two step initialization. If the implementation is:

Monkey::Monkey() : banana(nullptr) {}

void Monkey::set_banana(std::shared_ptr<Banana> const& b)
{ banana = b; }

void Monkey::munch_banana()
{ banana->munch(); }  // munch() is a stand-in; the point is that banana gets dereferenced

Then calling jim.munch_banana(); would segfault! A more careful coder might have written:

void Monkey::munch_banana()
{ if (banana) banana->munch(); }

This is still a problem though: calling munch_banana() silently does nothing, and the caller can’t know that. If you tried to fix that by writing:

void Monkey::munch_banana()
{
    if (!banana)
        throw std::logic_error("monkey doesn't have a banana");
    banana->munch();
}

We’re at least to the point where we haven’t segfaulted, and we’ve notified the caller that something has gone wrong. But we’ve still thrown an exception that the caller has to recover from.

Obfuscated Dependencies

With the two-step object, you need more lines of code to initialize it, and you leave the object “vulnerable”.

auto monkey = std::make_unique<Monkey>();
monkey->set_banana(std::make_shared<Banana>());

If you notice, between lines 1 and 2, monkey isn’t really a constructed object. It’s in an indeterminate state! If monkey has to be passed around to an object that has a Banana to share, that’s a recipe for a problem. Other objects don’t have a good way to know if this is a Monkey object, or if it’s a meta-Monkey object that can’t be used yet.

Can we do better?

Yes! By thinking about our object’s dependencies, we can avoid the situation altogether. The truth is, Monkey really does depend on Banana. If the class expresses this in its constructor, a la:

struct Monkey
{
    Monkey(std::shared_ptr<Banana> const& banana);
    void set_banana(std::shared_ptr<Banana> const& banana);
    void munch_banana();
    std::shared_ptr<Banana> banana;
};

We make it clear when constructing that the Monkey needs a Banana. The coder interested in calling Monkey::munch_banana() is guaranteed that it’ll work. The code implementing Monkey::munch_banana() goes back to the original, simple version:

void Monkey::munch_banana()
{ banana->munch(); }

Furthermore, if we update the banana later via Monkey::set_banana(), we’re still in the clear. The only way the coder’s going to run into problems is if they explicitly pass a nullptr as the argument, which is a pretty easy error to avoid: you have to actively do something silly, instead of doing something reasonable and getting a silly error back.

Getting the dependencies of the object right sorts out a lot of interface problems and makes the object easier to use.
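
To see the one-step version end to end, here’s a small self-contained sketch; Banana and its munch() method are stand-ins invented for illustration, since the post never defines them:

#include <iostream>
#include <memory>

// Stand-in Banana, purely for illustration.
struct Banana
{
    void munch() { std::cout << "munch!\n"; }
};

struct Monkey
{
    explicit Monkey(std::shared_ptr<Banana> const& banana) : banana(banana) {}
    void munch_banana() { banana->munch(); }
    std::shared_ptr<Banana> banana;
};

int main()
{
    auto banana = std::make_shared<Banana>();
    Monkey jim(banana);    // usable the moment it exists
    jim.munch_banana();    // guaranteed to have a banana
}

There’s no window where jim exists but can’t safely be used, which is the whole point.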

by Kevin at September 12, 2015 07:31 PM

September 11, 2015

Elizabeth Krumbach

“The Year Without Pants” and OpenStack work

As I’ve talked about before, the team I work on at HP is a collection of folks scattered all over the world, working from home and hacking on OpenStack together. We’re joined by hundreds of other people from dozens of companies doing the same, or similar.

This year our team at HP kicked off an internal book club; every month or two we’d all read the same book, one focused on some kind of knowledge that we felt would be helpful or valuable to the team. So far on our schedule:

  • Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead by Brené Brown
  • The Year Without Pants: WordPress.com and the Future of Work by Scott Berkun
  • Crucial Conversations: Tools for Talking When Stakes Are High by Joseph Grenny, Kerry Patterson, and Ron McMillan

This month’s book was The Year Without Pants. I had previously read Scott Berkun’s Confessions of a Public Speaker, which is my favorite book on public speaking; I recommend it to everyone. That, along with the fact that our team works in some ways very similarly to how the teams at Automattic (makers of WordPress) work, made me very interested in reading this other book of his.

Stepping back for a high level view of how we do work, it’s probably easiest to begin with how we differ from Automattic as a team, rather than how we’re similar. There are certainly several notable things:

  • They have a contract to hire model, partially to weed out folks who can’t handle the work model. We and most companies who work on OpenStack instead either hire experienced open source people directly for an upstream OpenStack job or ease people into the position, making accommodations and changes if work from home and geographic distribution of the team isn’t working out for them (it happens).
  • All of the discussions about my OpenStack work are fully public, I don’t really have “inside the company”-only discussions directly related to my day to day project work.
  • I work with individuals from large, major companies all over the world for our project work on a day to day basis, not just one company and a broader community.

These differences mattered when reading the book, especially when it comes to the public-facing nature of our work. We don’t just entertain feedback and collaboration about our day to day discussions and work from people in our group or company, but from anyone who cares enough to take the time to find us on our public mailing list, IRC channel or meeting. As a member of the Infrastructure team I don’t believe we’ve suffered from this openness. Some people certainly have opinions about what our team “should” be working on, but we have pretty good filters for these things and I like to think that as a team we’re open to accepting efforts from anyone who comes to us with a good idea and people-power to help implement it.

The things we had in common were what interested me most so I could compare our experiences. In spite of working on open source software for many years, this is the first time I’ve been paid full time to do it and worked with such large companies. It’s been fascinating to see how the OpenStack community has evolved and how HP has met the challenges. Hiring the right people is certainly one of those challenges. Just like in the book, we’ve found that we need to find people who are technically talented and who also have good online communication skills and can let their personality show through in text. OpenStack is very IRC-focused, particularly the team I’m on. Additionally, it’s very important that we steer clear of people whose behavior may be toxic to the team and community, regardless of their technical skills. This is good advice in any company, but it becomes increasingly important on a self-motivated, remote team where it’s more difficult to casually notice or check in with people about how they’re doing. Someone feeling downtrodden or discouraged because of the behavior of a co-worker can be much harder to notice from afar and often difficult and awkward to talk about.

I think what struck me most about both the experience in the book and what I’ve seen in OpenStack is the need for in-person interactions. I love working from home, and in my head it’s something I believe I can just do forever because our team works well online. But if I’m completely honest about my experience over the past 3 years, I feel inspired, energized and empowered by our in-person time together as a team, even if it’s only 2-3 times a year. It also helps our team feel like a team, particularly as we’re growing in staff and scope, and our projects are becoming more segregated day to day (I’m working on Zanata, Jim is working on Zuulv3, Colleen is working on infra-cloud, etc). Reflecting upon my experience with the Ubuntu community these past couple years, I’ve seen first hand the damage done to a community and project when the in-person meetings cease (I went into this topic some following the Community Leadership Summit in July).

Now, the every-six-months developer and user summits (based on what Ubuntu used to do) have been a part of OpenStack all along. It’s been clear from the beginning that project leaders understood the value of getting people together in person twice a year to kick off the next release cycle. But as the OpenStack community has evolved, most teams have gotten in the habit of also having team-specific sprints each cycle, where team members come together face to face to work on specific projects between the summits. These sprints grew organically and without top-down direction from anyone. They satisfied a social need to retain team cohesion and the desire for high bandwidth collaboration. In the book this seemed very similar to the annual company meetings being supplemented by team sprints.

I think I’m going to call this “The year of realizing that in person interaction is vital to the health of a project and team.” Even if my introvert self doesn’t like it and still believes deep down I should just live far away in a cabin in the woods with my cats and computers.

It’s pretty obvious given my happiness with working from home and the teams I’m working on that I fully bought in to the premise of this book from the beginning, so it didn’t need to convince me of anything. And there was a lot more to this book, particularly for people who are seeking to manage a geographically distributed, remote team. I highly recommend it to anyone doing remote work, managing remote teams or looking for a different perspective than “tech workers need to be in the office to be productive.” Thanks, Scott!

by pleia2 at September 11, 2015 07:40 PM

Akkana Peck

The blooms of summer, and weeds that aren't weeds

[Wildflowers on the Quemazon trail] One of the adjustments we've had to make in moving to New Mexico is getting used to the backward (compared to California) weather. Like, rain in summer!

Not only is rain much more pleasant in summer, as a dramatic thundershower that cools you off on a hot day instead of a constant cold drizzle in winter (yes, I know that by now Californians need a lot more of that cold drizzle! But it's still not very pleasant being out in it). Summer rain has another unexpected effect: flowers all summer, a constantly changing series of them.

Right now the purple asters are just starting up, while skyrocket gilia and the last of the red penstemons add a note of scarlet to a huge array of yellow flowers of all shapes and sizes. Here's the vista that greeted us on a hike last weekend on the Quemazon trail.

Down in the piñon-juniper where we live, things aren't usually quite so colorful; we lack many red blooms, though we have just as many purple asters as they do up on the hill, plus lots of pale trumpets (a lovely pale violet gilia) and Cowpen daisy, a type of yellow sunflower.

But the real surprise is a plant with a modest name: snakeweed. It has other names, but they're no better: matchbrush, broomweed. It grows everywhere, and most of the year it just looks like a clump of bunchgrass.

[Snakeweed in bloom] Then come September, especially in a rainy year like this one, and all that snakeweed suddenly bursts into a glorious carpet of gold.

We have plenty of other weeds -- learning how to identify Russian thistle (tumbleweed), kochia and amaranth when they're young, so we can pull them up before they go to seed and spread farther, has launched me on a project of an Invasive Plants page for the nature center (we should be ready to make that public soon).

But snakeweed, despite the name, is a welcome guest in our yard, and it lifts my spirits to walk through it on a September evening.

By the way, if anyone in Los Alamos reads this blog, Dave and I are giving our first planetarium show at the nature center tomorrow (that's Friday) afternoon. Unlike most PEEC planetarium shows, it's free! Which is probably just as well since it's our debut. If you want to come see us, the info is here: Night Sky Fiesta Planetarium Show.

September 11, 2015 03:24 AM

September 04, 2015

Akkana Peck

Hacking / Customizing a Kobo Touch ebook reader: Part I, sqlite

I've been enjoying reading my new Kobo Touch quite a lot. The screen is crisp, clear and quite a bit whiter than my old Nook; the form factor is great, it's reasonably responsive (though there are a few places on the screen where I have to tap harder than other places to get it to turn the page), and I'm happy with the choice of fonts.

But as I mentioned in my previous Kobo article, there were a few tweaks I wanted to make; and I was very happy with how easy it was to tweak, compared to the Nook. Here's how.

Mount the Kobo

When you plug the Kobo in to USB, it automatically shows up as a USB-Storage device once you tap "Connect" on the Kobo -- or as two storage devices, if you have an SD card inserted.

Like the Nook, the Kobo's storage devices show up without partitions. For instance, on Linux, they might be /dev/sdb and /dev/sdc, rather than /dev/sdb1 and /dev/sdc1. That means they also don't present UUIDs until after they're already mounted, so it's hard to make an entry for them in /etc/fstab if you're the sort of dinosaur (like I am) who prefers that to automounters.

Instead, you can use the entry in /dev/disk/by-id. So fstab entries, if you're inclined to make them, might look like:

/dev/disk/by-id/usb-Kobo_eReader-3.16.0_N905K138254971:0 /kobo   vfat user,noauto,exec,fmask=133,shortname=lower 0 0
/dev/disk/by-id/usb-Kobo_eReader-3.16.0_N905K138254971:1 /kobosd vfat user,noauto,exec,fmask=133,shortname=lower 0 0
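
With entries like those in place (and the mount points created once, as root), a regular user can then mount the device by name thanks to the user option. Roughly:

$ sudo mkdir /kobo /kobosd      # one-time setup of the mount points
$ mount /kobo                   # any user can do this, thanks to "user"
$ ls /kobo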

One other complication, for me, was that the Kobo is one of a few devices that don't work through my USB2 powered hub. Initially I thought the Kobo wasn't working, until I tried a cable plugged directly into my computer. I have no idea what controls which devices work through the hub and which ones don't. (The Kobo also doesn't give any indication when it's plugged in to a wall charger.)

The sqlite database

Once the Kobo is mounted, ls -a will show a directory named .kobo. That's where all the good stuff is: in particular, KoboReader.sqlite, the device's database, and Kobo/Kobo eReader.conf, a human-readable configuration file.

Browse through Kobo/Kobo eReader.conf for your own amusement, but the remainder of this article will be about KoboReader.sqlite.

I hadn't used sqlite before, and I'm certainly no SQL expert. But a little web searching and experimentation taught me what I needed to know.

First, make a local copy of KoboReader.sqlite, so you don't risk overwriting something important during your experimentation. The Kobo is apparently good at regenerating data it needs, but you might lose information on books you're reading.
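
For example, assuming the Kobo is mounted at /kobo as in the fstab entries above, making a working copy might look like this (any scratch directory will do):

$ mkdir ~/kobo-db
$ cp /kobo/.kobo/KoboReader.sqlite ~/kobo-db/
$ cd ~/kobo-db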

To explore the database manually, run: sqlite3 KoboReader.sqlite

Some useful queries

Here are some useful sqlite commands, which you can generalize to whatever you want to search for on your own Kobo. Every query (though not the .tables command) must end with a semicolon.

Show all tables in the database:

.tables
The most important ones, at least to me, are content (all your books), Shelf (a list of your shelves/collections), and ShelfContent (the table that assigns books to shelves).

Show all column names in a table:

PRAGMA table_info(content);
There are a lot of columns in content, so try PRAGMA table_info(Shelf); to see a much simpler table.

Show the names of all your shelves/collections:

SELECT Name FROM Shelf;

Show everything in a table:

SELECT * FROM Shelf;

Show all books assigned to shelves, and which shelves they're on:

SELECT ShelfName,ContentId FROM ShelfContent;
ContentId can be a URL to a sideloaded book, like file:///mnt/sd/TheWitchesOfKarres.epub, or a UUID like de98dbf6-e798-4de2-91fc-4be2723d952f for books from the Kobo store.

Show all books you have installed:

SELECT Title,Attribution,ContentID FROM content WHERE BookTitle is null ORDER BY Title;
One peculiarity of Kobo's database: each book has lots of entries, apparently one for each chapter. The entries for chapters have the chapter name as Title, and the book title as BookTitle. The entry for the book as a whole has BookTitle empty, and the book title as Title. For example, here's a sideloaded copy of Hamlet:
sqlite> SELECT Title,BookTitle from content WHERE ContentID LIKE "%hamlet%";
ACT I.|Hamlet
Scene II. Elsinore. A room of state in the Castle.|Hamlet
Scene III. A room in Polonius's house.|Hamlet
Scene IV. The platform.|Hamlet
Scene V. A more remote part of the Castle.|Hamlet
Act II.|Hamlet
  [ ... and so on ... ]
ACT V.|Hamlet
Scene II. A hall in the Castle.|Hamlet
Each of these entries has Title set to the name of the chapter (an act in the play) and BookTitle set to Hamlet, except for the final entry, which has Title set to Hamlet and BookTitle set to nothing. That's why you need that query WHERE BookTitle is null if you just want a list of your books.

Show all books by an author:

SELECT Title,Attribution,ContentID FROM content WHERE BookTitle is null
AND Attribution LIKE "%twain%" ORDER BY Title;
Attribution is where the author's name goes. LIKE %% searches are case insensitive.

Of course, it's a lot handier to have a program that knows these queries so you don't have to type them in every time (especially since the sqlite3 app has no history or proper command-line editing). But this has gotten long enough, so I'll write about that separately.

September 04, 2015 01:11 AM

August 30, 2015

Jono Bacon

Go and back the Mycroft Kickstarter campaign

Disclaimer: I am not a member of the Mycroft team, but I think this is neat and an important example of open innovation that needs support.

Mycroft is an Open Source, Open Hardware, Open APIs product that you talk to and it provides information and services. It is a wonderful example of open innovation at work.

They are running a kickstarter campaign that is pretty close to the goal, but it needs further backers to nail it.

I recorded a short video about why I think this is important. You can watch it here.

I encourage you to go and back the campaign. This kind of open innovation across technology, software, hardware, and APIs is how we make the world a better and more hackable place.

by Jono Bacon at August 30, 2015 09:42 PM

Elizabeth Krumbach

Simcoe’s July 2015 Checkup and Beyond

Simcoe, our Siamese, was diagnosed with Chronic Renal Failure (CRF) in December of 2011. Since then, we’ve kept her going with quarterly vet visits and subcutaneous fluid injections every other day to keep her properly hydrated. Her previous checkup was in mid March, so working around our travel schedules, we brought her in on July 2nd for her latest checkup.

Unfortunately the Blood urea nitrogen (BUN) and Creatinine (CRE) levels continue to increase past healthy levels.

This visit showed a drop in weight as well.

On the bright side, the weekly Alendronate tablets that were prescribed in May have been effective in getting her Calcium levels, which had been high for some time, back down. Our hope is that this trend will continue and prolong the life of her kidneys.

However, the ever-increasing BUN and CRE levels, combined with the weight loss, are a concern. She’s due for another urine analysis and ultrasound to get a closer view into what’s going on internally.

We had this all scheduled for the end of July when something came up. She sometimes gets sniffly, so it’s not uncommon to see crusted “eye goo” build up around her eyes. One day at the end of July I noticed it had gotten quite bad and grabbed her to wash it off. It’s when I got close to her eyes that I noticed it wasn’t “eye goo” that had crusted, she had sores around her eyes that had scabbed over! With no appointments at her regular vet on the horizon, we whisked her off to the emergency vet to see what was going on.

After several hours of waiting, the vet was able to look at the scabbing under the microscope and do a quick culture to confirm a bacterial infection. They also had a dermatologist have a quick look and decided to give her an antibiotics shot to try and clear it up. The next week we swapped out her ultrasound appointment for a visit with her vet to do a follow up. The sores had begun to heal by then and we were just given a topical gel to help it continue to heal. By early August she was looking much better and I left for my trip to Peru, with MJ following a few days later.

A few scabs around her eyes

When we came home in mid August Simcoe still looked alright, but within a few days we noticed the sores coming back. We were able to make an appointment for Saturday, August 22nd with her regular vet to see if we could get to the bottom of it. The result was another topical gel and a twice-a-day dose of the antibiotic Clavamox. The topical gel seemed effective, but the Clavamox seemed to make her vomit. On Monday, with the guidance of her vet, we stopped administering the Clavamox. On Wednesday I noticed that she hadn’t really been eating, sigh! Another call to the vet and I went over to pick up an appetite stimulant. She finally ate, but there was more vomiting. Thankfully our every-other-day fluid injections ensured that she didn’t become dehydrated through all of this. We brought her in for the final follow up just a couple days ago, on Friday. Her sores around her eyes are once again looking better and she seemed to be eating normally when I left for our latest trip on Friday evening.

Not happy (at the vet!) but sores are clearing up, again

I do feel bad leaving on another trip as she’s going through this, but she’s with a trusted pet sitter and I’m really hoping this is finally clearing up. I have a full month at home after this trip so if not we will have time at home to treat her. The strangest thing about all of this is that we have no idea how this happened. She’s an indoor cat, we live in a high rise condo building, and Caligula shows no symptoms, in spite of their proximity and their snuggle and groom-each-other habits. How did she get exposed to something? Why is Caligula fine?

“I am cute, don’t leave!”

Whatever the reason for all of this, here’s to Simcoe feeling better! Once she is, we’ll finally pick up getting the ultrasound and anything else done.

by pleia2 at August 30, 2015 03:20 PM

August 27, 2015

Jono Bacon

Ubuntu, Canonical, and IP

Recently there has been a flurry of concerns relating to the IP policy at Canonical. I have not wanted to throw my hat into the ring, but I figured I would share a few simple thoughts.

Firstly, the caveat. I am not a lawyer. Far from it. So, take all of this with a pinch of salt.

The core issue here seems to be whether the act of compiling binaries provides copyright over those binaries. Some believe it does, some believe it doesn’t. My opinion: I just don’t know.

The issue here though is with intent.

In Canonical’s defense, and specifically Mark Shuttleworth’s defense, they set out with a promise at the inception of the Ubuntu project that Ubuntu will always be free. The promise was that there would not be a hampered community edition and full-flavor enterprise edition. There will be one Ubuntu, available freely to all.

Canonical, and Mark Shuttleworth as a primary investor, have stuck to their word. They have not gone down the road of the community and enterprise editions, of per-seat licensing, or some other compromise in software freedom. Canonical has entered multiple markets where having separate enterprise and community editions could have made life easier from a business perspective, but they haven’t. I think we sometimes forget this.

Now, from a revenue side, this has caused challenges. Canonical has invested a lot of money in engineering/design/marketing and some companies have used Ubuntu without contributing even nominally to its development. Thus, Canonical has at times struggled to find the right balance between a free product for the Open Source community and revenue. We have seen efforts such as training services, Ubuntu One, etc., some of which have failed and some of which have succeeded.

Again though, Canonical has made their own life more complex with this commitment to freedom. When I was at Canonical I saw Mark very specifically reject notions of compromising on these ethics.

Now, I get the notional concept of this IP issue from Canonical’s perspective. Canonical invests in staff and infrastructure to build binaries that are part of a free platform and that other free platforms can use. If someone else takes those binaries and builds a commercial product from them, I can understand Canonical being a bit miffed about that and asking the company to pay it forward and cover some of the costs.

But here is the rub. While I understand this, it goes against the grain of the Free Software movement and the culture of Open Source collaboration.

Putting the legal question of copyrightable binaries aside for one second, the current Canonical IP policy is just culturally awkward. I think most of us expect that Free Software code will result in Free Software binaries and to make claim that those binaries are limited or restricted in some way seems unusual and the antithesis of the wider movement. It feels frankly like an attempt to find a loophole in a collaborative culture where the connective tissue is freedom.

Thus, I see this whole thing from both angles. Firstly, Canonical is trying to find the right balance of revenue and software freedom, but I also sympathize with the critics that this IP approach feels like a pretty weak way to accomplish that balance.

So, I ask my humble readers this question: if Canonical reverts this IP policy and binaries are free to all, what do you feel is the best way for Canonical to derive revenue from their products and services while also committing to software freedom? Thoughts and ideas welcome!

by Jono Bacon at August 27, 2015 11:59 PM

Elizabeth Krumbach

Travels in Peru: Machu Picchu

Our trip to Peru first took us to the cities of Lima and Cusco. We had a wonderful time in both, seeing the local sites and dining at some of their best restaurants. But if I’m honest, we left the most anticipated part of our journey for last: visiting Machu Picchu.

Before I talk about our trip to Machu Picchu, there are a few things worthy of note:

  1. I love history and ruins
  2. I’ve been fascinated by Peru since I was a kid
  3. Going to Machu Picchu has been a dream since I learned it existed

So, even being the world traveler that I am (I’d already been to Asia and Europe this year before going to South America), this was an exceptional trip for me. Growing up, our landlord was from Peru; as a friend of his daughters, I regularly got to see their home, which was full of Peruvian knickknacks and artifacts. As I dove into history during high school I learned about ancient ruins all over the world, from Egypt to Mexico and of course Machu Picchu in Peru. The mysterious city perched upon a mountaintop always held a special fascination for me. When the opportunity to go to Peru for a conference came up earlier this year, I agreed immediately and began planning. I had originally planned to go alone, but MJ decided to join me once I found a tour I wanted to book with. I’m so glad he did. Getting to share this experience with him meant the world to me.

Our trip from Cusco began very early on Friday morning in order to catch the 6:40AM train to Aguas Calientes, the village below Machu Picchu. Our tickets were for Peru Rail’s Vistadome train, and I was really looking forward to the ride. On the disappointing side, the Cusco half of the trip had foggy windows and the glare on the windows generally made it difficult to take pictures. But as we lowered in elevation my altitude headache went away and so did the condensation from the windows. The glare was still an issue, but as I settled in I just enjoyed the sights and didn’t end up taking many photos. It was probably the most enjoyable train journey I’ve ever been on. At 3 hours it was long enough to feel settled in and relaxed watching the countryside, rivers and mountains go by, but not too long that I got bored. I brought along my Nook but didn’t end up reading at all.

Of course I did take some pictures, here:

Once at Aguas Calientes our overnight bags (big suitcases were left at the hotel in Cusco, as is common) were collected and taken to the hotel. We followed the tour guide who met us with several others to take a bus up to Machu Picchu!

Our guide gave us a three hour tour of the site. At a medium pace, he took us to some of the key structures and took time for photo opportunities all around. Of particular interest to him was the Temple of the Sun (“J” shaped building, center of the photo below), which we saw from above and then explored around and below.

The hike up for these amazing views wasn’t very hard, but I was thankful for the stops along the way as he talked about the exploration and scientific discovery of the site in the early 20th century.

And then there were the llamas. Llamas were brought to Machu Picchu in modern times, some say to trim the grass and others say for tourists. It seems to be a mix of the two, and there is still a full staff of groundskeepers to keep tidy what the llamas don’t manage. I managed to get this nice people-free photo of a llama nursing.

There seem to be all kinds of jokes about “selfies with llamas” and I was totally in for that. I didn’t get next to a llama like some of my fellow selfie-takers, but I did get my lovely distance selfie with llamas.

Walking through what’s left of Machu Picchu is quite the experience: the tall stone walls and stepped terraces that make up the whole thing, and lots of climbing and walking at various elevations throughout the mountaintop. Even going through the ruins in Mexico didn’t quite prepare me for what it’s like to be on top of a mountain like this. Amazing place.

We really lucked out with the weather, much of the day was clear and sunny, and quite warm (in the 70s). It made for good walking weather as well as fantastic photos. When the afternoon showers did come in, it was just in time for our tour to end and for us to have lunch just outside the gates. When lunch was complete the sun came out again and we were able to go back in to explore a bit more and take more pictures!

I feel like I should write more about Machu Picchu, being such an epic event for me, but it was more of a visual experience much better shared via photos. I uploaded over 200 more photos from our walk through Machu Picchu here:

My photos were taken with a nice compact digital camera, but MJ brought along his DSLR camera. I’m really looking forward to seeing what he ended up with.

The park closes at 5PM, so close to that time we caught one of the buses back down to Aguas Calientes. I did a little shopping (went to Machu Picchu, got the t-shirt). We were able to check into our hotel, the Casa Andina Classic, which ended up being my favorite hotel of the trip; it was a shame we were only there for one night! Hot, high pressure shower, comfortable bed, and a lovely view of the river that runs along the village:

I was actually so tired from all our early mornings and late evenings the rest of the trip that after taking a shower at the hotel that evening I collapsed onto the bed and instead of reading, zombied out to some documentaries on the History channel, after figuring out the magical incantation on the remote to switch to English. So much for being selective about the TV I watch! We also decided to take advantage of the dinner that was included with our booking and had a really low key, but enjoyable and satisfying meal there at the hotel.

The next morning we took things slow and did some walking around the village before lunch. Aguas Calientes is very small; it’s quite possible that we saw almost all of it. I took the opportunity to also buy some post cards to send to my mother and sisters, plus find stamps for them. Finding stamps is always an interesting adventure. Our hotel couldn’t post them for me (or sell me stamps), and it being a Saturday we struck out at the actual post office, but we found a corner tourist goodie shop that sold them and a mailbox nearby so I could send them off.

For lunch we made our way past all the restaurants that were trying to get us in their doors by telling us about their deals and pushing menus our way, until we found what we were looking for: a strange little place called Indio Feliz. I found it first in the tour book I’d been lugging around, typical tourist that I am, and followed up with some online recommendations. The decor is straight up Caribbean pirate themed (what?) and, with a French owner, they specialize in Franco-Peruvian cuisine. We did the fixed menu where you pick an appetizer, entree and dessert, though it was probably too much for lunch! They also had the best beer menu I had yet seen in Peru; finally far from the altitude headache of Cusco, I had a Duvel and MJ went with a Chimay Red. Food-wise I began with an amazing avocado and papaya in lemon sauce. The entree was an exceptional skewer of beef with an orange sauce, and my meal concluded with coffee and apple pie that came with both custard and ice cream. While there we got to chat with some fellow diners from the US who had just concluded the 4 day Inca Trail hike; they regaled us with stories of rain and exhaustion as we swapped small talk about the work we do.

More photos from Aguas Calientes here:

After our leisurely lunch, it was off to the train station. We were back on the wonderful Vistadome train, and on the way back to Cusco there was some culturally-tuned entertainment as well as a “fashion show” featuring local clothing they were selling, mostly of alpaca wool. It was a fun touch, as the ride back was longer (going up the mountains) and being wintertime the last hour or so of the ride was in the dark.

We had our final night in Cusco, and Sunday was all travel. A quick flight from Cusco to Lima, where we had 7 hours before our next flight and took the opportunity to have one last meal in Lima. Unfortunately the timing of our stay meant that most restaurants were in their “closed between lunch and dinner” time, so we ended up at Larcomar, a shopping complex built into an oceanside cliff in Miraflores. We ate at Tanta, where we had a satisfying lunch with a wonderful ocean view!

Our late lunch concluded our trip, from there we went back to Lima airport and began our journey back home via Miami. I was truly sad to see the trip come to an end. Often times I am eager to get home after such an adventurey vacation (particularly when it’s attached to a conference!), but I will miss Peru. The sights, the foods, the llamas and alpacas! It’s a beautiful country that I hope to visit again.

by pleia2 at August 27, 2015 02:50 AM