Planet Ubuntu California

March 26, 2017

Nathan Haines

Winners of the Ubuntu 17.04 Free Culture Showcase

Spring is here and the release of Ubuntu 17.04 is just around the corner. I've been using it for two weeks and I can't say I'm disappointed! But one new feature that never disappoints me is the appearance of the community wallpapers that were selected from the Ubuntu Free Culture Showcase!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. For Ubuntu 17.04, 96 images were submitted to the Ubuntu 17.04 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found.

But now the results are in. The top choices were voted on by members of the Ubuntu community, and I'm proud to announce the winning images that will be included in Ubuntu 17.04:

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers (along with dozens of other stunning wallpapers) today at the links above, or in your desktop wallpaper list after you upgrade or install Ubuntu 17.04 on April 13th.

March 26, 2017 08:35 AM

March 25, 2017

Akkana Peck

Reading keypresses in Python

As part of preparation for Everyone Does IT, I was working on a silly hack to my Python script that plays notes and chords: I wanted to use the computer keyboard like a music keyboard, and play different notes when I press different keys. Obviously, in a case like that I don't want line buffering -- I want the program to play notes as soon as I press a key, not wait until I hit Enter and then play the whole line at once. In Unix that's called "cbreak mode".

There are a few ways to do this in Python. The most straightforward way is to use the curses library, which is designed for console-based user interfaces and games. But importing curses is overkill just to do key reading.

Years ago, I found a guide on the official Python Library and Extension FAQ: Python: How do I get a single keypress at a time?. I'd even used it once, for a one-off Raspberry Pi project that I didn't end up using much. I hadn't done much testing of it at the time, but trying it now, I found a big problem: it doesn't block.

Blocking is whether the read() waits for input or returns immediately. If I read a character with c = sys.stdin.read(1) but there's been no character typed yet, a non-blocking read will throw an IOError exception, while a blocking read will wait, not returning until the user types a character.

In the code on that Python FAQ page, blocking looks like it should be optional. This line:

fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)
is the part that requests non-blocking reads. Skipping that should let me read characters one at a time, blocking until each character is typed. But in practice, it doesn't work. If I omit the O_NONBLOCK flag, reads never return, not even if I hit Enter; if I set O_NONBLOCK, the read immediately raises an IOError. So I have to call read() over and over, spinning the CPU at 100% while I wait for the user to type something.
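
Concretely, the FAQ approach forces a polling loop something like this (a sketch of the pattern, assuming stdin has already been put into non-canonical, non-blocking mode as above -- not code from the actual script):

import sys

# Poll for a single character. With O_NONBLOCK set, read() raises
# IOError until a key is actually pressed, so this loop spins.
while True:
    try:
        c = sys.stdin.read(1)
        if c:
            break
    except IOError:
        pass    # no key yet; try again immediately, burning CPU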

The way this is supposed to work is documented in the termios man page. Part of what tcgetattr returns is something called the cc structure, which includes two members called Vmin and Vtime. man termios is very clear on how they're supposed to work: for blocking, single character reads, you set Vmin to 1 (that's the number of characters you want it to batch up before returning), and Vtime to 0 (return immediately after getting that one character). But setting them in Python with tcsetattr doesn't make any difference.
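
For reference, setting those fields from Python looks something like this (a sketch of what the man page suggests; as noted, it made no difference in practice):

import sys, termios

# The tcsetattr call that, according to the termios man page, should
# give blocking single-character reads.
fd = sys.stdin.fileno()
attrs = termios.tcgetattr(fd)
attrs[3] = attrs[3] & ~termios.ICANON    # turn off canonical (line-buffered) mode
attrs[6][termios.VMIN] = 1               # return after 1 character...
attrs[6][termios.VTIME] = 0              # ...with no inter-character timeout
termios.tcsetattr(fd, termios.TCSANOW, attrs)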

(Python also has a module called tty that's supposed to simplify this stuff, and you should be able to call tty.setcbreak(fd). But that didn't work any better than termios: I suspect it just calls termios under the hood.)

But after a few hours of fiddling and googling, I realized that even if Python's termios can't block, there are other ways of blocking on input. The select system call lets you wait on any file descriptor until it has input. So I should be able to set stdin to be non-blocking, then do my own blocking by waiting for it with select.

And that worked. Here's a minimal example:

import sys, os
import termios, fcntl
import select

fd = sys.stdin.fileno()

# Save the original terminal attributes and file status flags
# so they can be restored on exit.
oldterm = termios.tcgetattr(fd)
oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)

# Turn off canonical (line-buffered) mode and echo.
newattr = termios.tcgetattr(fd)
newattr[3] = newattr[3] & ~termios.ICANON
newattr[3] = newattr[3] & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSANOW, newattr)

# Make stdin non-blocking; select() below does the actual blocking.
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

print "Type some stuff"
while True:
    # Block until stdin has input, then read it.
    inp, outp, err = select.select([sys.stdin], [], [])
    c = sys.stdin.read()
    if c == 'q':
        break
    print "-", c

# Reset the terminal:
termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)

A less minimal example: keyreader.py, a class to read characters, with blocking and echo optional. It also cleans up after itself on exit, though most of the time that seems to happen automatically when I exit the Python script.

March 25, 2017 06:40 PM

March 24, 2017

Jono Bacon

My Move to ProsperWorks CRM and Feature Requests

As some of you will know, I am a consultant that helps companies build internal and external communities, collaborative workflow, and teams. Like any consultant, I have different leads that I need to manage, I convert those leads into opportunities, and then I need to follow up and convert them into clients.

Managing my time is one of the most critical elements of what I do. I want to maximize my time to be as valuable as possible, so optimizing this process is key. Thus, the choice of CRM has been an important one. I started with Insightly, but it lacked a key requirement: integration.

I hate duplicating effort. I spend the majority of my day living in email, so when a conversation kicks off as a lead or opportunity, I want to magically move that from my email to the CRM. I want to be able to see and associate conversations from my email in the CRM. I want to be able to see calendar events in my CRM. Most importantly, I don’t want to be duplicating content from one place to another. Sure, it might not take much time, but the reality is that I am just going to end up not doing it.

Evaluations

So, I evaluated a few different platforms, with a strong bias toward SalesforceIQ. The main attraction there was the tight integration with my email. The problem with SalesforceIQ is that it is expensive, it has limited integration beyond email, and it gets significantly more expensive when you want more control over your pipeline and reporting. SalesforceIQ has the notion of “lists”, where each is kind of like a filtered spreadsheet view. The basic plan includes a single list; to get more lists you have to move up to a plan that is much more expensive.

As I considered different solutions I stumbled across ProsperWorks. I had never heard of it, but there were a number of features that I was attracted to.

ProsperWorks

Firstly, ProsperWorks really focuses on tight integration with Google services. Now, a big chunk of my business is using Google services. ProsperWorks integrates with Gmail, but also Google Calendar, Google Docs, and other services.

They ship a Gmail plugin which makes it simple to click on a contact and add them to ProsperWorks. You can then create an opportunity from that contact with a single click. Again, this is from my email: this immediately offers an advantage to me.

ProsperWorks CRM

Yep, that’s not my Inbox. It is an image yanked off the Internet.

When viewing each opportunity, ProsperWorks will then show associated Google Calendar events, and I can easily attach Google Docs or other documents there too. The opportunity is presented as a timeline with email conversations listed there, along with integrated note-taking for meetings and other elements. It makes it easy to summarize the scope of the deal, add the value, and add all related material. Also, adding additional parties to the deal is simple because ProsperWorks knows about your contacts, as it sucks them up from your Gmail.

While the contact management piece is less important to me, it is also nice that it brings in related accounts for each contact automatically such as Twitter, LinkedIn, pictures, and more. Again, this all reduces the time I need to spend screwing around in a CRM.

Managing opportunities across the pipeline is simple too. I can define my own stages and then it basically operates like Trello and you just drag cards from one stage to another. I love this. No more selecting drop down boxes and having to save contacts. I like how ProsperWorks just gets out of my way and lets me focus on action.

…also not my pipeline view. Thanks again Google Images!

I also love that I can order these stages based on “inactivity”. Because ProsperWorks integrates email into each opportunity, it knows how many days have passed since I last engaged with an opportunity. This means I can (a) sort my opportunities based on inactivity so I can keep on top of them easily, and (b) set reminders to add tasks when I need to follow up.

ProsperWorks CRM

The focus on inactivity is hugely helpful when managing lots of concurrent opportunities.

As I was evaluating ProsperWorks, there was one additional element that really clinched it for me: the design.

ProsperWorks looks and feels like a Google application. It uses material design, and it is sleek and modern. It doesn’t just look good, but it is smartly designed in terms of user interaction. It is abundantly clear that whoever does the interaction and UX design at ProsperWorks is doing an awesome job, and I hope someone there cuts this paragraph out and shows it to them. If they do, you kick ass!

Of course, ProsperWorks does a load of other stuff that is helpful for teams, but I am primarily assessing this from a sole consultant’s perspective. In the end, I pulled the trigger and subscribed, and I am delighted that I did. It provides a great service, is more cost efficient than the alternatives, provides an integrated solution, and the company looks like they are doing neat things.

Feature Requests

While I dig ProsperWorks, there are some things I would love to encourage the company to focus on. So, ProsperWorks folks, if you are reading this, I would love to see you focus on the following. If some of these already exist, let me know and I will update this post. Consider me a resource here: happy to talk to you about these ideas if it helps.

Wider Google Calendar integration

Currently the gcal integration is pretty neat. One limitation, though, is that it depends on a gmail.com domain. As such, calendar events where someone invites my jonobacon.com email don’t automatically get added to the opportunity (and dashboard). It would be great to be able to associate another email address with an account (e.g. a gmail.com and jonobacon.com address) so that when calendar events have either or both of those addresses they are sucked into opportunities. It would also be nice to select which calendars are viewed: I use different calendars for different things (e.g. one calendar for booked work, one for prospect meetings, one for personal, etc.). Feature Request Link

It would also be great to have ProsperWorks ease the scheduling of calendar meetings in available slots. I want to be able to talk to a client about scheduling a call, click a button in the opportunity, and have ProsperWorks suggest four different options for call times; I can select which ones I am interested in and then offer these times to the client, who can pick one. ProsperWorks knows my calendar, so this should be doable, and it would be hugely helpful. Feature Request Link

Improve the project management capabilities

I have a dream. I want my CRM to also offer simple project management capabilities. ProsperWorks does have a ‘projects’ view, but I am unclear on the point of it.

What I would love to see is simple project tracking which integrates (a) the ability to set milestones with deadlines and key deliverables, and (b) Objective Key Results. This would be huge: I could agree on a set of work complete with deliverables as part of an opportunity, and then with a single click be able to turn this into a project where the milestones would be added and I could assign tasks, track notes, and even display a burndown chart to see how on track I am within a project. Feature Request Link

This doesn’t need to be a huge project management system, just a simple way of adding milestones, their child tasks, tracking deliverables, and managing work that leads up to those deliverables. Even if ProsperWorks just adds simple Evernote functionality where I can attach a bunch of notes to a client, this would be hugely helpful.

Optimize or Integrate Task Tracking

Tracking tasks is an important part of my work. The gold standard for task tracking is Wunderlist. It makes it simple to add tasks (not all tasks need deadlines), and I can access them from anywhere.

I would love ProsperWorks to either offer that simplicity of task tracking (hit a key, whack in a title for a task, and optionally add a deadline instead of picking an arbitrary deadline that it nags me about later), or integrate with Wunderlist directly. Feature Request Link

Dashboard Configurability

I want my CRM dashboard to be something I look at every day. I want it to tell me what calendar events I have today, which opportunities I need to follow up with, what tasks I need to complete, and how my overall pipeline is doing. ProsperWorks does some of this, but doesn’t allow me to configure this view. For example, I can’t get rid of the ‘Invite Team Members’ box, which is entirely irrelevant to me as an individual consultant. Feature Request Link

So, all in all, nice work, ProsperWorks! I love what you are doing, and I love how you are innovating in this space. Consider me a resource: I want to see you succeed!

UPDATE: Updated with feature request links.

The post My Move to ProsperWorks CRM and Feature Requests appeared first on Jono Bacon.

by Jono Bacon at March 24, 2017 05:13 PM

March 23, 2017

Jono Bacon

Community Leadership Summit 2017: 6th – 7th May in Austin

The Community Leadership Summit is taking place on the 6th – 7th May 2017 in Austin, USA.

The event brings together community managers and leaders, projects, and initiatives to share and learn how we build strong, engaging, and productive communities. The event takes place the weekend before OSCON in the same venue, the Austin Convention Center. It is entirely FREE to attend and welcomes everyone, whether you are a community veteran or just starting out your journey!

The event is broken into three key components.

Firstly, we have an awesome set of keynotes this year:

Secondly, the bulk of the event is an unconference where the attendees volunteer session ideas and run them. Each session is a discussion where the topic is debated and conclusions are reached. This results in a hugely diverse range of sessions covering topics such as event management, outreach, social media, governance, collaboration, diversity, building contributor programs, and more. These discussions are incredible for exploring and learning new ideas, meeting interesting people, building a network, and developing friendships.

Finally, we have social events on both evenings where you can meet and network with other attendees. Food and drinks are provided by data.world and Mattermost. Thanks to both for their awesome support!

Join Us

The Community Leadership Summit is entirely FREE to attend. If you would like to join, we would appreciate if you could register (this helps us with expected numbers). I look forward to seeing you there in Austin on the 6th – 7th May 2017!

The post Community Leadership Summit 2017: 6th – 7th May in Austin appeared first on Jono Bacon.

by Jono Bacon at March 23, 2017 04:40 PM

March 22, 2017

Elizabeth Krumbach

Your own Zesty Zapus

As we quickly approach the release of Ubuntu 17.04, Zesty Zapus, coming up on April 13th, you may be thinking of how you can mark this release.

Well, thanks to Tom Macfarlane of the Canonical Design Team you have one more goodie in your toolkit, the SVG of the official Zapus! It’s now been added to the Animal SVGs section of the Official Artwork page on the Ubuntu wiki.

Zesty Zapus

Download the SVG version for printing or using in any other release-related activities from the wiki page or directly here.

Over here, I’m also all ready with the little “zapus” I picked up on Amazon.

Zesty Zapus toy

by pleia2 at March 22, 2017 04:01 AM

March 21, 2017

Elizabeth Krumbach

SCaLE 15x

At the beginning of March I had the pleasure of heading down to Pasadena, California for SCaLE 15x. Just like last year, MJ also came down for work so it was fun syncing up with him here and there between going off to our respective meetings and meals.

I arrived on the evening of March 1st and went out with my co-organizer of the Open Source Infrastructure Day to pick up some supplies for the event. I hope to write up a toolkit for running one of these days based on our experiences and what we needed to buy, but that will have to wait for another day.

March 2nd is when things began properly and we got busy! I spent most of my day running the Open Source Infrastructure day, which I wrote about here on opensource.com: How to grow healthy open source project infrastructures.

I also spent an hour over at the UbuCon Summit giving a talk on Xubuntu, which I already blogged about here. Throughout the day I also handled the Twitter accounts for both @OpenSourceInfra and @ubuntu_us_ca. This was deceptively exhausting; by Thursday night I was ready to crash, but we had a dinner to go to! Speakers, organizers and other key folks who were part of our Open Source Infrastructure day were treated to a meal by IBM.


Photo thanks to SpamapS (source)

Keynotes for the conference on Saturday and Sunday were both thoughtful, future-thinking talks about the importance of open source software, culture and methodologies in our world today. On Saturday we heard from Astrophysicist Christine Corbett Moran, who among her varied accomplishments has done research in Antarctica and led security-focused development of the now wildly popular Signal app for iOS. She spoke on the relationships between our development of software and the communities we’re building in the open. There is much to learn and appreciate in this process, but also more that we can learn from other communities. Slides from her talk, amusingly constructed as a series of tweets (some real, most not) are available as a pdf on the talk page.


Christine Corbett Moran on “Open Source Software as Activism”

In Karen Sandler’s keynote she looked at much of what is going on in the United States today and seriously questioned her devotion to free software when it seems like there are so many other important causes out there to fight for. She came back to free software though, since it’s such an important part of every aspect of our lives. As technologists, it’s important for us to continue our commitment to open source and support organizations fighting for it. A video of her talk is already available on YouTube: SCaLE 15x Keynote: Karen Sandler – In the Scheme of Things, How Important is Software Freedom?

A few other talks really stood out for me, Amanda Folson spoke on “10 Things I Hate About Your API” where she drew from her vast experience with hosted APIs to give advice to organizations who are working to build their customer and developer communities around a product. She scrutinized things like sign-up time and availability and complexity of code examples. She covered tooling problems, documentation, reliability and consistency, along with making sure the API is actually written for the user, listening to feedback from users to maintain and improve it, and abiding by best practices. Best of all, she also offered helpful advice and solutions for all these problems! The great slides from her talk are available on the talk page.


Amanda Folson

I also really appreciated VM Brasseur’s talk, “Passing the Baton: Succession planning for FOSS leadership”. I’ll admit right up front that I’m not great at succession planning. I tend to take on a lot in projects and then lack the time to actually train other people because I’m so overwhelmed. I’m not alone in this; succession planning is a relatively new topic in open source projects and only a handful have taken a serious look at it from a high, project-wide level. Key points she made centered on making sure skills for important roles are documented and passed around, and she suggested term limits for key roles. She walked attendees through a process of identifying critical roles and responsibilities in their community, refactoring roles that are really too large for individual contributors, and procedures and processes for knowledge transfer. I think one of the most important things about this talk was less the “bus factor” worry of losing major contributors unexpectedly, and more how documenting roles makes your project more welcoming to new and more diverse contributors. Work is well-scoped, so it’s easy for someone new to come in and help on a small part, and the project has support built in for that.


VM Brasseur

For my part, I gave a talk on “Listening to the Needs of Your Global Open Source Community” where I had a small audience (last talk of the day, against a popular speaker) but an engaged one that had great questions. It’s sometimes nice to have a smaller crowd that allows you to talk to almost everyone after the talk; I even arranged a follow-up lunch meeting with a woman I met who is doing some work similar to what I did for the i18n team in the OpenStack community. Slides from my talk are here (7.4M PDF).

I heard a talk from AB Periasamy of Minio, the open source alternative to AWS S3 that we’re using at Mesosphere for some of our DC/OS demos that need object storage. My friend and open source colleague Nathan Handler also gave a very work-applicable talk on PaaSTA, the framework built by Yelp to support their Apache Mesos-driven infrastructure. I cover both of these talks in more depth in a blog post coming out soon on the dcos.io blog. Edit: The post on the DC/OS blog is now up: Reflecting on SCaLE 15x.

SCaLE 15x remains one of my favorite conferences. Lots of great talks, key people from various segments of open source communities I participate in and great pacing so that you can find time to socialize and learn. Huge thanks to Ilan Rabinovitch who I worked with a fair amount during this event to make sure the Open Source Infrastructure day came together, and to the fleet of volunteers who make this happen every year.

More photos from SCaLE 15x here: https://www.flickr.com/photos/pleia2/albums/72157681016586816

by pleia2 at March 21, 2017 07:53 PM

March 20, 2017

Akkana Peck

Everyone Does IT (and some Raspberry Pi gotchas)

I've been quiet for a while, partly because I've been busy preparing for a booth at the upcoming Everyone Does IT event at PEEC, organized by LANL.

In addition to booths from quite a few LANL and community groups, they'll show the movie "CODE: Debugging the Gender Gap" in the planetarium. I checked out the movie last week (our library has it) and it's a good overview of the problem of diversity, and especially the problems women face in programming jobs.

I'll be at the Los Alamos Makers/Coder Dojo booth, where we'll be showing an assortment of Raspberry Pi and Arduino based projects. We've asked the Coder Dojo kids to come by and show off some of their projects. I'll have my RPi crittercam there (such as it is) as well as another Pi running motioneyeos, for comparison. (Motioneyeos turned out to be remarkably difficult to install and configure, and doesn't seem to do any better than my lightweight scripts at detecting motion without false positives. But it does offer streaming video, which might be nice for a booth.) I'll also be demonstrating cellular automata and the Game of Life (especially since the CODE movie uses Life as a background in quite a few scenes), music playing in Python, a couple of Arduino-driven NeoPixel LED light strings, and possibly an arm-waving penguin I built a few years ago for GetSET, if I can get it working again: the servos aren't behaving reliably, but I'm not sure yet whether it's a problem with the servos and their wiring or a power supply problem.

The music playing script turned up an interesting Raspberry Pi problem. The Pi has a headphone output, and initially when I plugged a powered speaker into it, the program worked fine. But then later, it didn't. After much debugging, it turned out that the difference was that I'd made myself a user so I could have my normal shell environment. I'd added my user to the audio group and all the other groups the default "pi" user is in, but the Pi's pulseaudio is set up to allow audio only from users root and pi, and it ignores groups. Nobody seems to have found a way around that, but sudo apt-get purge pulseaudio solved the problem nicely.

I also hit a minor snag attempting to upgrade some of my older Raspbian installs: lightdm can't upgrade itself (Errors were encountered while processing: lightdm). Lots of people on the web have hit this, and nobody has found a way around it; the only solution seems to be to abandon the old installation and download a new Raspbian image.

But I think I have all my Raspbian cards installed and working now; pulseaudio is gone, music plays, the Arduino light shows run. Now to play around with servo power supplies and see if I can get my penguin's arms waving again when someone steps in front of him. Should be fun, and I can't wait to see the demos the other booths will have.

If you're in northern New Mexico, come by Everyone Does IT this Tuesday night! It's 5:30-7:30 at PEEC, the Los Alamos Nature Center, and everybody's welcome.

March 20, 2017 06:29 PM

March 19, 2017

Elizabeth Krumbach

Simcoe’s January and March 2017 Checkups

Simcoe has had a few checkups since I last wrote in October. First was a regular checkup in mid-December, where I brought her in on my own and had to start thinking about how we’re going to keep her weight up. The next step will be a feeding tube, and we really don’t want to go down that path with a cat who has never even been able to tolerate a collar. Getting her to take fiber was getting to be stressful for all of us, so the doctor prescribed Lactulose to be taken daily to handle constipation. Medication for a kitty facing renal failure is always a dicey option, but the constipation was clearly painful for her and causing her to vomit more. We started getting going with that slowly. We skipped the blood work with this visit since we were aiming to get it done again in January.

On January 7th she was not doing well and was brought in for an emergency visit to make sure she didn’t pass into crisis with her renal failure. Blood work was done then and we had to get more serious about making sure she stays regular and keeps eating. Still, her weight started falling more dramatically at this point, with her dropping below 8 lbs for the first time since she was diagnosed in 2011, landing her at a worrying 7.58. Her BUN level had gone from 100 in October to 141, CRE from 7.0 to 7.9.

At the end of January she went in for her regular checkup. We skipped the blood work since it had just been done a couple weeks before. We got a new, more concentrated formulation of Mirtazapine to stimulate her appetite since MJ had discovered that putting the liquid dosage into a capsule that she could swallow without tasting any of it was the only possible way we could get her to take it. The Calcitriol she was taking daily was also reformulated. We had to leave town unexpectedly for a week in early February, which she wasn’t at all happy with, but since then I’ve been home with her most of the time, so she seems to have perked up a bit, and after dipping in weight she seems to be doing tolerably well.

When we brought her into the vet on March 11th her weight came in at a low 6.83 lbs. The lowest weight she’d ever had was 6.09 when she was first diagnosed and not being treated at all, so she wasn’t down to her all-time low. Still, dropping below 7 pounds is troubling, especially since it has happened so rapidly.

The exam went well though, the vet continues to be surprised at how well she’s doing outwardly in spite of her weight and blood work. Apparently some cats just handle the condition better than others. Simcoe is a lucky, tough kitty.


Evidence of the blood draw!

I spoke with the vet this morning now that blood work has come back. Her phosphorous and calcium levels are not at all where we want them to be. Her CRE is up from 7.9 to 10.5, BUN went from 141 to 157. Sadly, these are pretty awful levels, her daily 100 ml Subcutaneous fluids are really what is keeping her going at this point.

With this in mind, as of today we’ve suspended use of the Calcitriol and switched the Atopica she’s taking for allergies to every other day. We’re only continuing with the Mirtazapine, Lactulose and Subcutaneous fluids. I’m hoping that the reduction in medications she’s taking each day will stress her body and mind less, leading to a happier kitty even as her kidneys continue in their decline. I hope she’s not in a lot of pain day to day. She does still vomit a couple of times a week, I know her constipation isn’t fully addressed by the medication, and she is still quite thirsty all the time. We can’t increase her fluids dosage since there’s only so much she can absorb in a day, and it would put stress on her heart (she has a slight heart murmur). Keeping her weight up remains incredibly important, with the vet pretty much writing off dietary restrictions and saying she can eat as much of whatever she likes (turkey prepared for humans? Oh yes!).

Still, mostly day to day we’re having a fun cat life over here. We sent our laundry out while the washer was broken recently and the clothes came back bundled in strings that Simcoe had a whole evening of fun with. I picked up a laser pointer recently that she played with for a bit before figuring it out; now she just stares at my hand when I use it, but at least Caligula still enjoys it! And in the evenings when I carve out some time to read or watch TV, it’s pretty common for her to camp out on my lap.

by pleia2 at March 19, 2017 10:09 PM

March 13, 2017

Eric Hammond

Incompatible: Static S3 Website With CloudFront Forwarding All Headers

a small lesson learned in setting up a static web site with S3 and CloudFront

I created a static web site hosted in an S3 bucket named www.example.com (not the real name) and enabled accessing it as a website. I wanted delivery to be fast to everybody around the world, so I created a CloudFront distribution in front of the S3 bucket.

I wanted S3 to automatically add “index.html” to URLs ending in a slash (CloudFront can’t do this), so I configured the CloudFront distribution to access the S3 bucket as a web site using www.example.com.s3-website-us-east-1.amazonaws.com as the origin server.
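
In CloudFront terms, that means the distribution uses a custom (website) origin rather than an S3 REST origin. Roughly, the relevant piece of the origin configuration looks like this (an illustrative Python fragment using this post's example names, not a complete or authoritative config; the "Id" value is made up):

# Illustrative origin fragment: the S3 *website* endpoint is plain
# HTTP, so it is configured as a custom origin.
origin = {
    "Id": "s3-website-origin",  # hypothetical identifier
    "DomainName": "www.example.com.s3-website-us-east-1.amazonaws.com",
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "http-only",  # website endpoints don't speak HTTPS
    },
}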

Before sending all of the www.example.com traffic to the new setup, I wanted to test it, so I added test.example.com to the list of CNAMEs in the CloudFront distribution.

After setting up Route53 so that DNS lookups for test.example.com would resolve to the new CloudFront endpoint, I loaded it in my browser and got the following error:

404 Not Found

Code: NoSuchBucket
Message: The specified bucket does not exist
BucketName: test.example.com
RequestId: [short string]
HostId: [long string]

Why would AWS be trying to find an S3 bucket named test.example.com? That was pointing at the CloudFront distribution endpoint, and CloudFront was configured to get the content from www.example.com.s3-website-us-east-1.amazonaws.com

After debugging, I found out that the problem was that I had configured the CloudFront distribution to forward “all” HTTP headers. I thought that this would be a sneaky way to turn off caching in CloudFront so that I could keep updating the content in S3 and not have to wait to see the latest changes.

However, this also means that CloudFront was forwarding the HTTP Host header from my browser to the S3 website handler. When S3 saw that I was requesting the host of test.example.com it looked for a bucket of the same name and didn’t find it, resulting in the above error.

When I turned off forwarding all HTTP headers in CloudFront, it then started sending through the correct header:

Host: www.example.com.s3-website-us-east-1.amazonaws.com

which S3 correctly interpreted as accessing the correct S3 bucket www.example.com in the website mode (adding index.html after trailing slashes).
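
The difference comes down to which headers the cache behavior forwards to the origin. A rough sketch of the relevant ForwardedValues fragment, again as an illustrative Python dict with field names taken from the CloudFront API:

# Forward nothing: CloudFront sets Host to the origin's own domain,
# which is what the S3 website endpoint expects.
forwarded_values = {
    "QueryString": False,
    "Cookies": {"Forward": "none"},
    "Headers": {"Quantity": 0},
}

# By contrast, forwarding all headers -- "Headers": {"Quantity": 1,
# "Items": ["*"]} -- passes the browser's Host header straight
# through, which is what caused the NoSuchBucket error above.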

It makes sense for CloudFront to support forwarding the Host header from the browser, especially when your origin server is a dynamic web site that can act on the original hostname. You can set up a wildcard *.example.com DNS entry pointing at your CloudFront distribution, and have the back end server return different results depending on what host the browser requested.

However, passing the Host header doesn’t work so well for an origin server S3 bucket in website mode. Lesson learned and lesson passed on.

Original article and comments: https://alestic.com/2017/03/cloudfront-s3-host-header/

March 13, 2017 09:30 PM

March 10, 2017

Akkana Peck

At last! A roadrunner!

We live in what seems like wonderful roadrunner territory. For the three years we've lived here, we've hoped to see a roadrunner, and have seen them a few times at neighbors' places, but never in our own yard.

Until this morning. Dave happened to be looking out the window at just the right time, and spotted it in the garden. I grabbed the camera, and we watched it as it came out from behind a bush and went into stalk mode.

[Roadrunner stalking]

And it caught something!

[close-up, Roadrunner with fence lizard] We could see something large in its bill as it triumphantly perched on the edge of the garden wall, before hopping off and making a beeline for a nearby juniper thicket.

It wasn't until I uploaded the photo that I discovered what it had caught: a fence lizard. Our lizards only started to come out of hibernation about a week ago, so the roadrunner picked the perfect time to show up.

I hope our roadrunner decides this is a good place to hang around.

March 10, 2017 09:33 PM

Elizabeth Krumbach

Ubuntu at SCaLE15x

On Thursday, March 2nd I spent most of the day running an Open Source Infrastructure Day, but across the way my Ubuntu friends were kicking off the first day of the second annual UbuCon Summit at SCaLE. The first day included a keynote by Carl Richell of System76, where they made product announcements, including their new Galago Pro laptop and their Starling Pro ARM server. The next talk came from Nextcloud, and the day continued with talks from Aaron Atchison and Karl Fezer on the Mycroft AI, José Antonio Rey on Getting to know Juju: From zero to deployed in minutes, and Amber Graner sharing the wisdom that You don’t need permission to contribute to your own destiny.

I ducked out of the Open Infrastructure Day in the mid-afternoon to give my talk, 10 Years of Xubuntu. This is a talk I’d been thinking about for some time, and I began by walking folks through the history of the Xubuntu project. From there I spoke about where it sits in the Ubuntu community as a recognized flavor, and then moved on to the specific strategies the team has employed to motivate a completely volunteer-driven project.

When it came to social media accounts, we didn’t create them all ourselves. Instead we relied upon existing accounts on Facebook, G+ and LinkedIn that we promoted to official status, keeping the original volunteers in place and simply giving access to a core Xubuntu team member in case they couldn’t continue running them. It worked out for all of us: we had solid contributors passionate about their specific platforms and excited to be made official, and as long as they kept the accounts running we didn’t need to expend core team resources on them. We’ve also worked to collect user stories in order to motivate current contributors, since it means a lot to see their work being used by others. I’ve also placed a great deal of value on the Xubuntu Strategy Document, which has set the guiding principles of the project and allowed us to steer the ship through difficult decisions in the project. Slides from the talk are available here: 10_years_of_Xubuntu_UbuCon_Summit.pdf (1.9M).

Thursday evening I met with my open source infrastructure friends for dinner, but afterwards swung by Porto Alegre to catch some folks for evening drinks and snacks. I had a really nice chat with Nathan Haines, who co-organized the UbuCon Summit.

On Friday I was able to attend the first keynote! Michael Hall gave a talk titled Sponsored by Canonical where he dove deep into Ubuntu history to highlight Canonical’s role in supporting the project, from the early focus on desktop Linux to the move into devices and the cloud. His talk was followed by one from Sergio Schvezov on Snaps. The afternoon was spent as an unconference, with the Ubuntu booth starting up in the expo hall at 2PM.

The weekend was all about the Ubuntu booth. Several volunteers staffed it Friday through Sunday.

They spent the event showing off the Ubuntu Phone, Mycroft AI, and several laptops.

It was also great to once again meet up with one of my co-authors for the 9th edition of The Official Ubuntu Book, José Antonio Rey. Our publisher sent a pile of books to give out at the event, some of which we gave out during our talks, and a couple more at the booth.

by pleia2 at March 10, 2017 05:39 AM

Work, wine, open source and… survival

So far 2017 has proven to be quite the challenge, but let’s hold off on all that until the end.

As I’ve mentioned in a couple of recent posts, I started a new job in January, joining Mesosphere to move up the stack to work on containers and focus on application deployments. It’s the first time I’ve worked for a San Francisco startup and so far I’ve been having a lot of fun working with really smart people who are doing interesting work that’s on the cutting edge of what companies are doing today. Aside from travel for work, I’ve spent most of my time these first couple of months in the office getting to know everyone. Now, we all know that offices aren’t my thing, but I have enjoyed the catered breakfasts and lunches, dog-friendly environment and ability to meet with colleagues in person as I get started.

I’ve now started going in mostly just for meetings, with productivity much higher when I can work from home like I have for the past decade. My team is working on outreach and defining open source strategies, helping with slide decks, guides and software demos. All stuff I’m finding great value in. As I start digging deeper into the tech I’m finding myself once again excited about work I’m doing and building things that people are using.

Switching gears into the open source work I still do for fun, I’ve started to increase my participation with Xubuntu again, just recently wrapping up the #LoveXubuntu Competition. At SCaLE15x last week I gave a Xubuntu presentation, which I’ll write about in a later post. Though I’ve stepped away from the Ubuntu Weekly Newsletter just recently, I did follow through with ordering and shipping stickers off to winners of our issue 500 competition.

I’ve also put a nice chunk of my free time into promoting Open Source Infrastructure work. In addition to a website that now has a huge list of infras thanks to various contributors submitting merge proposals via GitLab, I worked with a colleague from IBM to run a whole open source infra event at SCaLE15x. Though we went into it with a lot of uncertainty, we came out the other end having had a really successful event and excitement from a pile of new people.

It hasn’t been all work though. In spite of a mounting to do list, sometimes you just need to slow down.

At the beginning of February MJ and I spent a Saturday over at the California Historical Society to see their Vintage: Wine, Beer, and Spirits Labels from the Kemble Collections on Western Printing and Publishing exhibit. It’s just around the corner from us, so it allowed for a lovely hour of taking a break after a Saturday brunch to peruse various labels spanning wine, beer and spirits from a designer and printer in California during the first half of the 20th century. The collection was of mass-production labels, with nothing artisanal about them and no artists signing their names, but it did capture a place in time, and I’m a sucker for early 20th century design. It was a fascinating collection, beautifully curated like their exhibits always are, and I’m glad we made time to see it.

More photos from the exhibit are up here: https://www.flickr.com/photos/pleia2/albums/72157676346542394

At the end of February we noted our need to pick up our quarterly wine club subscription at Rutherford Hill. In what was probably our shortest trip up to Napa, we enjoyed a noontime brunch at Harvest Table in St. Helena. We picked up some Charbay hop-flavored whiskey, stopped by the Heitz Cellar tasting room where we picked up a bottle of my favorite Zinfandel and then made our way to Rutherford Hill to satisfy the real goal of our trip. Upon arrival we were pleased to learn that a members’ wine-tasting event was being held in the caves, where they had a whole array of wines to sample along with snacks and cheeses. Our wine adventures ended with this stop and we made a relatively early trek south, in the direction of home.

A few more photos from our winery jaunt are here: https://www.flickr.com/photos/pleia2/albums/72157677743529104

Challenge-wise, here we go. Starting a new job means a lot of time spent learning, while I’ve also had to hit the ground running. We worked our way through a death in the family last month. I’ve been away from home a lot, and generally we’ve been doing a lot of running around to complete all the adult things related to life. Our refrigerator was replaced in December and in January I broke one of the shelves, resulting in a spectacular display of tomato sauce all over the floor. Weeks later our washing machine started acting up and overflowed (thankfully no damage done in our condo); we have our third repair visit booked and hopefully it’ll be properly fixed on Monday.

I spent the better part of January recovering from a severe bout of bronchitis that had lasted three months, surviving antibiotics, steroids and two types of inhalers. MJ is continuing to nurse a broken bone in his foot, transitioning from air cast to shoe-based aids, but there’s still pain and uncertainty around whether it’ll heal properly without surgery. Simcoe is not doing well, she is well into the final stages of renal failure. We’re doing the best we can to keep her weight up and make sure she’s happy, but I fear the end is rapidly approaching and I’m not sure how I’ll cope with it. I also lurked in the valley of depression for a while in February.

We’re also living in a very uncertain political climate here in the United States. I’ve been seeing people I care about being placed in vulnerable situations. I’m finding myself deeply worried every time I browse the news or social media for too long. I never thought that in 2017 I’d be reading from a cousin who was evacuated from a Jewish center due to a bomb threat, or have to check to make sure the cemetery in Philadelphia that was desecrated wasn’t one that my relatives were in. A country I’ve loved and been proud of for my whole life, through so many progressive changes in recent years, has been transformed into something I don’t recognize. I have friends and colleagues overseas cancelling trips and moves here because they’re afraid of being turned away or otherwise made to feel unwelcome. I’m thankful for my fellow citizens who are standing up against it and organizations like the ACLU who have vowed to keep fighting; I just can’t muster the strength for it right now.

Right now we have a lot going on, and though we’re both stressed out and tired, we aren’t actively handling any crisis at the moment. I feel like I finally have a tiny bit of breathing room. These next two weekends will be spent catching up on tasks and paperwork. I’m planning on going back to Philadelphia for a week at the end of the month to start sorting through my mother-in-law’s belongings and hopefully wrap up sorting of things that belonged to MJ’s grandparents. I know a fair amount of heartache awaits me in these tasks, but we’ll be in a much better place to move forward once I’ve finished. Plus, though I’ll be working each day, I will be making time to visit with friends while I’m there and that always lifts my spirits.

by pleia2 at March 10, 2017 02:35 AM

March 05, 2017

Akkana Peck

The Curious Incident of the Junco in the Night-Time

Dave called from an upstairs bedroom. "You'll probably want to see this."

He had gone up after dinner to get something, turned the light on, and been surprised by an agitated junco, chirping and fluttering on the sill outside the window. It evidently was trying to fly through the window and into the room. Occasionally it would flutter backward to the balcony rail, but no further.

There's a piñon tree whose branches extend to within a few feet of the balcony, but the junco ignored the tree and seemed bent on getting inside the room.

As we watched, hoping the bird would calm down, it instead became increasingly desperate and stressed. I remembered how, a few months earlier, I opened the door to a deck at night and surprised a large bird, maybe a dove, that had been roosting there under the eaves. The bird startled and flew off in a panic toward the nearest tree. I had wondered what happened to it -- whether it had managed to find a perch in the thick of a tree in the dark of night. (Unlike San Jose, White Rock gets very dark at night.)

And that thought solved the problem of our agitated junco. "Turn the porch light on", I suggested. Dave flipped a switch, and the porch light over the deck illuminated not only the deck where the junco was, but the nearest branches of the nearby piñon.

Sure enough, now that it could see the branches of the tree, the junco immediately turned around and flew to a safe perch. We turned the porch light back off, and we heard no more from our nocturnal junco.

March 05, 2017 06:27 PM

February 27, 2017

Nathan Haines

UbuCon Summit Comes to Pasadena this Week!

UbuCon SCALE 14x group photo

Once again, UbuCon Summit will be hosted by the Southern California Linux Expo in Pasadena, California on March 2nd and 3rd. UbuCon Summit is two days that celebrate Ubuntu and the community, and this year has some excitement in store.

Thursday's keynote will feature Carl Richell, the CEO and founder of System 76, a premium source of Ubuntu desktop and laptop computers. Entitled "Acrylic, Aluminum, Thumb Screws, and Heavy Machinery at System 76," he will share how System 76 is reinventing what it means to be a computer manufacturer, and talk about how they are changing the relationship between users and their devices. Don't miss this fascinating peek behind the scenes of a computer manufacturer that focuses on Ubuntu, and keep your ears peeled because they are announcing new products during the keynote!

We also have community member Amber Graner who will share her inspiring advice on how to forge a path to success with her talk "You Don't Need Permission to Contribute to Your Own Destiny," and Elizabeth Joseph who will talk about her 10 years in the Xubuntu community.

Thursday will wrap up with our traditional open Ubuntu Q&A panel where you can ask us your burning questions about Ubuntu, and Friday will see a talk from Michael Hall, "Sponsored by Canonical" where he describes the relationship between Canonical and Ubuntu and how it's changed, and Sergio Schvezov will describe Ubuntu's next-generation packaging format in "From Source to Snaps." After a short break for lunch and the expo floor, we'll be back for four unconference sessions, where attendees will come together to discuss the Ubuntu topics that matter most to them.

Ubuntu will be at booth 605 during the Southern California Linux Expo's exhibition floor hours from Friday through Sunday. You'll be able to see the latest version of Ubuntu, see how it works with touchscreens, laptops, phones, and embedded devices, and get questions answered by both community and Canonical volunteers at the booth.

Come for UbuCon, stay for SCALE! This is a weekend not to be missed!

SCALE 15x, the 15th Annual Southern California Linux Expo, is the largest community-run Linux/FOSS showcase event in North America. It will be held from March 2-5 at the Pasadena Convention Center in Pasadena, California. For more information on the expo, visit https://www.socallinuxexpo.org

February 27, 2017 01:24 AM

February 26, 2017

Elizabeth Krumbach

My mother-in-law

On Monday, February 6th MJ’s mother passed away.

She had been ill over the holidays and we had the opportunity to visit with her in the hospital a couple times while we were in Philadelphia in December. Still, with her move to a physical rehabilitation center in recent weeks I thought she was on the mend. Learning of her passing was quite the shock, and it hasn’t been easy. No arrangements had been made for her passing, so for the few hours following her death we notified family members and scrambled to select a cemetery and funeral home. Given the distance and our situations at work (I was about to leave for a conference and MJ had commitments as well) we decided to meet in Philadelphia at the end of the week and take things from there.

MJ and I met at the townhouse in Philadelphia on Saturday and began the week of work we needed to do to put her to rest. Selecting a plot in the cemetery, organizing her funeral, selecting a headstone. A lot of this was new for both of us. While we both have experienced loss in our families, most of these arrangements had already been made for the passing of our other family members. Thankfully everyone we worked with was kind and compassionate, and even when we weren’t sure of specifics, they had answers to fill in the blanks. We also spent time that week moving out her apartment and started the process of handling her estate. Her brother flew into town and stayed in the guest room of our town house, which we were suddenly grateful we had made time to set up on a previous trip.

We held her funeral on February 15th and she was laid to rest surrounded by a gathering of close family and friends. We had clear, beautiful weather as we gathered graveside to say goodbye. Her obituary can be found here.

There’s still a lot to do to finish handling her affairs and it’s been hard for me, but I’m incredibly thankful for friends, family and colleagues who have been so understanding as we’ve worked through this. We’re very grateful for the time we were able to spend with her. When she was well, we enjoyed countless dinners together and of course she joined us to celebrate at our wedding back in 2013. Even recently over the holidays in spite of her condition it was nice to have some time together. She will be missed.

by pleia2 at February 26, 2017 05:49 PM

February 25, 2017

Elizabeth Krumbach

Moving on from the Ubuntu Weekly Newsletter

Somewhere around 2010 I started getting involved with the Ubuntu News Team. My early involvement was limited to the Ubuntu Fridge team, where I would help post announcements from various teams, including the Ubuntu Community Council that I was part of. With Amber Graner at the helm of the Ubuntu Weekly Newsletter (UWN) I focused my energy elsewhere since I knew how much work the UWN editor position was at the time.

Ubuntu Weekly Newsletter

At the end of 2010 Amber stepped down from the team to pursue other interests, and with no one to fill her giant shoes the team entered a five-month period of no newsletters. Finally in June, after being contacted numerous times about the fate of the newsletter, I worked with Nathan Handler to revive it so we could release issue 220. Our first job was to do an analysis of the newsletter as a whole. What was valuable about the newsletter and what could we do away with to save time? What could we automate? We decided to make some changes to reduce the amount of manual work put into it.

To this end, we stopped including monthly reports inline and started linking to upcoming meeting and event details rather than reproducing them in the newsletter itself. There was also a considerable amount of automation done thanks to Nathan’s work on scripts. No more would we be generating any of the release formats by hand; they’d all be generated with a single command, ready to be cut and pasted. Release time every week went from over two hours to about 20 minutes in the hands of an experienced editor. Our next editor would have considerably less work than those who came before them. From then on I’d say I’ve been pretty heavily involved.

500

The 500th issue lands on February 27th; this is an exceptional milestone for the team and the Ubuntu community. It is deserving of celebration, and we’ve worked behind the scenes to arrange a contest and a simple way for folks to say “thanks” to the team. We’ve also reached out to a handful of major players in the community to tell us what they get from the newsletter.

With the landing of this issue, I will have been involved with over 280 issues over 8 years. Almost every week in that time (I did skip a couple weeks for my honeymoon!) I’ve worked to collect Ubuntu news from around the community and internet, prepare it for our team of summary writers, move content to the wiki for our editors, and spend time on Monday doing the release. Over these years I’ve worked with several great contributors to keep the team going, rewarding contributors with all the thanks I could muster and even a run of UWN stickers specifically made for contributors. I’ve met and worked with some great people during this time, and I’m incredibly proud of what we’ve accomplished over these years and the quality we’ve been able to maintain with article selection and timely releases.

But all good things must come to an end. Several months ago as I was working on finding the next step in my career with a new position, I realized how much my life and the world of open source had changed since I first started working on the newsletter. Today there are considerable demands on my time, and while I hung on to the newsletter, I realized that I was letting other exciting projects and volunteer opportunities pass me by. At the end of October I sent a private email to several of the key contributors letting them know I’d conclude my participation with issue 500. That didn’t quite happen, but I am looking to actively wind down my participation starting with this issue and hope that others in the community can pick up where I’m leaving off.

UWN stickers

I’ll still be around the community, largely focusing my efforts on Xubuntu directly. Folks can reach out to me as they need help moving forward, but the awesome UWN team will need more contributors. Contributors collect news, write summaries and do editing, you can learn more about joining here. If you have questions about contributing, you can join #ubuntu-news on freenode and say hello or drop an email to our team mailing list (public archives).

by pleia2 at February 25, 2017 02:57 AM

February 24, 2017

Akkana Peck

Coder Dojo: Kids Teaching Themselves Programming

We have a terrific new program going on at Los Alamos Makers: a weekly Coder Dojo for kids, 6-7 on Tuesday nights.

Coder Dojo is a worldwide movement, and our local dojo is based on their ideas. Kids work on programming projects to earn colored USB wristbelts, with the requirements for belts getting progressively harder. Volunteer mentors are on hand to help, but we're not lecturing or teaching, just coaching.

Despite not much advertising, word has gotten around and we typically have 5-7 kids on Dojo nights, enough that all the makerspace's Raspberry Pi workstations are filled and we sometimes have to scrounge for more machines for the kids who don't bring their own laptops.

A fun moment early on came when we had a mentor meeting, and Neil, our head organizer (who deserves most of the credit for making this program work so well), looked around and said "One thing that might be good at some point is to get more men involved." Sure enough -- he was the only man in the room! For whatever reason, most of the programmers who have gotten involved have been women. A refreshing change from the usual programming group. (Come to think of it, the PEEC web development team is three women. A girl could get a skewed idea of gender demographics, living here.) The kids who come to program are about 40% girls.

I wondered at the beginning how it would work, with no lectures or formal programs. Would the kids just sit passively, waiting to be spoon fed? How would they get concepts like loops and conditionals and functions without someone actively teaching them?

It wasn't a problem. A few kids have some prior programming practice, and they help the others. Kids as young as 9 with no previous programming experience walk in, sit down at a Raspberry Pi station, and after five minutes of being shown how to bring up a Python console and use Python's turtle graphics module to draw a line and turn a corner, they're happily typing away, experimenting and making Python draw great colorful shapes.

Python-turtle turns out to be a wonderful way for beginners to learn. It's easy to get started, it makes pretty pictures, and yet, since it's Python, it's not just training wheels: kids are using a real programming language from the start, and they can search the web and find lots of helpful examples when they're trying to figure out how to do something new (just like professional programmers do. :-)
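
To give an idea of what those first few minutes look like, here's a minimal sketch of the kind of turtle program a brand-new Dojo member might build up to; the specific colors, angle, and loop count are just illustrative choices, not part of any curriculum:

import turtle

t = turtle.Turtle()
t.speed(0)                  # draw as fast as possible

# A colorful spiral: one loop, a list of colors, and a little arithmetic.
colors = ["red", "orange", "yellow", "green", "blue", "purple"]
for i in range(60):
    t.color(colors[i % len(colors)])
    t.forward(i * 3)        # each segment a bit longer than the last
    t.left(61)              # turn a corner

turtle.done()               # keep the window open until it's closed

Change the angle or the color list and the picture changes completely, which is a big part of the fun.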

Initially we set easy requirements for the first (white) belt: attend for three weeks, learn the names of other Dojo members. We didn't require any actual programming until the second (yellow) belt, which required writing a program with two of three elements: a conditional, a loop, a function.

That plan went out the window at the end of the first evening, when two kids had already fulfilled the yellow belt requirements ... even though they were still two weeks away from the attendance requirement for the white belt. One of them had never programmed before. We've since scrapped the attendance belt, and now the white belt has the conditional/loop/function requirement that used to be the yellow belt.
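
To give a sense of scale, a program that satisfies the conditional/loop/function requirement can be just a few lines of Python. This is only an illustrative sketch, not an actual belt submission:

def classify(number):
    """A function with a conditional inside."""
    if number % 2 == 0:
        return "even"
    else:
        return "odd"

# A loop that uses the function.
for n in range(1, 11):
    print(n, "is", classify(n))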

The program has been going for a bit over three months now. We've awarded lots of white belts and a handful of yellows (three new ones just this week). Although most of the kids are working in Python, there are also several playing music or running LED strips using Arduino/C++, writing games and web pages in Javascript, writing adventure games in Scratch, or just working through Khan Academy lectures.

When someone is ready for a belt, they present their program to everyone in the room and people ask questions about it: what does that line do? Which part of the program does that? How did you figure out that part? Then the mentors review the code over the next week, and they get the belt the following week.

For all but the first belt, helping newer members is a requirement, though I suspect even without that they'd be helping each other. Sit a first-timer next to someone who's typing away at a Python program and watch the magic happen. Sometimes it feels almost superfluous being a mentor. We chat with the kids and each other, work on our own projects, shoulder-surf, and wait for someone to ask for help with harder problems.

Overall, a terrific program, and our only problems now are getting funding for more belts and more workstations as the word spreads and our Dojo nights get more crowded. I've had several adults ask me if there was a comparable program for adults. Maybe some day (I hope).

February 24, 2017 08:46 PM

February 20, 2017

Elizabeth Krumbach

Adventures in Tasmania

Last month I attended my third Linux.conf.au, this time in Hobart, Tasmania; I wrote about the conference here and here. In an effort to be somewhat recovered from jet lag for the conference and to take advantage of the trip to see the sights, I flew in a couple days early.

I arrived in Hobart after a trio of flights on Friday afternoon. It was incredibly windy, so much so that they warned people when deplaning onto the tarmac (no jet ways at the little Hobart airport) to hold tightly on to their belongings. But speaking of the weather for a moment: January is the middle of summer in the southern hemisphere, and I prepare for brutal heat when I visit Australia at this time of year. But Hobart? They were enjoying beautiful, San Francisco-esque weather, sunny and comfortably in the 60s every day. The sun was still brutal, though: the thinner ozone that far south meant I burned after a couple of days in the sun, even after applying strong sunblock.


Beautiful view from my hotel room

On Saturday I didn’t make any solid plans, just in case there was a problem with my flights or I was too tired to go out. I lucked out though, and took the advice of many who suggested I visit Mona – Museum of Old and New Art. In spite of being tired, the museum’s good reviews, plus learning that I could take a ferry directly there and that a nearby brewery featured its beers at the eateries around the museum, encouraged me to go.

I walked to the ferry terminal from the hotel, which was just over a mile with some surprising hills along the way as I took the scenic route along the bay and through some older neighborhoods. I also walked past Salamanca Market, which is set up every Saturday. I passed on the wallaby burritos and made my way to the ferry terminal. There it was quick and easy to buy my ferry and museum tickets.

Ferry rides are one of my favorite things, and the views on this one made the journey to the museum a lot of fun.

The ferry drops you off at a dock specifically for the museum. Since it was nearly noon and I was in need of nourishment, I walked up past the museum and explored the areas around the wine bar. They had little bars set up that opened at noon and allowed you to get a bottle of wine or some beers and enjoy the beautiful weather on chairs and bean bags placed around a large grassy area. On my own for this adventure, I skipped drinking on the grass and went up to enjoy lunch at the wonderful restaurant on site, The Source. I had a couple beers and discovered Tasmanian oysters. Wow. These wouldn’t be the last ones on my trip.

After lunch it was incredibly tempting to spend the entire afternoon snacking on cheese and wine, but I had museum tickets! So it was down to the museum to spend a couple hours exploring.

I’m not the biggest fan of modern art, so a museum mixing old and new art was an interesting choice for me. As I began to walk through the exhibits, I realized that it would have been great to have MJ there with me. He does enjoy newer art, so the museum would have had a little bit for each of us. There were a few modern exhibits that I did enjoy though, including Artifact which I took a video of: “Artifact” at the Museum of Old and New Art, Hobart (warning: strobe lights!).

Outside the museum I also walked around past a vineyard on site, as well as some beautiful outdoor structures. I took a bunch more photos before the ferry took me back to downtown Hobart. More photos from Mona here: https://www.flickr.com/photos/pleia2/albums/72157679331777806

It was late afternoon when I returned to the Salamanca area of Hobart and though the Market was closing down, I was able to take some time to visit a few shops. I picked up a small pen case for my fountain pens made of Tasmanian Huon Pine and a native Sassafras. That evening I spent some time in my room relaxing and getting some writing done before dinner with a couple open source colleagues who had just gotten into town. I turned in early that night to catch up on some sleep I missed during flights.

And then it was Sunday! As fun as the museum adventure was, my number one goal with this trip was actually to pet a new Australian critter. Last year I attended the conference in Geelong, not too far from Melbourne, and did a similar tourist trip. On that trip I got to feed kangaroos, pet a koala and see hundreds of fairy penguins return to their nests from the ocean at dusk. Topping that day wasn’t actually possible, but I wanted to do my best in Tasmania. I booked a private tour with a guide for the Sunday to take me up to the Bonorong Wildlife Sanctuary.

My tour guide was a friendly woman who owns a local tour company with her husband. She was accommodating and similar in age to me, making for a comfortable journey. The tour included a few stops, but started with Bonorong. We had about an hour there to visit the standing exhibits before the pet-wombats tour began. All the enclosures were populated by rescued wildlife that were either being rehabilitated or were too permanently injured for release. I had my first glimpse at Tasmanian devils running around (I’d seen some in Melbourne, but they were all sleeping!). I also got to see a tawny frogmouth, which is a bird that looks a bit like an owl, and the three-legged Randall the echidna, a spiky member of one of the few egg-laying mammal species. I also took some time to commune with kangaroos and wallabies, picking up a handful of food to feed my new, bouncy friends.


Feeding a kangaroo, tiny wombat drinking from a bottle, pair of wombats, Tasmanian devil

And then there were the baby wombats. I saw my first wombat at the Perth Zoo four years ago and was surprised at how big they are. Growing to be a meter in length in Tasmania, wombats are hefty creatures, and I got to pet one! At 11:30 they did a keeper talk and then allowed the folks gathered to give one of the babies (about 9 months old) a quick pat. In a country of animals whose fur is more wiry and wool-like than you might expect (kangaroos, koalas), the baby wombats are surprisingly soft.


Wombat petting mission accomplished.

The keeper talks continued with opportunities to pet a koala and visit some Tasmanian devils, but having already done these things I hit the gift shop for some post cards and then went to the nearby Richmond Village.

More photos from Bonorong Wildlife Sanctuary, Tasmania here: https://www.flickr.com/photos/pleia2/albums/72157679331734466

I enjoyed a meat pie lunch in the cute downtown of Richmond before continuing our journey to visit the oldest continuously operating Catholic church in all of Australia (not just Tasmania!), St John’s, built in 1836. We also got to visit Australia’s oldest bridge, just a tad bit older, built in 1823. The bridge is surrounded by a beautiful park, making for a popular picnic and play area on days like the beautiful one we had while there. On the way back, we stopped at the Wicked Cheese Co. where I got to sample a variety of cheeses and pick up some Whiskey Cheddar to enjoy later in the week. A final stop at Rosny Hill rounded out the tour. It gave some really spectacular views of the bay and across to Hobart; I could see my hotel from there!

Sunday evening I met up with a gaggle of OpenStack friends for some Indian food back in the main shopping district of Hobart.

That wrapped up the real touristy part of my trip, as the week continued with the conference. However, there were still some treats to be enjoyed! I had a whole bunch of Tasmanian cider throughout the week and, as I had promised myself, more oysters! The thing about the oysters in Tasmania is that they’re creamy and they’re big. A mouthful of delicious.

I loved Tasmania, and I hope I can make it back some day. More photos from my trip here: https://www.flickr.com/photos/pleia2/albums/72157677692771201

by pleia2 at February 20, 2017 06:47 PM

February 18, 2017

Akkana Peck

Highlight and remove extraneous whitespace in emacs

I recently got annoyed with all the trailing whitespace I saw in files edited by Windows and Mac users, and in code snippets pasted from sites like StackOverflow. I already had my emacs set up to indent with only spaces:

(setq-default indent-tabs-mode nil)
(setq tabify nil)
and I knew about M-x delete-trailing-whitespace ... but after seeing someone else who had an editor set up to show trailing spaces, and tabs that ought to be spaces, I wanted that too.

To show trailing spaces is easy, but it took me some digging to find a way to control the color emacs used:

;; Highlight trailing whitespace.
(setq-default show-trailing-whitespace t)
(set-face-background 'trailing-whitespace "yellow")

I also wanted to show tabs, since code indented with a mixture of tabs and spaces, especially if it's Python, can cause problems. That was a little harder, but I eventually found it on the EmacsWiki: Show whitespace:

;; Also show tabs.
(defface extra-whitespace-face
  '((t (:background "pale green")))
  "Color for tabs and such.")

(defvar bad-whitespace
  '(("\t" . 'extra-whitespace-face)))

While I was figuring this out, I got some useful advice related to emacs faces on the #emacs IRC channel: if you want to know why something is displayed in a particular color, put the cursor on it and type C-u C-x = (the command what-cursor-position with a prefix argument), which displays lots of information about whatever's under the cursor, including its current face.

Once I had my colors set up, I found that a surprising number of files I'd edited with vim had trailing whitespace. I would have expected vim to be better behaved than that! But it turns out that to eliminate trailing whitespace, you have to program it yourself. For instance, here are some recipes to Remove unwanted spaces automatically with vim.

February 18, 2017 11:41 PM

February 17, 2017

Elizabeth Krumbach

Spark Summit East 2017

“Do you want to go to Boston in February?”

So began my journey to Boston to attend the recent Spark Summit East 2017, joining my colleagues Kim, Jörg and Kapil to participate in the conference and meet attendees at our Mesosphere booth. I’ve only been to a handful of single-technology events over the years, so it was an interesting experience for me.


Selfie with Jörg!

The conference began with a keynote by Matei Zaharia which covered some of the major successes in the Apache Spark world in 2016, from the release of version 2.0 with structured streaming, to the growth in community-driven meetups. As the keynotes continued, two trends came into clear focus:

  1. Increased use of Apache Spark with streaming data
  2. Strong desire to do data processing for artificial intelligence (AI) and machine learning

It was really fascinating to hear about all the AI and machine learning work being done, from companies like Salesforce developing customized products, to genetic data analysis by way of the Hail project that will ultimately improve and save lives. Work is even being done by Intel to improve hardware and open source tooling around deep learning (see their BigDL project on GitHub).

In perhaps my favorite keynote of the conference, we heard from Mike Gualtieri of Forrester, who presented the new “age of the customer” with a look toward very personalized, AI-driven learning about customer behavior, intent and more. He went on to use the term “pragmatic AI” to describe what we’re aiming for: an intelligence that’s good enough to succeed at what it’s put to. However, his main push for this talk was how much opportunity there is in this space. Companies and individuals skilled at processing massive amounts of data, AI, and deep and machine learning can make a real impact in a variety of industries. Video and slides from this keynote are available here.


Mike Gualtieri on types of AI we’re looking at today

I was also impressed by how strong the open source assumption was at this conference. All of these universities, corporations, hardware manufacturers and more are working together to build platforms to do all of this data processing work, and they’re open sourcing them.

While at the event, Jörg gave a talk on Powering Predictive Mapping at Scale with Spark, Kafka, and Elastic Search (slides and videos at that link). In this talk he used DC/OS to give a demo based on NYC cab data.

At the booth the interest in open source was also strong. I’m working on DC/OS in my new role, and the fact that folks could hit the ground running with our open source version, and get help on mailing lists and Slack was in sync with their expectations. We were able to show off demos on our laptops and in spite of only having just over a month at the company under my belt, I was able to answer most of the questions that came my way and learned a lot from my colleagues.


The Mesosphere booth included DC/OS hand warmers!

We had a bit of non-conference fun at the conference as well: Kapil took us out Wednesday night to the L.A. Burdick chocolate shop to get some hot chocolate… on ice. So good. Thursday the city was hit with a major snow storm, dumping 10 inches on us throughout the day as we spent our time inside the conference venue. Flights were cancelled after noon that day, but thankfully I had no trouble getting out on my Friday flight after lunch with my friend Deb, who lives nearby.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157680153926395

by pleia2 at February 17, 2017 10:29 PM

February 15, 2017

Elizabeth Krumbach

Highlights from LCA 2017 in Hobart

Earlier this month I attended my first event while working as a DC/OS Developer Advocate over at Mesosphere. My talk on Listening to the needs of your global open source community was accepted before I joined the company, but this kind of listening is precisely what I need to be doing in this new role, so it fit nicely.

Work also gave me some goodies to bring along! So I was able to hand some out as I chatted with people about my new role, and left piles of stickers and squishy darts on the swag table throughout the week.

The topic of the conference this year was the future of open source. It led to an interesting series of keynotes, ranging from the hopeful and world-changing words from Pia Waugh about how technologists could really make a difference in her talk, Choose Your Own Adventure, Please!, to the Keeping Linux Great talk by Robert M. “r0ml” Lefkowitz that ended up imploring the audience to examine their values around the traditional open source model.

Pia’s keynote was a lot of fun, walking us through human history to demonstrate that our values, tools and assumptions are entirely of our own making, and able to be changed (indeed, they have been!). She asked us to continually challenge our assumptions about the world around us and what we could change. She encouraged thinking beyond our own spaces, like how 3D printers could solve scarcity problems in developing nations or what faster travel would do to transform the world. As a room of open source enthusiasts who make small changes to change the world every day, being the creators and innovators of the world, there’s always more we can do and strive for: curing the illness rather than scratching the itch, for systematic change. I really loved the positive message of this talk; I think a lot of attendees walked out feeling empowered and hopeful. Plus, she had a brilliant human change.log that demonstrated how we as humans have made some significant changes in our assumptions through the millennia.


Pia Waugh’s human change.log

The keynote by Dan Callahan on Wednesday morning on Designing for Failure explored the failure of Mozilla’s Persona project and the things he learned from it. He walked through some key lessons:

  1. Free licenses are not enough, your code can’t be tied to proprietary infrastructure
  2. Bits rot more quickly online; an out-of-date desktop application is usually at much lower risk, and endangers fewer people, than a service running on the web
  3. Complexity limits agency; people need to have the resources, system and time to try out and run your software

He went on to give tips about what to do to prolong project life, including making sure you have metrics and are measuring the right things for your project, explicitly defining your scope so the team doesn’t get spread too thin or try to solve the wrong problems, and ruthlessly opposing complexity, since complexity makes a project harder to maintain and harder for others to get involved in.

Finally, he had some excellent points for how to assist the survival of your users when a project does finally fail:

  1. If you know your project is dead (funding pulled, etc), say so, don’t draw things out
  2. Make sure your users can recover without your involvement (have a way to extract data, give them an escape path infrastructure-wise)
  3. Use standard data formats to minimize the migration harm when organizations have to move on

It was really great hearing these lessons. I know how painful it is to see a project you’ve put a lot of work into die; the ability to not only move on in a healthy way but also bring those lessons to a whole community during a keynote like this was commendable.

Thursday’s keynote by Nadia Eghbal was an interesting one that I haven’t seen a lot of public discussion around, Consider the Maintainer. In it she talked about the work that goes into being a maintainer of a project, which she defined as someone who is doing the work of keeping a project going: looking at the contributions coming in, actively responding to bug reports and handling any other interactions. This is a discussion that came up from time to time on some projects I’ve recently worked on where we were striving to prevent scope creep. How can we manage the needs of our maintainers who are sticking around, with the desire for new contributors to add features that benefit them? It’s a very important question that I was thrilled to see her talk about. To help address this, she proposed a twist on The Four Essential Freedoms of Software as defined by the FSF: The Four Freedoms of Open Source Producers. They were:

  • The freedom to decide who participates in your community
  • The freedom to say no to contributions or requests
  • The freedom to define the priorities and policies of the project
  • The freedom to step down or move on from a project, temporarily or permanently

The speaker dinner was beautiful and delicious, taking us up to Frogmore Creek Winery. There was a radio telescope in the background and the sunset over the vineyard was breathtaking. Plus, great company.

Other talks I went to trended toward fun and community-focused topics. On Monday there was WOOTConf; the entire playlist from the event is here. I caught a nice handful of talks, starting with Human-driven development, where aurynn shaw spoke about some of the toxic behaviors in our technical spaces, primarily how everyone is expected to know everything and how asking questions is not always acceptable. She implored us to work to make asking questions easier and more accepted, and to work toward asking your team questions about what they need.

I learned about a couple websites in a talk by Kate Andrews on Seeing the big picture – using open source images: TinEye Reverse Image Search to help find the source of an image to give credit, and sites like Unsplash where you can find freely licensed photos, in addition to various creative commons searches. Brenda Wallace’s Let’s put wifi in everything was a lot of fun, as she walked through various pieces of inexpensive hardware and open source tooling to build sensors to automate all kinds of little things around the house. I also enjoyed the talk by Kris Howard, Knit One, Compute One, where very strong comparisons were made between computer programming and knitting patterns, and a talk by Grace Nolan on Condensed History of Lock Picking.

For my part, I gave a talk on Listening to the Needs of Your Global Open Source Community. This is similar to the talk I gave at FOSSCON back in August, where I walked through experiences I had in Ubuntu and OpenStack projects, along with in person LUGs and meetups. I had some great questions at the end, and I was excited to learn VM Brasseur was tweeting throughout and created a storify about it! The slides from the talk are available as a PDF here.


Thanks to VM Brasseur for the photo during my talk, source

The day concluded with Rikki Endsley’s Mamas Don’t Let Your Babies Grow Up to Be Rock Star Developers, which I really loved. She talked about the tendency to put “rock star” in job descriptions for developers, but when going through the traits of rock stars these weren’t actually what you want on your team. The call was for more Willie Nelson developers, and we were treated to a quick biography of Willie Nelson. In it she explained how he helped others, was always learning new skills, made himself available to his fans, and would innovate and lead. I also enjoyed that he actively worked to collaborate with a diverse mix of people and groups.

As the conference continued, I learned about the great work being done by Whare Hauora from Brenda Wallace and Amber Craig, and heard from Josh Simmons about building communities outside of major metropolitan areas, where he advocated for multidisciplinary meetups. Allison Randal spoke about the ways that open source accelerates innovation, and Karen Sandler dove into what happens to our software when we die in a presentation punctuated by pictures of baby Tasmanian Devils to cheer us up. I also heard Chris Lamb give us the status of the Reproducible Builds project, and then Hamish Coleman on the work he’s done replacing ThinkPad keyboards and reverse engineering the tooling.

The final day wound down with a talk by VM (Vicky) Brasseur on working inside a company to support open source projects, where she talked about types of communities and the importance of having a solid open source plan, and quickly covered some of the most common pitfalls within companies.

This conference remains one of my favorite open source conferences in the world, and I’m very glad I was able to attend again. It’s great meeting up with all my Australian and New Zealand open source colleagues, along with some of the usual suspects who attend many of the same conferences I do. Huge thanks to the organizers for making it such a great conference.

All the videos from the conference were uploaded very quickly to YouTube and are available here: https://www.youtube.com/user/linuxconfau2017/videos

More photos from the conference at https://www.flickr.com/photos/pleia2/sets/72157679331149816/

by pleia2 at February 15, 2017 01:09 AM

February 13, 2017

Akkana Peck

Emacs: Initializing code files with a template

Part of being a programmer is having an urge to automate repetitive tasks.

Every new HTML file I create should include some boilerplate HTML, like <html><head></head><body></body></html>. Every new Python file I create should start with #!/usr/bin/env python, and most of them should end with an if __name__ == "__main__": clause. I get tired of typing all that, especially the dunderscores and slash-greater-thans.
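
Rendered out, the Python boilerplate amounts to something like this; the empty main() is just an illustrative placeholder, since the template really only needs the shebang line and the __main__ clause:

#!/usr/bin/env python


def main():
    pass        # real code goes here


if __name__ == '__main__':
    main()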

Long ago, I wrote an emacs function called newhtml to insert the boilerplate code:

(defun newhtml ()
  "Insert a template for an empty HTML page"
  (interactive)
  (insert "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n"
          "<html>\n"
          "<head>\n"
          "<title></title>\n"
          "</head>\n\n"
          "<body>\n\n"
          "<h1></h1>\n\n"
          "<p>\n\n"
          "</body>\n"
          "</html>\n")
  (forward-line -11)
  (forward-char 7)
  )

The motion commands at the end move the cursor back to point in between the <title> and </title>, so I'm ready to type the page title. (I should probably have it prompt me, so it can insert the same string in title and h1, which is almost always what I want.)

That has worked for quite a while. But when I decided it was time to write the same function for python:

(defun newpython ()
  "Insert a template for an empty Python script"
  (interactive)
  (insert "#!/usr/bin/env python\n"
          "\n"
          "\n"
          "\n"
          "if __name__ == '__main__':\n"
          "\n"
          )
  (forward-line -4)
  )
... I realized that I wanted to be even lazier than that. Emacs knows what sort of file it's editing -- it switches to html-mode or python-mode as appropriate. Why not have it insert the template automatically?

My first thought was to have emacs run the function upon loading a file. There's a function with-eval-after-load which supposedly can act based on file suffix, so something like (with-eval-after-load ".py" (newpython)) is documented to work. But I found that it was never called, and couldn't find an example that actually worked.

But then I realized that I have mode hooks for all the programming modes anyway, to set up things like indentation preferences. Inserting some text at the end of the mode hook seems perfectly simple:

(add-hook 'python-mode-hook
          (lambda ()
            (electric-indent-local-mode -1)
            (font-lock-add-keywords nil bad-whitespace)
            (if (= (buffer-size) 0)
                (newpython))
            (message "python hook")
            ))

The (= (buffer-size) 0) test ensures this only happens if I open a new file. Obviously I don't want to be auto-inserting code inside existing programs!

HTML mode was a little more complicated. I edit some files, like blog posts, that use HTML formatting, and hence need html-mode, but they aren't standalone HTML files that need the usual HTML template inserted. For blog posts, I use a different file extension, so I can use the elisp string-suffix-p to test for that:

  ;; string-suffix-p is like Python's endswith
  (if (and (= (buffer-size) 0)
           (string-suffix-p ".html" (buffer-file-name)))
      (newhtml) )

I may eventually find other files that don't need the template; if I need to, it's easy to add other tests, like the directory where the new file will live.

A nice timesaver: open a new file and have a template automatically inserted.

February 13, 2017 04:52 PM

February 09, 2017

Jono Bacon

HackerOne Professional, Free for Open Source Projects

For some time now I have been working with HackerOne to help them shape and grow their hacker community. It has been a pleasure working with the team: they are doing great work, have fantastic leadership (including my friend, Mårten Mickos), are seeing consistent growth, and recently closed a $40 million round of funding. It is all systems go.

For those of you unfamiliar with HackerOne, they provide a powerful vulnerability coordination platform and a global community of hackers. Put simply, a company or project (such as Starbucks, Uber, GitHub, the US Army, etc) invite hackers to hack their products/services to find security issues, and HackerOne provides a platform for the submission, coordination, dupe detection, and triage of these issues, and other related functionality.

You can think of HackerOne in two pieces: a powerful platform for managing security vulnerabilities and a global community of hackers who use the platform to make the Internet safer and in many cases, make money. This effectively crowd-sources security using the same “given enough eyeballs, all bugs are shallow” principle from open source: with enough eyeballs, all security issues are shallow too.

HackerOne and Open Source

HackerOne unsurprisingly are big fans of open source. The CEO, Mårten Mickos, has led a number of successful open source companies including MySQL and Eucalyptus. The platform itself is built on top of chunks of open source, and HackerOne is a key participant in the Internet Bug Bounty program that helps to ensure core pieces of technology that power the Internet are kept secure.

One of the goals I have had in my work with HackerOne is to build an even closer bridge between HackerOne and the open source community. I am delighted to share the next iteration of this.

HackerOne for Open Source Projects

While not formally announced yet (this is coming soon), I am pleased to share the availability of HackerOne Community Edition.

Put simply, HackerOne is providing their HackerOne Professional service for free to open source projects.

This provides features such as a security page, vulnerability submission/coordination, duplicate detection, hacker reputation, a comprehensive API, analytics, CVEs, and more.

This not only provides a great platform for open source projects to gather vulnerability reports and manage them, but also opens your project up to thousands of security researchers who can help identify security issues and make your code more secure.

Which projects are eligible?

To be eligible for this free service projects need to meet the following criteria:

  1. Open Source projects – projects in scope must only be Open Source projects that are covered by an OSI license.
  2. Be ready – projects must be active and at least 3 months old (age is defined by shipped releases/code contributions).
  3. Create a policy – you add a SECURITY.md in your project root that provides details for how to submit vulnerabilities (example).
  4. Advertise your program – display a link to your HackerOne profile from either the primary or secondary navigation on your project’s website.
  5. Be active – you maintain an initial response time to new reports of less than a week.

If you meet these criteria and would like to apply, just see the HackerOne Community Edition page and click the button to apply.

Of course, let me know if you have any questions!

The post HackerOne Professional, Free for Open Source Projects appeared first on Jono Bacon.

by Jono Bacon at February 09, 2017 10:20 PM

February 06, 2017

Elizabeth Krumbach

Rogue One and Carrie Fisher

Back in December I wasn’t home in San Francisco very much. Most of my month was spent back east at our townhouse in Philadelphia, and I spent a few days in Salt Lake City for a conference, but the one week I was in town was the week that Rogue One: A Star Wars Story came out! I was traveling to Europe when tickets went on sale, but fortunately for me our local theater swapped most of its screens over to show the film opening night. I was able to snag tickets once I realized they were on sale.

And that’s how I continued my tradition of seeing all the new films (1-3, 7) opening night! MJ and I popped over to the Metreon, just a short walk from home, to see it. For this showing I didn’t do IMAX or 3D or anything fancy, just a modern AMC theater and a late night showing.

The movie was great. They did a really nice job of looping the story in with the past films and preserving the feel of Star Wars for me, which was absent in the prequels that George Lucas made. Clunky technology, the good guys achieving victories in the face of incredible odds and yet, quite a bit of heartbreak. Naturally, I saw it a second time later in the month while staying in Philadelphia for the holidays. It was great the second time too!

My hope is that the quality of the films will remain high while in the hands of Disney, and I’m really looking forward to The Last Jedi coming out at the end of this year.

Alas, the year wasn’t all good for a Star Wars fan like me. Back in August we lost Kenny Baker, the man behind my beloved R2-D2. Then on December 23rd we learned that Carrie Fisher had a heart attack on a flight from London. On December 27th she passed away.

Now, I am typically not one to write about the death of a celebrity on my blog. It’s pretty rare that I’m upset about the death of a celebrity at all. But this was Carrie Fisher. She was not on my radar for passing (only 60!) and she is the actress who played one of my all-time favorite characters, in case it wasn’t obvious from the domain name this blog is on.

The character of Princess Leia impacted my life in many ways, and at age 17 caused me to choose PrincessLeia2 (PrincessLeia was taken), and later pleia2, as my online handle. She was a princess of a mysterious world that was destroyed. She was a strong character who didn’t let people get in her way as she covertly assisted, then openly joined the rebel alliance because of what she believed in. She was also a character who showed considerable kindness and compassion. In the Star Wars universe, and in the 1980s when I was a kid, she was often a shining beacon of what I aspired to. Her reprise of the character, returning as General Leia Organa, in Episode VII brought me to tears. I have a figure of her on my desk.


Halloween 2005, Leia costume!

A character she played aside, she was also a champion of de-stigmatizing mental illness. I have suffered from depression for over 20 years and have worked to treat my condition with over a dozen doctors, from primary care to neurologists and psychiatrists. Still, I haven’t found an effective medication-driven treatment that won’t conflict with my other atypical neurological conditions (migraines and seizures). Her outspokenness on the topic of both mental illness and the difficulty in treating it even when you have access to resources was transformational for me. I had a guilt lifted from me about not being “better” in spite of my access to treatment, and was generally more inclined to tackle the topic of mental illness in public.

Her passing was hard for me.

I was contacted by BBC Radio 5 Live on the day she passed away and interviewed by Chris Warburton for their show that would air the following morning. They reached out to me as a known fan, asking me about what her role as Leia Organa meant to me growing up, her critical view of the celebrity world, and then her work in the space of mental illness. It meant a lot that they reached out to me, but I was also pained by what it brought up; it turns out that the day of her passing was the one day in my life I didn’t feel like talking about her work and legacy.

It’s easier today as I reflect upon her impact. I’m appreciative of the character she brought to life for me. Appreciative of the woman she became and shared in so many memorable, funny and self-deprecating books, which line my shelves. Thank you, Carrie Fisher, for being such an inspiration and an advocate.

by pleia2 at February 06, 2017 08:17 AM

Akkana Peck

Rosy Finches

Los Alamos is having an influx of rare rosy-finches (which apparently are supposed to be hyphenated: they're rosy-finches, not finches that are rosy).

[Rosy-finches] They're normally birds of the snowy high altitudes, like the top of Sandia Crest, and quite unusual in Los Alamos. They're even rarer in White Rock, and although I've been keeping my eyes open I haven't seen any here at home; but a few days ago I was lucky enough to be invited to the home of a birder in town who's been seeing great flocks of rosy-finches at his feeders.

There are four types, of which only three have ever been seen locally, and we saw all three. Most of the flock was brown-capped rosy-finches, with two each of black rosy-finches and gray-capped rosy-finches. The upper bird at right, I believe, is one of the blacks, but it might be a gray-capped. They're a bit hard to tell apart. In any case, pretty birds, sparrow sized with nice head markings and a hint of pink under the wing, and it was fun to get to see them.

[Roadrunner] The local roadrunner also made a brief appearance, and we marveled at the combination of high-altitude snowbirds and a desert bird here at the same place and time. White Rock seems like much better roadrunner territory, and indeed they're sometimes seen here (though not, so far, at my house), but they're just as common up in the forests of Los Alamos. Our host said he only sees them in winter; in spring, just as they start singing, they leave and go somewhere else. How odd!

Speaking of birds and spring, we have a juniper titmouse determinedly singing his ray-gun song, a few house sparrows are singing sporadically, and we're starting to see cranes flying north. They started a few days ago, and I counted several hundred of them today, enjoying the sunny and relatively warm weather as they made their way north. Ironically, just two weeks ago I saw a group of about sixty cranes flying south -- very late migrants, who must have arrived at the Bosque del Apache just in time to see the first northbound migrants leave. "Hey, what's up, we just got here, where ya all going?"

A few more photos: Rosy-finches (and a few other nice birds).

We also have a mule deer buck frequenting our yard, sometimes hanging out in the garden just outside the house to drink from the heated birdbath while everything else is frozen. (We haven't seen him in a few days, with the warmer weather and most of the ice melted.) We know it's the same buck coming back: he's easy to recognize because he's missing a couple of tines on one antler.

The buck is a welcome guest now, but in a month or so when the trees start leafing out I may regret that as I try to find ways of keeping him from stripping all the foliage off my baby apple tree, like some deer did last spring. I'm told it helps to put smelly soap shavings, like Irish Spring, in a bag and hang it from the branches, and deer will avoid the smell. I will try the soap trick but will probably combine it with other measures, like a temporary fence.

February 06, 2017 02:39 AM

January 28, 2017

Nathan Haines

We're looking for Ubuntu 17.04 wallpapers right now!

Ubuntu is a testament to the power of sharing, and we use the default selection of desktop wallpapers in each release as a way to celebrate the larger Free Culture movement. Talented artists across the globe create media and release it under licenses that don't simply allow, but cheerfully encourage sharing and adaptation. This cycle's Free Culture Showcase for Ubuntu 17.04 is now underway!

We're halfway to the next LTS, and we're looking for beautiful wallpaper images that will literally set the backdrop for new users as they use Ubuntu 17.04 every day. Whether on the desktop, phone, or tablet, your photo or illustration can be the first thing Ubuntu users see whenever they are greeted by the ubiquitous Ubuntu welcome screen or access their desktop.

Submissions will be handled via Flickr at the Ubuntu 17.04 Free Culture Showcase - Wallpapers group, and the submission window begins now and ends on March 5th.

More information about the Free Culture Showcase is available on the Ubuntu wiki at https://wiki.ubuntu.com/UbuntuFreeCultureShowcase.

I'm looking forward to seeing the 10 photos and 2 illustrations that will ship on all graphical Ubuntu 17.04-based systems and devices on April 13th!

January 28, 2017 08:08 AM

January 27, 2017

Akkana Peck

Making aliases for broken fonts

A web page I maintain (originally designed by someone else) specifies Times font. On all my Linux systems, Times displays impossibly tiny, at least two sizes smaller than any other font that's ostensibly the same size. So the page is hard to read. I'm forever tempted to get rid of that font specifier, but I have to assume that other people in the organization like the professional look of Times, and that this pathologic smallness of Times and Times New Roman is just a Linux font quirk.

In that case, a better solution is to alias it, so that pages that use Times will choose some larger, more readable font on my system. How to do that was in this excellent, clear post: How To Set Default Fonts and Font Aliases on Linux .

It turned out Times came from the gsfonts package, while Times New Roman came from msttcorefonts:

$ fc-match Times
n021003l.pfb: "Nimbus Roman No9 L" "Regular"
$ dpkg -S n021003l.pfb
gsfonts: /usr/share/fonts/type1/gsfonts/n021003l.pfb
$ fc-match "Times New Roman"
Times_New_Roman.ttf: "Times New Roman" "Normal"
$ dpkg -S Times_New_Roman.ttf
dpkg-query: no path found matching pattern *Times_New_Roman.ttf*
$ locate Times_New_Roman.ttf
/usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf
(dpkg -S doesn't find the file because msttcorefonts is a package that downloads a bunch of common fonts from Microsoft. Debian can't distribute the font files directly due to licensing restrictions.)

Removing gsfonts fonts isn't an option; aside from some documents and web pages possibly not working right (if they specify Times or Times New Roman and don't provide a fallback), removing gsfonts takes gnumeric and abiword with it, and I do occasionally use gnumeric. And I like having the msttcorefonts installed (hey, gotta have Comic Sans! :-) ). So aliasing the font is a better bet.

Following Chuan Ji's page, linked above, I edited ~/.config/fontconfig/fonts.conf (I already had one, specifying fonts for the fantasy and cursive web families), and added these stanzas:

    <match>
        <test name="family"><string>Times New Roman</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>
    <match>
        <test name="family"><string>Times</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>

The page says to log out and back in, but I found that restarting firefox was enough. Now I could load up a page that specified Times or Times New Roman and the text is easily readable.

January 27, 2017 09:47 PM

January 26, 2017

Elizabeth Krumbach

CLSx at LCA 2017

Last week I was in Hobart, Tasmania for LCA 2017. I’ll write broader blog post about the whole event soon, but I wanted to take some time to write this focused post about the CLSx (Community Leadership Summit X) event organized by VM Brasseur. I’d been to the original CLS event at OSCON a couple times, first in 2013 and again in 2015. This was the first time I was attending a satellite event, but with VM Brasseur at the helm and a glance at the community leadership talent in the room I knew we’d have a productive event.


VM Brasseur introduces CLSx

The event began with an introduction to the format and the schedule. As an unconference, CLS event topics are brainstormed and the schedule organized by the attendees. It started with people in the room sharing topics they’d be interested in, and then we worked through the list to combine similar topics and reduce it down to just 9:

  • Non-violent communication for diffusing charged situations
  • Practical strategies for fundraising
  • Rewarding community members
  • Reworking old communities
  • Increasing diversity: multi-factor
  • Recruiting a core
  • Community metrics
  • Community cohesion: retention
  • How to Participate When You Work for a Corporate Vendor

Or, if you’d rather, the whiteboard of topics!

The afternoon was split into four sessions, three of which were used to discuss the topics, with three topics being covered simultaneously by separate groups in each session slot. The final session of the day was reserved for the wrap-up of the event where participants shared summaries of each topic that was discussed.

The first session I participated in was the one I proposed, on Rewarding Community Members. The first question I asked the group was whether we should reward community members at all, just to make sure we were all starting with the same ideas. This quickly transitioned into what counts as a reward: were we talking physical gifts like stickers and t-shirts? Or recognition in the community? Some communities “reward” community members by giving them free or discounted entrance to conferences related to the project, or discounts on services with partners.

Simple recognition of work was a big topic for this session. We spent some time talking about how we welcome community members. Does your community have a mechanism for welcoming, even if it’s automated? Or is there a more personal touch to reaching out? We also covered whether projects have a path to go from new contributor to trusted committer, or the “internal circle” of a project, noting that if that path doesn’t exist, it could be discouraging to new contributors. Gamification was touched upon as a possible way to recognize contributors in a more automated fashion, but it was clear that you want to reward certain positive behaviors and not focus so strictly on statistics that can be cheated without bringing any actual value to the project or community.

What I found most valuable in this session was learning some of the really successful tips for rewards. It was interesting how far the personal touch goes when sending physical rewards to contributors, like including a personalized note along with stickers. It was also clear that metrics are not the full story: in every community the leaders, evangelists and advocates need to be very involved so they can identify contributors in a more qualitative way in order to recognize or reward them. Maybe someone is particularly helpful and friendly, or is making contributions in ways that are not easily tracked by solid metrics. The one warning here was to avoid personal bias: make sure you aren’t being more critical of contributions from minorities in your community, or ignoring folks who don’t boast about their contributions, which happens a lot.

Full notes from Rewarding Contributors; thanks go to Deirdré Straughan for taking notes during the session.

The next session brought me to a gathering to discuss Community Building, Cohesion and Retention. I’ve worked in very large open source communities for over a decade now, and as I embark on my new role at Mesosphere where the DC/OS community is largely driven by paid contributors from a single company today, I’m very much interested in making sure we work to attract more outside contributors.

One of the big topics of this session was the fragmentation of resources across platforms (mailing lists, Facebook, IRC, Slack, etc) and how we have very little control over this. Pulling from my own experience, we saw this in the Xubuntu user community, where people would create unofficial channels on various resources, and so as an outreach team we had to seek these users out and begin engaging with them “where they lived” on those platforms. One of the things I learned from my work here was that we could reduce our own burden by making some of these “unofficial” resources into official resources, thus having an official presence but leaving the folks who were passionate about the platform and community there in control, though we did ask for admin credentials for one person on the Xubuntu team to help with the bus factor.

Some other tips for building cohesion included making sure introductions were done during meetings and in-person gatherings so that newcomers felt welcome, and offering a specific newcomer track so that no one felt like they were the only new person in the room, which can be very isolating. Similarly, making sure there were communication channels available before in-person events could be helpful for getting people comfortable with a community before meeting. One of the interesting proposals was also making sure there was a more official, announce-focused channel for communication so that people who were loosely interested could subscribe to that and not be burdened with an overly chatty communication channel if they’re only interested in important news from the community.

Full notes from Community building, cohesion and retention, with thanks to Josh Simmons for taking notes during this session.


Thanks to VM Brasseur for this photo of our building, cohesion and retention session (source)

The last session of the day I attended was around Community Metrics, and it held particular interest for me as the team I’m on at Mesosphere starts drilling down into community statistics for our young community. One of the early comments in this session was that our teams need to be aware that metrics can help drive value for your team within a company and in the project. You should make sure you’re collecting metrics and that you’re measuring the right things. It’s easy for those of us who are more technically inclined to “geek out” over numbers and statistics, which can lead to gathering too much data and drawing conclusions that may not necessarily be accurate.

Some attendees had found value in surveys of community members, which was interesting for me to learn. I haven’t had great luck with surveys, but it was suggested that making sure people know why they should spend their time replying and sharing information, and how it will be used to improve things, makes them more inclined to participate. It was also suggested to have staggered surveys targeted at specific contributors: perhaps one survey for newcomers, and another targeted at people who have succeeded in becoming a core contributor about the process challenges they’ve faced. Surveys also help gather some of the more qualitative data that is essential for properly tracking the health of a community. It’s not just numbers.

Specifically drilling down into value to the community, the following beyond surveys were found to be helpful:

  • Less focus on individuals and specific metrics in a silo, instead looking at trends and aggregations
  • Visitor count to the web pages on your site and specific blog posts
  • Metrics about community diversity in terms of number of organizations contributing, geographic distribution and human metrics (gender, race, age, etc) since all these types of diversity have proven to be indicators of project and team success.
  • Recruitment numbers linked to contributions, whether it’s how many people your company hires from the community or that companies in general do if the project has many companies involved (recruitment is expensive, you can bring real value here)

The consensus in the group was that it was difficult to correlate metrics like retweets, GitHub stars and other social media metrics to sales, so even though there may be value with regard to branding and excitement about your community, they may not help much to justify the existence of your team within a company. We didn’t talk much about metrics gathering tools, but I was OK with this, since it was nice to get a more general view into what we should be collecting rather than how.

Full notes from Community Metrics, which we can thank Andy Wingo for.

The event concluded with the note-taker from each group giving a five minute summary of what we talked about in each group. This was the only recorded portion of the event, you can watch it on YouTube here: Community Leadership Summit Summary.

Discussion notes from all the sessions can be found here: https://linux.conf.au/wiki/conference/miniconfs/clsx_at_lca/#wiki-toc-group-discussion-notes.

I really got a lot out of this event, and I hope others gained from my experience and perspectives as well. Huge thanks to the organizers and everyone who participated.

by pleia2 at January 26, 2017 02:58 AM

January 24, 2017

Jono Bacon

Endless Code and Mission Hardware Demo

Recently, I have had the pleasure of working with a fantastic company called Endless who are building a range of computers and a Linux-based operating system called Endless OS.

My work with them has primarily involved the community and product development of an initiative in which they are integrating functionality into the operating system that teaches you how to code. This provides a powerful platform where you can learn to code and easily hack on applications on the platform.

If this sounds interesting to you, I created a short video demo where I show off their Mission hardware as well as run through a demo of Endless Code in action. You can see it below:

I would love to hear what you think and how Endless Code can be improved in the comments below.

The post Endless Code and Mission Hardware Demo appeared first on Jono Bacon.

by Jono Bacon at January 24, 2017 12:35 PM

January 23, 2017

Akkana Peck

Testing a GitHub Pull Request

Several times recently I've come across someone with a useful fix to a program on GitHub, for which they'd filed a GitHub pull request.

The problem is that GitHub doesn't give you any link on the pull request to let you download the code in that pull request. You can get a list of the checkins inside it, or a list of the changed files so you can view the differences graphically. But if you want the code on your own computer, so you can test it, or use your own editors and diff tools to inspect it, it's not obvious how. That this is a problem is easily seen with a web search for something like download github pull request -- there are huge numbers of people asking how, and most of the answers are vague and unclear.

That's a shame, because it turns out it's easy to pull a pull request. You can fetch it directly with git into a new branch as long as you have the pull request ID. That's the ID shown on the GitHub pull request page:

[GitHub pull request screenshot]

Once you have the pull request ID, choose a new name for your branch, then fetch it:

git fetch origin pull/PULL-REQUEST_ID/head:NEW-BRANCH-NAME
git checkout NEW-BRANCH-NAME

Then you can view diffs with something like git difftool NEW-BRANCH-NAME..master

Easy! GitHub should give a hint of that on its pull request pages.

Fetching a Pull Request diff to apply it to another tree

But shortly after I learned how to apply a pull request, I had a related but different problem in another project. There was a pull request for an older repository, but the part it applied to had since been split off into a separate project. (It was an old pull request that had fallen through the cracks, and as a new developer on the project, I wanted to see if I could help test it in the new repository.)

You can't pull a pull request that's for a whole different repository. But what you can do is go to the pull request's page on GitHub. There are 3 tabs: Conversation, Commits, and Files changed. Click on Files changed to see the diffs visually.

That works if the changes are small and only affect a few files (which fortunately was the case this time). It's not so great if there are a lot of changes or a lot of files affected. I couldn't find any "Raw" or "download" button that would give me a diff I could actually apply. You can select all and then paste the diffs into a local file, but you have to do that separately for each file affected. It might be, if you have a lot of files, that the best solution is to check out the original repo, apply the pull request, generate a diff locally with git diff, then apply that diff to the new repo. Rather circuitous. But with any luck that situation won't arise very often.

Update: thanks very much to Houz for the solution! (In the comments, below.) Just append .diff or .patch to the pull request URL, e.g. https://github.com/OWNER/REPO/pull/REQUEST-ID.diff which you can view in a browser or fetch with wget or curl.
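A quick, rough recipe for that (OWNER, REPO and REQUEST-ID are placeholders, and git apply --check is a dry run that only reports problems without changing anything):

wget https://github.com/OWNER/REPO/pull/REQUEST-ID.diff
git apply --check REQUEST-ID.diff
git apply REQUEST-ID.diff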

January 23, 2017 09:34 PM

January 19, 2017

Akkana Peck

Plotting Shapes with Python Basemap without Shapefiles

In my article on Plotting election (and other county-level) data with Python Basemap, I used ESRI shapefiles for both states and counties.

But one of the election data files I found, OpenDataSoft's USA 2016 Presidential Election by county, had embedded county shapes, available either as CSV or as GeoJSON. (I used the CSV version, but inside the CSV the geo data are encoded as JSON so you'll need JSON decoding either way. But that's no problem.)

Just about all the documentation I found on coloring shapes in Basemap assumed that the shapes were defined as ESRI shapefiles. How do you draw shapes if you have latitude/longitude data in a more open format?

As it turns out, it's quite easy, but it took a fair amount of poking around inside Basemap to figure out how it worked.

In the loop over counties in the US in the previous article, the end goal was to create a matplotlib Polygon and use that to add a Basemap patch. But matplotlib's Polygon wants map coordinates, not latitude/longitude.

If m is your basemap (i.e. you created the map with m = Basemap( ... )), you can translate coordinates like this:

    (mapx, mapy) = m(longitude, latitude)

So once you have a region as a list of (longitude, latitude) coordinate pairs, you can create a colored, shaped patch like this:

    # Polygon here is matplotlib.patches.Polygon; first convert each
    # [longitude, latitude] pair to map coordinates in place:
    for coord_pair in region:
        coord_pair[0], coord_pair[1] = m(coord_pair[0], coord_pair[1])
    poly = Polygon(region, facecolor=color, edgecolor=color)
    ax.add_patch(poly)

Working with the OpenDataSoft data file was actually a little harder than that, because the list of coordinates was JSON-encoded inside the CSV file, so I had to decode it with json.loads(county["Geo Shape"]). Once decoded, it had some counties as a Polygon (a list of lists, allowing for discontiguous outlines), and others as a MultiPolygon (a list of lists of lists; I'm not sure why, since the Polygon format already allows for discontiguous boundaries).
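The decoding looks roughly like this (a sketch, not the exact code from the script linked below; flattening the MultiPolygons into a plain list of rings is my own shortcut):

    import json

    shape = json.loads(county["Geo Shape"])
    if shape["type"] == "Polygon":
        # Already a list of rings (outlines):
        rings = shape["coordinates"]
    elif shape["type"] == "MultiPolygon":
        # One extra level of nesting: flatten it into a plain list of rings.
        rings = [ring for polygon in shape["coordinates"] for ring in polygon]

Each ring is then a list of [longitude, latitude] pairs that can go through the same m() conversion shown above.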

[Blue-red-purple 2016 election map]

And a few counties were missing, so there were blanks on the map, which show up as white patches in this screenshot. The counties missing data either have inconsistent formatting in their coordinate lists, or they have only one coordinate pair, and they include Washington, Virginia; Roane, Tennessee; Schley, Georgia; Terrell, Georgia; Marshall, Alabama; Williamsburg, Virginia; and Pike, Georgia; plus Oglala Lakota (which is clearly meant to be Oglala, South Dakota), and all of Alaska.

One thing about crunching data files from the internet is that there are always a few special cases you have to code around. And I could have gotten those coordinates from the census shapefiles; but as long as I needed the census shapefile anyway, why use the CSV shapes at all? In this particular case, it makes more sense to use the shapefiles from the Census.

Still, I'm glad to have learned how to use arbitrary coordinates as shapes, freeing me from the proprietary and annoying ESRI shapefile format.

The code: Blue-red map using CSV with embedded county shapes

January 19, 2017 04:36 PM

Nathan Haines

UbuCon Summit at SCALE 15x Call for Papers

UbuCons are a remarkable achievement from the Ubuntu community: a network of conferences across the globe, organized by volunteers passionate about Open Source and about collaborating, contributing, and socializing around Ubuntu. UbuCon Summit at SCALE 15x is the next in the impressive series of conferences.

UbuCon Summit at SCALE 15x takes place in Pasadena, California on March 2nd and 3rd during the first two days of SCALE 15x. Ubuntu will also have a booth at SCALE's expo floor from March 3rd through 5th.

We are putting together the conference schedule and are announcing a call for papers. While we have some amazing speakers and an always-vibrant unconference schedule planned, it is the community, as always, who make UbuCon what it is—just as the community sets Ubuntu apart.

Interested speakers who have Ubuntu-related topics can submit their talk to the SCALE call for papers site. UbuCon Summit has a wide range of both developers and enthusiasts, so any interesting topic is welcome, no matter how casual or technical. The SCALE CFP form is available here:

http://www.socallinuxexpo.org/scale/15x/cfp

Over the next few weeks we’ll be sharing more details about the Summit, revamping the global UbuCon site and updating the SCALE schedule with all relevant information.

http://www.ubucon.org/

About SCaLE:

SCALE 15x, the 15th Annual Southern California Linux Expo, is the largest community-run Linux/FOSS showcase event in North America. It will be held from March 2-5 at the Pasadena Convention Center in Pasadena, California. For more information on the expo, visit https://www.socallinuxexpo.org

January 19, 2017 10:12 AM

January 14, 2017

Akkana Peck

Plotting election (and other county-level) data with Python Basemap

After my arduous search for open 2016 election data by county, as a first test I wanted one of those red-blue-purple charts of how Democratic or Republican each county's vote was.

I used the Basemap package for plotting. It used to be part of matplotlib, but it's been split off into its own toolkit, grouped under mpl_toolkits: on Debian, it's available as python-mpltoolkits.basemap, or you can find Basemap on GitHub.

It's easiest to start with the fillstates.py example that shows how to draw a US map with different states colored differently. You'll need the three shapefiles (because of ESRI's silly shapefile format): st99_d00.dbf, st99_d00.shp and st99_d00.shx, available in the same examples directory.

Of course, to plot counties, you need county shapefiles as well. The US Census has county shapefiles at several different resolutions (I used the 500k version). Then you can plot state and counties outlines like this:

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

def draw_us_map():
    # Set the lower left and upper right limits of the bounding box:
    lllon = -119
    urlon = -64
    lllat = 22.0
    urlat = 50.5
    # and calculate a centerpoint, needed for the projection:
    centerlon = float(lllon + urlon) / 2.0
    centerlat = float(lllat + urlat) / 2.0

    m = Basemap(resolution='i',  # crude, low, intermediate, high, full
                llcrnrlon = lllon, urcrnrlon = urlon,
                lon_0 = centerlon,
                llcrnrlat = lllat, urcrnrlat = urlat,
                lat_0 = centerlat,
                projection='tmerc')

    # Read state boundaries.
    shp_info = m.readshapefile('st99_d00', 'states',
                               drawbounds=True, color='lightgrey')

    # Read county boundaries
    shp_info = m.readshapefile('cb_2015_us_county_500k',
                               'counties',
                               drawbounds=True)

if __name__ == "__main__":
    draw_us_map()
    plt.title('US Counties')
    # Get rid of some of the extraneous whitespace matplotlib loves to use.
    plt.tight_layout(pad=0, w_pad=0, h_pad=0)
    plt.show()
[Simple map of US county borders]

Accessing the state and county data after reading shapefiles

Great. Now that we've plotted all the states and counties, how do we get a list of them, so that when I read out "Santa Clara, CA" from the data I'm trying to plot, I know which map object to color?

After calling readshapefile('st99_d00', 'states'), m has two new members, both lists: m.states and m.states_info.

m.states_info[] is a list of dicts mirroring what was in the shapefile. For the Census state list, the useful keys are NAME, AREA, and PERIMETER. There's also STATE, which is an integer (not restricted to 1 through 50) but I'll get to that.

If you want the shape for, say, California, iterate through m.states_info[] looking for the one where m.states_info[i]["NAME"] == "California". Note the index i: the shape coordinates will be in m.states[i] (in basemap map coordinates, not latitude/longitude).
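As a quick sketch (the helper function here is mine, not something from Basemap):

def state_shape(m, name):
    '''Return the map-coordinate outline for the named state, or None.
       A state with islands has several entries; this returns the first.'''
    for i, info in enumerate(m.states_info):
        if info["NAME"] == name:
            return m.states[i]
    return None

california_outline = state_shape(m, "California")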

Correlating states and counties in Census shapefiles

County data is similar, with county names in m.counties_info[i]["NAME"]. Remember that STATE integer? Each county has a STATEFP, m.counties_info[i]["STATEFP"] that matches some state's m.states_info[i]["STATE"].

But doing that search every time would be slow. So right after calling readshapefile for the states, I make a table of states. Empirically, STATE in the state list goes up to 72. Why 72? Shrug.

    MAXSTATEFP = 73
    states = [None] * MAXSTATEFP
    for state in m.states_info:
        statefp = int(state["STATE"])
        # Many states have multiple entries in m.states (because of islands).
        # Only add it once.
        if not states[statefp]:
            states[statefp] = state["NAME"]

That'll make it easy to look up a county's state name quickly when we're looping through all the counties.

Calculating colors for each county

Time to figure out the colors from the Deleetdk election results CSV file. Reading lines from the CSV file into a dictionary is superficially easy enough:

    fp = open("tidy_data.csv")
    reader = csv.DictReader(fp)

    # Make a dictionary of all "county, state" and their colors.
    county_colors = {}
    for county in reader:
        # What color is this county?
        pop = float(county["votes"])
        blue = float(county["results.clintonh"])/pop
        red = float(county["Total.Population"])/pop
        county_colors["%s, %s" % (county["name"], county["State"])] \
            = (red, 0, blue)

But in practice, that wasn't good enough, because the county names in the Deleetdk data didn't always match the official Census county names.

Fuzzy matches

For instance, the CSV file had no results for Alaska or Puerto Rico, so I had to skip those. Non-ASCII characters were a problem: "Doña Ana" county in the census data was "Dona Ana" in the CSV. I had to strip off " County", " Borough" and similar terms: "St Louis" in the census data was "St. Louis County" in the CSV. Some names were capitalized differently, like PLYMOUTH vs. Plymouth, or Lac Qui Parle vs. Lac qui Parle. And some names were just different, like "Jeff Davis" vs. "Jefferson Davis".
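A rough sketch of that kind of cleanup (the suffix list here is illustrative, not the exact one the final script uses):

def clean_county_name(name):
    '''Normalize a county name for comparison: drop common suffixes,
       stray periods and case differences.'''
    for suffix in (" County", " Borough", " Parish"):
        if name.endswith(suffix):
            name = name[:-len(suffix)]
    return name.replace(".", "").strip().lower()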

To get around that I used SequenceMatcher to look for fuzzy matches when I couldn't find an exact match:

from difflib import SequenceMatcher

def fuzzy_find(s, slist):
    '''Try to find a fuzzy match for s in slist.
    '''
    best_ratio = -1
    best_match = None

    ls = s.lower()
    for ss in slist:
        r = SequenceMatcher(None, ls, ss.lower()).ratio()
        if r > best_ratio:
            best_ratio = r
            best_match = ss
    if best_ratio > .75:
        return best_match
    return None

Correlate the county names from the two datasets

It's finally time to loop through the counties in the map to color and plot them.

Remember STATE vs. STATEFP? It turns out there are a few counties in the census county shapefile with a STATEFP that doesn't match any STATE in the state shapefile. Mostly they're in the Virgin Islands and I don't have election data for them anyway, so I skipped them for now. I also skipped Puerto Rico and Alaska (no results in the election data) and counties that had no corresponding state: I'll omit that code here, but you can see it in the final script, linked at the end.

    for i, county in enumerate(m.counties_info):
        countyname = county["NAME"]
        try:
            statename = states[int(county["STATEFP"])]
        except IndexError:
            print countyname, "has out-of-index statefp of", county["STATEFP"]
            continue

        countystate = "%s, %s" % (countyname, statename)
        try:
            ccolor = county_colors[countystate]
        except KeyError:
            # No exact match; try for a fuzzy match
            fuzzyname = fuzzy_find(countystate, county_colors.keys())
            if fuzzyname:
                ccolor = county_colors[fuzzyname]
                county_colors[countystate] = ccolor
            else:
                print "No match for", countystate
                continue

        countyseg = m.counties[i]
        poly = Polygon(countyseg, facecolor=ccolor)  # edgecolor="white"
        ax.add_patch(poly)

Moving Hawaii

Finally, although the CSV didn't have results for Alaska, it did have Hawaii. To display it, you can move it when creating the patches:

    countyseg = m.counties[i]
    if statename == 'Hawaii':
        countyseg = list(map(lambda (x,y): (x + 5750000, y-1400000), countyseg))
    poly = Polygon(countyseg, facecolor=ccolor)
    ax.add_patch(poly)
The offsets are in map coordinates and are empirical; I fiddled with them until Hawaii showed up at a reasonable place. [Blue-red-purple 2016 election map]

Well, that was a much longer article than I intended. Turns out it takes a fair amount of code to correlate several datasets and turn them into a map. But a lot of the work will be applicable to other datasets.

Full script on GitHub: Blue-red map using Census county shapefile

January 14, 2017 10:10 PM

Elizabeth Krumbach

Holidays in Philadelphia

In December MJ and I spent a couple weeks on the east coast in the new townhouse. It was the first long stay we’ve had there together, and though the holidays limited how much we could get done, particularly when it came to contractors, we did have a whole bunch to do.

First, I continued my quest to go through boxes of things that almost exclusively belonged to MJ’s grandparents: unpacking, cataloging and deciding what pieces stay in Pennsylvania and what we’re sending to California. In the course of this I also had a deadline creeping up on me, as I needed to find the menorah before Hanukkah began on the evening of December 24th. The timing of Hanukkah, landing right alongside Christmas and New Year’s, worked out well for us: MJ had some time off, which made the timing of the visit even more of a no-brainer. Plus, we were able to celebrate the entire eight-night holiday there in Philadelphia rather than breaking it up between there and San Francisco.

The most amusing thing about finding the menorah was that it’s nearly identical to the one we have at home. MJ had mentioned that it was similar when I picked it out, but I had no idea that it was almost identical. Nothing wrong with the familiar, it’s a beautiful menorah.

House-wise MJ got the garage door opener installed and shelves put up in the powder room. With the help of his friend Tim, he also got the coffee table put together and the television mounted over the fireplace on New Years Eve. The TV was up in time to watch some of the NYE midnight broadcasts! We got the mail handling, trash schedule and cleaning sorted out with relatives who will be helping us with that, so the house will be well looked after in our absence.

I put together the vacuum and used it for the first time as I did the first thorough tidying of the house since we’d moved everything in from storage. I got my desk put together in the den, even though it’s still surrounded by boxes and will be until we ship stuff out to California. I was able to finally unpack some things we had actually ordered the last time I was in town but never got to put around the house, like a bunch of trash cans for various rooms and some kitchen goodies from ThinkGeek (Death Star waffle maker! R2-D2 measuring cups!). We also ordered a pair of counter-height chairs for the kitchen and they arrived in time for me to put them together just before we left, so the kitchen is also coming together even though we still need to go shopping for pots and pans.

Family-wise, we did a lot of visiting. On Christmas Eve we went to the nearby Samarkand restaurant, featuring authentic Uzbeki food. It was wonderful. We also did various lunches and dinners. A couple days were also spent going down to the city to visit a relative who is recovering in the hospital.

I didn’t see everyone I wanted to see but we did also get to visit with various friends. I saw my beloved Rogue One: A Star Wars Story a second time and met up with Danita to see Moana, which was great. I’ve now listened to the Moana soundtrack more than a few times. We met up with Crissi and her boyfriend Henry at Grand Lux Cafe in King of Prussia, where we also had a few errands to run and I was able to pick up some mittens at L.L. Bean. New Years Eve was spent with our friends Tim and Colleen, where we ordered pizza and hung aforementioned television. They also brought along some sweet bubbly for us to enjoy at midnight.

We also had lots of our favorite foods! We celebrated together at MJ’s favorite French cuisine inspired Chinese restaurant in Chestnut Hill, CinCin. We visited some of our standard favorites, including The Continental and Mad Mex. Exploring around our new neighborhood, we indulged in some east coast Chinese, made it to a Jewish deli where I got a delicious hoagie, found a sushi place that has an excellent roll list. We also went to Chickie’s and Pete’s crab house a couple of times, which, while being a Philadelphia establishment, I’d never actually been to. We also had a dinner at The Melting Pot, where I was able to try some local beers along with our fondue, and I’m delighted to see how much the microbrewery scene has grown since I moved away. We also hit a few diners during our stay, and enjoyed some eggnog from Wawa, which is some of the best eggnog ever made.

Unfortunately it wasn’t all fun. I’ve been battling a nasty bout of bronchitis for the past couple months. This continued ailment led to a visit to urgent care to get it looked at, and an x-ray to confirm I didn’t have pneumonia. A pile of medication later, my bronchitis lingered, and later in the week I spontaneously developed hives on my neck, which confounded the doctor. In the midst of health woes, I also managed to cut my foot on some broken glass while I was unpacking. It bled a lot, and I was a bit hobbled for a couple days while it healed. Thankfully MJ cleaned it out thoroughly (ouch!) once the bleeding had subsided and it has healed up nicely.

As the trip wound down I found myself missing the cats and eager to get home where I’d begin my new job. Still, it was with a heavy heart that we left our beautiful new vacation home, family and friends on the east coast.

by pleia2 at January 14, 2017 07:32 AM

January 12, 2017

Akkana Peck

Getting Election Data, and Why Open Data is Important

Back in 2012, I got interested in fiddling around with election data as a way to learn about data analysis in Python. So I went searching for results data on the presidential election. And got a surprise: it wasn't available anywhere in the US. After many hours of searching, the only source I ever found was at the UK newspaper, The Guardian.

Surely in 2016, we're better off, right? But when I went looking, I found otherwise. There's still no official source for US election results data; there isn't even a source as reliable as The Guardian this time.

You might think Data.gov would be the place to go for official election results, but no: searching for 2016 election on Data.gov yields nothing remotely useful.

The Federal Election Commission has an election results page, but it only goes up to 2014 and only includes the Senate and House, not presidential elections. Archives.gov has popular vote totals for the 2012 election but not the current one. Maybe in four years, they'll have some data.

After striking out on official US government sites, I searched the web. I found a few sources, none of them even remotely official.

Early on I found Simon Rogers, How to Download County-Level Results Data, which leads to GitHub user tonmcg's County Level Election Results 12-16. It's a comparison of Democratic vs. Republican votes in the 2012 and 2016 elections (I assume that means votes for that party's presidential candidate, though the field names don't make that entirely clear), with no information on third-party candidates.

KidPixo's Presidential Election USA 2016 on GitHub is a little better: the fields make it clear that it's recording votes for Trump and Clinton, but still no third party information. It's also scraped from the New York Times, and it includes the scraping code so you can check it and have some confidence on the source of the data.

Kaggle claims to have election data, but you can't download their datasets or even see what they have without signing up for an account. Ben Hamner has some publicly available Kaggle data on GitHub, but only for the primary. I also found several companies selling election data, and several universities that had datasets available for researchers with accounts at that university.

The most complete dataset I found, and the only open one that included third party candidates, was through OpenDataSoft. Like the other two, this data is scraped from the NYT. It has data for all the minor party candidates as well as the majors, plus lots of demographic data for each county in the lower 48, plus Hawaii, but not the territories, and the election data for all the Alaska counties is missing.

You can get it either from a GitHub repo, Deleetdk's USA.county.data (look in inst/ext/tidy_data.csv). If you want a larger version with geographic shape data included, clicking through several other opendatasoft pages eventually gets you to an export page, USA 2016 Presidential Election by county, where you can download CSV, JSON, GeoJSON and other formats.

The OpenDataSoft data file was pretty useful, though it had gaps (for instance, there's no data for Alaska). I was able to make my own red-blue-purple plot of county voting results (I'll write separately about how to do that with python-basemap), and to play around with statistics.

Implications of the lack of open data

But the point my search really brought home: By the time I finally found a workable dataset, I was so sick of the search, and so relieved to find anything at all, that I'd stopped being picky about where the data came from. I had long since given up on finding anything from a known source, like a government site or even a newspaper, and was just looking for data, any data.

And that's not good. It means that a lot of the people doing statistics on elections are using data from unverified sources, probably copied from someone else who claimed to have scraped it, using unknown code, from some post-election web page that likely no longer exists. Is it accurate? There's no way of knowing.

What if someone wanted to spread fake news and misinformation? There's a hunger for data, particularly on something as important as a US Presidential election. Looking at Google's suggested results and "Searches related to" made it clear that it wasn't just me: there are a lot of people searching for this information and not being able to find it through official sources.

If I were a foreign power wanting to spread disinformation, providing easily available data files -- to fill the gap left by the US Government's refusal to do so -- would be a great way to mislead people. I could put anything I wanted in those files: there's no way of checking them against official results since there are no official results. Just make sure the totals add up to what people expect to see. You could easily set up an official-looking site and put made-up data there, and it would look a lot more real than all the people scraping from the NYT.

If our government -- or newspapers, or anyone else -- really wanted to combat "fake news", they should take open data seriously. They should make datasets for important issues like the presidential election publicly available, as soon as possible after the election -- not four years later when nobody but historians care any more. Without that, we're leaving ourselves open to fake news and fake data.

January 12, 2017 11:41 PM

January 09, 2017

Akkana Peck

Snowy Winter Days, and an Elk Visit

[Snowy view of the Rio Grande from Overlook]

The snowy days here have been so pretty, the snow contrasting with the darkness of the piñons and junipers and the black basalt. The light fluffy crystals sparkle in a rainbow of colors when they catch the sunlight at the right angle, but I've been unable to catch that effect in a photo.

We've had some unusual holiday visitors, too, culminating in this morning's visit from a huge bull elk.

[bull elk in the yard] Dave came down to make coffee and saw the elk in the garden right next to the window. But by the time I saw him, he was farther out in the yard. And my DSLR batteries were dead, so I grabbed the point-and-shoot and got what I could through the window.

Fortunately for my photography the elk wasn't going anywhere in any hurry. He has an injured leg, and was limping badly. He slowly made his way down the hill and into the neighbors' yard. I hope he returns. Even with a limp that bad, an elk that size has no predators in White Rock, so as long as he stays off the nearby San Ildefonso reservation (where hunting is allowed) and manages to find enough food, he should be all right. I'm tempted to buy some hay to leave out for him.

[Sunset light on the Sangre de Cristos] Some of the sunsets have been pretty nice, too.

A few more photos.

January 09, 2017 02:48 AM

January 08, 2017

Akkana Peck

Using virtualenv to replace the broken pip install --user

Python's installation tool, pip, has some problems on Debian.

The obvious way to use pip is as root: sudo pip install packagename. If you hang out in Python groups at all, you'll quickly find that this is strongly frowned upon. It can lead to your pip-installed packages intermingling with the ones installed by Debian's apt-get, possibly causing problems during apt system updates.

The second most obvious way, as you'll see if you read pip's man page, is pip install --user packagename. This installs the package with only user permissions, not root, under a directory called ~/.local. Python automatically checks ~/.local as part of its PYTHONPATH, and you can add ~/.local/bin to your PATH, so this makes everything transparent.
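If your shell doesn't already include it, that PATH addition is one line in your shell startup file (~/.zshrc, ~/.bashrc or whatever your shell uses):

export PATH="$HOME/.local/bin:$PATH"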

Or so I thought until recently, when I discovered that pip install --user ignores system-installed packages when it's calculating its dependencies, so you could end up with a bunch of incompatible versions of packages installed. Plus it takes forever to re-download and re-install dependencies you already had.

Pip has a clear page describing how pip --user is supposed to work, and that isn't what it's doing. So I filed pip bug 4222; but since pip has 687 open bugs filed against it, I'm not terrifically hopeful of that getting fixed any time soon. So I needed a workaround.

Use virtualenv instead of --user

Fortunately, it turned out that pip install works correctly in a virtualenv if you include the --system-site-packages option. I had thought virtualenvs were for testing, but quite a few people on #python said they used virtualenvs all the time, as part of their normal runtime environments. (Maybe due to pip's deficiencies?) I had heard people speak deprecatingly of --user in favor of virtualenvs but was never clear why; maybe this is why.

So, what I needed was to set up a virtualenv that I can keep around all the time and use by default every time I log in. I called it ~/.pythonenv when I created it:

virtualenv --system-site-packages $HOME/.pythonenv

Normally, the next thing you do after creating a virtualenv is to source a script called bin/activate inside the venv. That sets up your PATH, PYTHONPATH and a bunch of other variables so the venv will be used in all the right ways. But activate also changes your prompt, which I didn't want in my normal runtime environment. So I stuck this in my .zlogin file:

VIRTUAL_ENV_DISABLE_PROMPT=1 source $HOME/.pythonenv/bin/activate

Now I'll activate the venv once, when I log in (and once in every xterm window since I set XTerm*loginShell: true in my .Xdefaults). I see my normal prompt, I can use the normal Debian-installed Python packages, and I can install additional PyPI packages with pip install packagename (no --user, no sudo).
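A quick way to check that the venv really is in effect (the package name is just an example):

which python             # should point at ~/.pythonenv/bin/python
pip install termcolor    # lands under ~/.pythonenv -- no sudo, no --user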

January 08, 2017 06:37 PM

January 05, 2017

Elizabeth Krumbach

The Girard Avenue Line

While I was in Philadelphia over the holidays a friend clued me into the fact that one of the historic streetcars (trolleys) on the Girard Avenue Line was decorated for the holidays. This line, SEPTA Route 15, is the last historic trolley line in Philadelphia and I had never ridden it before. This was the perfect opportunity!

I decided that I’d make the whole day about trains, so that morning I hopped on the SEPTA West Trenton Line regional rail, which has a stop near our place north of Philadelphia. After cheesesteak lunch near Jefferson Station, it was on to the Market-Frankfort Line subway/surface train to get up to Girard Station.

My goal for the afternoon was to see and take pictures of the holiday car, number 2336. So, with the friend I dragged along on this crazy adventure, we started waiting. The first couple trolleys weren’t decorated, so we hopped on another to get out of the chilly weather for a bit. Got off that trolley and waited for a few more, in both directions. This was repeated a couple times until we finally got a glimpse of the decorated trolley heading back to Girard Station. Now that it was on our radar, we hopped on the next one and followed that trolley!


The non-decorated, but still lovely, 2335

We caught up with the decorated trolley after the turnaround at the end of the line and got on just after Girard Station. From there we took it all the way to the end of the line in west Philadelphia at 63rd St. There we had to disembark, and I took a few pictures of the outside.

We were able to get on again after the driver took a break, which allowed us take it all the way back.

The car was decorated inside and out, with lights, garland and signs.

At the end the driver asked if we’d just been on it to take a ride. Yep! I came just to see this specific trolley! Since it was getting dark anyway, he was kind enough to turn the outside lights on for me so I could get some pictures.

Since this was my first time riding the line, I was able to make some observations about how these cars differ from the PCCs that run in San Francisco. In the historic fleet of San Francisco streetcars, the 1055 has the same livery as the trolleys that run in Philadelphia today. Most of the PCCs in San Francisco’s fleet actually came from SEPTA in Philadelphia and this one is no exception, originally numbered 2122 while in service there. However, taking a peek inside it’s easy to see that it’s a bit different than the ones that run in Philadelphia today:


Inside the 1055 in San Francisco

The inside of this looks shiny compared to the inside of the one still running in Philadelphia. It’s all metal versus the plastic inside in Philadelphia, and the walls of the car are much thinner in San Francisco. I suspect this is all due to climate control requirements. In San Francisco we don’t really have seasons and the temperature stays pretty comfortable, so while there is a little climate control, it’s nothing compared to what the cars in Philadelphia need in the summer and winter. You can also see a difference from the outside: the entire top of the Philadelphia cars has a raised portion which seems to be climate control, but on the San Francisco cars it’s only a small bit at the center:


Outside the 1055 in San Francisco

Finally, the seats and wheelchair accessibility are different. The seats are all plastic in San Francisco, whereas they have fabric in Philadelphia. The raised platforms themselves and a portable metal platform serve as wheelchair access in San Francisco, whereas Philadelphia has an actual operative lift since there are many street level stops.

To wrap up the trolley adventure, we hopped on a final one to get us to Broad Street where we took the Broad Street Line subway down to dinner at Sazon on Spring Garden Street, where we had a meal that concluded with some of the best hot chocolate I’ve ever had. Perfect to warm us up after spending all afternoon chasing trolleys in Philadelphia December weather.

Dinner finished, I took one last train, the regional rail to head back to the suburbs.

More photos from the trolleys on the Girard Avenue Line here: https://www.flickr.com/photos/pleia2/albums/72157676838141261

by pleia2 at January 05, 2017 08:47 AM

January 04, 2017

Akkana Peck

Firefox "Reader Mode" and NoScript

A couple of days ago I blogged about using Firefox's "Delete Node" to make web pages more readable. In a subsequent Twitter discussion someone pointed out that if the goal is to make a web page's content clearer, Firefox's relatively new "Reader Mode" might be a better way.

I knew about Reader Mode but hadn't used it. It only shows up on some pages, as a little "open book" icon to the right of the URLbar, just left of the Refresh/Stop button. It did show up on the Pogue Yahoo article; but when I clicked it, I just got a big blank page with an icon of a circle with a horizontal dash; no text.

It turns out that to see Reader Mode content while running NoScript, you must explicitly allow JavaScript for about:reader.

There are some reasons it's not automatically whitelisted: see discussions in bug 1158071 and bug 1166455 -- so enable it at your own risk. But it's nice to be able to use Reader Mode, and I'm glad the Twitter discussion spurred me to figure out why it wasn't working.

January 04, 2017 06:37 PM

January 02, 2017

Akkana Peck

Firefox's "Delete Node" eliminates pesky content-hiding banners

It's trendy among web designers today -- the kind who care more about showing ads than about the people reading their pages -- to use fixed banner elements that hide part of the page. In other words, you have a header, some content, and maybe a footer; and when you scroll the content to get to the next page, the header and footer stay in place, meaning that you can only read the few lines sandwiched in between them. But at least you can see the name of the site no matter how far you scroll down in the article! Wouldn't want to forget the site name!

Worse, many of these sites don't scroll properly. If you Page Down, the content moves a full page up, which means that the top of the new page is now hidden under that fixed banner and you have to scroll back up a few lines to continue reading where you left off. David Pogue wrote about that problem recently and it got a lot of play when Slashdot picked it up: These 18 big websites fail the space-bar scrolling test.

It's a little too bad he concentrated on the spacebar. Certainly it's good to point out that hitting the spacebar scrolls down -- I was flabbergasted to read the Slashdot discussion and discover that lots of people didn't already know that, since it's been my most common way of paging since browsers were invented. (Shift-space does a Page Up.) But the Slashdot discussion then veered off into a chorus of "I've never used the spacebar to scroll so why should anyone else care?", when the issue has nothing to do with the spacebar: the issue is that Page Down doesn't work right, whichever key you use to trigger that page down.

But never mind that. Fixed headers that don't scroll are bad even if the content scrolls the right amount, because it wastes precious vertical screen space on useless cruft you don't need. And I'm here to tell you that you can get rid of those annoying fixed headers, at least in Firefox.

[Article with intrusive Yahoo headers]

Let's take Pogue's article itself, since Yahoo is a perfect example of annoying content that covers the page and doesn't go away. First there's that enormous header -- the bottom row of menus ("Tech Home" and so forth) disappear once you scroll, but the rest stay there forever. Worse, there's that annoying popup on the bottom right ("Privacy | Terms" etc.) which blocks content, and although Yahoo! scrolls the right amount to account for the header, it doesn't account for that privacy bar, which continues to block most of the last line of every page.

The first step is to call up the DOM Inspector. Right-click on the thing you want to get rid of and choose Inspect Element:

[Right-click menu with Inspect Element]


That brings up the DOM Inspector window, which looks like this (click on the image for a full-sized view):

[DOM Inspector]

The upper left area shows the hierarchical structure of the web page.

Don't Panic! You don't have to know HTML or understand any of this for this technique to work.

Hover your mouse over the items in the hierarchy. Notice that as you hover, different parts of the web page are highlighted in translucent blue.

Generally, whatever element you started on will be a small part of the header you're trying to eliminate. Move up one line, to the element's parent; you may see that a bigger part of the header is highlighted. Move up again, and keep moving up, one line at a time, until the whole header is highlighted, as in the screenshot. There's also a dark grey window telling you something about the HTML, if you're interested; if you're not, don't worry about it.

Eventually you'll move up too far, and some other part of the page, or the whole page, will be highlighted. You need to find the element that makes the whole header blue, but nothing else.

Once you've found that element, right-click on it to get a context menu, and look for Delete Node (near the bottom of the menu). Clicking on that will delete the header from the page.

Repeat for any other part of the page you want to remove, like that annoying bar at the bottom right. And you're left with a nice, readable page, which will scroll properly and let you read every line, and will show you more text per page so you don't have to scroll as often.

[Article with intrusive Yahoo headers]

It's a useful trick. You can also use Inspect/Delete Node for many of those popups that cover the screen telling you "subscribe to our content!" It's especially handy if you like to browse with NoScript, so you can't dismiss those popups by clicking on an X. So happy reading!

Addendum on Spacebars

By the way, in case you weren't aware that the spacebar did a page down, here's another tip that might come in useful: the spacebar also advances to the next slide in just about every presentation program, from PowerPoint to Libre Office to most PDF viewers. I'm amazed at how often I've seen presenters squinting with a flashlight at the keyboard trying to find the right-arrow or down-arrow or page-down or whatever key they're looking for. These are all ways of advancing to the next slide, but they're all much harder to find than that great big spacebar at the bottom of the keyboard.

January 02, 2017 11:23 PM

Elizabeth Krumbach

The adventures of 2016

2016 was filled with professional successes and exciting adventures, but also various personal struggles. I exhausted myself finishing two books, navigated some complicated parts of my marriage, experienced my whole team getting laid off from a job we loved, handled an uptick in migraines and a continuing bout of bronchitis, and am still coming to terms with the recent loss.

It’s been difficult to maintain perspective, but it actually was an incredible year. I succeeded in having two books come out, my travels took me to some new, amazing places, we bought a vacation house, and all my blood work shows that I’m healthier than I was at this time last year.


Lots more running in 2016 led to a healthier me!

Some of the tough stuff has even been good. I have succeeded in strengthening bonds with my husband and several people in my life who I care about. I’ve worked hard to worry less and enjoy time with friends and family, which may explain why this year ended up being the one of the group selfie. I paused to capture happy moments with my loved ones a lot more often.

So without further ado, the more quantitative year roundup!

The 9th edition of the The Official Ubuntu Book came out in July. This is the second edition I’ve been part of preparing. The book has updates to bring us up to the 16.04 release and features a whole new chapter covering “Ubuntu, Convergence, and Devices of the Future” which I was really thrilled about adding. My work with Matthew Helmke and José Antonio Rey was also very enjoyable. I wrote about the release here.

I also finished the first book I was the lead author on, Common OpenStack Deployments. Writing a book takes a considerable amount of time and effort, I spent many long nights and weekends testing and tweaking configurations largely written by my contributing author, Matt Fischer, writing copy for the book and integrating feedback from our excellent fleet of reviewers and other contributors. In the end, we released a book that takes the reader from knowing nothing about OpenStack to doing sample deployments using the same Puppet-driven tooling that enterprises use in their environments. The book came out in September, I wrote about it on my own blog here and maintain a blog about the book at DeploymentsBook.com.


Book adventures at the Ocata OpenStack Summit in Barcelona! Thanks to Nithya Ruff for taking a picture of me with my book at the Women of OpenStack area of the expo hall (source) and Brent Haley for getting the picture of Lisa-Marie and I (source).

This year also brought a new investment to our lives: we bought a vacation home in Pennsylvania! It’s a new construction townhouse, so we spent a fair amount of time on the east coast the second half of this year searching for a place, picking out the details and closing. We then spent the winter holidays here, spending a full two weeks away from home to really settle in. I wrote more about our new place here.

I keep saying I won’t travel as much, but 2016 turned out to have more travel than ever, taking over 100,000 miles of flights again.


Feeding a kangaroo, just outside of Melbourne, Australia

At the Jain Temple in Mumbai, India

We had lots of beers in Germany! Photo in the center by Chris Hoge (source)

Barcelona is now one of my favorite places, and its Sagrada Familia Basilica was breathtaking

Most of these conferences and events had a speaking component for me, but I also did a fair number of local talks and at some conferences I spoke more than once. The following is a rundown of all these talks I did in 2016, along with slides.


Photo by Masayuki Igawa (source) from Linux Conf AU in Geelong

Photo by Johanna Koester (source) from my keynote at the Ocata OpenStack Summit

MJ and I have also continued to enjoy our beloved home city of San Francisco, both with just the two of us and with various friends and family. We saw a couple Giants baseball games, along with one of the Sharks playoff games! Sampled a variety of local drinks and foods, visited lots of local animals and took in some amazing local sights. We went to the San Francisco Symphony for the first time, enjoyed a wonderful time together over Labor Day weekend and I’ve skipped out at times to visit museum exhibits and the zoo.


Dinner at Luce in San Francisco, celebrating MJ’s new job

This year I also geeked out over trains – in four states and five countries! In May MJ and I traveled to Maine to spend some time with family, and a couple days of that trip were spent visiting the Seashore Trolley Museum in Kennebunkport and the Narrow Gauge Railroad Museum in Portland, I wrote about it here. I also enjoyed MUNI Heritage Weekend with my friend Mark at the end of September, where we got to see some of the special street cars and ride several vintage buses, read about that here. I also went up to New York City to finally visit the famous New York Transit Museum in Brooklyn and accompanying holiday exhibit at the Central Station with my friend David, details here. In Philadelphia I enjoyed the entire Girard Street line (15) which is populated by historic PCC streetcars (trolleys), including one decorated for the holidays, I have a pile of pictures here. I also got a glimpse of a car on the historic streetcar/trolley line in Melbourne and my buddy Devdas convinced me to take a train in Mumbai, and I visited the amazing Chhatrapati Shivaji Terminus there too. MJ also helped me plan some train adventures in the Netherlands and Germany as I traveled from airports for events.


From the Seashore Trolley Museum barn

As I enter into 2017 I’m thrilled to report that I’ll be starting a new job. Travel continues as I have trips to Australia and Los Angeles already on my schedule. I’ll also be spending time getting settled back into my life on the west coast, as I have spent 75% of my time these past couple months elsewhere.

by pleia2 at January 02, 2017 03:19 PM

December 27, 2016

Elizabeth Krumbach

OpenStack Days Mountain West 2016

A couple weeks ago I attended my last conference of the year, OpenStack Days Mountain West. After much flight shuffling following a seriously delayed flight, I arrived late on the evening prior to the conference with plenty of time to get settled in and feel refreshed for the conference in the morning.

The event kicked off with a keynote from OpenStack Foundation COO Mark Collier who spoke on the growth and success of OpenStack. His talk strongly echoed topics he touched upon at the recent OpenStack Summit back in October as he cited several major companies who are successfully using OpenStack in massive, production deployments including Walmart, AT&T and China Mobile. In keeping with the “future” theme of the conference he also talked about organizations who are already pushing the future potential of OpenStack by betting on the technology for projects that will easily exceed the capacity of what OpenStack can handle today.

Also that morning, Lisa-Marie Namphy moderated a panel on the future of OpenStack with John Dickinson, K Rain Leander, Bruce Mathews and Robert Starmer. She dove right in with the tough questions by having panelists speculate as to why the three major cloud providers don’t run OpenStack. There was also discussion about who the actual users of OpenStack were (consensus was: infrastructure operators), which got into the question of whether app developers were OpenStack users today (perhaps not, app developers don’t want a full Linux environment, they want a place for their app to live). They also discussed the expansion of other languages beyond Python in the project.

That afternoon I saw a talk by Mike Wilson of Mirantis on “OpenStack in the post Moore’s Law World” where he reflected on the current status of Moore’s Law and how it relates to cloud technologies, and the projects that are part of OpenStack. He talked about how the major cloud players outside of OpenStack are helping drive innovation for their own platforms by working directly with chip manufacturers to create hardware specifically tuned to their needs. There’s a question of whether anyone in the OpenStack community is doing similar, and it seems that perhaps they should so that OpenStack can have a competitive edge.

My talk was next, speaking on “The OpenStack Project Continuous Integration System” where I gave a tour of our CI system and explained how we’ve been tracking project growth and steps we’ve taken with regard to scaling it to handle it going into the future. Slides from the talk are available here (PDF). At the end of my talk I gave away several copies of Common OpenStack Deployments which I also took the chance to sign. I’m delighted that one of the copies will be going to the San Diego OpenStack Meetup and another to one right there in Salt Lake City.

Later I attended Christopher Aedo’s “Transforming Organizations with OpenStack” where he walked the audience through hands-on training his team did about the OpenStack project’s development process and tooling for IBM teams around the world. The lessons learned from working with these teams and getting them to love open processes once they could explain them in person were inspiring. Tassoula Kokkoris wrote a great summary of the talk here: Collaborative Culture Spotlight: OpenStack Days Mountain West. I rounded off the day by going to David Medberry’s “Private Cloud Cattle and Pet Wrangling” talk where he drew experience from the private cloud at Charter Communications to discuss the move from treating servers like pets to treating them like cattle and how that works in a large organization with departments that have varying needs.

The next day began with a talk by OpenStack veteran, and now VP of Solutions at SUSE, Joseph George. He gave a talk on the state of OpenStack, with a strong message about staying on the path we set forth, which he compared to his own personal transformation to lose a significant amount of weight. In this talk, he outlined three main points that we must keep in mind in order to succeed:

  1. Clarity on the Goal and the Motivation
  2. Staying Focused During the “Middle” of the Journey
  3. Constantly Learning and Adapting

He wrote a more extensive blog post about it here which fleshes out how each of these related to himself and how they map to OpenStack: OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation.

The next talk was a fun one from Lisa-Marie Namphy and Monty Taylor with the theme of being a naughty or nice list for the OpenStack community. They walked through various decisions, aspects of the project, and more to paint a picture of where the successes and pain points of the project are. They did a great job, managing to pull it off with humor, wit, and charm, all while also being actually informative. The morning concluded with a panel titled “OpenStack: Preferred Platform For PaaS Solutions” which had some interesting views. The panelists brought their expertise to the table to discuss what developers seeking to write to a platform wanted, and where OpenStack was weak and strong. It certainly seems to me that OpenStack is strongest as IaaS rather than PaaS, and it makes sense for OpenStack to continue focusing on being what they’ve called an “integration engine” to tie components together rather than focus on writing a PaaS solution directly. There was some talk about this on the panel, where some stressed that they did want to see OpenStack hooking into existing PaaS software offerings.


Great photo of Lisa and Monty by Gary Kevorkian, source

Lunch followed the morning talks, and I haven’t mentioned it, but the food at this event was quite good. In fact, I’d go as far as to say it was some of the best conference-supplied meals I’ve had. Nice job, folks!

Huge thanks to the OpenStack Days Mountain West crew for putting on the event. Lots of great talks and I enjoyed connecting with folks I knew, as well as meeting members of the community who haven’t managed to make it to one of the global events I’ve attended. It’s inspiring to meet with such passionate members of local groups like I found there.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157676117696131

by pleia2 at December 27, 2016 03:02 PM

December 25, 2016

Akkana Peck

Photographing Farolitos (and other night scenes)

Excellent Xmas to all! We're having a white Xmas here.

Dave and I have been discussing how "Merry Christmas" isn't alliterative like "Happy Holidays". We had trouble coming up with a good C or K adjective to go with Christmas, but then we hit on the answer: Have an Excellent Xmas! It also has the advantage of inclusivity: not everyone celebrates the birth of Christ, but Xmas is a secular holiday of lights, family and gifts, open to people of all belief systems.

Meanwhile: I spent a couple of nights recently learning how to photograph Xmas lights and farolitos.

Farolitos, a New Mexico Christmas tradition, are paper bags, weighted down with sand, with a candle inside. Sounds modest, but put a row of them alongside a roadway or along the top of a typical New Mexican adobe or faux-dobe and you have a beautiful display of lights.

They're also known as luminarias in southern New Mexico, but Northern New Mexicans insist that a luminaria is a bonfire, and the little paper bag lanterns should be called farolitos. They're pretty, whatever you call them.

Locally, residents of several streets in Los Alamos and White Rock set out farolitos along their roadsides for a few nights around Christmas, and the county cooperates by turning off streetlights on those streets. The display on Los Pueblos in Los Alamos is a zoo, a slow exhaust-choked parade of cars that reminds me of the Griffith Park light show in LA. But here in White Rock the farolito displays are a lot less crowded, and this year I wanted to try photographing them.

Canon bugs affecting night photography

I have a little past experience with night photography. I went through a brief astrophotography phase in my teens (in the pre-digital phase, so I was using film and occasionally glass plates). But I haven't done much night photography for years.

That's partly because I've had problems taking night shots with my current digital SLR camera, a Rebel Xsi (known outside the US as a Canon 450d). It's old and modest as DSLRs go, but I've resisted upgrading since I don't really need more features.

Except maybe when it comes to night photography. I've tried shooting star trails, lightning shots and other nocturnal time exposures, and keep hitting a snag: the camera refuses to take a photo. I'll be in Manual mode, with my aperture and shutter speed set, with the lens in Manual Focus mode with Image Stabilization turned off. Plug in the remote shutter release, push the button ... and nothing happens except a lot of motorized lens whirring noises. Which shouldn't be happening -- in MF and non-IS mode the lens should be just sitting there inert, not whirring its motors. I couldn't seem to find a way to convince it that the MF switch meant that, yes, I wanted to focus manually.

It seemed to be primarily a problem with the EF-S 18-55mm kit lens; the camera will usually condescend to take a night photo with my other two lenses. I wondered if the MF switch might be broken, but then I noticed that in some modes the camera explicitly told me I was in manual focus mode.

I was almost to the point of ordering another lens just for night shots when I finally hit upon the right search terms and found, if not the reason it's happening, at least an excellent workaround.

Back Button Focus

I'm so sad that I went so many years without knowing about Back Button Focus. It's well hidden in the menus, under Custom Functions #10.

Normally, the shutter button does a bunch of things. When you press it halfway, the camera both autofocuses (sadly, even in manual focus mode) and calculates exposure settings.

But there's a custom function that lets you separate the focus and exposure calculations. In the Custom Functions menu option #10 (the number and exact text will be different on different Canon models, but apparently most or all Canon DSLRs have this somewhere), the heading says: Shutter/AE Lock Button. Following that is a list of four obscure-looking options:

  • AF/AE lock
  • AE lock/AF
  • AF/AF lock, no AE lock
  • AE/AF, no AE lock

The text before the slash indicates what the shutter button, pressed halfway, will do in that mode; the text after the slash is what happens when you press the * or AE lock button on the upper right of the camera back (the same button you use to zoom out when reviewing pictures on the LCD screen).

The first option is the default: press the shutter button halfway to activate autofocus; the AE lock button calculates and locks exposure settings.

The second option is the revelation: pressing the shutter button halfway will calculate exposure settings, but does nothing for focus. To focus, press the * or AE button, after which focus will be locked. Pressing the shutter button won't refocus. This mode is called "Back button focus" all over the web, but not in the manual.

Back button focus is useful in all sorts of cases. For instance, if you want to autofocus once then keep the same focus for subsequent shots, it gives you a way of doing that. It also solves my night focus problem: even with the bug (whether it's in the lens or the camera) that the lens tries to autofocus even in manual focus mode, in this mode, pressing the shutter won't trigger that. The camera assumes it's in focus and goes ahead and takes the picture.

Incidentally, the other two modes in that menu apply to AI SERVO mode when you're letting the focus change constantly as it follows a moving subject. The third mode makes the * button lock focus and stop adjusting it; the fourth lets you toggle focus-adjusting on and off.

Live View Focusing

There's one other thing that's crucial for night shots: live view focusing. Since you can't use autofocus in low light, you have to do the focusing yourself. But most DSLRs' focusing screens aren't good enough that you can look through the viewfinder and get a reliable focus on a star or even a string of holiday lights or farolitos.

Instead, press the SET button (the one in the middle of the right/left/up/down buttons) to activate Live View (you may have to enable it in the menus first). The mirror locks up and a preview of what the camera is seeing appears on the LCD. Use the zoom button (the one to the right of that */AE lock button) to zoom in; there are two levels of zoom in addition to the un-zoomed view. You can use the right/left/up/down buttons to control which part of the field the zoomed view will show. Zoom all the way in (two clicks of the + button) to fine-tune your manual focus. Press SET again to exit live view.

It's not as good as a fine-grained focusing screen, but at least it gets you close. Consider using relatively small apertures, like f/8, since that will give you more latitude for focus errors. You'll be doing time exposures on a tripod anyway, so a narrow aperture just means your exposures have to be a little longer than they otherwise would have been.

After all that, my Xmas Eve farolitos photos turned out mediocre. We had a storm blowing in, so a lot of the candles had blown out. (In the photo below you can see how the light string on the left is blurred, because the tree was blowing around so much during the 30-second exposure.) But I had fun, and maybe I'll go out and try again tonight.


An excellent X-mas to you all!

December 25, 2016 07:30 PM

Elizabeth Krumbach

The Temples and Dinosaurs of SLC

A few weeks ago I was in Salt Lake City for my last conference of the year. I was only there for a couple days, but I had some flexibility in my schedule. I was able to see most of the conference and still make time to sneak out to see some sights before my flight home at the conclusion of the conference.

The conference was located right near Temple Square. In spite of a couple flurries here and there, and the accompanying cold, I made time to visit during lunch on the first day of the conference. This square is where the most famous temple of The Church of Jesus Christ of Latter-day Saints resides, the Salt Lake Temple. Since I’d never been to Salt Lake City before, this landmark was the most obvious one to visit, and they had decorated it for Christmas.

While I don’t share their faith, it was worthy of my time. The temple is beautiful, everyone I met was welcoming and friendly, and there is important historical significance to the story of that church.

The really enjoyable time was that evening though. After some time at The Beer Hive I went for a walk with a couple colleagues through the square again, but this time all lit up with the Christmas lights! The lights were everywhere and spectacular.

And I’m sure regardless of the season, the temple itself at night is a sight to behold.

More photos from Temple Square here: https://www.flickr.com/photos/pleia2/albums/72157677633463925

The conference continued the next day and I departed in the afternoon to visit the Natural History Museum of Utah. Utah is a big deal when it comes to fossil hunting in the US, so I was eager to visit their dinosaur fossil exhibit. In addition to a variety of crafted scenes, it also features the “world’s largest display of horned dinosaur skulls” (source).

Unfortunately upon arrival I learned that the museum was without power. They were waving people in, but explained that there was only emergency lighting and some of the sections of the museum were completely closed. I sadly missed out on their very cool looking exhibit on poisons, and it was tricky seeing some of the areas that were open with so little light.

But the dinosaurs.

Have you ever seen dinosaur fossils under just emergency lighting? They were considerably more impactful and scary this way. Big fan.

I really enjoyed some of the shadows cast by their horned dinosaur skulls.

More photos from the museum here: https://www.flickr.com/photos/pleia2/sets/72157673744906273/

There should totally be an event where the fossils are showcased in this way in a planned manner. Alas, since this was unplanned, the staff decided in the late afternoon to close the museum early. This sent me on my way much earlier than I’d hoped. Still, I was glad I got to spend some time with the dinosaurs and hadn’t wasted much time elsewhere in the museum. If I’m ever in Salt Lake City again I would like to go back, though; it was tricky to read the signs in such low light and I would like to have the experience as it was intended. Besides, I’ll rarely pass up the opportunity to see a good dinosaur exhibit. I haven’t been to the Salt Lake City Zoo yet; if it had been warmer I might have considered it – next time!

With that, my trip to Salt Lake City pretty much concluded. I made my way to the airport to head home that evening. This trip rounded almost a full month of being away from home, so I was particularly eager to get home and spend some time with MJ and the kitties.

by pleia2 at December 25, 2016 04:32 PM

December 22, 2016

Akkana Peck

Tips on Developing Python Projects for PyPI

I wrote two recent articles on Python packaging: Distributing Python Packages Part I: Creating a Python Package and Distributing Python Packages Part II: Submitting to PyPI. I was able to get a couple of my programs packaged and submitted.

Ongoing Development and Testing

But then I realized all was not quite right. I could install new releases of my package -- but I couldn't run it from the source directory any more. How could I test changes without needing to rebuild the package for every little change I made?

Fortunately, it turned out to be fairly easy. Set PYTHONPATH to a directory that includes all the modules you normally want to test. For example, inside my bin directory I have a python directory where I can symlink any development modules I might need:

mkdir ~/bin/python
ln -s ~/src/metapho/metapho ~/bin/python/

Then add the directory at the beginning of PYTHONPATH:

export PYTHONPATH=$HOME/bin/python

With that, I could test from the development directory again, without needing to rebuild and install a package every time.
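A quick sanity check (just a habit of mine, using the metapho symlink above as the example) is to ask Python where it's finding the module; it should print a path under ~/bin/python, i.e. the symlinked development copy, rather than a system or site-packages location:

python -c 'import metapho; print(metapho.__file__)'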

Cleaning up files used in building

Building a package leaves some extra files and directories around, and git status will whine at you since they're not version controlled. Of course, you could gitignore them, but it's better to clean them up after you no longer need them.

To do that, you can add a clean command to setup.py.

import os
from setuptools import Command

class CleanCommand(Command):
    """Custom clean command to tidy up the project root."""
    user_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        # Remove build artifacts and other generated files:
        os.system('rm -vrf ./build ./dist ./*.pyc ./*.tgz ./*.egg-info ./docs/sphinxdoc/_build')
(Obviously, that includes file types beyond what you need for just cleaning up after package building. Adjust the list as needed.)

Then in the setup() function, add these lines:

      cmdclass={
          'clean': CleanCommand,
      }

Now you can type

python setup.py clean
and it will remove all the extra files.

Keeping version strings in sync

It's so easy to update the __version__ string in your module and forget that you also have to do it in setup.py, or vice versa. Much better to make sure they're always in sync.

I found several versions of that using system("grep..."), but I decided to write my own that doesn't depend on system(). (Yes, I should do the same thing with that CleanCommand, I know.)

def get_version():
    '''Read the pytopo module version from pytopo/__init__.py,
       expecting a line like: __version__ = "1.6"
    '''
    with open("pytopo/__init__.py") as fp:
        for line in fp:
            line = line.strip()
            if line.startswith("__version__"):
                parts = line.split("=")
                if len(parts) > 1:
                    # Strip whitespace and any surrounding quotes:
                    return parts[1].strip().strip('"\'')

Then in setup():

      version=get_version(),

Much better! Now you only have to update __version__ inside your module and setup.py will automatically use it.

Using your README for a package long description

setup() has a long_description argument for describing the package, but you probably already have some sort of README in your package. You can use it for your long description this way:

# Utility function to read the README file.
# Used for the long_description.
def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname)).read()

Then in setup():

      long_description=read('README'),

December 22, 2016 05:15 PM

Jono Bacon

Recommendations Requested: Building a Smart Home

Early next year Erica, the scamp, and I are likely to be moving house. As part of the move we would both love to turn this house into a smart home.

Now, when I say “smart home”, I don’t mean this:

Dogogram. It is the future.

We don’t need any holographic dogs. We are however interested in having cameras, lights, audio, screens, and other elements in the house connected and controlled in different ways. I really like the idea of the house being naturally responsive to us in different scenarios.

In other houses I have seen people with custom lighting patterns (e.g. work, party, romantic dinner), sensors on gates that trigger alarms/notifications, audio that follows you around the house, notifications on visible screens, and other features.

Obviously we will want all of this to be (a) secure, (b) reliable, and (c) simple to use. While we want a smart home, I don’t particularly want to have to learn a million details to set it up.

Can you help?

So, this is what we would like to explore.

Now, I would love to ask you folks two questions:

  1. What kind of smart-home functionality and features have you implemented in your house (in other words, what neat things can you do?)
  2. What hardware and software do you recommend for rigging a home up as a smart home? I would ideally like to keep re-wiring to a minimum. Assume I have nothing already, so recommendations for cameras, light-switches, hubs, and anything else are much appreciated.

If you have something you would like to share, please plonk it into the comment box below. Thanks!

The post Recommendations Requested: Building a Smart Home appeared first on Jono Bacon.

by Jono Bacon at December 22, 2016 04:00 PM

December 19, 2016

Jono Bacon

Building Better Teams With Asynchronous Workflow

One of the core principles of open source and innersource communities is asynchronous workflow. That is, participants/employees should be able to collaborate together with ubiquitous access, from anywhere, at any time.

As a practical example, at a previous company I worked at, pretty much everything lived in GitHub. Not just the code for the various products, but also material and discussions from the legal, sales, HR, business development, and other teams.

This offered a number of benefits for both employees and the company:

  • History – all projects, discussions, and collaboration were recorded. This provided a wealth of material for understanding prior decisions, work, and relationships.
  • Transparency – transparency is something most employees welcome and this was the case here where all employees felt a sense of connection to work across the company.
  • Communication – with everyone using the same platform it meant that it was easier for people to communicate clearly and consistently and to see the full scope of a discussion/project when pulled in.
  • Accountability – sunlight is the best disinfectant and having all projects, discussions, and work items/commitments, available in the platform ensured people were accountable in both their work and commitments.
  • Collaboration – this platform made it easier for people to not just collaborate (e.g. issues and pull requests) but also to bring in other employees by referencing their username (e.g. @jonobacon).
  • Reduced Silos – the above factors reduced the silos in the company and resulted in wider cross-team collaboration.
  • Untethered Working – because everything was online and not buried in private meetings and notes, this meant employees could be productive at home, on the road, or outside of office hours (often when riddled with jetlag at 3am!).
  • Internationally Minded – this also made it easier to work with an international audience, crossing different timezones and geographical regions.

While asynchronous workflow is not perfect, it offers clear benefits for a company and is a core component for integrating open source methodology and workflows (also known as innersource) into an organization.

Asynchronous workflow is a common area in which I work with companies. As such, I thought I would write up some lessons learned that may be helpful for you folks.

Designing Asynchronous Workflow

Many of you reading this will likely want to bring in the above benefits to your own organization too. You likely have an existing workflow which will be a mixture of (a) in-person meetings, (b) remote conference/video calls, (c) various platforms for tracking tasks, and (d) various collaboration and communication tools.

As with any organizational change and management, culture lies at the core. Putting platforms in place is the easy bit: adapting those platforms to the needs, desires, and uncertainties that live in people is where the hard work lies.

In designing asynchronous workflow you will need to make the transition from your existing culture and workflow to a new way of working. Ultimately this is about designing workflow that generates behaviors we want to see (e.g. collaboration, open discussion, efficient working) and behaviors we want to deter (e.g. silos, land-grabbing, power-plays etc).

Influencing these behaviors will include platforms, processes, relationships, and more. You will need to take a gradual, thoughtful, and transparent approach in designing how these different pieces fit together and how you make the change in a way that teams are engaged in.

I recommend you manage this in the following way (in order):

  1. Survey the current culture – first, you need to understand your current environment. How technically savvy are your employees? How dependent on meetings are they? What are the natural connections between teams, and where are the divisions? With a mixture of (a) employee surveys, and (b) observational and quantitative data, summarize these dynamics into lists of “Behaviors to Improve” and “Behaviors to Preserve”. These lists will give us a sense of how we want to build a workflow that is mindful of these behaviors and adjusts them where we see fit.
  2. Design an asynchronous environment – based on this research, put together a proposed plan for some changes you want to make to be more asynchronous. This should cover platform choices, changes to processes/policies, and roll-out plan. Divide this plan up in priority order for which pieces you want to deliver in which order.
  3. Get buy-in – next we need to build buy-in in senior management, team leads, and with employees. Ideally this process should be as open as possible with a final call for input from the wider employee-base. This is a key part of making teams feel part of the process.
  4. Roll out in phases – now, based on your defined priorities in the design, gradually roll out the plan. As you do so, provide regular updates on this work across the company (you should include metrics of the value this work is driving in these updates).
  5. Regularly survey users – at regular check-points survey the users of the different systems you put in place. Give them express permission to be critical – we want this criticism to help us refine and make changes to the plan.

Of course, this is a simplification of the work that needs to happen, but it covers the key markers that need to be in place.

Asynchronous Principles

The specific choices in your own asynchronous workflow plan will be very specific to your organization. Every org is different, has different drivers, people, and focus, so it is impossible to make a generalized set of strategic, platform, and process recommendations. Of course, if you want to discuss your organization’s needs specifically, feel free to get in touch.

For the purposes of this piece though, and to serve as many of you as possible, I want to share the core asynchronous principles you should consider when designing your asynchronous workflow. These principles are pretty consistent across most organizations I have seen.

Be Explicitly Permissive

A fundamental principle of asynchronous working (and more broadly in innersource) is that employees have explicit permission to (a) contribute across different projects/teams, (b) explore new ideas and creative solutions to problems, and (c) challenge existing norms and strategy.

Now, this doesn’t mean it is a free for all. Employees will have work assigned to them and milestones to accomplish, but being permissive about the above areas will crisply define the behavior the organization wants to see in employees.

In some organizations the senior management team spouts forth said permission and expects it to stick. While this top-down permission and validation is key, it is also critical that team leads, middle managers, and others support this permissive principle in day-to-day work.

People change and cultures develop by others delivering behavioral patterns that become accepted in the current social structure. Thus, you need to encourage people to work across projects, explore new ideas, and challenge the norm, and validate that behavior publicly when it occurs. This is how we make culture stick.

Default to Open Access

Where possible, teams and users should default to open visibility for projects, communication, issues, and other material. Achieving this requires not just default access controls to be open, but also setting the cultural and organization expectation that material should be open for all employees.

Of course, you should trust your employees to use their judgement too. Some efforts will require private discussions and work (e.g. security issues). Also, some discussions may need to be confidential (e.g. HR). So, default to open, but be mindful of the exceptions.

Platforms Need to be Accessible, Rich, and Searchable

There are myriad platforms for asynchronous working. GitHub, GitLab, Slack, Mattermost, Asana, Phabricator, to name just a few.

When evaluating platforms it is key to ensure that they can be made (a) securely accessible from anywhere (e.g. desktop/mobile support, available outside the office), (b) provide a rich and efficient environment for collaboration (e.g. rich discussions with images/videos/links, project management, simple code collaboration and review), (c) and any material is easily searchable (finding previous projects/discussions to learn from them, or finding new issues to focus on).

Always Maintain History and Never Delete, but Archive

You should maintain history in everything you do. This should include discussions, work/issue tracking, code (revision control), releases, and more.

On a related note, you should never, ever permanently delete material. Instead, that material should be archived. As an example, if you file an issue for a bug or problem that is no longer pertinent, archive the issue so it doesn’t come up in popular searches, but still make it accessible.

Consolidate Identity and Authentication

Having a single identity for each employee on asynchronous infrastructure is important. We want to make it easy for people to reference individual employees, so a unique username/handle is key here. This is not just important technically, but also for building relationships – that username/handle will be a part of how people collaborate, build their reputations, and communicate.

A complex challenge with deploying asynchronous infrastructure is with identity and authentication. You may have multiple different platforms that have different accounts and authentication providers.

Where possible invest in Single Sign On and authentication. While it requires a heavier up-front lift, consolidating multiple accounts further down the line is a nightmare you want to avoid.

Validate, Incentivize, and Reward

Human beings need validation. We need to know we are on the right track, particularly when joining new teams and projects. As such, you need to ensure people can easily validate each other (e.g. likes and +1s, simple peer review processes) and encourage a culture of appreciation and thanking others (e.g. manager and leaders setting an example to always thank people for contributions).

Likewise, people often respond well to being incentivized and often enjoy the rewards of that work. Be sure to identify what a good contribution looks like (e.g. in software development, a merged pull request) and incentivize and reward great work via both baked-in features and specific campaigns.

Be Mindful of Uncertainty, so Train, Onboard, and Support

Moving to a more asynchronous way of working will cause uncertainty in some. Not only are people often reluctant to change, but operating in a very open and transparent manner can make people squeamish about looking stupid in front of their colleagues.

So, be sure to provide extensive training as part of the transition, onboard new staff members, and provide a helpdesk where people can always get help and their questions answered.


Of course, I am merely scratching the surface of how we build asynchronous workflow, but hopefully this will get you started and generate some ideas and thoughts about how you bring this to your organization.

Of course, feel free to get in touch if you want to discuss your organization’s needs in more detail. I would also love to hear additional ideas and approaches in the comments!

The post Building Better Teams With Asynchronous Workflow appeared first on Jono Bacon.

by Jono Bacon at December 19, 2016 03:54 PM

December 17, 2016

Akkana Peck

Distributing Python Packages Part II: Submitting to PyPI

In Part I, I discussed writing a setup.py to make a package you can submit to PyPI. Today I'll talk about better ways of testing the package, and how to submit it so other people can install it.

Testing in a VirtualEnv

You've verified that your package installs. But you still need to test it and make sure it works in a clean environment, without all your developer settings.

The best way to test is to set up a "virtual environment", where you can install your test packages without messing up your regular runtime environment. I shied away from virtualenvs for a long time, but they're actually very easy to set up:

virtualenv venv
source venv/bin/activate

That creates a directory named venv under the current directory, which it will use to install packages. Then you can pip install packagename or pip install /path/to/packagename-version.tar.gz

Except -- hold on! Nothing in Python packaging is that easy. It turns out there are a lot of packages that won't install inside a virtualenv, and one of them is PyGTK, the library I use for my user interfaces. Attempting to install pygtk inside a venv gets:

********************************************************************
* Building PyGTK using distutils is only supported on windows. *
* To build PyGTK in a supported way, read the INSTALL file.    *
********************************************************************

Windows only? Seriously? PyGTK works fine on both Linux and Mac; it's packaged on every Linux distribution, and on Mac it's packaged with GIMP. But for some reason, whoever maintains the PyPI PyGTK packages hasn't bothered to make it work on anything but Windows, and PyGTK seems to be mostly an orphaned project so that's not likely to change.

(There's a package called ruamel.venvgtk that's supposed to work around this, but it didn't make any difference for me.)

The solution is to let the virtualenv use your system-installed packages, so it can find GTK and other non-PyPI packages there:

virtualenv --system-site-packages venv
source venv/bin/activate

I also found that if I had a ~/.local directory (where packages normally go if I use pip install --user packagename), sometimes pip would install to .local instead of the venv. I never did track down why this happened some times and not others, but when it happened, a temporary mv ~/.local ~/old.local fixed it.

Test your Python package in the venv until everything works. When you're finished with your venv, you can run deactivate and then remove it with rm -rf venv.

Tag it on GitHub

Is your project ready to publish?

If your project is hosted on GitHub, you can have pypi download it automatically. In your setup.py, set

download_url='https://github.com/user/package/tarball/tagname',

Check that in. Then make a tag and push it:

git tag 0.1 -m "Name for this tag"
git push --tags origin master

Try to make your tag match the version you've set in setup.py and in your module.
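For instance, with the 0.1 tag above (and user/package standing in for your own GitHub account and repository, as in the earlier example), the download_url line would end up looking like:

download_url='https://github.com/user/package/tarball/0.1',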

Push it to pypitest

Register a new account and password on both pypitest and on pypi.

Then create a ~/.pypirc that looks like this:

[distutils]
index-servers =
  pypi
  pypitest

[pypi]
repository=https://pypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

[pypitest]
repository=https://testpypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

Yes, those passwords are in cleartext. Incredibly, there doesn't seem to be a way to store an encrypted password or even have it prompt you. There are tons of complaints about that all over the web but nobody seems to have a solution. You can specify a password on the command line, but that's not much better. So use a password you don't use anywhere else and don't mind too much if someone guesses.

Update: Apparently there's a newer method called twine that solves the password encryption problem. Read about it here: Uploading your project to PyPI. You should probably use twine instead of the setup.py commands discussed in the next paragraph.
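The basic twine workflow looks roughly like this (a sketch, assuming the pypi/pypitest sections from the ~/.pypirc above, which twine can also read):

pip install twine
python setup.py sdist
twine upload -r pypitest dist/*
twine upload -r pypi dist/*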

Now register your project and upload it:

python setup.py register -r pypitest
python setup.py sdist upload -r pypitest

Wait a few minutes: it takes pypitest a little while before new packages become available. Then go to your venv (to be safe, maybe delete the old venv and create a new one, or at least pip uninstall) and try installing:

pip install -i https://testpypi.python.org/pypi YourPackageName

If you get "No matching distribution found for packagename", wait a few minutes then try again.

If it all works, then you're ready to submit to the real pypi:

python setup.py register -r pypi
python setup.py sdist upload -r pypi

Congratulations! If you've gone through all these steps, you've uploaded a package to pypi. Pat yourself on the back and go tell everybody they can pip install your package.

Some useful reading

Some pages I found useful:

A great tutorial except that it forgets to mention signing up for an account: Python Packaging with GitHub

Another good tutorial: First time with PyPI

Allowed PyPI classifiers -- the categories your project fits into. Unfortunately, there aren't very many of those, so you'll probably be stuck with 'Topic :: Utilities' and not much else.

Python Packages and You: not a tutorial, but a lot of good advice on style and designing good packages.

December 17, 2016 11:19 PM

December 12, 2016

Eric Hammond

How Much Does It Cost To Run A Serverless API on AWS?

Serving 2.1 million API requests for $11

Folks tend to be curious about how much real projects cost to run on AWS, so here’s a real example with breakdowns by AWS service and feature.

This article walks through the AWS invoice for charges accrued in November 2016 by the TimerCheck.io API service which runs in the us-east-1 (Northern Virginia) region and uses the following AWS services:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • Route 53
  • CloudFront
  • SNS (Simple Notification Service)
  • CloudWatch Logs
  • CloudWatch Metrics
  • CloudTrail
  • S3
  • Network data transfer
  • CloudWatch Alarms

During this month, the TimerCheck.io service processed over 2 million API requests. Every request ran an AWS Lambda function that read from and/or wrote to a DynamoDB table.

This AWS account is older than 12 months, so any first year free tier specials are no longer applicable.

Total Cost Overview

At the very top of the AWS invoice, we can see that the total AWS charges for the month of November add up to $11.12. This is the total bill for processing the 2.1 million API requests and all of the infrastructure necessary to support them.

Invoice: Summary

Service Overview

The next part of the invoice lists the top level services and charges for each. You can see that two thirds of the month’s cost was in API Gateway at $7.47, with a few other services coming together to make up the other third.

Invoice: Service Overview

API Gateway

Current API Gateway pricing is $3.50 per million requests, plus data transfer. As you can see from the breakdown below, the requests are the bulk of the expense at $7.41. The responses from TimerCheck.io probably average in the hundreds of bytes, so there’s only $0.06 in data transfer cost.

You currently get a million requests at no charge for the first 12 months, which was not applicable to this invoice, but does end up making API Gateway free for the development of many small projects.

Invoice: API Gateway

CloudTrail

I don’t remember enabling CloudTrail on this account, but at some point I must have done the right thing, as this is something that should be active for every AWS account. There were almost 400,000 events recorded by CloudTrail, but since the first trail is free, there is no charge listed here.

Note that there are some charges associated with the storage of the CloudTrail event logs in S3. See below.

Invoice: CloudTrail

CloudWatch

The CloudWatch costs for this service come from logs being sent to CloudWatch Logs and the storage of those logs. These logs are being generated by AWS Lambda function execution and by API Gateway execution, so you can consider them as additional costs of running those services. You can control some of the logs generated by your AWS Lambda function, so a portion of these costs are under your control.

There are also charges for CloudWatch Alarms, but for some reason, those are listed under EC2 (below) instead of here under CloudWatch.

Invoice: CloudWatch

Data Transfer

Data transfer costs can be complex as they depend on where the data is coming from and going to. Fortunately for TimerCheck.io, there is very little network traffic and most of it falls into the free tiers. What little we are being charged for here amounts to a measly $0.04 for 4 GB of data transferred between availability zones. I presume this is related to AWS services talking to each other (e.g., AWS Lambda to DynamoDB) because there are no EC2 instances in this stack.

Note that this is not the entirety of the data transfer charges, as some other services break out their own network costs.

Invoice: Data Transfer

DynamoDB

The DynamoDB pricing includes a permanent free tier of up to 25 write capacity units and 25 read capacity units. The TimerCheck.io service has a single DynamoDB table that is set to a capacity of 25 write and 25 read units, so there are no charges for capacity.

The TimerCheck.io DynamoDB database size falls well under the 25 GB free tier, so that has no charge either.

Invoice: DynamoDB

Elastic Compute Cloud

The TimerCheck.io service does not use EC2 and yet there is a section in the invoice for EC2. For some reason this section lists the CloudWatch Alarm charges.

Each CloudWatch Alarm costs $0.10 per month and this account has eight for a total of $0.80/month. But, for some reason, I was only billed $0.73. *shrug*

Invoice: EC2

This AWS account has four AWS billing alarms that will email me whenever the cumulative charges for the month pass $10, $20, $30, and $40.

There is one CloudWatch alarm that emails me if the AWS Lambda function invocations are being throttled (more than 100 concurrent functions being executed).

There are two CloudWatch alarms that email me if the consumed read and write capacity units are trending so high that I should look at increasing the capacity settings of the DynamoDB table. We are nowhere near that at current usage volume.

Yes, that leaves one CloudWatch alarm, which was a duplicate. I have since removed it.

AWS Lambda

Since most of the development of the TimerCheck.io API service focuses on writing the 60 lines of code for the AWS Lambda function, this is where I was expecting the bulk of the charges to be. However, the AWS Lambda costs only amount to $0.22 for the month.

There were 2.1 million AWS Lambda function invocations, one per external consumer API request, same as the API Gateway. The first million AWS Lambda function calls are free. The rest are charged at $0.20 per million.

The permanent free tier also includes 400,000 GB-seconds of compute time per month. At an average of 0.15 GB-seconds per function call, we stayed within the free tier at a total of 320,000 GB-seconds.

Invoice: AWS Lambda

I have the AWS Lambda function configuration cranked up to the max 1536 MB of memory so that it will run as fast as possible. Since the charges are rounded up in units of 100ms, we could probably save GB-seconds by scaling down the memory once we exceed the free tier. Most of the time is probably spent in DynamoDB calls anyway, so this should not affect API performance much.
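As a rough back-of-the-envelope check of those numbers (my own arithmetic based on the figures quoted above, not part of the original invoice):

requests = 2100000       # roughly 2.1 million invocations in November
free_requests = 1000000  # the first million Lambda calls per month are free
request_cost = round((requests - free_requests) / 1000000.0 * 0.20, 2)
print(request_cost)      # 0.22 -- matches the $0.22 on the invoice

gb_seconds = requests * 0.15  # ~0.15 GB-seconds per call
print(gb_seconds)             # 315000.0 -- roughly the ~320,000 GB-seconds noted, under the 400,000 free tier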

Route 53

Route 53 charges $0.50 per hosted zone (domain). I have two domains hosted in Route 53, the expected timercheck.io plus the extra timercheck.com. The timercheck.com domain was supposed to redirect to timercheck.io, but I apparently haven’t gotten around to tossing in that feature yet. These two hosted zones account for $1 in charges.

There were 1.1 million DNS queries to timercheck.io and www.timercheck.io, but since those resolve to aliases for the API Gateway, there is no charge.

The other $0.09 comes from the 226,000 DNS queries to random timercheck.io and timercheck.com hostnames. These would include status.timercheck.io, which is a page displaying the uptime of TimerCheck.io as reported by StatusCake.

Invoice: Route 53

Simple Notification Service

During the month of November, there was one post to an SNS topic and one email delivery from an SNS topic. These were both for the CloudWatch alert notifying me that the charges on the account had exceeded $10 for the month. There were no charges for this.

Invoice: SNS

Simple Storage Service

The S3 costs in this account are entirely for storing the CloudTrail events. There were 222 GET requests ($0) and 13,000 requests of other types ($0.07). There was no charge for the 0.064 GB-Mo of actual data stored. Has Amazon started rounding fractional pennies down instead of up in some services?

Invoice: S3

External Costs

The domains timercheck.io and timercheck.com are registered through other registrars. Those cost about $33 and $9 per year, respectively.

The SSL/TLS certificate for https support costs around $10-15 per year, though this should drop to zero once CloudFront distributions created with API Gateway support certificates with ACM (AWS Certificate Manager) #awswishlist

Not directly obvious from the above is the fact that I have spent no time or money maintaining the TimerCheck.io API service post-launch. It’s been running for 1.5 years and I haven’t had to upgrade software, apply security patches, replace failing hardware, recover from disasters, or scale up and down with demand. By using AWS services like API Gateway, AWS Lambda, and DynamoDB, Amazon takes care of everything.

Notes

Your Mileage May Vary.

For entertainment use only.

This is just one example from one month for one service architected one way. Your service running on AWS will not cost the same.

Though 2 million TimerCheck.io API requests in November cost about $11, this does not mean that an additional million would cost another $5.50. Some services would cost significantly more and some would cost about the same, probably averaging out to significantly more.

If you are reading this after November 2016, then the prices for these AWS services have certainly changed and you should not use any of the above numbers in making decisions about running on AWS.

Conclusion

Amazon, please lower the cost of the API Gateway; or provide a simpler, cheaper service that can trigger AWS Lambda functions with https endpoints. Thank you!

Original article and comments: https://alestic.com/2016/12/aws-invoice-example/

December 12, 2016 10:00 AM

Elizabeth Krumbach

Trains in NYC

I’ve wanted to visit the New York Transit Museum ever since I discovered it existed. Housed in the retired Court station in Brooklyn, even the museum venue had “transit geek heaven” written all over it. I figured I’d visit it some day when work brought me to the city, but then I learned about the 15th Annual Holiday Train Show at their Annex and Store at Grand Central going on now. I’d love to see that! I ended up going up to NYC from Philadelphia with my friend David last Sunday morning and made a day of it. Even better, we parked in New Jersey so we had a full-on transit experience from there into Manhattan and Brooklyn and back as the day progressed.

Our first stop was Grand Central Station via the 5 subway line. Somehow I’d never been there before. Enjoy the obligatory station selfie.

From there it was straight down to the Annex and Store run by the transit museum. The holiday exhibit had glittering signs hanging from the ceiling of everything from buses to transit cards to subway cars and snowflakes. The big draw though was the massive o-gauge model train setup, as the site explains:

This year’s Holiday Train Show display will feature a 34-foot-long “O gauge” model train layout with Lionel’s model Metro-North, New York Central, and vintage subway trains running on eight separate loops of track, against a backdrop featuring graphics celebrating the Museum’s 40th anniversary by artist Julia Rothman.

It was quite busy there, but folks were very clearly enjoying it. I’m really glad I went, even if the whole thing made me pine for my future home office train set all the more. Some day! It’s also worth noting that this shop is the one to visit transit-wise. The museum in Brooklyn also had a gift shop, but it was smaller and had fewer things, so I highly recommend picking things up here; I ended up going back after the transit museum to get something I wanted.

We then hopped on the 4 subway line into Brooklyn to visit the actual transit museum. As advertised, it’s in a retired subway station, so the entrance looks like any other subway entrance and you take stairs underground. You enter and buy your ticket and then are free to explore both levels of the museum. The first had several exhibits that rotate, including one about Coney Island and another providing a history of crises in New York City (including 9/11, hurricane Sandy) and how the transit system and operators responded to them. They also had displays of a variety of turnstiles throughout the years, and exhibits talking about street car (trolley) lines and the introduction of the bus systems.

The exhibits were great, but it was downstairs that things got really fun. They have functioning rails where the subway trains used to run through where they’ve lined up over a dozen cars from throughout transit history in NYC for visitors to explore, inside and out.

The evolution of seat designs and configurations was interesting to witness and feel, as you could sit on the seats to get the full experience. Each car also had an information sign next to it, so you could learn about the era and the place of that car in it. Transitions from wood to metal were showcased, as were paired (and ...tripled?) cars, along with a bunch that were stand-alone interchangeables. I also enjoyed seeing a caboose, though I didn’t quite recognize it at first (“is this for someone to live in?”).

A late lunch was due following the transit museum. We ended up at Sottocasa Pizzeria right there in Brooklyn. It got great reviews and I enjoyed it a lot, but it was definitely on the fancy pizza side. They also had a selection of Italian beers, of which I chose the delicious Nora by Birra Baladin. Don’t worry, next time I’m in New York I’ll go to a great, not fancy, pizza place.

It was then back to Manhattan to spend a bit more time at Grand Central and for an evening walk through the city. We started by going up 5th Avenue to see Rockefeller Center at night during the holidays. I hadn’t been to Manhattan since 2013 when I went with my friend Danita, and I’d never seen the square all decked out for the holidays. I didn’t quite think it through, though: it’s probably the busiest time of the year there, so the whole neighborhood was insanely crowded for blocks. After seeing the skating rink and tree, we escaped northwest and made our way through the crowds up to Central Park. It was cold, but all the walking was fun even with the crowds. For dinner we ended up at Jackson Hole for some delicious burgers. I went with the Guacamole Burger.

The trip back to north Jersey took us through the brand new World Trade Center Transportation Hub to take the PATH. It’s a very unusual space. It’s all bright white with tons of marble shaped in a modern look, and has a shopping mall with a surreal amount of open space. The trip back on the PATH that night was as smooth as expected. In all, a very enjoyable day of public transit stuff!

More photos from Grand Central Station and the Transit Museum here: https://www.flickr.com/photos/pleia2/albums/72157677457519215

Epilogue: I received incredibly bad news the day after this visit to NYC. It cast a shadow over it for me. I went back and forth about whether I should write about this visit at all and how I should present it if I did. I decided to present it as it was that day. It was a great day of visiting the city and geeking out over trains, enjoyed with a close friend, and detached from whatever happened later. I only wish I could convince my mind to do the same.

by pleia2 at December 12, 2016 01:29 AM

UbuCon EU 2016

Last month I had the opportunity to travel to Essen, Germany to attend UbuCon EU 2016. Europe has had UbuCons before, but the goal of this one was to make it a truly international event, bringing in speakers like me from all corners of the Ubuntu community to share our experiences with the European Ubuntu community. Getting to catch up with a bunch of my Ubuntu colleagues who I knew would be there and visiting Germany as the holiday season began were also quite compelling reasons for me to attend.

The event formally kicked off Saturday morning with a welcome and introduction by Sujeevan Vijayakumaran, who reported that 170 people had registered for the event and shared other statistics about the number of countries attendees were from. He also introduced a member of the UBports team, Marius Gripsgård, who announced the USB docking station for Ubuntu Touch devices they were developing; more information is in this article on their website: The StationDock.

Following these introductions and announcements, we were joined by Canonical CEO Jane Silber who provided a tour of the Ubuntu ecosystem today. She highlighted the variety of industries where Ubuntu was key, progress with Ubuntu on desktops/laptops, tablets, phones and venturing into the smart Internet of Things (IoT) space. Her focus was around the amount of innovation we’re seeing in the Ubuntu community and from Canonical, and talked about specifics regarding security, updates, the success in the cloud and where Ubuntu Core fits into the future of computing.

I also loved that she talked about the Ubuntu community. The strength of local meetups and events, the free support community that spans a variety of resources, ongoing work by the various Ubuntu flavors. She also spoke to the passion of Ubuntu contributors, citing comics and artwork that community members have made, including the stunning series of release animal artwork by Sylvia Ritter from right there in Germany, visit them here: Ubuntu Animals. I was also super thrilled that she mentioned the Ubuntu Weekly Newsletter as a valuable resource for keeping up with the community, a very small group of folks works very hard on it and that kind of validation is key to sustaining motivation.

The next talk I attended was by Fernando Lanero Barbero on Linux is education, Linux is science. Ubuntu to free educational environments. Fernando works at a school district in Spain where he has deployed Ubuntu across hundreds of computers, reaching over 1200 students in the three years he’s been doing the work. The talk outlined the strengths of the approach, explaining that there were cost savings for his school and also how Ubuntu and open source software is more in line with the school’s values. One of the key takeaways from his experience was one that I know a lot about from our own Linux in schools experiences here in the US at Partimus: focus on the people, not the technologies. We’re technologists who love Linux and want to promote it, but without engagement, understanding and buy-in from teachers, deployments won’t be successful. A lot of time needs to be spent making assessments of their needs, doing roll-outs slowly and methodically so that the change doesn’t happen too abruptly and leave them in the lurch. He also stressed the importance of consistency with the deployments. Don’t get super creative across machines: use the same flavor for everything, even the same icon set. Not everyone is as comfortable with variation as we are, and you want to make the transition as easy as possible across all the systems.

Laura Fautley (Czajkowski) spoke at the next talk I went to, on Supporting Inclusion & Involvement in a Remote Distributed Team. The Ubuntu community itself is distributed across the globe, so drawing experience from her work there and later at several jobs where she’s had to help manage communities, she had a great list of recommendations as you build out such a team. She talked about being sensitive to time zones, and about acknowledging that decisions are sometimes made in social situations rather than in channels everyone can see, which means you need to somehow document and share these decisions with the broader community. She was also eager to highlight how you need to acknowledge and promote the achievements in your team, both within the team and to the broader organization and project, to make sure everyone feels valued and so that everyone knows the great work you’re doing. Finally, it was interesting to hear some thoughts about remote member on-boarding, stressing the need to have a process so that new contributors and team mates can quickly get up to speed and feel included from the beginning.

I went to a few other talks throughout the two day event, but one of the big reasons for me attending was to meet up with some of my long-time friends in the Ubuntu community and finally meet some other folks face to face. We’ve had a number of new contributors join us since we stopped doing Ubuntu Developer Summits and today UbuCons are the only Ubuntu-specific events where we have an opportunity to meet up.


Laura Fautley, Elizabeth K. Joseph, Alan Pope, Michael Hall

Of course I was also there to give a pair of talks. I first spoke on Contributing to Ubuntu on Desktops (slides) which is a complete refresh of a talk I gave a couple of times back in 2014. The point of that talk was to pull people back from the hype-driven focus on phones and clouds for a bit and highlight some of the older projects that still need contributions. I also spoke on Building a career with Ubuntu and FOSS (slides) which was definitely the more popular talk. I’ve given a similar talk for a couple UbuCons in the past, but this one had the benefit of being given while I’m between jobs. This most recent job search as I sought out a new role working directly with open source again gave a new dimension to the talk, and also made for an amusing intro, “I don’t have a job at this very moment …but without a doubt I will soon!” And in fact, I do have something lined up now.


Thanks to Tiago Carrondo for taking this picture during my talk! (source)

The venue for the conference was a kind of artists space, which made it a bit quirky, but I think worked out well. We had a couple social gatherings there at the venue, and buffet lunches were included in our tickets, which meant we didn’t need to go far or wait on food elsewhere.

I didn’t have a whole lot of time for sight-seeing this trip because I had a lot going on stateside (like having just bought a house!), but I did get to enjoy the beautiful Christmas Market in Essen a few nights while I was there.

For those of you not familiar with German Christmas Markets (I wasn’t), they close roads downtown and pop up streets of wooden shacks that sell everything from Christmas ornaments and cookies to hot drinks, beers and various hot foods. The first night I was in town we met up with several fellow conference-goers and got some fries with mayonnaise, grilled mushrooms with Bearnaise sauce, my first taste of German Glühwein (mulled wine) and hot chocolate. The next night was a quick walk through the market that landed us at a steakhouse, where we had a late dinner and a couple of beers.

The final night we didn’t stay out late, but did get some much anticipated Spanish churros, which inexplicably had sugar rather than the cinnamon I’m used to, as well as a couple more servings of Glühwein, this time in commemorative Christmas mugs shaped like boots!


Clockwise from top left: José Antonio Rey, Philip Ballew, Michael Hall, John and Laura Fautley, Elizabeth K. Joseph

The next morning I was up bright and early to catch a 6:45AM train that started me on my three train journey back to Amsterdam to fly back to Philadelphia.

It was a great little conference and a lot of fun. Huge thanks to Sujeevan for being so incredibly welcoming to all of us, and thanks to all the volunteers who worked for months to make the event happen. Also thanks to Ubuntu community members who donate to the community fund since I would have otherwise had to self-fund to attend.

More photos from the event (and the Christmas Market!) here: https://www.flickr.com/photos/pleia2/albums/72157676958738915

by pleia2 at December 12, 2016 12:03 AM

December 11, 2016

Akkana Peck

Distributing Python Packages Part I: Creating a Python Package

I write lots of Python scripts that I think would be useful to other people, but I've put off learning how to submit to the Python Package Index, PyPI, so that my packages can be installed using pip install.

Now that I've finally done it, I see why I put it off for so long. Unlike programming in Python, packaging is a huge, poorly documented hassle, and it took me days to get a working package. Maybe some of the hints here will help other struggling Pythonistas.

Create a setup.py

The setup.py file is the file that describes the files in your project and other installation information. If you've never created a setup.py before, Submitting a Python package with GitHub and PyPI has a decent example, and you can find lots more good examples with a web search for "setup.py", so I'll skip the basics and just mention some of the parts that weren't straightforward.

Distutils vs. Setuptools

However, there's one confusing point that no one seems to mention. setup.py examples all rely on a predefined function called setup, but some examples start with

from distutils.core import setup
while others start with
from setuptools import setup

In other words, there are two different versions of setup! What's the difference? I still have no idea. The setuptools version seems to be a bit more advanced, and I found that using distutils.core, I'd sometimes get weird errors when trying to follow suggestions I found on the web. So I ended up using the setuptools version.

But I didn't initially have setuptools installed (it's not part of the standard Python distribution), so I installed it from the Debian package:

apt-get install python-setuptools python-wheel

The python-wheel package isn't strictly needed, but I found I got assorted warnings from pip install later in the process ("Cannot build wheel") unless I installed it, so I recommend you install it from the start.
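For orientation, here's roughly what a bare-bones setuptools-based setup.py looks like before any of the extras discussed below are added (the names here are placeholders, not from a real project):

from setuptools import setup

setup(name='packagename',
      version='0.1',
      description='A one-line description of the package',
      author='Your Name',
      author_email='you@example.com',
      url='https://github.com/user/packagename',
      packages=['packagename'],
      )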

Including scripts

setup.py has a scripts option to include scripts that are part of your package:

    scripts=['script1', 'script2'],

But when I tried to use it, I had all sorts of problems, starting with scripts not actually being included in the source distribution. There isn't much support for using scripts -- it turns out you're actually supposed to use something called console_scripts, which is more elaborate.

First, you can't have a separate script file, or even a __main__ inside an existing class file. You must have a function, typically called main(), so you'll typically have this:

def main():
    # do your script stuff

if __name__ == "__main__":
    main()

Then add something like this to your setup.py:

      entry_points={
          'console_scripts': [
              'script1=yourpackage.filename:main',
              'script2=yourpackage.filename2:main'
          ]
      },

There's a secret undocumented alternative that a few people use for scripts with graphical user interfaces: use 'gui_scripts' rather than 'console_scripts'. It seems to work when I try it, but the fact that it's not documented and none of the Python experts even seem to know about it scared me off, and I stuck with 'console_scripts'.

Including data files

One of my packages, pytopo, has a couple of files it needs to install, like an icon image. setup.py has a provision for that:

      data_files=[('/usr/share/pixmaps',      ["resources/appname.png"]),
                  ('/usr/share/applications', ["resources/appname.desktop"]),
                  ('/usr/share/appname',      ["resources/pin.png"]),
                 ],

Great -- except it doesn't work. None of the files actually gets added to the source distribution.

One solution people mention to a "files not getting added" problem is to create an explicit MANIFEST file listing all files that need to be in the distribution. Normally, setup generates the MANIFEST automatically, but apparently it isn't smart enough to notice data_files and include those in its generated MANIFEST.

I tried creating a MANIFEST listing all the .py files plus the various resources -- but it didn't make any difference. My MANIFEST was ignored.

The solution turned out to be creating a MANIFEST.in file, which is used to generate a MANIFEST. It's easier than creating the MANIFEST itself: you don't have to list every file, just patterns that describe them:

include setup.py
include packagename/*.py
include resources/*
If you have any scripts that don't use the extension .py, don't forget to include them as well. This may have been why scripts= didn't work for me earlier, but by the time I found out about MANIFEST.in I had already switched to using console_scripts.

Testing setup.py

Once you have a setup.py, use it to generate a source distribution with:

python setup.py sdist
(You can also use bdist to generate a binary distribution, but you'll probably only need that if you're compiling C as part of your package. Source dists are apparently enough for pure Python packages.)

Your package will end up in dist/packagename-version.tar.gz so you can use tar tf dist/packagename-version.tar.gz to verify what files are in it. Work on your setup.py until you don't get any errors or warnings and the list of files looks right.

Congratulations -- you've made a Python package! I'll post a followup article in a day or two about more ways of testing, and how to submit your working package to PyPI.

Update: Part II is up: Distributing Python Packages Part II: Submitting to PyPI.

December 11, 2016 07:54 PM

December 08, 2016

Nathan Haines

UbuCon Europe 2016

UbuCon Europe 2016

Nathan Haines enjoying UbuCon Europe

If there is one defining aspect of Ubuntu, it's community. All around the world, community members and LoCo teams get together not just to work on Ubuntu, but also to teach, learn, and celebrate it. UbuCon Summit at SCALE was a great example of an event that was supported by the California LoCo Team, Canonical, and community members worldwide coming together to make an event that could host presentations on the newest developer technologies in Ubuntu, community discussion roundtables, and a keynote by Mark Shuttleworth, who answered audience questions thoughtfully, but also hung around in the hallway and made himself accessible to chat with UbuCon attendees.

Thanks to the Ubuntu Community Reimbursement Fund, the UbuCon Germany and UbuCon Paris coordinators were able to attend UbuCon Summit at SCALE, and we were able to compare notes, so to speak, as they prepared to expand by hosting the first UbuCon Europe in Germany this year. Thanks to the community fund, I also had the immense pleasure of attending UbuCon Europe. After I arrived, Sujeevan Vijayakumaran picked me up from the airport and we took the train to Essen, where we walked around the newly-opened Weihnachtsmarkt along with Philip Ballew and Elizabeth Joseph from Ubuntu California. I acted as official menu translator, so there were no missed opportunities for bratwurst, currywurst, glühwein, or beer. Happily fed, we called it a night and got plenty of sleep so that we would last the entire weekend long.

Zeche Zollverein, a UNESCO World Heritage site

UbuCon Europe was a marvelous experience. Friday started things off with social events so everyone could mingle and find shared interests. About 25 people attended the Zeche Zollverein Coal Mine Industrial Complex for a guided tour of the last operating coal extraction and processing site in the Ruhr region; it was a fascinating look at the industry that defined the region for a century. After that, about 60 people joined in a special dinner at Unperfekthaus, a unique location that is part creative studio, part art gallery, part restaurant, and all experience. With a buffet and large soda fountains and hot coffee/chocolate machine, dinner was another chance to mingle as we took over a dining room and pushed all the tables together in a snaking chain. It was there that some Portuguese attendees first recognized me as the default voice for uNav, which was something I had to get used to over the weekend. There’s nothing like a good dinner to get people comfortable together, and the Telegram channel that was established for UbuCon Europe attendees was spread around.

Sujeevan Vijayakumaran addressing the UbuCon Europe attendees

The main event began bright and early on Saturday. Attendees were registered on the fifth floor of Unperfekthaus and received their swag bags full of cool stuff from the event sponsors. After some brief opening statements from Sujeevan, Marius Gripsgård announced an exciting new Kickstarter campaign that will bring an easier convergence experience to not just most Ubuntu phones, but many Android phones as well. Then, Jane Silber, the CEO of Canonical, gave a keynote that went into detail about where Canonical sees Ubuntu in the future, how convergence and snaps will factor into future plans, and why Canonical wants to see one single Ubuntu on the cloud, server, desktop, laptop, tablet, phone, and Internet of Things. Afterward, she spent some time answering questions from the community, and she impressed me with her willingness to answer questions directly. Later on, she was chatting with a handful of people and it was great to see the consideration and thought she gave to those answers as well. Luckily, she also had a little time to just relax and enjoy herself without the third degree before she had to leave later that day. I was happy to have a couple minutes to chat with her.

Nathan Haines and Jane Silber

Microsoft Deutschland GmbH sent Malte Lantin to talk about Bash on Ubuntu on Windows and how the Windows Subsystem for Linux works, and while jokes about Microsoft and Windows were common all weekend, everyone kept their sense of humor and the community showed the usual respect that’s made Ubuntu so wonderful. While being able to run Ubuntu software natively on Windows makes many nervous, it also excites others. One thing is for sure: it’s convenient, and the prospect of having a robust terminal emulator built right in to Windows seemed to be something everyone could appreciate.

After that, I ate lunch and gave my talk, Advocacy for Advocates, where I gave advice on how to effectively recommend Ubuntu and other Free Software to people who aren’t currently using it or aren’t familiar with the concept. It was well-attended and I got good feedback. I also had a chance to speak in German for a minute, as the ambiguity of the term Free Software in English disappears in German, where freie Software is clear and not confused with kostenlose Software. It’s a talk I’ve given before and will definitely give again in the future.

After the talks were over, there was a raffle and then a UbuCon quiz show where the audience could win prizes. I gave away signed copies of my book, Beginning Ubuntu for Windows and Mac Users, in the raffle, and in fact I won a “xenial xerus” USB drive that looks like an origami squirrel as well as a Microsoft t-shirt. Afterwards was a dinner that was not only delicious, with apple crumble for dessert, but also came with free beer and wine, which rarely detracts from any meal.

Marcos Costales and Nathan Haines before the uNav presentation

Sunday was also full of great talks. I loved Marcos Costales’s talk on uNav, and as the video shows, I was inspired to jump up as the talk was about to begin and improvise the uNav-like announcement “You have arrived at the presentation.” With the crowd warmed up from the joke, Marcos took us on a fascinating journey of the evolution of uNav and finished with tips and tricks for using it effectively. I also appreciated Olivier Paroz’s talk about Nextcloud and its goals, as I run my own Nextcloud server. I was sure to be at the UbuCon Europe feedback and planning roundtable and was happy to hear that next year UbuCon Europe will be held in Paris. I’ll have to brush up on my restaurant French before then!

Nathan Haines contemplating tools with a Neanderthal

That was the end of UbuCon, but I hadn’t been to Germany in over 13 years so it wasn’t the end of my trip! Sujeevan was kind enough to put up with me for another four days, and he accompanied me on a couple shopping trips as well as some more sightseeing. The highlight was a trip to the Neanderthal Museum in the aptly-named Neandertal, Germany, and then afterward we met his friend (and UbuCon registration desk volunteer!) Philipp Schmidt in Düsseldorf at their Weihnachtsmarkt, where we tried the Feuerzangenbowle, in which mulled wine is improved by soaking a block of sugar in rum, setting it over the wine, and lighting the sugarloaf on fire so it drips into the wine. After that, we went to the Brauerei Schumacher where I enjoyed not only Schumacher Alt beer, but also the Rhein-style sauerbraten that has been on my to-do list for a decade and a half. (Other variations of sauerbraten—not to mention beer—remain on the list!)

I’d like to thank Sujeevan for his hospitality on top of the tremendous job that he, the German LoCo, and the French LoCo exerted to make the first UbuCon Europe a stunning success. I’d also like to thank everyone who contributed to the Ubuntu Community Reimbursement Fund for helping out with my travel expenses, and everyone who attended, because of course we put everything together for you to enjoy.

December 08, 2016 05:04 AM

December 05, 2016

Elizabeth Krumbach

Vacation Home in Pennsylvania

This year MJ and I embarked on a secret mission: Buy a vacation home in Pennsylvania.

It was a decision we’d mulled over for a couple years, and the state of the real estate market, along with our place in our lives and careers and our frequent visits back to the Philadelphia area, finally made the stars align to make it happen. With the help of family local to the area, including one who is a real estate agent, we spent the past few trips taking time to look at houses and make some decisions. In August we started signing the paperwork to take possession of a new home in November.

With the timing of our selection, we were able to pick out cabinets, counter tops and some of the other non-architectural options in the home. Admittedly none of that is my thing, but it’s still nice that we were able to put our touch on the end result. As we prepared for the purchase, MJ spent a lot of time making plans for taking care of the house and handling things like installations, deliveries and the move of our items from storage into the house.

In October we also bought a car that we’d be keeping at the house in Philadelphia, though we did enjoy it in California for a few weeks.

On November 15th we met at the title company office and signed the final paperwork.

The house was ours!

The next day I flew to Germany for a conference and MJ headed back to San Francisco. I enjoyed the conference and a few days in Germany, but I was eager to get back to the house.

Upon my return we had our first installation. Internet! And backup internet.

MJ came back into town for Thanksgiving which we enjoyed with family. The day after was the big move from storage into the house. Our storage units not only had our own things that we’d left in Pennsylvania, but everything from MJ’s grandparents, which included key contents of their own former vacation home which I never saw. We moved his grandmother into assisted care several years ago and had been keeping their things until we got a larger home in California. With the house here in Pennsylvania we decided to use some of the pieces to furnish the house here. It also meant I have a lot of boxes to go through.

Before MJ left to head back to work in San Francisco we did get a few things unpacked, including Champagne glasses, which meant on Saturday night following the move day we were able to pick up a proper bottle of Champagne and spend the evening together in front of the fireplace to celebrate.

I’d been planning on taking some time off following the layoff from my job as I consider new opportunities in the coming year. It ended up working well since I’ve been able to do that, plus spend the past week here in the Philadelphia house unpacking and getting the house set up. Several of the days I’ve also had to be here at the house to receive deliveries and be present for installs of all kinds to make sure the house is ready and secure (cameras!) for us to properly enjoy as soon as possible. Today is window blinds day. I am getting to enjoy it some too: between all these tasks I’ve spent time with local friends and family, had some time reading in front of the fireplace, and enjoyed a beautiful new Bluetooth speaker playing music all day. The house doesn’t have a television yet, but I have also curled up to watch a few episodes on my tablet here and there in the evenings as well.

There have also been some great sunsets in the neighborhood. I sure missed the Pennsylvania autumn sights and smells.

And not all the unpacking has been laborious. I found MJ’s telescope from years ago in storage and I was able to set that up the other night. Looking forward to a clear night to try it out.

Tomorrow I’m flying off yet again for a conference and then to spend at least a week at home back in San Francisco. We’ll be back very soon though, planning on spending at least the eight days of Hanukkah here, and possibly flying in earlier if we can line up some of the other work we need to get done.

by pleia2 at December 05, 2016 07:21 PM

December 04, 2016

Elizabeth Krumbach

Breathtaking Barcelona

My father once told me that Madrid was his favorite city and that he generally loved Spain. When my aunt shipped me a series of family slides last year I was delighted to find ones from Madrid in the mix, and I uploaded the album: Carl A. Krumbach – Spain 1967. I wish I had asked him why he loved Madrid, but in October I had the opportunity myself to learn why I now love Spain.

I landed in Barcelona the last week of October. First, it was a beautiful time to visit. Nice weather that wasn’t too hot or too cold. It rained overnight a couple times and a bit some days, but not enough to deter activities, and I was busy with a conference during most of the days anyway. It was also warm enough to go swimming in the Mediterranean, though I failed to avail myself of this opportunity. The day I got in I met up with a couple friends to go to the aquarium and walk around the coastline, and I was able to touch the sea for the first time. That evening I also had my first of three seafood paellas that I enjoyed throughout the week. So good.

The night life was totally a thing. Many places would offer tapas along with drinks, so one night a bunch of us went out and just ate and drank our way through the Gothic Quarter. The restaurants also served late, often not even starting dinner service until 8PM. One night at midnight we found ourselves at a steakhouse dining on a giant steak that served the table and drinking a couple bottles of cava. Oh the cava, it was plentiful and inexpensive. As someone who lives in California these days I felt a bit bad about betraying my beloved California wine, but it was really good. I also enjoyed the Sangrias.

A couple mornings after evenings when I didn’t let the drinks get the better of me, I also went out for a run. Running along the beaches in Barcelona was a tiny slice of heaven. It was also wonderful to just go sit by the sea one evening when I needed some time away from conference chaos.


Seafood paella lunch for four! We also had a couple beers.

All of this happened before I even got out to do much tourist stuff. Saturday was my big day for seeing the famous sights. Early in the week I reserved tickets to see the Sagrada Familia Basilica. I like visiting religious buildings when I travel because they tend to be on the extravagant side. Plus, back at the OpenStack Summit in Paris we heard from a current architect of the building and I’ve since seen a documentary about the building and nineteenth century architect Antoni Gaudí. I was eager to see it, but nothing quite prepared me for the experience. I had tickets for 1:30PM and was there right on time.


Sagrada Familia selfie!

It was the most amazing place I’ve ever been.

The architecture sure is unusual but once you let that go and just enjoy it, everything comes together in a calming way that I’ve never quite experienced before. The use of every color through the hundreds of stained glass windows was astonishing.

I didn’t do the tower tour on this trip because once I realized how special this place was I wanted to save something new to do there the next time I visit.

The rest of my day was spent taking one of the tourist buses around town to get a taste of a bunch of the other sights. I got a glimpse of a couple more buildings by Gaudí. In the middle of the afternoon I stopped at a tapas restaurant across from La Monumental, a former bullfighting ring. They outlawed bullfighting several years ago, but the building is still used for other events and is worth seeing for the beautiful tiled exterior, even just on the outside.

I also walked through the Arc de Triomf and made my way over to the Barcelona Cathedral. After the tour bus brought me back to the stop near my hotel I spent the rest of the late afternoon enjoying some time at the beach.

That evening I met up with my friend Clint to do one last wander around the area. We stopped at the beach and had some cava and cheese. From there we went to dinner where we split a final paella and bottle of cava. Dessert was a Catalan cream, which is a lot like a crème brûlée but with cinnamon, yum!

As much as I wanted to stay longer and enjoy the gorgeous weather, the next morning I was scheduled to return home.

I loved Barcelona. It stole my heart like no other European city ever has and it’s now easily one of my favorite cities. I’ll be returning, hopefully sooner than later.

More photos from my adventures in Barcelona here: https://www.flickr.com/photos/pleia2/albums/72157674260004081

by pleia2 at December 04, 2016 03:18 AM

December 02, 2016

Elizabeth Krumbach

OpenStack book and Infra team at the Ocata Summit

At the end of October I attended the OpenStack Ocata Summit in beautiful Barcelona. My participation in this was a bittersweet one for me. It was the first summit following the release of our Common OpenStack Deployments book, and OpenStack Infrastructure tooling was featured in a short keynote on Wednesday morning, making for quite the exciting summit. Unfortunately it also marked my last week with HPE and an uncertain future with regard to my continued full time participation with the OpenStack Infrastructure team. It was also the last OpenStack Summit where the conference and design summit were hosted together, so the next several months will be worth keeping an eye on community-wise. Still, I largely took the position of assuming I’d continue to be able to work on the team, just with more caution in regard to work I was signing up for.

The first thing that I discovered during this summit was how amazing Barcelona is. The end of October presented us with some amazing weather for walking around and the city doesn’t go to sleep early, so we had plenty of time in the evenings to catch up with each other over drinks and scrumptious tapas. It worked out well since there were fewer sponsored parties in the evenings at this summit and attendance seemed limited at the ones that existed.

The high point for me at the summit was having the OpenStack Infrastructure tooling for handling our fleet of compute instances featured in a keynote! Given my speaking history, I was picked from the team to be up on the big stage with Jonathan Bryce to walk through a demonstration where we removed one of our US cloud providers and added three more in Europe. While the change was landing and tests started queuing up we also took time to talk about how tests are done against OpenStack patch sets across our various cloud providers.


Thanks to Johanna Koester for taking this picture (source)

It wasn’t just me presenting though. Clark Boylan and Jeremy Stanley were sitting in the front row making sure the changes landed and everything went according to plan during the brief window that this demonstration took up during the keynote. I’m thrilled to say that this live demonstration was actually the best run we had of all the testing; seeing all the tests start running on our new providers live on stage in front of such a large audience was pretty exciting. The team has built something really special here, and I’m glad I had the opportunity to help highlight that in the community with a keynote.


Mike Perez and David F. Flanders sitting next to Jeremy and Clark as they monitor demonstration progress. Photo credit for this one goes to Chris Hoge (source)

The full video of the keynote is available here: Demoing the World’s Largest Multi-Cloud CI Application

A couple of conference talks were presented by members of the Infrastructure team as well. On Tuesday Colleen Murphy, Paul Belanger and Ricardo Carrillo Cruz presented on the team’s Infra-Cloud. As I’ve written about before, the team has built a fully open source OpenStack cloud using the community Puppet modules and donated hardware and data center space from Hewlett Packard Enterprise. This talk outlined the architecture of that cloud, some of the challenges they’ve encountered, statistics from how it’s doing now and future plans. Video from their talk is here: InfraCloud, a Community Cloud Managed by the Project Infrastructure Team.

James E. Blair also gave a talk during the conference, this time on Zuul version 3. This version of Zuul has been under development for some time, so this was a good opportunity to update the community on the history of the Zuul project in general and why it exists, the status of ongoing efforts with an eye on v3, and the problems it’s trying to solve. I’m also in love with his slide deck; it was all text-based (including some “animations”!) and all with an Art Deco theme. Video from his talk is here: Zuul v3: OpenStack and Ansible Native CI/CD.

As usual, the Infrastructure team also had a series of sessions related to ongoing work. As a quick rundown, we have Etherpads for all the sessions (read-only links provided):

Friday concluded with a Contributors Meetup for the Infrastructure team in the afternoon where folks split off into small groups to tackle a series of ongoing projects together. I was also able to spend some time with the Internationalization (i18n) team that Friday afternoon. I dragged along Clark so someone else on the team could pick up where I left off in case I have less time in the future. We talked about the pending upgrade of Zanata and plans for a translations checksite, making progress on both fronts, especially when we realized that there’s a chance we could get away with just running a development version of Horizon itself, with a more stable back end.


With the i18n team!

Finally, the book! It was the first time I was able to see Matt Fischer, my contributing author, since the book came out. Catching up with him and signing a book together was fun. Thanks to my publisher I was also thrilled to donate the signed copies I brought along to the Women of OpenStack Speed Mentoring event on Tuesday morning. I wasn’t able to attend the event, but they were given out on my behalf, thanks to Nithya Ruff for handling the giveaway.


Thanks to Nithya Ruff for taking a picture of me with my book at the Women of OpenStack area of the expo hall (source) and to Brent Haley for getting the picture of Lisa-Marie and me (source).

I was also invited to sit down with Lisa-Marie Namphy to chat about the book and changes to the OpenStack Infrastructure team in the Newton cycle. The increase in capacity to over 2000 test instances this past cycle was quite the milestone so I enjoyed talking about that. The full video is up on YouTube: OpenStack® Project Infra: Elizabeth K. Joseph shares how test capacity doubled in Newton

In all, it was an interesting summit with a lot of change happening in the community and with partner companies. The people that make the community are still there though and it’s always enjoyable spending time together. My next OpenStack event is coming up quickly: next week I’ll be speaking at OpenStack Days Mountain West on The OpenStack Project Continuous Integration System. I’ll also have a pile of books to give away at that event!

by pleia2 at December 02, 2016 02:58 PM

December 01, 2016

Elizabeth Krumbach

A Zoo and an Aquarium

When I was in Ohio last month for the Ohio LinuxFest I added a day on to my trip to visit the Columbus Zoo. A world-class zoo, it’s one of the few northern state zoos that has manatees and their African savanna exhibit is worth visiting. I went with a couple friends I attended the conference with, one of whom was a local and offered to drive (thanks again Svetlana!).

We arrived mid-day, which was in time to see their cheetah run, where they give one of their cheetahs some exercise by having it run a quick course around what had just been moments before the hyena habitat. I also learned recently via ZooBorns that the Columbus Zoo is one that participates in the cheetah-puppy pairing from a young age. The dogs keep these big cats feeling secure with their calmness in an uncertain world, adorable article from the site here: A Cheetah and His Dog

Much to my delight, they were also selling Cheetah-and-Dog pins after the run to raise money. Yes, please!

As I said, I really enjoyed their African Savanna exhibit. It was big and sprawling and had a nice mixture of animals. The piles of lions they have were also quite the sight to behold.

Their kangaroo enclosure was open to walk through, so you could get quite close to the kangaroos just like I did at the Perth Zoo. There were also a trio of baby tigers and some mountain lions that were adorable. And then there were the manatees. I love manatees!

I’m really glad I took the time to stay longer in Columbus; I’d likely go again if I found myself in the area.

More photos from the zoo, including a tiger napping on his back, and those mountain lions here: https://www.flickr.com/photos/pleia2/albums/72157671610835663

Just a couple weeks later I found myself on another continent, and at the Barcelona Aquarium with my friends Julia and Summer. It was a sizable aquarium and really nicely laid out. Their selection of aquatic animals was diverse and interesting. In this aquarium I liked some of the smallest critters the most. Loved their seahorses.

And the axolotls.

There was also an octopus that was awake and wandering around the tank, much to the delight of the crowd.

They also had penguins, a great shark tube and tank with a moving walkway.

More photos from the Barcelona Aquarium: https://www.flickr.com/photos/pleia2/albums/72157675629122655

Barcelona also has a zoo, but in my limited time there in the city I didn’t make it over there. It’s now on my very long list of other things to see the next time I’m in Barcelona, and you bet there will be a next time.

by pleia2 at December 01, 2016 03:57 AM

November 30, 2016

Eric Hammond

Amazon Polly Text To Speech With aws-cli and Twilio

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices.

Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family.

This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech.

The basic approach is:

  1. Generate the speech audio using Amazon Polly.

  2. Upload the resulting audio file to S3.

  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1- Generate Speech Audio With Amazon Polly

Here’s a simple example of how to turn text into speech, using the latest aws-cli:

text="Hello. This speech is generated using Amazon Polly. Enjoy!"
audio_file=speech.mp3

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text "$text" \
  $audio_file

You can listen to the resulting output file using your favorite audio player:

mpg123 -q $audio_file
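
If you want more control than plain text gives you, the SSML support mentioned earlier is also available from the command line. Here’s a rough sketch, assuming your aws-cli is new enough to accept SSML input via --text-type:

# --text-type ssml tells Polly to interpret the markup
# instead of reading the tags aloud.
aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text-type ssml \
  --text '<speak>Hello. <break time="1s"/> That pause came from SSML.</speak>' \
  ssml-speech.mp3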

2- Upload Audio to S3

Create or re-use an S3 bucket to store the audio files temporarily.

s3bucket=YOURBUCKETNAME
aws s3 mb s3://$s3bucket

Upload the generated speech audio file to the S3 bucket. I use a long, random key for a touch of security:

s3key=audio-for-twilio/$(uuid -v4 -FSIV).mp3
aws s3 cp --acl public-read $audio_file s3://$s3bucket/$s3key

For easy cleanup, you can use a bucket with a lifecycle that automatically deletes objects after a day or thirty. See instructions below for how to set this up.

3- Initiate Call With Twilio

Once you have set up an account with Twilio (see pointers below if you don’t have one yet), here are sample commands to initiate a phone call and play the Amazon Polly speech audio:

from_phone="+1..." # Your Twilio allocated phone number
to_phone="+1..."   # Your phone number to call

TWILIO_ACCOUNT_SID="..." # Your Twilio account SID
TWILIO_AUTH_TOKEN="..."  # Your Twilio auth token

speech_url="http://s3.amazonaws.com/$s3bucket/$s3key"
twimlet_url="http://twimlets.com/message?Message%5B0%5D=$speech_url"

curl -XPOST https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Calls.json \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=$from_phone" \
  --data-urlencode "To=to_phone" \
  --data-urlencode "Url=$twimlet_url"

The Twilio web service will return immediately after queuing the phone call. It could take a few seconds for the call to be initiated.

Make sure you listen to the phone call as soon as you answer, as Twilio starts playing the audio immediately.
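
If you’d rather not wait for the lifecycle expiration described below, you can also delete the temporary audio by hand once you’re confident the call has finished playing:

# Remove the temporary speech audio from S3 once the call is done.
aws s3 rm s3://$s3bucket/$s3key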

The ringspeak Command

For your convenience (actually for mine), I’ve put together a command line program that turns all the above into a single command. For example, I can now type things like:

... || ringspeak --to +1NUMBER "Please review the cron job failure messages"

or:

ringspeak --at 6:30am \
  "Good morning!" \
  "Breakfast is being served now in Venetian Hall G.." \
  "Werners keynote is at 8:30."

Twilio credentials, default phone numbers, S3 bucket configuration, and Amazon Polly voice defaults can be stored in a $HOME/.ringspeak file.

Here is the source for the ringspeak command:

https://github.com/alestic/ringspeak

Tip: S3 Setup

Here is a sample command to configure an S3 bucket with automatic deletion of all keys after 1 day:

aws s3api put-bucket-lifecycle \
  --bucket "$s3bucket" \
  --lifecycle-configuration '{
    "Rules": [{
        "Status": "Enabled",
        "ID": "Delete all objects after 1 day",
        "Prefix": "",
        "Expiration": {
          "Days": 1
        }
  }]}'

This is convenient because you don’t have to worry about knowing when Twilio completes the phone call to clean up the temporary speech audio files.
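
To double-check that the rule is active, you can read the bucket’s lifecycle configuration back (depending on your aws-cli version, this call may be spelled get-bucket-lifecycle instead):

# Show the lifecycle rules currently attached to the bucket.
aws s3api get-bucket-lifecycle-configuration --bucket "$s3bucket"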

Tip: Twilio Setup

This isn’t the place for an entire Twilio howto, but I will say that it is about this simple to set up:

  1. Create a Twilio account

  2. Reserve a phone number through Twilio.

  3. Find the ACCOUNT SID and AUTH TOKEN for use in Twilio API calls.

When you are using the Twilio free trial, it requires you to verify phone numbers before calling them. To call arbitrary numbers, enter your credit card and fund the minimum of $20.

Twilio will only charge you for what you use (about a dollar a month per phone number, about a penny per minute for phone calls, etc.).

Closing

A lot is possible when you start integrating Twilio with AWS. For example, my daughter developed an Alexa skill that lets her speak a message for a family member and have it delivered by phone. Alexa triggers her AWS Lambda function, which invokes the Twilio API to deliver the message by voice call.

With Amazon Polly, these types of voice applications can sound better than ever.

Original article and comments: https://alestic.com/2016/11/amazon-polly-text-to-speech/

November 30, 2016 06:30 PM

Elizabeth Krumbach

Ohio LinuxFest 2016

Last month I had the pleasure of finally attending an Ohio LinuxFest. The conference has been on my radar for years, but every year I seemed to have some kind of conflict. When my Tour of OpenStack Deployment Scenarios was accepted I was thrilled to finally be able to attend. My employer at the time also pitched in as a Bronze sponsor of the conference and sent along a banner that showcased my talk, and my OpenStack book!

The event kicked off on Friday and the first talk I attended was by Jeff Gehlbach on What’s Happening with OpenNMS. I’ve been to several OpenNMS talks over the years and played with it some, so I knew the background of the project. This talk covered several of the latest improvements. Of particular note were some of their UI improvements, including both a website refresh and some stunning improvements to the WebUI. It was also interesting to learn about Newts, the time-series data store they’ve been developing to replace RRDtool, which they struggled to scale with their tooling. Newts is decoupled from the visualization tooling so you can hook in your own, like if you wanted to use Grafana instead.

I then went to Rob Kinyon’s Devs are from Mars, Ops are from Venus. He had some great points about communication between ops, dev and QA, starting with being aware and understanding of the fact that you all have different goals, which sometimes conflict. Pausing to make sure you know why different teams behave the way they do and knowing that they aren’t just doing it to make your life difficult, or because they’re incompetent, makes all the difference. He implored the audience to assume that we’re all smart, hard-working people trying to get our jobs done. He also touched upon improvements to communication, making sure you repeat requests in your own words so misunderstandings don’t occur due to differing vocabularies. Finally, he suggested that some cross-training happen between roles. A developer may never be able to take over full time for an operator, or vice versa, but walking a mile in someone else’s shoes helps build the awareness and understanding that he stresses is important.

The afternoon keynote was given by Catherine Devlin on Hacking Bureaucracy with 18F. She works for the government in the 18F digital services agency. Their mandate is to work with other federal agencies to improve their digital content, from websites to data delivery. Modeled after a startup, she explained that they try not to over-plan the way many government organizations do (which can lead to failure); instead, they want to fail fast and keep iterating. She also said their team has a focus on hiring good people and understanding the needs of the people they serve, rather than focusing on raw technical talent and the tools. Their practices center around an open by default philosophy (see: 18F: Open source policy), so much of their work is open source and can be adopted by other agencies. They also make sure they understand the culture of organizations they work with so that the tools they develop together will actually be used, as well as respecting the domain knowledge of teams they’re working with. Slides from her talk are here, and they include lots of great links to agency tooling they’ve worked on: https://github.com/catherinedevlin/olf-2016-keynote


Catherine Devlin on 18F

That evening folks gathered in the expo hall to meet and eat! That’s where I caught up with my friends from Computer Reach. This is the non-profit I went to Ghana with back in 2012 to deploy Ubuntu-based desktops. I spent a couple weeks there with Dave, Beth Lynn and Nancy (alas, unable to come to OLF) so it was great to see them again. I learned more about the work they’re continuing to do, having switched to using mostly Xubuntu on new installs which was written about here. On a personal level it was a lot of fun connecting with them too, we really bonded during our adventures over there.


Tyler Lamb, Dave Sevick, Elizabeth K. Joseph, Beth Lynn Eicher

Saturday morning began with a keynote from Ethan Galstad on Becoming the Next Tech Entrepreneur. Ethan is the founder of Nagios, and in his talk he traced some of the history of his work on getting Nagios off the ground as a proper project and company and his belief in why technologists make good founders. In his work he drew from his industry and market expertise as a technologist and was able to play to the niche he was focused on. He also suggested that folks look to what other founders have done that has been successful, and recommended some books (notably Founders at Work and Work the System). Finally, he walked through some of what can be done to get started, including the stages of idea development, a basic business plan (don’t go crazy), a rough 1.0 release that you can have some early customers test and get feedback from, and then into marketing, documenting and focused product development. He concluded by stressing that open source project leaders are already entrepreneurs and the free users of your software are your initial market.

Next up was Robert Foreman’s Mixed Metaphors: Using Hiera with Foreman where he sketched out the work they’ve done that preserves usage of Hiera’s key-value store system but leverages Foreman for the actual orchestration. The mixing of provisioning and orchestration technologies is becoming more common, but I hadn’t seen this particular mashup.

My talk was A Tour of OpenStack Deployment Scenarios. This is the same talk I gave at FOSSCON back in August, walking the audience through a series of ways that OpenStack could be configured to provide compute instances, metering and two types of storage. For each I gave a live demo using DevStack. I also talked about several other popular components that could be added to a deployment. Slides from my talk are here (PDF), which also link to a text document with instructions for how to run the DevStack demos yourself.


Thanks to Vitaliy Matiyash for taking a picture during my talk! (source)

At lunch I met up with my Ubuntu friends to catch up. We later met at the booth where they had a few Ubuntu phones and tablets that gained a bunch of attention throughout the event. This event was also my first opportunity to meet Unit193 and Svetlana Belkin in person, both of whom I’ve worked with on Ubuntu for years.


Unit193, Svetlana Belkin, José Antonio Rey, Elizabeth K. Joseph and Nathan Handler

After lunch I went over to see David Griggs of Dell give us “A Look Under the Hood of Ohio Supercomputer Center’s Newest Linux Cluster.” Supercomputers are cool and it was interesting to learn about the system it was replacing, the planning that went into the replacement and workload cut-over and see in-progress photos of the installation. From there I saw Ryan Saunders speak on Automating Monitoring with Puppet and Shinken. I wasn’t super familiar with the Shinken monitoring framework, so this talk was an interesting and very applicable demonstration of the benefits.

The last talk I went to before the closing keynotes was from my Computer Reach friends Dave Sevick and Tyler Lamb. They presented their “Island Server” imaging server that’s now being used to image all of the machines that they re-purpose and deploy around the world. With this new imaging server they’re able to image both Mac and Linux PCs from one Macbook Pro rather than having a different imaging server for each. They were also able to do a live demo of a Mac and Linux PC being imaged from the same Island Server at once.


Tyler and Dave with the Island Server in action

The event concluded with a closing keynote by a father and daughter duo, Joe and Lily Born, on The Democratization of Invention. Joe Born first found fame in the 90s when he invented the SkipDoctor CD repair device, and is now the CEO of Aiwa which produces highly rated Bluetooth speakers. Lily Born invented the tip-proof Kangaroo Cup. The pair reflected on their work and how the path from idea to product in the hands of customers has changed in the past twenty years. While the path to selling SkipDoctor had a very high barrier to entry, globalization, crowd-funding, 3D printers and internet-driven word of mouth and greater access to the press all played a part in the success of Lily’s Kangaroo cup and the new Aiwa Bluetooth speakers. While I have no plans to invent anything any time soon (so much else to do!) it was inspiring to hear how the barriers have been lowered and inventors today have a lot more options. Also, I just bought an Aiwa Exos-9 Bluetooth Speaker; it’s pretty sweet.

My conference adventures concluded with a dinner with my friends José, Nathan and David, all three of whom I also spent time with at FOSSCON in Philadelphia the month before. It was fun getting together again, and we wandered around downtown Columbus until we found a nice little pizzeria. Good times.

More photos from the Ohio LinuxFest here: https://www.flickr.com/photos/pleia2/albums/72157674988712556

by pleia2 at November 30, 2016 06:29 PM

November 29, 2016

Jono Bacon

Luma Giveaway Winner – Garrett Nay

A little while back I kicked off a competition to give away a Luma Wifi Set.

The challenge? Share a great community that you feel does wonderful work. The most interesting one, according to yours truly, gets the prize.

Well, I am delighted to share that Garrett Nay bags the prize for sharing the following in his comment:

I don’t know if this counts, since I don’t live in Seattle and can’t be a part of this community, but I’m in a new group in Salt Lake City that’s modeled after it. The group is Story Games Seattle: http://www.meetup.com/Story-Games-Seattle/. They get together on a weekly+ basis to play story games, which are like role-playing games but have a stronger emphasis on giving everyone at the table the power to shape the story (this short video gives a good introduction to what story games are all about, featuring members of the group:

Story Games from Candace Fields on Vimeo.

Story games seem to scratch a creative itch that I have, but it’s usually tough to find friends who are willing to play them, so a group dedicated to them is amazing to me.

Getting started in RPGs and story games is intimidating, but this group is very welcoming to newcomers. The front page says that no experience with role-playing is required, and they insist in their FAQ that you’ll be surprised at what you’ll be able to create with these games even if you’ve never done it before. We’ve tried to take this same approach with our local group.

In addition to playing published games, they also regularly playtest games being developed by members of the group or others. As far as productivity goes, some of the best known story games have come from members of this and sister groups. A few examples I’m aware of are Microscope, Kingdom, Follow, Downfall, and Eden. I’ve personally played Microscope and can say that it is well designed and very polished, definitely a product of years of playtesting.

They’re also productive and engaging in that they keep a record on the forums of all the games they play each week, sometimes including descriptions of the stories they created and how the games went. I find this very useful because I’m always on the lookout for new story games to try out. I kind of wish I lived in Seattle and could join the story games community, but hopefully we can get our fledgling group in Salt Lake up to the standard they have set.

What struck me about this example was that it gets to the heart of what community should be and often is – providing a welcoming, supportive environment for people with like-minded ideas and interests.

While much of my work focuses on the complexities of building collaborative communities with the intricacies of how people work together, we should always remember the huge value of what I refer to as read communities where people simply get together to have fun with each other. Garrett’s example was a perfect summary of a group doing great work here.

Thanks everyone for your suggestions, congratulations to Garrett for winning the prize, and thanks to Luma for providing the prize. Garrett, your Luma will be in the mail soon!

The post Luma Giveaway Winner – Garrett Nay appeared first on Jono Bacon.

by Jono Bacon at November 29, 2016 12:08 AM