Planet Ubuntu California

January 14, 2017

Akkana Peck

Plotting election (and other county-level) data with Python Basemap

After my arduous search for open 2016 election data by county, as a first test I wanted one of those red-blue-purple charts of how Democratic or Republican each county's vote was.

I used the Basemap package for plotting. It used to be part of matplotlib, but it's been split off into its own toolkit, grouped under mpl_toolkits: on Debian, it's available as python-mpltoolkits.basemap, or you can find Basemap on GitHub.

It's easiest to start with the fillstates.py example that shows how to draw a US map with different states colored differently. You'll need the three shapefiles (because of ESRI's silly shapefile format): st99_d00.dbf, st99_d00.shp and st99_d00.shx, available in the same examples directory.

Of course, to plot counties, you need county shapefiles as well. The US Census has county shapefiles at several different resolutions (I used the 500k version). Then you can plot state and county outlines like this:

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

def draw_us_map():
    # Set the lower left and upper right limits of the bounding box:
    lllon = -119
    urlon = -64
    lllat = 22.0
    urlat = 50.5
    # and calculate a centerpoint, needed for the projection:
    centerlon = float(lllon + urlon) / 2.0
    centerlat = float(lllat + urlat) / 2.0

    m = Basemap(resolution='i',  # crude, low, intermediate, high, full
                llcrnrlon = lllon, urcrnrlon = urlon,
                lon_0 = centerlon,
                llcrnrlat = lllat, urcrnrlat = urlat,
                lat_0 = centerlat,
                projection='tmerc')

    # Read state boundaries.
    shp_info = m.readshapefile('st99_d00', 'states',
                               drawbounds=True, color='lightgrey')

    # Read county boundaries
    shp_info = m.readshapefile('cb_2015_us_county_500k',
                               'counties',
                               drawbounds=True)

if __name__ == "__main__":
    draw_us_map()
    plt.title('US Counties')
    # Get rid of some of the extraneous whitespace matplotlib loves to use.
    plt.tight_layout(pad=0, w_pad=0, h_pad=0)
    plt.show()
[Simple map of US county borders]

Accessing the state and county data after reading shapefiles

Great. Now that we've plotted all the states and counties, how do we get a list of them, so that when I read out "Santa Clara, CA" from the data I'm trying to plot, I know which map object to color?

After calling readshapefile('st99_d00', 'states'), m has two new members, both lists: m.states and m.states_info.

m.states_info[] is a list of dicts mirroring what was in the shapefile. For the Census state list, the useful keys are NAME, AREA, and PERIMETER. There's also STATE, which is an integer (not restricted to 1 through 50) but I'll get to that.

If you want the shape for, say, California, iterate through m.states_info[] looking for the one where m.states_info[i]["NAME"] == "California". Note the index i: the shape coordinates will be in m.states[i] (in basemap map coordinates, not latitude/longitude).
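
For example, here's a minimal sketch (using the Basemap instance m from draw_us_map() above) of collecting California's shapes:

    # A state can have several entries (because of islands), so collect them all.
    california_shapes = []
    for i, info in enumerate(m.states_info):
        if info["NAME"] == "California":
            california_shapes.append(m.states[i])   # a list of (x, y) map coordinates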

Correlating states and counties in Census shapefiles

County data is similar, with county names in m.counties_info[i]["NAME"]. Remember that STATE integer? Each county has a STATEFP, m.counties_info[i]["STATEFP"], that matches some state's m.states_info[j]["STATE"].

But doing that search every time would be slow. So right after calling readshapefile for the states, I make a table of states. Empirically, STATE in the state list goes up to 72. Why 72? Shrug.

    MAXSTATEFP = 73
    states = [None] * MAXSTATEFP
    for state in m.states_info:
        statefp = int(state["STATE"])
        # Many states have multiple entries in m.states (because of islands).
        # Only add it once.
        if not states[statefp]:
            states[statefp] = state["NAME"]

That'll make it easy to look up a county's state name quickly when we're looping through all the counties.

Calculating colors for each county

Time to figure out the colors from the Deleetdk election results CSV file. Reading lines from the CSV file into a dictionary is superficially easy enough:

    fp = open("tidy_data.csv")
    reader = csv.DictReader(fp)

    # Make a dictionary of all "county, state" and their colors.
    county_colors = {}
    for county in reader:
        # What color is this county?
        pop = float(county["votes"])
        blue = float(county["results.clintonh"])/pop
        red = float(county["Total.Population"])/pop
        county_colors["%s, %s" % (county["name"], county["State"])] \
            = (red, 0, blue)

But in practice, that wasn't good enough, because the county names in the Deleetdk data didn't always match the official Census county names.

Fuzzy matches

For instance, the CSV file had no results for Alaska or Puerto Rico, so I had to skip those. Non-ASCII characters were a problem: "Doña Ana" county in the census data was "Dona Ana" in the CSV. I had to strip off " County", " Borough" and similar terms: "St Louis" in the census data was "St. Louis County" in the CSV. Some names were capitalized differently, like PLYMOUTH vs. Plymouth, or Lac Qui Parle vs. Lac qui Parle. And some names were just different, like "Jeff Davis" vs. "Jefferson Davis".

To get around that I used SequenceMatcher to look for fuzzy matches when I couldn't find an exact match:

from difflib import SequenceMatcher

def fuzzy_find(s, slist):
    '''Try to find a fuzzy match for s in slist.
    '''
    best_ratio = -1
    best_match = None

    ls = s.lower()
    for ss in slist:
        r = SequenceMatcher(None, ls, ss.lower()).ratio()
        if r > best_ratio:
            best_ratio = r
            best_match = ss
    if best_ratio > .75:
        return best_match
    return None
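
For instance, a hypothetical spot check (the strings here just stand in for a census "county, state" string and one of the county_colors keys; in Python 2, the non-ASCII literal also needs a coding: utf-8 declaration at the top of the script):

    print(fuzzy_find("Doña Ana, New Mexico", ["Dona Ana, New Mexico"]))
    # -> "Dona Ana, New Mexico" (the match ratio is well above the 0.75 cutoff)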

Correlate the county names from the two datasets

It's finally time to loop through the counties in the map to color and plot them.

Remember STATE vs. STATEFP? It turns out there are a few counties in the census county shapefile with a STATEFP that doesn't match any STATE in the state shapefile. Mostly they're in the Virgin Islands and I don't have election data for them anyway, so I skipped them for now. I also skipped Puerto Rico and Alaska (no results in the election data) and counties that had no corresponding state: I'll omit that code here, but you can see it in the final script, linked at the end.

    for i, county in enumerate(m.counties_info):
        countyname = county["NAME"]
        try:
            statename = states[int(county["STATEFP"])]
        except IndexError:
            print countyname, "has out-of-index statefp of", county["STATEFP"]
            continue

        countystate = "%s, %s" % (countyname, statename)
        try:
            ccolor = county_colors[countystate]
        except KeyError:
            # No exact match; try for a fuzzy match
            fuzzyname = fuzzy_find(countystate, county_colors.keys())
            if fuzzyname:
                ccolor = county_colors[fuzzyname]
                county_colors[countystate] = ccolor
            else:
                print "No match for", countystate
                continue

        countyseg = m.counties[i]
        poly = Polygon(countyseg, facecolor=ccolor)  # edgecolor="white"
        ax.add_patch(poly)
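
This loop assumes a couple of names set up earlier in the full script; roughly something like the following (a sketch, not the exact code from the final script):

    from matplotlib.patches import Polygon
    import matplotlib.pyplot as plt

    ax = plt.gca()    # the axes that the county Polygon patches are added to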

Moving Hawaii

Finally, although the CSV didn't have results for Alaska, it did have Hawaii. To display it, you can move it when creating the patches:

    countyseg = m.counties[i]
    if statename == 'Hawaii':
        countyseg = list(map(lambda (x,y): (x + 5750000, y-1400000), countyseg))
    poly = Polygon(countyseg, facecolor=ccolor)
    ax.add_patch(poly)
The offsets are in map coordinates and are empirical; I fiddled with them until Hawaii showed up at a reasonable place. [Blue-red-purple 2016 election map]

Well, that was a much longer article than I intended. Turns out it takes a fair amount of code to correlate several datasets and turn them into a map. But a lot of the work will be applicable to other datasets.

Full script on GitHub: Blue-red map using Census county shapefile

January 14, 2017 10:10 PM

Elizabeth Krumbach

Holidays in Philadelphia

In December MJ and I spent a couple weeks on the east coast in the new townhouse. It was the first long stay we’ve had there together, and though the holidays limited how much we could get done, particularly when it came to contractors, we did have a whole bunch to do.

First, I continued my quest to go through boxes of things that almost exclusively belonged to MJ’s grandparents. Unpacking, cataloging and deciding what pieces stay in Pennsylvania and what we’re sending to California. In the course of this I also had a deadline creeping up on me as I needed to find the menorah before Hanukkah began on the evening of December 24th. The timing of Hanukkah landing right along Christmas and New Years worked out well for us, MJ had some time off and it made the timing of the visit even more of a no-brainer. Plus, we were able to celebrate the entire eight night holiday there in Philadelphia rather than breaking it up between there and San Francisco.

The most amusing thing about finding the menorah was that it’s nearly identical to the one we have at home. MJ had mentioned that it was similar when I picked it out, but I had no idea that it was almost identical. Nothing wrong with the familiar, it’s a beautiful menorah.

House-wise MJ got the garage door opener installed and shelves put up in the powder room. With the help of his friend Tim, he also got the coffee table put together and the television mounted over the fireplace on New Years Eve. The TV was up in time to watch some of the NYE midnight broadcasts! We got the mail handling, trash schedule and cleaning sorted out with relatives who will be helping us with that, so the house will be well looked after in our absence.

I put together the vacuum and used it for the first time as I did the first thorough tidying of the house since we’d moved everything in from storage. I got my desk put together in the den, even though it’s still surrounded by boxes and will be until we ship stuff out to California. I was able to finally unpack some things we had actually ordered the last time I was in town but never got to put around the house, like a bunch of trash cans for various rooms and some kitchen goodies from ThinkGeek (Death Star waffle maker! R2-D2 measuring cups!). We also ordered a pair of counter-height chairs for the kitchen and they arrived in time for me to put them together just before we left, so the kitchen is also coming together even though we still need to go shopping for pots and pans.

Family-wise, we did a lot of visiting. On Christmas Eve we went to the nearby Samarkand restaurant, featuring authentic Uzbeki food. It was wonderful. We also did various lunches and dinners. A couple days were also spent going down to the city to visit a relative who is recovering in the hospital.

I didn’t see everyone I wanted to see but we did also get to visit with various friends. I saw my beloved Rogue One: A Star Wars Story a second time and met up with Danita to see Moana, which was great. I’ve now listened to the Moana soundtrack more than a few times. We met up with Crissi and her boyfriend Henry at Grand Lux Cafe in King of Prussia, where we also had a few errands to run and I was able to pick up some mittens at L.L. Bean. New Years Eve was spent with our friends Tim and Colleen, where we ordered pizza and hung aforementioned television. They also brought along some sweet bubbly for us to enjoy at midnight.

We also had lots of our favorite foods! We celebrated together at MJ’s favorite French cuisine inspired Chinese restaurant in Chestnut Hill, CinCin. We visited some of our standard favorites, including The Continental and Mad Mex. Exploring around our new neighborhood, we indulged in some east coast Chinese, made it to a Jewish deli where I got a delicious hoagie, found a sushi place that has an excellent roll list. We also went to Chickie’s and Pete’s crab house a couple of times, which, while being a Philadelphia establishment, I’d never actually been to. We also had a dinner at The Melting Pot, where I was able to try some local beers along with our fondue, and I’m delighted to see how much the microbrewery scene has grown since I moved away. We also hit a few diners during our stay, and enjoyed some eggnog from Wawa, which is some of the best eggnog ever made.

Unfortunately it wasn’t all fun. I’ve been battling a nasty bout of bronchitis for the past couple months. This continued ailment led to a visit to urgent care to get it looked at, and an x-ray to confirm I didn’t have a pneumonia. A pile of medication later, my bronchitis lingered and later in the week I spontaneously developed hives on my neck, which confounded the doctor. In the midst of health woes, I also managed to cut my foot on some broken glass while I was unpacking. It bled a lot, and I was a bit hobbled for a couple days while it healed. Thankfully MJ cleaned it out thoroughly (ouch!) once the bleeding had subsided and it has healed up nicely.

As the trip wound down I found myself missing the cats and eager to get home where I’d begin my new job. Still, it was with a heavy heart that we left our beautiful new vacation home, family and friends on the east coast.

by pleia2 at January 14, 2017 07:32 AM

January 12, 2017

Akkana Peck

Getting Election Data, and Why Open Data is Important

Back in 2012, I got interested in fiddling around with election data as a way to learn about data analysis in Python. So I went searching for results data on the presidential election. And got a surprise: it wasn't available anywhere in the US. After many hours of searching, the only source I ever found was at the UK newspaper, The Guardian.

Surely in 2016, we're better off, right? But when I went looking, I found otherwise. There's still no official source for US election results data; there isn't even a source as reliable as The Guardian this time.

You might think Data.gov would be the place to go for official election results, but no: searching for 2016 election on Data.gov yields nothing remotely useful.

The Federal Election Commission has an election results page, but it only goes up to 2014 and only includes the Senate and House, not presidential elections. Archives.gov has popular vote totals for the 2012 election but not the current one. Maybe in four years, they'll have some data.

After striking out on official US government sites, I searched the web. I found a few sources, none of them even remotely official.

Early on I found Simon Rogers, How to Download County-Level Results Data, which leads to GitHub user tonmcg's County Level Election Results 12-16. It's a comparison of Democratic vs. Republican votes in the 2012 and 2016 elections (I assume that means votes for that party's presidential candidate, though the field names don't make that entirely clear), with no information on third-party candidates.

KidPixo's Presidential Election USA 2016 on GitHub is a little better: the fields make it clear that it's recording votes for Trump and Clinton, but still no third party information. It's also scraped from the New York Times, and it includes the scraping code so you can check it and have some confidence in the source of the data.

Kaggle claims to have election data, but you can't download their datasets or even see what they have without signing up for an account. Ben Hamner has some publicly available Kaggle data on GitHub, but only for the primary. I also found several companies selling election data, and several universities that had datasets available for researchers with accounts at that university.

The most complete dataset I found, and the only open one that included third party candidates, was through OpenDataSoft. Like the other two, this data is scraped from the NYT. It has data for all the minor party candidates as well as the majors, plus lots of demographic data for each county in the lower 48, plus Hawaii, but not the territories, and the election data for all the Alaska counties is missing.

You can get it from a GitHub repo, Deleetdk's USA.county.data (look in inst/ext/tidy_data.csv). If you want a larger version with geographic shape data included, clicking through several other OpenDataSoft pages eventually gets you to an export page, USA 2016 Presidential Election by county, where you can download CSV, JSON, GeoJSON and other formats.

The OpenDataSoft data file was pretty useful, though it had gaps (for instance, there's no data for Alaska). I was able to make my own red-blue-purple plot of county voting results (I'll write separately about how to do that with python-basemap), and to play around with statistics.

Implications of the lack of open data

But the point my search really brought home: By the time I finally found a workable dataset, I was so sick of the search, and so relieved to find anything at all, that I'd stopped being picky about where the data came from. I had long since given up on finding anything from a known source, like a government site or even a newspaper, and was just looking for data, any data.

And that's not good. It means that a lot of the people doing statistics on elections are using data from unverified sources, probably copied from someone else who claimed to have scraped it, using unknown code, from some post-election web page that likely no longer exists. Is it accurate? There's no way of knowing.

What if someone wanted to spread fake news and misinformation? There's a hunger for data, particularly on something as important as a US Presidential election. Looking at Google's suggested results and "Searches related to" made it clear that it wasn't just me: there are a lot of people searching for this information and not being able to find it through official sources.

If I were a foreign power wanting to spread disinformation, providing easily available data files -- to fill the gap left by the US Government's refusal to do so -- would be a great way to mislead people. I could put anything I wanted in those files: there's no way of checking them against official results since there are no official results. Just make sure the totals add up to what people expect to see. You could easily set up an official-looking site and put made-up data there, and it would look a lot more real than all the people scraping from the NYT.

If our government -- or newspapers, or anyone else -- really wanted to combat "fake news", they should take open data seriously. They should make datasets for important issues like the presidential election publicly available, as soon as possible after the election -- not four years later when nobody but historians care any more. Without that, we're leaving ourselves open to fake news and fake data.

January 12, 2017 11:41 PM

January 09, 2017

Akkana Peck

Snowy Winter Days, and an Elk Visit

[Snowy view of the Rio Grande from Overlook]

The snowy days here have been so pretty, the snow contrasting with the darkness of the piñons and junipers and the black basalt. The light fluffy crystals sparkle in a rainbow of colors when they catch the sunlight at the right angle, but I've been unable to catch that effect in a photo.

We've had some unusual holiday visitors, too, culminating in this morning's visit from a huge bull elk.

[bull elk in the yard] Dave came down to make coffee and saw the elk in the garden right next to the window. But by the time I saw him, he was farther out in the yard. And my DSLR batteries were dead, so I grabbed the point-and-shoot and got what I could through the window.

Fortunately for my photography the elk wasn't going anywhere in any hurry. He has an injured leg, and was limping badly. He slowly made his way down the hill and into the neighbors' yard. I hope he returns. Even with a limp that bad, an elk that size has no predators in White Rock, so as long as he stays off the nearby San Ildefonso reservation (where hunting is allowed) and manages to find enough food, he should be all right. I'm tempted to buy some hay to leave out for him.

[Sunset light on the Sangre de Cristos] Some of the sunsets have been pretty nice, too.

A few more photos.

January 09, 2017 02:48 AM

January 08, 2017

Akkana Peck

Using virtualenv to replace the broken pip install --user

Python's installation tool, pip, has some problems on Debian.

The obvious way to use pip is as root: sudo pip install packagename. If you hang out in Python groups at all, you'll quickly find that this is strongly frowned upon. It can lead to your pip-installed packages intermingling with the ones installed by Debian's apt-get, possibly causing problems during apt system updates.

The second most obvious way, as you'll see if you read pip's man page, is pip install --user packagename. This installs the package with only user permissions, not root, under a directory called ~/.local. Python automatically checks ~/.local as part of its module search path, and you can add ~/.local/bin to your PATH, so this makes everything transparent.

Or so I thought until recently, when I discovered that pip install --user ignores system-installed packages when it's calculating its dependencies, so you could end up with a bunch of incompatible versions of packages installed. Plus it takes forever to re-download and re-install dependencies you already had.

Pip has a clear page describing how pip --user is supposed to work, and that isn't what it's doing. So I filed pip bug 4222; but since pip has 687 open bugs filed against it, I'm not terrifically hopeful of that getting fixed any time soon. So I needed a workaround.

Use virtualenv instead of --user

Fortunately, it turned out that pip install works correctly in a virtualenv if you include the --system-site-packages option. I had thought virtualenvs were for testing, but quite a few people on #python said they used virtualenvs all the time, as part of their normal runtime environments. (Maybe due to pip's deficiencies?) I had heard people speak deprecatingly of --user in favor of virtualenvs but was never clear why; maybe this is why.

So, what I needed was to set up a virtualenv that I can keep around all the time and use by default every time I log in. I called it ~/.pythonenv when I created it:

virtualenv --system-site-packages $HOME/.pythonenv

Normally, the next thing you do after creating a virtualenv is to source a script called bin/activate inside the venv. That sets up your PATH, PYTHONPATH and a bunch of other variables so the venv will be used in all the right ways. But activate also changes your prompt, which I didn't want in my normal runtime environment. So I stuck this in my .zlogin file:

VIRTUAL_ENV_DISABLE_PROMPT=1 source $HOME/.pythonenv/bin/activate

Now I'll activate the venv once, when I log in (and once in every xterm window, since I set XTerm*loginShell: true in my .Xdefaults). I see my normal prompt, I can use the normal Debian-installed Python packages, and I can install additional PyPI packages with pip install packagename (no --user, no sudo).
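
A quick sanity check that the venv really is what's active (a sketch; the exact paths will differ on your system):

import sys
print(sys.executable)   # should be ~/.pythonenv/bin/python when the venv is active
print(sys.prefix)       # should point at ~/.pythonenv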

January 08, 2017 06:37 PM

January 05, 2017

Elizabeth Krumbach

The Girard Avenue Line

While I was in Philadelphia over the holidays a friend clued me into the fact that one of the historic streetcars (trolleys) on the Girard Avenue Line was decorated for the holidays. This line, SEPTA Route 15, is the last historic trolley line in Philadelphia and I had never ridden it before. This was the perfect opportunity!

I decided that I’d make the whole day about trains, so that morning I hopped on the SEPTA West Trenton Line regional rail, which has a stop near our place north of Philadelphia. After a cheesesteak lunch near Jefferson Station, it was on to the Market-Frankford Line subway/elevated train to get up to Girard Station.

My goal for the afternoon was to see and take pictures of the holiday car, number 2336. So, with the friend I dragged along on this crazy adventure, we started waiting. The first couple trolleys weren’t decorated, so we hopped on another to get out of the chilly weather for a bit. Got off that trolley and waited for a few more, in both directions. This was repeated a couple times until we finally got a glimpse of the decorated trolley heading back to Girard Station. Now on our radar, we hopped on the next one and followed that trolley!


The non-decorated, but still lovely, 2335

We caught up with the decorated trolley after the turnaround at the end of the line and got on just after Girard Station. From there we took it all the way to the end of the line in west Philadelphia at 63rd St. There we had to disembark, and I took a few pictures of the outside.

We were able to get on again after the driver took a break, which allowed us to take it all the way back.

The car was decorated inside and out, with lights, garland and signs.

At the end the driver asked if we’d just been on it to take a ride. Yep! I came just to see this specific trolley! Since it was getting dark anyway, he was kind enough to turn the outside lights on for me so I could get some pictures.

Since this was my first time riding this line, I was able to make some observations about how these cars differ from the PCCs that run in San Francisco. In the historic fleet of San Francisco streetcars, the 1055 has the same livery as the trolleys that run in Philadelphia today. Most of the PCCs in San Francisco’s fleet actually came from SEPTA in Philadelphia and this one is no exception, originally numbered 2122 while in service there. However, taking a peek inside it’s easy to see that it’s a bit different than the ones that run in Philadelphia today:


Inside the 1055 in San Francisco

The inside of this looks shiny compared to the inside of the one still running in Philadelphia. It’s all metal versus the plastic inside in Philadelphia, and the walls of the car are much thinner in San Francisco. I suspect this is all due to climate control requirements. In San Francisco we don’t really have seasons and the temperature stays pretty comfortable, so while there is a little climate control, it’s nothing compared to what the cars in Philadelphia need in the summer and winter. You can also see a difference from the outside: the entire top of the Philadelphia cars has a raised portion which seems to be climate control, but on the San Francisco cars it’s only a small bit at the center:


Outside the 1055 in San Francisco

Finally, the seats and wheelchair accessibility are different. The seats are all plastic in San Francisco, whereas they have fabric in Philadelphia. The raised platforms themselves and a portable metal platform serve as wheelchair access in San Francisco, whereas Philadelphia has an actual operating lift since there are many street-level stops.

To wrap up the trolley adventure, we hopped on a final one to get us to Broad Street where we took the Broad Street Line subway down to dinner at Sazon on Spring Garden Street, where we had a meal that concluded with some of the best hot chocolate I’ve ever had. Perfect to warm us up after spending all afternoon chasing trolleys in Philadelphia December weather.

Dinner finished, I took one last train, the regional rail to head back to the suburbs.

More photos from the trolleys on the Girard Avenue Line here: https://www.flickr.com/photos/pleia2/albums/72157676838141261

by pleia2 at January 05, 2017 08:47 AM

January 04, 2017

Akkana Peck

Firefox "Reader Mode" and NoScript

A couple of days ago I blogged about using Firefox's "Delete Node" to make web pages more readable. In a subsequent Twitter discussion someone pointed out that if the goal is to make a web page's content clearer, Firefox's relatively new "Reader Mode" might be a better way.

I knew about Reader Mode but hadn't used it. It only shows up on some pages, as a little "open book" icon at the right of the URL bar, just left of the Refresh/Stop button. It did show up on the Pogue Yahoo article; but when I clicked it, I just got a big blank page with an icon of a circle with a horizontal dash; no text.

It turns out that to see Reader Mode content with NoScript, you must explicitly allow JavaScript from about:reader.

There are some reasons it's not automatically whitelisted: see discussions in bug 1158071 and bug 1166455 -- so enable it at your own risk. But it's nice to be able to use Reader Mode, and I'm glad the Twitter discussion spurred me to figure out why it wasn't working.

January 04, 2017 06:37 PM

January 02, 2017

Akkana Peck

Firefox's "Delete Node" eliminates pesky content-hiding banners

It's trendy among web designers today -- the kind who care more about showing ads than about the people reading their pages -- to use fixed banner elements that hide part of the page. In other words, you have a header, some content, and maybe a footer; and when you scroll the content to get to the next page, the header and footer stay in place, meaning that you can only read the few lines sandwiched in between them. But at least you can see the name of the site no matter how far you scroll down in the article! Wouldn't want to forget the site name!

Worse, many of these sites don't scroll properly. If you Page Down, the content moves a full page up, which means that the top of the new page is now hidden under that fixed banner and you have to scroll back up a few lines to continue reading where you left off. David Pogue wrote about that problem recently and it got a lot of play when Slashdot picked it up: These 18 big websites fail the space-bar scrolling test.

It's a little too bad he concentrated on the spacebar. Certainly it's good to point out that hitting the spacebar scrolls down -- I was flabbergasted to read the Slashdot discussion and discover that lots of people didn't already know that, since it's been my most common way of paging since browsers were invented. (Shift-space does a Page Up.) But the Slashdot discussion then veered off into a chorus of "I've never used the spacebar to scroll so why should anyone else care?", when the issue has nothing to do with the spacebar: the issue is that Page Down doesn't work right, whichever key you use to trigger that page down.

But never mind that. Fixed headers that don't scroll are bad even if the content scrolls the right amount, because it wastes precious vertical screen space on useless cruft you don't need. And I'm here to tell you that you can get rid of those annoying fixed headers, at least in Firefox.

[Article with intrusive Yahoo headers]

Let's take Pogue's article itself, since Yahoo is a perfect example of annoying content that covers the page and doesn't go away. First there's that enormous header -- the bottom row of menus ("Tech Home" and so forth) disappear once you scroll, but the rest stay there forever. Worse, there's that annoying popup on the bottom right ("Privacy | Terms" etc.) which blocks content, and although Yahoo! scrolls the right amount to account for the header, it doesn't account for that privacy bar, which continues to block most of the last line of every page.

The first step is to call up the DOM Inspector. Right-click on the thing you want to get rid of and choose Inspect Element:

[Right-click menu with Inspect Element]


That brings up the DOM Inspector window, which looks like this (click on the image for a full-sized view):

[DOM Inspector]

The upper left area shows the hierarchical structure of the web page.

Don't Panic! You don't have to know HTML or understand any of this for this technique to work.

Hover your mouse over the items in the hierarchy. Notice that as you hover, different parts of the web page are highlighted in translucent blue.

Generally, whatever element you started on will be a small part of the header you're trying to eliminate. Move up one line, to the element's parent; you may see that a bigger part of the header is highlighted. Move up again, and keep moving up, one line at a time, until the whole header is highlighted, as in the screenshot. There's also a dark grey window telling you something about the HTML, if you're interested; if you're not, don't worry about it.

Eventually you'll move up too far, and some other part of the page, or the whole page, will be highlighted. You need to find the element that makes the whole header blue, but nothing else.

Once you've found that element, right-click on it to get a context menu, and look for Delete Node (near the bottom of the menu). Clicking on that will delete the header from the page.

Repeat for any other part of the page you want to remove, like that annoying bar at the bottom right. And you're left with a nice, readable page, which will scroll properly and let you read every line, and will show you more text per page so you don't have to scroll as often.

[The same article, after deleting the intrusive Yahoo headers]

It's a useful trick. You can also use Inspect/Delete Node for many of those popups that cover the screen telling you "subscribe to our content!" It's especially handy if you like to browse with NoScript, so you can't dismiss those popups by clicking on an X. So happy reading!

Addendum on Spacebars

By the way, in case you weren't aware that the spacebar did a page down, here's another tip that might come in useful: the spacebar also advances to the next slide in just about every presentation program, from PowerPoint to Libre Office to most PDF viewers. I'm amazed at how often I've seen presenters squinting with a flashlight at the keyboard trying to find the right-arrow or down-arrow or page-down or whatever key they're looking for. These are all ways of advancing to the next slide, but they're all much harder to find than that great big spacebar at the bottom of the keyboard.

January 02, 2017 11:23 PM

Elizabeth Krumbach

The adventures of 2016

2016 was filled with professional successes and exciting adventures, but also various personal struggles. I exhausted myself finishing two books, navigated some complicated parts of my marriage, experienced my whole team getting laid off from a job we loved, handled an uptick in migraines and a continuing bout of bronchitis, and am still coming to terms with the recent loss.

It’s been difficult to maintain perspective, but it actually was an incredible year. I succeeded in having two books come out, my travels took me to some new, amazing places, we bought a vacation house, and all my blood work shows that I’m healthier than I was at this time last year.


Lots more running in 2016 led to a healthier me!

Some of the tough stuff has even been good. I have succeeded in strengthening bonds with my husband and several people in my life who I care about. I’ve worked hard to worry less and enjoy time with friends and family, which may explain why this year ended up being the year of the group selfie. I paused to capture happy moments with my loved ones a lot more often.

So without further ado, the more quantitative year roundup!

The 9th edition of The Official Ubuntu Book came out in July. This is the second edition I’ve been part of preparing. The book has updates to bring us up to the 16.04 release and features a whole new chapter covering “Ubuntu, Convergence, and Devices of the Future” which I was really thrilled about adding. My work with Matthew Helmke and José Antonio Rey was also very enjoyable. I wrote about the release here.

I also finished the first book I was the lead author on, Common OpenStack Deployments. Writing a book takes a considerable amount of time and effort; I spent many long nights and weekends testing and tweaking configurations largely written by my contributing author, Matt Fischer, writing copy for the book and integrating feedback from our excellent fleet of reviewers and other contributors. In the end, we released a book that takes the reader from knowing nothing about OpenStack to doing sample deployments using the same Puppet-driven tooling that enterprises use in their environments. The book came out in September; I wrote about it on my own blog here and maintain a blog about the book at DeploymentsBook.com.


Book adventures at the Ocata OpenStack Summit in Barcelona! Thanks to Nithya Ruff for taking a picture of me with my book at the Women of OpenStack area of the expo hall (source) and Brent Haley for getting the picture of Lisa-Marie and I (source).

This year also brought a new investment to our lives: we bought a vacation home in Pennsylvania! It’s a new construction townhouse, so we spent a fair amount of time on the east coast the second half of this year searching for a place, picking out the details and closing. We then spent the winter holidays here, spending a full two weeks away from home to really settle in. I wrote more about our new place here.

I keep saying I won’t travel as much, but 2016 turned out to have more travel than ever, taking over 100,000 miles of flights again.


Feeding a kangaroo, just outside of Melbourne, Australia

At the Jain Temple in Mumbai, India

We had lots of beers in Germany! Photo in the center by Chris Hoge (source)

Barcelona is now one of my favorite places, and its Sagrada Familia Basilica was breathtaking

Most of these conferences and events had a speaking component for me, but I also did a fair number of local talks and at some conferences I spoke more than once. The following is a rundown of all these talks I did in 2016, along with slides.


Photo by Masayuki Igawa (source) from Linux Conf AU in Geelong

Photo by Johanna Koester (source) from my keynote at the Ocata OpenStack Summit

MJ and I have also continued to enjoy our beloved home city of San Francisco, both with just the two of us and with various friends and family. We saw a couple Giants baseball games, along with one of the Sharks playoff games! Sampled a variety of local drinks and foods, visited lots of local animals and took in some amazing local sights. We went to the San Francisco Symphony for the first time, enjoyed a wonderful time together over Labor Day weekend and I’ve skipped out at times to visit museum exhibits and the zoo.


Dinner at Luce in San Francisco, celebrating MJ’s new job

This year I also geeked out over trains – in four states and five countries! In May MJ and I traveled to Maine to spend some time with family, and a couple days of that trip were spent visiting the Seashore Trolley Museum in Kennebunkport and the Narrow Gauge Railroad Museum in Portland, I wrote about it here. I also enjoyed MUNI Heritage Weekend with my friend Mark at the end of September, where we got to see some of the special street cars and ride several vintage buses, read about that here. I also went up to New York City to finally visit the famous New York Transit Museum in Brooklyn and accompanying holiday exhibit at the Central Station with my friend David, details here. In Philadelphia I enjoyed the entire Girard Avenue line (Route 15), which is populated by historic PCC streetcars (trolleys), including one decorated for the holidays, I have a pile of pictures here. I also got a glimpse of a car on the historic streetcar/trolley line in Melbourne and my buddy Devdas convinced me to take a train in Mumbai, and I visited the amazing Chhatrapati Shivaji Terminus there too. MJ also helped me plan some train adventures in the Netherlands and Germany as I traveled from airports for events.


From the Seashore Trolley Museum barn

As I enter into 2017 I’m thrilled to report that I’ll be starting a new job. Travel continues as I have trips to Australia and Los Angeles already on my schedule. I’ll also be spending time getting settled back into my life on the west coast, as I have spent 75% of my time these past couple months elsewhere.

by pleia2 at January 02, 2017 03:19 PM

December 27, 2016

Elizabeth Krumbach

OpenStack Days Mountain West 2016

A couple weeks ago I attended my last conference of the year, OpenStack Days Mountain West. After much flight shuffling following a seriously delayed flight, I arrived late on the evening prior to the conference with plenty of time to get settled in and feel refreshed for the conference in the morning.

The event kicked off with a keynote from OpenStack Foundation COO Mark Collier who spoke on the growth and success of OpenStack. His talk strongly echoed topics he touched upon at the recent OpenStack Summit back in October as he cited several major companies who are successfully using OpenStack in massive, production deployments including Walmart, AT&T and China Mobile. In keeping with the “future” theme of the conference he also talked about organizations who are already pushing the future potential of OpenStack by betting on the technology for projects that will easily exceed the capacity of what OpenStack can handle today.

Also that morning, Lisa-Marie Namphy moderated a panel on the future of OpenStack with John Dickinson, K Rain Leander, Bruce Mathews and Robert Starmer. She dove right in with the tough questions by having panelists speculate as to why the three major cloud providers don’t run OpenStack. There was also discussion about who the actual users of OpenStack were (consensus was: infrastructure operators), which got into the question of whether app developers were OpenStack users today (perhaps not, app developers don’t want a full Linux environment, they want a place for their app to live). They also discussed the expansion of other languages beyond Python in the project.

That afternoon I saw a talk by Mike Wilson of Mirantis on “OpenStack in the post Moore’s Law World” where he reflected on the current status of Moore’s Law and how it relates to cloud technologies, and the projects that are part of OpenStack. He talked about how the major cloud players outside of OpenStack are helping drive innovation for their own platforms by working directly with chip manufacturers to create hardware specifically tuned to their needs. There’s a question of whether anyone in the OpenStack community is doing similar, and it seems that perhaps they should so that OpenStack can have a competitive edge.

My talk was next, speaking on “The OpenStack Project Continuous Integration System” where I gave a tour of our CI system and explained how we’ve been tracking project growth and steps we’ve taken with regard to scaling it to handle it going into the future. Slides from the talk are available here (PDF). At the end of my talk I gave away several copies of Common OpenStack Deployments which I also took the chance to sign. I’m delighted that one of the copies will be going to the San Diego OpenStack Meetup and another to one right there in Salt Lake City.

Later I attended Christopher Aedo’s “Transforming Organizations with OpenStack” where he walked the audience through hands on training his team did about the OpenStack project’s development process and tooling for IBM teams around the world. The lessons learned from working with these teams and getting them to love open processes once they could explain them in person was inspiring. Tassoula Kokkoris wrote a great summary of the talk here: Collaborative Culture Spotlight: OpenStack Days Mountain West. I rounded off the day by going to David Medberry’s “Private Cloud Cattle and Pet Wrangling” talk where he drew experience from the private cloud at Charter Communications to discuss the move from treating servers like pets to treating them like cattle and how that works in a large organization with departments that have varying needs.

The next day began with a talk by OpenStack veteran, and now VP of Solutions at SUSE, Joseph George. He gave a talk on the state of OpenStack, with a strong message about staying on the path we set forth, which he compared to his own personal transformation to lose a significant amount of weight. In this talk, he outlined three main points that we must keep in mind in order to succeed:

  1. Clarity on the Goal and the Motivation
  2. Staying Focused During the “Middle” of the Journey
  3. Constantly Learning and Adapting

He wrote a more extensive blog post about it here which fleshes out how each of these related to himself and how they map to OpenStack: OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation.

The next talk was a fun one from Lisa-Marie Namphy and Monty Taylor with the theme of being a naughty or nice list for the OpenStack community. They walked through various decisions, aspects of the project, and more to paint a picture of where the successes and pain points of the project are. They did a great job, managing to pull it off with humor, wit, and charm, all while also being actually informative. The morning concluded with a panel titled “OpenStack: Preferred Platform For PaaS Solutions” which had some interesting views. The panelists brought their expertise to the table to discuss what developers seeking to write to a platform wanted, and where OpenStack was weak and strong. It certainly seems to me that OpenStack is strongest as IaaS rather than PaaS, and it makes sense for OpenStack to continue focusing on being what they’ve called an “integration engine” to tie components together rather than focus on writing a PaaS solution directly. There was some talk about this on the panel, where some stressed that they did want to see OpenStack hooking into existing PaaS software offerings.


Great photo of Lisa and Monty by Gary Kevorkian, source

Lunch followed the morning talks, and I haven’t mentioned it, but the food at this event was quite good. In fact, I’d go as far as to say it was some of the best conference-supplied meals I’ve had. Nice job, folks!

Huge thanks to the OpenStack Days Mountain West crew for putting on the event. Lots of great talks and I enjoyed connecting with folks I knew, as well as meeting members of the community who haven’t managed to make it to one of the global events I’ve attended. It’s inspiring to meet with such passionate members of local groups like I found there.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157676117696131

by pleia2 at December 27, 2016 03:02 PM

December 25, 2016

Akkana Peck

Photographing Farolitos (and other night scenes)

Excellent Xmas to all! We're having a white Xmas here.

Dave and I have been discussing how "Merry Christmas" isn't alliterative like "Happy Holidays". We had trouble coming up with a good C or K adjective to go with Christmas, but then we hit on the answer: Have an Excellent Xmas! It also has the advantage of inclusivity: not everyone celebrates the birth of Christ, but Xmas is a secular holiday of lights, family and gifts, open to people of all belief systems.

Meanwhile: I spent a couple of nights recently learning how to photograph Xmas lights and farolitos.

Farolitos, a New Mexico Christmas tradition, are paper bags, weighted down with sand, with a candle inside. Sounds modest, but put a row of them alongside a roadway or along the top of a typical New Mexican adobe or faux-dobe and you have a beautiful display of lights.

They're also known as luminarias in southern New Mexico, but Northern New Mexicans insist that a luminaria is a bonfire, and the little paper bag lanterns should be called farolitos. They're pretty, whatever you call them.

Locally, residents of several streets in Los Alamos and White Rock set out farolitos along their roadsides for a few nights around Christmas, and the county cooperates by turning off streetlights on those streets. The display on Los Pueblos in Los Alamos is a zoo, a slow exhaust-choked parade of cars that reminds me of the Griffith Park light show in LA. But here in White Rock the farolito displays are a lot less crowded, and this year I wanted to try photographing them.

Canon bugs affecting night photography

I have a little past experience with night photography. I went through a brief astrophotography phase in my teens (in the pre-digital days, so I was using film and occasionally glass plates). But I haven't done much night photography for years.

That's partly because I've had problems taking night shots with my current digital SLR camera, a Rebel XSi (known outside the US as a Canon 450D). It's old and modest as DSLRs go, but I've resisted upgrading since I don't really need more features.

Except maybe when it comes to night photography. I've tried shooting star trails, lightning shots and other nocturnal time exposures, and keep hitting a snag: the camera refuses to take a photo. I'll be in Manual mode, with my aperture and shutter speed set, with the lens in Manual Focus mode with Image Stabilization turned off. Plug in the remote shutter release, push the button ... and nothing happens except a lot of motorized lens whirring noises. Which shouldn't be happening -- in MF and non-IS mode the lens should be just sitting there inert, not whirring its motors. I couldn't seem to find a way to convince it that the MF switch meant that, yes, I wanted to focus manually.

It seemed to be primarily a problem with the EF-S 18-55mm kit lens; the camera will usually condescend to take a night photo with my other two lenses. I wondered if the MF switch might be broken, but then I noticed that in some modes the camera explicitly told me I was in manual focus mode.

I was almost to the point of ordering another lens just for night shots when I finally hit upon the right search terms and found, if not the reason it's happening, at least an excellent workaround.

Back Button Focus

I'm so sad that I went so many years without knowing about Back Button Focus. It's well hidden in the menus, under Custom Functions #10.

Normally, the shutter button does a bunch of things. When you press it halfway, the camera both autofocuses (sadly, even in manual focus mode) and calculates exposure settings.

But there's a custom function that lets you separate the focus and exposure calculations. In the Custom Functions menu option #10 (the number and exact text will be different on different Canon models, but apparently most or all Canon DSLRs have this somewhere), the heading says: Shutter/AE Lock Button. Following that is a list of four obscure-looking options:

  • AF/AE lock
  • AE lock/AF
  • AF/AF lock, no AE lock
  • AE/AF, no AE lock

The text before the slash indicates what the shutter button, pressed halfway, will do in that mode; the text after the slash is what happens when you press the * or AE lock button on the upper right of the camera back (the same button you use to zoom out when reviewing pictures on the LCD screen).

The first option is the default: press the shutter button halfway to activate autofocus; the AE lock button calculates and locks exposure settings.

The second option is the revelation: pressing the shutter button halfway will calculate exposure settings, but does nothing for focus. To focus, press the * or AE button, after which focus will be locked. Pressing the shutter button won't refocus. This mode is called "Back button focus" all over the web, but not in the manual.

Back button focus is useful in all sorts of cases. For instance, if you want to autofocus once then keep the same focus for subsequent shots, it gives you a way of doing that. It also solves my night focus problem: even with the bug (whether it's in the lens or the camera) that the lens tries to autofocus even in manual focus mode, in this mode, pressing the shutter won't trigger that. The camera assumes it's in focus and goes ahead and takes the picture.

Incidentally, the other two modes in that menu apply to AI SERVO mode when you're letting the focus change constantly as it follows a moving subject. The third mode makes the * button lock focus and stop adjusting it; the fourth lets you toggle focus-adjusting on and off.

Live View Focusing

There's one other thing that's crucial for night shots: live view focusing. Since you can't use autofocus in low light, you have to do the focusing yourself. But most DSLRs' focusing screens aren't good enough that you can look through the viewfinder and get a reliable focus on a star or even a string of holiday lights or farolitos.

Instead, press the SET button (the one in the middle of the right/left/up/down buttons) to activate Live View (you may have to enable it in the menus first). The mirror locks up and a preview of what the camera is seeing appears on the LCD. Use the zoom button (the one to the right of that */AE lock button) to zoom in; there are two levels of zoom in addition to the un-zoomed view. You can use the right/left/up/down buttons to control which part of the field the zoomed view will show. Zoom all the way in (two clicks of the + button) to fine-tune your manual focus. Press SET again to exit live view.

It's not as good as a fine-grained focusing screen, but at least it gets you close. Consider using relatively small apertures, like f/8, since that will give you more latitude for focus errors. You'll be doing time exposures on a tripod anyway, so a narrow aperture just means your exposures have to be a little longer than they otherwise would have been.

After all that, my Xmas Eve farolitos photos turned out mediocre. We had a storm blowing in, so a lot of the candles had blown out. (In the photo below you can see how the light string on the left is blurred, because the tree was blowing around so much during the 30-second exposure.) But I had fun, and maybe I'll go out and try again tonight.


An excellent X-mas to you all!

December 25, 2016 07:30 PM

Elizabeth Krumbach

The Temples and Dinosaurs of SLC

A few weeks ago I was in Salt Lake City for my last conference of the year. I was only there for a couple days, but I had some flexibility in my schedule. I was able to see most of the conference and still make time to sneak out to see some sights before my flight home at the conclusion of the conference.

The conference was located right near Temple Square. In spite of a couple flurries here and there, and the accompanying cold, I made time to visit during lunch on the first day of the conference. This square is where the most famous temple of The Church of Jesus Christ of Latter-day Saints resides, the Salt Lake Temple. Since I’d never been to Salt Lake City before, this landmark was the most obvious one to visit, and they had decorated it for Christmas.

While I don’t share their faith, it was worthy of my time. The temple is beautiful, everyone I met was welcoming and friendly, and there is important historical significance to the story of that church.

The really enjoyable time was that evening though. After some time at The Beer Hive I went for a walk with a couple colleagues through the square again, but this time all lit up with the Christmas lights! The lights were everywhere and spectacular.

And I’m sure regardless of the season, the temple itself at night is a sight to behold.

More photos from Temple Square here: https://www.flickr.com/photos/pleia2/albums/72157677633463925

The conference continued the next day and I departed in the afternoon to visit the Natural History Museum of Utah. Utah is a big deal when it comes to fossil hunting in the US, so I was eager to visit their dinosaur fossil exhibit. In addition to a variety of crafted scenes, it also features the “world’s largest display of horned dinosaur skulls” (source).

Unfortunately upon arrival I learned that the museum was without power. They were waving people in, but explained that there was only emergency lighting and some of the sections of the museum were completely closed. I sadly missed out on their very cool looking exhibit on poisons, and it was tricky seeing some of the areas that were open with so little light.

But the dinosaurs.

Have you ever seen dinosaur fossils under just emergency lighting? They were considerably more impactful and scary this way. Big fan.

I really enjoyed some of the shadows cast by their horned dinosaur skulls.

More photos from the museum here: https://www.flickr.com/photos/pleia2/sets/72157673744906273/

There should totally be an event where the fossils are showcased in this way in a planned manner. Alas, since this was unplanned, the staff decided in the late afternoon to close the museum early. This sent me on my way much earlier than I’d hoped. Still, I was glad I got to spend some time with the dinosaurs and hadn’t wasted much time elsewhere in the museum. If I’m ever in Salt Lake City again I would like to go back, though; it was tricky to read the signs in such low light, and I would like to have the experience as it was intended. Besides, I’ll rarely pass up the opportunity to see a good dinosaur exhibit. I haven’t been to the Salt Lake City Zoo yet; if it had been warmer I might have considered it – next time!

With that, my trip to Salt Lake City pretty much concluded. I made my way to the airport to head home that evening. This trip rounded almost a full month of being away from home, so I was particularly eager to get home and spend some time with MJ and the kitties.

by pleia2 at December 25, 2016 04:32 PM

December 22, 2016

Akkana Peck

Tips on Developing Python Projects for PyPI

I wrote two recent articles on Python packaging: Distributing Python Packages Part I: Creating a Python Package and Distributing Python Packages Part II: Submitting to PyPI. I was able to get a couple of my programs packaged and submitted.

Ongoing Development and Testing

But then I realized all was not quite right. I could install new releases of my package -- but I couldn't run it from the source directory any more. How could I test changes without needing to rebuild the package for every little change I made?

Fortunately, it turned out to be fairly easy. Set PYTHONPATH to a directory that includes all the modules you normally want to test. For example, inside my bin directory I have a python directory where I can symlink any development modules I might need:

mkdir ~/bin/python
ln -s ~/src/metapho/metapho ~/bin/python/

Then add the directory at the beginning of PYTHONPATH:

export PYTHONPATH=$HOME/bin/python

With that, I could test from the development directory again, without needing to rebuild and install a package every time.
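As a quick sanity check (using the metapho symlink above as the example), you can ask Python which copy of the module it's importing:

python -c 'import metapho; print(metapho.__file__)'

If that prints a path under ~/bin/python, you're testing the development copy.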

Cleaning up files used in building

Building a package leaves some extra files and directories around, and git status will whine at you since they're not version controlled. Of course, you could gitignore them, but it's better to clean them up after you no longer need them.

To do that, you can add a clean command to setup.py.

import os

from setuptools import Command

class CleanCommand(Command):
    """Custom clean command to tidy up the project root."""
    user_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        os.system('rm -vrf ./build ./dist ./*.pyc ./*.tgz ./*.egg-info ./docs/sphinxdoc/_build')
(Obviously, that includes file types beyond what you need for just cleaning up after package building. Adjust the list as needed.)

Then in the setup() function, add these lines:

      cmdclass={
          'clean': CleanCommand,
      }

Now you can type

python setup.py clean
and it will remove all the extra files.
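If you'd rather not shell out to rm, you can do the same cleanup in pure Python. Here's a sketch using the same illustrative path list as above (adjust it for your project): add import glob and import shutil next to the os import at the top of setup.py, then replace run() with:

    def run(self):
        # Same targets as the rm command above: directories are removed
        # recursively, plain files individually.
        patterns = ['./build', './dist', './*.pyc', './*.tgz',
                    './*.egg-info', './docs/sphinxdoc/_build']
        for pattern in patterns:
            for path in glob.glob(pattern):
                print("Removing " + path)
                if os.path.isdir(path):
                    shutil.rmtree(path)
                else:
                    os.remove(path)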

Keeping version strings in sync

It's so easy to update the __version__ string in your module and forget that you also have to do it in setup.py, or vice versa. Much better to make sure they're always in sync.

I found several versions of that using system("grep..."), but I decided to write my own that doesn't depend on system(). (Yes, I should do the same thing with that CleanCommand, I know.)

def get_version():
    '''Read the pytopo module version from pytopo/__init__.py'''
    with open("pytopo/__init__.py") as fp:
        for line in fp:
            line = line.strip()
            if line.startswith("__version__"):
                parts = line.split("=")
                if len(parts) > 1:
                    # Strip whitespace and the surrounding quotes so we
                    # return just the bare version string.
                    return parts[1].strip().strip('"\'')

Then in setup():

      version=get_version(),

Much better! Now you only have to update __version__ inside your module and setup.py will automatically use it.

Using your README for a package long description

setup has a long_description for the package, but you probably already have some sort of README in your package. You can use it for your long description this way:

# Utility function to read the README file.
# Used for the long_description.
def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname)).read()

Then in setup():

      long_description=read('README'),

December 22, 2016 05:15 PM

Jono Bacon

Recommendations Requested: Building a Smart Home

Early next year Erica, the scamp, and I are likely to be moving house. As part of the move we would both love to turn this house into a smart home.

Now, when I say “smart home”, I don’t mean this:

Dogogram. It is the future.

We don’t need any holographic dogs. We are however interested in having cameras, lights, audio, screens, and other elements in the house connected and controlled in different ways. I really like the idea of the house being naturally responsive to us in different scenarios.

In other houses I have seen people with custom lighting patterns (e.g. work, party, romantic dinner), sensors on gates that trigger alarms/notifications, audio that follows you around the house, notifications on visible screens, and other features.

Obviously we will want all of this to be (a) secure, (b) reliable, and (c) simple to use. While we want a smart home, I don’t particularly want to have to learn a million details to set it up.

Can you help?

So, this is what we would like to explore.

Now, I would love to ask you folks two questions:

  1. What kind of smart-home functionality and features have you implemented in your house (in other words, what neat things can you do?)
  2. What hardware and software do you recommend for rigging a home up as a smart home? I would ideally like to keep re-wiring to a minimum. Assume I have nothing already, so recommendations for cameras, light switches, hubs, and anything else are much appreciated.

If you have something you would like to share, please plonk it into the comment box below. Thanks!

The post Recommendations Requested: Building a Smart Home appeared first on Jono Bacon.

by Jono Bacon at December 22, 2016 04:00 PM

December 19, 2016

Jono Bacon

Building Better Teams With Asynchronous Workflow

One of the core principles of open source and innersource communities is asynchronous workflow. That is, participants/employees should be able to collaborate together with ubiquitous access, from anywhere, at any time.

As a practical example, at a previous company I worked at, pretty much everything lived in GitHub. Not just the code for the various products, but also material and discussions from the legal, sales, HR, business development, and other teams.

This offered a number of benefits for both employees and the company:

  • History – all projects, discussions, and collaboration was recorded. This provided a wealth of material for understanding prior decisions, work, and relationships.
  • Transparency – transparency is something most employees welcome, and that was the case here: all employees felt a sense of connection to work across the company.
  • Communication – with everyone using the same platform it meant that it was easier for people to communicate clearly and consistently and to see the full scope of a discussion/project when pulled in.
  • Accountability – sunlight is the best disinfectant and having all projects, discussions, and work items/commitments, available in the platform ensured people were accountable in both their work and commitments.
  • Collaboration – this platform made it easier for people to not just collaborate (e.g. issues and pull requests) but also to bring in other employees by referencing their username (e.g. @jonobacon).
  • Reduced Silos – the above factors reduced the silos in the company and resulted in wider cross-team collaboration.
  • Untethered Working – because everything was online and not buried in private meetings and notes, this meant employees could be productive at home, on the road, or outside of office hours (often when riddled with jetlag at 3am!)
  • Internationally Minded – this also made it easier to work with an international audience, crossing different timezones and geographical regions.

While asynchronous workflow is not perfect, it offers clear benefits for a company and is a core component for integrating open source methodology and workflows (also known as innersource) into an organization.

Asynchronous workflow is a common area in which I work with companies. As such, I thought I would write up some lessons learned that may be helpful for you folks.

Designing Asynchronous Workflow

Many of you reading this will likely want to bring in the above benefits to your own organization too. You likely have an existing workflow which will be a mixture of (a) in-person meetings, (b) remote conference/video calls, (c) various platforms for tracking tasks, and (d) various collaboration and communication tools.

As with any organizational change and management, culture lies at the core. Putting platforms in place is the easy bit: adapting those platforms to the needs, desires, and uncertainties that live in people is where the hard work lies.

In designing asynchronous workflow you will need to make the transition from your existing culture and workflow to a new way of working. Ultimately this is about designing workflow that generates behaviors we want to see (e.g. collaboration, open discussion, efficient working) and behaviors we want to deter (e.g. silos, land-grabbing, power-plays etc).

Influencing these behaviors will include platforms, processes, relationships, and more. You will need to take a gradual, thoughtful, and transparent approach in designing how these different pieces fit together and how you make the change in a way that teams are engaged in.

I recommend you manage this in the following way (in order):

  1. Survey the current culture – first, you need to understand your current environment. How technically savvy are your employees? How dependent on meetings are they? What are the natural connections between teams, and where are the divisions? With a mixture of (a) employee surveys, and (b) observational and quantitative data, summarize these dynamics into lists of “Behaviors to Improve” and “Behaviors to Preserve”. These lists will give us a sense of how we want to build a workflow that is mindful of these behaviors and adjusts them where we see fit.
  2. Design an asynchronous environment – based on this research, put together a proposed plan for some changes you want to make to be more asynchronous. This should cover platform choices, changes to processes/policies, and roll-out plan. Divide this plan up in priority order for which pieces you want to deliver in which order.
  3. Get buy-in – next we need to build buy-in in senior management, team leads, and with employees. Ideally this process should be as open as possible with a final call for input from the wider employee-base. This is a key part of making teams feel part of the process.
  4. Roll out in phases – now, based on your defined priorities in the design, gradually roll out the plan. As you do so, provide regular updates on this work across the company (you should include metrics of the value this work is driving in these updates).
  5. Regularly survey users – at regular check-points survey the users of the different systems you put in place. Give them express permission to be critical – we want this criticism to help us refine and make changes to the plan.

Of course, this is a simplification of the work that needs to happen, but it covers the key markers that need to be in place.

Asynchronous Principles

The specific choices in your own asynchronous workflow plan will be very specific to your organization. Every org is different, has different drivers, people, and focus, so it is impossible to make a generalized set of strategic, platform, and process recommendations. Of course, if you want to discuss your organization’s needs specifically, feel free to get in touch.

For the purposes of this piece though, and to serve as many of you as possible, I want to share the core asynchronous principles you should consider when designing your asynchronous workflow. These principles are pretty consistent across most organizations I have seen.

Be Explicitly Permissive

A fundamental principle of asynchronous working (and more broadly in innersource) is that employees have explicit permission to (a) contribute across different projects/teams, (b) explore new ideas and creative solutions to problems, and (c) challenge existing norms and strategy.

Now, this doesn’t mean it is a free for all. Employees will have work assigned to them and milestones to accomplish, but being permissive about the above areas will crisply define the behavior the organization wants to see in employees.

In some organizations the senior management team spews forth said permission and expects it to stick. While this top-down permission and validation is key, it is also critical that team leads, middle managers, and others support this permissive principle in day-to-day work.

People change and cultures develop by others delivering behavioral patterns that become accepted in the current social structure. Thus, you need to encourage people to work across projects, explore new ideas, and challenge the norm, and validate that behavior publicly when it occurs. This is how we make culture stick.

Default to Open Access

Where possible, teams and users should default to open visibility for projects, communication, issues, and other material. Achieving this requires not just default access controls to be open, but also setting the cultural and organization expectation that material should be open for all employees.

Of course, you should trust your employees to use their judgement too. Some efforts will require private discussions and work (e.g. security issues). Also, some discussions may need to be confidential (e.g. HR). So, default to open, but be mindful of the exceptions.

Platforms Need to be Accessible, Rich, and Searchable

There are myriad platforms for asynchronous working. GitHub, GitLab, Slack, Mattermost, Asana, Phabricator, to name just a few.

When evaluating platforms it is key to ensure that they can be made (a) securely accessible from anywhere (e.g. desktop/mobile support, available outside the office), (b) provide a rich and efficient environment for collaboration (e.g. rich discussions with images/videos/links, project management, simple code collaboration and review), (c) and any material is easily searchable (finding previous projects/discussions to learn from them, or finding new issues to focus on).

Always Maintain History and Never Delete, but Archive

You should maintain history in everything you do. This should include discussions, work/issue tracking, code (revision control), releases, and more.

On a related note, you should never, ever permanently delete material. Instead, that material should be archived. As an example, if you file an issue for a bug or problem that is no longer pertinent, archive the issue so it doesn’t come up in popular searches, but still make it accessible.

Consolidate Identity and Authentication

Having a single identity for each employee on asynchronous infrastructure is important. We want to make it easy for people to reference individual employees, so a unique username/handle is key here. This is not just important technically, but also for building relationships – that username/handle will be a part of how people collaborate, build their reputations, and communicate.

A complex challenge with deploying asynchronous infrastructure is with identity and authentication. You may have multiple different platforms that have different accounts and authentication providers.

Where possible invest in Single Sign On and authentication. While it requires a heavier up-front lift, consolidating multiple accounts further down the line is a nightmare you want to avoid.

Validate, Incentivize, and Reward

Human beings need validation. We need to know we are on the right track, particularly when joining new teams and projects. As such, you need to ensure people can easily validate each other (e.g. likes and +1s, simple peer review processes) and encourage a culture of appreciation and thanking others (e.g. manager and leaders setting an example to always thank people for contributions).

Likewise, people often respond well to being incentivized and often enjoy the rewards of that work. Be sure to identify what a good contribution looks like (e.g. in software development, a merged pull request) and incentivize and reward great work via both baked-in features and specific campaigns.

Be Mindful of Uncertainty, so Train, Onboard, and Support

Moving to a more asynchronous way of working will cause uncertainty in some. Not only are people often reluctant to change, but operating in a very open and transparent manner can make people squeamish about looking stupid in front of their colleagues.

So, be sure to provide extensive training as part of the transition, onboard new staff members, and provide a helpdesk where people can always get help and their questions answered.


Of course, I am merely scratching the surface of how we build asynchronous workflow, but hopefully this will get you started and generate some ideas and thoughts about how you bring this to your organization.

Of course, feel free to get in touch if you want to discuss your organization’s needs in more detail. I would also love to hear additional ideas and approaches in the comments!

The post Building Better Teams With Asynchronous Workflow appeared first on Jono Bacon.

by Jono Bacon at December 19, 2016 03:54 PM

December 17, 2016

Akkana Peck

Distributing Python Packages Part II: Submitting to PyPI

In Part I, I discussed writing a setup.py to make a package you can submit to PyPI. Today I'll talk about better ways of testing the package, and how to submit it so other people can install it.

Testing in a VirtualEnv

You've verified that your package installs. But you still need to test it and make sure it works in a clean environment, without all your developer settings.

The best way to test is to set up a "virtual environment", where you can install your test packages without messing up your regular runtime environment. I shied away from virtualenvs for a long time, but they're actually very easy to set up:

virtualenv venv
source venv/bin/activate

That creates a directory named venv under the current directory, which it will use to install packages. Then you can pip install packagename or pip install /path/to/packagename-version.tar.gz

Except -- hold on! Nothing in Python packaging is that easy. It turns out there are a lot of packages that won't install inside a virtualenv, and one of them is PyGTK, the library I use for my user interfaces. Attempting to install pygtk inside a venv gets:

********************************************************************
* Building PyGTK using distutils is only supported on windows. *
* To build PyGTK in a supported way, read the INSTALL file.    *
********************************************************************

Windows only? Seriously? PyGTK works fine on both Linux and Mac; it's packaged on every Linux distribution, and on Mac it's packaged with GIMP. But for some reason, whoever maintains the PyPI PyGTK packages hasn't bothered to make it work on anything but Windows, and PyGTK seems to be mostly an orphaned project so that's not likely to change.

(There's a package called ruamel.venvgtk that's supposed to work around this, but it didn't make any difference for me.)

The solution is to let the virtualenv use your system-installed packages, so it can find GTK and other non-PyPI packages there:

virtualenv --system-site-packages venv
source venv/bin/activate

I also found that if I had a ~/.local directory (where packages normally go if I use pip install --user packagename), sometimes pip would install to .local instead of the venv. I never did track down why this happened some times and not others, but when it happened, a temporary mv ~/.local ~/old.local fixed it.

Test your Python package in the venv until everything works. When you're finished with your venv, you can run deactivate and then remove it with rm -rf venv.

Tag it on GitHub

Is your project ready to publish?

If your project is hosted on GitHub, you can have pypi download it automatically. In your setup.py, set

download_url='https://github.com/user/package/tarball/tagname',

Check that in. Then make a tag and push it:

git tag 0.1 -m "Name for this tag"
git push --tags origin master

Try to make your tag match the version you've set in setup.py and in your module.

Push it to pypitest

Register a new account and password on both pypitest and on pypi.

Then create a ~/.pypirc that looks like this:

[distutils]
index-servers =
  pypi
  pypitest

[pypi]
repository=https://pypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

[pypitest]
repository=https://testpypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

Yes, those passwords are in cleartext. Incredibly, there doesn't seem to be a way to store an encrypted password or even have it prompt you. There are tons of complaints about that all over the web but nobody seems to have a solution. You can specify a password on the command line, but that's not much better. So use a password you don't use anywhere else and don't mind too much if someone guesses.

Update: Apparently there's a newer method called twine that solves the password encryption problem. Read about it here: Uploading your project to PyPI. You should probably use twine instead of the setup.py commands discussed in the next paragraph.
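I haven't tried twine myself yet, but judging from that page the basic flow looks roughly like this, building the source distribution first and letting twine handle the upload and credentials:

pip install twine
python setup.py sdist
twine upload dist/*

twine also takes a -r option to pick a repository name from ~/.pypirc, so -r pypitest should work for the test server.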

Now register your project and upload it:

python setup.py register -r pypitest
python setup.py sdist upload -r pypitest

Wait a few minutes: it takes pypitest a little while before new packages become available. Then go to your venv (to be safe, maybe delete the old venv and create a new one, or at least pip uninstall) and try installing:

pip install -i https://testpypi.python.org/pypi YourPackageName

If you get "No matching distribution found for packagename", wait a few minutes then try again.

If it all works, then you're ready to submit to the real pypi:

python setup.py register -r pypi
python setup.py sdist upload -r pypi

Congratulations! If you've gone through all these steps, you've uploaded a package to pypi. Pat yourself on the back and go tell everybody they can pip install your package.

Some useful reading

Some pages I found useful:

A great tutorial except that it forgets to mention signing up for an account: Python Packaging with GitHub

Another good tutorial: First time with PyPI

Allowed PyPI classifiers -- the categories your project fits into. Unfortunately there aren't very many of those, so you'll probably be stuck with 'Topic :: Utilities' and not much else.

Python Packages and You: not a tutorial, but a lot of good advice on style and designing good packages.

December 17, 2016 11:19 PM

December 12, 2016

Eric Hammond

How Much Does It Cost To Run A Serverless API on AWS?

Serving 2.1 million API requests for $11

Folks tend to be curious about how much real projects cost to run on AWS, so here’s a real example with breakdowns by AWS service and feature.

This article walks through the AWS invoice for charges accrued in November 2016 by the TimerCheck.io API service which runs in the us-east-1 (Northern Virginia) region and uses the following AWS services:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • Route 53
  • CloudFront
  • SNS (Simple Notification Service)
  • CloudWatch Logs
  • CloudWatch Metrics
  • CloudTrail
  • S3
  • Network data transfer
  • CloudWatch Alarms

During this month, the TimerCheck.io service processed over 2 million API requests. Every request ran an AWS Lambda function that read from and/or wrote to a DynamoDB table.

This AWS account is older than 12 months, so any first year free tier specials are no longer applicable.

Total Cost Overview

At the very top of the AWS invoice, we can see that the total AWS charges for the month of November add up to $11.12. This is the total bill for processing the 2.1 million API requests and all of the infrastructure necessary to support them.

Invoice: Summary

Service Overview

The next part of the invoice lists the top level services and charges for each. You can see that two thirds of the month’s cost was in API Gateway at $7.47, with a few other services coming together to make up the other third.

Invoice: Service Overview

API Gateway

Current API Gateway pricing is $3.50 per million requests, plus data transfer. As you can see from the breakdown below, the requests are the bulk of the expense at $7.41. The responses from TimerCheck.io probably average in the hundreds of bytes, so there’s only $0.06 in data transfer cost.

You currently get a million requests at no charge for the first 12 months, which was not applicable to this invoice, but does end up making API Gateway free for the development of many small projects.

Invoice: API Gateway

CloudTrail

I don’t remember enabling CloudTrail on this account, but at some point I must have done the right thing, as this is something that should be active for every AWS account. There were almost 400,000 events recorded by CloudTrail, but since the first trail is free, there is no charge listed here.

Note that there are some charges associated with the storage of the CloudTrail event logs in S3. See below.

Invoice: CloudTrail

CloudWatch

The CloudWatch costs for this service come from logs being sent to CloudWatch Logs and the storage of those logs. These logs are being generated by AWS Lambda function execution and by API Gateway execution, so you can consider them as additional costs of running those services. You can control some of the logs generated by your AWS Lambda function, so a portion of these costs are under your control.

There are also charges for CloudWatch Alarms, but for some reason, those are listed under EC2 (below) instead of here under CloudWatch.

Invoice: CloudWatch

Data Transfer

Data transfer costs can be complex as they depend on where the data is coming from and going to. Fortunately for TimerCheck.io, there is very little network traffic and most of it falls into the free tiers. What little we are being charged for here amounts to a measly $0.04 for 4 GB of data transferred between availability zones. I presume this is related to AWS services talking to each other (e.g., AWS Lambda to DynamoDB) because there are no EC2 instances in this stack.

Note that this is not the entirety of the data transfer charges, as some other services break out their own network costs.

Invoice: Data Transfer

DynamoDB

The DynamoDB pricing includes a permanent free tier of up to 25 write capacity units and 25 read capacity units. The TimerCheck.io service has a single DynamoDB table set to a capacity of 25 write and 25 read units, so there are no charges for capacity.

The TimerCheck.io DynamoDB database size falls well under the 25 GB free tier, so that has no charge either.

Invoice: DynamoDB

Elastic Compute Cloud

The TimerCheck.io service does not use EC2 and yet there is a section in the invoice for EC2. For some reason this section lists the CloudWatch Alarm charges.

Each CloudWatch Alarm costs $0.10 per month and this account has eight for a total of $0.80/month. But, for some reason, I was only billed $0.73. *shrug*

Invoice: EC2

This AWS account has four AWS billing alarms that will email me whenever the cumulative charges for the month pass $10, $20, $30, and $40.

There is one CloudWatch alarm that emails me if the AWS Lambda function invocations are being throttled (more than 100 concurrent functions being executed).

There are two CloudWatch alarms that email me if the consumed read and write capacity units are trending so high that I should look at increasing the capacity settings of the DynamoDB table. We are nowhere near that at current usage volume.

Yes, that leaves one CloudWatch alarm, which was a duplicate. I have since removed it.

AWS Lambda

Since most of the development of the TimerCheck.io API service focuses on writing the 60 lines of code for the AWS Lambda function, this is where I was expecting the bulk of the charges to be. However, the AWS Lambda costs only amount to $0.22 for the month.

There were 2.1 million AWS Lambda function invocations, one per external consumer API request, same as the API Gateway. The first million AWS Lambda function calls are free. The rest are charged at $0.20 per million.

The permanent free tier also includes 400,000 GB-seconds of compute time per month. At an average of 0.15 GB-seconds per function call, we stayed within the free tier at a total of 320,000 GB-seconds.

Invoice: AWS Lambda
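As a quick back-of-the-envelope check of this line item, using the rounded 2.1 million request count and the November 2016 prices quoted above:

# Sketch of the Lambda request math from this invoice (Nov 2016 prices).
requests = 2100000            # roughly one invocation per API request
free_requests = 1000000       # permanent free tier: first million per month
price_per_million = 0.20      # USD per million requests beyond the free tier

request_charge = (requests - free_requests) / 1000000.0 * price_per_million
print("request charge: $%.2f" % request_charge)   # ~$0.22, matching the invoice

gb_seconds = requests * 0.15  # ~315,000 at the rounded count, in line with the
                              # ~320,000 reported and under the 400,000 free tier
print("GB-seconds: %d" % gb_seconds)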

I have the AWS Lambda function configuration cranked up to the max 1536 MB of memory so that it will run as fast as possible. Since the charges are rounded up in units of 100ms, we could probably save GB-seconds by scaling down the memory once we exceed the free tier. Most of the time is probably spent in DynamoDB calls anyway, so this should not affect API performance much.
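To make that memory/duration tradeoff concrete: Lambda bills each invocation as memory (in GB) multiplied by the execution time rounded up to the next 100ms. A small sketch of that formula:

import math

def lambda_gb_seconds(memory_mb, duration_ms):
    """GB-seconds billed for one invocation: memory in GB times
    the duration rounded up to the next 100 ms."""
    billed_seconds = math.ceil(duration_ms / 100.0) * 100 / 1000.0
    return (memory_mb / 1024.0) * billed_seconds

# At the 1536 MB setting, anything up to 100 ms bills as 0.15 GB-seconds,
# which matches the per-call average mentioned above.
print(round(lambda_gb_seconds(1536, 95), 2))   # 0.15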

Route 53

Route 53 charges $0.50 per hosted zone (domain). I have two domains hosted in Route 53, the expected timercheck.io plus the extra timercheck.com. The timercheck.com domain was supposed to redirect to timercheck.io, but I apparently haven’t gotten around to tossing in that feature yet. These two hosted zones account for $1 in charges.

There were 1.1 million DNS queries to timercheck.io and www.timercheck.io, but since those resolve to aliases for the API Gateway, there is no charge.

The other $0.09 comes from the 226,000 DNS queries to random timercheck.io and timercheck.com hostnames. These would include status.timercheck.io, which is a page displaying the uptime of TimerCheck.io as reported by StatusCake.

Invoice: Route 53

Simple Notification Service

During the month of November, there was one post to an SNS topic and one email delivery from an SNS topic. These were both for the CloudWatch alert notifying me that the charges on the account had exceeded $10 for the month. There were no charges for this.

Invoice: SNS

Simple Storage Service

The S3 costs in this account are entirely for storing the CloudTrail events. There were 222 GET requests ($0) and 13,000 requests of other types ($0.07). There was no charge for the 0.064 GB-Mo of actual data stored. Has Amazon started rounding fractional pennies down instead of up in some services?

Invoice: S3

External Costs

The domains timercheck.io and timercheck.com are registered through other registrars. Those cost about $33 and $9 per year, respectively.

The SSL/TLS certificate for https support costs around $10-15 per year, though this should drop to zero once CloudFront distributions created with API Gateway support certificates with ACM (AWS Certificate Manager) #awswishlist

Not directly obvious from the above is the fact that I have spent no time or money maintaining the TimerCheck.io API service post-launch. It’s been running for 1.5 years and I haven’t had to upgrade software, apply security patches, replace failing hardware, recover from disasters, or scale up and down with demand. By using AWS services like API Gateway, AWS Lambda, and DynamoDB, Amazon takes care of everything.

Notes

Your Mileage May Vary.

For entertainment use only.

This is just one example from one month for one service architected one way. Your service running on AWS will not cost the same.

Though 2 million TimerCheck.io API requests in November cost about $11, this does not mean that an additional million would cost another $5.50. Some services would cost significantly more and some would cost about the same, probably averaging out to significantly more.

If you are reading this after November 2016, then the prices for these AWS services have certainly changed and you should not use any of the above numbers in making decisions about running on AWS.

Conclusion

Amazon, please lower the cost of the API Gateway; or provide a simpler, cheaper service that can trigger AWS Lambda functions with https endpoints. Thank you!

Original article and comments: https://alestic.com/2016/12/aws-invoice-example/

December 12, 2016 10:00 AM

Elizabeth Krumbach

Trains in NYC

I’ve wanted to visit the New York Transit Museum ever since I discovered it existed. Housed in the retired Court station in Brooklyn, even the museum venue had “transit geek heaven” written all over it. I figured I’d visit it some day when work brought me to the city, but then I learned about the 15th Annual Holiday Train Show at their Annex and Store at Grand Central going on now. I’d love to see that! I ended up going up to NYC from Philadelphia with my friend David last Sunday morning and made a day of it. Even better, we parked in New Jersey so we had a full-on transit experience from there into Manhattan and Brooklyn and back as the day progressed.

Our first stop was Grand Central Station via the 5 subway line. Somehow I’d never been there before. Enjoy the obligatory station selfie.

From there it was straight down to the Annex and Store run by the transit museum. The holiday exhibit had glittering signs hanging from the ceiling of everything from buses to transit cards to subway cars and snowflakes. The big draw though was the massive o-gauge model train setup, as the site explains:

This year’s Holiday Train Show display will feature a 34-foot-long “O gauge” model train layout with Lionel’s model Metro-North, New York Central, and vintage subway trains running on eight separate loops of track, against a backdrop featuring graphics celebrating the Museum’s 40th anniversary by artist Julia Rothman.

It was quite busy there, but folks were very clearly enjoying it. I’m really glad I went, even if the whole thing made me pine for my future home office train set all the more. Some day! It’s also worth noting that this shop is the one to visit transit-wise. The museum in Brooklyn also had a gift shop, but it was smaller and had fewer things, so I highly recommend picking things up here; I ended up going back after the transit museum to get something I wanted.

We then hopped on the 4 subway line into Brooklyn to visit the actual transit museum. As advertised, it’s in a retired subway station, so the entrance looks like any other subway entrance and you take stairs underground. You enter and buy your ticket and then are free to explore both levels of the museum. The first had several exhibits that rotate, including one about Coney Island and another providing a history of crises in New York City (including 9/11, hurricane Sandy) and how the transit system and operators responded to them. They also had displays of a variety of turnstiles throughout the years, and exhibits talking about street car (trolley) lines and the introduction of the bus systems.

The exhibits were great, but it was downstairs that things got really fun. They have functioning rails where the subway trains used to run through where they’ve lined up over a dozen cars from throughout transit history in NYC for visitors to explore, inside and out.

The evolution of seat designs and configurations was interesting to witness and feel, as you could sit on the seats to get the full experience. Each car also had an information sign next to it, so you could learn about the era and the place of that car in it. Transitions from wood to metal were showcased, along with paired (and ..tripled?) cars and a bunch of standalone, interchangeable cars. I also enjoyed seeing a caboose, though I didn’t quite recognize it at first (“is this for someone to live in?”).

A late lunch was due following the transit museum. We ended up at Sottocasa Pizzeria right there in Brooklyn. It got great reviews and I enjoyed it a lot, but it was definitely on the fancy pizza side. They also had a selection of Italian beers, of which I chose the delicious Nora by Birra Baladin. Don’t worry, next time I’m in New York I’ll go to a great, not fancy, pizza place.

It was then back to Manhattan to spend a bit more time at Grand Central and for an evening walk through the city. We started by going up 5th Avenue to see Rockefeller Square at night during the holidays. I hadn’t been to Manhattan since 2013 when I went with my friend Danita and I’d never seen the square all decked out for the holidays. I didn’t quite think it through, though: it’s probably the busiest time of the year there, so the whole neighborhood was insanely crowded for blocks. After seeing the skating rink and tree, we escaped northwest and made our way through the crowds up to Central Park. It was cold, but all the walking was fun even with the crowds. For dinner we ended up at Jackson Hole for some delicious burgers. I went with the Guacamole Burger.

The trip back to north Jersey took us through the brand new World Trade Center Transportation Hub to take the PATH. It’s a very unusual space. It’s all bright white with tons of marble shaped in a modern look, and has a shopping mall with a surreal amount of open space. The trip back on the PATH that night was as smooth as expected. In all, a very enjoyable day of public transit stuff!

More photos from Grand Central Station and the Transit Museum here: https://www.flickr.com/photos/pleia2/albums/72157677457519215

Epilogue: I received incredibly bad news the day after this visit to NYC. It cast a shadow over it for me. I went back and forth about whether I should write about this visit at all and how I should present it if I did. I decided to present it as it was that day. It was a great day of visiting the city and geeking out over trains, enjoyed with a close friend, and detached from whatever happened later. I only wish I could convince my mind to do the same.

by pleia2 at December 12, 2016 01:29 AM

UbuCon EU 2016

Last month I had the opportunity to travel to Essen, Germany to attend UbuCon EU 2016. Europe has had UbuCons before, but the goal of this one was to make it a truly international event, bringing in speakers like me from all corners of the Ubuntu community to share our experiences with the European Ubuntu community. Getting to catch up with a bunch of my Ubuntu colleagues who I knew would be there and visiting Germany as the holiday season began were also quite compelling reasons for me to attend.

The event formally kicked off Saturday morning with a welcome and introduction by Sujeevan Vijayakumaran, who reported that 170 people had registered for the event and shared other statistics about the number of countries attendees came from. He also introduced a member of the UBports team, Marius Gripsgård, who announced the USB docking station for Ubuntu Touch devices they were developing; more information is in this article on their website: The StationDock.

Following these introductions and announcements, we were joined by Canonical CEO Jane Silber who provided a tour of the Ubuntu ecosystem today. She highlighted the variety of industries where Ubuntu was key, progress with Ubuntu on desktops/laptops, tablets, phones and venturing into the smart Internet of Things (IoT) space. Her focus was around the amount of innovation we’re seeing in the Ubuntu community and from Canonical, and talked about specifics regarding security, updates, the success in the cloud and where Ubuntu Core fits into the future of computing.

I also loved that she talked about the Ubuntu community. The strength of local meetups and events, the free support community that spans a variety of resources, ongoing work by the various Ubuntu flavors. She also spoke to the passion of Ubuntu contributors, citing comics and artwork that community members have made, including the stunning series of release animal artwork by Sylvia Ritter from right there in Germany, visit them here: Ubuntu Animals. I was also super thrilled that she mentioned the Ubuntu Weekly Newsletter as a valuable resource for keeping up with the community, a very small group of folks works very hard on it and that kind of validation is key to sustaining motivation.

The next talk I attended was by Fernando Lanero Barbero on Linux is education, Linux is science. Ubuntu to free educational environments. Fernando works at a school district in Spain where he has deployed Ubuntu across hundreds of computers, reaching over 1200 students in the three years he’s been doing the work. The talk outlined the strengths of the approach, explaining that there were cost savings for his school and also how Ubuntu and open source software are more in line with the school's values. One of the key takeaways from his experience was one that I know a lot about from our own Linux in schools experiences here in the US at Partimus: focus on the people, not the technologies. We’re technologists who love Linux and want to promote it, but without engagement, understanding and buy-in from teachers, deployments won’t be successful. A lot of time needs to be spent making assessments of their needs, doing roll-outs slowly and methodically so that the change doesn’t happen too abruptly and leave them in the lurch. He also stressed the importance of consistency with the deployments. Don’t get super creative across machines: use the same flavor for everything, even the same icon set. Not everyone is as comfortable with variation as we are, and you want to make the transition as easy as possible across all the systems.

Laura Fautley (Czajkowski) spoke at the next talk I went to, on Supporting Inclusion & Involvement in a Remote Distributed Team. The Ubuntu community itself is distributed across the globe, so drawing on her experience there and later at several jobs where she’s had to help manage communities, she had a great list of recommendations for building out such a team. She talked about being sensitive to time zones and acknowledging that decisions are sometimes made in social situations, which means you need to somehow document and share these decisions with the broader community. She was also eager to highlight how you need to acknowledge and promote the achievements in your team, both within the team and to the broader organization and project, to make sure everyone feels valued and so that everyone knows the great work you’re doing. Finally, it was interesting to hear some thoughts about remote member on-boarding, stressing the need to have a process so that new contributors and teammates can quickly get up to speed and feel included from the beginning.

I went to a few other talks throughout the two day event, but one of the big reasons for me attending was to meet up with some of my long-time friends in the Ubuntu community and finally meet some other folks face to face. We’ve had a number of new contributors join us since we stopped doing Ubuntu Developer Summits and today UbuCons are the only Ubuntu-specific events where we have an opportunity to meet up.


Laura Fautley, Elizabeth K. Joseph, Alan Pope, Michael Hall

Of course I was also there to give a pair of talks. I first spoke on Contributing to Ubuntu on Desktops (slides) which is a complete refresh of a talk I gave a couple of times back in 2014. The point of that talk was to pull people back from the hype-driven focus on phones and clouds for a bit and highlight some of the older projects that still need contributions. I also spoke on Building a career with Ubuntu and FOSS (slides) which was definitely the more popular talk. I’ve given a similar talk for a couple UbuCons in the past, but this one had the benefit of being given while I’m between jobs. This most recent job search as I sought out a new role working directly with open source again gave a new dimension to the talk, and also made for an amusing intro, “I don’t have a job at this very moment …but without a doubt I will soon!” And in fact, I do have something lined up now.


Thanks to Tiago Carrondo for taking this picture during my talk! (source)

The venue for the conference was a kind of artists space, which made it a bit quirky, but I think worked out well. We had a couple social gatherings there at the venue, and buffet lunches were included in our tickets, which meant we didn’t need to go far or wait on food elsewhere.

I didn’t have a whole lot of time for sight-seeing this trip because I had a lot going on stateside (like having just bought a house!), but I did get to enjoy the beautiful Christmas Market in Essen a few nights while I was there.

For those of you not familiar with German Christmas Markets (I wasn’t), they close roads downtown and pop up streets of wooden shacks that sell everything from Christmas ornaments and cookies to hot drinks, beers and various hot foods. The first night I was in town we met up with several fellow conference-goers and got some fries with mayonnaise, grilled mushrooms with Bearnaise sauce, my first taste of German Glühwein (mulled wine) and hot chocolate. The next night was a quick walk through the market that landed us at a steakhouse, where we had a late dinner and a couple beers.

The final night we didn’t stay out late, but did get some much anticipated Spanish churros, which inexplicably had sugar rather than the cinnamon I’m used to, as well as a couple more servings of Glühwein, this time in commemorative Christmas mugs shaped like boots!


Clockwise from top left: José Antonio Rey, Philip Ballew, Michael Hall, John and Laura Fautley, Elizabeth K. Joseph

The next morning I was up bright and early to catch a 6:45AM train that started me on my three train journey back to Amsterdam to fly back to Philadelphia.

It was a great little conference and a lot of fun. Huge thanks to Sujeevan for being so incredibly welcoming to all of us, and thanks to all the volunteers who worked for months to make the event happen. Also thanks to Ubuntu community members who donate to the community fund since I would have otherwise had to self-fund to attend.

More photos from the event (and the Christmas Market!) here: https://www.flickr.com/photos/pleia2/albums/72157676958738915

by pleia2 at December 12, 2016 12:03 AM

December 11, 2016

Akkana Peck

Distributing Python Packages Part I: Creating a Python Package

I write lots of Python scripts that I think would be useful to other people, but I've put off learning how to submit to the Python Package Index, PyPI, so that my packages can be installed using pip install.

Now that I've finally done it, I see why I put it off for so long. Unlike programming in Python, packaging is a huge, poorly documented hassle, and it took me days to get a working package. Maybe some of the hints here will help other struggling Pythonistas.

Create a setup.py

The setup.py file is the file that describes the files in your project and other installation information. If you've never created a setup.py before, Submitting a Python package with GitHub and PyPI has a decent example, and you can find lots more good examples with a web search for "setup.py", so I'll skip the basics and just mention some of the parts that weren't straightforward.

Distutils vs. Setuptools

However, there's one confusing point that no one seems to mention. setup.py examples all rely on a predefined function called setup, but some examples start with

from distutils.core import setup
while others start with
from setuptools import setup

In other words, there are two different versions of setup! What's the difference? I still have no idea. The setuptools version seems to be a bit more advanced, and I found that using distutils.core, sometimes I'd get weird errors when trying to follow suggestions I found on the web. So I ended up using the setuptools version.

But I didn't initially have setuptools installed (it's not part of the standard Python distribution), so I installed it from the Debian package:

apt-get install python-setuptools python-wheel

The python-wheel package isn't strictly needed, but I found I got assorted warnings from pip install later in the process ("Cannot build wheel") unless I installed it, so I recommend you install it from the start.

Including scripts

setup.py has a scripts option to include scripts that are part of your package:

    scripts=['script1', 'script2'],

But when I tried to use it, I had all sorts of problems, starting with scripts not actually being included in the source distribution. There isn't much support for using scripts -- it turns out you're actually supposed to use something called console_scripts, which is more elaborate.

First, you can't have a separate script file, or even a __main__ inside an existing class file. You must have a function, typically called main(), so you'll typically have this:

def main():
    # do your script stuff

if __name__ == "__main__":
    main()

Then add something like this to your setup.py:

      entry_points={
          'console_scripts': [
              'script1=yourpackage.filename:main',
              'script2=yourpackage.filename2:main'
          ]
      },

There's a secret undocumented alternative that a few people use for scripts with graphical user interfaces: use 'gui_scripts' rather than 'console_scripts'. It seems to work when I try it, but the fact that it's not documented and none of the Python experts even seem to know about it scared me off, and I stuck with 'console_scripts'.
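For reference, here's roughly what a minimal setuptools-based setup.py looks like with a console_scripts entry point wired in (every name here is a placeholder, not one of my actual packages):

from setuptools import setup

setup(name='yourpackage',
      version='0.1',
      description='One-line summary of what yourpackage does',
      author='Your Name',
      author_email='you@example.com',
      url='https://github.com/you/yourpackage',
      packages=['yourpackage'],
      entry_points={
          'console_scripts': [
              'script1=yourpackage.filename:main',
          ]
      },
     )

The sections below add data files and MANIFEST.in handling on top of a skeleton like this.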

Including data files

One of my packages, pytopo, has a couple of files it needs to install, like an icon image. setup.py has a provision for that:

      data_files=[('/usr/share/pixmaps',      ["resources/appname.png"]),
                  ('/usr/share/applications', ["resources/appname.desktop"]),
                  ('/usr/share/appname',      ["resources/pin.png"]),
                 ],

Great -- except it doesn't work. None of the files actually gets added to the source distribution.

One solution people mention to a "files not getting added" problem is to create an explicit MANIFEST file listing all files that need to be in the distribution. Normally, setup generates the MANIFEST automatically, but apparently it isn't smart enough to notice data_files and include those in its generated MANIFEST.

I tried creating a MANIFEST listing all the .py files plus the various resources -- but it didn't make any difference. My MANIFEST was ignored.

The solution turned out to be creating a MANIFEST.in file, which is used to generate a MANIFEST. It's easier than creating the MANIFEST itself: you don't have to list every file, just patterns that describe them:

include setup.py
include packagename/*.py
include resources/*
If you have any scripts that don't use the extension .py, don't forget to include them as well. This may have been why scripts= didn't work for me earlier, but by the time I found out about MANIFEST.in I had already switched to using console_scripts.

Testing setup.py

Once you have a setup.py, use it to generate a source distribution with:

python setup.py sdist
(You can also use bdist to generate a binary distribution, but you'll probably only need that if you're compiling C as part of your package. Source dists are apparently enough for pure Python packages.)

Your package will end up in dist/packagename-version.tar.gz so you can use tar tf dist/packagename-version.tar.gz to verify what files are in it. Work on your setup.py until you don't get any errors or warnings and the list of files looks right.

Congratulations -- you've made a Python package! I'll post a followup article in a day or two about more ways of testing, and how to submit your working package to PyPI.

Update: Part II is up: Distributing Python Packages Part II: Submitting to PyPI.

December 11, 2016 07:54 PM

December 08, 2016

Nathan Haines

UbuCon Europe 2016

Nathan Haines enjoying UbuCon Europe

If there is one defining aspect of Ubuntu, it's community. All around the world, community members and LoCo teams get together not just to work on Ubuntu, but also to teach, learn, and celebrate it. UbuCon Summit at SCALE was a great example of an event that was supported by the California LoCo Team, Canonical, and community members worldwide coming together to make an event that could host presentations on the newest developer technologies in Ubuntu, community discussion roundtables, and a keynote by Mark Shuttleworth, who answered audience questions thoughtfully, but also hung around in the hallway and made himself accessible to chat with UbuCon attendees.

Thanks to the Ubuntu Community Reimbursement Fund, the UbuCon Germany and UbuCon Paris coordinators were able to attend UbuCon Summit at SCALE, and we were able to compare notes, so to speak, as they prepared to expand by hosting the first UbuCon Europe in Germany this year. Thanks to the community fund, I also had the immense pleasure of attending UbuCon Europe. After I arrived, Sujeevan Vijayakumaran picked me up from the airport and we took the train to Essen, where we walked around the newly-opened Weihnachtsmarkt along with Philip Ballew and Elizabeth Joseph from Ubuntu California. I acted as official menu translator, so there were no missed opportunities for bratwurst, currywurst, glühwein, or beer. Happily fed, we called it a night and got plenty of sleep so that we would last the entire weekend long.

Zeche Zollverein, a UNESCO World Heritage site

UbuCon Europe was a marvelous experience. Friday started things off with social events so everyone could mingle and find shared interests. About 25 people attended the Zeche Zollverein Coal Mine Industrial Complex for a guided tour of the last operating coal extraction and processing site in the Ruhr region; it was a fascinating look at the defining industry of the region for a century. After that, about 60 people joined in a special dinner at Unperfekthaus, a unique location that is part creative studio, part art gallery, part restaurant, and all experience. With a buffet, large soda fountains, and a hot coffee/chocolate machine, dinner was another chance to mingle as we took over a dining room and pushed all the tables together in a snaking chain. It was there that some Portuguese attendees first recognized me as the default voice for uNav, which was something I had to get used to over the weekend. There's nothing like a good dinner to get people comfortable together, and the Telegram channel that was established for UbuCon Europe attendees was spread around.

Sujeevan Vijayakumaran addressing the UbuCon Europe attendees

The main event began bright and early on Saturday. Attendees were registered on the fifth floor of Unperfekthaus and received their swag bags full of cool stuff from the event sponsors. After some brief opening statements from Sujeevan, Marius Gripsgård announced an exciting new Kickstarter campaign that will bring an easier convergence experience to not just most Ubuntu phones, but many Android phones as well. Then, Jane Silber, the CEO of Canonical, gave a keynote that went into detail about where Canonical sees Ubuntu in the future, how convergence and snaps will factor into future plans, and why Canonical wants to see one single Ubuntu on the cloud, server, desktop, laptop, tablet, phone, and Internet of Things. Afterward, she spent some time answering questions from the community, and she impressed me with her willingness to answer questions directly. Later on, she was chatting with a handful of people and it was great to see the consideration and thought she gave to those answers as well. Luckily, she also had a little time to just relax and enjoy herself without the third degree before she had to leave later that day. I was happy to have a couple minutes to chat with her.

Nathan Haines and Jane Silber

Microsoft Deutschland GmbH sent Malte Lantin to talk about Bash on Ubuntu on Windows and how the Windows Subsystem for Linux works, and while jokes about Microsoft and Windows were common all weekend, everyone kept their sense of humor and the community showed the usual respect that’s made Ubuntu so wonderful. While being able to run Ubuntu software natively on Windows makes many nervous, it also excites others. One thing is for sure: it’s convenient, and the prospect of having a robust terminal emulator built right in to Windows seemed to be something everyone could appreciate.

After that, I ate lunch and gave my talk, Advocacy for Advocates, where I gave advice on how to effectively recommend Ubuntu and other Free Software to people who aren’t currently using it or aren’t familiar with the concept. It was well-attended and I got good feedback. I also had a chance to speak in German for a minute, as the ambiguity of the term Free Software in English disappears in German, where freie Software is clear and not confused with kostenlose Software. It’s a talk I’ve given before and will definitely give again in the future.

After the talks were over, there was a raffle and then a UbuCon quiz show where the audience could win prizes. I gave away signed copies of my book, Beginning Ubuntu for Windows and Mac Users, in the raffle, and in fact I won a “xenial xerus” USB drive that looks like an origami squirrel as well as a Microsoft t-shirt. Afterwards there was a dinner that was not only delicious, with apple crumble for dessert, but also came with free beer and wine, which rarely detracts from any meal.

Marcos Costales and Nathan Haines before the uNav presentation

Sunday was also full of great talks. I loved Marcos Costales’s talk on uNav, and as the video shows, I was inspired to jump up as the talk was about to begin and improvise the uNav-like announcement “You have arrived at the presentation.” With the crowd warmed up from the joke, Marcos took us on a fascinating journey of the evolution of uNav and finished with tips and tricks for using it effectively. I also appreciated Olivier Paroz’s talk about Nextcloud and its goals, as I run my own Nextcloud server. I was sure to be at the UbuCon Europe feedback and planning roundtable and was happy to hear that next year UbuCon Europe will be held in Paris. I’ll have to brush up on my restaurant French before then!

Nathan Haines contemplating tools with a Neanderthal

That was the end of UbuCon, but I hadn’t been to Germany in over 13 years so it wasn’t the end of my trip! Sujeevan was kind enough to put up with me for another four days, and he accompanied me on a couple shopping trips as well as some more sightseeing. The highlight was a trip to the Neanderthal Museum in the aptly-named Neandertal, Germany. Afterward we met his friend (and UbuCon registration desk volunteer!) Philipp Schmidt in Düsseldorf at their Weihnachtsmarkt, where we tried the Feuerzangenbowle, in which mulled wine is improved by soaking a block of sugar in rum, setting it over the wine, and lighting the sugarloaf on fire so it drips into the wine. After that, we went to the Brauerei Schumacher, where I enjoyed not only Schumacher Alt beer, but also the Rhineland-style sauerbraten that has been on my to-do list for a decade and a half. (Other variations of sauerbraten—not to mention beer—remain on the list!)

I’d like to thank Sujeevan for his hospitality on top of the tremendous effort that he, the German LoCo, and the French LoCo put in to make the first UbuCon Europe a stunning success. I’d also like to thank everyone who contributed to the Ubuntu Community Reimbursement Fund for helping out with my travel expenses, and everyone who attended, because of course we put everything together for you to enjoy.

December 08, 2016 05:04 AM

December 05, 2016

Elizabeth Krumbach

Vacation Home in Pennsylvania

This year MJ and I embarked on a secret mission: Buy a vacation home in Pennsylvania.

It was a decision we’d mulled over for a couple years, and the state of the real estate market, along with where we are in our lives and careers and our frequent visits back to the Philadelphia area, finally made the stars align to make it happen. With the help of family local to the area, including one who is a real estate agent, we spent the past few trips taking time to look at houses and make some decisions. In August we started signing the paperwork to take possession of a new home in November.

With the timing of our selection, we were able to pick out cabinets, counter tops and some of the other non-architectural options in the home. Admittedly none of that is my thing, but it’s still nice that we were able to put our touch on the end result. As we prepared for the purchase, MJ spent a lot of time making plans for taking care of the house and handling things like installations, deliveries and the move of our items from storage into the house.

In October we also bought a car that we’d be keeping at the house in Philadelphia, though we did enjoy it in California for a few weeks.

On November 15th we met at the title company office and signed the final paperwork.

The house was ours!

The next day I flew to Germany for a conference and MJ headed back to San Francisco. I enjoyed the conference and a few days in Germany, but I was eager to get back to the house.

Upon my return we had our first installation. Internet! And backup internet.

MJ came back into town for Thanksgiving, which we enjoyed with family. The day after was the big move from storage into the house. Our storage units not only had our own things that we’d left in Pennsylvania, but also everything from MJ’s grandparents, including key contents of their own former vacation home, which I never saw. We moved his grandmother into assisted care several years ago and had been keeping their things until we got a larger home in California. With the house here in Pennsylvania we decided to use some of the pieces to furnish it. It also meant I have a lot of boxes to go through.

Before MJ left to head back to work in San Francisco we did get a few things unpacked, including Champagne glasses, which meant on Saturday night following the move day we were able to pick up a proper bottle of Champagne and spend the evening together in front of the fireplace to celebrate.

I’d been planning on taking some time off following the layoff from my job as I consider new opportunities in the coming year. It ended up working well since I’ve been able to do that, plus spend the past week here in the Philadelphia house unpacking and getting the house set up. Several of the days I’ve also had to be here at the house to receive deliveries and be present for installs of all kinds to make sure the house is ready and secure (cameras!) for us to properly enjoy as soon as possible. Today is window blinds day. I am getting to enjoy it some too: between all these tasks I’ve spent time with local friends and family, had some time reading in front of the fireplace, and enjoyed a beautiful new Bluetooth speaker playing music all day. The house doesn’t have a television yet, but I have also curled up to watch a few episodes on my tablet here and there in the evenings.

There have also been some great sunsets in the neighborhood. I sure missed the Pennsylvania autumn sights and smells.

And not all the unpacking has been laborious. I found MJ’s telescope from years ago in storage and I was able to set it up the other night. Looking forward to a clear night to try it out.

Tomorrow I’m flying off yet again for a conference and then to spend at least a week at home back in San Francisco. We’ll be back very soon though, planning on spending at least the eight days of Hanukkah here, and possibly flying in earlier if we can line up some of the other work we need to get done.

by pleia2 at December 05, 2016 07:21 PM

December 04, 2016

Elizabeth Krumbach

Breathtaking Barcelona

My father once told me that Madrid was his favorite city and that he generally loved Spain. When my aunt shipped me a series of family slides last year I was delighted to find ones from Madrid in the mix, and I uploaded the album: Carl A. Krumbach – Spain 1967. I wish I had asked him why he loved Madrid, but in October I had the opportunity to discover for myself why I now love Spain.

I landed in Barcelona the last week of October. First, it was a beautiful time to visit. Nice weather that wasn’t too hot or too cold. It rained overnight a couple times and a bit some days, but not enough to deter activities, and I was busy with a conference during most of the days anyway. It was also warm enough to go swimming in the Mediterranean, though I failed to avail myself of this opportunity. The day I got in I met up with a couple friends to go to the aquarium, walk around the coastline, and touch the sea for the first time. That evening I also had the first of three seafood paellas that I enjoyed throughout the week. So good.

The night life was totally a thing. Many places would offer tapas along with drinks, so one night a bunch of us went out and just ate and drank our way through the Gothic Quarter. The restaurants also served late, often not even starting dinner service until 8PM. One night at midnight we found ourselves at a steakhouse dining on a giant steak that served the table and drinking a couple bottles of cava. Oh the cava, it was plentiful and inexpensive. As someone who lives in California these days I felt a bit bad about betraying my beloved California wine, but it was really good. I also enjoyed the sangrias.

A couple mornings after evenings when I didn’t let the drinks get the better of me, I also went out for a run. Running along the beaches in Barcelona was a tiny slice of heaven. It was also wonderful to just go sit by the sea one evening when I needed some time away from conference chaos.


Seafood paella lunch for four! We also had a couple beers.

All of this happened before I even got out to do much tourist stuff. Saturday was my big day for seeing the famous sights. Early in the week I reserved tickets to see the Sagrada Familia Basilica. I like visiting religious buildings when I travel because they tend to be on the extravagant side. Plus, back at the OpenStack Summit in Paris we heard from a current architect of the building and I’ve since seen a documentary about the building and nineteenth century architect Antoni Gaudí. I was eager to see it, but nothing quite prepared me for the experience. I had tickets for 1:30PM and was there right on time.


Sagrada Familia selfie!

It was the most amazing place I’ve ever been.

The architecture sure is unusual but once you let that go and just enjoy it, everything comes together in a calming way that I’ve never quite experienced before. The use of every color through the hundreds of stained glass windows was astonishing.

I didn’t do the tower tour on this trip because once I realized how special this place was I wanted to save something new to do there the next time I visit.

The rest of my day was spent taking one of the tourist buses around town to get a taste of a bunch of the other sights. I got a glimpse of a couple more buildings by Gaudí. In the middle of the afternoon I stopped at a tapas restaurant across from La Monumental, a former bullfighting ring. Bullfighting was outlawed there several years ago, but the building is still used for other events and is worth seeing even just from the outside for its beautiful tiled exterior.

I also walked through the Arc de Triomf and made my way over to the Barcelona Cathedral. After the tour bus brought me back to the stop near my hotel I spent the rest of the late afternoon enjoying some time at the beach.

That evening I met up with my friend Clint to do one last wander around the area. We stopped at the beach and had some cava and cheese. From there we went to dinner where we split a final paella and bottle of cava. Dessert was a Catalan cream, which is a lot like a crème brûlée but with cinnamon, yum!

As much as I wanted to stay longer and enjoy the gorgeous weather, the next morning I was scheduled to return home.

I loved Barcelona. It stole my heart like no other European city ever has and it’s now easily one of my favorite cities. I’ll be returning, hopefully sooner than later.

More photos from my adventures in Barcelona here: https://www.flickr.com/photos/pleia2/albums/72157674260004081

by pleia2 at December 04, 2016 03:18 AM

December 02, 2016

Elizabeth Krumbach

OpenStack book and Infra team at the Ocata Summit

At the end of October I attended the OpenStack Ocata Summit in beautiful Barcelona. My participation in this one was bittersweet for me. It was the first summit following the release of our Common OpenStack Deployments book, and OpenStack Infrastructure tooling was featured in a short keynote on Wednesday morning, making for quite the exciting summit. Unfortunately it also marked my last week with HPE and an uncertain future with regard to my continued full time participation with the OpenStack Infrastructure team. It was also the last OpenStack Summit where the conference and design summit were hosted together, so the next several months will be worth keeping an eye on community-wise. Still, I largely took the position of assuming I’d continue to be able to work on the team, just with more caution about the work I was signing up for.

The first thing that I discovered during this summit was how amazing Barcelona is. The end of October presented us with some amazing weather for walking around and the city doesn’t go to sleep early, so we had plenty of time in the evenings to catch up with each other over drinks and scrumptious tapas. It worked out well since there were fewer sponsored parties in the evenings at this summit and attendance seemed limited at the ones that existed.

The high point for me at the summit was having the OpenStack Infrastructure tooling for handling our fleet of compute instances featured in a keynote! Given my speaking history, I was picked from the team to be up on the big stage with Jonathan Bryce to walk through a demonstration where we removed one of our US cloud providers and added three more in Europe. While the change was landing and tests started queuing up we also took time to talk about how tests are done against OpenStack patch sets across our various cloud providers.


Thanks to Johanna Koester for taking this picture (source)

It wasn’t just me presenting though. Clark Boylan and Jeremy Stanley were sitting in the front row making sure the changes landed and everything went according to plan during the brief window that this demonstration took up during the keynote. I’m thrilled to say that this live demonstration was actually the best run we had of all the testing; seeing all the tests start running on our new providers live on stage in front of such a large audience was pretty exciting. The team has built something really special here, and I’m glad I had the opportunity to help highlight that in the community with a keynote.


Mike Perez and David F. Flanders sitting next to Jeremy and Clark as they monitor demonstration progress. Photo credit for this one goes to Chris Hoge (source)

The full video of the keynote is available here: Demoing the World’s Largest Multi-Cloud CI Application

A couple of conference talks were presented by members of the Infrastructure team as well. On Tuesday Colleen Murphy, Paul Belanger and Ricardo Carrillo Cruz presented on the team’s Infra-Cloud. As I’ve written about before, the team has built a fully open source OpenStack cloud using the community Puppet modules and donated hardware and data center space from Hewlett Packard Enterprise. This talk outlined the architecture of that cloud, some of the challenges they’ve encountered, statistics from how it’s doing now and future plans. Video from their talk is here: InfraCloud, a Community Cloud Managed by the Project Infrastructure Team.

James E. Blair also gave a talk during the conference, this time on Zuul version 3. This version of Zuul has been under development for some time, so this was a good opportunity to update the community on the history of the Zuul project in general and why it exists, the status of ongoing efforts with an eye on v3, and the problems it’s trying to solve. I’m also in love with his slide deck: it was all text-based (including some “animations”!) and all with an Art Deco theme. Video from his talk is here: Zuul v3: OpenStack and Ansible Native CI/CD.

As usual, the Infrastructure team also had a series of sessions related to ongoing work. As a quick rundown, we have Etherpads (with read-only links) for all the sessions.

Friday concluded with a Contributors Meetup for the Infrastructure team in the afternoon where folks split off into small groups to tackle a series of ongoing projects together. I was also able to spend some time with the Internationalization (i18n) team that Friday afternoon. I dragged along Clark so someone else on the team could pick up where I left off in case I have less time in the future. We talked about the pending upgrade of Zanata and plans for a translations checksite, making progress on both fronts, especially when we realized that there’s a chance we could get away with just running a development version of Horizon itself, with a more stable back end.


With the i18n team!

Finally, the book! It was the first time I was able to see Matt Fischer, my contributing author, since the book came out. Catching up with him and signing a book together was fun. Thanks to my publisher I was also thrilled to donate the signed copies I brought along to the Women of OpenStack Speed Mentoring event on Tuesday morning. I wasn’t able to attend the event, but they were given out on my behalf, thanks to Nithya Ruff for handling the giveaway.


Thanks to Nithya Ruff for taking a picture of me with my book at the Women of OpenStack area of the expo hall (source) and Brent Haley for getting the picture of Lisa-Marie and me (source).

I was also invited to sit down with Lisa-Marie Namphy to chat about the book and changes to the OpenStack Infrastructure team in the Newton cycle. The increase in capacity to over 2000 test instances this past cycle was quite the milestone so I enjoyed talking about that. The full video is up on YouTube: OpenStack® Project Infra: Elizabeth K. Joseph shares how test capacity doubled in Newton

In all, it was an interesting summit with a lot of change happening in the community and with partner companies. The people that make the community are still there though and it’s always enjoyable spending time together. My next OpenStack event is coming up quickly: next week I’ll be speaking at OpenStack Days Mountain West on The OpenStack Project Continuous Integration System. I’ll also have a pile of books to give away at that event!

by pleia2 at December 02, 2016 02:58 PM

December 01, 2016

Elizabeth Krumbach

A Zoo and an Aquarium

When I was in Ohio last month for the Ohio LinuxFest I added a day on to my trip to visit the Columbus Zoo. A world-class zoo, it’s one of the few zoos in a northern state that have manatees, and their African savanna exhibit is worth visiting. I went with a couple friends I attended the conference with, one of whom was a local and offered to drive (thanks again Svetlana!).

We arrived mid-day, which was in time to see their cheetah run, where they give one of their cheetahs some exercise by having it run a quick course around what had, just moments before, been the hyena habitat. I also learned recently via ZooBorns that the Columbus Zoo is one that pairs its cheetahs with puppies from a young age. The dogs keep these big cats feeling secure with their calmness in an uncertain world; there’s an adorable article from the site here: A Cheetah and His Dog

Much to my delight, they were also selling Cheetah-and-Dog pins after the run to raise money. Yes, please!

As I said, I really enjoyed their African Savanna exhibit. It was big and sprawling and had a nice mixture of animals. The piles of lions they have were also quite the sight to behold.

Their kangaroo enclosure was open to walk through, so you could get quite close to the kangaroos just like I did at the Perth Zoo. There was also a trio of baby tigers and some mountain lions that were adorable. And then there were the manatees. I love manatees!

I’m really glad I took the time to stay longer in Columbus; I’d likely go again if I found myself in the area.

More photos from the zoo, including a tiger napping on his back, and those mountain lions here: https://www.flickr.com/photos/pleia2/albums/72157671610835663

Just a couple weeks later I found myself on another continent, and at the Barcelona Aquarium with my friends Julia and Summer. It was a sizable aquarium and really nicely laid out. Their selection of aquatic animals was diverse and interesting. In this aquarium I liked some of the smallest critters the most. Loved their seahorses.

And the axolotls.

There was also an octopus that was awake and wandering around the tank, much to the delight of the crowd.

They also had penguins, a great shark tube and tank with a moving walkway.

More photos from the Barcelona Aquarium: https://www.flickr.com/photos/pleia2/albums/72157675629122655

Barcelona also has a zoo, but in my limited time in the city I didn’t make it over there. It’s now on my very long list of other things to see the next time I’m in Barcelona, and you bet there will be a next time.

by pleia2 at December 01, 2016 03:57 AM

November 30, 2016

Eric Hammond

Amazon Polly Text To Speech With aws-cli and Twilio

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices.

Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family.
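
Polly is equally easy to call from code. Here is a minimal Python sketch using the boto3 SDK (the SSML text and output filename are just placeholders for illustration):

import boto3

# Minimal sketch: synthesize SSML speech with Amazon Polly via boto3.
polly = boto3.client("polly")

ssml = "<speak>Hello. <break time='500ms'/> This is <emphasis>Polly</emphasis> speaking.</speak>"

response = polly.synthesize_speech(
    OutputFormat="mp3",
    VoiceId="Salli",
    TextType="ssml",   # tell Polly the input is SSML rather than plain text
    Text=ssml,
)

# The audio comes back as a streaming body; write it out to a local MP3 file.
with open("speech.mp3", "wb") as f:
    f.write(response["AudioStream"].read())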

This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech.

The basic approach is:

  1. Generate the speech audio using Amazon Polly.

  2. Upload the resulting audio file to S3.

  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1- Generate Speech Audio With Amazon Polly

Here’s a simple example of how to turn text into speech, using the latest aws-cli:

text="Hello. This speech is generated using Amazon Polly. Enjoy!"
audio_file=speech.mp3

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text "$text" \
  $audio_file

You can listen to the resulting output file using your favorite audio player:

mpg123 -q $audio_file

2- Upload Audio to S3

Create or re-use an S3 bucket to store the audio files temporarily.

s3bucket=YOURBUCKETNAME
aws s3 mb s3://$s3bucket

Upload the generated speech audio file to the S3 bucket. I use a long, random key for a touch of security:

s3key=audio-for-twilio/$(uuid -v4 -FSIV).mp3
aws s3 cp --acl public-read $audio_file s3://$s3bucket/$s3key

For easy cleanup, you can use a bucket with a lifecycle that automatically deletes objects after a day or thirty. See instructions below for how to set this up.

3- Initiate Call With Twilio

Once you have set up an account with Twilio (see pointers below if you don’t have one yet), here are sample commands to initiate a phone call and play the Amazon Polly speech audio:

from_phone="+1..." # Your Twilio allocated phone number
to_phone="+1..."   # Your phone number to call

TWILIO_ACCOUNT_SID="..." # Your Twilio account SID
TWILIO_AUTH_TOKEN="..."  # Your Twilio auth token

speech_url="http://s3.amazonaws.com/$s3bucket/$s3key"
twimlet_url="http://twimlets.com/message?Message%5B0%5D=$speech_url"

curl -XPOST https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Calls.json \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=$from_phone" \
  --data-urlencode "To=to_phone" \
  --data-urlencode "Url=$twimlet_url"

The Twilio web service will return immediately after queuing the phone call. It could take a few seconds for the call to be initiated.

Make sure you listen to the phone call as soon as you answer, as Twilio starts playing the audio immediately.

The ringspeak Command

For your convenience (actually for mine), I’ve put together a command line program that turns all the above into a single command. For example, I can now type things like:

... || ringspeak --to +1NUMBER "Please review the cron job failure messages"

or:

ringspeak --at 6:30am \
  "Good morning!" \
  "Breakfast is being served now in Venetian Hall G.." \
  "Werners keynote is at 8:30."

Twilio credentials, default phone numbers, S3 bucket configuration, and Amazon Polly voice defaults can be stored in a $HOME/.ringspeak file.

Here is the source for the ringspeak command:

https://github.com/alestic/ringspeak

Tip: S3 Setup

Here is a sample command to configure an S3 bucket with automatic deletion of all keys after 1 day:

aws s3api put-bucket-lifecycle \
  --bucket "$s3bucket" \
  --lifecycle-configuration '{
    "Rules": [{
        "Status": "Enabled",
        "ID": "Delete all objects after 1 day",
        "Prefix": "",
        "Expiration": {
          "Days": 1
        }
  }]}'

This is convenient because you don’t have to worry about knowing when Twilio completes the phone call to clean up the temporary speech audio files.

Tip: Twilio Setup

This isn’t the place for an entire Twilio howto, but I will say that it is about this simple to set up:

  1. Create a Twilio account

  2. Reserve a phone number through Twilio.

  3. Find the ACCOUNT SID and AUTH TOKEN for use in Twilio API calls.

When you are using the Twilio free trial, it requires you to verify phone numbers before calling them. To call arbitrary numbers, enter your credit card and fund the minimum of $20.

Twilio will only charge you for what you use (about a dollar a month per phone number, about a penny per minute for phone calls, etc.).

Closing

A lot is possible when you start integrating Twilio with AWS. For example, my daughter developed an Alexa skill that lets her speak a message for a family member and have it delivered by phone. Alexa triggers her AWS Lambda function, which invokes the Twilio API to deliver the message by voice call.

With Amazon Polly, these types of voice applications can sound better than ever.

Original article and comments: https://alestic.com/2016/11/amazon-polly-text-to-speech/

November 30, 2016 06:30 PM

Elizabeth Krumbach

Ohio LinuxFest 2016

Last month I had the pleasure of finally attending an Ohio LinuxFest. The conference has been on my radar for years, but every year I seemed to have some kind of conflict. When my Tour of OpenStack Deployment Scenarios was accepted I was thrilled to finally be able to attend. My employer at the time also pitched in, sponsoring the conference at the Bronze level and sending along a banner that showcased my talk and my OpenStack book!

The event kicked off on Friday and the first talk I attended was by Jeff Gehlbach on What’s Happening with OpenNMS. I’ve been to several OpenNMS talks over the years and played with it some, so I knew the background of the project. This talk covered several of the latest improvements. Of particular note were some of their UI improvements, including both a website refresh and some stunning improvements to the WebUI. It was also interesting to learn about Newts, the time-series data store they’ve been developing to replace RRDtool, which they struggled to scale with their tooling. Newts is decoupled from the visualization tooling so you can hook in your own, like if you wanted to use Grafana instead.

I then went to Rob Kinyon’s Devs are from Mars, Ops are from Venus. He had some great points about communication between ops, dev and QA, starting with being aware and understanding of the fact that you all have different goals, which sometimes conflict. Pausing to make sure you know why different teams behave the way they do and knowing that they aren’t just doing it to make your life difficult, or because they’re incompetent, makes all the difference. He implored the audience to assume that we’re all smart, hard-working people trying to get our jobs done. He also touched upon improvements to communication, making sure you repeat requests in your own words so misunderstandings don’t occur due to differing vocabularies. Finally, he suggested that some cross-training happen between roles. A developer may never be able to take over full time for an operator, or vice versa, but walking a mile in someone else’s shoes helps build the awareness and understanding that he stresses is important.

The afternoon keynote was given by Catherine Devlin on Hacking Bureaucracy with 18F. She works for the government in the 18F digital services agency. Their mandate is to work with other federal agencies to improve their digital content, from websites to data delivery. She explained that, modeled after a startup, they try not to over-plan the way many government organizations do, which can lead to failure; instead they want to fail fast and keep iterating. She also said their team has a focus on hiring good people and understanding the needs of the people they serve, rather than focusing on raw technical talent and the tools. Their practices center around an open by default philosophy (see: 18F: Open source policy), so much of their work is open source and can be adopted by other agencies. They also make sure they understand the culture of organizations they work with so that the tools they develop together will actually be used, as well as respecting the domain knowledge of the teams they’re working with. Slides from her talk are here, and include lots of great links to agency tooling they’ve worked on: https://github.com/catherinedevlin/olf-2016-keynote


Catherine Devlin on 18F

That evening folks gathered in the expo hall to meet and eat! That’s where I caught up with my friends from Computer Reach. This is the non-profit I went to Ghana with back in 2012 to deploy Ubuntu-based desktops. I spent a couple weeks there with Dave, Beth Lynn and Nancy (alas, unable to come to OLF) so it was great to see them again. I learned more about the work they’re continuing to do, having switched to using mostly Xubuntu on new installs, which was written about here. On a personal level it was a lot of fun connecting with them too; we really bonded during our adventures over there.


Tyler Lamb, Dave Sevick, Elizabeth K. Joseph, Beth Lynn Eicher

Saturday morning began with a keynote from Ethan Galstad on Becoming the Next Tech Entrepreneur. Ethan is the founder of Nagios, and in his talk he traced some of the history of his work on getting Nagios off the ground as a proper project and company and his belief in why technologists make good founders. In his work he drew from his industry and market expertise as a technologist and was able to play to the niche he was focused on. He also suggested that folks look to what other founders have done that has been successful, and recommended some books (notably Founders at Work and Work the System). Finally, he walked through some of what can be done to get started, including the stages of idea development, a basic business plan (don’t go crazy), a rough 1.0 release that you can have some early customers test and get feedback from, and then marketing, documenting and focused product development. He concluded by stressing that open source project leaders are already entrepreneurs and the free users of your software are your initial market.

Next up was Robert Foreman’s Mixed Metaphors: Using Hiera with Foreman where he sketched out the work they’ve done that preserves usage of Hiera’s key-value store system but leverages Foreman for the actual orchestration. The mixing of provisioning and orchestration technologies is becoming more common, but I hadn’t seen this particular mashup.

My talk was A Tour of OpenStack Deployment Scenarios. This is the same talk I gave at FOSSCON back in August, walking the audience through a series of ways that OpenStack could be configured to provide compute instances, metering and two types of storage. For each I gave a live demo using DevStack. I also talked about several other popular components that could be added to a deployment. Slides from my talk are here (PDF), which also link to a text document with instructions for how to run the DevStack demos yourself.


Thanks to Vitaliy Matiyash for taking a picture during my talk! (source)

At lunch I met up with my Ubuntu friends to catch up. We later met at the booth where they had a few Ubuntu phones and tablets that gained a bunch of attention throughout the event. This event was also my first opportunity to meet Unit193 and Svetlana Belkin in person, both of whom I’ve worked with on Ubuntu for years.


Unit193, Svetlana Belkin, José Antonio Rey, Elizabeth K. Joseph and Nathan Handler

After lunch I went over to see David Griggs of Dell give us “A Look Under the Hood of Ohio Supercomputer Center’s Newest Linux Cluster.” Supercomputers are cool and it was interesting to learn about the system it was replacing, the planning that went into the replacement and workload cut-over and see in-progress photos of the installation. From there I saw Ryan Saunders speak on Automating Monitoring with Puppet and Shinken. I wasn’t super familiar with the Shinken monitoring framework, so this talk was an interesting and very applicable demonstration of the benefits.

The last talk I went to before the closing keynotes was from my Computer Reach friends Dave Sevick and Tyler Lamb. They presented their “Island Server” imaging server that’s now being used to image all of the machines that they re-purpose and deploy around the world. With this new imaging server they’re able to image both Mac and Linux PCs from one Macbook Pro rather than having a different imaging server for each. They were also able to do a live demo of a Mac and Linux PC being imaged from the same Island Server at once.


Tyler and Dave with the Island Server in action

The event concluded with a closing keynote by a father and daughter duo, Joe and Lily Born, on The Democratization of Invention. Joe Born first found fame in the 90s when he invented the SkipDoctor CD repair device, and is now the CEO of Aiwa, which produces highly rated Bluetooth speakers. Lily Born invented the tip-proof Kangaroo Cup. The pair reflected on their work and how the path from an idea to a product in the hands of customers has changed in the past twenty years. While the path to selling the SkipDoctor had a very high barrier to entry, globalization, crowd-funding, 3D printers, internet-driven word of mouth and greater access to the press all played a part in the success of Lily’s Kangaroo Cup and the new Aiwa Bluetooth speakers. While I have no plans to invent anything any time soon (so much else to do!) it was inspiring to hear how the barriers have been lowered and inventors today have a lot more options. Also, I just bought an Aiwa Exos-9 Bluetooth Speaker, it’s pretty sweet.

My conference adventures concluded with a dinner with my friends José, Nathan and David, all three of whom I also spent time with at FOSSCON in Philadelphia the month before. It was fun getting together again, and we wandered around downtown Columbus until we found a nice little pizzeria. Good times.

More photos from the Ohio LinuxFest here: https://www.flickr.com/photos/pleia2/albums/72157674988712556

by pleia2 at November 30, 2016 06:29 PM

November 29, 2016

Jono Bacon

Luma Giveaway Winner – Garrett Nay

A little while back I kicked off a competition to give away a Luma Wifi Set.

The challenge? Share a great community that you feel does wonderful work. The most interesting one, according to yours truly, gets the prize.

Well, I am delighted to share that Garrett Nay bags the prize for sharing the following in his comment:

I don’t know if this counts, since I don’t live in Seattle and can’t be a part of this community, but I’m in a new group in Salt Lake City that’s modeled after it. The group is Story Games Seattle: http://www.meetup.com/Story-Games-Seattle/. They get together on a weekly+ basis to play story games, which are like role-playing games but have a stronger emphasis on giving everyone at the table the power to shape the story (this short video gives a good introduction to what story games are all about, featuring members of the group:

Story Games from Candace Fields on Vimeo.)

Story games seem to scratch a creative itch that I have, but it’s usually tough to find friends who are willing to play them, so a group dedicated to them is amazing to me.

Getting started in RPGs and story games is intimidating, but this group is very welcoming to newcomers. The front page says that no experience with role-playing is required, and they insist in their FAQ that you’ll be surprised at what you’ll be able to create with these games even if you’ve never done it before. We’ve tried to take this same approach with our local group.

In addition to playing published games, they also regularly playtest games being developed by members of the group or others. As far as productivity goes, some of the best known story games have come from members of this and sister groups. A few examples I’m aware of are Microscope, Kingdom, Follow, Downfall, and Eden. I’ve personally played Microscope and can say that it is well designed and very polished, definitely a product of years of playtesting.

They’re also productive and engaging in that they keep a record on the forums of all the games they play each week, sometimes including descriptions of the stories they created and how the games went. I find this very useful because I’m always on the lookout for new story games to try out. I kind of wish I lived in Seattle and could join the story games community, but hopefully we can get our fledgling group in Salt Lake up to the standard they have set.

What struck me about this example was that it gets to the heart of what community should be and often is – providing a welcoming, supportive environment for people with like-minded ideas and interests.

While much of my work focuses on the complexities of building collaborative communities with the intricacies of how people work together, we should always remember the huge value of what I refer to as read communities where people simply get together to have fun with each other. Garrett’s example was a perfect summary of a group doing great work here.

Thanks everyone for your suggestions, congratulations to Garrett for winning the prize, and thanks to Luma for providing the prize. Garrett, your Luma will be in the mail soon!

The post Luma Giveaway Winner – Garrett Nay appeared first on Jono Bacon.

by Jono Bacon at November 29, 2016 12:08 AM

November 23, 2016

Elizabeth Krumbach

Holiday cards 2016!

Every year I send out a big batch of winter-themed holiday cards to friends and acquaintances online.

Reading this? That means you! Even if you’re outside the United States!

Send me an email at lyz@princessleia.com with your postal mailing address. Please put “Holiday Card” in the subject so I can filter it appropriately. Please do this even if I’ve sent you a card in the past, I won’t be reusing the list from last year.

Typical disclaimer: My husband is Jewish and we celebrate Hanukkah, but the cards are non-religious, with some variation of “Happy Holidays” or “Season’s Greetings” on them.

by pleia2 at November 23, 2016 07:06 PM

Jono Bacon

Microsoft and Open Source: A New Era?

Last week the Linux Foundation announced Microsoft becoming a Platinum member.

In the eyes of some, hell finally froze over. For many though, myself included, this was not an entirely surprising move. Microsoft are becoming an increasingly active member of the open source community, and they deserve credit for this continual stream of improvements.

When I first discovered open source in 1998, the big M were painted as a bit of a villain. This accusation was largely fair. The company went to great lengths to discredit open source, including comparing Linux to a cancer, patent litigation, and campaigns formed of misinformation and FUD. This rightly left a rather sour taste in the mouths of open source supporters.

The remnants of that sour taste are still strong in some. These folks will likely never trust the Redmond mammoth, their decisions, or their intent. While I am not condoning these prior actions from the company, I would argue that the steady stream of forward progress means that…and I know this will be a tough pill to swallow for some of you…it is time to forgive and forget.

Forward Progress

This forward progress is impressive. They released their version of FreeBSD for Azure. They partnered with Canonical to bring Ubuntu user-space to Windows (as well as supporting Debian on Azure and even building their own Linux distribution, the Azure Cloud Switch). They supported an open source version of .NET, known as Mono, later buying Xamarin who led this development and open sourced those components. They brought .NET core to Linux, started their own Linux certification, released a litany of projects (including Visual Studio Code) as open source, founded the Microsoft Open Technologies group, and then later merged the group into the wider organization as openness was a core part of the company.

Microsoft's Satya Nadella seems to have fallen in love.

Satya Nadella, seemingly doing a puppet show, without the puppet.

My personal experience with them has reflected this trend. I first got to know the company back in 2001 when I spoke at a DeveloperDeveloperDeveloper day in the UK. Over the years I flew out to Redmond to provide input on initiatives such as .NET, got to know the Microsoft Open Technologies group, and most recently signed the company as a client where I am helping them to build the next generation of their MVP and RD community. Microsoft are not begrudgingly supporting open source, they are actively pursuing it.

As such, this recent announcement from the Linux Foundation wasn’t a huge surprise to me, but was an impressive formal articulation of Microsoft’s commitment to open source. Leaders at Microsoft and the Linux Foundation should both be credited with this additional important step in the right direction, not just for Microsoft, but for the wider acceptance and growth of open source and collaboration.

Work In Progress

Now, some of the critics will be reading this and will cite many examples of Microsoft still acting as the big bad wolf. You are perfectly right to do so. So, let me zone in on this.

I am not suggesting they are perfect. They aren’t. Companies are merely vessels of people, some of whom will continue to have antiquated perspectives. Microsoft will be no different here. Of course, for all the great steps forward, sometimes there will be some steps back.

What I try to focus on however is the larger story and trends. I would argue that Microsoft is trending in the right direction based on many of their recent moves, including the ones I cited above.

Let’s not forget that this is a big company to turn around. With 114,000 employees and 40+ years of cultural heritage and norms, change understandably takes time. The challenge with change is that it doesn’t just require strategic, product, and leadership focus, but the real challenge is cultural change.

Culture at Microsoft seems to be changing.

Culture is something of an amorphous concept. It isn’t a specific thing you can point to. Culture is instead the aggregate of the actions and intent of the many. You can often make strategic changes that result in new products, services, and projects, but those achievements could be underpinned by a broken, divisive, and ugly culture.

As such, culture is hard to change and requires a mix of top-down leadership and bottom-up action.

From my experience of working with Microsoft, the move to a more open company is not merely based on product-level decisions; it has percolated into the core culture of how the company operates. I have seen this in my day to day interactions with the company and with my consulting work there. I credit Satya Nadella (and likely many others) for helping to trigger these positive forward motions.

So, are they perfect? No. Are they an entirely different company? No. But are they making a concerted thoughtful effort to really understand and integrate openness into the company? I think so. Is the work complete? No. But do they deserve our support as fellow friends in the open source community? I believe so, yes.

The post Microsoft and Open Source: A New Era? appeared first on Jono Bacon.

by Jono Bacon at November 23, 2016 04:00 PM

November 22, 2016

Jono Bacon

2017 Community Leadership Events: An Update

This week I was delighted to see that we could take the wraps off a new event that I am running in conjunction with my friends at the Linux Foundation called the Community Leadership Conference. The event will be part of the Open Source Summit, which was previously known as LinuxCon, and I will be running it in Los Angeles from 11th – 13th Sep 2017 and in Prague from 23rd – 25th Oct 2017.

Now, some of you may be wondering if this replaces or is different to the Community Leadership Summit in Portland/Austin. Let me add some clarity.

The Community Leadership Summit

The Community Leadership Summit takes place each year the weekend before OSCON. I can confirm that there will be another Community Leadership Summit in 2017 in Austin. We plan to announce this soon formally.

The Community Leadership Summit has the primary goal of bringing together community managers from around the world to discuss and debate community leadership principles. The event is an unconference and is focused on discussions as opposed to formal presentations. As such, and as with any unconference, the thrill of the event is the organic schedule and the conversations that follow. Thus, CLS is a great event for those who are interested in playing an active role in furthering the art and science of community leadership more broadly in an organic way.

The Community Leadership Conference

The Community Leadership Conference, which will be part of the Open Source Summit in Los Angeles and Prague, has a slightly different format and focus.

CLC will instead be a traditional conference. My goal here is to bring together speakers from around the world to deliver presentations, panels, and other material that shares best practices, methods, and approaches in community leadership, specific to open source. CLC is not intended to shape the future of community leadership, but more to present best practices and principles for consumption, tailored to the needs of open source projects and organizations.

In Summary

So, in summary, the Community Leadership Conference is designed to be a place to consume community leadership best practices and principles via carefully curated presentations, panels, and networking. The Community Leadership Summit is designed to be more of an informal roll-your-sleeves up summit where attendees discuss and debate community leadership to help shape how it evolves and grows.

As regular readers will know, I am passionate about evolving the art and science of community leadership and while CLS has been an important component in this evolution, I felt we needed to augment it with CLC. These two events, combined with the respective audiences of their shared conferences, and bolstered by my wonderful friends at O’Reilly and the Linux Foundation, are going to help us to evolve this art and science faster and more efficiently than ever.

I hope to see you all at either or both of these events!

The post 2017 Community Leadership Events: An Update appeared first on Jono Bacon.

by Jono Bacon at November 22, 2016 06:12 AM

November 21, 2016

Eric Hammond

Watching AWS CloudFormation Stack Status

live display of current event status for each stack resource

Would you like to be able to watch the progress of your new CloudFormation stack resources like this?

That’s what the output of the new aws-cloudformation-stack-status command looks like when I launch a new AWS Git-backed Static Website CloudFormation stack.

It shows me in real time which resources have completed, which are still in progress, and which, if any, have experienced problems.

Background

AWS provides a few ways to look at the status of resources in a CloudFormation stack including the stream of stack events in the Web console and in the aws-cli.

Unfortunately, these displays show multiple events for each resource (e.g., CREATE_IN_PROGRESS, CREATE_COMPLETE) and it’s difficult to match up all of the resource events by hand to figure out which resources are incomplete and still in progress.

Solution

I created a bit of wrapper code that goes around the aws cloudformation describe-stack-events command. It performs these operations (a rough sketch of the core idea follows the list):

  1. Cuts the output down to the few fields that matter: status, resource name, type, event time.

  2. Removes all but the most recent status event for each stack resource.

  3. Sorts the output to put the resources with the most recent status changes at the top.

  4. Repeatedly runs this command so that you can see the stack progress live and know exactly which resource is taking the longest.
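
For a sense of how the filtering works, here is a rough Python sketch of the core idea using boto3. It is not the actual script linked below, it only reads the first page of events, and the stack name is a placeholder:

import operator

import boto3

def latest_resource_events(stack_name):
    # Fetch recent stack events, newest first; a real tool would paginate.
    cloudformation = boto3.client("cloudformation")
    events = cloudformation.describe_stack_events(StackName=stack_name)["StackEvents"]

    # Keep only the most recent status event for each stack resource.
    latest = {}
    for event in events:
        latest.setdefault(event["LogicalResourceId"], event)

    # Sort so resources with the most recent status changes come first.
    return sorted(latest.values(),
                  key=operator.itemgetter("Timestamp"), reverse=True)

for event in latest_resource_events("my-stack"):  # "my-stack" is a placeholder
    print(event["ResourceStatus"], event["LogicalResourceId"], event["ResourceType"])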

I tossed the simple script up here in case you’d like to try it out:

GitHub: aws-cloudformation-stack-status

You can run it to monitor your CloudFormation stack with this command:

aws-cloudformation-stack-status --watch --region $region --stack-name $stack

Interrupt with Ctrl-C to exit.

Note: You will probably need to start your terminal out wider than 80 columns for a clean presentation.

Note: This does use the aws-cli, so installing and configuring that is a prerequisite.

Stack Delete Example

Here’s another example terminal session watching a stack-delete operation, including some skipped deletions (because of a retention policy). It finally ends with a "stack not found" error, which is exactly what we hope for after a stack has been deleted successfully. Again, the resources with the most recent state change events are at the top.

Note: These sample terminal replays cut out almost 40 minutes of waiting for the creation and deletion of the CloudFront distributions. You can see the real timestamps in the rightmost columns.

Original article and comments: https://alestic.com/2016/11/aws-cloudformation-stack-status/

November 21, 2016 09:00 AM

November 14, 2016

Eric Hammond

Optional Parameters For Pre-existing Resources in AWS CloudFormation Templates

stack creates new AWS resources unless user specifies pre-existing

Background

I like to design CloudFormation templates that create all of the resources necessary to implement the desired functionality without requiring a lot of separate, advanced setup. For example, the AWS Git-backed Static Website creates all of the interesting pieces including a CodeCommit Git repository, S3 buckets for web site content and logging, and even the Route 53 hosted zone.

Creating all of these resources is great if you were starting from scratch on a new project. However, you may sometimes want to use a CloudFormation template to enhance an existing account where one or more of the AWS resources already exist.

For example, consider the case where the user already has a CodeCommit Git repository and a Route 53 hosted zone for their domain. They still want all of the enhanced functionality provided in the Git-backed static website CloudFormation stack, but would rather not have to fork and edit the template code just to fit it in to the existing environment.

What if we could use the same CloudFormation template for different types of situations, sometimes plugging in pre-existing AWS resources, and other times letting the stack create the resources for us?

Solution

With assistance from Ryan Scott Brown, the Git-backed static website CloudFormation template now allows the user to optionally specify a number of pre-existing resources to be integrated into the new stack. If any of those parameters are left empty, then the CloudFormation template automatically creates the required resources.

Let’s walk through relevant pieces of the CloudFormation template code using the CodeCommit Git repository as an example of an optional resource. [Note: Code excerpts below may have been abbreviated and slightly altered for article clarity.]

In the CloudFormation template Parameters section, we allow the user to pass in the name of a CodeCommit Git repository that was previously created in the AWS account. If this parameter is specified, then the CloudFormation template uses the pre-existing repository in the new stack. If the parameter is left empty when the template is run, then the CloudFormation stack will create a new CodeCommit Git repository.

Parameters:
  PreExistingGitRepository:
    Description: "Optional Git repository name for pre-existing CodeCommit repository. Leave empty to have CodeCommit Repository created and managed by this stack."
    Type: String
    Default: ""

We add an entry to the Conditions section in the CloudFormation template that will indicate whether or not a pre-existing CodeCommit Git repository name was provided. If the parameter is empty, then we will need to create a new repository.

Conditions:
  NeedsNewGitRepository: !Equals [!Ref PreExistingGitRepository, ""]

In the Resources section, we create a new CodeCommit Git repository, but only on the condition that we need a new one (i.e., the user did not specify one in the parameters). If a pre-existing CodeCommit Git repository name was specified in the stack parameters, then this resource creation will be skipped entirely.

Resources:
  GitRepository:
    Condition: NeedsNewGitRepository
    Type: "AWS::CodeCommit::Repository"
    Properties:
      RepositoryName: !Ref GitRepositoryName
    DeletionPolicy: Retain

We then come to parts of the CloudFormation template where other resources need to refer to the CodeCommit Git repository. We need to use an If conditional to refer to the correct resource, since it might be a pre-existing one passed in a parameter or it might be one created in this stack.

Here’s an example where the CodePipeline resource needs to specify the Git repository name as the source of a pipeline stage.

Resources:
  CodePipeline:
    Type: "AWS::CodePipeline::Pipeline"
    [...]
      RepositoryName: !If [NeedsNewGitRepository, !Ref GitRepositoryName, !Ref PreExistingGitRepository]

We use the same conditional to place the name of the Git repository in the CloudFormation stack outputs so that the user can easily find out what repository is being used by the stack.

Outputs:
  GitRepositoryName:
    Description: Git repository name
    Value: !If [NeedsNewGitRepository, !Ref GitRepositoryName, !Ref PreExistingGitRepository]

We also want to show the URL for cloning the repository. If we created the repository in the stack, this is an easy attribute to query. If a pre-existing repository name was passed in, we can’t determine the correct URL; so we just output that it is not available and hope the user remembers how to access the repository they created in the past.

Outputs:
  GitCloneUrlHttp:
    Description: Git https clone endpoint
    Value: !If [NeedsNewGitRepository, !GetAtt GitRepository.CloneUrlHttp, "N/A"]

Read more from Amazon about the AWS CloudFormation Conditions that are used in this template.

Replacing a Stack Without Losing Important Resources

You may have noticed in the above code that we specify a DeletionPolicy of Retain for the CodeCommit Git repository. This keeps the repository from being deleted if and when the CloudFormation stack is deleted.

This prevents the accidental loss of what may be the master copy of the website source. It may still be deleted manually if you no longer need it after deleting the stack.

A number of resources in the Git-backed static website stack are retained, including the Route53 hosted zone, various S3 buckets, and the CodeCommit Git repository. Not coincidentally, all of these retained resources can be subsequently passed back into a new stack as pre-existing resources!

Though CloudFormation stacks can often be updated in place, sometimes I like to replace them with completely different templates. It is convenient to leave foundational components in place while deleting and replacing the other stack resources that connect them.

Original article and comments: https://alestic.com/2016/11/aws-cloudformation-optional-resources/

November 14, 2016 11:00 AM

November 13, 2016

Eric Hammond

Alestic.com Blog Infrastructure Upgrade

publishing new blog posts with “git push”

For the curious, the Alestic.com blog has been running for a while on the Git-backed Static Website CloudFormation stack using the AWS Lambda Static Site Generator Plugin for Hugo.

Not much has changed in the design because I had been using Hugo before. However, Hugo is now automatically run inside of an AWS Lambda function triggered by updates to a CodeCommit Git repository.

It has been a pleasure writing with transparent review and publication processes enabled by Hugo and AWS:

  • When I save a blog post change in my editor (written using Markdown), a local Hugo process on my laptop automatically detects the file change, regenerates static pages, and refreshes the view in my browser.

  • When I commit and push blog post changes to my CodeCommit Git repository, the Git-backed Static Website stack automatically regenerates the static blog site using Hugo and deploys to the live website served by AWS.

Blog posts I don’t want to go public yet can be marked as “draft” using Hugo’s content metadata format.

Bigger site changes can be developed and reviewed in a Git feature branch and merged to “master” when completed, automatically triggering publication.

I love it when technology gets out of my way and lets me focus on being productive.

Original article and comments: https://alestic.com/2016/11/alestic-blog-stack/

November 13, 2016 03:10 AM

November 07, 2016

Eric Hammond

Running aws-cli Commands Inside An AWS Lambda Function

even though aws-cli is not available by default in AWS Lambda

The AWS Lambda environments for each programming language (e.g., Python, Node, Java) already have the AWS client SDK packages pre-installed for those languages. For example, the Python AWS Lambda environment has boto3 available, which is ideal for connecting to and using AWS services in your function.

This makes it easy to use AWS Lambda as the glue for AWS. A function can be triggered by many different service events, and can respond by reading from, storing to, and triggering other services in the AWS ecosystem.
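
For example, a bare-bones Python Lambda handler can call AWS services through the pre-installed boto3 package; this toy sketch (with a made-up return value) just counts the S3 buckets in the account:

import boto3

# boto3 comes pre-installed in the Python AWS Lambda environment.
s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Toy example: respond to any trigger by counting the account's S3 buckets.
    buckets = s3.list_buckets()["Buckets"]
    return {"bucket_count": len(buckets)}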

However, there are a few things that aws-cli currently does better than the AWS SDKs alone. For example, the following command is an efficient way to take the files in a local directory and recursively update a website bucket, uploading (in parallel) files that have changed, while setting important object attributes, including guessed MIME types:

aws s3 sync --delete --acl public-read LOCALDIR/ s3://BUCKET/

The aws-cli software is not currently pre-installed in the AWS Lambda environment, but we can fix that with a little effort.

Background

The key to solving this is to remember that aws-cli is available as a Python package. Mitch Garnaat reminded me of this when I was lamenting the lack of aws-cli in AWS Lambda, causing me to smack my virtual forehead. Amazon has already taught us how to install most Python packages, and we can apply the same process for aws-cli, though a little extra work is required, because a command line program is involved.

NodeJS/Java/Go developers: Don’t stop reading! We are using Python to install aws-cli, true, but what gets installed is a command line program. Once the command is installed in the AWS Lambda environment, you can invoke it using the system-command execution functions in your respective languages.

Steps

Here are the steps I followed to add aws-cli to my AWS Lambda function. Adjust to suit your particular preferred way of building AWS Lambda functions.

Create a temporary directory to work in, including paths for a temporary virtualenv, and an output ZIP file:

tmpdir=$(mktemp -d /tmp/lambda-XXXXXX)
virtualenv=$tmpdir/virtual-env
zipfile=$tmpdir/lambda.zip

Create the virtualenv and install the aws-cli Python package into it using a subshell:

(
  virtualenv $virtualenv
  source $virtualenv/bin/activate
  pip install awscli
)

Copy the aws command file into the ZIP file, but adjust the first (shebang) line so that it will run with the system python command in the AWS Lambda environment, instead of assuming python is in the virtualenv on our local system. This is the valuable nugget of information buried deep in this article!

rsync -va $virtualenv/bin/aws $tmpdir/aws
perl -pi -e '$_ = "#!/usr/bin/python\n" if $. == 1' $tmpdir/aws
(cd $tmpdir; zip -r9 $zipfile aws)

Copy the Python packages required for aws-cli into the ZIP file:

(cd $virtualenv/lib/python2.7/site-packages; zip -r9 $zipfile .)

Copy in your AWS Lambda function, other packages, configuration, and other files needed by the function code. These don’t need to be in Python.

cd YOURLAMBDADIR
zip -r9 $zipfile YOURFILES

Upload the ZIP file to S3 (or directly to AWS Lambda) and clean up:

aws s3 cp $zipfile s3://YOURBUCKET/YOURKEY.zip
rm -r $tmpdir

In your Lambda function, you can invoke aws-cli commands. For example, in Python, you might use:

import subprocess
command = ["./aws", "s3", "sync", "--acl", "public-read", "--delete",
           source_dir + "/", "s3://" + to_bucket + "/"]
print(subprocess.check_output(command, stderr=subprocess.STDOUT))

Note that you will need to specify the location of the aws command with a leading "./", or you could add /var/task (the function’s working directory) to the $PATH environment variable.
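
Here is a small sketch (not from the original article) of the $PATH approach in Python: prepend /var/task before invoking the command, so that a plain "aws" is found:

import os
import subprocess

# Prepend the Lambda function's working directory, where the bundled
# aws command lives, to PATH so "aws" can be invoked without "./".
os.environ["PATH"] = "/var/task:" + os.environ.get("PATH", "")

print(subprocess.check_output(["aws", "--version"],
                              stderr=subprocess.STDOUT))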

Example

This approach is used to add the aws-cli command to the AWS Lambda function used by the AWS Git-backed Static Website CloudFormation stack.

You can see the code that builds the AWS Lambda function ZIP file here, including the installation of the aws-cli command:

https://github.com/alestic/aws-git-backed-static-website/blob/master/build-upload-aws-lambda-function

Notes

  • I would still love to see aws-cli pre-installed on all the AWS Lambda environments. This simple change would remove quite a bit of setup complexity and would even let me drop my AWS Lambda function inline in the CloudFormation template. Eliminating the external dependency and having everything in one file would be huge!

  • I had success building awscli on Ubuntu for use in AWS Lambda, probably because all of the package requirements are pure Python. This approach does not always work. It is recommended you build packages on Amazon Linux so that they are compatible with the AWS Lambda environment.

  • The pip install -t DIR approach did not work for aws-cli when I tried it, which is why I went with virtualenv. Tips welcomed.

  • I am not an expert at virtualenv or Python, but I am persistent when I want to figure out how to get things to work. The above approach worked. I welcome improvements and suggestions from the experts.

Original article and comments: https://alestic.com/2016/11/aws-lambda-awscli/

November 07, 2016 03:00 PM

November 02, 2016

Jono Bacon

Luma Wifi Review and Giveaway

For some reason, wifi has always been the bane of my technological existence. Every house, every router, every cable provider…I have always suffered from bad wifi. I have tried to fix it and in most cases failed.

As such, I was rather excited when I discovered the Luma a little while ago. Put simply, the Luma is a wifi access point, but it comes in multiple units to provide repeaters around your home. The promise of Luma is that this makes it easier to bathe your home in fast and efficient wifi, and comes with other perks such as enhanced security, access controls and more.

So, I pre-ordered one and it arrived recently.

I rather like the Luma so I figured I would write up some thoughts. Stay tuned though, because I am also going to give one away to a lucky reader.

Setup

When it arrived I set it up and followed the configuration process. This was about as simple as you can imagine. The set came with three of these:

[Luma unit on a kitchen counter]

I plugged each one in turn and the Android app detected each one and configured it. It even recommended where in the house I should put them.

So, I plonked the different Lumas around my house and I was getting pretty respectable speeds.

Usage

Of course, the very best wifi routers blend into the background and don’t require any attention. After a few weeks of use, this has been the case with the Luma. They just sit there working and we have had great wifi across the house.

There are though some interesting features in the app that are handy to have.

Firstly, device management is simple. You can view, remove, and block Internet access to different devices and even group devices by person. So, for example, if your neighbors use your Internet from time to time, you can group their devices and switch them off if you need precious bandwidth.

Viewing these devices from an app and not an archaic admin panel also makes auditing devices simple. For example, I saw two weird-looking devices on our network and after some research they turned out to be Kindles. Thanks, Amazon, for not having descriptive identifiers for the devices, by the way. 😉

Another neat feature is content filtering. If you have kids and don’t want them to see some naughty content online, you can filter by device or across the whole network. You could also switch off their access when dinner is ready.

So, overall, I am pretty happy with the Luma. Great hardware, simple setup, and reliable execution.

Win a Luma

I want to say a huge thank-you to the kind folks at Luma, because they provided me with an additional Luma to give away here!

Participating is simple. As you know, my true passion in life is building powerful, connected, and productive communities. So, unsurprisingly, I have a question that relates to community:

What is the most interesting, productive, and engaging community you have ever seen?

To participate, simply share your answer as a comment on this post. Be sure to tell me which community you are nominating, share pertinent links, and tell me why that community is doing great work. These don’t have to be tech communities – they can be anything: craft, arts, sports, charities, or anything else. I want you to sell me on why the community is interesting and does great work.

Please note: if you include a lot of links, or haven’t posted here before, sometimes comments get stuck in the moderation queue. Rest assured though, I am regularly reviewing the queue and your comment will appear – please don’t submit multiple comments that are the same!

The deadline for submissions is 12pm Pacific time on Fri 18th Nov 2016.

I will then pick my favorite answer and announce the winner. My decision is final and based on what I consider to be the most interesting submission, so no complaining, people. Thanks again to Luma for the kind provision of the prize!

The post Luma Wifi Review and Giveaway appeared first on Jono Bacon.

by Jono Bacon at November 02, 2016 03:25 PM

October 31, 2016

Eric Hammond

AWS Lambda Static Site Generator Plugins

starting with Hugo!

A week ago, I presented a CloudFormation template for an AWS Git-backed Static Website stack. If you are not familiar with it, please click through and review the features of this complete Git + static website CloudFormation stack.

This weekend, I extended the stack to support a plugin architecture to run the static site generator of your choosing against your CodeCommit Git repository content. You specify the AWS Lambda function at stack launch time using CloudFormation parameters (ZIP location in S3).

The first serious static site generator plugin is for Hugo, but others can be added with or without my involvement and used with the same unmodified CloudFormation template.

The Git-backed static website stack automatically invokes the static site generator whenever the site source is updated in the CodeCommit Git repository. It then syncs the generated static website content to the S3 bucket where the stack serves it over a CDN using https with DNS served by Route 53.

I have written three AWS Lambda static site generator plugins to demonstrate the concept and to serve as templates for new plugins:

  1. Identity transformation plugin - This copies the entire Git repository content to the static website with no modifications. This is currently the default plugin for the static website CloudFormation template.

  2. Subdirectory plugin - This plugin is useful if your Git repository has files that should not be included as part of the static site. It publishes a specified subdirectory (e.g., “htdocs” or “public-html”) as the static website, keeping the rest of your repository private.

  3. Hugo plugin - This plugin runs the popular Hugo static site generator. The Git repository should include all source templates, content, theme, and config.

You are welcome to use any of these plugins when running an AWS Git-backed Static Website stack. The documentation in each of the above plugin repositories describes how to set the CloudFormation template parameters on stack create.

You may also write your own AWS Lambda function static site generator plugin using one of the above as a starting point. Let me know if you write plugins; I may add new ones to the list above.

The sample AWS Lambda handler plugin code takes care of downloading the source and uploading the resulting site, and can be copied as is. All you have to do is fill in the “generate_static_site” code to generate the site from the source.

The plugin code for Hugo is basically this:

import shlex
import subprocess

def generate_static_site(source_dir, site_dir, user_parameters):
    # Run the bundled hugo binary against the repository source,
    # passing through any extra flags supplied as user parameters.
    command = ["./hugo", "--source=" + source_dir, "--destination=" + site_dir]
    if user_parameters.startswith("-"):
        command.extend(shlex.split(user_parameters))
    print(subprocess.check_output(command, stderr=subprocess.STDOUT))
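
For comparison, the generate_static_site function for a subdirectory-style plugin (like the one described above) could be little more than a directory copy. This is only a rough sketch, not the actual plugin’s code, and the “htdocs” default is an assumed example:

import os
from distutils.dir_util import copy_tree

def generate_static_site(source_dir, site_dir, user_parameters):
    # Publish only one subdirectory of the repository as the website,
    # defaulting to "htdocs" if no user parameter is supplied.
    subdir = user_parameters.strip() or "htdocs"
    copy_tree(os.path.join(source_dir, subdir), site_dir)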

I have provided build scripts so that you can build the sample AWS Lambda functions yourself, because you shouldn’t trust other people’s blackbox code if you can help it. That said, I have also made it easy to use pre-built AWS Lambda function ZIP files to try this out.

These CloudFormation template and AWS Lambda functions are very new and somewhat experimental. Please let me know where you run into issues using them and I’ll update documentation. I also welcome pull requests, especially if you work with me in advance to make sure the proposed changes fit the vision for this stack.

Original article and comments: https://alestic.com/2016/10/aws-static-site-generator-plugins/

October 31, 2016 09:41 AM

October 24, 2016

Eric Hammond

AWS Git-backed Static Website

with automatic updates on changes in CodeCommit Git repository

A number of CloudFormation templates have been published that generate AWS infrastructure to support a static website. I’ll toss another one into the ring with a feature I haven’t seen yet.

In this stack, changes to the CodeCommit Git repository automatically trigger an update to the content served by the static website. This automatic update is performed using CodePipeline and AWS Lambda.

This stack also includes features like HTTPS (with a free certificate), www redirect, email notification of Git updates, complete DNS support, web site access logs, infinite scaling, zero maintenance, and low cost.

One of the most exciting features is the launch-time ability to specify an AWS Lambda function plugin (ZIP file) that defines a static site generator to run on the Git repository site source before deploying to the static website. A sample plugin is provided for the popular Hugo static site generator.

Here is an architecture diagram outlining the various AWS services used in this stack. The arrows indicate the major direction of data flow. The heavy arrows indicate the flow of website content.

CloudFormation stack architecture diagram

Sure, this does look a bit complicated for something as simple as a static web site. But remember, this is all set up for you with a simple aws-cli command (or AWS Web Console button push) and there is nothing you need to maintain except the web site content in a Git repository. All of the AWS components are managed, scaled, replicated, protected, monitored, and repaired by Amazon.

The input to the CloudFormation stack includes:

  • Domain name for the static website

  • Email address to be notified of Git repository changes

The output of the CloudFormation stack includes:

  • DNS nameservers for you to set in your domain registrar

  • Git repository endpoint URL

Though I created this primarily as a proof of concept and demonstration of some nice CloudFormation and AWS service features, this stack is suitable for use in a production environment if its features match your requirements.

Speaking of which, no CloudFormation template meets everybody’s needs. For example, this one conveniently provides complete DNS nameservers for your domain. However, that also means that it assumes you only want a static website for your domain name and nothing else. If you need email or other services associated with the domain, you will need to modify the CloudFormation template, or use another approach.

How to run

To fire up an AWS Git-backed Static Website CloudFormation stack, you can click this button and fill out a couple input fields in the AWS console:

Launch CloudFormation stack

I have provided copy+paste aws-cli commands in the GitHub repository. The GitHub repository provides all the source for this stack including the AWS Lambda function that syncs Git repository content to the website S3 bucket:

AWS Git-backed Static Website GitHub repo

If you have aws-cli set up, you might find it easier to use the provided commands than the AWS web console.
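
If you would rather script the launch yourself, here is a rough boto3 sketch. The template URL and parameter names below are placeholders, not the stack’s actual values; use the copy+paste commands in the GitHub repository for the real ones.

import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder template URL and parameter names; see the GitHub repository
# for the real values used by the AWS Git-backed Static Website stack.
cloudformation.create_stack(
    StackName="my-static-website",
    TemplateURL="https://s3.amazonaws.com/EXAMPLE-BUCKET/aws-git-backed-static-website.yml",
    Parameters=[
        {"ParameterKey": "DomainName", "ParameterValue": "example.com"},
        {"ParameterKey": "NotificationEmail", "ParameterValue": "you@example.com"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)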

When the stack starts up, two email messages will be sent to the address associated with your domain’s registration and one will be sent to your AWS account address. Open each email and approve these:

  • ACM Certificate (2)
  • SNS topic subscription

The CloudFormation stack will be stuck until the ACM certificates are approved. The CloudFront distributions are created afterwards and can take over 30 minutes to complete.

Once the stack completes, get the nameservers for the Route 53 hosted zone, and set these in your domain’s registrar. Get the CodeCommit endpoint URL and use this to clone the Git repository. There are convenient aws-cli commands to perform these functions in the project’s GitHub repository linked to above.

AWS Services

The stack uses a number of AWS services including:

  • CloudFormation - Infrastructure management.

  • CodeCommit - Git repository.

  • CodePipeline - Passes Git repository content to AWS Lambda when modified.

  • AWS Lambda - Syncs Git repository content to S3 bucket for website

  • S3 buckets - Website content, www redirect, access logs, CodePipeline artifacts

  • CloudFront - CDN, HTTPS management

  • Certificate Manager - Creation of free certificate for HTTPS

  • CloudWatch - AWS Lambda log output, metrics

  • SNS - Git repository activity notification

  • Route 53 - DNS for website

  • IAM - Manage resource security and permissions

Cost

As far as I can tell, this CloudFormation stack currently costs around $0.51 per month in a new AWS account with nothing else running, a reasonable amount of storage for the website content, and up to 5 Git users. This minimal cost is due to there being no free tier for Route 53 at the moment.

If you have too many GB of content, too many tens of thousands of requests, etc., you may start to see additional pennies being added to your costs.

If you stop and start the stack, it will cost an additional $1 each time because of the odd CodePipeline pricing structure. See the AWS pricing guides for complete details, and monitor your account spending closely.

Notes

  • This CloudFormation stack will only work in regions that have all of the required services and features available. The only one I’m sure about is us-east-1. Let me know if you get it to work elsewhere.

  • This CloudFormation stack uses an AWS Lambda function that is installed from the run.alestic.com S3 bucket provided by Eric Hammond. You are welcome to use the provided script to build your own AWS Lambda function ZIP file, upload it to S3, and specify the location in the launch parameters.

  • Git changes are not reflected immediately on the website. It takes a minute for CodePipeline to notice the change; a minute to get the latest Git branch content, ZIP it, and upload it to S3; and a minute for the AWS Lambda function to download, unzip, and sync the content to the S3 bucket. Then the CloudFront CDN TTL may prevent the changes from being seen for another minute. Or so.

Thanks

Thanks to Mitch Garnaat for pointing me in the right direction for getting the aws-cli into an AWS Lambda function. This was important because “aws s3 sync” is much smarter than the other currently available options for syncing website content with S3.

Thanks to AWS Community Hero Onur Salk for pointing me in the direction of CodePipeline for triggering AWS Lambda functions off of CodeCommit changes.

Thanks to Ryan Brown for already submitting a pull request with lots of nice cleanup of the CloudFormation template, teaching me a few things in the process.

Some other resources you might find useful:

Creating a Static Website Using a Custom Domain - Amazon Web Services

S3 Static Website with CloudFront and Route 53 - AWS Sysadmin

Continuous Delivery with AWS CodePipeline - Onur Salk

Automate CodeCommit and CodePipeline in AWS CloudFormation - Stelligent

Running AWS Lambda Functions in AWS CodePipeline using CloudFormation - Stelligent

You are welcome to use, copy, and fork this repository. I would recommend contacting me before spending time on pull requests, as I have specific limited goals for this stack and don’t plan to extend its features much more.

[Update 2016-10-28: Added Notes section.]

[Update 2016-11-01: Added note about static site generation and Hugo plugin.]

Original article and comments: https://alestic.com/2016/10/aws-git-backed-static-website/

October 24, 2016 10:00 AM

October 23, 2016

Akkana Peck

Los Alamos Artists Studio Tour

[JunkDNA Art at the LA Studio Tour] The Los Alamos Artists Studio Tour was last weekend. It was a fun and somewhat successful day.

I was borrowing space in the studio of the fabulous scratchboard artist Heather Ward, because we didn't have enough White Rock artists signed up for the tour.

Traffic was sporadic: we'd have long periods when nobody came by (I was glad I'd brought my laptop, and managed to get some useful development done on track management in pytopo), punctuated by bursts where three or four groups would show up all at once.

It was fun talking to the people who came by. They all had questions about both my metalwork and Heather's scratchboard, and we had a lot of good conversations. Not many of them were actually buying -- I heard the same thing afterward from most of the other artists on the tour, so it wasn't just us. But I still sold enough that I more than made back the cost of the tour. (I hadn't realized, prior to this, that artists have to pay to be in shows and tours like this, so there's a lot of incentive to sell enough at least to break even.) Of course, I'm nowhere near covering the cost of materials and equipment. Maybe some day ...

[JunkDNA Art at the LA Studio Tour]

I figured snacks are always appreciated, so I set out my pelican snack bowl -- one of my first art pieces -- with brownies and cookies in it, next to the business cards.

It was funny how wrong I was in predicting what people would like. I thought everyone would want the roadrunners and dragonflies; in practice, scorpions were much more popular, along with a sea serpent that had been sitting on my garage shelf for a month while I tried to figure out how to finish it. (I do like how it eventually came out, though.)

And then after selling both my scorpions on Saturday, I rushed to make two more on Saturday night and Sunday morning, and of course no one on Sunday had the slightest interest in scorpions. Dave, who used to have a foot in the art world, tells me this is typical, and that artists should never make what they think the market will like; just go on making what you like yourself, and hope it works out.

Which, fortunately, is mostly what I do at this stage, since I'm mostly puttering around for fun and learning.

Anyway, it was a good learning experience, though I was a little stressed getting ready for it and I'm glad it's over. Next up: a big spider for the front yard, before Halloween.

October 23, 2016 02:17 AM

October 22, 2016

Elizabeth Krumbach

Simcoe’s October Checkup

On October 13th MJ and I took Simcoe in to the vet for her quarterly checkup. The last time she had been in was back in June.

As usual, she wasn’t thrilled about this vet visit plan.

This time her allergies were flaring up and we were preparing to increase her dosage of Atopica to fight back on some of the areas she was scratching and breaking out. The poor thing continues to suffer from constipation, so we’re continuing to try to give her wet food with pumpkin or fiber mixed in, but it’s not easy since food isn’t really her thing. We also have been keeping an eye on her weight and giving her an appetite stimulant here and there when I’m around to monitor her. Back in June her weight was at 8.4lbs, and this time she’s down to 8.1. I hope to spend more time giving her the stimulant after my next trip.

Sadly her bloodwork related to kidney values continues to worsen. Her CRE levels are the worst we’ve ever seen, with them shooting up higher than when she first crashed and we were notified of her renal failure back in 2011, almost five years ago. From 5.5 in June, she’s now at a very concerning 7.0.

Her BUN has stayed steady at 100, the same as it was in June.

My travel has been pretty hard on her, and I feel incredibly guilty about this. She’s more agitated and upset than we’d like to see so the vet prescribed a low dose of Alprazolam that she can be given during the worst times. We’re going to reduce her Calcitriol, but otherwise are continuing with the care routine.

It’s upsetting to see her decline in this way, and I have noticed a slight drop in energy as well. I’m still hoping we have a lot more time with my darling kitten-cat, but she turns ten next month and these values are definitely cause for concern.

But let’s not leave it on a sad note. The other day she made herself at home in a box that had the sun pointed directly inside it. SO CUTE!

She also tried to go with MJ on a business trip this week.

I love this cat.

by pleia2 at October 22, 2016 02:24 AM

October 20, 2016

Jono Bacon

All Things Open Next Week – MCing, Talks, and More

Last year I went to All Things Open for the first time and did a keynote. You can watch the keynote here.

I was really impressed with All Things Open last year and have subsequently become friends with the principal organizer, Todd Lewis. I loved how the team put together a show with the right balance of community and corporation, great content, exhibition and more.

All Things Open 2016 is happening next week and I will be participating in a number of areas:

  • I will be MCing the keynotes for the event. I am looking forward to introducing such a tremendous group of speakers.
  • Jeremy King, CTO of Walmart Labs, and I will be having a fireside chat. I am looking forward to delving into the work they are doing.
  • I will also be participating in a panel about openness and collaboration, and delivering a talk about building a community exoskeleton.
  • It is looking pretty likely I will be doing a book signing with free copies of The Art of Community to be given away thanks to my friends at O’Reilly!

The event takes place in Raleigh, and if you haven’t registered yet, do so right here!

Also, a huge thanks to Red Hat and opensource.com for flying me out. I will be joining the team for a day of meetings prior to All Things Open – looking forward to the discussions!

The post All Things Open Next Week – MCing, Talks, and More appeared first on Jono Bacon.

by Jono Bacon at October 20, 2016 08:20 PM

October 17, 2016

Elizabeth Krumbach

Seeking a new role

Today I was notified that I am being laid off from the upstream OpenStack Infrastructure job I have through HPE. It’s a workforce reduction and our whole team at HPE was hit. I love this job. I work with a great team on the OpenStack Infrastructure team. HPE has treated me very well, supporting travel to conferences I’m speaking at, helping to promote my books (Common OpenStack Deployments and The Official Ubuntu Book, 9th and 8th editions) and other work. I spent almost four years there and I’m grateful for what they did for my career.

But now I have to move on.

I’ve worked as a Linux Systems Administrator for the past decade and I’d love to continue that. I live in San Francisco so there are a lot of ops positions around here that I can look at, but I really want to find a place where my expertise with open source, writing and public speaking will be used and appreciated. I’d also be open to a more Community or Developer Evangelist role that leverages my systems and cloud background.

Whatever I end up doing next the tl;dr (too long; didn’t read) version of what I need in my next role are as follows:

  • Most of my job to be focused on open source
  • Support for travel to conferences where I speak (6-12 per year)
  • Work from home
  • Competitive pay

My resume is over here: http://elizabethkjoseph.com

Now the long version, and a quick note about what I do today.

OpenStack project Infrastructure Team

I’ve spent nearly four years working full time on the OpenStack project Infrastructure Team. We run all the services that developers on the OpenStack project interact with on a daily basis, from our massive Continuous Integration system to translations and the Etherpads. I love it there. I also just wrote a book about OpenStack.

HPE has paid me to do this upstream OpenStack project Infrastructure work full time, but we have team members from various companies. I’d love to find a company in the OpenStack ecosystem willing to pay for me to continue this and support me like HPE did. All the companies who use and contribute to OpenStack rely upon the infrastructure our team provides, and as a root/core member of this team I have an important role to play. It would be a shame for me to have to leave.

However, I am willing to move on from this team and this work for something new. During my career thus far I’ve spent time working on both the Ubuntu and Debian projects, so I do have experience with other large open source projects, and with reducing my involvement in them as my life dictates.

Most of my job to be focused on open source

This is extremely important to me. I’ve spent the past 15 years working intensively in open source communities, from Linux Users Groups to small and large open source projects. Today I work on a team where everything we do is open source. All system configs, Puppet modules, everything but the obvious private data that needs to be private for the integrity of the infrastructure (SSH keys, SSL certificates, passwords, etc). While I’d love a role where this is also the case, I realize how unrealistic it is for a company to have such an open infrastructure.

An alternative would be a position where I’m one of the ops people who understands the tooling (probably from gaining an understanding of it internally) and then going on to help manage the projects that have been open sourced by the team. I’d make sure best practices are followed for the open sourcing of things, that projects are paid attention to and contributors outside the organization are well-supported. I’d also go to conferences to present on this work, write about it on a blog somewhere (company blog? opensource.com?) and be encouraging and helping other team members do the same.

Support for travel to conferences where I speak (6-12 per year)

I speak a lot and I’m good at it. I’ve given keynotes at conferences in Europe, South America and right here in the US. Any company I go to work for will need to support me in this by giving me the time to prepare and give talks, and by compensating me for travel for conferences where I’m speaking.

Work from home

I’ve been doing this for the past ten years and I’d really struggle to go back into an office. Since operations, open source and travel don’t need me to be in an office, I’d prefer to stick with the flexibility and time that working from home gives me.

For the right job I may be willing to consider going into an office or visiting client/customer sites (SF Bay Area is GREAT for this!) once a week, or some kind of arrangement where I travel to a home office for a week here and there. I can’t relocate for a position at this time.

Competitive pay

It should go without saying, but I do live in one of the most expensive places in the world and need to be compensated accordingly. I love my work, I love open source, but I have bills to pay and I’m not willing to compromise on this at this point in my life.

Contact me

If you think your organization would be interested in someone like me and can help me meet my requirements, please reach out via email at lyz@princessleia.com

I’m pretty sad today about the passing of what’s been such a great journey for me at HPE and in the OpenStack community, but I’m eager to learn more about the doors this change is opening up for me.

by pleia2 at October 17, 2016 11:23 PM

October 11, 2016

Akkana Peck

New Mexico LWV Voter Guides are here!

[Vote button] I'm happy to say that our state League of Women Voters Voter Guides are out for the 2016 election.

My grandmother was active in the League of Women Voters most of her life (at least after I was old enough to be aware of such things). I didn't appreciate it at the time -- and I also didn't appreciate that she had been born in a time when women couldn't legally vote, and the 19th amendment, giving women the vote, was ratified just a year before she reached voting age. No wonder she considered the League so important!

The LWV continues to work to extend voting to people of all genders, races, and economic groups -- especially important in these days when the Voting Rights Act is under attack and so many groups are being disenfranchised. But the League is important for another reason: local LWV chapters across the country produce detailed, non-partisan voter guides for each major election, which are distributed free of charge to voters. In many areas -- including here in New Mexico -- there's no equivalent of the "Legislative Analyst" who writes the lengthy analyses that appear on California ballots weighing the pros, cons and financial impact of each measure. In the election two years ago, not that long after Dave and I moved here, finding information on the candidates and ballot measures wasn't easy, and the LWV Voter Guide was by far the best source I saw. It's the main reason I joined the League, though I also appreciate the public candidate forums and other programs they put on.

LWV chapters are scrupulous about collecting information from candidates in a fair, non-partisan way. Candidates' statements are presented exactly as they're received, and all candidates are given the same specifications and deadlines. A few candidates ignored us this year and didn't send statements despite repeated emails and phone calls, but we did what we could.

New Mexico's state-wide voter guide -- the one I was primarily involved in preparing -- is at New Mexico Voter Guide 2016. It has links to guides from three of the four local LWV chapters: Los Alamos, Santa Fe, and Central New Mexico (Albuquerque and surrounding areas). The fourth chapter, Las Cruces, is still working on their guide and they expect it soon.

I was surprised to see that our candidate information doesn't include links to websites or social media. Apparently that's not part of the question sheet they send out, and I got blank looks when I suggested we should make sure to include that next time. The LWV does a lot of important work but they're a little backward in some respects. That's definitely on my checklist for next time, but for now, if you want a candidate's website, there's always Google.

I also helped a little on Los Alamos's voter guide, making suggestions on how to present it on the website (I maintain the state League website but not the Los Alamos site), and participated in the committee that wrote the analysis and pro and con arguments for our contentious charter amendment proposal to eliminate the elective office of sheriff. We learned a lot about the history of the sheriff's office here in Los Alamos, and about state laws and insurance rules regarding sheriffs, and I hope the important parts of what we learned are reflected in both sides of the argument.

The Voter Guides also have a link to a Youtube recording of the first Los Alamos LWV candidate forum, featuring NM House candidates, DA, Probate judge and, most important, the debate over the sheriff proposition. The second candidate forum, featuring US House of Representatives, County Council and County Clerk candidates, will be this Thursday, October 13 at 7 (refreshments at 6:30). It will also be recorded thanks to a contribution from the AAUW.

So -- busy, busy with election-related projects. But I think the work is mostly done (except for the one remaining forum), the guides are out, and now it's time to go through and read the guides. And then the most important part of all: vote!

October 11, 2016 10:08 PM

October 06, 2016

Nathan Haines

Winners of the Ubuntu 16.10 Free Culture Showcase

It's an exciting time for Ubuntu fans because next week will see the release of Ubuntu 16.10 and some interesting new features. But today we're going to talk about one exciting user-facing change: the community wallpapers that were selected from the Ubuntu Free Culture Showcase!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. For Ubuntu 16.10, hundreds of such wallpapers were submitted to the Ubuntu 16.10 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found.

But now the results are in: the top choices were voted on by members of the Ubuntu community, and I'm proud to announce the winning images that will be included in Ubuntu 16.10.

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers today at the links above, or in your desktop wallpaper list after you upgrade or install Ubuntu 16.10 on October 13th.

October 06, 2016 04:20 AM

October 05, 2016

Akkana Peck

Play notes, chords and arbitrary waveforms from Python

Reading Stephen Wolfram's latest discussion of teaching computational thinking (which, though I mostly agree with it, is more an extended ad for Wolfram Programming Lab than a discussion of what computational thinking is and why we should teach it) I found myself musing over ideas for future computer classes for Los Alamos Makers. Students, and especially kids, like to see something other than words on a screen. Graphics and games are good, or robotics when possible ... but another fun project a novice programmer can appreciate is music.

I found myself curious what you could do with Python, since I hadn't played much with Python sound generation libraries. I did discover a while ago that Python is rather bad at playing audio files, though I did eventually manage to write a music player script that works quite well. What about generating tones and chords?

A web search revealed that this is another thing Python is bad at. I found lots of people asking about chord generation, and a handful of half-baked ideas that relied on long-obsolete packages or external programs. But none of it actually worked, at least not without requiring Windows or relying on larger packages like fluidsynth (which looked worth exploring some day when I have more time).

Play an arbitrary waveform with Pygame and NumPy

But I did find one example based on a long-obsolete Python package called Numeric which, when rewritten to use NumPy, actually played a sound. You can take a NumPy array and play it using a pygame.sndarray object this way:

import pygame, pygame.sndarray

# The mixer must be initialized to match the sample format generated below:
# 44100 Hz, signed 16-bit samples, one channel (mono).
pygame.mixer.init(frequency=44100, size=-16, channels=1)

def play_for(sample_wave, ms):
    """Play the given NumPy array, as a sound, for ms milliseconds."""
    sound = pygame.sndarray.make_sound(sample_wave)
    sound.play(-1)
    pygame.time.delay(ms)
    sound.stop()

Then you just need to calculate the waveform you want to play. NumPy can generate sine waves on its own, while scipy.signal can generate square and sawtooth waves. Like this:

import numpy
import scipy.signal

sample_rate = 44100

def sine_wave(hz, peak, n_samples=sample_rate):
    """Compute N samples of a sine wave with given frequency and peak amplitude.
       Defaults to one second.
    """
    length = sample_rate / float(hz)
    omega = numpy.pi * 2 / length
    xvalues = numpy.arange(int(length)) * omega
    onecycle = peak * numpy.sin(xvalues)
    return numpy.resize(onecycle, (n_samples,)).astype(numpy.int16)

def square_wave(hz, peak, duty_cycle=.5, n_samples=sample_rate):
    """Compute N samples of a sine wave with given frequency and peak amplitude.
       Defaults to one second.
    """
    t = numpy.linspace(0, 1, 500 * 440/hz, endpoint=False)
    wave = scipy.signal.square(2 * numpy.pi * 5 * t, duty=duty_cycle)
    wave = numpy.resize(wave, (n_samples,))
    return (peak / 2 * wave.astype(numpy.int16))

# Play A (440Hz) for 1 second as a sine wave:
play_for(sine_wave(440, 4096), 1000)

# Play A-440 for 1 second as a square wave:
play_for(square_wave(440, 4096), 1000)

Playing chords

That's all very well, but it's still a single tone, not a chord.

To generate a chord of two notes, you can add the waveforms for the two notes. For instance, 440Hz is concert A, and the A one octave above it is double the frequency, or 880 Hz. If you wanted to play a chord consisting of those two As, you could do it like this:

play_for(sum([sine_wave(440, 4096), sine_wave(880, 4096)]), 1000)

Simple octaves aren't very interesting to listen to. What you want is chords like major and minor triads and so forth. If you google for chord ratios, Google helpfully gives you a few of them right off, then links to a page with a table of ratios for some common chords.

For instance, the major triad ratios are listed as 4:5:6. What does that mean? It means that for a C-E-G triad (the first C chord you learn in piano), the E's frequency is 5/4 of the C's frequency, and the G is 6/4 of the C.

You can pass that list, [4, 5, 6], to a function that will calculate the right ratios to produce the set of waveforms you need to add to get your chord:

def make_chord(hz, ratios):
    """Make a chord based on a list of frequency ratios."""
    sampling = 4096
    chord = sine_wave(hz, sampling)
    for r in ratios[1:]:
        chord = sum([chord, sine_wave(hz * r / ratios[0], sampling)])
    return chord

def major_triad(hz):
    return make_chord(hz, [4, 5, 6])

play_for(major_triad(440), 1000)
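
A minor triad uses frequency ratios of 10:12:15 (a standard just-intonation ratio, not one taken from the article's linked table), so a helper for it follows the same pattern:

def minor_triad(hz):
    # 10:12:15 is the just-intonation minor triad (e.g., A-C-E above A440).
    return make_chord(hz, [10, 12, 15])

play_for(minor_triad(440), 1000)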

Even better, you can pass in the waveform you want to use when you're adding instruments together:

def make_chord(hz, ratios, waveform=None):
    """Make a chord based on a list of frequency ratios
       using a given waveform (defaults to a sine wave).
    """
    sampling = 4096
    if not waveform:
        waveform = sine_wave
    chord = waveform(hz, sampling)
    for r in ratios[1:]:
        chord = sum([chord, waveform(hz * r / ratios[0], sampling)])
    return chord

def major_triad(hz, waveform=None):
    return make_chord(hz, [4, 5, 6], waveform)

play_for(major_triad(440, square_wave), 1000)

There are still some problems. For instance, sawtooth_wave() works fine individually or for pairs of notes, but triads of sawtooths don't play correctly. I'm guessing something about the sampling rate is making their overtones cancel out part of the sawtooth wave. Triangle waves (in scipy.signal, that's a sawtooth wave with rising ramp width of 0.5) don't seem to work right even for single tones. I'm sure these are solvable, perhaps by fiddling with the sampling rate. I'll probably need to add graphics so I can look at the waveform for debugging purposes.
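
The sawtooth_wave() mentioned above isn't shown in this post; a minimal sketch of one, written along the same lines as square_wave() and using scipy.signal.sawtooth() (a rising ramp width of 0.5 gives the triangle variant), might look like this:

def sawtooth_wave(hz, peak, rising_ramp_width=1.0, n_samples=sample_rate):
    """Compute N samples of a sawtooth wave with given frequency and peak amplitude.
       A rising_ramp_width of 0.5 produces a triangle wave.
    """
    t = numpy.linspace(0, 1, sample_rate, endpoint=False)
    wave = scipy.signal.sawtooth(2 * numpy.pi * hz * t, width=rising_ramp_width)
    wave = numpy.resize(wave, (n_samples,))
    return (peak / 2 * wave).astype(numpy.int16)

# Play A-440 for 1 second as a sawtooth wave:
play_for(sawtooth_wave(440, 4096), 1000)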

In any case, it was a fun morning hack. Most chords work pretty well, and it's nice to know how to play any waveform I can generate.

The full script is here: play_chord.py on GitHub.

October 05, 2016 05:29 PM

October 02, 2016

Elizabeth Krumbach

Autumn in Philadelphia

I spent this past week in Philadelphia. For those of you following along at home, I was there just a month before, for FOSSCON and some time with friends. This time our trip was also purposeful: we were in town for the gravestone unveiling for MJ’s grandmother, to celebrate my birthday with Philly friends and to work on a secret mission (secret until November).

Before all that, I spent some time enjoying the city upon arrival. The first morning I was there I got in early and a friend picked me up at the airport. After breakfast we headed toward the Philadelphia Zoo. We killed some time with a walk before making our way up to the zoo itself, where I insisted we spend a bit of time watching the street cars (they call them trolleys) on the SEPTA Route 15 that goes right past the zoo. These SEPTA PCC cars are direct relatives to the ones that run in San Francisco, in fact, San Francisco bought a large portion of their PCC fleet directly from SEPTA several decades ago. Almost all the PCC cars you see running on the F-line in San Francisco are from Philadelphia. It was fun to have some time to hang out and enjoy the cars in their home city.

And of course the zoo! I’ve been to the zoo a few times, but it’s a nice sized one and I always enjoy going. I don’t remember them having an aye-aye exhibit, so it was nice to see that, particularly since the one at the San Francisco Zoo has been closed for some time. The penguins are always great to see, and the red pandas are super adorable.


Humboldt penguins at the Philadelphia Zoo

More photos from the zoo here: https://www.flickr.com/photos/pleia2/albums/72157673262888271

Tuesday I spent working and spending time with my friend Danita. Camped out on her couch I got a pile of work done and later in the evening we went out to do a bit of shopping. That evening MJ arrived in Philadelphia from a work trip he was on and picked me up to grab some dinner.

Wednesday morning was the gravestone unveiling. According to Jewish tradition, this ceremony is completed approximately a year after the passing of your loved one and coincides with the conclusion of the year of mourning. We had 10 people there, and though the weather did threaten rain, it held out as we made our way through some words about her life, prayers and quiet moments together. Afterwards the family attending all went out to lunch together.

Thursday’s big event was my 35th birthday! In the morning I went out to Core Creek Park a few miles from our hotel to go for a run. The weather wasn’t entirely cooperative, but I wasn’t about to be put off by a hint of drizzle. It was totally the right thing to do: I parked near the lake in the park and did a run/walk of a couple miles on a trail around that edge of the park. I saw a deer, lots of birds and was generally pleased with the sights. I love autumn in Philadelphia and this was such a perfect way to experience it.

That night MJ drove us down to the city and met up with a whole pile of friends (Danita, Crissi, Mike and Jess, Jon, David, Tim, Mike and Heidi, Walt, and Paul) for a birthday party at The Continental near Penn’s Landing. I love this place. We had our wedding party dinner here, and we eat here, or at the mid-town location, almost every time we’re in town. MJ and Danita had reserved a private room which allowed for mingling throughout the night. Danita helped me pick out some killer shoes that I had fun wearing with my awesome dress and I drank a lot of Twizzle martinis (Smirnoff citrus, strawberry puree, lemon, red licorice wheel) along with all the spectacular food they brought to our tables through the night. There was also a delicious walnut-free carrot cake …with only 5 candles, which was appreciated, hah! Did I mention I drank a lot of martinis? It was an awesome night, my friends are the best.

Late Friday and into Saturday were secret mission days, but I took some time for work like every day and we also got to see friends and family both days. I also was able to get down to the hotel gym on Saturday morning to visit the treadmill and spend some time in the pool.

Our flight took us home to our kitties on Saturday evening. I’ve been incredibly stressed out lately with a lot going on with my career (work, book, other open source things) and personally (where to begin…), but it was a very good trip over all.

Rosh Hashanah begins tonight and means a day of observance tomorrow too. Tuesday and Wednesday are packed with work and spending evenings with MJ before I fly off again on Thursday. This time to Ohio for the Ohio LinuxFest in Columbus where my talk is A Tour of OpenStack Deployment Scenarios. While I’m there I also have plans to meet up with my Ubuntu community friends (including going to the Columbus Zoo!) and most of the crew I went to Ghana with in 2012.

by pleia2 at October 02, 2016 03:49 PM

MUNI Heritage Weekend

Before heading to Philadelphia last weekend I took time to spend Saturday with my friend Mark at MUNI Heritage Weekend. As an active transit advocate in San Francisco, Mark is a fun person to attend such an event with. I like to think I know a fair amount about things on rails in San Francisco, but he’s much more knowledgeable about transit in general.

I was pretty excited about this day, I was all decked out in my cable car earrings and Seashore Trolley Museum t-shirt.

The day began with a walk down Market to meet Mark near the railway museum, which was the center of all the activity for the day. I arrived a bit early and spent my time snapping pictures of all the interesting streetcars and buses coming around. When we met up our first adventure was to take a ride on our first vintage bus of the day, the 5300!

Now, as far as vintage goes, the 5300 doesn’t go very far back in history. This bus was an electric from 1975 and it had a good run, still riding around the city just over a decade ago. This was a long ride, taking us down Howard, South Van Ness, all the way down to Mission and 25th street, then back to the railway museum. It took about 45 minutes, during which time Mark and I had lots of time to catch up.

We then had some time to walk around a bit and see what else was out. Throughout the day we saw one of the Blackpool “boat” streetcars, the Melbourne streetcar (which I still haven’t ridden in!) the Number 1 streetcar and more.

Next up was a ride on the short 042 from 1938! This was a fun one, it’s the oldest bus in the fleet and the blog post about the event had this to say:

A surprise participant was Muni’s oldest bus, the 042, built in 1938 by the White Motor Company. Its engine had given up the ghost, but the top-notch mechanics at Woods Motor Coach Division swapped it out for one in a White bus Market Street Railway’s Paul Wells located in the Santa Cruz Mountains and repatriated. The 042 operated like a dream looping around Union Square all weekend, as did 1970 GMC “fishbowl” 3287, shown behind it

Pretty cool! As the quote suggests, it was not electric so it was able to do its own thing in the Union Square loop, in a ride that only took about 20 minutes.

Then, more viewing of random cars. I think the highlight of my time then was getting to see the 578 “dinky” close up. Built in 1896, this street car looks quite a bit like a cable car, making it a distinctive sight among all the other street cars.

By then we were well into the late afternoon and decided to grab some late lunch. Continuing our transit-related day, I took him up Howard street to get a view of the progress on the new Transbay Transit Center. After walking past it on street level, we went up to the roof deck where we live to get some views and pictures from up above.

This was definitely a bus-heavy heritage day for me, but it was fun. Lots more photos from the day here: https://www.flickr.com/photos/pleia2/albums/72157674240825576

That evening it was time for me to get off the buses and rails to take another form of transportation, I was off to Philadelphia on a plane!

by pleia2 at October 02, 2016 02:49 PM

October 01, 2016

Akkana Peck

Zsh magic: remove all raw photos that don't have a corresponding JPEG

Lately, when shooting photos with my DSLR, I've been shooting raw mode but with a JPEG copy as well. When I triage and label my photos (with pho and metapho), I use only the JPEG files, since they load faster and there's no need to index both. But that means that sometimes I delete a .jpg file while the huge .cr2 raw file is still on my disk.

I wanted some way of removing these orphaned raw files: in other words, for every .cr2 file that doesn't have a corresponding .jpg file, delete the .cr2.

That's an easy enough shell function to write: loop over *.cr2, change the .cr2 extension to .jpg, check whether that file exists, and if it doesn't, delete the .cr2.

But as I started to write the shell function, it occurred to me: this is just the sort of magic trick zsh tends to have built in.

So I hopped on over to #zsh and asked, and in just a few minutes, I had an answer:

rm *.cr2(e:'[[ ! -e ${REPLY%.cr2}.jpg ]]':)

Yikes! And it works! But how does it work? It's cheating to rely on people in IRC channels without trying to understand the answer so I can solve the next similar problem on my own.

Most of the answer is in the zshexpn man page, but it still took some reading and jumping around to put the pieces together.

First, we take all files matching the initial wildcard, *.cr2. We're going to apply to them the filename generation code expression in parentheses after the wildcard. (I think you need EXTENDED_GLOB set to use that sort of parenthetical expression.)

The variable $REPLY is set to the filename the wildcard expression matched; so it will be set to each .cr2 filename, e.g. img001.cr2.

The expression ${REPLY%.cr2} removes the .cr2 extension. Then we tack on a .jpg: ${REPLY%.cr2}.jpg. So now we have img001.jpg.

[[ ! -e ${REPLY%.cr2}.jpg ]] checks for the existence of that jpg filename, just like in a shell script.

So that explains the quoted shell expression. The final, and hardest part, is how to use that quoted expression. That's in section 14.8.7 Glob Qualifiers. (estring) executes string as shell code, and the filename will be included in the list if and only if the code returns a zero status.

The colons -- after the e and before the closing parenthesis -- are just separator characters. Whatever character immediately follows the e will be taken as the separator, and anything from there to the next instance of that separator (the second colon, in this case) is taken as the string to execute. Colons seem to be the character to use by convention, but you could use anything. This is also the part of the expression responsible for setting $REPLY to the filename being tested.

So why the quotes inside the colons? They're because some of the substitutions being done would be evaluated too early without them: "Note that expansions must be quoted in the string to prevent them from being expanded before globbing is done. string is then executed as shell code."

Whew! Complicated, but awfully handy. I know I'll have lots of other uses for that.

One additional note: section 14.8.5, Approximate Matching, in that manual page caught my eye. zsh can do fuzzy matches! I can't think offhand what I need that for ... but I'm sure an idea will come to me.

October 01, 2016 09:28 PM

September 28, 2016

Jono Bacon

Bacon Roundup – 28th September 2016

Here we are with another roundup of things I have been working on, complete with a juicy foray into the archives too. So, sit back, grab a cup of something delicious, and enjoy.

To gamify or not to gamify community (opensource.com)

In this piece I explore whether gamification is something we should apply to building communities. I also pull from my experience building a gamification platform for Ubuntu called Ubuntu Accomplishments.

The GitLab Master Plan (gitlab.com)

Recently I have been working with GitLab. The team has been building their vision for conversational development and I MCed their announcement of their plan. You can watch the video below for convenience:


Social Media: 10 Ways To Not Screw It Up (jonobacon.org)

Here I share 10 tips and tricks that I have learned over the years for doing social media right. This applies to tooling, content, distribution, and more. I would love to learn your tips too, so be sure to share them in the comments!

Linux, Linus, Bradley, and Open Source Protection (jonobacon.org)

Recently there was something of a spat in the Linux kernel community about when is the right time to litigate companies who misuse the GPL. As a friend of both sides of the debate, this was my analysis.

The Psychology of Report/Issue Templates (jonobacon.org)

As many of you will know, I am something of a behavioral economics fan. In this piece I explore the interesting human psychology behind issue/report templates. It is subtle nudges like this that can influence the behavioral patterns you want to see.

My Reddit AMA

It would be remiss without sharing a link to my recent reddit AMA where I was asked a range of questions about community leadership, open source, and more. Thanks to all of you who joined and asked questions!

Looking For Talent

I also posted a few pieces about some companies who I am working with who want to hire smart, dedicated, and talented community leaders. If you are looking for a new role, be sure to see these:

From The Archives

Dan Ariely on Building More Human Technology, Data, Artificial Intelligence, and More (forbes.com)

My Forbes piece on the impact of behavioral economics on technologies, including an interview with Dan Ariely, TED speaker, and author of many books on the topic.

Advice for building a career in open source (opensource.com)

In this piece I share some recommendations I have developed over the years for those of you who want to build a career in open source. Of course, I would love to hear your tips and tricks too!

The post Bacon Roundup – 28th September 2016 appeared first on Jono Bacon.

by Jono Bacon at September 28, 2016 03:00 PM

Elizabeth Krumbach

Yak Coloring

A couple cycles ago I asked Ronnie Tucker, artist and creator of Full Circle Magazine, to create a werewolf coloring page for the 15.10 release (details here). He then created another for Xenial Xerus, see here.

He’s now created one for the upcoming Yakkety Yak release! So if you’re sick of all the yak shaving you’re doing as we prepare for this release, you may consider giving yak coloring a try.

But that’s not the only yak! Thanks go to Tom Macfarlane of the Canonical Design Team once again for sending me the SVG to update the Animal SVGs section of the Official Artwork page on the Ubuntu wiki. They’re sticking with a kind of origami theme this time for our official yak.

Download the SVG version for printing from the wiki page or directly here.

by pleia2 at September 28, 2016 12:43 AM

September 26, 2016

Akkana Peck

Unclaimed Alcoholic Beverages

Dave was reading New Mexico laws regarding a voter guide issue we're researching, and he came across this gem in Section 29-1-14 G of the "Law Enforcement: Peace Officers in General: Unclaimed Property" laws:

Any alcoholic beverage that has been unclaimed by the true owner, is no longer necessary for use in obtaining a conviction, is not needed for any other public purpose and has been in the possession of a state, county or municipal law enforcement agency for more than ninety days may be destroyed or may be utilized by the scientific laboratory division of the department of health for educational or scientific purposes.

We can't decide which part is more fun: contemplating what the "other public purposes" might be, or musing on the various "educational or scientific purposes" one might come up with for a beverage that's been sitting in the storage locker for three months or more ... I'm envisioning a room surrounded by locked chain-link, its dusty shelves holding rows of half-full martini and highball glasses.

September 26, 2016 05:04 PM

Eric Hammond

Deleting a Route 53 Hosted Zone And All DNS Records Using aws-cli

fast, easy, and slightly dangerous recursive deletion of a domain’s DNS

Amazon Route 53 currently charges $0.50/month per hosted zone for your first 25 domains, and $0.10/month for additional hosted zones, even if they are not getting any DNS requests. I recently stopped using Route 53 to serve DNS for 25 domains and wanted to save on the $150/year these were costing.

Amazon’s instructions for using the Route 53 Console to delete Record Sets and a Hosted Zone make it look simple. I started in the Route 53 Console clicking into a hosted zone, selecting each DNS record set (but not the NS or SOA ones), clicking delete, clicking confirm, going back a level, selecting the next domain, and so on. This got old quickly.

Being lazy, I decided to spend a lot more effort figuring out how to automate this process with the aws-cli, and pass the savings on to you.

Steps with aws-cli

Let’s start by putting the hosted zone domain name into an environment variable. Do not skip this step! Do make sure you have the right name! If this is not correct, you may end up wiping out DNS for a domain that you wanted to keep.

domain_to_delete=example.com

Install the jq JSON parsing command line tool. I couldn’t quite get the normal aws-cli --query option to give me the output format I wanted.

sudo apt-get install jq

Look up the hosted zone id for the domain. This assumes that you only have one hosted zone for the domain. (It is possible to have multiple, in which case I recommend using the Route 53 console to make sure you delete the right one.)

hosted_zone_id=$(
  aws route53 list-hosted-zones \
    --output text \
    --query 'HostedZones[?Name==`'$domain_to_delete'.`].Id'
)
echo hosted_zone_id=$hosted_zone_id

Use list-resource-record-sets to find all of the current DNS entries in the hosted zone, then delete each one with change-resource-record-sets.

aws route53 list-resource-record-sets \
  --hosted-zone-id $hosted_zone_id |
jq -c '.ResourceRecordSets[]' |
while read -r resourcerecordset; do
  # Pull the record name and type out with separate jq calls; this avoids
  # relying on here-string word splitting, which differs between bash versions.
  name=$(jq -r '.Name' <<<"$resourcerecordset")
  type=$(jq -r '.Type' <<<"$resourcerecordset")
  # The zone's own NS and SOA records cannot be deleted, so skip them.
  if [ "$type" != "NS" ] && [ "$type" != "SOA" ]; then
    aws route53 change-resource-record-sets \
      --hosted-zone-id $hosted_zone_id \
      --change-batch '{"Changes":[{"Action":"DELETE","ResourceRecordSet":
          '"$resourcerecordset"'
        }]}' \
      --output text --query 'ChangeInfo.Id'
  fi
done

Finally, delete the hosted zone itself:

aws route53 delete-hosted-zone \
  --id $hosted_zone_id \
  --output text --query 'ChangeInfo.Id'

As written, the above commands output the change ids. You can monitor the background progress using a command like:

change_id=...
aws route53 wait resource-record-sets-changed \
  --id "$change_id"

GitHub repo

To make it easy to automate the destruction of your critical DNS resources, I’ve wrapped the above commands into a command line tool and tossed it into a GitHub repo here:

https://github.com/alestic/aws-route53-wipe-hosted-zone

You are welcome to use it as is, fork it, add protections, rewrite it with Boto3, and generally knock yourself out.

Alternative: CloudFormation

A colleague pointed out that a better way to manage all of this (in many situations) would be to simply toss my DNS records into a CloudFormation template for each domain. Benefits include:

  • Easy to store whole DNS definition in revision control with history tracking.

  • Single command creation of the hosted zone and all record sets.

  • Single command updating of all changed record sets, no matter what has changed since the last update.

  • Single command deletion of the hosted zone and all record sets (my current challenge).

This doesn’t work as well for hosted zones where different records are added, updated, and deleted by automated processes (e.g., instance startup), but for simple, static domain DNS, it sounds ideal.
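As a rough sketch of that single-command workflow with aws-cli (the stack and template names below are placeholders for illustration):

# Create the hosted zone and every record set in one command.
aws cloudformation create-stack \
  --stack-name example-com-dns \
  --template-body file://example-com-dns.yaml

# Apply any later record set changes in one command.
aws cloudformation update-stack \
  --stack-name example-com-dns \
  --template-body file://example-com-dns.yaml

# Tear down the hosted zone and all of its record sets in one command.
aws cloudformation delete-stack \
  --stack-name example-com-dns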

How do you create, update, and delete DNS in Route 53 for your domains?

Original article and comments: https://alestic.com/2016/09/aws-route53-wipe-hosted-zone/

September 26, 2016 09:30 AM

Jono Bacon

Looking for a data.world Director of Community

data.world

Some time ago I signed an Austin-based data company called data.world as a client. The team are building an incredible platform where the community can store data, collaborate around the shape/content of that data, and build an extensive open data commons.

As I wrote about previously I believe data.world is going to play an important role in opening up the potential for finding discoveries in disparate data sets and helping people innovate faster.

I have been working with the team to help shape their community strategy and they are now ready to hire a capable Director of Community to start executing these different pieces. The role description is presented below. The data.world team are an incredible bunch with some strong heritage in the leadership of Brett Hurt, Matt Laessig, Jon Loyens, Bryon Jacob, and others.

As such, I am looking to find the team some strong candidates. If I know you, I would invite you to confidentially share your interest in this role by filling my form here. This way I can get a good sense of who is interested and also recommend people I personally know and can vouch for. I will then reach out to those of you who this seems to be a good potential fit for and play a supporting role in brokering the conversation.

This role will require candidates to either be based in Austin or be willing to relocate to Austin. This is a great opportunity, and feel free to get in touch with me if you have any questions.

Director of Community Role Description

data.world is building a world-class data commons, management, and collaboration platform. We believe that data.world is the very best place to build great data communities that can make data science fun, enjoyable, and impactful. We want to ensure we can provide the very best support, guidance, and engagement to help these communities be successful. This will involve engagement in workflow, product, outreach, events, and more.

As Director of Community, you will lead, coordinate, and manage our global community development initiatives. You will use your community leadership experience to shape our community experience and infrastructure, feed into the product roadmap with community needs and requirements, build growth and engagement, and more. You will help connect, celebrate, and amplify the existing communities on data.world and assist new ones as they form. You will help our users to think bigger, be the best they can be, and succeed more. You’ll work across teams within data.world to promote the community’s voice within our different internal teams. You should be a content expert, superb communicator, and humble facilitator.

Typical activities for this role include:

  • Building and executing programs that grow communities on data.world and empower them to do great work.
  • Taking a structured approach to community roles, on-boarding, and working with our teams to ensure community members have a simple and powerful experience.
  • Developing content that promotes the longevity and sustainability of fast growing, organically built data communities with high impact outcomes.
  • Building relationships within the industry and community, serving as their representative to data.world and helping them engage, be successful, and deliver great work and collaboration.
  • Working with product, user operations, and marketing teams on product roadmap for community features and needs.
  • Being a data.world representative and spokesperson at conferences, events, and within the media and external data communities.
  • Always challenging our assumptions, our culture, and being singularly focused on delivering the very best data community platform in the world.

Experience with the following is required:

  • 5-7 years of experience participating in and building communities, preferably data based, or technical in nature.
  • Experience with working in open source, open data, and other online communities.
  • Public speaking, blogging, and content development.
  • Facilitating complex and sensitive community management situations with humility, judgment, tact, and humor.
  • Integrating company brand, voice, and messaging into developed content.
  • Working independently and autonomously, managing multiple competing priorities.

Experience with any of the following preferred:

  • Data science experience and expertise.
  • 3-5 years of experience leading community management programs within a software or Internet-based company.
  • Media training and experience in communicating with journalists, bloggers, and other media on a range of technical topics.
  • Existing network from a diverse set of communities and social media platforms.
  • Software development capabilities and experience.

The post Looking for a data.world Director of Community appeared first on Jono Bacon.

by Jono Bacon at September 26, 2016 04:16 AM

September 25, 2016

Elizabeth Krumbach

Beer and trains in Germany

I spent most of this past week in Germany with the OpenStack Infrastructure and QA teams doing a sprint at the SAP offices in Walldorf, I wrote about it here.

The last (and first!) time I was in Germany was for the same purpose, a sprint, that time in Darmstadt, where I snuck in a tiny amount of touristing, but due to troubles with my gallbladder I couldn't have any fried foods or beer. Well, I had one beer to celebrate Germany winning the World Cup, but I regretted it big time.

This time was different: finally I could have liters of German beer! And I did. The first night there I even had some wiener schnitzel (fried veal!), even if we were all too tired from our travels to leave the hotel that night. We went out to beer gardens every other night after that, taking in the beautiful late summer weather and enjoying great beers.


Photo in the center by Chris Hoge (source)

But I have a confession to make: I don’t like pilsners and that makes Belgium my favorite beer country in Europe. Still, Germany has quite the title. Fortunately while they are the default, pilsners were not my only option. I indulged in dark lagers and hefeweizens all week. Our evening in Heidelberg I also had an excellent Octoberfest Märzen by Heidelberger, which was probably my favorite beer of the whole trip.

Now I’m getting ahead of myself because I was excited about all the beer. I arrived on Sunday, sadly much later than I had intended. My original flights had been rescheduled, so I ended up meeting my colleague Clark at the Frankfurt airport around 4PM to catch our trains to Walldorf. The train station is right there in the airport, and clear signs made for a no-fuss transfer halfway through our journey to the next train. We were on the trains for about an hour before arriving at Wiesloch-Walldorf station. A ten Euro cab ride then got us to the hotel, where we met up with several other colleagues for drinks.

Of course we were there to work, so that’s what we spent 9-5 each day doing, but the evenings were ours to explore our little corner of Germany. The first night we just walked into Walldorf after work and enjoyed drinks and food until the sun went down. Walldorf is a very cute little town, and the outdoor seating at the beer garden we went to was a wonderful treat, especially since the weather was so warm and clear. We spent Wednesday night in Walldorf too.

More photos from Walldorf here: https://www.flickr.com/photos/pleia2/sets/72157670828593814/

Tuesday night was our big night out. We all headed out to the nearby Heidelberg for a big group dinner. After parking, we had a lovely short walk to the restaurant which took me by a shop that sold post cards! I picked up a trio of cards for my mother and sisters, as I typically do when traveling. The walk also gave a couple of us time to take pictures of the city before the sun went down.

Dinner was at Zum Weissen Schwanen (The White Swan). That was my four beer night.

After the meal several of us took a nice walk around the city a bit more. We got to look up and see the massive, lit-up Heidelberg Castle. It’s a pretty exceptional place, and I’d love to properly visit some time. The post cards I sent to family all included the castle.

The drive back to the hotel was fun too. I got a tiny taste of the German autobahn as we got up to 220 kilometers per hour on our way back to the hotel before our exit came up. Woo!

My pile of Heidelberg photos are here: https://www.flickr.com/photos/pleia2/albums/72157674174957385

Thursday morning was my big morning of trains. I flew into Frankfurt like everyone else, but I flew home out of Dusseldorf because it was several hundred dollars cheaper. The problem is Walldorf and Dusseldorf aren’t exactly close, but I could spend a couple hours on European ICE (Inter-City Express) and get there. MJ highly recommended I try it out since I like trains, and with the simplicity of routing he convinced me to take a route from Mannheim all the way to Dusseldorf Airport with one simple connection, which just required walking across the platform.

I’m super thankful he convinced me to take the trains. The ticket wasn’t very expensive and I really do like trains. In addition to being reasonably priced, they’re fast, on time and all the signs were great so I didn’t feel worried about getting lost or ending up in the wrong place. The signs even report where each coach will show up on the platform so I had no trouble figuring out where to stand to get to my assigned seat.

I took a few more pictures while on my train adventure, here: https://www.flickr.com/photos/pleia2/albums/72157670930346613

And so I spent a couple hours on my way to Dusseldorf. I was a bit tired since my first train left the station at 7:36AM, so I mostly just listened to music and stared out the window. My flight out of Dusseldorf was uneventful, and was a direct to San Francisco so I was able to come home to my kitties in the early evening. Unfortunately MJ had left home the day before, so I’ll have to wait until we’re both in Philadelphia next week to see him.

by pleia2 at September 25, 2016 12:16 AM

September 24, 2016

Elizabeth Krumbach

OpenStack QA/Infrastructure Meetup in Walldorf

I spent this week in the lovely town of Walldorf, Germany with about 25 of my OpenStack Quality Assurance and Infrastructure colleagues. We were there for a late-cycle sprint, where we all huddled in a couple of rooms for three days to talk, script and code our way through some challenges that are much easier to tackle when all the key players are in a room together. QA and Infra have always been a good match for an event like this since we’re so tightly linked: the things QA works on are supported by and tested in the Continuous Integration system we run.

Our venue this time around was the SAP offices in Walldorf. They graciously donated the space to us for this event, and kept us blessedly fed, hydrated and caffeinated throughout the day.

Each day we enjoyed a lovely walk to and from the hotel many of us stayed at. We lucked out and there wasn’t any rain while we were there, so we got to take in the best of late summer weather in Germany. Our walk took us through a corn field, past flowers, gave us a nice glimpse of the town of Walldorf on the other side of the highway and then brought us onto the approach to the SAP buildings, of which there are many.

The first day began with an opening from our host at the SAP offices, Marc Koderer, and from the QA project lead Ken’ichi Ohmichi. From there we went through the etherpad for the event to figure out where to begin. A big chunk of the Infrastructure team went to their own room to chat about Zuulv3 and some of the work on Ansible, and a couple of us hung back with the QA team to move some of their work along.

Spending time with the QA folks I learned about future plans for a more useful series of bugday graphs. I also worked with Spencer Krum and Matt Treinish to land a few patches related to the new Firehose service. Firehose is an MQTT-based unified message bus that seeks to encompass all the developer-facing infra alerts and updates in a single stream. This includes job results from Gerrit, updates on bugs from Launchpad, specific logs that are processed by logstash and more. At the beginning of the sprint only Gerrit was feeding into it using germqtt, but by the end of Monday we had Launchpad bugs submitting events over email via lpmqtt. The work was mostly centered around setting up Cyrus with Exim and then configuring the accounts and MX records, and trying to do this all in a way that the rest of the team would be happy with. All seems to have worked out, and at the end of the day Matt sent out an email announcing it: Announcing firehose.openstack.org.
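If you want to eavesdrop on the firehose yourself, a quick sketch using the mosquitto clients looks something like this (the gerrit/# topic layout is an assumption, so check the infra documentation for the exact topic names):

sudo apt-get install mosquitto-clients
# -v prints the topic alongside each message.
mosquitto_sub -h firehose.openstack.org -t 'gerrit/#' -v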

That evening we gathered in the little town of Walldorf to have a couple beers, dinner, and relax in a lovely beer garden for a few hours as the sun went down. It was really nice to catch up with some of my colleagues that I have less day to day contact with. I especially enjoyed catching up with Yolanda and Gema, both of whom I’ve known for years through their past work at Canonical on Ubuntu. The three of us also were walk buddies back to the hotel, before which I demanded a quick photo together.

Tuesday morning we started off by inviting Khai Do over to give a quick demo of the Gerrit verify plugin. Now, Khai is one of us, so what do I mean by “come over”? Of all the places and times in the world, Khai was also at the SAP offices in Walldorf, Germany, but he was there for a Gerrit Hackathon. He brought along another Gerrit contributor and showed us how the verify plugin would replace our somewhat hacked into place Javascript that we currently have on our review pages to give a quick view into the test results. It also offers the ability in the web UI to run rechecks on tests, and will provide a page including history of all results through all the patchsets and queues. They’ve done a great job on it, and I was thrilled to see upstream Gerrit working with us to solve some of our problems.


Khai demos the Gerrit verify plugin

After Khai’s little presentation, I plugged my laptop into the projector and brought up the etherpad so we could spend a few minutes going over work that was done on Monday. A Zuulv3 etherpad had been worked on to capture a lot of the work from the Infrastructure team on Monday. Updates were added to our main etherpad about things other people worked on and reviews that were now pending to complete the work.

Groups then split off again, this time I followed most of the rest of the Infrastructure team into a room where we worked on infra-cloud, our infra-spun, fully open source OpenStack deployment that we started running a chunk of our CI tests on a few weeks ago. The key folks working on it gave a quick introduction and then we dove right into debugging some performance problems that were causing failed initial launches. This took us through poking at the Glance image service, rules in Neutron and defaults in the Puppet modules. A fair amount of multi-player (using screen) debugging was done up on the projector as we shifted around options, took the cloud out of the pool of servers for some time, and spent some time debugging individual compute nodes and instances as we watched what they did when they came up for the first time. In addition to our “vanilla” region, Ricardo Carrillo Cruz also made progress that day on getting our “chocolate” region working (next up: strawberry!).

I also was able to take some time on Tuesday to finally get notice and alert notifications going to our new @openstackinfra Twitter account. Monty Taylor had added support for this months ago, but I had just set up the account and written the patches to land it a few days before. We ran into one snafu, but a quick patch (thanks Andreas Jaeger!) got us on our way to automatically sending out our first Tweet. This will be fun, and I can stop being the unofficial Twitter status bot.

That evening we all piled into cars to head over to the nearby city of Heidelberg for dinner and drinks at Zum Weissen Schwanen (The White Swan). This ended up being our big team dinner. Lots of beers, great conversation and catching up on some discussions we didn’t have during the day. I had a really nice time and during our walk back to the car I got to see Heidelberg Castle light up at night as it looms over the city.

Wednesday kicked off once again at 9AM. For me this day was a lot of talking and chasing down loose ends while I had key people in the room. I also worked on some more Firehose stuff, this time working our way down the path to get logstash also sending data to Firehose. In the midst of that, we embarrassingly brought down our cluster due to a failure to quote strings in the config file, but we did get it back online and then more progress was made after everyone got home on Friday. Still, it was good to get part of the way there during the sprint, and we all learned about the amount of logging (in this case, not much!) our tooling for all this MQTT stuff was providing for us to debug. Never hurts to get a bit more familiar with logstash either.

The final evening was spent once again in Walldorf, this time at the restaurant just across the road from the one we went to on Monday. We weren’t there long enough to grow tired of the limited selection, so we all had a lovely time. My early morning to catch a train meant I stuck to a single beer and left shortly after 8PM with a colleague, but that was plenty late for me.


Photo courtesy of Chris Hoge (source)

Huge thanks to Marc and SAP for hosting us. The spaces worked out really well for everything we needed to get done. I also have to say I really enjoyed my time. I work with some amazing people, and come Thursday morning all I could think was “What a great week! But I better get home so I can get back to work.” Hey! This all was work! Also thanks to Jeremy Stanley, our fearless Infrastructure Project Team Leader who sat this sprint out and kept things going on the home front while we were all focused on the sprint.

A few more photos from our sprint here: https://www.flickr.com/photos/pleia2/albums/72157674174936355

by pleia2 at September 24, 2016 03:30 PM

September 20, 2016

Eric Hammond

Developing CloudStatus, an Alexa Skill to Query AWS Service Status -- an interview with Kira Hammond by Eric Hammond

Interview conducted in writing July-August 2016.

[Eric] Good morning, Kira. It is a pleasure to interview you today and to help you introduce your recently launched Alexa skill, “CloudStatus”. Can you provide a brief overview about what the skill does?

[Kira] Good morning, Papa! Thank you for inviting me.

CloudStatus allows users to check the service availability of any AWS region. On opening the skill, Alexa says which (if any) regions are experiencing service issues or were recently having problems. Then the user can inquire about the services in specific regions.

This skill was made at my dad’s request. He wanted to quickly see how AWS services were operating, without needing to open his laptop. As well as summarizing service issues for him, my dad thought CloudStatus would be a good opportunity for me to learn about retrieving and parsing web pages in Python.

All the data can be found in more detail at status.aws.amazon.com. But with CloudStatus, developers can hear AWS statuses with their Amazon Echo. Instead of scrolling through dozens of green checkmarks to find errors, users of CloudStatus listen to which services are having problems, as well as how many services are operating satisfactorily.

CloudStatus is intended for anyone who uses Amazon Web Services and wants to know about current (and recent) AWS problems. Eventually it might be expanded to talk about other clouds as well.

[Eric] Assuming I have an Amazon Echo, how do I install and use the CloudStatus Alexa skill?

[Kira] Just say “Alexa, enable CloudStatus skill”! Ask Alexa to “open CloudStatus” and she will give you a summary of regions with problems. An example of what she might say on the worst of days is:

“3 out of 11 AWS regions are experiencing service issues: Mumbai (ap-south-1), Tokyo (ap-northeast-1), Ireland (eu-west-1). 1 out of 11 AWS regions was having problems, but the issues have been resolved: Northern Virginia (us-east-1). The remaining 7 regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Or on most days:

“All 62 regional services in the 12 AWS regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Request any AWS region you are interested in, and Alexa will present you with current and recent service issues in that region.

Here’s the full recording of an example session: http://pub.alestic.com/alexa/cloudstatus/CloudStatus-Alexa-Skill-sample-20160908.mp3

[Eric] What technologies did you use to create the CloudStatus Alexa skill?

[Kira] I wrote CloudStatus using AWS Lambda, a service that manages servers and scaling for you. Developers need only pay for their servers when the code is called. AWS Lambda also displays metrics from Amazon CloudWatch.

Amazon CloudWatch gives statistics from the last couple weeks, such as the number of invocations, how long they took, and whether there were any errors. CloudWatch Logs is also a very useful service. It allows me to see all the errors and print() output from my code. Without it, I wouldn’t be able to debug my skill!

I used Amazon EC2 to build the Python modules necessary for my program. The modules (Requests and LXML) download and parse the AWS status page, so I can get the data I need. The Python packages and my code files are zipped and uploaded to AWS Lambda.
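For readers curious what that packaging step looks like, a rough sketch follows (the function name and file layout are made up for illustration):

# Install the dependencies alongside the skill code ...
pip install requests lxml -t .

# ... zip the code and modules together ...
zip -r cloudstatus-lambda.zip .

# ... and upload the bundle to the (hypothetical) Lambda function.
aws lambda update-function-code \
  --function-name CloudStatus \
  --zip-file fileb://cloudstatus-lambda.zip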

Fun fact: My Lambda function is based in us-east-1. If AWS Lambda stops working in that region, you can’t use CloudStatus to check if Northern Virginia AWS Lambda is working! For that matter, CloudStatus will be completely dysfunctional.

[Eric] Why do you enjoy programming?

[Kira] Programming is so much fun and so rewarding! I enjoy making tools so I can be lazy.

Let’s rephrase that: Sometimes I’m repeatedly doing a non-programming activity—say, making a long list of equations for math practice. I think of two “random” numbers between one and a hundred (a human can’t actually come up with a random set of numbers) and pick an operation: addition, subtraction, multiplication, or division. After doing this several times, the activity begins to tire me. My brain starts to shut off and wants to do something more interesting. Then I realize that I’m doing the same thing over and over again. Hey! Why not make a program?

Computers can do so much in so little time. Unlike humans, they are capable of picking completely random items from a list. And they aren’t going to make mistakes. You can tell a computer to do the same thing hundreds of times, and it won’t be bored.

Finish the program, type in a command, and voila! Look at that page full of math problems. Plus, I can get a new one whenever I want, in just a couple seconds. Laziness in this case drives a person to put time and effort into ever-changing problem-solving, all so they don’t have to put time and effort into a dull, repetitive task. See http://threevirtues.com/.

But programming isn’t just for tools! I also enjoy making simple games and am learning about websites.

One downside to having computers do things for you: You can’t blame a computer for not doing what you told it to. It did do what you told it to; you just didn’t tell it to do what you thought you did.

Coding can be challenging (even frustrating) and it can be tempting to give up on a debug issue. But, oh, the thrill that comes after solving a difficult coding problem!

The problem-solving can be exciting even when a program is nowhere near finished. My second Alexa program wasn’t coming along that well when—finally!—I got her to say “One plus one is eleven.” and later “Three plus four is twelve.” Though it doesn’t seem that impressive, it showed me that I was getting somewhere and the next problem seemed reasonable.

[Eric] How did you get started programming with the Alexa Skills Kit (ASK)?

[Kira] My very first Alexa skill was based on an AWS Lambda blueprint called Color Expert (alexa-skills-kit-color-expert-python). A blueprint is a sample program that AWS programmers can copy and modify. In the sample skill, the user tells Alexa their favorite color and Alexa stores the color name. Then the user can ask Alexa what their favorite color is. I didn’t make many changes: maybe Alexa’s responses here and there, and I added the color “rainbow sparkles.”

I also made a skill called Calculator in which the user gets answers to simple equations.

Last year, I took a music history class. To help me study for the test, I created a trivia game from Reindeer Games, an Alexa Skills Kit template (see https://developer.amazon.com/public/community/post/TxDJWS16KUPVKO/New-Alexa-Skills-Kit-Template-Build-a-Trivia-Skill-in-under-an-Hour). That was a lot of fun and helped me to grow in my knowledge of how Alexa works behind the scenes.

[Eric] How does Alexa development differ from other programming you have done?

[Kira] At first Alexa was pretty overwhelming. It was so different from anything I’d ever done before, and there were lines and lines of unfamiliar code written by professional Amazon people.

I found the ASK blueprints and templates extremely helpful. Instead of just being a functional program, the code is commented so developers know why it’s there and are encouraged to play around with it.

Still, the pages of code can be scary. One thing new Alexa developers can try: Before modifying your blueprint, set up the skill and ask Alexa to run it. Everything she says from that point on is somewhere in your program! Find her response in the program and tweak it. The variable name is something like “speech_output” or “speechOutput.”

It’s a really cool experience making voice apps. You can make Alexa say ridiculous things in a serious voice! Because CloudStatus started with the Color Expert blueprint, my first successful edit ended with our Echo saying, “I now know your favorite color is Northern Virginia. You can ask me your favorite color by saying, ‘What’s my favorite color?’.”

Voice applications involve factors you never need to deal with in a text app. When the user is interacting through text, they can take as long as they want to read and respond. Speech must be concise so the listener understands the first time. Another challenge is that Alexa doesn’t necessarily know how to pronounce technical terms and foreign names, but the software is always improving.

One plus side to voice apps is not having to build your own language model. With text-based programs, I spend a considerable amount of time listing all the ways a person can answer “yes,” or request help. Luckily, with Alexa I don’t have to worry too much about how the user will phrase their sentences. Amazon already has an algorithm, and it’s constantly getting smarter! Hint: If you’re making your own skill, use some built-in Amazon intents, like AMAZON.YesIntent or AMAZON.HelpIntent.

[Eric] What challenges did you encounter as you built the CloudStatus Alexa skill?

[Kira] At first, I edited the code directly in the Lambda console. Pretty soon though, I needed to import modules that weren’t built in to Python. Now I keep my code and modules in the same directory on a personal computer. That directory gets zipped and uploaded to Lambda, so the modules are right there sitting next to the code.

One challenge of mine has been wanting to fix and improve everything at once. Naturally, there is an error practically every time I upload my code for testing. Isn’t that what testing is for? But when I modify everything instead of improving bit by bit, the bugs are more difficult to sort out. I’m slowly learning from my dad to make small changes and update often. “Ship it!” he cries regularly.

During development, I grew tired of constantly opening my code, modifying it, zipping it and the modules, uploading it to Lambda, and waiting for the Lambda function to save. Eventually I wrote a separate Bash program that lets me type “edit-cloudstatus” into my shell. The program runs unit tests and opens my code files in the Atom editor. After that, it calls the command “fileschanged” to automatically test and zip all the code every time I edit something or add a Python module. That was exciting!

I’ve found that the Alexa speech-to-text conversions aren’t always what I think they will be. For example, if I tell CloudStatus I want to know about “Northern Virginia,” it sends my code “northern Virginia” (lowercase then capitalized), whereas saying “Northern California” turns into “northern california” (all lowercase). To at least fix the capitalization inconsistencies, my dad suggested lowercasing the input and mapping it to the standardized AWS region code as soon as possible.

[Eric] What Alexa skills do you plan on creating in the future?

[Kira] I will probably continue to work on CloudStatus for a while. There’s always something to improve, a feature to add, or something to learn about—right now it’s Speech Synthesis Markup Language (SSML). I don’t think it’s possible to finish a program for good!

My brother and I also want to learn about controlling our lights and thermostat with Alexa. Every time my family leaves the house, we say basically the same thing: “Alexa, turn off all the lights. Alexa, turn the kitchen light to twenty percent. Alexa, tell the thermostat we’re leaving.” I know it’s only three sentences, but wouldn’t it be easier to just say: “Alexa, start Leaving Home” or something like that? If I learned to control the lights, I could also make them flash and turn different colors, which would be super fun. :)

In August a new ASK template was released for decision tree skills. I want to make some sort of dichotomous key with that. https://developer.amazon.com/public/community/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill

[Eric] Do you have any advice for others who want to publish an Alexa skill?

[Kira]

  • Before submitting your skill for certification, make sure you read through the submission checklist. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-submission-checklist#submission-checklist

  • Remember to check your skill’s home cards often. They are displayed in the Alexa App. Sometimes the text that Alexa pronounces should be different from the reader-friendly card content. For example, in CloudStatus, “N. Virginia (us-east-1)” might be easy to read, but Alexa is likely to pronounce it “En Virginia, Us [as in ‘we’] East 1.” I have to tell Alexa to say “northern virginia, u.s. east 1,” while leaving the card readable for humans.

  • Since readers can process text at their own pace, the home card may display more details than Alexa speaks, if necessary.

  • If you don’t want a card to accompany a specific response, remove the ‘card’ item from your response dict. Look for the function build_speechlet_response() or buildSpeechletResponse().

  • Never point your live/public skill at the $LATEST version of your code. The $LATEST version is for you to edit and test your code, and it’s where you catch errors. (One way to publish a fixed version instead is sketched after this list.)

  • If the skill raises errors frequently, don’t be intimidated! It’s part of the process of coding. To find out exactly what the problem is, read the “log streams” for your Lambda function. To print debug information to the logs, print() the information you want (Python) or use a console.log() statement (JavaScript/Node.js).

  • It helps me to keep a list of phrases to try, including words that the skill won’t understand. Make sure Alexa doesn’t raise an error and exit the skill, no matter what nonsense the user says.

  • Many great tips for designing voice interactions are on the ASK blog. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-voice-design-best-practices

  • Have fun!
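On the $LATEST point above, a minimal sketch of publishing a fixed version and giving it an alias for the live skill to use (the function and alias names are placeholders):

# Freeze the current $LATEST code as an immutable, numbered version.
aws lambda publish-version --function-name CloudStatus

# Point a "prod" alias at that version; the live skill can use the alias ARN
# while $LATEST stays free for editing and testing.
aws lambda create-alias \
  --function-name CloudStatus \
  --name prod \
  --function-version 1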

In The News

Amazon had early access to this interview and to Kira and wrote an article about her in the Alexa Blog:

14-Year-Old Girl Creates CloudStatus Alexa Skill That Benefits AWS Developers

which was then picked up by VentureBeat:

A 14-year-old built an Alexa skill for checking the status of AWS

Original article and comments: https://alestic.com/2016/09/alexa-skill-aws-cloudstatus/

September 20, 2016 04:15 AM

Akkana Peck

Frogs on the Rio, and Other Amusements

Saturday, a friend led a group hike for the nature center from the Caja del Rio down to the Rio Grande.

The Caja (literally "box", referring to the depth of White Rock Canyon) is an area of national forest land west of Santa Fe, just across the river from Bandelier and White Rock. Getting there involves a lot of driving: first to Santa Fe, then out along increasingly dicey dirt roads until the road looks too daunting and it's time to get out and walk.

[Dave climbs the Frijoles Overlook trail] From where we stopped, it was only about a six mile hike, but the climb out is about 1100 feet and the day was unexpectedly hot and sunny (a mixed blessing: if it had been rainy, our Rav4 might have gotten stuck in mud on the way out). So it was a notable hike. But well worth it: the views of Frijoles Canyon (in Bandelier) were spectacular. We could see the lower Bandelier Falls, which I've never seen before, since Bandelier's Falls Trail washed out below the upper falls the summer before we moved here. Dave was convinced he could see the upper falls too, but no one else was convinced, though we could definitely see the red wall of the maar volcano in the canyon just below the upper falls.

[Canyon Tree Frog on the Rio Grande] We had lunch in a little grassy thicket by the Rio Grande, and we even saw a few little frogs, well camouflaged against the dirt: you could even see how their darker brown spots imitated the pebbles in the sand, and we wouldn't have had a chance of spotting them if they hadn't hopped. I believe these were canyon treefrogs (Hyla arenicolor). It's always nice to see frogs -- they're not as common as they used to be. We've heard canyon treefrogs at home a few times on rainy evenings: they make a loud, strange ratcheting noise which I managed to record on my digital camera. Of course, at noon on the Rio the frogs weren't making any noise: just hanging around looking cute.

[Chick Keller shows a burdock leaf] Sunday we drove around the Pojoaque Valley following their art tour, then after coming home I worked on setting up a new sandblaster to help with making my own art. The hardest and least fun part of welded art is cleaning the metal of rust and paint, so it's exciting to finally have a sandblaster to help with odd-shaped pieces like chains.

Then tonight was a flower walk in Pajarito Canyon, which is bursting at the seams with flowers, especially purple aster, goldeneye, Hooker's evening primrose and bahia. Now I'll sign off so I can catalog my flower photos before I forget what's what.

September 20, 2016 02:17 AM

September 19, 2016

Jono Bacon

Looking For Talent For ClusterHQ


Recently I signed ClusterHQ as a client. If you are unfamiliar with them, they provide a neat technology for managing data as part of the overall lifecycle of an application. You can learn more about them here.

I will be consulting with ClusterHQ to help them (a) build their community strategy, (b) find a great candidate for the Senior Developer Evangelist role, and (c) help to mentor that person in their role to be successful.

If you are looking for a new career, this could be a good opportunity. ClusterHQ are doing some interesting work, and if this role is a good fit for you, I will also be there to help you work within a crisply defined strategy and be successful in the execution. Think of it as having a friend on the inside. 🙂

You can learn more in the job description, but you should have these skills:

  • You are a deep full-stack cloud technologist. You have a track record of building distributed applications end-to-end.
  • You either have a Bachelor’s in Computer Science or are self-motivated and self-taught such that you don’t need one.
  • You are passionate about containers, data management, and building stateful applications in modern clusters.
  • You have a history of leadership and service in developer and DevOps communities, and you have a passion for making applications work.
  • You have expertise in lifecycle management of data.
  • You understand how developers and IT organizations consume cloud technologies, and are able to influence enterprise technology adoption outcomes based on that understanding.
  • You have great technical writing skills demonstrated via documentation, blog posts and other written work.
  • You are a social butterfly. You like meeting new people on and offline.
  • You are a great public speaker and are sought after for your expertise and presentation style.
  • You don’t mind charging your laptop and phone in airport lounges so are willing and eager to travel anywhere our developer communities live, and stay productive and professional on the road.
  • You like your weekend and evening time to focus on your outside-of-work passions, but don’t mind working irregular hours and weekends occasionally (as the job demands) to support hackathons, conferences, user groups, and other developer events.

ClusterHQ are primarily looking for help with:

  • Creating high-quality technical content for publication on our blog and other channels to show developers how to implement specific stateful container management technologies.
  • Spreading the word about container data services by speaking and sharing your expertise at relevant user groups and conferences.
  • Evangelizing stateful container management and ClusterHQ technologies to the Docker Swarm, Kubernetes, and Mesosphere communities, as well as to DevOps/IT organizations chartered with operational management of stateful containers.
  • Promoting the needs of developers and users to the ClusterHQ product & engineering team, so that we build the right products in the right way.
  • Supporting developers building containerized applications wherever they are, on forums, social media, and everywhere in between.

Pretty neat opportunity.

Interested?

If you are interested in this role, there are few options for next steps:

  1. You can apply directly by clicking here.
  2. Alternatively, if I know you, I would invite you to confidentially share your interest in this role by filling in my form here. This way I can get a good sense of who is interested and also recommend people I personally know and can vouch for. I will then reach out to those of you who this seems to be a good potential fit for and play a supporting role in brokering the conversation.

By the way, there are going to be a number of these kinds of opportunities shared here on my blog. So, be sure to subscribe to my posts if you want to keep up to date with the latest opportunities.

The post Looking For Talent For ClusterHQ appeared first on Jono Bacon.

by Jono Bacon at September 19, 2016 07:25 PM

September 17, 2016

Elizabeth Krumbach

Kubrick, Typeface to Interface and the Zoo

I’ve been home for almost three weeks, and now I’m back in an airport. For almost two weeks of that MJ has been on a business trip overseas and I’ve kept myself busy with work, the book release and meeting up with friends and acquaintances. The incredibly ambitious plans I had for this time at home weren’t fully realized, but with everything we have going on I’m kind of glad I was able to spend some time at home.

Mornings have changed some for me during these three weeks. Coming off of trips from Mumbai and Philadelphia in August my sleep schedule was heavily shifted, and I decided to take advantage of that by going out running in the mornings. I’d been meaning to get back into it, and my doctor has gotten a bit more insistent of late based on some results from blood work, and she’s right. Instead of doing proper C25K this time I’ve just been doing interval run/walks. I walk about a half mile, do a pretty even run/walk for two miles and then a half mile back. It’s not a lot, but I’ll up the difficulty level as the run/walk I have going starts to feel easier. I have been going out 4-5 days a week and so far it feels great and seems sustainable. Fingers crossed for keeping this up during my next few weeks of travel.

With MJ out of town I’ve made plans with a bunch of local friends. Meals with my friends James, Emma, Sabice and Amanda last week were all a lot of fun and reversed my at-home trend of being a hermit. Last weekend I made my way over to Stanley Kubrick: The Exhibition. It opened in June and I’ve been interested in going, but sorting out timing and who to go with has been impossible. I finally just went by myself last Saturday after having some sushi for lunch nearby.

I wouldn’t say I’m a huge Kubrick fan, but I have enjoyed a lot of his work. The exhibit does a really exceptional job showcasing his work, with bright walls throughout and really nicely laid out scripts, cameras, costumes and props from the films. I had just recently seen Eyes Wide Shut again, but the exhibit made me want to go back and watch the rest, and ones I haven’t seen (Lolita, Spartacus). I particularly enjoyed the bits about my favorite movies of his, 2001: A Space Odyssey and Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb.

Some photos from the exhibition here: https://www.flickr.com/photos/pleia2/albums/72157670417890794

I did get out to a movie recently with my friend mct. We saw Complete Unknown which was OK, but not as good as I had hoped. Dinner at a nearby brewery rounded off the evening nicely.

With the whirlwind week of my book release, preparations for the OpenStack QA/Infrastructure sprint (which I’m on my way to now) and other things, I called it a day early on Thursday and met up with my friend Atul for some down time to visit the San Francisco Zoo. He’s been in town for several weeks doing a rotation for work, and we kept missing each other between other plans and my travel schedule. We got to the zoo in time to spend about 90 minutes there before they closed, making it around to most of the exhibits. We got a picture together by the giraffes, but they’ve opened exhibits for the Mexican Wolves and Sifaka lemurs since I last visited! It was fun to finally see both of those. I have some more zoo visits in my future too, hoping to visit the Philadelphia zoo when I’m there next weekend and then the Columbus Zoo after the Ohio LinuxFest in early October.

More zoo pictures here: https://www.flickr.com/photos/pleia2/albums/72157670651372724

Thursday night I met up with my friend Steve to go to the San Francisco Museum of Modern Art to see the Typeface to Interface exhibit. This museum is just a block from where I live, and it recently reopened after a few years of massive renovations. They’re open until 9PM on Thursdays and we got there around 7:30 to quite the crowd of people, so these later hours seem to be really working for them. Unfortunately I’ve never been much of a fan of modern art. This exhibit interested me though, and I’m really glad I went. It walks you through the beginning of bringing typeface work into the digital realm, presenting you with the IBM Selectric that had a replaceable typing element ball for different fonts. You see a variety of digital-inspired posters and other art, and the New York Transit Authority Graphics Standards Manual. It was fun going with Steve too, since his UX expertise meant that he actually knew a thing or two about these things outside of the geeky computer context I was approaching them with. I think they could have done a bit more to tie the exhibit together, but it’s probably the best one I’ve seen there.

We spent the rest of the evening before closing walking through several of the other galleries in the museum. Nothing really grabbed my interest, and a lot of it I found difficult to understand why it was in a museum. I do understand the aesthetically pleasing nature of much abstract art, but when it starts being really simple (panel of solid magenta) or really eclectic I struggle with understanding the appeal. Dinner was great though, both of us are east coasters by origin and we went to my favorite fish place in SOMA for east coast oysters, mussels, lobster rolls and strawberry shortcake.

Yesterday afternoon MJ got home from his work trip. In the midst of packing and laundry we were able to catch up and spend some precious time together, including a wonderful dinner at Fogo de Chão. Now I’m off to Germany for work. I had time to write this post because the first flight I had was delayed by an astonishing 6 hours, sailing past catching my connection. I’ve now been rebooked on a two stop itinerary that’s getting me in 5 hours later than I had expected. Sadly, this means I’m missing most of the tourist day in Heidelberg I had planned with colleagues on Sunday, but I expect we’ll still be able to get out for drinks in the evening before work on Monday morning.

by pleia2 at September 17, 2016 04:53 PM

September 13, 2016

Elizabeth Krumbach

Common OpenStack Deployments released!

Back in the fall of 2014 I signed a contract with Prentice Hall that began my work on my second book, Common OpenStack Deployments. This was the first book I was writing from scratch and the first where I was the lead author (the first books I was co-author on were the 8th and 9th editions of The Official Ubuntu Book). That contract started me on a nearly two year journey to write this book about OpenStack, which I talk a lot about here: How the book came to be.

Along the way I recruited my excellent contributing author Matt Fischer, who in addition to his Puppet and OpenStack expertise, shares a history with me in the Ubuntu community and Mystery Science Theater 3000 fandom (he had a letter read on the show once!). In short, he’s pretty awesome.

A lot of work and a lot of people went into making this book a reality, so I’m excited and happy to announce that the book has been officially released as of last week, and yesterday I got my first copy direct from the printer!

As I was putting the finishing touches on it in the spring, the dedication came up. I decided to dedicate the book to the OpenStack community, with a special nod to the Puppet OpenStack team.

Text:

This book is dedicated to the OpenStack community. Of the community, I’d also like to specifically call out the help and support received from the Puppet OpenStack Team, whose work directly laid the foundation for the deployment scenarios in this book.

Huge thanks to everyone who participated in making this book a reality, whether they were diligently testing all of our Puppet manifests, lending their OpenStack or systems administration experience to reviewing, or giving me support as I worked my way through the tough parts of the book (my husband was particularly supportive during some of the really grim moments). This is a really major thing for me and I couldn’t have done it without all of you.

I’ll be continuing to write about updates to the book over on the blog that lives on the book’s website: DeploymentsBook.com (RSS). You can also follow updates on Twitter via @deploymentsbook, if that’s your thing.

If you’re interested in getting your hands on a copy, it’s sold by all the usual book sellers and available on Safari. The publisher’s website also routinely has sales and deals, especially if you buy the paper and digital copies together, so keep an eye out. I’ll also be speaking at conferences over the next few months and will be giving out signed copies. Check out my current speaking engagements here to see where I’ll be and I will have a few copies at the upcoming OpenStack Summit in Barcelona.

by pleia2 at September 13, 2016 06:53 PM