Planet Ubuntu California

April 28, 2017

Jono Bacon

Anonymous Open Source Projects

Today Solomon asked an interesting question on Twitter about anonymity in open source projects.

He made it clear he is not advocating for this view, just a thought experiment. I had, well, a few thoughts on this.

I tend to think of open source projects in three broad buckets.

Firstly, we have the overall workflow in which the community works together to build things. This is your code review processes, issue management, translations workflow, event strategy, governance, and other pieces.

Secondly, there are the individual contributions. This is how we assess what we want to build, what quality looks like, how we build modularity, and other elements.

Thirdly, there is identity which covers the identity of the project and the individuals who contribute to it. Solomon taps into this third component.

Identity

While the first two components, workflow and contributions, are clearly important in defining what you want to work on and how you build it, identity is more subtle.

I think identity plays a few different roles at the individual level.

Firstly, it helps to build reputation. Open source communities are at a core level meritocracies: contributions are assessed on their value, and the overall value of the contributor is based on their merits. Now, yes, I know some of you will balk at whether open source communities are actually meritocracies. The thing is, too many people treat “meritocracy” as a framework or model: it isn’t. It is more of a philosophy…a star that we move towards.

It is impossible to build a meritocracy without some form of identity attached to the contribution. We need to have a mapping between each contribution and the same identity that delivered it: this helps that individual build their reputation as they deliver more and more contributions. This also helps them flow from being a new contributor, to a regular, and then to a leader.

This leads to my second point. Identity is also critical for accountability. Now, when I say accountability we tend to think of someone being responsible for their actions. Sure, this is the case, but accountability also plays an important and healthy role in people receiving peer feedback on their work.

According to Google Images search, “accountability” requires lots of fist bumps.

Open source communities are kinda weird places to be. It is easy to forget that (a) joining a community, (b) making a contribution, (c) asking for help, (d) having your contribution critically reviewed, and (e) solving problems all happen out in the open, for all to see. This can be remarkably weird and stressful for people new to or unfamiliar with open source, and it sits on top of the cornucopia of human insecurities about looking stupid, embarrassing yourself, and so on. While I have never been to one (honest), I imagine this is what it must be like going to a nudist colony: everything out on display, both good and bad.

All of this rolls up to identity playing an important role for building the fabric of a community, for effective peer review, and the overall growth of individual participants (and thus the network effect of the community).

Real Names vs. Handles

If we therefore presume identity is important, do we require that identity to be a real name (e.g. “Jono Bacon”) or can it be a handle (e.g. “MetalDude666” – not my actual handle, btw)?

In terms of the areas I presented above, such as building reputation, accountability, and peer review, this can all be accomplished if people use handles, provided there is some way of knowing that “MetalDude666” is the same person each time. Many gaming communities have players who build remarkable reputations and accountability even though no one knows who they really are, just their handles.

Where things get trickier is assuring the same quality of community experience for those who use real names and those who use handles in the same community. On core infrastructure (such as code hosting, communication channels, websites, etc) this can typically be assured. It can get trickier with areas such as real-world events. For example, if the community has an in-person event, the folks with the handles may not feel comfortable attending so as to preserve their anonymity. Given how key these kinds of events can be to building relationships, it can therefore result in a social/collaborative delta between those with real names and those with handles.

So, in answer to Solomon’s question, I do think identity is critical, but it could be all handles if required. What is key is to either (a) require everyone to use only handles or only real names (which is tough), or (b) provide very careful community strategy and execution to reduce the delta of experience between those with real names and handles (tough, but easier).

So, what do you think, folks? Do you agree with me, or am I speaking nonsense? Can you share great examples of anonymous open source communities? Are there elements I missed in my assessment here? Share them in the comments below!

The post Anonymous Open Source Projects appeared first on Jono Bacon.

by Jono Bacon at April 28, 2017 06:55 PM

April 25, 2017

Akkana Peck

Typing Greek letters

I'm taking a MOOC that includes equations involving Greek letters like epsilon. I'm taking notes online, in Emacs, using the iimage mode tricks for taking MOOC class notes in emacs that I worked out a few years back.

Iimage mode works fine for taking screenshots of the blackboard in the videos, but sometimes I'd prefer to just put the equations inline in my file. At first I was typing out things like E = epsilon * sigma * T^4 but that's silly, and of course the professor isn't spelling out the Greek letters like that when he writes the equations on the blackboard. There's got to be a way to type Greek letters on this US keyboard.

I know how to type things like accented characters using the "Multi key" or "Compose key". In /etc/default/keyboard I have XKBOPTIONS="ctrl:nocaps,compose:menu,terminate:ctrl_alt_bksp" which, among other things, sets the compose key to be my "Menu" key, which I never used otherwise. And there's a file, /usr/share/X11/locale/en_US.UTF-8/Compose, that includes all the built-in compose key sequences. I have a shell function in my .zshrc,

composekey() {
  grep -i $1 /usr/share/X11/locale/en_US.UTF-8/Compose
}
so I can type something like composekey epsilon and find out how to type specific codes. But that didn't work so well for Greek letters. It turns out this is how you type them:
<dead_greek> <A>            : "Α"   U0391    # GREEK CAPITAL LETTER ALPHA
<dead_greek> <a>            : "α"   U03B1    # GREEK SMALL LETTER ALPHA
<dead_greek> <B>            : "Β"   U0392    # GREEK CAPITAL LETTER BETA
<dead_greek> <b>            : "β"   U03B2    # GREEK SMALL LETTER BETA
<dead_greek> <D>            : "Δ"   U0394    # GREEK CAPITAL LETTER DELTA
<dead_greek> <d>            : "δ"   U03B4    # GREEK SMALL LETTER DELTA
<dead_greek> <E>            : "Ε"   U0395    # GREEK CAPITAL LETTER EPSILON
<dead_greek> <e>            : "ε"   U03B5    # GREEK SMALL LETTER EPSILON
... and so forth. And this <dead_greek> key isn't actually defined in most US/English keyboard layouts. You can check whether it's defined for you with:

xmodmap -pke | grep dead_greek

Of course you can use xmodmap to define a key to be <dead_greek>. I stared at my keyboard for a bit, and decided that, considering how seldom I actually need to type Greek characters, I didn't see the point of losing a key for that purpose (though if you want to, here's a thread on how to map <dead_greek> with xmodmap).
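
(If you do want to sacrifice a key, here's a minimal sketch of the xmodmap approach; the keycode below is just a guess at the Menu key on a typical keyboard, so check your own keycode with xev first.)

# Find the keycode of the key you want to remap (press it in the xev window):
xev | grep keycode

# Then map that keycode to dead_greek (135 is often the Menu key, but verify yours):
xmodmap -e 'keycode 135 = dead_greek'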

I decided it would make much more sense to map it to the compose key with a prefix, like 'g', that I don't need otherwise. I can do that in ~/.XCompose like this:

<Multi_key> <g> <A>            : "Α"   U0391    # GREEK CAPITAL LETTER ALPHA
<Multi_key> <g> <a>            : "α"   U03B1    # GREEK SMALL LETTER ALPHA
<Multi_key> <g> <B>            : "Β"   U0392    # GREEK CAPITAL LETTER BETA
<Multi_key> <g> <b>            : "β"   U03B2    # GREEK SMALL LETTER BETA
<Multi_key> <g> <D>            : "Δ"   U0394    # GREEK CAPITAL LETTER DELTA
<Multi_key> <g> <d>            : "δ"   U03B4    # GREEK SMALL LETTER DELTA
<Multi_key> <g> <E>            : "Ε"   U0395    # GREEK CAPITAL LETTER EPSILON
<Multi_key> <g> <e>            : "ε"   U03B5    # GREEK SMALL LETTER EPSILON
... and so forth.

And now I can type [MENU] g e and a lovely ε appears, at least in any app that supports Greek fonts, which is most of them nowadays.

April 25, 2017 06:57 PM

April 21, 2017

Akkana Peck

Comb Ridge and Cedar Mesa Trip

[House on Fire ruin, Mule Canyon UT] Last week, my hiking group had its annual trip, which this year was to Bluff, Utah, near Comb Ridge and Cedar Mesa, an area particularly known for its Anasazi ruins and petroglyphs.

(I'm aware that "Anasazi" is considered a politically incorrect term these days, though it still seems to be in common use in Utah; it isn't in New Mexico. My view is that I can understand why Pueblo people dislike hearing their ancestors referred to by a term that means something like "ancient enemies" in Navajo; but if they want everyone to switch from using a mellifluous and easy to pronounce word like "Anasazi", they ought to come up with a better, and shorter, replacement than "Ancestral Puebloans." I mean, really.)

The photo at right is probably the most photogenic of the ruins I saw. It's in Mule Canyon, on Cedar Mesa, and it's called "House on Fire" because of the colors in the rock when the light is right.

The light was not right when we encountered it, in late morning around 10 am; but fortunately, we were doing an out-and-back hike. Someone in our group had said that the best light came when sunlight reflected off the red rock below the ruin up onto the rock above it, an effect I've seen in other places, most notably Bryce Canyon, where the hoodoos look positively radiant when seen backlit, because that's when the most reflected light adds to the reds and oranges in the rock.

Sure enough, when we got back to House on Fire at 1:30 pm, the light was much better. It wasn't completely obvious to the eye, but comparing the photos afterward, the difference is impressive: Changing light on House on Fire Ruin.

[Brain man? petroglyph at Sand Island] The weather was almost perfect for our trip, except for one overly hot afternoon on Wednesday. And the hikes were fairly perfect, too -- fantastic ruins you can see up close, huge petroglyph panels with hundreds of different creatures and patterns (and some that could only have been science fiction, like brain-man at left), sweeping views of canyons and slickrock, and the geology of Comb Ridge and the Monument Upwarp.

And in case you read my last article, on translucent windows, and are wondering how those generated waypoints worked: they were terrific, and in some cases made the difference between finding a ruin and wandering lost on the slickrock. I wish I'd had that years ago.

Most of what I have to say about the trip is already in the comments to the photos, so I'll just link to the photo page:

Photos: Bluff trip, 2017.

April 21, 2017 01:28 AM

April 20, 2017

kdub

5 years of Test Driven Development, Visualized

Here’s a 15 minute video covering nearly 5 years of active Mir development in C++.

We (the original Mir team) started the project by reading “Growing Object-Oriented Software, Guided by Tests” by Steve Freeman and Nat Pryce (book site). We followed the philosophy of test-driven development closely after that, really throughout the whole project.

This is a video generated by the program ‘gource’. Every file or directory is a node, and little avatars, one for every contributor, zoom around the files and zap them when a change is made. If you watch closely, some contributors will fixate on a few files for a bit (bug hunting, maybe?), and sometimes the avatars zap a lot of files in close succession, during feature additions and refactoring.
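
(For the curious, here is a rough sketch of the sort of gource invocation that produces a video like this, piping the rendered frames into ffmpeg. The speeds, resolution, and output filename are illustrative guesses, not the actual settings used for the Mir video.)

# Render the repository history with gource and encode the frame stream with ffmpeg.
gource --seconds-per-day 0.2 --auto-skip-seconds 1 --key -1280x720 -o - \
  | ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
           -vcodec libx264 -preset fast -pix_fmt yuv420p mir-tdd-history.mp4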

A few things to point out about TDD in the visualization. At the end, the 3 biggest branches are headers, source code, and tests. The relative sizes of those components remained roughly the same throughout the whole project, which shows that we were adding a test and then fleshing out the source code (and supporting headers) from there. When an avatar zaps a lot of files at once, there’s always a corresponding zapping of the tests tree and the source tree. Write the test first! Very cool to see how things come together with TDD on a large, sustained, and high quality code base.

by kdub at April 20, 2017 05:03 PM

April 11, 2017

Elizabeth Krumbach

After the last snow of the season

Just over a week ago I returned to San Francisco from a visit back east. I imagine trips back to Philadelphia will grow less noteworthy as we grow more accustomed to having property there as well. Visits may cease having specific reasons and instead just be a part of our lives. This wasn’t one of those trips though, I had a specific agenda going in to spend the week going through and organizing my mother-in-law’s belongings.

When we laid her to rest back in February things were a bit of a whirlwind. Very little time was spent going through her belongings as we quickly packed up her apartment and dropped all the boxes in our den at the townhouse before getting back to funeral arrangements, other immediate end of life tasks, and time spent with family during our visit. When I arrived in Philadelphia for this trip, I had my work cut out for me.

The last snow of the season was also in recent memory for the area. Though much of the naturally fallen snow around had melted, piles remained all over my neighborhood. As the week progressed, a series of rain storms coming through the area and warmer weather meant that by the time my trip came to a close much of the snow was gone. I was fortunate weather-wise with many of my plans though, even the ones that took me outside largely landed on the dry parts of my visit.

I spent Monday-Friday working nominally 9-5 (earlier or later depending on the day, meetings scheduled). It was a great test for my ability to interact with the team remotely during a normal work week. Fortunately the team is used to being distributed and I have been working from home often even when I’m in town, so it’s not been a huge culture shift for any of us. It was also good to get comfortable working in that space, having breakfast and lunches at the townhouse and starting to develop a normal life routine out there instead of feeling like I’m on a trip.


DC/OS Office Hours at work taken upstairs!

Since I was working, it took me the whole trip to get through her 20 or so boxes (excluding clothes), but it wasn’t just about time. I knew this work would be difficult. The loss was still somewhat fresh, and though MJ was just a call or text away, it was still hard going through all of her things on my own. There’s also no denying the personal impact of seeing someone’s life packed into 20 boxes. How many boxes would I end up with? What would my family surviving me do with it all? What is sentimental to me but would be confusing or unimportant to almost anyone else? What makes me happy today but will be a burden-of-stuff to those who come after me? All of this led to a great amount of care and respect as I went through to catalog and repack all of her things, and decide what few items here and there could be donated, which ended up being almost exclusively clothes and linens we had no use for.

While I was there, regular updates also came in from MJ about Simcoe’s rapidly declining health. Not all communication was sad though. I was getting pictures and updates about what was going on in San Francisco, and was able to loop MJ in whenever I had questions or comments about things. I ended up having to bring a few piles of paperwork home with me, but staying in touch was really nice.

To balance the difficulty presented by all this, I also spent time with friends and family. The Sunday following my arrival I took advantage of the nearby train station for the second time since moving into the townhouse to head to downtown Philly. When I lived in the area previously I’d never lived near a rail line, and my use of public transit was rare. As a result, proximity to a regional rail line was not an intended goal of where we ended up buying, but it’s quickly turning into something I value considerably. Living in the bay area has turned me into quite the rail and public transit fan. In the past six months the amount of time I’ve spent on Philly public transit has rivaled what I experienced while living in the area. City life here in San Francisco has also reduced my apprehension about driving in cities, but I’m still not super keen on dealing with traffic or parking once I get near my destination downtown, and I actively enjoy train rides. The line I take down to the city runs hourly and takes 40 minutes to drop me off at Market East station, nearly matching what I’d get driving when traffic and parking are considered.

The end of my train ride brought me to a lunch with my friend Tom at Bareburger. They had a surprisingly option-filled menu for a burger place, and I think I’d go back for their milkshakes alone, and I don’t think I’d be able to resist adding duck bacon to my burger. It was a pleasure to catch up. Tom is one of those friends I met through LiveJournal well over a decade ago, and in spite of living in proximity to each other for years, our in-person time together was quite limited.

This trip also afforded me the opportunity to have dinner with my friend Jace. We hadn’t seen each other in probably eight years, but he lives not too far from the townhouse and we’ve kept in touch online. He’s also the designer who came up with the last two iterations of the main page of princessleia.com and we’ve both published books in the past year, leading to piles of options for discussion. Given his proximity to our new place I hope we can make more time to hang out in the coming years, it was nice to reconnect.

Some move-in work progressed on the townhouse as well, as we customize it to our liking. My brother-in-law came over to do some wall excavation off of our garage to see if a closet could be put in under the stairs. Success! The void we speculated about does indeed exist and we’ll be working with him on a quote to do the formal build out work in the coming months. After the wall work, I joined him, his mother and my father-in-law for a wonderful dinner at the nearby Uzbekistan Restaurant on Bustleton Ave in Philadelphia. It was my first time there, but after enjoying their culinary delights, it won’t be my last.

On Wednesday I met up with my friends Crissi and Nita to see Beauty and the Beast for what was the second time for all three of us. We went to one of the new theaters that serves dinner along with the movie due to time constraints with Nita’s pre-surgical eating schedule, and then met afterwards for dessert elsewhere so we could catch up and actually talk.

And pre-surgical? Nita was having a procedure done the following morning. In spite of her living thirty minutes from my place, fate would have it that the center she went to for the procedure was just a couple miles from the townhouse. On Thursday I headed over right after work to spend a few hours with her and several folks who dropped in to visit her throughout the evening. When she was discharged the following afternoon, she and her sister came over to my place to spend the night so she wouldn’t have the thirty minute drive home so soon after surgery. I really enjoyed the company, making my first proper dinner for more than just myself with our new pots and pans (spaghetti!), and a spread of omelettes the following morning. We also engaged in a Pirates of the Caribbean marathon, making our way through three of the movies, since I’d somehow neglected to see any of them.

Time constraints got in the way of plans to visit some of my friends in New Jersey on Saturday, which I’m disappointed about, but it couldn’t be avoided. My final day in town was Sunday, spent with yet another friend, making our way down to Delaware for a vintage toy show, and then taking some time before my flight for a walk in a local park where we could enjoy the weather and talk. It was the most beautiful day of my trip, and though it wasn’t particularly warm, the temperatures in the 50s made for a San Francisco-like feel that I have come to really enjoy taking walks in.

I haven’t had the easiest time over the past few months, and this visit definitely continued in the vein of complicated emotions. Rebuilding the in-person relationships that had largely shifted to being online since I moved away has brought some peace to what has been a difficult time. I’m incredibly grateful for the wonderful people I have in my life, and am reminded that of those finally organized boxes that my mother-in-law left behind, 20% of what I went through were photo-based. Photos of friends, family, various moments in her life that she held on to through the years. In my often work-focused life, it was a good reminder of what is important in life besides what we accomplish professionally. I need the people I love and care for to really make me feel whole.

And upon returning home, MJ met me at the airport with roses. I love and am loved, so much to still be grateful for.

by pleia2 at April 11, 2017 02:07 AM

April 10, 2017

Elizabeth Krumbach

Simcoe loved

Yesterday we had to let go of our precious Simcoe. She was almost ten and a half years old, and had spent the past five and a half years undergoing treatment for her Chronic Renal Failure (CRF).

I’ll be doing a final medical post that has details about her care over the years and how her levels looked as the disease progressed, but these very painful past twenty-four hours have reminded me of so many of the things our little kitty loved and made her the sweet, loving, fun critter she was. So this post is just a simple one.

Simcoe loved her seagull on a stick, I had to covertly buy an identical one when her old one broke

Simcoe loved being a country cat, hunting bugs and watching chipmunks at the house in Pennsylvania

Simcoe loved being a city cat, staring down at cars and people from the high rise window sill in San Francisco

Simcoe loved little cat tents and houses

Simcoe loved sitting on our laps

Simcoe loved cold cut turkey

Simcoe loved bringing toys on to the bed so we would play with her

Simcoe loved snuggling Caligula

Simcoe loved being inside boxes

Simcoe loved sitting in the sun

Simcoe loved popcorn

Simcoe loved sitting on books and magazines I was trying to read

Simcoe loved string

Simcoe loved having a perfectly groomed coat of fur

Simcoe loved sitting on suitcases

Simcoe loved her Millennium Falcon on a stick

Simcoe loved climbing on top of boxes

Simcoe loved paper bags

Simcoe loved talking

Simcoe loved sleeping on our warm laptops if we left them open

Simcoe loved living, which made this all so much harder

Simcoe loved us, and we loved her so very much

by pleia2 at April 10, 2017 07:25 PM

April 06, 2017

Akkana Peck

Clicking through a translucent window: using X11 input shapes

It happened again: someone sent me a JPEG file with an image of a topo map, with a hiking trail and interesting stopping points drawn on it. Better than nothing. But what I really want on a hike is GPX waypoints that I can load into OsmAnd, so I can see whether I'm still on the trail and how to get to each point from where I am now.

My PyTopo program lets you view the coordinates of any point, so you can make a waypoint from that. But for adding lots of waypoints, that's too much work, so I added an "Add Waypoint" context menu item -- that was easy, took maybe twenty minutes. PyTopo already had the ability to save its existing tracks and waypoints as a GPX file, so no problem there.

[transparent image viewer overlayed on top of topo map] But how do you locate the waypoints you want? You can do it the hard way: show the JPEG in one window, PyTopo in the other, and do the "let's see the road bends left then right, and the point is off to the northwest just above the right bend and about two and a half times as far away as the distance through both road bends". Ugh. It takes forever and it's terribly inaccurate.

More than once, I've wished for a way to put up a translucent image overlay that would let me click through it. So I could see the image, line it up with the map in PyTopo (resizing as needed), then click exactly where I wanted waypoints.

I needed two features beyond what normal image viewers offer: translucency, and the ability to pass mouse clicks through to the window underneath.

A translucent image viewer, in Python

The first part, translucency, turned out to be trivial. In a class inheriting from my Python ImageViewerWindow, I just needed to add this line to the constructor:

    self.set_opacity(.5)

Plus one more step. The window was translucent now, but it didn't look translucent, because I'm running a simple window manager (Openbox) that doesn't have a compositor built in. Turns out you can run a compositor on top of Openbox. There are lots of compositors; the first one I found, which worked fine, was:

xcompmgr -c -t-6 -l-6 -o.1

The -c specifies client-side compositing. -t and -l specify top and left offsets for window shadows (negative so they go on the bottom right). -o.1 sets the opacity of window shadows. In the long run, -o0 is probably best (no shadows at all) since the shadow interferes a bit with seeing the window under the translucent one. But having a subtle .1 shadow was useful while I was debugging.
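
(If you want the compositor running whenever Openbox starts, here's a minimal sketch, assuming the usual Openbox autostart file location; adjust the flags to taste.)

# ~/.config/openbox/autostart
# Start a compositor in the background so translucency works; -o0 disables window shadows.
xcompmgr -c -o0 &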

That's all I needed: voilà, translucent windows. Now on to the (much) harder part.

A click-through window, in C

X11 has something called the SHAPE extension, which I experimented with once before to make a silly program called moonroot. It's also used for the familiar "xeyes" program. It's used to make windows that aren't square, by passing a shape mask telling X what shape you want your window to be. In theory, I knew I could do something like make a mask where every other pixel was transparent, which would simulate a translucent image, and I'd at least be able to pass clicks through on half the pixels.

But fortunately, first I asked the estimable Openbox guru Mikael Magnusson, who tipped me off that the SHAPE extension also allows for an "input shape" that does exactly what I wanted: lets you catch events on only part of the window and pass them through on the rest, regardless of which parts of the window are visible.

Knowing that was great. Making it work was another matter. Input shapes turn out to be something hardly anyone uses, and there's very little documentation.

In both C and Python, I struggled with drawing onto a pixmap and using it to set the input shape. Finally I realized that there's a call to set the input shape from an X region. It's much easier to build a region out of rectangles than to draw onto a pixmap.

I got a C demo working first. The essence of it was this:

    if (!XShapeQueryExtension(dpy, &shape_event_base, &shape_error_base)) {
        printf("No SHAPE extension\n");
        return;
    }

    /* Make a shaped window, a rectangle smaller than the total
     * size of the window. The rest will be transparent.
     */
    region = CreateRegion(outerBound, outerBound,
                          XWinSize-outerBound*2, YWinSize-outerBound*2);
    XShapeCombineRegion(dpy, win, ShapeBounding, 0, 0, region, ShapeSet);
    XDestroyRegion(region);

    /* Make a frame region.
     * So in the outer frame, we get input, but inside it, it passes through.
     */
    region = CreateFrameRegion(innerBound);
    XShapeCombineRegion(dpy, win, ShapeInput, 0, 0, region, ShapeSet);
    XDestroyRegion(region);

CreateRegion sets up rectangle boundaries, then creates a region from those boundaries:

Region CreateRegion(int x, int y, int w, int h) {
    Region region = XCreateRegion();
    XRectangle rectangle;
    rectangle.x = x;
    rectangle.y = y;
    rectangle.width = w;
    rectangle.height = h;
    XUnionRectWithRegion(&rectangle, region, region);

    return region;
}

CreateFrameRegion() is similar but a little longer. Rather than post it all here, I've created a GIST: transregion.c, demonstrating X11 shaped input.

Next problem: once I had shaped input working, I could no longer move or resize the window, because the window manager passed events through the window's titlebar and decorations as well as through the rest of the window. That's why you'll see that CreateFrameRegion call in the gist: I had a theory that if I omitted the outer part of the window from the input shape, and handled input normally around the outside, maybe that would extend to the window manager decorations. But the problem turned out to be a minor Openbox bug, which Mikael quickly tracked down (in openbox/frame.c, in the XShapeCombineRectangles call on line 321, change ShapeBounding to kind). Openbox developers are the greatest!

Input Shapes in Python

Okay, now I had a proof of concept: X input shapes definitely can work, at least in C. How about in Python?

There's a set of python-xlib bindings, and they even support the SHAPE extension, but they have no documentation and didn't seem to include input shapes. I filed a GitHub issue and traded a few notes with the maintainer of the project. It turned out the newest version of python-xlib had been completely rewritten, and supposedly does support input shapes. But the API is completely different from the C API, and after wasting about half a day tweaking the demo program trying to reverse engineer it, I gave up.

Fortunately, it turns out there's a much easier way. Python-gtk has shape support, even including input shapes. And if you use regions instead of pixmaps, it's this simple:

    if self.is_composited():
        region = gtk.gdk.region_rectangle(gtk.gdk.Rectangle(0, 0, 1, 1))
        self.window.input_shape_combine_region(region, 0, 0)

My transimageviewer.py came out nice and simple, inheriting from imageviewer.py and adding only translucency and the input shape.

If you want to define an input shape based on pixmaps instead of regions, it's a bit harder and you need to use the Cairo drawing API. I never got as far as working code, but I believe it should go something like this:

    # Warning: untested code!
    bitmap = gtk.gdk.Pixmap(None, self.width, self.height, 1)
    cr = bitmap.cairo_create()
    # Draw a white circle in a black rect:
    cr.rectangle(0, 0, self.width, self.height)
    cr.set_operator(cairo.OPERATOR_CLEAR)
    cr.fill();

    # draw white filled circle
    cr.arc(self.width / 2, self.height / 2, self.width / 4,
           0, 2 * math.pi);
    cr.set_operator(cairo.OPERATOR_OVER);
    cr.fill();

    self.window.input_shape_combine_mask(bitmap, 0, 0)

The translucent image viewer worked just as I'd hoped. I was able to take a JPG of a trailmap, overlay it on top of a PyTopo window, scale the JPG using the normal Openbox window manager handles, then right-click on top of trail markers to set waypoints. When I was done, a "Save as GPX" in PyTopo and I had a file ready to take with me on my phone.

April 06, 2017 11:08 PM

April 05, 2017

Jono Bacon

Canonical Refocus

I wrote this on G+, but it seemed appropriate to share it here too:

So, today Canonical decided to refocus their business and move away from convergence and devices. This means that the Ubuntu desktop will move back to GNOME.

I have seen various responses to this news. Some sad that it is the end of an era, and a non-zero amount of “we told you so” smugness.

While Unity didn’t pan out, and there were many good steps and missteps along the way, I am proud that Canonical tried to innovate. Innovation is tough and fraught with risk. The Linux desktop has always been a tough nut to crack, and one filled with an army of voices, but I am proud Canonical gave it a shot even if it didn’t succeed it’s ultimate goals. That spirit of experimentation is at the epicenter of open source, and I hope everyone involved here takes a good look at how they contributed to and exacerbated this innovation. I know I have looked inwards at this.

Much as some critics may deny, everyone I know who worked on Unity and Mir, across engineering, product, community, design, translations, QA, and beyond did so with big hearts and open minds. I just hope we see that talent and passion continue to thrive and we continue to see Ubuntu as a powerful driver for the Linux desktop. I am excited to see how this work manifests in GNOME, which has been doing some awesome work in recent years.

And, Mark, Jane, I know this will have been a tough decision to come to, and this will be a tough day for the different teams affected. Hang in there: Ubuntu has had such a profound impact on open source and while the future path may be a little different, I am certain it will be fruitful.

The post Canonical Refocus appeared first on Jono Bacon.

by Jono Bacon at April 05, 2017 10:12 PM

March 31, 2017

Akkana Peck

Show mounted filesystems

Used to be that you could see your mounted filesystems by typing mount or df. But with modern Linux kernels, all sorts of things are implemented as virtual filesystems -- proc, /run, /sys/kernel/security, /dev/shm, /run/lock, /sys/fs/cgroup -- I have no idea what most of these things are except that they make it much more difficult to answer questions like "Where did that ebook reader mount, and did I already unmount it so it's safe to unplug it?" Neither mount nor df has a simple option to get rid of all the extraneous virtual filesystems and only show real filesystems.

http://unix.stackexchange.com/questions/177014/showing-only-interesting-mount-points-filtering-non-interesting-types had some suggestions that got me started:

mount -t ext3,ext4,cifs,nfs,nfs4,zfs
mount | grep -E --color=never  '^(/|[[:alnum:]\.-]*:/)'
Another answer there says it's better to use findmnt --df, but that still shows all the tmpfs entries (findmnt --df | grep -v tmpfs might do the job).

And real mounts are always mounted on a filesystem path starting with /, so you can do mount | grep '^/'.

But it also turns out that mount will accept a blacklist of types as well as a whitelist: -t notype1,notype2... I prefer the idea of excluding a blacklist of filesystem types versus restricting it to a whitelist; that way if I mount something unusual like curlftpfs that I forgot to add to the whitelist, or I mount a USB stick with a filesystem type I don't use very often (ntfs?), I'll see it.

On my system, this was the list of types I had to disable (sheesh!):

mount -t nosysfs,nodevtmpfs,nocgroup,nomqueue,notmpfs,noproc,nopstore,nohugetlbfs,nodebugfs,nodevpts,noautofs,nosecurityfs,nofusectl

df is easier: like findmnt, it excludes most of those filesystem types to begin with, so there are only a few you need to exclude:

df -hTx tmpfs -x devtmpfs -x rootfs

Obviously I don't want to have to type either of those commands every time I want to check my mount list. So I put this in my .zshrc. If you call mount or df with no args, it applies the filters; otherwise it passes your arguments through. Of course, you could make a similar alias for findmnt.

# Mount and df are no longer useful to show mounted filesystems,
# since they show so much irrelevant crap now.
# Here are ways to clean them up:
mount() {
    if [[ $# -ne 0 ]]; then
        /bin/mount $*
        return
    fi

    # Else called with no arguments: we want to list mounted filesystems.
    /bin/mount -t nosysfs,nodevtmpfs,nocgroup,nomqueue,notmpfs,noproc,nopstore,nohugetlbfs,nodebugfs,nodevpts,noautofs,nosecurityfs,nofusectl
}

df() {
    if [[ $# -ne 0 ]]; then
        /bin/df $*
        return
    fi

    # Else called with no arguments: we want to list mounted filesystems.
    /bin/df -hTx tmpfs -x devtmpfs -x rootfs
}

Update: Chris X Edwards suggests lsblk or lsblk -o 'NAME,MOUNTPOINT'. It wouldn't have solved my problem because it only shows /dev devices, not virtual filesystems like sshfs, but it's still a command worth knowing about.

March 31, 2017 06:25 PM

March 26, 2017

Nathan Haines

Winners of the Ubuntu 17.04 Free Culture Showcase

Spring is here and the release of Ubuntu 17.04 is just around the corner. I've been using it for two weeks and I can't say I'm disappointed! But one new feature that never disappoints me is appearance of the community wallpapers that were selected from the Ubuntu Free Culture Showcase!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. For Ubuntu 17.04, 96 images were submitted to the Ubuntu 17.04 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found.

But now the results are in: the top choices were voted on by certain members of the Ubuntu community, and I'm proud to announce the winning images that will be included in Ubuntu 17.04.

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers (along with dozens of other stunning wallpapers) today at the links above, or in your desktop wallpaper list after you upgrade or install Ubuntu 17.04 on April 13th.

March 26, 2017 08:35 AM

March 25, 2017

Akkana Peck

Reading keypresses in Python

As part of preparation for Everyone Does IT, I was working on a silly hack to my Python script that plays notes and chords: I wanted to use the computer keyboard like a music keyboard, and play different notes when I press different keys. Obviously, in a case like that I don't want line buffering -- I want the program to play notes as soon as I press a key, not wait until I hit Enter and then play the whole line at once. In Unix that's called "cbreak mode".

There are a few ways to do this in Python. The most straightforward way is to use the curses library, which is designed for console based user interfaces and games. But importing curses is overkill just to do key reading.

Years ago, I found a guide on the official Python Library and Extension FAQ: Python: How do I get a single keypress at a time?. I'd even used it once, for a one-off Raspberry Pi project that I didn't end up using much. I hadn't done much testing of it at the time, but trying it now, I found a big problem: it doesn't block.

Blocking is whether the read() waits for input or returns immediately. If I read a character with c = sys.stdin.read(1) but there's been no character typed yet, a non-blocking read will throw an IOError exception, while a blocking read will wait, not returning until the user types a character.

In the code on that Python FAQ page, blocking looks like it should be optional. This line:

fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)
is the part that requests non-blocking reads. Skipping that should let me read characters one at a time, blocking until each character is typed. But in practice, it doesn't work. If I omit the O_NONBLOCK flag, reads never return, not even if I hit Enter; if I set O_NONBLOCK, the read immediately raises an IOError. So I have to call read() over and over, spinning the CPU at 100% while I wait for the user to type something.

The way this is supposed to work is documented in the termios man page. Part of what tcgetattr returns is something called the cc structure, which includes two members called Vmin and Vtime. man termios is very clear on how they're supposed to work: for blocking, single character reads, you set Vmin to 1 (that's the number of characters you want it to batch up before returning), and Vtime to 0 (return immediately after getting that one character). But setting them in Python with tcsetattr doesn't make any difference.

(Python also has a module called tty that's supposed to simplify this stuff, and you should be able to call tty.setcbreak(fd). But that didn't work any better than termios: I suspect it just calls termios under the hood.)

But after a few hours of fiddling and googling, I realized that even if Python's termios can't block, there are other ways of blocking on input. The select system call lets you wait on any file descriptor until it has input. So I should be able to set stdin to be non-blocking, then do my own blocking by waiting for it with select.

And that worked. Here's a minimal example:

import sys, os
import termios, fcntl
import select

fd = sys.stdin.fileno()

# Save the original terminal attributes and flags first, so the reset at the
# end restores the terminal to its real original state.
oldterm = termios.tcgetattr(fd)
oldflags = fcntl.fcntl(fd, fcntl.F_GETFL)

# Turn off canonical (line-buffered) mode and echo.
newattr = termios.tcgetattr(fd)
newattr[3] = newattr[3] & ~termios.ICANON
newattr[3] = newattr[3] & ~termios.ECHO
termios.tcsetattr(fd, termios.TCSANOW, newattr)

# Make stdin non-blocking; select() below provides the blocking.
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags | os.O_NONBLOCK)

print "Type some stuff"
while True:
    inp, outp, err = select.select([sys.stdin], [], [])
    c = sys.stdin.read()
    if c == 'q':
        break
    print "-", c

# Reset the terminal:
termios.tcsetattr(fd, termios.TCSAFLUSH, oldterm)
fcntl.fcntl(fd, fcntl.F_SETFL, oldflags)

A less minimal example: keyreader.py, a class to read characters, with blocking and echo optional. It also cleans up after itself on exit, though most of the time that seems to happen automatically when I exit the Python script.

March 25, 2017 06:42 PM

March 24, 2017

Jono Bacon

My Move to ProsperWorks CRM and Feature Requests

As some of you will know, I am a consultant that helps companies build internal and external communities, collaborative workflow, and teams. Like any consultant, I have different leads that I need to manage, I convert those leads into opportunities, and then I need to follow up and convert them into clients.

Managing my time is one of the most critical elements of what I do. I want to maximize my time to be as valuable as possible, so optimizing this process is key. Thus, the choice of CRM has been an important one. I started with Insightly, but it lacked a key requirement: integration.

I hate duplicating effort. I spend the majority of my day living in email, so when a conversation kicks off as a lead or opportunity, I want to magically move that from my email to the CRM. I want to be able to see and associate conversations from my email in the CRM. I want to be able to see calendar events in my CRM. Most importantly, I don’t want to be duplicating content from one place to another. Sure, it might not take much time, but the reality is that I am just going to end up not doing it.

Evaluations

So, I evaluated a few different platforms, with a strong bias to SalesforceIQ. The main attraction there was the tight integration with my email. The problem with SalesforceIQ is that it is expensive, it has limited integration beyond email, and it gets significantly more expensive when you want more control over your pipeline and reporting. SalesforceIQ has the notion of “lists” where each is kind of like a filtered spreadsheet view. For the basic plan you get one list; beyond that you have to go up a plan, which gives you more lists but also gets much more expensive.

As I courted different solutions I stumbled across ProsperWorks. I had never heard of it, but there were a number of features that I was attracted to.

ProsperWorks

Firstly, ProsperWorks really focuses on tight integration with Google services. Now, a big chunk of my business is using Google services. Prosperworks integrates with Gmail, but also Google Calendar, Google Docs, and other services.

They ship a Gmail plugin which makes it simple to click on a contact and add them to ProsperWorks. You can then create an opportunity from that contact with a single click. Again, this is from my email: this immediately offers an advantage to me.

ProsperWorks CRM

Yep, that’s not my Inbox. It is an image yanked off the Internet.

When viewing each opportunity, ProsperWorks will then show associated Google Calendar events and I can easily attach Google Docs documents or other documents there too. The opportunity is presented as a timeline with email conversations listed there, but then integrated note-taking for meetings, and other elements. It makes it easy to summarize the scope of the deal, add the value, and add all related material. Also, adding additional parties to the deal is simple because ProsperWorks knows about your contacts, as it sucks them up from your Gmail.

While the contact management piece is less important to me, it is also nice that it brings in related accounts for each contact automatically such as Twitter, LinkedIn, pictures, and more. Again, this all reduces the time I need to spend screwing around in a CRM.

Managing opportunities across the pipeline is simple too. I can define my own stages and then it basically operates like Trello and you just drag cards from one stage to another. I love this. No more selecting drop down boxes and having to save contacts. I like how ProsperWorks just gets out of my way and lets me focus on action.

…also not my pipeline view. Thanks again Google Images!

I also love that I can order these stages based on “inactivity”. Because ProsperWorks integrates email into each opportunity, it knows how many inactive days there has been since I engaged with an opportunity. This means I can (a) sort my opportunities based on inactivity so I can keep on top of them easily, and (b) I can set reminders to add tasks when I need to follow up.

ProsperWorks CRM

The focus on inactivity is hugely helpful when managing lots of concurrent opportunities.

As I was evaluating ProsperWorks, there was one additional element that really clinched it for me: the design.

ProsperWorks looks and feels like a Google application. It uses material design, and it is sleek and modern. It doesn’t just look good, but it is smartly designed in terms of user interaction. It is abundantly clear that whoever does the interaction and UX design at ProsperWorks is doing an awesome job, and I hope someone there cuts this paragraph out and shows it to them. If they do, you kick ass!

Of course, ProsperWorks does a load of other stuff that is helpful for teams, but I am primarily assessing this from a sole consultant’s perspective. In the end, I pulled the trigger and subscribed, and I am delighted that I did. It provides a great service, is more cost efficient than the alternatives, provides an integrated solution, and the company looks like they are doing neat things.

Feature Requests

While I dig ProsperWorks, there are some things I would love to encourage the company to focus on. So, ProsperWorks folks, if you are reading this, I would love to see you focus on the following. If some of these already exist, let me know and I will update this post. Consider me a resource here: happy to talk to you about these ideas if it helps.

Wider Google Calendar integration

Currently the gcal integration is pretty neat. One limitation though is that it depends on a gmail.com domain. As such, calendar events where someone invites my jonobacon.com email don’t automatically get added to the opportunity (and dashboard). It would be great to be able to associate another email address with an account (e.g. a gmail.com and jonobacon.com address) so when calendar events have either or both of those addresses they are sucked into opportunities. It would also be nice to select which calendars are viewed: I use different calendars for different things (e.g. one calendar for booked work, one for prospect meetings, one for personal etc). Feature Request Link

It would also be great to have ProsperWorks be able to ease scheduling calendar meetings in available slots. I want to be able to talk to a client about scheduling a call, click a button in the opportunity, and have ProsperWorks suggest four different options for call times; I can select which ones I am interested in and then offer those times to the client, who can pick one. ProsperWorks knows my calendar, so this should be doable, and it would be hugely helpful. Feature Request Link

Improve the project management capabilities

I have a dream. I want my CRM to also offer simple project management capabilities. ProsperWorks does have a ‘projects’ view, but I am unclear on the point of it.

What I would love to see is simple project tracking which integrates (a) the ability to set milestones with deadlines and key deliverables, and (b) Objective Key Results. This would be huge: I could agree on a set of work complete with deliverables as part of an opportunity, and then with a single click be able to turn this into a project where the milestones would be added and I could assign tasks, track notes, and even display a burndown chart to see how on track I am within a project. Feature Request Link

This doesn’t need to be a huge project management system, just a simple way of adding milestones, their child tasks, tracking deliverables, and managing work that leads up to those deliverables. Even if ProsperWorks just adds simple Evernote functionality where I can attach a bunch of notes to a client, this would be hugely helpful.

Optimize or Integrate Task Tracking

Tracking tasks is an important part of my work. The gold standard for task tracking is Wunderlist. It makes it simple to add tasks (not all tasks need deadlines), and I can access them from anywhere.

I would love ProsperWorks to either offer that simplicity of task tracking (hit a key, whack in a title for a task, and optionally add a deadline instead of picking an arbitrary deadline that it nags me about later), or integrate with Wunderlist directly. Feature Request Link

Dashboard Configurability

I want my CRM dashboard to be something I look at every day. I want it to tell me what calendar events I have today, which opportunities I need to follow up with, what tasks I need to complete, and how my overall pipeline is doing. ProsperWorks does some of this, but doesn’t allow me to configure this view. For example, I can’t get rid of the ‘Invite Team Members’ box, which is entirely irrelevant to me as an individual consultant. Feature Request Link

So, all in all, nice work, ProsperWorks! I love what you are doing, and I love how you are innovating in this space. Consider me a resource: I want to see you succeed!

UPDATE: Updated with feature request links.

The post My Move to ProsperWorks CRM and Feature Requests appeared first on Jono Bacon.

by Jono Bacon at March 24, 2017 05:13 PM

March 23, 2017

Jono Bacon

Community Leadership Summit 2017: 6th – 7th May in Austin

The Community Leadership Summit is taking place on the 6th – 7th May 2017 in Austin, USA.

The event brings together community managers and leaders, projects, and initiatives to share and learn how we build strong, engaging, and productive communities. The event takes place the weekend before OSCON in the same venue, the Austin Convention Center. It is entirely FREE to attend and welcomes everyone, whether you are a community veteran or just starting out on your journey!

The event is broken into three key components.

Firstly, we have an awesome set of keynotes this year.

Secondly, the bulk of the event is an unconference where the attendees volunteer session ideas and run them. Each session is a discussion in which the topic is explored, debated, and final conclusions are reached. This results in a hugely diverse range of sessions covering topics such as event management, outreach, social media, governance, collaboration, diversity, building contributor programs, and more. These discussions are incredible for exploring and learning new ideas, meeting interesting people, building a network, and developing friendships.

Finally, we have social events on both evenings where you can meet and network with other attendees. Food and drinks are provided by data.world and Mattermost. Thanks to both for their awesome support!

Join Us

The Community Leadership Summit is entirely FREE to attend. If you would like to join, we would appreciate if you could register (this helps us with expected numbers). I look forward to seeing you there in Austin on the 6th – 7th May 2017!

The post Community Leadership Summit 2017: 6th – 7th May in Austin appeared first on Jono Bacon.

by Jono Bacon at March 23, 2017 04:40 PM

March 22, 2017

Elizabeth Krumbach

Your own Zesty Zapus

As we quickly approach the release of Ubuntu 17.04, Zesty Zapus, coming up on April 13th, you may be thinking of how you can mark this release.

Well, thanks to Tom Macfarlane of the Canonical Design Team you have one more goodie in your toolkit, the SVG of the official Zapus! It’s now been added to the Animal SVGs section of the Official Artwork page on the Ubuntu wiki.

Zesty Zapus

Download the SVG version for printing or using in any other release-related activities from the wiki page or directly here.

Over here, I’m also all ready with the little “zapus” I picked up on Amazon.

Zesty Zapus toy

by pleia2 at March 22, 2017 04:01 AM

March 21, 2017

Elizabeth Krumbach

SCaLE 15x

At the beginning of March I had the pleasure of heading down to Pasadena, California for SCaLE 15x. Just like last year, MJ also came down for work so it was fun syncing up with him here and there between going off to our respective meetings and meals.

I arrived on the evening of March 1st and went out with my co-organizer of the Open Source Infrastructure Day to pick up some supplies for the event. I hope to write up a toolkit for running one of these days based on our experiences and what we needed to buy, but that will have to wait for another day.

March 2nd is when things began properly and we got busy! I spent most of my day running the Open Source Infrastructure day, which I wrote about here on opensource.com: How to grow healthy open source project infrastructures.

I also spent an hour over at the UbuCon Summit giving a talk on Xubuntu which I already blogged about here. Throughout the day I also handled the Twitter accounts for both @OpenSourceInfra and @ubuntu_us_ca. This was deceptively exhausting; by Thursday night I was ready to crash, but we had a dinner to go to! Speakers, organizers and other key folks who were part of our Open Source Infrastructure day were treated to a meal by IBM.


Photo thanks to SpamapS (source)

Keynotes for the conference on Saturday and Sunday were both thoughtful, future-thinking talks about the importance of open source software, culture and methodologies in our world today. On Saturday we heard from Astrophysicist Christine Corbett Moran, who among her varied accomplishments has done research in Antarctica and led security-focused development of the now wildly popular Signal app for iOS. She spoke on the relationships between our development of software and the communities we’re building in the open. There is much to learn and appreciate in this process, but also more that we can learn from other communities. Slides from her talk, amusingly constructed as a series of tweets (some real, most not) are available as a pdf on the talk page.


Christine Corbett Moran on “Open Source Software as Activism”

In Karen Sandler’s keynote she looked at much of what is going on in the United States today and seriously questioned her devotion to free software when it seems like there are so many other important causes out there to fight for. She came back to free software though, since it’s such an important part of every aspect of our lives. As technologists, it’s important for us to continue our commitment to open source and support organizations fighting for it, a video of her talk is already available on YouTube at SCaLE 15x Keynote: Karen Sandler – In the Scheme of Things, How Important is Software Freedom?

A few other talks really stood out for me, Amanda Folson spoke on “10 Things I Hate About Your API” where she drew from her vast experience with hosted APIs to give advice to organizations who are working to build their customer and developer communities around a product. She scrutinized things like sign-up time and availability and complexity of code examples. She covered tooling problems, documentation, reliability and consistency, along with making sure the API is actually written for the user, listening to feedback from users to maintain and improve it, and abiding by best practices. Best of all, she also offered helpful advice and solutions for all these problems! The great slides from her talk are available on the talk page.


Amanda Folson

I also really appreciated VM Brasseur’s talk, “Passing the Baton: Succession planning for FOSS leadership”. I’ll admit right up front that I’m not great at succession planning. I tend to take on a lot in projects and then lack the time to actually train other people because I’m so overwhelmed. I’m not alone in this; succession planning is a relatively new topic in open source projects and only a handful have taken a serious look at it from a high, project-wide level. Key points she made centered around making sure skills for important roles are documented and passed around, and she suggested term limits for key roles. She walked attendees through a process of identifying critical roles and responsibilities in their community, refactoring roles that are really too large for individual contributors, and establishing procedures and processes for knowledge transfer. I think one of the most important things about this talk was that it was less about the “bus factor” worry of losing major contributors unexpectedly and more about how documenting roles makes your project more welcoming to new, and more diverse, contributors. Work is well-scoped, so it’s easy for someone new to come in and help on a small part, and the project has support built in for that.


VM Brasseur

For my part, I gave a talk on “Listening to the Needs of Your Global Open Source Community” where I had a small audience (last talk of the day, against a popular speaker) but an engaged one that had great questions. It’s sometimes nice to have a smaller crowd that allows you to talk to almost all of them after the talk; I even arranged a follow-up lunch meeting with a woman I met who is doing some work similar to what I did for the i18n team in the OpenStack community. Slides from my talk are here (7.4M PDF).

I heard a talk from AB Periasamy of Minio, the open source alternative to AWS S3 that we’re using at Mesosphere for some of our DC/OS demos that need object storage. My friend and open source colleague Nathan Handler also gave a very work-applicable talk on PaaSTA, the framework built by Yelp to support their Apache Mesos-driven infrastructure. I cover both of these talks in more depth in a blog post coming out soon on the dcos.io blog. Edit: The post on the DC/OS blog is now up: Reflecting on SCaLE 15x.

SCaLE 15x remains one of my favorite conferences. Lots of great talks, key people from various segments of open source communities I participate in and great pacing so that you can find time to socialize and learn. Huge thanks to Ilan Rabinovitch who I worked with a fair amount during this event to make sure the Open Source Infrastructure day came together, and to the fleet of volunteers who make this happen every year.

More photos from SCaLE 15x here: https://www.flickr.com/photos/pleia2/albums/72157681016586816

by pleia2 at March 21, 2017 07:53 PM

March 20, 2017

Akkana Peck

Everyone Does IT (and some Raspberry Pi gotchas)

I've been quiet for a while, partly because I've been busy preparing for a booth at the upcoming Everyone Does IT event at PEEC, organized by LANL.

In addition to booths from quite a few LANL and community groups, they'll show the movie "CODE: Debugging the Gender Gap" in the planetarium. I checked out the movie last week (our library has it) and it's a good overview of the problem of diversity, and especially the problems women face in programming jobs.

I'll be at the Los Alamos Makers/Coder Dojo booth, where we'll be showing an assortment of Raspberry Pi and Arduino based projects. We've asked the Coder Dojo kids to come by and show off some of their projects. I'll have my RPi crittercam there (such as it is) as well as another Pi running motioneyeos, for comparison. (Motioneyeos turned out to be remarkably difficult to install and configure, and doesn't seem to do any better than my lightweight scripts at detecting motion without false positives. But it does offer streaming video, which might be nice for a booth.) I'll also be demonstrating cellular automata and the Game of Life (especially since the CODE movie uses Life as a background in quite a few scenes), music playing in Python, a couple of Arduino-driven NeoPixel LED light strings, and possibly an arm-waving penguin I built a few years ago for GetSET, if I can get it working again: the servos aren't behaving reliably, but I'm not sure yet whether it's a problem with the servos and their wiring or a power supply problem.
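
For the curious, here's a minimal sketch of the kind of Game of Life step that the cellular automata demo boils down to (an illustration of the rules, not the actual booth code):

# One step of Conway's Game of Life over a set of live (x, y) cells.
from collections import Counter

def life_step(cells):
    """Return the next generation of live cells."""
    # Count how many live neighbors each nearby cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives in the next generation with exactly 3 neighbors,
    # or with 2 neighbors if it is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# A glider shuffles across the grid one step at a time.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))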

The music playing script turned up an interesting Raspberry Pi problem. The Pi has a headphone output, and initially when I plugged a powered speaker into it, the program worked fine. But then later, it didn't. After much debugging, it turned out that the difference was that I'd made myself a user so I could have my normal shell environment. I'd added my user to the audio group and all the other groups the default "pi" user is in, but the Pi's pulseaudio is set up to allow audio only from users root and pi, and it ignores groups. Nobody seems to have found a way around that, but sudo apt-get purge pulseaudio solved the problem nicely.
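
For context, a music player along these lines (a rough sketch assuming pygame is installed, with a made-up file name; not the actual script) is all it takes to exercise the Pi's audio path and trip over the permissions problem:

import pygame

# Initialize the mixer on the default audio device.
pygame.mixer.init()
pygame.mixer.music.load("song.ogg")   # made-up file name
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():  # wait for playback to finish
    pygame.time.wait(100)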

I also hit a minor snag attempting to upgrade some of my older Raspbian installs: lightdm can't upgrade itself (Errors were encountered while processing: lightdm). Lots of people on the web have hit this, and nobody has found a way around it; the only solution seems to be to abandon the old installation and download a new Raspbian image.

But I think I have all my Raspbian cards installed and working now; pulseaudio is gone, music plays, the Arduino light shows run. Now to play around with servo power supplies and see if I can get my penguin's arms waving again when someone steps in front of him. Should be fun, and I can't wait to see the demos the other booths will have.

If you're in northern New Mexico, come by Everyone Does IT this Tuesday night! It's 5:30-7:30 at PEEC, the Los Alamos Nature Center, and everybody's welcome.

March 20, 2017 06:29 PM

March 19, 2017

Elizabeth Krumbach

Simcoe’s January and March 2017 Checkups

Simcoe has had a few checkups since I last wrote in October. First was a regular checkup in mid-December, where I brought her in on my own and had to start thinking about how we’re going to keep her weight up. The next step will be a feeding tube, and we really don’t want to go down that path with a cat who has never even been able to tolerate a collar. Getting her to take fiber was getting to be stressful for all of us, so the doctor prescribed Lactulose to be taken daily to handle constipation. Medication for a kitty facing renal failure is always a dicey option, but the constipation was clearly painful for her and causing her to vomit more. We started with that slowly. We skipped the blood work with this visit since we were aiming to get it done again in January.

On January 7th she was not doing well and was brought in for an emergency visit to make sure she didn’t pass into crisis with her renal failure. Blood work was done then and we had to get more serious about making sure she stays regular and keeps eating. Still, her weight started falling more dramatically at this point, with her dropping below 8 lbs for the first time since she was diagnosed in 2011, landing her at a worrying 7.58. Her BUN level had gone from 100 in October to 141, CRE from 7.0 to 7.9.

At the end of January she went in for her regular checkup. We skipped the blood work since it had just been done a couple weeks before. We got a new, more concentrated formulation of Mirtazapine to stimulate her appetite, since MJ had discovered that putting the liquid dosage into a capsule that she could swallow without tasting any of it was the only possible way we could get her to take it. The Calcitriol she was taking daily was also reformulated. We had to leave town unexpectedly for a week in early February, which she wasn’t at all happy with, but since then I’ve been home with her most of the time, so she seems to have perked up a bit and, after dipping in weight, is doing tolerably well.

When we brought her into the vet on March 11th her weight came in at a low 6.83 lbs. The lowest weight she’d ever had was 6.09 when she was first diagnosed and not being treated at all, so she wasn’t down to her all-time low. Still, dropping below 7 pounds is troubling, especially since it has happened so rapidly.

The exam went well though; the vet continues to be surprised at how well she’s doing outwardly in spite of her weight and blood work. Apparently some cats just handle the condition better than others. Simcoe is a lucky, tough kitty.


Evidence of the blood draw!

I spoke with the vet this morning now that the blood work has come back. Her phosphorus and calcium levels are not at all where we want them to be. Her CRE is up from 7.9 to 10.5, BUN went from 141 to 157. Sadly, these are pretty awful levels; her daily 100 ml of subcutaneous fluids are really what is keeping her going at this point.

With this in mind, as of today we’ve suspended use of the Calcitriol and switched the Atopica she’s taking for allergies to every other day. We’re only continuing with the Mirtazapine, Lactulose and subcutaneous fluids. I’m hoping that the reduction in medications she’s taking each day will stress her body and mind less, leading to a happier kitty even as her kidneys continue in their decline. I hope she’s not in a lot of pain day to day; she does still vomit a couple times a week, I know her constipation isn’t fully addressed by the medication, and she is still quite thirsty all the time. We can’t increase her fluids dosage since there’s only so much she can absorb in a day, and it would put stress on her heart (she has a slight heart murmur). Keeping her weight up remains incredibly important, with the vet pretty much writing off dietary restrictions and saying she can eat as much of whatever she likes (turkey prepared for humans? Oh yes!).

Still, day to day we’re mostly having a fun cat life over here. We sent our laundry out while the washer was broken recently, and the clothes came back bundled in strings that Simcoe had a whole evening of fun with. I picked up a laser pointer recently that she played with for a bit before figuring it out; she just stares at my hand now when I use it, but at least Caligula still enjoys it! And in the evenings when I carve out some time to read or watch TV, it’s pretty common for her to camp out on my lap.

by pleia2 at March 19, 2017 10:09 PM

March 13, 2017

Eric Hammond

Incompatible: Static S3 Website With CloudFront Forwarding All Headers

a small lesson learned in setting up a static web site with S3 and CloudFront

I created a static web site hosted in an S3 bucket named www.example.com (not the real name) and enabled accessing it as a website. I wanted delivery to be fast to everybody around the world, so I created a CloudFront distribution in front of the S3 bucket.

I wanted S3 to automatically add “index.html” to URLs ending in a slash (CloudFront can’t do this), so I configured the CloudFront distribution to access the S3 bucket as a web site using www.example.com.s3-website-us-east-1.amazonaws.com as the origin server.

Before sending all of the www.example.com traffic to the new setup, I wanted to test it, so I added test.example.com to the list of CNAMEs in the CloudFront distribution.

After setting up Route53 so that DNS lookups for test.example.com would resolve to the new CloudFront endpoint, I loaded it in my browser and got the following error:

404 Not Found

Code: NoSuchBucket
Message: The specified bucket does not exist
BucketName: test.example.com
RequestId: [short string]
HostId: [long string]

Why would AWS be trying to find an S3 bucket named test.example.com? That was pointing at the CloudFront distribution endpoint, and CloudFront was configured to get the content from www.example.com.s3-website-us-east-1.amazonaws.com

After debugging, I found out that the problem was that I had configured the CloudFront distribution to forward “all” HTTP headers. I thought that this would be a sneaky way to turn off caching in CloudFront so that I could keep updating the content in S3 and not have to wait to see the latest changes.

However, this also means that CloudFront was forwarding the HTTP Host header from my browser to the S3 website handler. When S3 saw that I was requesting the host of test.example.com it looked for a bucket of the same name and didn’t find it, resulting in the above error.

When I turned off forwarding all HTTP headers in CloudFront, it then started sending through the correct header:

Host: www.example.com.s3-website-us-east-1.amazonaws.com

which S3 correctly interpreted as accessing the correct S3 bucket www.example.com in the website mode (adding index.html after trailing slashes).
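
To make the behavior concrete, here's a small sketch (reusing the made-up hostnames above) that sends the same request to the S3 website endpoint with two different Host headers, reproducing the routing decision S3 makes depending on whether CloudFront forwards the browser's Host header:

import http.client

ORIGIN = "www.example.com.s3-website-us-east-1.amazonaws.com"

def fetch(host_header):
    # Connect to the S3 website endpoint, but send an explicit Host header,
    # which is what CloudFront does on our behalf.
    conn = http.client.HTTPConnection(ORIGIN, 80, timeout=10)
    conn.request("GET", "/", headers={"Host": host_header})
    resp = conn.getresponse()
    print(host_header, "->", resp.status, resp.reason)
    conn.close()

# Forwarding the browser's Host header: S3 looks for a bucket named
# test.example.com and fails with NoSuchBucket.
fetch("test.example.com")

# Not forwarding it: S3 sees its own website hostname and serves the
# www.example.com bucket in website mode.
fetch(ORIGIN)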

It makes sense for CloudFront to support forwarding the Host header from the browser, especially when your origin server is a dynamic web site that can act on the original hostname. You can set up a wildcard *.example.com DNS entry pointing at your CloudFront distribution, and have the back end server return different results depending on what host the browser requested.

However, passing the Host header doesn’t work so well for an origin server S3 bucket in website mode. Lesson learned and lesson passed on.

Original article and comments: https://alestic.com/2017/03/cloudfront-s3-host-header/

March 13, 2017 09:30 PM

March 10, 2017

Akkana Peck

At last! A roadrunner!

We live in what seems like wonderful roadrunner territory. For the three years we've lived here, we've hoped to see a roadrunner, and have seen them a few times at neighbors' places, but never in our own yard.

Until this morning. Dave happened to be looking out the window at just the right time, and spotted it in the garden. I grabbed the camera, and we watched it as it came out from behind a bush and went into stalk mode.

[Roadrunner stalking]

And it caught something!

[close-up, Roadrunner with fence lizard] We could see something large in its bill as it triumphantly perched on the edge of the garden wall, before hopping off and making a beeline for a nearby juniper thicket.

It wasn't until I uploaded the photo that I discovered what it had caught: a fence lizard. Our lizards only started to come out of hibernation about a week ago, so the roadrunner picked the perfect time to show up.

I hope our roadrunner decides this is a good place to hang around.

March 10, 2017 09:33 PM

Elizabeth Krumbach

Ubuntu at SCaLE15x

On Thursday, March 2nd I spent most of the day running an Open Source Infrastructure Day, but across the way my Ubuntu friends were kicking off the first day of the second annual UbuCon Summit at SCaLE. The first day included a keynote from Carl Richell of System76, where they made product announcements, including their new Galago Pro laptop and their Starling Pro ARM server. The talk following came from Nextcloud, with the day continuing with talks from Aaron Atchison and Karl Fezer on the Mycroft AI, José Antonio Rey on Getting to know Juju: From zero to deployed in minutes, and Amber Graner sharing the wisdom that You don’t need permission to contribute to your own destiny.

I ducked out of the Open Source Infrastructure Day in the mid-afternoon to give my talk, 10 Years of Xubuntu. This is a talk I’d been thinking about for some time, and I began by walking folks through the history of the Xubuntu project. From there I spoke about where it sits in the Ubuntu community as a recognized flavor, and then on to the specific strategies the team has employed to motivate the completely volunteer-driven team.

When it came to social media accounts, we didn’t create them all ourselves, instead relying upon existing accounts on Facebook, G+ and LinkedIn that we promoted to being official ones, keeping the original volunteers in place and just giving access to a core Xubuntu team member in case they couldn’t continue running them. It worked out for all of us: we had solid contributors passionate about their specific platforms and excited to be made official, and as long as they kept the accounts running we didn’t need to expend core team resources on them. We’ve also worked to collect user stories in order to motivate current contributors, since it means a lot to see their work being used by others. I’ve also placed a great deal of value on the Xubuntu Strategy Document, which has set the guiding principles of the project and allowed us to steer the ship through difficult decisions. Slides from the talk are available here: 10_years_of_Xubuntu_UbuCon_Summit.pdf (1.9M).

Thursday evening I met with my open source infrastructure friends for dinner, but afterwards swung by Porto Alegre to catch some folks for evening drinks and snacks. I had a really nice chat with Nathan Haines, who co-organized the UbuCon Summit.

On Friday I was able to attend the first keynote! Michael Hall gave a talk titled Sponsored by Canonical where he dove deep into Ubuntu history to highlight Canonical’s role in the support of the project, from the early focus on desktop Linux to the move into devices and the cloud. His talk was followed by one from Sergio Schvezov on Snaps. The afternoon was spent as an unconference, with the Ubuntu booth starting up in the expo hall at 2PM.

The weekend was all about the Ubuntu booth. Several volunteers staffed it Friday through Sunday.

They spent the event showing off the Ubuntu Phone, Mycroft AI, and several laptops.

It was also great to once again meet up with one of my co-authors for the 9th edition of The Official Ubuntu Book, José Antonio Rey. Our publisher sent a pile of books to give out at the event, some of which we gave out during our talks, and a couple more at the booth.

by pleia2 at March 10, 2017 05:39 AM

Work, wine, open source and… survival

So far 2017 has proven to be quite the challenge, but let’s hold off on all that until the end.

As I’ve mentioned in a couple of recent posts, I started a new job in January, joining Mesosphere to move up the stack to work on containers and focus on application deployments. It’s the first time I’ve worked for a San Francisco startup, and so far I’ve been having a lot of fun working with really smart people who are doing interesting work that’s on the cutting edge of what companies are doing today. Aside from travel for work, I’ve spent most of my time these first couple months in the office getting to know everyone. Now, we all know that offices aren’t my thing, but I have enjoyed the catered breakfasts and lunches, dog-friendly environment and ability to meet with colleagues in person as I get started.

I’ve now started going in mostly just for meetings, with productivity much higher when I can work from home like I have for the past decade. My team is working on outreach and defining open source strategies, helping with slide decks, guides and software demos. All stuff I’m finding great value in. As I start digging deeper into the tech I’m finding myself once again excited about work I’m doing and building things that people are using.

Switching gears into the open source work I still do for fun, I’ve started to increase my participation with Xubuntu again, just recently wrapping up the #LoveXubuntu Competition. At SCaLE15x last week I gave a Xubuntu presentation, which I’ll write about in a later post. Though I’ve stepped away from the Ubuntu Weekly Newsletter just recently, I did follow through with ordering and shipping stickers off to winners of our issue 500 competition.

I’ve also put a nice chunk of my free time into promoting Open Source Infrastructure work. In addition to a website that now has a huge list of infras thanks to various contributors submitting merge proposals via GitLab, I worked with a colleague from IBM to run a whole open source infra event at SCaLE15x. Though we went into it with a lot of uncertainty, we came out the other end having had a really successful event and excitement from a pile of new people.

It hasn’t been all work though. In spite of a mounting to do list, sometimes you just need to slow down.

At the beginning of February MJ and I spent a Saturday over at the California Historical Society to see their Vintage: Wine, Beer, and Spirits Labels from the Kemble Collections on Western Printing and Publishing exhibit. It’s just around the corner from us, so it allowed for a lovely hour of taking a break after a Saturday brunch to peruse various labels spanning wine, beer and spirits from a designer and printer in California during the first half of the 20th century. The collection was of mass-production labels; there was nothing artisanal about them and no artists signing their names, but it did capture a place in time and I’m a sucker for early 20th century design. It was a fascinating collection, beautifully curated like their exhibits always are, and I’m glad we made time to see it.

More photos from the exhibit are up here: https://www.flickr.com/photos/pleia2/albums/72157676346542394

At the end of February we noted our need to pick up our quarterly wine club subscription at Rutherford Hill. In what was probably our shortest trip up to Napa, we enjoyed a noontime brunch at Harvest Table in St. Helena. We picked up some Charbay hop-flavored whiskey, stopped by the Heitz Cellar tasting room where we picked up a bottle of my favorite Zinfandel and then made our way to Rutherford Hill to satisfy the real goal of our trip. Upon arrival we were pleased to learn that a members’ wine-tasting event was being held in the caves, where they had a whole array of wines to sample along with snacks and cheeses. Our wine adventures ended with this stop and we made a relatively early trek south, in the direction of home.

A few more photos from our winery jaunt are here: https://www.flickr.com/photos/pleia2/albums/72157677743529104

Challenge-wise, here we go. Starting a new job means a lot of time spent learning, while I also have had to hit the ground running. We worked our way through a death in the family last month. I’ve been away from home a lot, and generally we’ve been doing a lot of running around to complete all the adult things related to life. Our refrigerator was replaced in December and in January I broke one of the shelves, resulting in a spectacular display of tomato sauce all over the floor. Weeks later our washing machine started acting up and overflowed (thankfully no damage done in our condo); we have our third repair visit booked and hopefully it’ll be properly fixed on Monday.

I spent the better part of January recovering from a severe bout of bronchitis that had lasted three months, surviving antibiotics, steroids and two types of inhalers. MJ is continuing to nurse a broken bone in his foot, transitioning from an air cast to shoe-based aids, but there’s still pain and uncertainty around whether it’ll heal properly without surgery. Simcoe is not doing well; she is well into the final stages of renal failure. We’re doing the best we can to keep her weight up and make sure she’s happy, but I fear the end is rapidly approaching and I’m not sure how I’ll cope with it. I also lurked in the valley of depression for a while in February.

We’re also living in a very uncertain political climate here in the United States. I’ve been seeing people I care about being placed in vulnerable situations. I’m finding myself deeply worried every time I browse the news or social media for too long. I never thought that in 2017 I’d be reading from a cousin who was evacuated from a Jewish center due to a bomb threat, or have to check to make sure the cemetery in Philadelphia that was desecrated wasn’t one that my relatives were in. A country I’ve loved and been proud of for my whole life, through so many progressive changes in recent years, has been transformed into something I don’t recognize. I have friends and colleagues overseas cancelling trips and moves here because they’re afraid of being turned away or otherwise made to feel unwelcome. I’m thankful for my fellow citizens who are standing up against it and organizations like the ACLU who have vowed to keep fighting; I just can’t muster the strength for it right now.

Right now we have a lot going on, and though we’re both stressed out and tired, we aren’t actively handling any crisis at the moment. I feel like I finally have a tiny bit of breathing room. These next two weekends will be spent catching up on tasks and paperwork. I’m planning on going back to Philadelphia for a week at the end of the month to start sorting through my mother-in-law’s belongings and hopefully wrap up sorting of things that belonged to MJ’s grandparents. I know a fair amount of heartache awaits me in these tasks, but we’ll be in a much better place to move forward once I’ve finished. Plus, though I’ll be working each day, I will be making time to visit with friends while I’m there and that always lifts my spirits.

by pleia2 at March 10, 2017 02:35 AM

March 05, 2017

Akkana Peck

The Curious Incident of the Junco in the Night-Time

Dave called from an upstairs bedroom. "You'll probably want to see this."

He had gone up after dinner to get something, turned the light on, and been surprised by an agitated junco, chirping and fluttering on the sill outside the window. It evidently was trying to fly through the window and into the room. Occasionally it would flutter backward to the balcony rail, but no further.

There's a piñon tree whose branches extend to within a few feet of the balcony, but the junco ignored the tree and seemed bent on getting inside the room.

As we watched, hoping the bird would calm down, instead it became increasingly more desperate and stressed. I remembered how, a few months earlier, I opened the door to a deck at night and surprised a large bird, maybe a dove, that had been roosting there under the eaves. The bird startled and flew off in a panic toward the nearest tree. I had wondered what happened to it -- whether it had managed to find a perch in the thick of a tree in the dark of night. (Unlike San Jose, White Rock gets very dark at night.)

And that thought solved the problem of our agitated junco. "Turn the porch light on", I suggested. Dave flipped a switch, and the porch light over the deck illuminated not only the deck where the junco was, but the nearest branches of the nearby piñon.

Sure enough, now that it could see the branches of the tree, the junco immediately turned around and flew to a safe perch. We turned the porch light back off, and we heard no more from our nocturnal junco.

March 05, 2017 06:27 PM

February 27, 2017

Nathan Haines

UbuCon Summit Comes to Pasadena this Week!

UbuCon SCALE 14x group photo

Once again, UbuCon Summit will be hosted by the Southern California Linux Expo in Pasadena, California on March 2nd and 3rd. UbuCon Summit is two days that celebrate Ubuntu and the community, and this year has some excitement in store.

Thursday's keynote will feature Carl Richell, the CEO and founder of System 76, a premium source of Ubuntu desktop and laptop computers. In his keynote, entitled "Acrylic, Aluminum, Thumb Screws, and Heavy Machinery at System 76," he will share how System 76 is reinventing what it means to be a computer manufacturer, and talk about how they are changing the relationship between users and their devices. Don't miss this fascinating peek behind the scenes of a computer manufacturer that focuses on Ubuntu, and keep your ears peeled because they are announcing new products during the keynote!

We also have community member Amber Graner who will share her inspiring advice on how to forge a path to success with her talk "You Don't Need Permission to Contribute to Your Own Destiny," and Elizabeth Joseph who will talk about her 10 years in the Xubuntu community.

Thursday will wrap up with our traditional open Ubuntu Q&A panel where you can ask us your burning questions about Ubuntu, and Friday will see a talk from Michael Hall, "Sponsored by Canonical" where he describes the relationship between Canonical and Ubuntu and how it's changed, and Sergio Schvezov will describe Ubuntu's next-generation packaging format in "From Source to Snaps." After a short break for lunch and the expo floor, we'll be back for four unconference sessions, where attendees will come together to discuss the Ubuntu topics that matter most to them.

Ubuntu will be at booth 605 during the Southern California Linux Expo's exhibition floor hours from Friday through Sunday. You'll be able to see the latest version of Ubuntu, see how it works with touchscreens, laptops, phones, and embedded devices, and get questions answered by both community and Canonical volunteers at the booth.

Come for UbuCon, stay for SCALE! This is a weekend not to be missed!

SCALE 15x, the 15th Annual Southern California Linux Expo, is the largest community-run Linux/FOSS showcase event in North America. It will be held from March 2-5 at the Pasadena Convention Center in Pasadena, California. For more information on the expo, visit https://www.socallinuxexpo.org

February 27, 2017 01:24 AM

February 26, 2017

Elizabeth Krumbach

My mother-in-law

On Monday, February 6th MJ’s mother passed away.

She had been ill over the holidays and we had the opportunity to visit with her in the hospital a couple times while we were in Philadelphia in December. Still, with her move to a physical rehabilitation center in recent weeks I thought she was on the mend. Learning of her passing was quite the shock, and it hasn’t been easy. No arrangements had been made for her passing, so for the few hours following her death we notified family members and scrambled to select a cemetery and funeral home. Given the distance and our situations at work (I was about to leave for a conference and MJ had commitments as well) we decided to meet in Philadelphia at the end of the week and take things from there.

MJ and I met at the townhouse in Philadelphia on Saturday and began the week of work we needed to do to put her to rest. Selecting a plot in the cemetery, organizing her funeral, selecting a headstone. A lot of this was new for both of us. While we both have experienced loss in our families, most of these arrangements had already been made for the passing of our other family members. Thankfully everyone we worked with was kind and compassionate, and even when we weren’t sure of specifics, they had answers to fill in the blanks. We also spent time that week moving out her apartment and started the process of handling her estate. Her brother flew into town and stayed in the guest room of our town house, which we were suddenly grateful we had made time to set up on a previous trip.

We held her funeral on February 15th and she was laid to rest surrounded by a gathering of close family and friends. We had clear, beautiful weather as we gathered graveside to say goodbye. Her obituary can be found here.

There’s still a lot to do to finish handling her affairs and it’s been hard for me, but I’m incredibly thankful for friends, family and colleagues who have been so understanding as we’ve worked through this. We’re very grateful for the time we were able to spend with her. When she was well, we enjoyed countless dinners together and of course she joined us to celebrate at our wedding back in 2013. Even recently over the holidays in spite of her condition it was nice to have some time together. She will be missed.

by pleia2 at February 26, 2017 05:49 PM

February 25, 2017

Elizabeth Krumbach

Moving on from the Ubuntu Weekly Newsletter

Somewhere around 2010 I started getting involved with the Ubuntu News Team. My early involvement was limited to the Ubuntu Fridge team, where I would help post announcements from various teams, including the Ubuntu Community Council that I was part of. With Amber Graner at the helm of the Ubuntu Weekly Newsletter (UWN) I focused my energy elsewhere since I knew how much work the UWN editor position was at the time.

Ubuntu Weekly Newsletter

At the end of 2010 Amber stepped down from the team to pursue other interests, and with no one to fill her giant shoes the team entered a five month period of no newsletters. Finally in June, after being contacted numerous times about the fate of the newsletter, I worked with Nathan Handler to revive it so we could release issue 220. Our first job was to do an analysis of the newsletter as a whole. What was valuable about the newsletter and what could we do away with to save time? What could we automate? We decided to make some changes to reduce the amount of manual work put into it.

To this end, we stopped including monthly reports inline, and began linking to upcoming meeting and event details rather than reproducing them in the newsletter itself. There was also a considerable amount of automation done thanks to Nathan’s work on scripts. No more would we be generating any of the release formats by hand; they’d all be generated with a single command, ready to be cut and pasted. Release time every week went from over two hours to about 20 minutes in the hands of an experienced editor. Our next editor would have considerably less work than those who came before them. From then on I’d say I’ve been pretty heavily involved.

500

The 500th issue lands on February 27th; this is an exceptional milestone for the team and the Ubuntu community. It is deserving of celebration, and we’ve worked behind the scenes to arrange a contest and a simple way for folks to say “thanks” to the team. We’ve also reached out to a handful of major players in the community to tell us what they get from the newsletter.

With the landing of this issue, I will have been involved with over 280 issues over 8 years. Almost every week in that time (I did skip a couple weeks for my honeymoon!) I’ve worked to collect Ubuntu news from around the community and internet, prepare it for our team of summary writers, move content to the wiki for our editors, and spend time on Monday doing the release. Over these years I’ve worked with several great contributors to keep the team going, rewarding contributors with all the thanks I could muster and even a run of UWN stickers specifically made for contributors. I’ve met and worked with some great people during this time, and I’m incredibly proud of what we’ve accomplished over these years and the quality we’ve been able to maintain with article selection and timely releases.

But all good things must come to an end. Several months ago as I was working on finding the next step in my career with a new position, I realized how much my life and the world of open source had changed since I first started working on the newsletter. Today there are considerable demands on my time, and while I hung on to the newsletter, I realized that I was letting other exciting projects and volunteer opportunities pass me by. At the end of October I sent a private email to several of the key contributors letting them know I’d conclude my participation with issue 500. That didn’t quite happen, but I am looking to actively wind down my participation starting with this issue and hope that others in the community can pick up where I’m leaving off.

UWN stickers

I’ll still be around the community, largely focusing my efforts on Xubuntu directly. Folks can reach out to me as they need help moving forward, but the awesome UWN team will need more contributors. Contributors collect news, write summaries and do editing, you can learn more about joining here. If you have questions about contributing, you can join #ubuntu-news on freenode and say hello or drop an email to our team mailing list (public archives).

by pleia2 at February 25, 2017 02:57 AM

February 24, 2017

Akkana Peck

Coder Dojo: Kids Teaching Themselves Programming

We have a terrific new program going on at Los Alamos Makers: a weekly Coder Dojo for kids, 6-7 on Tuesday nights.

Coder Dojo is a worldwide movement, and our local dojo is based on their ideas. Kids work on programming projects to earn colored USB wristbelts, with the requirements for belts getting progressively harder. Volunteer mentors are on hand to help, but we're not lecturing or teaching, just coaching.

Despite not much advertising, word has gotten around and we typically have 5-7 kids on Dojo nights, enough that all the makerspace's Raspberry Pi workstations are filled and we sometimes have to scrounge for more machines for the kids who don't bring their own laptops.

A fun moment early on came when we had a mentor meeting, and Neil, our head organizer (who deserves most of the credit for making this program work so well), looked around and said "One thing that might be good at some point is to get more men involved." Sure enough -- he was the only man in the room! For whatever reason, most of the programmers who have gotten involved have been women. A refreshing change from the usual programming group. (Come to think of it, the PEEC web development team is three women. A girl could get a skewed idea of gender demographics, living here.) The kids who come to program are about 40% girls.

I wondered at the beginning how it would work, with no lectures or formal programs. Would the kids just sit passively, waiting to be spoon fed? How would they get concepts like loops and conditionals and functions without someone actively teaching them?

It wasn't a problem. A few kids have some prior programming practice, and they help the others. Kids as young as 9 with no previous programming experience walk in, sit down at a Raspberry Pi station, and after five minutes of being shown how to bring up a Python console and use Python's turtle graphics module to draw a line and turn a corner, they're happily typing away, experimenting and making Python draw great colorful shapes.

Python-turtle turns out to be a wonderful way for beginners to learn. It's easy to get started, it makes pretty pictures, and yet, since it's Python, it's not just training wheels: kids are using a real programming language from the start, and they can search the web and find lots of helpful examples when they're trying to figure out how to do something new (just like professional programmers do. :-)
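
For a sense of what those first five minutes look like, a few lines of turtle (a made-up example, not an actual Dojo exercise) are enough to get something colorful on screen:

import turtle

t = turtle.Turtle()
for color in ("red", "orange", "green", "blue"):
    t.pencolor(color)
    t.forward(100)   # draw a line
    t.left(90)       # turn a corner

turtle.done()        # keep the window open until it's closed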

Initially we set easy requirements for the first (white) belt: attend for three weeks, learn the names of other Dojo members. We didn't require any actual programming until the second (yellow) belt, which required writing a program with two of three elements: a conditional, a loop, a function.

That plan went out the window at the end of the first evening, when two kids had already fulfilled the yellow belt requirements ... even though they were still two weeks away from the attendance requirement for the white belt. One of them had never programmed before. We've since scrapped the attendance belt, and now the white belt has the conditional/loop/function requirement that used to be the yellow belt.

The program has been going for a bit over three months now. We've awarded lots of white belts and a handful of yellows (three new ones just this week). Although most of the kids are working in Python, there are also several playing music or running LED strips using Arduino/C++, writing games and web pages in Javascript, writing adventure games in Scratch, or just working through Khan Academy lectures.

When someone is ready for a belt, they present their program to everyone in the room and people ask questions about it: what does that line do? Which part of the program does that? How did you figure out that part? Then the mentors review the code over the next week, and they get the belt the following week.

For all but the first belt, helping newer members is a requirement, though I suspect even without that they'd be helping each other. Sit a first-timer next to someone who's typing away at a Python program and watch the magic happen. Sometimes it feels almost superfluous being a mentor. We chat with the kids and each other, work on our own projects, shoulder-surf, and wait for someone to ask for help with harder problems.

Overall, a terrific program, and our only problems now are getting funding for more belts and more workstations as the word spreads and our Dojo nights get more crowded. I've had several adults ask me if there was a comparable program for adults. Maybe some day (I hope).

February 24, 2017 08:46 PM

February 20, 2017

Elizabeth Krumbach

Adventures in Tasmania

Last month I attended my third Linux.conf.au, this time in Hobart, Tasmania, I wrote about the conference here and here. In an effort to be somewhat recovered from jet lag for the conference and take advantage of the trip to see the sights, I flew in a couple days early.

I arrived in Hobart after a trio of flights on Friday afternoon. It was incredibly windy, so much so that they warned people when deplaning onto the tarmac (no jet ways at the little Hobart airport) to hold tightly on to their belongings. But speaking of the weather for a moment, January is the middle of summer in the southern hemisphere, so I prepare for brutal heat when I visit Australia at this time. But Hobart? They were enjoying beautiful, San Francisco-esque weather: sunny and comfortably in the 60s every day. The sun was still brutal though; the thinner ozone that far south means that I burned after being in the sun for a couple days, even after applying strong sunblock.


Beautiful view from my hotel room

On Saturday I didn’t make any solid plans, just in case there was a problem with my flights or I was too tired to go out. I lucked out though, and took the advice of many who suggested I visit Mona – Museum of Old and New Art. In spite of being tired, I was encouraged to go by the museum’s good reviews, plus learning that you could take a ferry directly there and that a nearby brewery featured its beers at the eateries around the museum.

I walked to the ferry terminal from the hotel, which was just over a mile with some surprising hills along the way as I took the scenic route along the bay and through some older neighborhoods. I also walked past Salamanca Market that is set up every Saturday. I passed on the wallaby burritos and made my way to the ferry terminal. There it was quick and easy to buy my ferry and museum tickets.

Ferry rides are one of my favorite things, and the views on this one made the journey to the museum a lot of fun.

The ferry drops you off at a dock specifically for the museum. Since it was nearly noon and I was in need of nourishment, I walked up past the museum and explored the areas around the wine bar. They had little bars set up that opened at noon and allowed you to get a bottle of wine or some beers and enjoy the beautiful weather on chairs and bean bags placed around a large grassy area. On my own for this adventure, I skipped drinking on the grass and went up to enjoy lunch at the wonderful restaurant on site, The Source. I had a couple beers and discovered Tasmanian oysters. Wow. These wouldn’t be the last ones on my trip.

After lunch it was incredibly tempting to spend the entire afternoon snacking on cheese and wine, but I had museum tickets! So it was down to the museum to spend a couple hours exploring.

I’m not the biggest fan of modern art, so a museum mixing old and new art was an interesting choice for me. As I began to walk through the exhibits, I realized that it would have been great to have MJ there with me. He does enjoy newer art, so the museum would have had a little bit for each of us. There were a few modern exhibits that I did enjoy though, including Artifact which I took a video of: “Artifact” at the Museum of Old and New Art, Hobart (warning: strobe lights!).

Outside the museum I also walked around past a vineyard on site, as well as some beautiful outdoor structures. I took a bunch more photos before the ferry took me back to downtown Hobart. More photos from Mona here: https://www.flickr.com/photos/pleia2/albums/72157679331777806

It was late afternoon when I returned to the Salamanca area of Hobart and though the Market was closing down, I was able to take some time to visit a few shops. I picked up a small pen case for my fountain pens made of Tasmanian Huon Pine and a native Sassafras. That evening I spent some time in my room relaxing and getting some writing done before dinner with a couple open source colleagues who had just gotten into town. I turned in early that night to catch up on some sleep I missed during flights.

And then it was Sunday! As fun as the museum adventure was, my number one goal with this trip was actually to pet a new Australian critter. Last year I attended the conference in Geelong, not too far from Melbourne, and did a similar tourist trip. On that trip I got to feed kangaroos, pet a koala and see hundreds of fairy penguins return to their nests from the ocean at dusk. Topping that day wasn’t actually possible, but I wanted to do my best in Tasmania. I booked a private tour with a guide for the Sunday to take me up to the Bonorong Wildlife Sanctuary.

My tour guide was a very friendly woman who owns a local tour company with her husband. She was accommodating, and similar in age to me, making for a comfortable journey. The tour included a few stops, but started with Bonorong. We had about an hour there to visit the standing exhibits before the wombat-petting tour began. All the enclosures were populated by rescued wildlife that were either being rehabilitated or were too permanently injured for release. I had my first glimpse at Tasmanian devils running around (I’d seen some in Melbourne, but they were all sleeping!). I also got to see a tawny frogmouth, which is a bird that looks a bit like an owl, and the three-legged Randall the echidna, a spiky creature from one of the few egg-laying mammal species. I also took some time to commune with kangaroos and wallabies, picking up a handful of food to feed my new, bouncy friends.


Feeding a kangaroo, tiny wombat drinking from a bottle, pair of wombats, Tasmanian devil

And then there were the baby wombats. I saw my first wombat at the Perth Zoo four years ago and was surprised at how big they are. Growing to be a meter in length in Tasmania, wombats are hefty creatures and I got to pet one! At 11:30 they did a keeper talk and then allowed folks gathered to give one of the babies (about 9 months old) a quick pat. In a country of animals that have fur that’s more wiry and wool-like than you might expect (on kangaroos, koalas), the baby wombats are surprisingly soft.


Wombat petting mission accomplished.

The keeper talks continued with opportunities to pet a koala and visit some Tasmanian devils, but having already done these things I hit the gift shop for some post cards and then went to the nearby Richmond Village.

More photos from Bonorong Wildlife Sanctuary, Tasmania here: https://www.flickr.com/photos/pleia2/albums/72157679331734466

I enjoyed a meat pie lunch in the cute downtown of Richmond before continuing our journey to visit the oldest continuously operating Catholic church in all of Australia (not just Tasmania!), St John’s. It was built in 1836. Just a tad bit older, we also got to visit the oldest bridge, built in 1823. The bridge is surrounded by a beautiful park, making for a popular picnic and play area on days like the beautiful one we had while there. On the way back, we stopped at the Wicked Cheese Co. where I got to sample a variety of cheeses and pick up some Whiskey Cheddar to enjoy later in the week. A final stop at Rosny Hill rounded out the tour. It gave some really spectacular views of the bay and across to Hobart, I could see my hotel from there!

Sunday evening I met up with a gaggle of OpenStack friends for some Indian food back in the main shopping district of Hobart.

That wrapped up my real touristy part of my trip, as the week continued with the conference. However there were some treats still to be enjoyed! I had a whole bunch of Tasmanian cider throughout the week and as I had promised myself, more oysters! The thing about the oysters in Tasmania is that they’re creamy and they’re big. A mouthful of delicious.

I loved Tasmania, I hope I can make it back some day. More photos from my trip here: https://www.flickr.com/photos/pleia2/albums/72157677692771201

by pleia2 at February 20, 2017 06:47 PM

February 18, 2017

Akkana Peck

Highlight and remove extraneous whitespace in emacs

I recently got annoyed with all the trailing whitespace I saw in files edited by Windows and Mac users, and in code snippets pasted from sites like StackOverflow. I already had my emacs set up to indent with only spaces:

(setq-default indent-tabs-mode nil)
(setq tabify nil)
and I knew about M-x delete-trailing-whitespace ... but after seeing someone else who had an editor set up to show trailing spaces, and tabs that ought to be spaces, I wanted that too.

To show trailing spaces is easy, but it took me some digging to find a way to control the color emacs used:

;; Highlight trailing whitespace.
(setq-default show-trailing-whitespace t)
(set-face-background 'trailing-whitespace "yellow")

I also wanted to show tabs, since code indented with a mixture of tabs and spaces, especially if it's Python, can cause problems. That was a little harder, but I eventually found it on the EmacsWiki: Show whitespace:

;; Also show tabs.
(defface extra-whitespace-face
  '((t (:background "pale green")))
  "Color for tabs and such.")

(defvar bad-whitespace
  '(("\t" . 'extra-whitespace-face)))

;; Register the keywords with font-lock so the face actually shows up,
;; e.g. for programming modes:
(add-hook 'prog-mode-hook
          (lambda () (font-lock-add-keywords nil bad-whitespace)))

While I was figuring this out, I got some useful advice related to emacs faces on the #emacs IRC channel: if you want to know why something is displayed in a particular color, put the cursor on it and type C-u C-x = (the command what-cursor-position with a prefix argument), which displays lots of information about whatever's under the cursor, including its current face.

Once I had my colors set up, I found that a surprising number of files I'd edited with vim had trailing whitespace. I would have expected vim to be better behaved than that! But it turns out that to eliminate trailing whitespace, you have to program it yourself. For instance, here are some recipes to Remove unwanted spaces automatically with vim.

February 18, 2017 11:41 PM

February 17, 2017

Elizabeth Krumbach

Spark Summit East 2017

“Do you want to go to Boston in February?”

So began my journey to Boston to attend the recent Spark Summit East 2017, joining my colleagues Kim, Jörg and Kapil to participate in the conference and meet attendees at our Mesosphere booth. I’ve only been to a handful of single-technology events over the years, so it was an interesting experience for me.


Selfie with Jörg!

The conference began with a keynote by Matei Zaharia which covered some of the major successes in the Apache Spark world in 2016, from the release of version 2.0 with structured streaming, to the growth in community-driven meetups. As the keynotes continued, two trends came into clear focus:

  1. Increased use of Apache Spark with streaming data
  2. Strong desire to do data processing for artificial intelligence (AI) and machine learning

It was really fascinating to hear about all the AI and machine learning work being done from companies like Salesforce developing customized products to genetic data analysis by way of the Hail project that will ultimately improve and save lives. Work is even being done by Intel to improve hardware and open source tooling around deep learning (see their BigDL project on GitHub).

In perhaps my favorite keynote of the conference, we heard from Mike Gualtieri of Forrester, where he presented the new “age of the customer” with a look toward very personalized, AI-driven learning about customer behavior, intent and more. He went on to use the term “pragmatic AI” to describe what we’re aiming for: an intelligence that’s good enough to succeed at whatever it’s put to. However, his main push for this talk was how much opportunity there is in this space. Companies and individuals skilled at processing massive amounts of data, AI, and deep and machine learning can make a real impact in a variety of industries. Video and slides from this keynote are available here.


Mike Gualtieri on types of AI we’re looking at today

I was also impressed by how strong the open source assumption was at this conference. All of these universities, corporations, hardware manufacturers and more are working together to build platforms to do all of this data processing work, and they’re open sourcing them.

While at the event, Jörg gave a talk on Powering Predictive Mapping at Scale with Spark, Kafka, and Elastic Search (slides and videos at that link). In this he used DC/OS to give a demo based on NYC cab data.

At the booth the interest in open source was also strong. I’m working on DC/OS in my new role, and the fact that folks could hit the ground running with our open source version, and get help on mailing lists and Slack was in sync with their expectations. We were able to show off demos on our laptops and in spite of only having just over a month at the company under my belt, I was able to answer most of the questions that came my way and learned a lot from my colleagues.


The Mesosphere booth included DC/OS hand warmers!

We had a bit of non-conference fun at the conference as well, Kapil took us out Wednesday night to the L.A. Burdick chocolate shop to get some hot chocolate… on ice. So good. Thursday the city was hit with a major snow storm, dumping 10 inches on us throughout the day as we spent our time inside the conference venue. Flights were cancelled after noon that day, but thankfully I had no trouble getting out on my Friday flight after lunch with my friend Deb who lives nearby.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157680153926395

by pleia2 at February 17, 2017 10:29 PM

February 15, 2017

Elizabeth Krumbach

Highlights from LCA 2017 in Hobart

Earlier this month I attended my first event while working as a DC/OS Developer Advocate over at Mesosphere. My talk on Listening to the needs of your global open source community was accepted before I joined the company, but this kind of listening is precisely what I need to be doing in this new role, so it fit nicely.

Work also gave me some goodies to bring along! So I was able to hand some out as I chatted with people about my new role, and left piles of stickers and squishy darts on the swag table throughout the week.

The topic of the conference this year was the future of open source. It led to an interesting series of keynotes, ranging from the hopeful and world-changing words from Pia Waugh about how technologists could really make a difference in her talk, Choose Your Own Adventure, Please!, to the Keeping Linux Great talk by Robert M. “r0ml” Lefkowitz that ended up imploring the audience to examine their values around the traditional open source model.

Pia’s keynote was a lot of fun, walking us through human history to demonstrate that our values, tools and assumptions are entirely of our own making, and able to be changed (indeed, they have been!). She asked us to continually challenge our assumptions about the world around us and what we could change. She encouraged thinking beyond our own spaces, like how 3D printers could solve scarcity problems in developing nations or what faster travel would do to transform the world. As a room of open source enthusiasts who make small changes to change the world every day, being the creators and innovators of the world, there’s always more we can do and strive for: curing the illness rather than just scratching the itch, aiming for systemic change. I really loved the positive message of this talk; I think a lot of attendees walked out feeling empowered and hopeful. Plus, she had a brilliant human change.log that demonstrated how we as humans have made some significant changes in our assumptions through the millennia.


Pia Waugh’s human change.log

The keynote by Dan Callahan on Wednesday morning on Designing for Failure explored the failure of Mozilla’s Persona project, and key things he learned from it. He walked through some key lessons:

  1. Free licenses are not enough, your code can’t be tied to proprietary infrastructure
  2. Bits rot more quickly online; an out-of-date desktop application is usually at much lower risk, and endangers fewer people, than a service running on the web
  3. Complexity limits agency, people need to be able to have the resources, system and time to try out and run your software

He went on to give tips about what to do to prolong project life, including making sure you have metrics and are measuring the right things for your project, explicitly defining your scope so the team doesn’t get spread too thin or try to solve the wrong problems, and ruthlessly opposing complexity, since that makes it harder to maintain and for others to get involved.

Finally, he had some excellent points for how to assist the survival of your users when a project does finally fail:

  1. If you know your project is dead (funding pulled, etc), say so, don’t draw things out
  2. Make sure your users can recover without your involvement (have a way to extract data, give them an escape path infrastructure-wise)
  3. Use standard data formats to minimize the migration harm when organizations have to move on

It was really great hearing lessons from this. I know how painful it is to see a project you’ve put a lot of work into die, and the ability to not only move on in a healthy way but bring those lessons to a whole community during a keynote like this was commendable.

Thursday’s keynote by Nadia Eghbal was an interesting one that I haven’t seen a lot of public discussion around, Consider the Maintainer. In it she talked about the work that goes into being a maintainer of a project, which she defined as someone who is doing the work of keeping a project going: looking at the contributions coming in, actively responding to bug reports and handling any other interactions. This is a discussion that came up from time to time on some projects I’ve recently worked on where we were striving to prevent scope creep. How can we manage the needs of our maintainers who are sticking around, with the desire for new contributors to add features that benefit them? It’s a very important question that I was thrilled to see her talk about. To help address this, she proposed a twist on The Four Essential Freedoms of Software as defined by the FSF: The Four Freedoms of Open Source Producers. They were:

  • The freedom to decide who participates in your community
  • The freedom to say no to contributions or requests
  • The freedom to define the priorities and policies of the project
  • The freedom to step down or move on from a project, temporarily or permanently

The speaker dinner was beautiful and delicious, taking us up to Frogmore Creek Winery. There was a radio telescope in the background and the sunset over the vineyard was breathtaking. Plus, great company.

Other talks I went to trended toward fun and community-focused topics. On Monday there was a WOOTConf; the entire playlist from the event is here. I caught a nice handful of talks, starting with Human-driven development, where aurynn shaw spoke about some of the toxic behaviors in our technical spaces, primarily how everyone is expected to know everything and that asking questions is not always acceptable. She implored us to work to make asking questions easier and more accepted, and to ask your own team questions about what they need.

I learned about a couple of websites in a talk by Kate Andrews on Seeing the big picture – using open source images: TinEye Reverse Image Search, to help find the source of an image so you can give credit, and sites like Unsplash where you can find freely licensed photos, in addition to various Creative Commons searches. Brenda Wallace’s Let’s put wifi in everything was a lot of fun, as she walked through various pieces of inexpensive hardware and open source tooling for building sensors to automate all kinds of little things around the house. I also enjoyed the talk by Kris Howard, Knit One, Compute One, where very strong comparisons were made between computer programming and knitting patterns, and a talk by Grace Nolan on Condensed History of Lock Picking.

For my part, I gave a talk on Listening to the Needs of Your Global Open Source Community. This is similar to the talk I gave at FOSSCON back in August, where I walked through experiences I had in Ubuntu and OpenStack projects, along with in person LUGs and meetups. I had some great questions at the end, and I was excited to learn VM Brasseur was tweeting throughout and created a storify about it! The slides from the talk are available as a PDF here.


Thanks to VM Brasseur for the photo during my talk, source

The day concluded with Rikki Endsley’s Mamas Don’t Let Your Babies Grow Up to Be Rock Star Developers, which I really loved. She talked about the tendency to put “rock star” in job descriptions for developers, but when going through the traits of rock stars these weren’t actually what you want on your team. The call was for more Willie Nelson developers, and we were treated to a quick biography of Willie Nelson. In it she explained how he helped others, was always learning new skills, made himself available to his fans, and would innovate and lead. I also enjoyed that he actively worked to collaborate with a diverse mix of people and groups.

As the conference continued, I learned about the great work being done at Whare Hauora from Brenda Wallace and Amber Craig, and heard from Josh Simmons about building communities outside of major metropolitan areas, where he advocated for multidisciplinary meetups. Allison Randal spoke about the ways that open source accelerates innovation, and Karen Sandler dove into what happens to our software when we die in a presentation punctuated by pictures of baby Tasmanian Devils to cheer us up. I also heard Chris Lamb give us the status of the Reproducible Builds project, and then Hamish Coleman on the work he’s done replacing ThinkPad keyboards and reverse engineering the tooling.

The final day wound down with a talk by VM (Vicky) Brasseur on working inside a company to support open source projects, where she talked about types of communities, the importance of having a solid open source plan, and quickly covered some of the most common pitfalls within companies.

This conference remains one of my favorite open source conferences in the world, and I’m very glad I was able to attend again. It’s great meeting up with all my Australian and New Zealand open source colleagues, along with some of the usual suspects who attend many of the same conferences I do. Huge thanks to the organizers for making it such a great conference.

All the videos from the conference were uploaded very quickly to YouTube and are available here: https://www.youtube.com/user/linuxconfau2017/videos

More photos from the conference at https://www.flickr.com/photos/pleia2/sets/72157679331149816/

by pleia2 at February 15, 2017 01:09 AM

February 13, 2017

Akkana Peck

Emacs: Initializing code files with a template

Part of being a programmer is having an urge to automate repetitive tasks.

Every new HTML file I create should include some boilerplate HTML, like <html><head></head><body></body></html>. Every new Python file I create should start with #!/usr/bin/env python, and most of them should end with an if __name__ == "__main__": clause. I get tired of typing all that, especially the dunderscores and slash-greater-thans.

Long ago, I wrote an emacs function called newhtml to insert the boilerplate code:

(defun newhtml ()
  "Insert a template for an empty HTML page"
  (interactive)
  (insert "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n"
          "<html>\n"
          "<head>\n"
          "<title></title>\n"
          "</head>\n\n"
          "<body>\n\n"
          "<h1></h1>\n\n"
          "<p>\n\n"
          "</body>\n"
          "</html>\n")
  (forward-line -11)
  (forward-char 7)
  )

The motion commands at the end move the cursor back to point in between the <title> and </title>, so I'm ready to type the page title. (I should probably have it prompt me, so it can insert the same string in title and h1, which is almost always what I want.)

That has worked for quite a while. But when I decided it was time to write the same function for python:

(defun newpython ()
  "Insert a template for an empty Python script"
  (interactive)
  (insert "#!/usr/bin/env python\n"
          "\n"
          "\n"
          "\n"
          "if __name__ == '__main__':\n"
          "\n"
          )
  (forward-line -4)
  )
... I realized that I wanted to be even more lazy than that. Emacs knows what sort of file it's editing -- it switches to html-mode or python-mode as appropriate. Why not have it insert the template automatically?

My first thought was to have emacs run the function upon loading a file. There's a function with-eval-after-load which supposedly can act based on file suffix, so something like (with-eval-after-load ".py" (newpython)) is documented to work. But I found that it was never called, and couldn't find an example that actually worked.

But then I realized that I have mode hooks for all the programming modes anyway, to set up things like indentation preferences. Inserting some text at the end of the mode hook seems perfectly simple:

(add-hook 'python-mode-hook
          (lambda ()
            (electric-indent-local-mode -1)
            (font-lock-add-keywords nil bad-whitespace)
            (if (= (buffer-size) 0)
                (newpython))
            (message "python hook")
            ))

The (= (buffer-size) 0) test ensures this only happens if I open a new file. Obviously I don't want to be auto-inserting code inside existing programs!

HTML mode was a little more complicated. I edit some files, like blog posts, that use HTML formatting, and hence need html-mode, but they aren't standalone HTML files that need the usual HTML template inserted. For blog posts, I use a different file extension, so I can use the elisp string-suffix-p to test for that:

  ;; string-suffix-p is like Python's endswith
  (if (and (= (buffer-size) 0)
           (string-suffix-p ".html" (buffer-file-name)))
      (newhtml) )

I may eventually find other files that don't need the template; if I need to, it's easy to add other tests, like the directory where the new file will live.

A nice timesaver: open a new file and have a template automatically inserted.

February 13, 2017 04:52 PM

February 09, 2017

Jono Bacon

HackerOne Professional, Free for Open Source Projects

For some time now I have been working with HackerOne to help them shape and grow their hacker community. It has been a pleasure working with the team: they are doing great work, have fantastic leadership (including my friend, Mårten Mickos), are seeing consistent growth, and recently closed a $40 million round of funding. It is all systems go.

For those of you unfamiliar with HackerOne, they provide a powerful vulnerability coordination platform and a global community of hackers. Put simply, a company or project (such as Starbucks, Uber, GitHub, the US Army, etc.) invites hackers to hack their products/services to find security issues, and HackerOne provides a platform for the submission, coordination, dupe detection, and triage of these issues, along with other related functionality.

You can think of HackerOne in two pieces: a powerful platform for managing security vulnerabilities and a global community of hackers who use the platform to make the Internet safer and, in many cases, make money. This effectively crowd-sources security using the same “given enough eyeballs, all bugs are shallow” principle from open source: with enough eyeballs, all security issues are shallow too.

HackerOne and Open Source

HackerOne unsurprisingly are big fans of open source. The CEO, Mårten Mickos, has led a number of successful open source companies including MySQL and Eucalyptus. The platform itself is built on top of chunks of open source, and HackerOne is a key participant in the Internet Bug Bounty program that helps to ensure core pieces of technology that power the Internet are kept secure.

One of the goals I have had in my work with HackerOne is to build an even closer bridge between HackerOne and the open source community. I am delighted to share the next iteration of this.

HackerOne for Open Source Projects

While not formally announced yet (this is coming soon), I am pleased to share the availability of HackerOne Community Edition.

Put simply, HackerOne is providing their HackerOne Professional service for free to open source projects.

This provides features such as a security page, vulnerability submission/coordination, duplicate detection, hacker reputation, a comprehensive API, analytics, CVEs, and more.

This not only provides a great platform for open source projects to gather vulnerability reports and manage them, but also opens your project up to thousands of security researchers who can help identify security issues and make your code more secure.

Which projects are eligible?

To be eligible for this free service projects need to meet the following criteria:

  1. Open Source projects – projects in scope must only be Open Source projects that are covered by an OSI license.
  2. Be ready – projects must be active and at least 3 months old (age is defined by shipped releases/code contributions).
  3. Create a policy – you add a SECURITY.md in your project root that provides details for how to submit vulnerabilities (example).
  4. Advertise your program – display a link to your HackerOne profile from either the primary or secondary navigation on your project’s website.
  5. Be active – you maintain an initial response time to new reports of less than a week.

If you meet these criteria and would like to apply, just see the HackerOne Community Edition page and click the button to apply.

Of course, let me know if you have any questions!

The post HackerOne Professional, Free for Open Source Projects appeared first on Jono Bacon.

by Jono Bacon at February 09, 2017 10:20 PM

February 06, 2017

Elizabeth Krumbach

Rogue One and Carrie Fisher

Back in December I wasn’t home in San Francisco very much. Most of my month was spent back east at our townhouse in Philadelphia, and I spent a few days in Salt Lake City for a conference, but the one week I was in town was the week that Rogue One: A Star Wars Story came out! I was traveling to Europe when tickets went on sale, but fortunately for me our local theater swapped most of its screens over to show the film on opening night. I was able to snag tickets once I realized they were on sale.

And that’s how I continued my tradition of seeing all the new films (1-3, 7) opening night! MJ and I popped over to the Metreon, just a short walk from home, to see it. For this showing I didn’t do IMAX or 3D or anything fancy, just a modern AMC theater and a late night showing.

The movie was great. They did a really nice job of looping the story in with the past films and preserving the feel of Star Wars for me, which was absent in the prequels that George Lucas made. Clunky technology, the good guys achieving victories in the face of incredible odds and yet, quite a bit of heartbreak. Naturally, I saw it a second time later in the month while staying in Philadelphia for the holidays. It was great the second time too!

My hope is that the quality of the films will remain high while in the hands of Disney, and I’m really looking forward to The Last Jedi coming out at the end of this year.

Alas, the year wasn’t all good for a Star Wars fan like me. Back in August we lost Kenny Baker, the man behind my beloved R2-D2. Then on December 23rd we learned that Carrie Fisher had a heart attack on a flight from London. On December 27th she passed away.

Now, I am typically not one to write about the death of a celebrity on my blog. It’s pretty rare that I’m upset about the death of a celebrity at all. But this was Carrie Fisher. She was not on my radar for passing (only 60!) and she is the actress who played one of my all-time favorite characters, in case it wasn’t obvious from the domain name this blog is on.

The character of Princess Leia impacted my life in many ways, and at age 17 caused me to choose PrincessLeia2 (PrincessLeia was taken), and later pleia2, as my online handle. She was a princess of a mysterious world that was destroyed. She was a strong character who didn’t let people get in her way as she covertly assisted, then openly joined, the rebel alliance because of what she believed in. She was also a character who showed considerable kindness and compassion. In the Star Wars universe, and in the 1980s when I was a kid, she was often a shining beacon of what I aspired to. Her reprise of the character, returning as General Leia Organa in Episode VII, brought me to tears. I have a figure of her on my desk.


Halloween 2005, Leia costume!

The character she played aside, she was also a champion of de-stigmatizing mental illness. I have suffered from depression for over 20 years and have worked to treat my condition with over a dozen doctors, from primary care to neurologists and psychiatrists. Still, I haven’t found an effective medication-driven treatment that won’t conflict with my other atypical neurological conditions (migraines and seizures). Her outspokenness on the topic of both mental illness and the difficulty of treating it even when you have access to resources was transformational for me. I had a guilt lifted from me about not being “better” in spite of my access to treatment, and became generally more inclined to tackle the topic of mental illness in public.

Her passing was hard for me.

I was contacted by BBC Radio 5 Live on the day she passed away and interviewed by Chris Warburton for their show that would air the following morning. They reached out to me as a known fan, asking me what her role as Leia Organa meant to me growing up, about her critical view of the celebrity world, and then about her work in the space of mental illness. It meant a lot that they reached out to me, but I was also pained by what it brought up; it turns out that the day of her passing was the one day in my life I didn’t feel like talking about her work and legacy.

It’s easier today as I reflect upon her impact. I’m appreciative of the character she brought to life for me. Appreciative of the woman she became and shared in so many memorable, funny and self-deprecating books, which line my shelves. Thank you, Carrie Fisher, for being such an inspiration and an advocate.

by pleia2 at February 06, 2017 08:17 AM

Akkana Peck

Rosy Finches

Los Alamos is having an influx of rare rosy-finches (which apparently are supposed to be hyphenated: they're rosy-finches, not finches that are rosy).

[Rosy-finches] They're normally birds of the snowy high altitudes, like the top of Sandia Crest, and quite unusual in Los Alamos. They're even rarer in White Rock, and although I've been keeping my eyes open I haven't seen any here at home; but a few days ago I was lucky enough to be invited to the home of a birder in town who's been seeing great flocks of rosy-finches at his feeders.

There are four types, of which three have ever been seen locally, and we saw all three. Most of the flock was brown-capped rosy-finches, with two each of black rosy-finches and gray-capped rosy-finches. The upper bird at right, I believe, is one of the blacks, but it might be a gray-capped. They're a bit hard to tell apart. In any case, pretty birds, sparrow sized with nice head markings and a hint of pink under the wing, and it was fun to get to see them.

[Roadrunner] The local roadrunner also made a brief appearance, and we marveled at the combination of high-altitude snowbirds and a desert bird here at the same place and time. White Rock seems like much better roadrunner territory, and indeed they're sometimes seen here (though not, so far, at my house), but they're just as common up in the forests of Los Alamos. Our host said he only sees them in winter; in spring, just as they start singing, they leave and go somewhere else. How odd!

Speaking of birds and spring, we have a juniper titmouse determinedly singing his ray-gun song, a few house sparrows are singing sporadically, and we're starting to see cranes flying north. They started a few days ago, and I counted several hundred of them today, enjoying the sunny and relatively warm weather as they made their way north. Ironically, just two weeks ago I saw a group of about sixty cranes flying south -- very late migrants, who must have arrived at the Bosque del Apache just in time to see the first northbound migrants leave. "Hey, what's up, we just got here, where ya all going?"

A few more photos: Rosy-finches (and a few other nice birds).

We also have a mule deer buck frequenting our yard, sometimes hanging out in the garden just outside the house to drink from the heated birdbath while everything else is frozen. (We haven't seen him in a few days, with the warmer weather and most of the ice melted.) We know it's the same buck coming back: he's easy to recognize because he's missing a couple of tines on one antler.

The buck is a welcome guest now, but in a month or so when the trees start leafing out I may regret that as I try to find ways of keeping him from stripping all the foliage off my baby apple tree, like some deer did last spring. I'm told it helps to put smelly soap shavings, like Irish Spring, in a bag and hang it from the branches, and deer will avoid the smell. I will try the soap trick but will probably combine it with other measures, like a temporary fence.

February 06, 2017 02:39 AM

January 28, 2017

Nathan Haines

We're looking for Ubuntu 17.04 wallpapers right now!

Ubuntu is a testament to the power of sharing, and we use the default selection of desktop wallpapers in each release as a way to celebrate the larger Free Culture movement. Talented artists across the globe create media and release it under licenses that don't simply allow, but cheerfully encourage sharing and adaptation. This cycle's Free Culture Showcase for Ubuntu 17.04 is now underway!

We're halfway to the next LTS, and we're looking for beautiful wallpaper images that will literally set the backdrop for new users as they use Ubuntu 17.04 every day. Whether on the desktop, phone, or tablet, your photo or illustration can be the first thing Ubuntu users see whenever they are greeted by the ubiquitous Ubuntu welcome screen or access their desktop.

Submissions will be handled via Flickr at the Ubuntu 17.04 Free Culture Showcase - Wallpapers group, and the submission window begins now and ends on March 5th.

More information about the Free Culture Showcase is available on the Ubuntu wiki at https://wiki.ubuntu.com/UbuntuFreeCultureShowcase.

I'm looking forward to seeing the 10 photos and 2 illustrations that will ship on all graphical Ubuntu 17.04-based systems and devices on April 13th!

January 28, 2017 08:08 AM

January 27, 2017

Akkana Peck

Making aliases for broken fonts

A web page I maintain (originally designed by someone else) specifies Times font. On all my Linux systems, Times displays impossibly tiny, at least two sizes smaller than any other font that's ostensibly the same size. So the page is hard to read. I'm forever tempted to get rid of that font specifier, but I have to assume that other people in the organization like the professional look of Times, and that this pathologic smallness of Times and Times New Roman is just a Linux font quirk.

In that case, a better solution is to alias it, so that pages that use Times will choose some larger, more readable font on my system. How to do that was in this excellent, clear post: How To Set Default Fonts and Font Aliases on Linux .

It turned out Times came from the gsfonts package, while Times New Roman came from msttcorefonts:

$ fc-match Times
n021003l.pfb: "Nimbus Roman No9 L" "Regular"
$ dpkg -S n021003l.pfb
gsfonts: /usr/share/fonts/type1/gsfonts/n021003l.pfb
$ fc-match "Times New Roman"
Times_New_Roman.ttf: "Times New Roman" "Normal"
$ dpkg -S Times_New_Roman.ttf
dpkg-query: no path found matching pattern *Times_New_Roman.ttf*
$ locate Times_New_Roman.ttf
/usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf
(dpkg -S doesn't find the file because msttcorefonts is a package that downloads a bunch of common fonts from Microsoft. Debian can't distribute the font files directly due to licensing restrictions.)

Removing gsfonts fonts isn't an option; aside from some documents and web pages possibly not working right (if they specify Times or Times New Roman and don't provide a fallback), removing gsfonts takes gnumeric and abiword with it, and I do occasionally use gnumeric. And I like having the msttcorefonts installed (hey, gotta have Comic Sans! :-) ). So aliasing the font is a better bet.

Following Chuan Ji's page, linked above, I edited ~/.config/fontconfig/fonts.conf (I already had one, specifying fonts for the fantasy and cursive web families), and added these stanzas:

    <match>
        <test name="family"><string>Times New Roman</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>
    <match>
        <test name="family"><string>Times</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>

The page says to log out and back in, but I found that restarting firefox was enough. Now I could load up a page that specified Times or Times New Roman and the text is easily readable.

January 27, 2017 09:47 PM

January 26, 2017

Elizabeth Krumbach

CLSx at LCA 2017

Last week I was in Hobart, Tasmania for LCA 2017. I’ll write broader blog post about the whole event soon, but I wanted to take some time to write this focused post about the CLSx (Community Leadership Summit X) event organized by VM Brasseur. I’d been to the original CLS event at OSCON a couple times, first in 2013 and again in 2015. This was the first time I was attending a satellite event, but with VM Brasseur at the helm and a glance at the community leadership talent in the room I knew we’d have a productive event.


VM Brasseur introduces CLSx

The event began with an introduction to the format and the schedule. As an unconference, the topics at CLS events are brainstormed by, and the schedule organized by, the attendees. It started with people in the room sharing topics they’d be interested in, and then we worked through the list to combine topics and reduce it down to just 9 topics:

  • Non-violent communication for diffusing charged situations
  • Practical strategies for fundraising
  • Rewarding community members
  • Reworking old communities
  • Increasing diversity: multi-factor
  • Recruiting a core
  • Community metrics
  • Community cohesion: retention
  • How to Participate When You Work for a Corporate Vendor

Or, if you’d rather, the whiteboard of topics!

The afternoon was split into four sessions, three of which were used to discuss the topics, with three topics being covered simultaneously by separate groups in each session slot. The final session of the day was reserved for the wrap-up of the event where participants shared summaries of each topic that was discussed.

The first session I participated in was the one I proposed, on Rewarding Community Members. The first question I asked the group was whether we should reward community members at all, just to make sure we were all starting with the same ideas. This quickly transitioned into what counts as a reward, were we talking physical gifts like stickers and t-shirts? Or recognition in the community? Some communities “reward” community members by giving them free or discounted entrance to conferences related to the project, or discounts on services with partners.

Simple recognition of work was a big topic for this session. We spent some time talking about how we welcome community members. Does your community have a mechanism for welcoming, even if it’s automated? Or is there a more personal touch to reaching out? We also covered whether projects have a path to go from new contributor to trusted committer, or the “internal circle” of a project, noting that if that path doesn’t exist, it could be discouraging to new contributors. Gamification was touched upon as a possible way to recognize contributors in a more automated fashion, but it was clear that you want to reward certain positive behaviors and not focus so strictly on statistics that can be cheated without bringing any actual value to the project or community.

What I found most valuable in this session was learning some of the really successful tips for rewards. It was interesting how far the personal touch goes when sending physical rewards to contributors, like including a personalized note along with stickers. It was also clear that metrics are not the full story: in every community the leaders, evangelists and advocates need to be very involved so they can identify contributors in a more qualitative way in order to recognize or reward them. Maybe someone is particularly helpful and friendly, or is making contributions in ways that are not easily tracked by solid metrics. The one warning here was to avoid personal bias: make sure you aren’t being more critical of contributions from minorities in your community, or ignoring folks who don’t boast about their contributions; this happens a lot.

Full notes from Rewarding Contributors, thanks go to Deirdré Straughan for taking notes during the session.

The next session brought me to a gathering to discuss Community Building, Cohesion and Retention. I’ve worked in very large open source communities for over a decade now, and as I embark on my new role at Mesosphere where the DC/OS community is largely driven by paid contributors from a single company today, I’m very much interested in making sure we work to attract more outside contributors.

One of the big topics of this session was the fragmentation of resources across platforms (mailing lists, Facebook, IRC, Slack, etc) and how we have very little control over this. Pulling from my own experience, we saw this in the Xubuntu user community where people would create unofficial channels on various resources, and so as an outreach team we had to seek these users out and begin engaging with them “where they lived” on these platforms. One of the things I learned from my work here, was that we could reduce our own burden by making some of these “unofficial” resources into official resources, thus having an official presence but leaving the folks who were passionate about the platform and community there in control, though we did ask for admin credentials for one person on the Xubuntu team to help with the bus factor.

Some other tips to building cohesion were making sure introductions were done during meetings and in person gatherings so that newcomers felt welcome, or offering a specific newcomer track so that no one felt like they were the only new person in the room, which can be very isolating. Similarly, making sure there were communication channels available before in-person events could be helpful to getting people comfortable with a community before meeting. One of the interesting proposals was also making sure there was a more official, announce-focused channel for communication so that people who were loosely interested could subscribe to that and not be burdened with an overly chatty communication channel if they’re only interested in important news from the community.

Full notes from Community building, cohesion and retention, with thanks to Josh Simmons for taking notes during this session.


Thanks to VM Brasseur for this photo of our community building, cohesion and retention session (source)

The last session of the day I attended was around Community Metrics and held particular interest for me as the team I’m on at Mesosphere starts drilling down into community statistics for our young community. One of the early comments in this session is that our teams need to be aware that metrics can help drive value for your team within a company and in the project. You should make sure you’re collecting metrics and that you’re measuring the right things. It’s easy for those of us who are more technically inclined to “geek out” over numbers and statistics, which can lead to gathering too much data and drawing conclusions that may not necessarily be accurate.

Some attendees found value in surveys of community members, which was interesting for me to learn. I haven’t had great luck with surveys, but it was suggested that making sure people know why they should spend their time replying, and how the information will be used to improve things, makes them more inclined to participate. It was also suggested to have staggered surveys targeted at specific contributors: perhaps one survey for newcomers, and another targeted at people who have succeeded in becoming core contributors about the process challenges they’ve faced. Surveys also help gather some of the more qualitative data that is essential for properly tracking the health of a community. It’s not just numbers.

Specifically drilling down into value to the community, the following approaches beyond surveys were found to be helpful:

  • Less focus on individuals and specific metrics in a silo, instead looking at trends and aggregations
  • Visitor count to the web pages on your site and specific blog posts
  • Metrics about community diversity in terms of number of organizations contributing, geographic distribution and human metrics (gender, race, age, etc) since all these types of diversity have proven to be indicators of project and team success.
  • Recruitment numbers linked to contributions, whether it’s how many people your company hires from the community or that companies in general do if the project has many companies involved (recruitment is expensive, you can bring real value here)

The consensus in the group was that it was difficult to correlate metrics like retweets, GitHub stars and other social media metrics to sales, so even though there may be value with regard to branding and excitement about your community, they may not help much to justify the existence of your team within a company. We didn’t talk much about metrics gathering tools, but I was OK with this, since it was nice to get a more general view into what we should be collecting rather than how.

Full notes from Community Metrics, which we can thank Andy Wingo for.

The event concluded with the note-taker from each group giving a five minute summary of what we talked about in each group. This was the only recorded portion of the event, you can watch it on YouTube here: Community Leadership Summit Summary.

Discussion notes from all the sessions can be found here: https://linux.conf.au/wiki/conference/miniconfs/clsx_at_lca/#wiki-toc-group-discussion-notes.

I really got a lot out of this event, and I hope others gained from my experience and perspectives as well. Huge thanks to the organizers and everyone who participated.

by pleia2 at January 26, 2017 02:58 AM

January 24, 2017

Jono Bacon

Endless Code and Mission Hardware Demo

Recently, I have had the pleasure of working with a fantastic company called Endless who are building a range of computers and a Linux-based operating system called Endless OS.

My work with them has primarily involved the community and product development of an initiative in which they are integrating functionality into the operating system that teaches you how to code. This provides a powerful platform where you can learn to code and easily hack on applications within it.

If this sounds interesting to you, I created a short video demo where I show off their Mission hardware as well as run through a demo of Endless Code in action. You can see it below:

I would love to hear what you think and how Endless Code can be improved in the comments below.

The post Endless Code and Mission Hardware Demo appeared first on Jono Bacon.

by Jono Bacon at January 24, 2017 12:35 PM

January 23, 2017

Akkana Peck

Testing a GitHub Pull Request

Several times recently I've come across someone with a useful fix to a program on GitHub, for which they'd filed a GitHub pull request.

The problem is that GitHub doesn't give you any link on the pull request to let you download the code in that pull request. You can get a list of the checkins inside it, or a list of the changed files so you can view the differences graphically. But if you want the code on your own computer, so you can test it, or use your own editors and diff tools to inspect it, it's not obvious how. That this is a problem is easily seen with a web search for something like download github pull request -- there are huge numbers of people asking how, and most of the answers are vague and unclear.

That's a shame, because it turns out it's easy to pull a pull request. You can fetch it directly with git into a new branch as long as you have the pull request ID. That's the ID shown on the GitHub pull request page:

[GitHub pull request screenshot]

Once you have the pull request ID, choose a new name for your branch, then fetch it:

git fetch origin pull/PULL-REQUEST_ID/head:NEW-BRANCH-NAME
git checkout NEW-BRANCH-NAME

Then you can view diffs with something like git difftool NEW-BRANCH-NAME..master

Easy! GitHub should give a hint of that on its pull request pages.

Fetching a Pull Request diff to apply it to another tree

But shortly after I learned how to apply a pull request, I had a related but different problem in another project. There was a pull request for an older repository, but the part it applied to had since been split off into a separate project. (It was an old pull request that had fallen through the cracks, and as a new developer on the project, I wanted to see if I could help test it in the new repository.)

You can't pull a pull request that's for a whole different repository. But what you can do is go to the pull request's page on GitHub. There are 3 tabs: Conversation, Commits, and Files changed. Click on Files changed to see the diffs visually.

That works if the changes are small and only affect a few files (which fortunately was the case this time). It's not so great if there are a lot of changes or a lot of files affected. I couldn't find any "Raw" or "download" button that would give me a diff I could actually apply. You can select all and then paste the diffs into a local file, but you have to do that separately for each file affected. It might be, if you have a lot of files, that the best solution is to check out the original repo, apply the pull request, generate a diff locally with git diff, then apply that diff to the new repo. Rather circuitous. But with any luck that situation won't arise very often.

Update: thanks very much to Houz for the solution! (In the comments, below.) Just append .diff or .patch to the pull request URL, e.g. https://github.com/OWNER/REPO/pull/REQUEST-ID.diff which you can view in a browser or fetch with wget or curl.

January 23, 2017 09:34 PM

January 19, 2017

Akkana Peck

Plotting Shapes with Python Basemap without Shapefiles

In my article on Plotting election (and other county-level) data with Python Basemap, I used ESRI shapefiles for both states and counties.

But one of the election data files I found, OpenDataSoft's USA 2016 Presidential Election by county had embedded county shapes, available either as CSV or as GeoJSON. (I used the CSV version, but inside the CSV the geo data are encoded as JSON so you'll need JSON decoding either way. But that's no problem.)

Just about all the documentation I found on coloring shapes in Basemap assumed that the shapes were defined as ESRI shapefiles. How do you draw shapes if you have latitude/longitude data in a more open format?

As it turns out, it's quite easy, but it took a fair amount of poking around inside Basemap to figure out how it worked.

In the loop over counties in the US in the previous article, the end goal was to create a matplotlib Polygon and use that to add a Basemap patch. But matplotlib's Polygon wants map coordinates, not latitude/longitude.

If m is your basemap (i.e. you created the map with m = Basemap( ... )), you can translate coordinates like this:

    (mapx, mapy) = m(longitude, latitude)

So once you have a region as a list of (longitude, latitude) coordinate pairs, you can create a colored, shaped patch like this:

    for coord_pair in region:
        coord_pair[0], coord_pair[1] = m(coord_pair[0], coord_pair[1])
    poly = Polygon(region, facecolor=color, edgecolor=color)
    ax.add_patch(poly)

Working with the OpenDataSoft data file was actually a little harder than that, because the list of coordinates was JSON-encoded inside the CSV file, so I had to decode it with json.loads(county["Geo Shape"]). Once decoded, it had some counties as a Polygon, a list of lists (allowing for discontiguous outlines), and others as a MultiPolygon, a list of lists of lists (I'm not sure why, since the Polygon format already allows for discontiguous boundaries).
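
In case it's useful, here's a minimal sketch of that kind of normalization, assuming each row comes from csv.DictReader and has a "Geo Shape" column the way the OpenDataSoft file does (the helper name is made up, not from my actual script):

import json

def geo_shape_rings(geo_shape_str):
    # Decode one "Geo Shape" cell (GeoJSON stored as a string) and return a
    # flat list of rings, each ring being a list of [longitude, latitude] pairs.
    # Sketch only: handles both Polygon and MultiPolygon geometries.
    shape = json.loads(geo_shape_str)
    if shape["type"] == "Polygon":
        # A Polygon's coordinates are already a list of rings.
        return shape["coordinates"]
    elif shape["type"] == "MultiPolygon":
        # A MultiPolygon is a list of polygons, each of which is a list of rings.
        return [ring for polygon in shape["coordinates"] for ring in polygon]
    raise ValueError("Unexpected geometry type: " + shape["type"])

Each ring can then go through the same longitude/latitude to map coordinate conversion shown above before being turned into a Polygon patch.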

[Blue-red-purple 2016 election map]

And a few counties were missing, so there were blanks on the map, which show up as white patches in this screenshot. The counties missing data either have inconsistent formatting in their coordinate lists, or they have only one coordinate pair, and they include Washington, Virginia; Roane, Tennessee; Schley, Georgia; Terrell, Georgia; Marshall, Alabama; Williamsburg, Virginia; and Pike, Georgia; plus Oglala Lakota (which is clearly meant to be Oglala, South Dakota), and all of Alaska.

One thing about crunching data files from the internet is that there are always a few special cases you have to code around. And I could have gotten those coordinates from the census shapefiles; but as long as I needed the census shapefile anyway, why use the CSV shapes at all? In this particular case, it makes more sense to use the shapefiles from the Census.

Still, I'm glad to have learned how to use arbitrary coordinates as shapes, freeing me from the proprietary and annoying ESRI shapefile format.

The code: Blue-red map using CSV with embedded county shapes

January 19, 2017 04:36 PM

Nathan Haines

UbuCon Summit at SCALE 15x Call for Papers

UbuCons are a remarkable achievement from the Ubuntu community: a network of conferences across the globe, organized by volunteers passionate about Open Source and about collaborating, contributing, and socializing around Ubuntu. UbuCon Summit at SCALE 15x is the next in the impressive series of conferences.

UbuCon Summit at SCALE 15x takes place in Pasadena, California on March 2nd and 3rd during the first two days of SCALE 15x. Ubuntu will also have a booth at SCALE's expo floor from March 3rd through 5th.

We are putting together the conference schedule and are announcing a call for papers. While we have some amazing speakers and an always-vibrant unconference schedule planned, it is the community, as always, who make UbuCon what it is—just as the community sets Ubuntu apart.

Interested speakers who have Ubuntu-related topics can submit their talk to the SCALE call for papers site. UbuCon Summit has a wide range of both developers and enthusiasts, so any interesting topic is welcome, no matter how casual or technical. The SCALE CFP form is available here:

http://www.socallinuxexpo.org/scale/15x/cfp

Over the next few weeks we’ll be sharing more details about the Summit, revamping the global UbuCon site and updating the SCALE schedule with all relevant information.

http://www.ubucon.org/

About SCaLE:

SCALE 15x, the 15th Annual Southern California Linux Expo, is the largest community-run Linux/FOSS showcase event in North America. It will be held from March 2-5 at the Pasadena Convention Center in Pasadena, California. For more information on the expo, visit https://www.socallinuxexpo.org

January 19, 2017 10:12 AM

January 14, 2017

Akkana Peck

Plotting election (and other county-level) data with Python Basemap

After my arduous search for open 2016 election data by county, as a first test I wanted one of those red-blue-purple charts of how Democratic or Republican each county's vote was.

I used the Basemap package for plotting. It used to be part of matplotlib, but it's been split off into its own toolkit, grouped under mpl_toolkits: on Debian, it's available as python-mpltoolkits.basemap, or you can find Basemap on GitHub.

It's easiest to start with the fillstates.py example that shows how to draw a US map with different states colored differently. You'll need the three shapefiles (because of ESRI's silly shapefile format): st99_d00.dbf, st99_d00.shp and st99_d00.shx, available in the same examples directory.

Of course, to plot counties, you need county shapefiles as well. The US Census has county shapefiles at several different resolutions (I used the 500k version). Then you can plot state and counties outlines like this:

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

def draw_us_map():
    # Set the lower left and upper right limits of the bounding box:
    lllon = -119
    urlon = -64
    lllat = 22.0
    urlat = 50.5
    # and calculate a centerpoint, needed for the projection:
    centerlon = float(lllon + urlon) / 2.0
    centerlat = float(lllat + urlat) / 2.0

    m = Basemap(resolution='i',  # crude, low, intermediate, high, full
                llcrnrlon = lllon, urcrnrlon = urlon,
                lon_0 = centerlon,
                llcrnrlat = lllat, urcrnrlat = urlat,
                lat_0 = centerlat,
                projection='tmerc')

    # Read state boundaries.
    shp_info = m.readshapefile('st99_d00', 'states',
                               drawbounds=True, color='lightgrey')

    # Read county boundaries
    shp_info = m.readshapefile('cb_2015_us_county_500k',
                               'counties',
                               drawbounds=True)

if __name__ == "__main__":
    draw_us_map()
    plt.title('US Counties')
    # Get rid of some of the extraneous whitespace matplotlib loves to use.
    plt.tight_layout(pad=0, w_pad=0, h_pad=0)
    plt.show()
[Simple map of US county borders]

Accessing the state and county data after reading shapefiles

Great. Now that we've plotted all the states and counties, how do we get a list of them, so that when I read out "Santa Clara, CA" from the data I'm trying to plot, I know which map object to color?

After calling readshapefile('st99_d00', 'states'), m has two new members, both lists: m.states and m.states_info.

m.states_info[] is a list of dicts mirroring what was in the shapefile. For the Census state list, the useful keys are NAME, AREA, and PERIMETER. There's also STATE, which is an integer (not restricted to 1 through 50) but I'll get to that.

If you want the shape for, say, California, iterate through m.states_info[] looking for the one where m.states_info[i]["NAME"] == "California". Note the index i: the shape coordinates will be in m.states[i] (in basemap map coordinates, not latitude/longitude).
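
For instance, a lookup helper might look something like this (just a sketch, assuming m is the Basemap created above; it returns only the first match, even though some states have several entries):

def find_state_shape(m, name):
    # Search the parallel m.states_info / m.states lists for a state by name.
    # States made up of islands have multiple entries; this returns the first.
    for i, info in enumerate(m.states_info):
        if info["NAME"] == name:
            return m.states[i]    # map coordinates, not latitude/longitude
    return None

california_shape = find_state_shape(m, "California")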

Correlating states and counties in Census shapefiles

County data is similar, with county names in m.counties_info[i]["NAME"]. Remember that STATE integer? Each county has a STATEFP, m.counties_info[i]["STATEFP"] that matches some state's m.states_info[i]["STATE"].

But doing that search every time would be slow. So right after calling readshapefile for the states, I make a table of states. Empirically, STATE in the state list goes up to 72. Why 72? Shrug.

    MAXSTATEFP = 73
    states = [None] * MAXSTATEFP
    for state in m.states_info:
        statefp = int(state["STATE"])
        # Many states have multiple entries in m.states (because of islands).
        # Only add it once.
        if not states[statefp]:
            states[statefp] = state["NAME"]

That'll make it easy to look up a county's state name quickly when we're looping through all the counties.

Calculating colors for each county

Time to figure out the colors from the Deleetdk election results CSV file. Reading lines from the CSV file into a dictionary is superficially easy enough:

    fp = open("tidy_data.csv")
    reader = csv.DictReader(fp)

    # Make a dictionary of all "county, state" and their colors.
    county_colors = {}
    for county in reader:
        # What color is this county?
        pop = float(county["votes"])
        blue = float(county["results.clintonh"])/pop
        red = float(county["Total.Population"])/pop
        county_colors["%s, %s" % (county["name"], county["State"])] \
            = (red, 0, blue)

But in practice, that wasn't good enough, because the county names in the Deleetdk data didn't always match the official Census county names.

Fuzzy matches

For instance, the CSV file had no results for Alaska or Puerto Rico, so I had to skip those. Non-ASCII characters were a problem: "Doña Ana" county in the census data was "Dona Ana" in the CSV. I had to strip off " County", " Borough" and similar terms: "St Louis" in the census data was "St. Louis County" in the CSV. Some names were capitalized differently, like PLYMOUTH vs. Plymouth, or Lac Qui Parle vs. Lac qui Parle. And some names were just different, like "Jeff Davis" vs. "Jefferson Davis".

To get around that I used SequenceMatcher to look for fuzzy matches when I couldn't find an exact match:

def fuzzy_find(s, slist):
    '''Try to find a fuzzy match for s in slist.
    '''
    best_ratio = -1
    best_match = None

    ls = s.lower()
    for ss in slist:
        r = SequenceMatcher(None, ls, ss.lower()).ratio()
        if r > best_ratio:
            best_ratio = r
            best_match = ss
    if best_ratio > .75:
        return best_match
    return None

Correlate the county names from the two datasets

It's finally time to loop through the counties in the map to color and plot them.

Remember STATE vs. STATEFP? It turns out there are a few counties in the census county shapefile with a STATEFP that doesn't match any STATE in the state shapefile. Mostly they're in the Virgin Islands and I don't have election data for them anyway, so I skipped them for now. I also skipped Puerto Rico and Alaska (no results in the election data) and counties that had no corresponding state: I'll omit that code here, but you can see it in the final script, linked at the end.

    for i, county in enumerate(m.counties_info):
        countyname = county["NAME"]
        try:
            statename = states[int(county["STATEFP"])]
        except IndexError:
            print countyname, "has out-of-index statefp of", county["STATEFP"]
            continue

        countystate = "%s, %s" % (countyname, statename)
        try:
            ccolor = county_colors[countystate]
        except KeyError:
            # No exact match; try for a fuzzy match
            fuzzyname = fuzzy_find(countystate, county_colors.keys())
            if fuzzyname:
                ccolor = county_colors[fuzzyname]
                county_colors[countystate] = ccolor
            else:
                print "No match for", countystate
                continue

        countyseg = m.counties[i]
        poly = Polygon(countyseg, facecolor=ccolor)  # edgecolor="white"
        ax.add_patch(poly)

Moving Hawaii

Finally, although the CSV didn't have results for Alaska, it did have Hawaii. To display it, you can move it when creating the patches:

    countyseg = m.counties[i]
    if statename == 'Hawaii':
        countyseg = list(map(lambda (x,y): (x + 5750000, y-1400000), countyseg))
    poly = Polygon(countyseg, facecolor=countycolor)
    ax.add_patch(poly)
The offsets are in map coordinates and are empirical; I fiddled with them until Hawaii showed up at a reasonable place.

[Blue-red-purple 2016 election map]

Well, that was a much longer article than I intended. Turns out it takes a fair amount of code to correlate several datasets and turn them into a map. But a lot of the work will be applicable to other datasets.

Full script on GitHub: Blue-red map using Census county shapefile

January 14, 2017 10:10 PM

Elizabeth Krumbach

Holidays in Philadelphia

In December MJ and I spent a couple weeks on the east coast in the new townhouse. It was the first long stay we’ve had there together, and though the holidays limited how much we could get done, particularly when it came to contractors, we did have a whole bunch to do.

First, I continued my quest to go through boxes of things that almost exclusively belonged to MJ’s grandparents. Unpacking, cataloging and deciding what pieces stay in Pennsylvania and what we’re sending to California. In the course of this I also had a deadline creeping up on me as I needed to find the menorah before Hanukkah began on the evening of December 24th. The timing of Hanukkah landing right along Christmas and New Years worked out well for us, MJ had some time off and it made the timing of the visit even more of a no-brainer. Plus, we were able to celebrate the entire eight night holiday there in Philadelphia rather than breaking it up between there and San Francisco.

The most amusing thing about finding the menorah was that it’s nearly identical to the one we have at home. MJ had mentioned that it was similar when I picked it out, but I had no idea that it was almost identical. Nothing wrong with the familiar, it’s a beautiful menorah.

House-wise MJ got the garage door opener installed and shelves put up in the powder room. With the help of his friend Tim, he also got the coffee table put together and the television mounted over the fireplace on New Years Eve. The TV was up in time to watch some of the NYE midnight broadcasts! We got the mail handling, trash schedule and cleaning sorted out with relatives who will be helping us with that, so the house will be well looked after in our absence.

I put together the vacuum and used it for the first time as I did the first thorough tidying of the house since we’d moved everything in from storage. I got my desk put together in the den, even though it’s still surrounded by boxes and will be until we ship stuff out to California. I was able to finally unpack some things we had actually ordered the last time I was in town but never got to put around the house, like a bunch of trash cans for various rooms and some kitchen goodies from ThinkGeek (Death Star waffle maker! R2-D2 measuring cups!). We also ordered a pair of counter-height chairs for the kitchen and they arrived in time for me to put them together just before we left, so the kitchen is also coming together even though we still need to go shopping for pots and pans.

Family-wise, we did a lot of visiting. On Christmas Eve we went to the nearby Samarkand restaurant, featuring authentic Uzbeki food. It was wonderful. We also did various lunches and dinners. A couple days were also spent going down to the city to visit a relative who is recovering in the hospital.

I didn’t see everyone I wanted to see but we did also get to visit with various friends. I saw my beloved Rogue One: A Star Wars Story a second time and met up with Danita to see Moana, which was great. I’ve now listened to the Moana soundtrack more than a few times. We met up with Crissi and her boyfriend Henry at Grand Lux Cafe in King of Prussia, where we also had a few errands to run and I was able to pick up some mittens at L.L. Bean. New Years Eve was spent with our friends Tim and Colleen, where we ordered pizza and hung aforementioned television. They also brought along some sweet bubbly for us to enjoy at midnight.

We also had lots of our favorite foods! We celebrated together at MJ’s favorite French cuisine inspired Chinese restaurant in Chestnut Hill, CinCin. We visited some of our standard favorites, including The Continental and Mad Mex. Exploring around our new neighborhood, we indulged in some east coast Chinese, made it to a Jewish deli where I got a delicious hoagie, found a sushi place that has an excellent roll list. We also went to Chickie’s and Pete’s crab house a couple of times, which, while being a Philadelphia establishment, I’d never actually been to. We also had a dinner at The Melting Pot, where I was able to try some local beers along with our fondue, and I’m delighted to see how much the microbrewery scene has grown since I moved away. We also hit a few diners during our stay, and enjoyed some eggnog from Wawa, which is some of the best eggnog ever made.

Unfortunately it wasn’t all fun. I’ve been battling a nasty bout of bronchitis for the past couple months. This continued ailment led to a visit to urgent care to get it looked at, and an x-ray to confirm I didn’t have pneumonia. A pile of medication later, my bronchitis lingered, and later in the week I spontaneously developed hives on my neck, which confounded the doctor. In the midst of health woes, I also managed to cut my foot on some broken glass while I was unpacking. It bled a lot, and I was a bit hobbled for a couple days while it healed. Thankfully MJ cleaned it out thoroughly (ouch!) once the bleeding had subsided and it has healed up nicely.

As the trip wound down I found myself missing the cats and eager to get home where I’d begin my new job. Still, it was with a heavy heart that we left our beautiful new vacation home, family and friends on the east coast.

by pleia2 at January 14, 2017 07:32 AM

January 12, 2017

Akkana Peck

Getting Election Data, and Why Open Data is Important

Back in 2012, I got interested in fiddling around with election data as a way to learn about data analysis in Python. So I went searching for results data on the presidential election. And got a surprise: it wasn't available anywhere in the US. After many hours of searching, the only source I ever found was at the UK newspaper, The Guardian.

Surely in 2016, we're better off, right? But when I went looking, I found otherwise. There's still no official source for US election results data; there isn't even a source as reliable as The Guardian this time.

You might think Data.gov would be the place to go for official election results, but no: searching for 2016 election on Data.gov yields nothing remotely useful.

The Federal Election Commission has an election results page, but it only goes up to 2014 and only includes the Senate and House, not presidential elections. Archives.gov has popular vote totals for the 2012 election but not the current one. Maybe in four years, they'll have some data.

After striking out on official US government sites, I searched the web. I found a few sources, none of them even remotely official.

Early on I found Simon Rogers, How to Download County-Level Results Data, which leads to GitHub user tonmcg's County Level Election Results 12-16. It's a comparison of Democratic vs. Republican votes in the 2012 and 2016 elections (I assume that means votes for that party's presidential candidate, though the field names don't make that entirely clear), with no information on third-party candidates.

KidPixo's Presidential Election USA 2016 on GitHub is a little better: the fields make it clear that it's recording votes for Trump and Clinton, but still no third party information. It's also scraped from the New York Times, and it includes the scraping code, so you can check it and have some confidence in the source of the data.

Kaggle claims to have election data, but you can't download their datasets or even see what they have without signing up for an account. Ben Hamner has some publicly available Kaggle data on GitHub, but only for the primary. I also found several companies selling election data, and several universities that had datasets available for researchers with accounts at that university.

The most complete dataset I found, and the only open one that included third party candidates, was through OpenDataSoft. Like the other two, this data is scraped from the NYT. It has data for all the minor party candidates as well as the majors, plus lots of demographic data for each county in the lower 48 and Hawaii (but not the territories); the election data for all the Alaska counties is missing.

You can get it from a GitHub repo, Deleetdk's USA.county.data (look in inst/ext/tidy_data.csv). If you want a larger version with geographic shape data included, clicking through several other OpenDataSoft pages eventually gets you to an export page, USA 2016 Presidential Election by county, where you can download CSV, JSON, GeoJSON and other formats.

The OpenDataSoft data file was pretty useful, though it had gaps (for instance, there's no data for Alaska). I was able to make my own red-blue-purple plot of county voting results (I'll write separately about how to do that with python-basemap), and to play around with statistics.
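
For anyone who wants to poke at the same file, a minimal sketch of that kind of exploration using pandas might look like this (the column names below are hypothetical placeholders; check the actual headers in tidy_data.csv before relying on them):

import pandas as pd

# Load the county-level results (the path is wherever you saved tidy_data.csv).
df = pd.read_csv("tidy_data.csv")
print(df.columns)    # see which fields the file actually provides

# Hypothetical example: per-county two-party vote share, assuming the file
# has columns named "votes_dem" and "votes_gop" (yours may be named differently).
if {"votes_dem", "votes_gop"}.issubset(df.columns):
    df["dem_share"] = df["votes_dem"] / (df["votes_dem"] + df["votes_gop"])
    print(df["dem_share"].describe())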

Implications of the lack of open data

But the point my search really brought home: By the time I finally found a workable dataset, I was so sick of the search, and so relieved to find anything at all, that I'd stopped being picky about where the data came from. I had long since given up on finding anything from a known source, like a government site or even a newspaper, and was just looking for data, any data.

And that's not good. It means that a lot of the people doing statistics on elections are using data from unverified sources, probably copied from someone else who claimed to have scraped it, using unknown code, from some post-election web page that likely no longer exists. Is it accurate? There's no way of knowing.

What if someone wanted to spread fake news and misinformation? There's a hunger for data, particularly on something as important as a US Presidential election. Looking at Google's suggested results and "Searches related to" made it clear that it wasn't just me: a lot of people are searching for this information and not finding it through official sources.

If I were a foreign power wanting to spread disinformation, providing easily available data files -- to fill the gap left by the US Government's refusal to do so -- would be a great way to mislead people. I could put anything I wanted in those files: there's no way of checking them against official results since there are no official results. Just make sure the totals add up to what people expect to see. You could easily set up an official-looking site and put made-up data there, and it would look a lot more real than all the people scraping from the NYT.

If our government -- or newspapers, or anyone else -- really wanted to combat "fake news", they should take open data seriously. They should make datasets for important issues like the presidential election publicly available, as soon as possible after the election -- not four years later when nobody but historians cares any more. Without that, we're leaving ourselves open to fake news and fake data.

January 12, 2017 11:41 PM

January 09, 2017

Akkana Peck

Snowy Winter Days, and an Elk Visit

[Snowy view of the Rio Grande from Overlook]

The snowy days here have been so pretty, the snow contrasting with the darkness of the piñons and junipers and the black basalt. The light fluffy crystals sparkle in a rainbow of colors when they catch the sunlight at the right angle, but I've been unable to catch that effect in a photo.

We've had some unusual holiday visitors, too, culminating in this morning's visit from a huge bull elk.

[bull elk in the yard] Dave came down to make coffee and saw the elk in the garden right next to the window. But by the time I saw him, he was farther out in the yard. And my DSLR batteries were dead, so I grabbed the point-and-shoot and got what I could through the window.

Fortunately for my photography the elk wasn't going anywhere in any hurry. He has an injured leg, and was limping badly. He slowly made his way down the hill and into the neighbors' yard. I hope he returns. Even with a limp that bad, an elk that size has no predators in White Rock, so as long as he stays off the nearby San Ildefonso reservation (where hunting is allowed) and manages to find enough food, he should be all right. I'm tempted to buy some hay to leave out for him.

[Sunset light on the Sangre de Cristos] Some of the sunsets have been pretty nice, too.

A few more photos.

January 09, 2017 02:48 AM

January 08, 2017

Akkana Peck

Using virtualenv to replace the broken pip install --user

Python's installation tool, pip, has some problems on Debian.

The obvious way to use pip is as root: sudo pip install packagename. If you hang out in Python groups at all, you'll quickly find that this is strongly frowned upon. It can lead to your pip-installed packages intermingling with the ones installed by Debian's apt-get, possibly causing problems during apt system updates.

The second most obvious way, as you'll see if you read pip's man page, is pip install --user packagename. This installs the package with only user permissions, not root, under a directory called ~/.local. Python automatically checks ~/.local as part of its PYTHONPATH, and you can add ~/.local/bin to your PATH, so this makes everything transparent.

Or so I thought until recently, when I discovered that pip install --user ignores system-installed packages when it's calculating its dependencies, so you could end up with a bunch of incompatible versions of packages installed. Plus it takes forever to re-download and re-install dependencies you already had.

Pip has a clear page describing how pip --user is supposed to work, and that isn't what it's doing. So I filed pip bug 4222; but since pip has 687 open bugs filed against it, I'm not terrifically hopeful of that getting fixed any time soon. So I needed a workaround.

Use virtualenv instead of --user

Fortunately, it turned out that pip install works correctly in a virtualenv if you include the --system-site-packages option. I had thought virtualenvs were for testing, but quite a few people on #python said they used virtualenvs all the time, as part of their normal runtime environments. (Maybe due to pip's deficiencies?) I had heard people speak deprecatingly of --user in favor of virtualenvs but was never clear why; maybe this is why.

So, what I needed was to set up a virtualenv that I can keep around all the time and use by default every time I log in. I called it ~/.pythonenv when I created it:

virtualenv --system-site-packages $HOME/.pythonenv

Normally, the next thing you do after creating a virtualenv is to source a script called bin/activate inside the venv. That sets up your PATH, PYTHONPATH and a bunch of other variables so the venv will be used in all the right ways. But activate also changes your prompt, which I didn't want in my normal runtime environment. So I stuck this in my .zlogin file:

VIRTUAL_ENV_DISABLE_PROMPT=1 source $HOME/.pythonenv/bin/activate

Now I'll activate the venv once, when I log in (and once in every xterm window, since I set XTerm*loginShell: true in my .Xdefaults). I see my normal prompt, I can use the normal Debian-installed Python packages, and I can install additional PyPI packages with pip install packagename (no --user, no sudo).
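
A quick way to confirm the venv really is the active environment, as a minimal sketch (the path in the comment is just an example):

import sys

# With the venv activated, sys.prefix points at the virtualenv directory
# (e.g. /home/you/.pythonenv) rather than /usr, and sys.executable is the
# python binary inside the venv.
print(sys.prefix)
print(sys.executable)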

January 08, 2017 06:37 PM

January 05, 2017

Elizabeth Krumbach

The Girard Avenue Line

While I was in Philadelphia over the holidays a friend clued me into the fact that one of the historic streetcars (trolleys) on the Girard Avenue Line was decorated for the holidays. This line, SEPTA Route 15, is the last historic trolley line in Philadelphia and I had never ridden it before. This was the perfect opportunity!

I decided that I’d make the whole day about trains, so that morning I hopped on the SEPTA West Trenton Line regional rail, which has a stop near our place north of Philadelphia. After a cheesesteak lunch near Jefferson Station, it was on to the Market-Frankford Line subway/surface train to get up to Girard Station.

My goal for the afternoon was to see and take pictures of the holiday car, number 2336. So, with the friend I dragged along on this crazy adventure, we started waiting. The first couple trolleys weren’t decorated, so we hopped on another to get out of the chilly weather for a bit. We got off that trolley and waited for a few more, in both directions. This was repeated a couple times until we finally got a glimpse of the decorated trolley heading back to Girard Station. Now that it was on our radar, we hopped on the next one and followed that trolley!


The non-decorated, but still lovely, 2335

We caught up with the decorated trolley after the turnaround at the end of the line and got on just after Girard Station. From there we took it all the way to the end of the line in west Philadelphia at 63rd St. There we had to disembark, and I took a few pictures of the outside.

We were able to get on again after the driver took a break, which allowed us to take it all the way back.

The car was decorated inside and out, with lights, garland and signs.

At the end the driver asked if we’d just been on it to take a ride. Yep! I came just to see this specific trolley! Since it was getting dark anyway, he was kind enough to turn the outside lights on for me so I could get some pictures.

Riding this line for the first time, I was able to make some observations about how these cars differ from the PCCs that run in San Francisco. In the historic fleet of San Francisco streetcars, the 1055 has the same livery as the trolleys that run in Philadelphia today. Most of the PCCs in San Francisco’s fleet actually came from SEPTA in Philadelphia and this one is no exception, originally numbered 2122 while in service there. However, taking a peek inside, it’s easy to see that it’s a bit different from the ones that run in Philadelphia today:


Inside the 1055 in San Francisco

The inside of this one looks shiny compared to the inside of the ones still running in Philadelphia. It’s all metal versus the plastic interior in Philadelphia, and the walls of the car are much thinner in San Francisco. I suspect this is all due to climate control requirements. In San Francisco we don’t really have seasons and the temperature stays pretty comfortable, so while there is a little climate control, it’s nothing compared to what the cars in Philadelphia need in the summer and winter. You can also see a difference from the outside: the entire top of the Philadelphia cars has a raised portion which seems to be climate control, but on the San Francisco cars it’s only a small bit at the center:


Outside the 1055 in San Francisco

Finally, the seats and wheelchair accessibility are different. The seats are all plastic in San Francisco, whereas they have fabric in Philadelphia. The raised platforms themselves and a portable metal platform serve as wheelchair access in San Francisco, whereas Philadelphia has an actual operating lift since there are many street-level stops.

To wrap up the trolley adventure, we hopped on a final one to get us to Broad Street where we took the Broad Street Line subway down to dinner at Sazon on Spring Garden Street, where we had a meal that concluded with some of the best hot chocolate I’ve ever had. Perfect to warm us up after spending all afternoon chasing trolleys in Philadelphia December weather.

Dinner finished, I took one last train, the regional rail to head back to the suburbs.

More photos from the trolleys on the Girard Avenue Line here: https://www.flickr.com/photos/pleia2/albums/72157676838141261

by pleia2 at January 05, 2017 08:47 AM

January 04, 2017

Akkana Peck

Firefox "Reader Mode" and NoScript

A couple of days ago I blogged about using Firefox's "Delete Node" to make web pages more readable. In a subsequent Twitter discussion someone pointed out that if the goal is to make a web page's content clearer, Firefox's relatively new "Reader Mode" might be a better way.

I knew about Reader Mode but hadn't used it. It only shows up on some pages, as a little "open book" icon to the right of the URLbar just left of the Refresh/Stop button. It did show up on the Pogue Yahoo article; but when I clicked it, I just got a big blank page with an icon of a circle with a horizontal dash; no text.

It turns out that to see Reader Mode content with NoScript, you must explicitly enable JavaScript from about:reader.

There are some reasons it's not automatically whitelisted: see discussions in bug 1158071 and bug 1166455 -- so enable it at your own risk. But it's nice to be able to use Reader Mode, and I'm glad the Twitter discussion spurred me to figure out why it wasn't working.

January 04, 2017 06:37 PM

January 02, 2017

Akkana Peck

Firefox's "Delete Node" eliminates pesky content-hiding banners

It's trendy among web designers today -- the kind who care more about showing ads than about the people reading their pages -- to use fixed banner elements that hide part of the page. In other words, you have a header, some content, and maybe a footer; and when you scroll the content to get to the next page, the header and footer stay in place, meaning that you can only read the few lines sandwiched in between them. But at least you can see the name of the site no matter how far you scroll down in the article! Wouldn't want to forget the site name!

Worse, many of these sites don't scroll properly. If you Page Down, the content moves a full page up, which means that the top of the new page is now hidden under that fixed banner and you have to scroll back up a few lines to continue reading where you left off. David Pogue wrote about that problem recently and it got a lot of play when Slashdot picked it up: These 18 big websites fail the space-bar scrolling test.

It's a little too bad he concentrated on the spacebar. Certainly it's good to point out that hitting the spacebar scrolls down -- I was flabbergasted to read the Slashdot discussion and discover that lots of people didn't already know that, since it's been my most common way of paging since browsers were invented. (Shift-space does a Page Up.) But the Slashdot discussion then veered off into a chorus of "I've never used the spacebar to scroll so why should anyone else care?", when the issue has nothing to do with the spacebar: the issue is that Page Down doesn't work right, whichever key you use to trigger that page down.

But never mind that. Fixed headers that don't scroll are bad even if the content scrolls the right amount, because they waste precious vertical screen space on useless cruft you don't need. And I'm here to tell you that you can get rid of those annoying fixed headers, at least in Firefox.

[Article with intrusive Yahoo headers]

Let's take Pogue's article itself, since Yahoo is a perfect example of annoying content that covers the page and doesn't go away. First there's that enormous header -- the bottom row of menus ("Tech Home" and so forth) disappear once you scroll, but the rest stay there forever. Worse, there's that annoying popup on the bottom right ("Privacy | Terms" etc.) which blocks content, and although Yahoo! scrolls the right amount to account for the header, it doesn't account for that privacy bar, which continues to block most of the last line of every page.

The first step is to call up the DOM Inspector. Right-click on the thing you want to get rid of and choose Inspect Element:

[Right-click menu with Inspect Element]


That brings up the DOM Inspector window, which looks like this (click on the image for a full-sized view):

[DOM Inspector]

The upper left area shows the hierarchical structure of the web page.

Don't Panic! You don't have to know HTML or understand any of this for this technique to work.

Hover your mouse over the items in the hierarchy. Notice that as you hover, different parts of the web page are highlighted in translucent blue.

Generally, whatever element you started on will be a small part of the header you're trying to eliminate. Move up one line, to the element's parent; you may see that a bigger part of the header is highlighted. Move up again, and keep moving up, one line at a time, until the whole header is highlighted, as in the screenshot. There's also a dark grey window telling you something about the HTML, if you're interested; if you're not, don't worry about it.

Eventually you'll move up too far, and some other part of the page, or the whole page, will be highlighted. You need to find the element that makes the whole header blue, but nothing else.

Once you've found that element, right-click on it to get a context menu, and look for Delete Node (near the bottom of the menu). Clicking on that will delete the header from the page.

Repeat for any other part of the page you want to remove, like that annoying bar at the bottom right. And you're left with a nice, readable page, which will scroll properly and let you read every line, and will show you more text per page so you don't have to scroll as often.

[Article with intrusive Yahoo headers]

It's a useful trick. You can also use Inspect/Delete Node for many of those popups that cover the screen telling you "subscribe to our content!" It's especially handy if you like to browse with NoScript, so you can't dismiss those popups by clicking on an X. So happy reading!

Addendum on Spacebars

By the way, in case you weren't aware that the spacebar did a page down, here's another tip that might come in useful: the spacebar also advances to the next slide in just about every presentation program, from PowerPoint to Libre Office to most PDF viewers. I'm amazed at how often I've seen presenters squinting with a flashlight at the keyboard trying to find the right-arrow or down-arrow or page-down or whatever key they're looking for. These are all ways of advancing to the next slide, but they're all much harder to find than that great big spacebar at the bottom of the keyboard.

January 02, 2017 11:23 PM

Elizabeth Krumbach

The adventures of 2016

2016 was filled with professional successes and exciting adventures, but also various personal struggles. I exhausted myself finishing two books, navigated some complicated parts of my marriage, experienced my whole team getting laid off from a job we loved, handled an uptick in migraines and a continuing bout of bronchitis, and am still coming to terms with the recent loss.

It’s been difficult to maintain perspective, but it actually was an incredible year. I succeeded in having two books come out, my travels took me to some new, amazing places, we bought a vacation house, and all my blood work shows that I’m healthier than I was at this time last year.


Lots more running in 2016 led to a healthier me!

Some of the tough stuff has even been good. I have succeeded in strengthening bonds with my husband and several people in my life who I care about. I’ve worked hard to worry less and enjoy time with friends and family, which may explain why this year ended up being the year of the group selfie. I paused to capture happy moments with my loved ones a lot more often.

So without further ado, the more quantitative year roundup!

The 9th edition of The Official Ubuntu Book came out in July. This is the second edition I’ve been part of preparing. The book has updates to bring us up to the 16.04 release and features a whole new chapter covering “Ubuntu, Convergence, and Devices of the Future,” which I was really thrilled about adding. My work with Matthew Helmke and José Antonio Rey was also very enjoyable. I wrote about the release here.

I also finished the first book I was the lead author on, Common OpenStack Deployments. Writing a book takes a considerable amount of time and effort; I spent many long nights and weekends testing and tweaking configurations largely written by my contributing author, Matt Fischer, writing copy for the book and integrating feedback from our excellent fleet of reviewers and other contributors. In the end, we released a book that takes the reader from knowing nothing about OpenStack to doing sample deployments using the same Puppet-driven tooling that enterprises use in their environments. The book came out in September; I wrote about it on my own blog here and maintain a blog about the book at DeploymentsBook.com.


Book adventures at the Ocata OpenStack Summit in Barcelona! Thanks to Nithya Ruff for taking a picture of me with my book at the Women of OpenStack area of the expo hall (source) and Brent Haley for getting the picture of Lisa-Marie and me (source).

This year also brought a new investment to our lives: we bought a vacation home in Pennsylvania! It’s a new construction townhouse, so we spent a fair amount of time on the east coast the second half of this year searching for a place, picking out the details and closing. We then spent the winter holidays here, spending a full two weeks away from home to really settle in. I wrote more about our new place here.

I keep saying I won’t travel as much, but 2016 turned out to have more travel than ever, taking over 100,000 miles of flights again.


Feeding a kangaroo, just outside of Melbourne, Australia

At the Jain Temple in Mumbai, India

We had lots of beers in Germany! Photo in the center by Chris Hoge (source)

Barcelona is now one of my favorite places, and its Sagrada Familia Basilica was breathtaking

Most of these conferences and events had a speaking component for me, but I also did a fair number of local talks and at some conferences I spoke more than once. The following is a rundown of all these talks I did in 2016, along with slides.


Photo by Masayuki Igawa (source) from Linux Conf AU in Geelong

Photo by Johanna Koester (source) from my keynote at the Ocata OpenStack Summit

MJ and I have also continued to enjoy our beloved home city of San Francisco, both with just the two of us and with various friends and family. We saw a couple Giants baseball games, along with one of the Sharks playoff games! We sampled a variety of local drinks and foods, visited lots of local animals and took in some amazing local sights. We went to the San Francisco Symphony for the first time, enjoyed a wonderful time together over Labor Day weekend, and I’ve skipped out at times to visit museum exhibits and the zoo.


Dinner at Luce in San Francisco, celebrating MJ’s new job

This year I also geeked out over trains – in four states and five countries! In May MJ and I traveled to Maine to spend some time with family, and a couple days of that trip were spent visiting the Seashore Trolley Museum in Kennebunkport and the Narrow Gauge Railroad Museum in Portland; I wrote about it here. I also enjoyed MUNI Heritage Weekend with my friend Mark at the end of September, where we got to see some of the special street cars and ride several vintage buses, read about that here. I also went up to New York City to finally visit the famous New York Transit Museum in Brooklyn and the accompanying holiday exhibit at Grand Central Terminal with my friend David, details here. In Philadelphia I enjoyed the entire Girard Avenue Line (Route 15), which is populated by historic PCC streetcars (trolleys), including one decorated for the holidays; I have a pile of pictures here. I also got a glimpse of a car on the historic streetcar/trolley line in Melbourne, and my buddy Devdas convinced me to take a train in Mumbai, where I visited the amazing Chhatrapati Shivaji Terminus too. MJ also helped me plan some train adventures in the Netherlands and Germany as I traveled from airports for events.


From the Seashore Trolley Museum barn

As I enter into 2017 I’m thrilled to report that I’ll be starting a new job. Travel continues as I have trips to Australia and Los Angeles already on my schedule. I’ll also be spending time getting settled back into my life on the west coast, as I have spent 75% of my time these past couple months elsewhere.

by pleia2 at January 02, 2017 03:19 PM

December 27, 2016

Elizabeth Krumbach

OpenStack Days Mountain West 2016

A couple weeks ago I attended my last conference of the year, OpenStack Days Mountain West. After much flight shuffling following a seriously delayed flight, I arrived late on the evening prior to the conference with plenty of time to get settled in and feel refreshed for the conference in the morning.

The event kicked off with a keynote from OpenStack Foundation COO Mark Collier who spoke on the growth and success of OpenStack. His talk strongly echoed topics he touched upon at the recent OpenStack Summit back in October as he cited several major companies who are successfully using OpenStack in massive, production deployments including Walmart, AT&T and China Mobile. In keeping with the “future” theme of the conference he also talked about organizations who are already pushing the future potential of OpenStack by betting on the technology for projects that will easily exceed the capacity of what OpenStack can handle today.

Also that morning, Lisa-Marie Namphy moderated a panel on the future of OpenStack with John Dickinson, K Rain Leander, Bruce Mathews and Robert Starmer. She dove right in with the tough questions by having panelists speculate as to why the three major cloud providers don’t run OpenStack. There was also discussion about who the actual users of OpenStack were (consensus was: infrastructure operators), which got into the question of whether app developers were OpenStack users today (perhaps not, app developers don’t want a full Linux environment, they want a place for their app to live). They also discussed the expansion of other languages beyond Python in the project.

That afternoon I saw a talk by Mike Wilson of Mirantis on “OpenStack in the post Moore’s Law World” where he reflected on the current status of Moore’s Law and how it relates to cloud technologies and the projects that are part of OpenStack. He talked about how the major cloud players outside of OpenStack are helping drive innovation for their own platforms by working directly with chip manufacturers to create hardware specifically tuned to their needs. There’s a question of whether anyone in the OpenStack community is doing something similar, and it seems that perhaps they should, so that OpenStack can maintain a competitive edge.

My talk was next, speaking on “The OpenStack Project Continuous Integration System” where I gave a tour of our CI system and explained how we’ve been tracking project growth and steps we’ve taken with regard to scaling it to handle it going into the future. Slides from the talk are available here (PDF). At the end of my talk I gave away several copies of Common OpenStack Deployments which I also took the chance to sign. I’m delighted that one of the copies will be going to the San Diego OpenStack Meetup and another to one right there in Salt Lake City.

Later I attended Christopher Aedo’s “Transforming Organizations with OpenStack” where he walked the audience through hands on training his team did about the OpenStack project’s development process and tooling for IBM teams around the world. The lessons learned from working with these teams and getting them to love open processes once they could explain them in person was inspiring. Tassoula Kokkoris wrote a great summary of the talk here: Collaborative Culture Spotlight: OpenStack Days Mountain West. I rounded off the day by going to David Medberry’s “Private Cloud Cattle and Pet Wrangling” talk where he drew experience from the private cloud at Charter Communications to discuss the move from treating servers like pets to treating them like cattle and how that works in a large organization with departments that have varying needs.

The next day began with a talk by OpenStack veteran, and now VP of Solutions at SUSE, Joseph George. He gave a talk on the state of OpenStack, with a strong message about staying on the path we set forth, which he compared to his own personal transformation to lose a significant amount of weight. In this talk, he outlined three main points that we must keep in mind in order to succeed:

  1. Clarity on the Goal and the Motivation
  2. Staying Focused During the “Middle” of the Journey
  3. Constantly Learning and Adapting

He wrote a more extensive blog post about it here which fleshes out how each of these related to himself and how they map to OpenStack: OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation.

The next talk was a fun one from Lisa-Marie Namphy and Monty Taylor with the theme of being a naughty or nice list for the OpenStack community. They walked through various decisions, aspects of the project, and more to paint a picture of where the successes and pain points of the project are. They did a great job, managing to pull it off with humor, wit, and charm, all while also being actually informative. The morning concluded with a panel titled “OpenStack: Preferred Platform For PaaS Solutions” which had some interesting views. The panelists brought their expertise to the table to discuss what developers seeking to write to a platform wanted, and where OpenStack was weak and strong. It certainly seems to me that OpenStack is strongest as IaaS rather than PaaS, and it makes sense for OpenStack to continue focusing on being what they’ve called an “integration engine” to tie components together rather than focus on writing a PaaS solution directly. There was some talk about this on the panel, where some stressed that they did want to see OpenStack hooking into existing PaaS software offerings.


Great photo of Lisa and Monty by Gary Kevorkian, source

Lunch followed the morning talks, and I haven’t mentioned it, but the food at this event was quite good. In fact, I’d go as far as to say it was some of the best conference-supplied meals I’ve had. Nice job, folks!

Huge thanks to the OpenStack Days Mountain West crew for putting on the event. Lots of great talks and I enjoyed connecting with folks I knew, as well as meeting members of the community who haven’t managed to make it to one of the global events I’ve attended. It’s inspiring to meet with such passionate members of local groups like I found there.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157676117696131

by pleia2 at December 27, 2016 03:02 PM

December 25, 2016

Akkana Peck

Photographing Farolitos (and other night scenes)

Excellent Xmas to all! We're having a white Xmas here.

Dave and I have been discussing how "Merry Christmas" isn't alliterative like "Happy Holidays". We had trouble coming up with a good C or K adjective to go with Christmas, but then we hit on the answer: Have an Excellent Xmas! It also has the advantage of inclusivity: not everyone celebrates the birth of Christ, but Xmas is a secular holiday of lights, family and gifts, open to people of all belief systems.

Meanwhile: I spent a couple of nights recently learning how to photograph Xmas lights and farolitos.

Farolitos, a New Mexico Christmas tradition, are paper bags, weighted down with sand, with a candle inside. Sounds modest, but put a row of them alongside a roadway or along the top of a typical New Mexican adobe or faux-dobe and you have a beautiful display of lights.

They're also known as luminarias in southern New Mexico, but Northern New Mexicans insist that a luminaria is a bonfire, and the little paper bag lanterns should be called farolitos. They're pretty, whatever you call them.

Locally, residents of several streets in Los Alamos and White Rock set out farolitos along their roadsides for a few nights around Christmas, and the county cooperates by turning off streetlights on those streets. The display on Los Pueblos in Los Alamos is a zoo, a slow exhaust-choked parade of cars that reminds me of the Griffith Park light show in LA. But here in White Rock the farolito displays are a lot less crowded, and this year I wanted to try photographing them.

Canon bugs affecting night photography

I have a little past experience with night photography. I went through a brief astrophotography phase in my teens (in the pre-digital phase, so I was using film and occasionally glass plates). But I haven't done much night photography for years.

That's partly because I've had problems taking night shots with my current digital SLR camera, a Rebel XSi (known outside the US as a Canon 450d). It's old and modest as DSLRs go, but I've resisted upgrading since I don't really need more features.

Except maybe when it comes to night photography. I've tried shooting star trails, lightning shots and other nocturnal time exposures, and keep hitting a snag: the camera refuses to take a photo. I'll be in Manual mode, with my aperture and shutter speed set, with the lens in Manual Focus mode with Image Stabilization turned off. Plug in the remote shutter release, push the button ... and nothing happens except a lot of motorized lens whirring noises. Which shouldn't be happening -- in MF and non-IS mode the lens should be just sitting there inert, not whirring its motors. I couldn't seem to find a way to convince it that the MF switch meant that, yes, I wanted to focus manually.

It seemed to be primarily a problem with the EF-S 18-55mm kit lens; the camera will usually condescend to take a night photo with my other two lenses. I wondered if the MF switch might be broken, but then I noticed that in some modes the camera explicitly told me I was in manual focus mode.

I was almost to the point of ordering another lens just for night shots when I finally hit upon the right search terms and found, if not the reason it's happening, at least an excellent workaround.

Back Button Focus

I'm so sad that I went so many years without knowing about Back Button Focus. It's well hidden in the menus, under Custom Functions #10.

Normally, the shutter button does a bunch of things. When you press it halfway, the camera both autofocuses (sadly, even in manual focus mode) and calculates exposure settings.

But there's a custom function that lets you separate the focus and exposure calculations. In the Custom Functions menu option #10 (the number and exact text will be different on different Canon models, but apparently most or all Canon DSLRs have this somewhere), the heading says: Shutter/AE Lock Button. Following that is a list of four obscure-looking options:

  • AF/AE lock
  • AE lock/AF
  • AF/AF lock, no AE lock
  • AE/AF, no AE lock

The text before the slash indicates what the shutter button, pressed halfway, will do in that mode; the text after the slash is what happens when you press the * or AE lock button on the upper right of the camera back (the same button you use to zoom out when reviewing pictures on the LCD screen).

The first option is the default: press the shutter button halfway to activate autofocus; the AE lock button calculates and locks exposure settings.

The second option is the revelation: pressing the shutter button halfway will calculate exposure settings, but does nothing for focus. To focus, press the * or AE button, after which focus will be locked. Pressing the shutter button won't refocus. This mode is called "Back button focus" all over the web, but not in the manual.

Back button focus is useful in all sorts of cases. For instance, if you want to autofocus once then keep the same focus for subsequent shots, it gives you a way of doing that. It also solves my night focus problem: even with the bug (whether it's in the lens or the camera) that the lens tries to autofocus even in manual focus mode, in this mode, pressing the shutter won't trigger that. The camera assumes it's in focus and goes ahead and takes the picture.

Incidentally, the other two modes in that menu apply to AI SERVO mode when you're letting the focus change constantly as it follows a moving subject. The third mode makes the * button lock focus and stop adjusting it; the fourth lets you toggle focus-adjusting on and off.

Live View Focusing

There's one other thing that's crucial for night shots: live view focusing. Since you can't use autofocus in low light, you have to do the focusing yourself. But most DSLRs' focusing screens aren't good enough that you can look through the viewfinder and get a reliable focus on a star or even a string of holiday lights or farolitos.

Instead, press the SET button (the one in the middle of the right/left/up/down buttons) to activate Live View (you may have to enable it in the menus first). The mirror locks up and a preview of what the camera is seeing appears on the LCD. Use the zoom button (the one to the right of that */AE lock button) to zoom in; there are two levels of zoom in addition to the un-zoomed view. You can use the right/left/up/down buttons to control which part of the field the zoomed view will show. Zoom all the way in (two clicks of the + button) to fine-tune your manual focus. Press SET again to exit live view.

It's not as good as a fine-grained focusing screen, but at least it gets you close. Consider using relatively small apertures, like f/8, since it will give you more latitude for focus errors. You'll be doing time exposures on a tripod anyway, so a narrow aperture just means your exposures have to be a little longer than they otherwise would have been.

After all that, my Xmas Eve farolitos photos turned out mediocre. We had a storm blowing in, so a lot of the candles had blown out. (In the photo below you can see how the light string on the left is blurred, because the tree was blowing around so much during the 30-second exposure.) But I had fun, and maybe I'll go out and try again tonight.


An excellent X-mas to you all!

December 25, 2016 07:30 PM

Elizabeth Krumbach

The Temples and Dinosaurs of SLC

A few weeks ago I was in Salt Lake City for my last conference of the year. I was only there for a couple days, but I had some flexibility in my schedule. I was able to see most of the conference and still make time to sneak out to see some sights before my flight home at the conclusion of the conference.

The conference was located right near Temple Square. In spite of a couple flurries here and there, and the accompanying cold, I made time to visit during lunch on the first day of the conference. This square is where the most famous temple of The Church of Jesus Christ of Latter-day Saints resides, the Salt Lake Temple. Since I’d never been to Salt Lake City before, this landmark was the most obvious one to visit, and they had decorated it for Christmas.

While I don’t share their faith, it was worthy of my time. The temple is beautiful, everyone I met was welcoming and friendly, and there is important historical significance to the story of that church.

The really enjoyable time was that evening though. After some time at The Beer Hive I went for a walk with a couple colleagues through the square again, but this time all lit up with the Christmas lights! The lights were everywhere and spectacular.

And I’m sure regardless of the season, the temple itself at night is a sight to behold.

More photos from Temple Square here: https://www.flickr.com/photos/pleia2/albums/72157677633463925

The conference continued the next day and I departed in the afternoon to visit the Natural History Museum of Utah. Utah is a big deal when it comes to fossil hunting in the US, so I was eager to visit their dinosaur fossil exhibit. In addition to a variety of crafted scenes, it also features the “world’s largest display of horned dinosaur skulls” (source).

Unfortunately upon arrival I learned that the museum was without power. They were waving people in, but explained that there was only emergency lighting and some of the sections of the museum were completely closed. I sadly missed out on their very cool looking exhibit on poisons, and it was tricky seeing some of the areas that were open with so little light.

But the dinosaurs.

Have you ever seen dinosaur fossils under just emergency lighting? They were considerably more impactful and scary this way. Big fan.

I really enjoyed some of the shadows cast by their horned dinosaur skulls.

More photos from the museum here: https://www.flickr.com/photos/pleia2/sets/72157673744906273/

There should totally be an event where the fossils are showcased in this way in a planned manner. Alas, since this was unplanned, the staff decided in the late afternoon to close the museum early. This sent me on my way much earlier than I’d hoped. Still, I was glad I got to spend some time with the dinosaurs and hadn’t wasted much time elsewhere in the museum. If I’m ever in Salt Lake City again I would like to go back, though; it was tricky to read the signs in such low light and I would like to have the experience as it was intended. Besides, I’ll rarely pass up the opportunity to see a good dinosaur exhibit. I haven’t been to the Salt Lake City Zoo yet; if it had been warmer I might have considered it – next time!

With that, my trip to Salt Lake City pretty much concluded. I made my way to the airport to head home that evening. This trip rounded almost a full month of being away from home, so I was particularly eager to get home and spend some time with MJ and the kitties.

by pleia2 at December 25, 2016 04:32 PM

December 22, 2016

Akkana Peck

Tips on Developing Python Projects for PyPI

I wrote two recent articles on Python packaging: Distributing Python Packages Part I: Creating a Python Package and Distributing Python Packages Part II: Submitting to PyPI. I was able to get a couple of my programs packaged and submitted.

Ongoing Development and Testing

But then I realized all was not quite right. I could install new releases of my package -- but I couldn't run it from the source directory any more. How could I test changes without needing to rebuild the package for every little change I made?

Fortunately, it turned out to be fairly easy. Set PYTHONPATH to a directory that includes all the modules you normally want to test. For example, inside my bin directory I have a python directory where I can symlink any development modules I might need:

mkdir ~/bin/python
ln -s ~/src/metapho/metapho ~/bin/python/

Then add the directory at the beginning of PYTHONPATH:

export PYTHONPATH=$HOME/bin/python

With that, I could test from the development directory again, without needing to rebuild and install a package every time.
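
To double-check that it's the development copy being picked up rather than an installed release, a quick sketch (using the metapho symlink from the example above):

import metapho

# With PYTHONPATH set as above, this should print a path under ~/bin/python
# (or the ~/src/metapho checkout it links to), not a system site-packages
# directory.
print(metapho.__file__)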

Cleaning up files used in building

Building a package leaves some extra files and directories around, and git status will whine at you since they're not version controlled. Of course, you could gitignore them, but it's better to clean them up after you no longer need them.

To do that, you can add a clean command to setup.py.

import os
from setuptools import Command

class CleanCommand(Command):
    """Custom clean command to tidy up the project root."""
    user_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        os.system('rm -vrf ./build ./dist ./*.pyc ./*.tgz ./*.egg-info ./docs/sphinxdoc/_build')
(Obviously, that includes file types beyond what you need for just cleaning up after package building. Adjust the list as needed.)

Then in the setup() function, add these lines:

      cmdclass={
          'clean': CleanCommand,
      }

Now you can type

python setup.py clean

and it will remove all the extra files.

Keeping version strings in sync

It's so easy to update the __version__ string in your module and forget that you also have to do it in setup.py, or vice versa. Much better to make sure they're always in sync.

I found several versions of that using system("grep..."), but I decided to write my own that doesn't depend on system(). (Yes, I should do the same thing with that CleanCommand, I know.)

def get_version():
    '''Read the pytopo module version from pytopo/__init__.py'''
    with open("pytopo/__init__.py") as fp:
        for line in fp:
            line = line.strip()
            if line.startswith("__version__"):
                parts = line.split("=")
                if len(parts) > 1:
                    # Strip whitespace and any surrounding quotes
                    return parts[1].strip().strip('"\'')

Then in setup():

      version=get_version(),

Much better! Now you only have to update __version__ inside your module and setup.py will automatically use it.
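
For reference, the module side is just the usual attribute (a hypothetical sketch; the real pytopo/__init__.py of course contains more than this):

# pytopo/__init__.py
__version__ = "1.0"    # the single place the version number lives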

Using your README for a package long description

setup() takes a long_description argument for the package, but you probably already have some sort of README in your package. You can use it as your long description this way:

# Utility function to read the README file.
# Used for the long_description.
def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname)).read()

Then in setup():

      long_description=read('README'),
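
Putting these pieces together, a setup() call using the helpers from this post might look roughly like this (a sketch with placeholder metadata, not the actual pytopo setup.py):

from setuptools import setup

setup(
    name="pytopo",                      # placeholder metadata; fill in your own
    version=get_version(),              # single source of truth for the version
    description="Example package",
    long_description=read('README'),
    packages=["pytopo"],
    cmdclass={
        'clean': CleanCommand,          # the custom clean command from above
    },
)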

December 22, 2016 05:15 PM

Jono Bacon

Recommendations Requested: Building a Smart Home

Early next year Erica, the scamp, and I are likely to be moving house. As part of the move we would both love to turn this house into a smart home.

Now, when I say “smart home”, I don’t mean this:

Dogogram. It is the future.

We don’t need any holographic dogs. We are however interested in having cameras, lights, audio, screens, and other elements in the house connected and controlled in different ways. I really like the idea of the house being naturally responsive to us in different scenarios.

In other houses I have seen people with custom lighting patterns (e.g. work, party, romantic dinner), sensors on gates that trigger alarms/notifications, audio that follows you around the house, notifications on visible screens, and other features.

Obviously we will want all of this to be (a) secure, (b) reliable, and (c) simple to use. While we want a smart home, I don’t particularly want to have to learn a million details to set it up.

Can you help?

So, this is what we would like to explore.

Now, I would love to ask you folks two questions:

  1. What kind of smart-home functionality and features have you implemented in your house (in other words, what neat things can you do?)
  2. What hardware and software do you recommend for rigging a home up as a smart home? I would ideally like to keep re-wiring to a minimum. Assume I have nothing already, so recommendations for cameras, light-switches, hubs, and anything else are much appreciated.

If you have something you would like to share, please plonk it into the comment box below. Thanks!

The post Recommendations Requested: Building a Smart Home appeared first on Jono Bacon.

by Jono Bacon at December 22, 2016 04:00 PM

December 19, 2016

Jono Bacon

Building Better Teams With Asynchronous Workflow

One of the core principles of open source and innersource communities is asynchronous workflow. That is, participants/employees should be able to collaborate together with ubiquitous access, from anywhere, at any time.

As a practical example, at a previous company I worked at, pretty much everything lived in GitHub. Not just the code for the various products, but also material and discussions from the legal, sales, HR, business development, and other teams.

This offered a number of benefits for both employees and the company:

  • History – all projects, discussions, and collaboration was recorded. This provided a wealth of material for understanding prior decisions, work, and relationships.
  • Transparency – transparency is something most employees welcome and this was the case here where all employees felt a sense of connection to work across the company.
  • Communication – with everyone using the same platform it meant that it was easier for people to communicate clearly and consistently and to see the full scope of a discussion/project when pulled in.
  • Accountability – sunlight is the best disinfectant and having all projects, discussions, and work items/commitments, available in the platform ensured people were accountable in both their work and commitments.
  • Collaboration – this platform made it easier for people to not just collaborate (e.g. issues and pull requests) but also to bring in other employees by referencing their username (e.g. @jonobacon).
  • Reduced Silos – the above factors reduced the silos in the company and resulted in wider cross-team collaboration.
  • Untethered Working – because everything was online and not buried in private meetings and notes, this meant employees could be productive at home, on the road, or outside of office hours (often when riddled with jetlag at 3am!)
  • Internationally Minded – this also made it easier to work with an international audience, crossing different timezones and geographical regions.

While asynchronous workflow is not perfect, it offers clear benefits for a company and is a core component for integrating open source methodology and workflows (also known as innersource) into an organization.

Asynchronous workflow is a common area in which I work with companies. As such, I thought I would write up some lessons learned that may be helpful for you folks.

Designing Asynchronous Workflow

Many of you reading this will likely want to bring in the above benefits to your own organization too. You likely have an existing workflow which will be a mixture of (a) in-person meetings, (b) remote conference/video calls, (c) various platforms for tracking tasks, and (d) various collaboration and communication tools.

As with any organizational change and management, culture lies at the core. Putting platforms in place is the easy bit: adapting those platforms to the needs, desires, and uncertainties that live in people is where the hard work lies.

In designing asynchronous workflow you will need to make the transition from your existing culture and workflow to a new way of working. Ultimately this is about designing workflow that generates behaviors we want to see (e.g. collaboration, open discussion, efficient working) and behaviors we want to deter (e.g. silos, land-grabbing, power-plays etc).

Influencing these behaviors will include platforms, processes, relationships, and more. You will need to take a gradual, thoughtful, and transparent approach in designing how these different pieces fit together and how you make the change in a way that teams are engaged in.

I recommend you manage this in the following way (in order):

  1. Survey the current culture – first, you need to understand your current environment. How technically savvy are your employees? How dependent on meetings are they? What are the natural connections between teams, and where are the divisions? With a mixture of (a) employee surveys, and (b) observational and quantitative data, summarize these dynamics into lists of “Behaviors to Improve” and “Behaviors to Preserve”. These lists will give us a sense of how we want to build a workflow that is mindful of these behaviors and adjusts them where we see fit.
  2. Design an asynchronous environment – based on this research, put together a proposed plan for some changes you want to make to be more asynchronous. This should cover platform choices, changes to processes/policies, and roll-out plan. Divide this plan up in priority order for which pieces you want to deliver in which order.
  3. Get buy-in – next we need to build buy-in in senior management, team leads, and with employees. Ideally this process should be as open as possible with a final call for input from the wider employee-base. This is a key part of making teams feel part of the process.
  4. Roll out in phases – now, based on your defined priorities in the design, gradually roll out the plan. As you do so, provide regular updates on this work across the company (you should include metrics of the value this work is driving in these updates).
  5. Regularly survey users – at regular check-points survey the users of the different systems you put in place. Give them express permission to be critical – we want this criticism to help us refine and make changes to the plan.

Of course, this is a simplification of the work that needs to happen, but it covers the key markers that need to be in place.

Asynchronous Principles

The specific choices in your own asynchronous workflow plan will be very specific to your organization. Every org is different, has different drivers, people, and focus, so it is impossible to make a generalized set of strategic, platform, and process recommendations. Of course, if you want to discuss your organization’s needs specifically, feel free to get in touch.

For the purposes of this piece though, and to serve as many of you as possible, I want to share the core asynchronous principles you should consider when designing your asynchronous workflow. These principles are pretty consistent across most organizations I have seen.

Be Explicitly Permissive

A fundamental principle of asynchronous working (and more broadly in innersource) is that employees have explicit permission to (a) contribute across different projects/teams, (b) explore new ideas and creative solutions to problems, and (c) challenge existing norms and strategy.

Now, this doesn’t mean it is a free for all. Employees will have work assigned to them and milestones to accomplish, but being permissive about the above areas will crisply define the behavior the organization wants to see in employees.

In some organizations the senior management team spews forth said permission and expects it to stick. While this top-down permission and validation is key, it is also critical that team leads, middle managers, and others support this permissive principle in day-to-day work.

People change and cultures develop by others delivering behavioral patterns that become accepted in the current social structure. Thus, you need to encourage people to work across projects, explore new ideas, and challenge the norm, and validate that behavior publicly when it occurs. This is how we make culture stick.

Default to Open Access

Where possible, teams and users should default to open visibility for projects, communication, issues, and other material. Achieving this requires not just default access controls to be open, but also setting the cultural and organization expectation that material should be open for all employees.

Of course, you should trust your employees to use their judgement too. Some efforts will require private discussions and work (e.g. security issues). Also, some discussions may need to be confidential (e.g. HR). So, default to open, but be mindful of the exceptions.

Platforms Need to be Accessible, Rich, and Searchable

There are myriad platforms for asynchronous working. GitHub, GitLab, Slack, Mattermost, Asana, Phabricator, to name just a few.

When evaluating platforms it is key to ensure that they can be made (a) securely accessible from anywhere (e.g. desktop/mobile support, available outside the office), (b) provide a rich and efficient environment for collaboration (e.g. rich discussions with images/videos/links, project management, simple code collaboration and review), (c) and any material is easily searchable (finding previous projects/discussions to learn from them, or finding new issues to focus on).

Always Maintain History and Never Delete, but Archive

You should maintain history in everything you do. This should include discussions, work/issue tracking, code (revision control), releases, and more.

On a related note, you should never, ever permanently delete material. Instead, that material should be archived. As an example, if you file an issue for a bug or problem that is no longer pertinent, archive the issue so it doesn’t come up in popular searches, but still make it accessible.

Consolidate Identity and Authentication

Having a single identity for each employee on asynchronous infrastructure is important. We want to make it easy for people to reference individual employees, so a unique username/handle is key here. This is not just important technically, but also for building relationships – that username/handle will be a part of how people collaborate, build their reputations, and communicate.

A complex challenge with deploying asynchronous infrastructure is identity and authentication. You may be running multiple platforms, each with its own accounts and authentication providers.

Where possible, invest in Single Sign-On and consolidated authentication. While it requires a heavier up-front lift, consolidating multiple accounts further down the line is a nightmare you want to avoid.

Validate, Incentivize, and Reward

Human beings need validation. We need to know we are on the right track, particularly when joining new teams and projects. As such, you need to ensure people can easily validate each other (e.g. likes and +1s, simple peer review processes) and encourage a culture of appreciation and thanking others (e.g. manager and leaders setting an example to always thank people for contributions).

Likewise, people often respond well to being incentivized and often enjoy the rewards of that work. Be sure to identify what a good contribution looks like (e.g. in software development, a merged pull request) and incentivize and reward great work via both baked-in features and specific campaigns.

Be Mindful of Uncertainty, so Train, Onboard, and Support

Moving to a more asynchronous way of working will cause uncertainty in some. Not only are people often reluctant to change, but operating in a very open and transparent manner can make people squeamish about looking stupid in front of their colleagues.

So, be sure to provide extensive training as part of the transition, onboard new staff members, and provide a helpdesk where people can always get help and their questions answered.


Of course, I am merely scratching the surface of how we build asynchronous workflow, but hopefully this will get you started and generate some ideas and thoughts about how you bring this to your organization.

As always, feel free to get in touch if you want to discuss your organization’s needs in more detail. I would also love to hear additional ideas and approaches in the comments!

The post Building Better Teams With Asynchronous Workflow appeared first on Jono Bacon.

by Jono Bacon at December 19, 2016 03:54 PM

December 17, 2016

Akkana Peck

Distributing Python Packages Part II: Submitting to PyPI

In Part I, I discussed writing a setup.py to make a package you can submit to PyPI. Today I'll talk about better ways of testing the package, and how to submit it so other people can install it.

Testing in a VirtualEnv

You've verified that your package installs. But you still need to test it and make sure it works in a clean environment, without all your developer settings.

The best way to test is to set up a "virtual environment", where you can install your test packages without messing up your regular runtime environment. I shied away from virtualenvs for a long time, but they're actually very easy to set up:

virtualenv venv
source venv/bin/activate

That creates a directory named venv under the current directory, which it will use to install packages. Then you can pip install packagename or pip install /path/to/packagename-version.tar.gz

Except -- hold on! Nothing in Python packaging is that easy. It turns out there are a lot of packages that won't install inside a virtualenv, and one of them is PyGTK, the library I use for my user interfaces. Attempting to install pygtk inside a venv gets:

********************************************************************
* Building PyGTK using distutils is only supported on windows. *
* To build PyGTK in a supported way, read the INSTALL file.    *
********************************************************************

Windows only? Seriously? PyGTK works fine on both Linux and Mac; it's packaged on every Linux distribution, and on Mac it's packaged with GIMP. But for some reason, whoever maintains the PyPI PyGTK packages hasn't bothered to make it work on anything but Windows, and PyGTK seems to be mostly an orphaned project so that's not likely to change.

(There's a package called ruamel.venvgtk that's supposed to work around this, but it didn't make any difference for me.)

The solution is to let the virtualenv use your system-installed packages, so it can find GTK and other non-PyPI packages there:

virtualenv --system-site-packages venv
source venv/bin/activate

I also found that if I had a ~/.local directory (where packages normally go if I use pip install --user packagename), sometimes pip would install to ~/.local instead of the venv. I never did track down why this happened sometimes and not others, but when it happened, a temporary mv ~/.local ~/old.local fixed it.

Test your Python package in the venv until everything works. When you're finished with your venv, you can run deactivate and then remove it with rm -rf venv.

Tag it on GitHub

Is your project ready to publish?

If your project is hosted on GitHub, you can have pypi download it automatically. In your setup.py, set

download_url='https://github.com/user/package/tarball/tagname',

Check that in. Then make a tag and push it:

git tag 0.1 -m "Name for this tag"
git push --tags origin master

Try to make your tag match the version you've set in setup.py and in your module.
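To tie the pieces together, here is a minimal setup.py sketch for this step; the package name, author details, and GitHub URLs are placeholders rather than anything from Part I:

from setuptools import setup

setup(
    name='yourpackage',            # hypothetical package name
    version='0.1',                 # keep in sync with your git tag and your module
    description='One-line description of your package',
    author='Your Name',
    author_email='you@example.com',
    url='https://github.com/user/package',
    download_url='https://github.com/user/package/tarball/0.1',
    packages=['yourpackage'],      # directory containing your module code
)

If the tag, the version here, and the version in your module all match, download_url points at exactly the code you tagged.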

Push it to pypitest

Register a new account and password on both pypitest and pypi.

Then create a ~/.pypirc that looks like this:

[distutils]
index-servers =
  pypi
  pypitest

[pypi]
repository=https://pypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

[pypitest]
repository=https://testpypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

Yes, those passwords are in cleartext. Incredibly, there doesn't seem to be a way to store an encrypted password or even have it prompt you. There are tons of complaints about that all over the web but nobody seems to have a solution. You can specify a password on the command line, but that's not much better. So use a password you don't use anywhere else and don't mind too much if someone guesses.

Update: Apparently there's a newer method called twine that solves the password encryption problem. Read about it here: Uploading your project to PyPI. You should probably use twine instead of the setup.py commands discussed in the next paragraph.
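If you do switch to twine, the upload step looks roughly like this (a sketch, assuming the ~/.pypirc sections above and a source distribution built with setup.py):

python setup.py sdist
twine upload -r pypitest dist/*

twine reads the repository and credentials from ~/.pypirc and will prompt for anything missing.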

Now register your project and upload it:

python setup.py register -r pypitest
python setup.py sdist upload -r pypitest

Wait a few minutes: it takes pypitest a little while before new packages become available. Then go to your venv (to be safe, maybe delete the old venv and create a new one, or at least pip uninstall) and try installing:

pip install -i https://testpypi.python.org/pypi YourPackageName

If you get "No matching distribution found for packagename", wait a few minutes then try again.

If it all works, then you're ready to submit to the real pypi:

python setup.py register -r pypi
python setup.py sdist upload -r pypi

Congratulations! If you've gone through all these steps, you've uploaded a package to pypi. Pat yourself on the back and go tell everybody they can pip install your package.

Some useful reading

Some pages I found useful:

A great tutorial except that it forgets to mention signing up for an account: Python Packaging with GitHub

Another good tutorial: First time with PyPI

Allowed PyPI classifiers -- the categories your project fits into. Unfortunately there aren't very many of those, so you'll probably be stuck with 'Topic :: Utilities' and not much else.

Python Packages and You: not a tutorial, but a lot of good advice on style and designing good packages.

December 17, 2016 11:19 PM

December 12, 2016

Eric Hammond

How Much Does It Cost To Run A Serverless API on AWS?

Serving 2.1 million API requests for $11

Folks tend to be curious about how much real projects cost to run on AWS, so here’s a real example with breakdowns by AWS service and feature.

This article walks through the AWS invoice for charges accrued in November 2016 by the TimerCheck.io API service which runs in the us-east-1 (Northern Virginia) region and uses the following AWS services:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • Route 53
  • CloudFront
  • SNS (Simple Notification Service)
  • CloudWatch Logs
  • CloudWatch Metrics
  • CloudTrail
  • S3
  • Network data transfer
  • CloudWatch Alarms

During this month, the TimerCheck.io service processed over 2 million API requests. Every request ran an AWS Lambda function that read from and/or wrote to a DynamoDB table.

This AWS account is older than 12 months, so any first year free tier specials are no longer applicable.

Total Cost Overview

At the very top of the AWS invoice, we can see that the total AWS charges for the month of November add up to $11.12. This is the total bill for processing the 2.1 million API requests and all of the infrastructure necessary to support them.

Invoice: Summary

Service Overview

The next part of the invoice lists the top level services and charges for each. You can see that two thirds of the month’s cost was in API Gateway at $7.47, with a few other services coming together to make up the other third.

Invoice: Service Overview

API Gateway

Current API Gateway pricing is $3.50 per million requests, plus data transfer. As you can see from the breakdown below, the requests are the bulk of the expense at $7.41. The responses from TimerCheck.io probably average in the hundreds of bytes, so there’s only $0.06 in data transfer cost.
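As a quick back-of-the-envelope check (a sketch, not the exact billing math), the request charge is consistent with roughly 2.1 million billable requests:

price_per_million = 3.50       # API Gateway price per million requests
request_charge = 7.41          # request line item from the invoice
print(request_charge / price_per_million)   # ~2.12 (million requests)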

You currently get a million requests at no charge for the first 12 months, which was not applicable to this invoice, but does end up making API Gateway free for the development of many small projects.

Invoice: API Gateway

CloudTrail

I don’t remember enabling CloudTrail on this account, but at some point I must have done the right thing, as this is something that should be active for every AWS account. There were almost 400,000 events recorded by CloudTrail, but since the first trail is free, there is no charge listed here.

Note that there are some charges associated with the storage of the CloudTrail event logs in S3. See below.

Invoice: CloudTrail

CloudWatch

The CloudWatch costs for this service come from logs being sent to CloudWatch Logs and the storage of those logs. These logs are being generated by AWS Lambda function execution and by API Gateway execution, so you can consider them as additional costs of running those services. You can control some of the logs generated by your AWS Lambda function, so a portion of these costs are under your control.

There are also charges for CloudWatch Alarms, but for some reason, those are listed under EC2 (below) instead of here under CloudWatch.

Invoice: CloudWatch

Data Transfer

Data transfer costs can be complex as they depend on where the data is coming from and going to. Fortunately for TimerCheck.io, there is very little network traffic and most of it falls into the free tiers. What little we are being charged for here amounts to a measly $0.04 for 4 GB of data transferred between availability zones. I presume this is related to AWS services talking to each other (e.g., AWS Lambda to DynamoDB) because there are no EC2 instances in this stack.

Note that this is not the entirety of the data transfer charges, as some other services break out their own network costs.

Invoice: Data Transfer

DynamoDB

The DynamoDB pricing includes a permanent free tier of up to 25 write capacity units and 25 read capacity units. The TimerCheck.io service has a single DynamoDB table that is set to a capacity of 25 write and 25 read units, so there are no charges for capacity.

The TimerCheck.io DynamoDB database size falls well under the 25 GB free tier, so that has no charge either.

Invoice: DynamoDB

Elastic Compute Cloud

The TimerCheck.io service does not use EC2 and yet there is a section in the invoice for EC2. For some reason this section lists the CloudWatch Alarm charges.

Each CloudWatch Alarm costs $0.10 per month and this account has eight for a total of $0.80/month. But, for some reason, I was only billed $0.73. *shrug*

Invoice: EC2

This AWS account has four AWS billing alarms that will email me whenever the cumulative charges for the month pass $10, $20, $30, and $40.

There is one CloudWatch alarm that emails me if the AWS Lambda function invocations are being throttled (more than 100 concurrent functions being executed).

There are two CloudWatch alarms that email me if the consumed read and write capacity units are trending so high that I should look at increasing the capacity settings of the DynamoDB table. We are nowhere near that at current usage volume.

Yes, that leaves one CloudWatch alarm, which was a duplicate. I have since removed it.

AWS Lambda

Since most of the development of the TimerCheck.io API service focuses on writing the 60 lines of code for the AWS Lambda function, this is where I was expecting the bulk of the charges to be. However, the AWS Lambda costs only amount to $0.22 for the month.

There were 2.1 million AWS Lambda function invocations, one per external consumer API request, same as the API Gateway. The first million AWS Lambda function calls are free. The rest are charged at $0.20 per million.

The permanent free tier also includes 400,000 GB-seconds of compute time per month. At an average of 0.15 GB-seconds per function call, we stayed within the free tier at a total of 320,000 GB-seconds.
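Roughly sketched (approximate numbers, not the exact billing calculation), that works out as:

invocations = 2100000               # about 2.1 million calls this month
free_invocations = 1000000          # permanent free tier
price_per_million = 0.20

billable = invocations - free_invocations
print(billable / 1e6 * price_per_million)   # ~0.22 dollars

# At the 1.5 GB memory setting mentioned below, billed in 100 ms increments,
# a short invocation costs at least 1.5 * 0.1 = 0.15 GB-seconds.
print(invocations * 0.15)           # ~315,000 GB-seconds, under the 400,000 free tier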

Invoice: AWS Lambda

I have the AWS Lambda function configuration cranked up to the max 1536 MB of memory so that it will run as fast as possible. Since the charges are rounded up in units of 100ms, we could probably save GB-seconds by scaling down the memory once we exceed the free tier. Most of the time is probably spent in DynamoDB calls anyway, so this should not affect API performance much.

Route 53

Route 53 charges $0.50 per hosted zone (domain). I have two domains hosted in Route 53, the expected timercheck.io plus the extra timercheck.com. The timercheck.com domain was supposed to redirect to timercheck.io, but I apparently haven’t gotten around to tossing in that feature yet. These two hosted zones account for $1 in charges.

There were 1.1 million DNS queries to timercheck.io and www.timercheck.io, but since those resolve to aliases for the API Gateway, there is no charge.

The other $0.09 comes from the 226,000 DNS queries to random timercheck.io and timercheck.com hostnames. These would include status.timercheck.io, which is a page displaying the uptime of TimerCheck.io as reported by StatusCake.
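Sketched out (assuming the standard-query price of $0.40 per million; alias queries that resolve to AWS resources are free, as noted above):

hosted_zone_charge = 2 * 0.50              # timercheck.io + timercheck.com
query_charge = 226000 / 1e6 * 0.40         # non-alias DNS queries
print(hosted_zone_charge + query_charge)   # ~1.09 dollars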

Invoice: Route 53

Simple Notification Service

During the month of November, there was one post to an SNS topic and one email delivery from an SNS topic. These were both for the CloudWatch alert notifying me that the charges on the account had exceeded $10 for the month. There were no charges for this.

Invoice: SNS

Simple Storage Service

The S3 costs in this account are entirely for storing the CloudTrail events. There were 222 GET requests ($0) and 13,000 requests of other types ($0.07). There was no charge for the 0.064 GB-Mo of actual data stored. Has Amazon started rounding fractional pennies down instead of up in some services?

Invoice: S3

External Costs

The domains timercheck.io and timercheck.com are registered through other registrars. Those cost about $33 and $9 per year, respectively.

The SSL/TLS certificate for https support costs around $10-15 per year, though this should drop to zero once CloudFront distributions created with API Gateway support certificates with ACM (AWS Certificate Manager) #awswishlist

Not directly obvious from the above is the fact that I have spent no time or money maintaining the TimerCheck.io API service post-launch. It’s been running for 1.5 years and I haven’t had to upgrade software, apply security patches, replace failing hardware, recover from disasters, or scale up and down with demand. By using AWS services like API Gateway, AWS Lambda, and DynamoDB, Amazon takes care of everything.

Notes

Your Mileage May Vary.

For entertainment use only.

This is just one example from one month for one service architected one way. Your service running on AWS will not cost the same.

Though 2 million TimerCheck.io API requests in November cost about $11, this does not mean that an additional million would cost another $5.50. Some services would cost significantly more and some would cost about the same, probably averaging out to significantly more.

If you are reading this after November 2016, then the prices for these AWS services have certainly changed and you should not use any of the above numbers in making decisions about running on AWS.

Conclusion

Amazon, please lower the cost of the API Gateway; or provide a simpler, cheaper service that can trigger AWS Lambda functions with https endpoints. Thank you!

Original article and comments: https://alestic.com/2016/12/aws-invoice-example/

December 12, 2016 10:00 AM

Elizabeth Krumbach

Trains in NYC

I’ve wanted to visit the New York Transit Museum ever since I discovered it existed. Housed in the retired Court station in Brooklyn, even the museum venue had “transit geek heaven” written all over it. I figured I’d visit it some day when work brought me to the city, but then I learned about the 15th Annual Holiday Train Show at their Annex and Store at Grand Central going on now. I’d love to see that! I ended up going up to NYC from Philadelphia with my friend David last Sunday morning and made a day of it. Even better, we parked in New Jersey so we had a full-on transit experience from there into Manhattan and Brooklyn and back as the day progressed.

Our first stop was Grand Central Station via the 5 subway line. Somehow I’d never been there before. Enjoy the obligatory station selfie.

From there it was straight down to the Annex and Store run by the transit museum. The holiday exhibit had glittering signs hanging from the ceiling of everything from buses to transit cards to subway cars and snowflakes. The big draw though was the massive o-gauge model train setup, as the site explains:

This year’s Holiday Train Show display will feature a 34-foot-long “O gauge” model train layout with Lionel’s model Metro-North, New York Central, and vintage subway trains running on eight separate loops of track, against a backdrop featuring graphics celebrating the Museum’s 40th anniversary by artist Julia Rothman.

It was quite busy there, but folks were very clearly enjoying it. I’m really glad I went, even if the whole thing made me pine for my future home office train set all the more. Some day! It’s also worth noting that this shop is the one to visit transit-wise. The museum in Brooklyn also had a gift shop, but it was smaller and had fewer things, so I highly recommend picking things up here; I ended up going back after the transit museum to get something I wanted.

We then hopped on the 4 subway line into Brooklyn to visit the actual transit museum. As advertised, it’s in a retired subway station, so the entrance looks like any other subway entrance and you take stairs underground. You enter and buy your ticket and then are free to explore both levels of the museum. The first level had several rotating exhibits, including one about Coney Island and another providing a history of crises in New York City (including 9/11 and Hurricane Sandy) and how the transit system and operators responded to them. They also had displays of a variety of turnstiles throughout the years, and exhibits talking about street car (trolley) lines and the introduction of the bus systems.

The exhibits were great, but it was downstairs that things got really fun. They have functioning rails where the subway trains used to run, and there they’ve lined up over a dozen cars from throughout NYC transit history for visitors to explore, inside and out.

The evolution of seat designs and configurations was interesting to witness and feel, as you could sit on the seats to get the full experience. Each car also had an information sign next to it, so you could learn about the era and the place of that car in it. Transitions from wood to metal and paired (and ..tripled?) car sets were showcased, along with a bunch of standalone interchangeables. I also enjoyed seeing a caboose, though I didn’t quite recognize it at first (“is this for someone to live in?”).

A late lunch was due following the transit museum. We ended up at Sottocasa Pizzeria right there in Brooklyn. It got great reviews and I enjoyed it a lot, but it was definitely on the fancy pizza side. They also had a selection of Italian beers, of which I chose the delicious Nora by Birra Baladin. Don’t worry, next time I’m in New York I’ll go to a great, not fancy, pizza place.

It was then back to Manhattan to spend a bit more time at Grand Central and for an evening walk through the city. We started by going up 5th Avenue to see Rockefeller Square at night during the holidays. I hadn’t been to Manhattan since 2013, when I went with my friend Danita, and I’d never seen the square all decked out for the holidays. I didn’t quite think it through, though: it’s probably the busiest time of the year there, so the whole neighborhood was insanely crowded for blocks. After seeing the skating rink and tree, we escaped northwest and made our way through the crowds up to Central Park. It was cold, but all the walking was fun even with the crowds. For dinner we ended up at Jackson Hole for some delicious burgers. I went with the Guacamole Burger.

The trip back to north Jersey took us through the brand new World Trade Center Transportation Hub to take the PATH. It’s a very unusual space. It’s all bright white with tons of marble shaped in a modern look, and has a shopping mall with a surreal amount of open space. The trip back on the PATH that night was as smooth as expected. In all, a very enjoyable day of public transit stuff!

More photos from Grand Central Station and the Transit Museum here: https://www.flickr.com/photos/pleia2/albums/72157677457519215

Epilogue: I received incredibly bad news the day after this visit to NYC. It cast a shadow over it for me. I went back and forth about whether I should write about this visit at all and how I should present it if I did. I decided to present it as it was that day. It was a great day of visiting the city and geeking out over trains, enjoyed with a close friend, and detached from whatever happened later. I only wish I could convince my mind to do the same.

by pleia2 at December 12, 2016 01:29 AM

UbuCon EU 2016

Last month I had the opportunity to travel to Essen, Germany to attend UbuCon EU 2016. Europe has had UbuCons before, but the goal of this one was to make it a truly international event, bringing in speakers like me from all corners of the Ubuntu community to share our experiences with the European Ubuntu community. Getting to catch up with a bunch of my Ubuntu colleagues who I knew would be there and visiting Germany as the holiday season began were also quite compelling reasons for me to attend.

The event formally kicked off Saturday morning with a welcome and introduction by Sujeevan Vijayakumaran, who reported that 170 people had registered for the event and shared other statistics about the number of countries attendees came from. He also introduced a member of the UbPorts team, Marius Gripsgård, who announced the USB docking station for Ubuntu Touch devices they were developing; more information is in this article on their website: The StationDock.

Following these introductions and announcements, we were joined by Canonical CEO Jane Silber who provided a tour of the Ubuntu ecosystem today. She highlighted the variety of industries where Ubuntu was key, progress with Ubuntu on desktops/laptops, tablets, and phones, and the venture into the smart Internet of Things (IoT) space. Her focus was on the amount of innovation we’re seeing in the Ubuntu community and from Canonical, and she talked about specifics regarding security, updates, the success in the cloud, and where Ubuntu Core fits into the future of computing.

I also loved that she talked about the Ubuntu community: the strength of local meetups and events, the free support community that spans a variety of resources, and the ongoing work by the various Ubuntu flavors. She also spoke to the passion of Ubuntu contributors, citing comics and artwork that community members have made, including the stunning series of release animal artwork by Sylvia Ritter from right there in Germany; see it here: Ubuntu Animals. I was also super thrilled that she mentioned the Ubuntu Weekly Newsletter as a valuable resource for keeping up with the community; a very small group of folks works very hard on it and that kind of validation is key to sustaining motivation.

The next talk I attended was by Fernando Lanero Barbero on Linux is education, Linux is science. Ubuntu to free educational environments. Fernando works at a school district in Spain where he has deployed Ubuntu across hundreds of computers, reaching over 1200 students in the three years he’s been doing the work. The talk outlined the strengths of the approach, explaining that there were cost savings for his school and also how Ubuntu and open source software are more in line with the school’s values. One of the key takeaways from his experience was one that I know a lot about from our own Linux in schools experiences here in the US at Partimus: focus on the people, not the technologies. We’re technologists who love Linux and want to promote it, but without engagement, understanding and buy-in from teachers, deployments won’t be successful. A lot of time needs to be spent making assessments of their needs, doing roll-outs slowly and methodically so that the change doesn’t happen too abruptly and leave them in the lurch. He also stressed the importance of consistency with the deployments. Don’t get super creative across machines: use the same flavor for everything, even the same icon set. Not everyone is as comfortable with variation as we are, and you want to make the transition as easy as possible across all the systems.

Laura Fautley (Czajkowski) spoke at the next talk I went to, on Supporting Inclusion & Involvement in a Remote Distributed Team. The Ubuntu community itself is distributed across the globe, so drawing on her experience there and later at several jobs where she’s had to help manage communities, she had a great list of recommendations for building out such a team. She talked about being sensitive to time zones, and acknowledged that decisions are sometimes made in social situations, so you need to somehow document and share those decisions with the broader community. She was also eager to highlight how you need to acknowledge and promote the achievements of your team, both within the team and to the broader organization and project, to make sure everyone feels valued and so that everyone knows the great work you’re doing. Finally, it was interesting to hear some thoughts about remote member onboarding, stressing the need to have a process so that new contributors and teammates can quickly get up to speed and feel included from the beginning.

I went to a few other talks throughout the two day event, but one of the big reasons for me attending was to meet up with some of my long-time friends in the Ubuntu community and finally meet some other folks face to face. We’ve had a number of new contributors join us since we stopped doing Ubuntu Developer Summits and today UbuCons are the only Ubuntu-specific events where we have an opportunity to meet up.


Laura Fautley, Elizabeth K. Joseph, Alan Pope, Michael Hall

Of course I was also there to give a pair of talks. I first spoke on Contributing to Ubuntu on Desktops (slides) which is a complete refresh of a talk I gave a couple of times back in 2014. The point of that talk was to pull people back from the hype-driven focus on phones and clouds for a bit and highlight some of the older projects that still need contributions. I also spoke on Building a career with Ubuntu and FOSS (slides) which was definitely the more popular talk. I’ve given a similar talk for a couple UbuCons in the past, but this one had the benefit of being given while I’m between jobs. This most recent job search as I sought out a new role working directly with open source again gave a new dimension to the talk, and also made for an amusing intro, “I don’t have a job at this very moment …but without a doubt I will soon!” And in fact, I do have something lined up now.


Thanks to Tiago Carrondo for taking this picture during my talk! (source)

The venue for the conference was a kind of artists’ space, which made it a bit quirky, but I think it worked out well. We had a couple of social gatherings there at the venue, and buffet lunches were included in our tickets, which meant we didn’t need to go far or wait on food elsewhere.

I didn’t have a whole lot of time for sight-seeing this trip because I had a lot going on stateside (like having just bought a house!) but I did get to enjoy the beautiful Christmas Market in Essen a few nights while I was there.

For those of you not familiar with German Christmas Markets (I wasn’t), they close roads downtown and pop up streets of wooden shacks that sell everything from Christmas ornaments and cookies to hot drinks, beers and various hot foods. The first night I was in town we met up with several fellow conference-goers and got some fries with mayonnaise, grilled mushrooms with Bearnaise sauce, my first taste of German Glühwein (mulled wine), and hot chocolate. The next night was a quick walk through the market that landed us at a steakhouse, where we had a late dinner and a couple of beers.

The final night we didn’t stay out late, but did get some much anticipated Spanish churros, which inexplicably had sugar rather than the cinnamon I’m used to, as well as a couple more servings of Glühwein, this time in commemorative Christmas mugs shaped like boots!


Clockwise from top left: José Antonio Rey, Philip Ballew, Michael Hall, John and Laura Fautley, Elizabeth K. Joseph

The next morning I was up bright and early to catch a 6:45AM train that started me on my three train journey back to Amsterdam to fly back to Philadelphia.

It was a great little conference and a lot of fun. Huge thanks to Sujeevan for being so incredibly welcoming to all of us, and thanks to all the volunteers who worked for months to make the event happen. Also thanks to Ubuntu community members who donate to the community fund since I would have otherwise had to self-fund to attend.

More photos from the event (and the Christmas Market!) here: https://www.flickr.com/photos/pleia2/albums/72157676958738915

by pleia2 at December 12, 2016 12:03 AM