Planet Ubuntu California

February 25, 2017

Elizabeth Krumbach

Moving on from the Ubuntu Weekly Newsletter

Somewhere around 2010 I started getting involved with the Ubuntu News Team. My early involvement was limited to the Ubuntu Fridge team, where I would help post announcements from various teams, including the Ubuntu Community Council that I was part of. With Amber Graner at the helm of the Ubuntu Weekly Newsletter (UWN) I focused my energy elsewhere since I knew how much work the UWN editor position was at the time.

Ubuntu Weekly Newsletter

At the end of 2010 Amber stepped down from the team to pursue other interests, and with no one to fill her giant shoes the team entered a five month period of no newsletters. Finally in June, after being contacted numerous times about the fate of the newsletter, I worked with Nathan Handler to revive it so we could release issue 220. Our first job was to do an analysis of the newsletter as a whole. What was valuable about the newsletter and what could we do away with to save time? What could we automate? We decided to make some changes to reduce the amount of manual work put into it.

To this end, we stopped including monthly reports inline and began linking to upcoming meeting and event details rather than reproducing them in the newsletter itself. There was also a considerable amount of automation done thanks to Nathan’s work on scripts. No more would we be generating any of the release formats by hand; they’d all be generated with a single command, ready to be cut and pasted. Release time every week went from over two hours to about 20 minutes in the hands of an experienced editor. Our next editor would have considerably less work than those who came before them. From then on I’d say I’ve been pretty heavily involved.

500

The 500th issue lands on February 27th, an exceptional milestone for the team and the Ubuntu community. It is deserving of celebration, and we’ve worked behind the scenes to arrange a contest and a simple way for folks to say “thanks” to the team. We’ve also reached out to a handful of major players in the community to ask what they get from the newsletter.

With the landing of this issue, I will have been involved with over 280 issues over 8 years. Almost every week in that time (I did skip a couple weeks for my honeymoon!) I’ve worked to collect Ubuntu news from around the community and internet, prepare it for our team of summary writers, move content to the wiki for our editors, and spend time on Monday doing the release. Over these years I’ve worked with several great contributors to keep the team going, rewarding them with all the thanks I could muster and even a run of UWN stickers made specifically for contributors. I’ve met and worked with some great people during this time, and I’m incredibly proud of what we’ve accomplished and the quality we’ve been able to maintain in article selection and timely releases.

But all good things must come to an end. Several months ago, as I was working on finding the next step in my career with a new position, I realized how much my life and the world of open source had changed since I first started working on the newsletter. Today there are considerable demands on my time, and while I hung on to the newsletter, I realized that I was letting other exciting projects and volunteer opportunities pass me by. At the end of October I sent a private email to several of the key contributors letting them know I’d conclude my participation with issue 500. That didn’t quite happen, but I am looking to actively wind down my participation starting with this issue, and I hope that others in the community can pick up where I’m leaving off.

UWN stickers

I’ll still be around the community, largely focusing my efforts on Xubuntu directly. Folks can reach out to me as they need help moving forward, but the awesome UWN team will need more contributors. Contributors collect news, write summaries and do editing; you can learn more about joining here. If you have questions about contributing, you can join #ubuntu-news on freenode and say hello or drop an email to our team mailing list (public archives).

by pleia2 at February 25, 2017 02:57 AM

February 24, 2017

Akkana Peck

Coder Dojo: Kids Teaching Themselves Programming

We have a terrific new program going on at Los Alamos Makers: a weekly Coder Dojo for kids, 6-7 on Tuesday nights.

Coder Dojo is a worldwide movement, and our local dojo is based on their ideas. Kids work on programming projects to earn colored USB wristbelts, with the requirements for belts getting progressively harder. Volunteer mentors are on hand to help, but we're not lecturing or teaching, just coaching.

Despite not much advertising, word has gotten around and we typically have 5-7 kids on Dojo nights, enough that all the makerspace's Raspberry Pi workstations are filled and we sometimes have to scrounge for more machines for the kids who don't bring their own laptops.

A fun moment early on came when we had a mentor meeting, and Neil, our head organizer (who deserves most of the credit for making this program work so well), looked around and said "One thing that might be good at some point is to get more men involved." Sure enough -- he was the only man in the room! For whatever reason, most of the programmers who have gotten involved have been women. A refreshing change from the usual programming group. (Come to think of it, the PEEC web development team is three women. A girl could get a skewed idea of gender demographics, living here.) The kids who come to program are about 40% girls.

I wondered at the beginning how it would work, with no lectures or formal programs. Would the kids just sit passively, waiting to be spoon fed? How would they get concepts like loops and conditionals and functions without someone actively teaching them?

It wasn't a problem. A few kids have some prior programming practice, and they help the others. Kids as young as 9 with no previous programming experience walk in, sit down at a Raspberry Pi station, and after five minutes of being shown how to bring up a Python console and use Python's turtle graphics module to draw a line and turn a corner, they're happily typing away, experimenting and making Python draw great colorful shapes.

Python-turtle turns out to be a wonderful way for beginners to learn. It's easy to get started, it makes pretty pictures, and yet, since it's Python, it's not just training wheels: kids are using a real programming language from the start, and they can search the web and find lots of helpful examples when they're trying to figure out how to do something new (just like professional programmers do. :-)

Initially we set easy requirements for the first (white) belt: attend for three weeks, learn the names of other Dojo members. We didn't require any actual programming until the second (yellow) belt, which required writing a program with two of three elements: a conditional, a loop, a function.

That plan went out the window at the end of the first evening, when two kids had already fulfilled the yellow belt requirements ... even though they were still two weeks away from the attendance requirement for the white belt. One of them had never programmed before. We've since scrapped the attendance belt, and now the white belt has the conditional/loop/function requirement that used to be the yellow belt.

The program has been going for a bit over three months now. We've awarded lots of white belts and a handful of yellows (three new ones just this week). Although most of the kids are working in Python, there are also several playing music or running LED strips using Arduino/C++, writing games and web pages in Javascript, writing adventure games in Scratch, or just working through Khan Academy lectures.

When someone is ready for a belt, they present their program to everyone in the room and people ask questions about it: what does that line do? Which part of the program does that? How did you figure out that part? Then the mentors review the code over the next week, and they get the belt the following week.

For all but the first belt, helping newer members is a requirement, though I suspect even without that they'd be helping each other. Sit a first-timer next to someone who's typing away at a Python program and watch the magic happen. Sometimes it feels almost superfluous being a mentor. We chat with the kids and each other, work on our own projects, shoulder-surf, and wait for someone to ask for help with harder problems.

Overall, a terrific program, and our only problems now are getting funding for more belts and more workstations as the word spreads and our Dojo nights get more crowded. I've had several adults ask me if there was a comparable program for adults. Maybe some day (I hope).

February 24, 2017 08:46 PM

February 20, 2017

Elizabeth Krumbach

Adventures in Tasmania

Last month I attended my third Linux.conf.au, this time in Hobart, Tasmania; I wrote about the conference here and here. In an effort to be somewhat recovered from jet lag for the conference and take advantage of the trip to see the sights, I flew in a couple days early.

I arrived in Hobart after a trio of flights on Friday afternoon. It was incredibly windy, so much so that they warned people when deplaning onto the tarmac (no jet ways at the little Hobart airport) to hold tightly on to their belongings. But speaking of the weather for a moment, January is the middle of summer in the southern hemisphere. I prepare for brutal heat when I visit Australia at this time. But Hobart? They were enjoying beautiful, San Francisco-esque weather. Sunny and comfortably in the 60s every day. The sun was still brutal though; thinner ozone that far south means that I burned after being in the sun for a couple days, even after applying strong sunblock.


Beautiful view from my hotel room

On Saturday I didn’t make any solid plans, just in case there was a problem with my flights or I was too tired to go out. I lucked out though, and took the advice of many who suggested I visit Mona – Museum of Old and New Art. In spite of being tired, I was encouraged to go by the museum’s good reviews, plus learning that you could take a ferry directly there and that a nearby brewery featured its beers at the eateries around the museum.

I walked to the ferry terminal from the hotel, which was just over a mile with some surprising hills along the way as I took the scenic route along the bay and through some older neighborhoods. I also walked past Salamanca Market, which is set up every Saturday. I passed on the wallaby burritos and made my way to the ferry terminal. There it was quick and easy to buy my ferry and museum tickets.

Ferry rides are one of my favorite things, and the views on this one made the journey to the museum a lot of fun.

The ferry drops you off at a dock specifically for the museum. Since it was nearly noon and I was in need of nourishment, I walked up past the museum and explored the areas around the wine bar. They had little bars set up that opened at noon and allowed you to get a bottle of wine or some beers and enjoy the beautiful weather on chairs and bean bags placed around a large grassy area. On my own for this adventure, I skipped drinking on the grass and went up to enjoy lunch at the wonderful restaurant on site, The Source. I had a couple beers and discovered Tasmanian oysters. Wow. These wouldn’t be the last ones on my trip.

After lunch it was incredibly tempting to spend the entire afternoon snacking on cheese and wine, but I had museum tickets! So it was down to the museum to spend a couple hours exploring.

I’m not the biggest fan of modern art, so a museum mixing old and new art was an interesting choice for me. As I began to walk through the exhibits, I realized that it would have been great to have MJ there with me. He does enjoy newer art, so the museum would have had a little bit for each of us. There were a few modern exhibits that I did enjoy though, including Artifact which I took a video of: “Artifact” at the Museum of Old and New Art, Hobart (warning: strobe lights!).

Outside the museum I also walked around past a vineyard on site, as well as some beautiful outdoor structures. I took a bunch more photos before the ferry took me back to downtown Hobart. More photos from Mona here: https://www.flickr.com/photos/pleia2/albums/72157679331777806

It was late afternoon when I returned to the Salamanca area of Hobart and though the Market was closing down, I was able to take some time to visit a few shops. I picked up a small pen case for my fountain pens made of Tasmanian Huon Pine and a native Sassafras. That evening I spent some time in my room relaxing and getting some writing done before dinner with a couple open source colleagues who had just gotten into town. I turned in early that night to catch up on some sleep I missed during flights.

And then it was Sunday! As fun as the museum adventure was, my number one goal with this trip was actually to pet a new Australian critter. Last year I attended the conference in Geelong, not too far from Melbourne, and did a similar tourist trip. On that trip I got to feed kangaroos, pet a koala and see hundreds of fairy penguins return to their nests from the ocean at dusk. Topping that day wasn’t actually possible, but I wanted to do my best in Tasmania. I booked a private tour with a guide for the Sunday to take me up to the Bonorong Wildlife Sanctuary.

My tour guide was a woman who owns a local tour company with her husband. She was super friendly and accommodating, plus she was similar in age to me, making for a comfortable journey. The tour included a few stops, but started with Bonorong. We had about an hour there to visit the standing exhibits before the pet-wombats tour began. All the enclosures were populated by rescued wildlife that were either being rehabilitated or too permanently injured for release. I had my first glimpse of Tasmanian devils running around (I’d seen some in Melbourne, but they were all sleeping!). I also got to see a tawny frogmouth, a bird that looks a bit like an owl, and three-legged Randall the echidna, a spiky member of one of the few egg-laying mammal species. I also took some time to commune with kangaroos and wallabies, picking up a handful of food to feed my new, bouncy friends.


Feeding a kangaroo, tiny wombat drinking from a bottle, pair of wombats, Tasmanian devil

And then there were the baby wombats. I saw my first wombat at the Perth Zoo four years ago and was surprised at how big they are. Growing to be a meter in length in Tasmania, wombats are hefty creatures, and I got to pet one! At 11:30 they did a keeper talk and then allowed the folks gathered to give one of the babies (about 9 months old) a quick pat. In a country of animals whose fur is more wiry and wool-like than you might expect (on kangaroos and koalas), the baby wombats are surprisingly soft.


Wombat petting mission accomplished.

The keeper talks continued with opportunities to pet a koala and visit some Tasmanian devils, but having already done these things I hit the gift shop for some post cards and then went to the nearby Richmond Village.

More photos from Bonorong Wildlife Sanctuary, Tasmania here: https://www.flickr.com/photos/pleia2/albums/72157679331734466

I enjoyed a meat pie lunch in the cute downtown of Richmond before we continued our journey to visit the oldest continuously operating Catholic church in all of Australia (not just Tasmania!), St John’s, built in 1836. We also got to visit the oldest bridge, just a tad bit older, built in 1823. The bridge is surrounded by a beautiful park, making for a popular picnic and play area on days like the beautiful one we had while there. On the way back, we stopped at the Wicked Cheese Co. where I got to sample a variety of cheeses and pick up some Whiskey Cheddar to enjoy later in the week. A final stop at Rosny Hill rounded out the tour. It gave some really spectacular views of the bay and across to Hobart; I could see my hotel from there!

Sunday evening I met up with a gaggle of OpenStack friends for some Indian food back in the main shopping district of Hobart.

That wrapped up the real touristy part of my trip, as the week continued with the conference. However, there were still some treats to be enjoyed! I had a whole bunch of Tasmanian cider throughout the week and, as I had promised myself, more oysters! The thing about the oysters in Tasmania is that they’re creamy and they’re big. A mouthful of delicious.

I loved Tasmania, and I hope I can make it back some day. More photos from my trip here: https://www.flickr.com/photos/pleia2/albums/72157677692771201

by pleia2 at February 20, 2017 06:47 PM

February 18, 2017

Akkana Peck

Highlight and remove extraneous whitespace in emacs

I recently got annoyed with all the trailing whitespace I saw in files edited by Windows and Mac users, and in code snippets pasted from sites like StackOverflow. I already had my emacs set up to indent with only spaces:

(setq-default indent-tabs-mode nil)
(setq tabify nil)
and I knew about M-x delete-trailing-whitespace ... but after seeing someone else who had an editor set up to show trailing spaces, and tabs that ought to be spaces, I wanted that too.
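
As an aside, if you'd rather have emacs clean up trailing whitespace automatically instead of just showing it, the usual recipe is to run that command from before-save-hook. A one-line sketch (not what I've set up here, since I wanted to see the whitespace first):

;; Strip trailing whitespace from every buffer as it's saved.
(add-hook 'before-save-hook 'delete-trailing-whitespace)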

To show trailing spaces is easy, but it took me some digging to find a way to control the color emacs used:

;; Highlight trailing whitespace.
(setq-default show-trailing-whitespace t)
(set-face-background 'trailing-whitespace "yellow")

I also wanted to show tabs, since code indented with a mixture of tabs and spaces, especially if it's Python, can cause problems. That was a little harder, but I eventually found it on the EmacsWiki: Show whitespace:

;; Also show tabs.
(defface extra-whitespace-face
  '((t (:background "pale green")))
  "Color for tabs and such.")

(defvar bad-whitespace
  '(("\t" . 'extra-whitespace-face)))

While I was figuring this out, I got some useful advice related to emacs faces on the #emacs IRC channel: if you want to know why something is displayed in a particular color, put the cursor on it and type C-u C-x = (the command what-cursor-position with a prefix argument), which displays lots of information about whatever's under the cursor, including its current face.

Once I had my colors set up, I found that a surprising number of files I'd edited with vim had trailing whitespace. I would have expected vim to be better behaved than that! But it turns out that to eliminate trailing whitespace, you have to program it yourself. For instance, here are some recipes to Remove unwanted spaces automatically with vim.

February 18, 2017 11:41 PM

February 17, 2017

Elizabeth Krumbach

Spark Summit East 2017

“Do you want to go to Boston in February?”

So began my journey to Boston to attend the recent Spark Summit East 2017, joining my colleagues Kim, Jörg and Kapil to participate in the conference and meet attendees at our Mesosphere booth. I’ve only been to a handful of single-technology events over the years, so it was an interesting experience for me.


Selfie with Jörg!

The conference began with a keynote by Matei Zaharia which covered some of the major successes in the Apache Spark world in 2016, from the release of version 2.0 with structured streaming, to the growth in community-driven meetups. As the keynotes continued, two trends came into clear focus:

  1. Increased use of Apache Spark with streaming data
  2. Strong desire to do data processing for artificial intelligence (AI) and machine learning

It was really fascinating to hear about all the AI and machine learning work being done, from companies like Salesforce developing customized products to genetic data analysis by way of the Hail project that will ultimately improve and save lives. Work is even being done by Intel to improve hardware and open source tooling around deep learning (see their BigDL project on GitHub).

In perhaps my favorite keynote of the conference, we heard from Mike Gualtieri of Forrester, who presented the new “age of the customer” with a look toward very personalized, AI-driven learning about customer behavior, intent and more. He went on to use the term “pragmatic AI” to describe what we’re aiming for: an intelligence that’s good enough to succeed at what it’s put to. However, his main push for this talk was how much opportunity there is in this space. Companies and individuals skilled at processing massive amounts of data, AI, and deep and machine learning can make a real impact in a variety of industries. Video and slides from this keynote are available here.


Mike Gualtieri on types of AI we’re looking at today

I was also impressed by how strong the open source assumption was at this conference. All of these universities, corporations, hardware manufacturers and more are working together to build platforms to do all of this data processing work, and they’re open sourcing them.

While at the event, Jörg gave a talk on Powering Predictive Mapping at Scale with Spark, Kafka, and Elastic Search (slides and videos at that link). In this he used DC/OS to give a demo based on NYC cab data.

At the booth the interest in open source was also strong. I’m working on DC/OS in my new role, and the fact that folks could hit the ground running with our open source version and get help on mailing lists and Slack was in sync with their expectations. We were able to show off demos on our laptops, and in spite of having only just over a month at the company under my belt, I was able to answer most of the questions that came my way and learned a lot from my colleagues.


The Mesosphere booth included DC/OS hand warmers!

We had a bit of fun outside the sessions as well: Kapil took us out Wednesday night to the L.A. Burdick chocolate shop to get some hot chocolate… on ice. So good. Thursday the city was hit with a major snow storm, dumping 10 inches on us throughout the day as we spent our time inside the conference venue. Flights were cancelled after noon that day, but thankfully I had no trouble getting out on my Friday flight after lunch with my friend Deb, who lives nearby.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157680153926395

by pleia2 at February 17, 2017 10:29 PM

February 15, 2017

Elizabeth Krumbach

Highlights from LCA 2017 in Hobart

Earlier this month I attended my first event while working as a DC/OS Developer Advocate over at Mesosphere. My talk on Listening to the needs of your global open source community was accepted before I joined the company, but this kind of listening is precisely what I need to be doing in this new role, so it fit nicely.

Work also gave me some goodies to bring along! So I was able to hand some out as I chatted with people about my new role, and left piles of stickers and squishy darts on the swag table throughout the week.

The topic of the conference this year was the future of open source. It led to an interesting series of keynotes, ranging from the hopeful and world-changing words from Pia Waugh about how technologists could really make a difference in her talk, Choose Your Own Adventure, Please!, to the Keeping Linux Great talk by Robert M. “r0ml” Lefkowitz that ended up imploring the audience to examine their values around the traditional open source model.

Pia’s keynote was a lot of fun, walking us through human history to demonstrate that our values, tools and assumptions are entirely of our own making, and able to be changed (indeed, they have been!). She asked us to continually challenge our assumptions about the world around us and what we could change. She encouraged thinking beyond our own spaces, like how 3D printers could solve scarcity problems in developing nations or what faster travel would do to transform the world. As a room of open source enthusiasts who make small changes to change the world every day, being the creators and innovators of the world, there’s always more we can do and strive for: curing the illness rather than just scratching the itch, making systematic change. I really loved the positive message of this talk; I think a lot of attendees walked out feeling empowered and hopeful. Plus, she had a brilliant human change.log that demonstrated how we as humans have made some significant changes in our assumptions through the millennia.


Pia Waugh’s human change.log

The keynote by Dan Callahan on Wednesday morning on Designing for Failure explored the failure of Mozilla’s Persona project, and key things he learned from it. He walked through some of those lessons:

  1. Free licenses are not enough; your code can’t be tied to proprietary infrastructure
  2. Bits rot more quickly online; an out-of-date desktop application is usually at much lower risk, and endangers fewer people, than a service running on the web
  3. Complexity limits agency; people need to have the resources, system and time to try out and run your software

He went on to give tips about what to do to prolong project life, including making sure you have metrics and are measuring the right things for your project, explicitly defining your scope so the team doesn’t get spread too thin or try to solve the wrong problems, and ruthlessly opposing complexity, since that makes it harder to maintain and for others to get involved.

Finally, he had some excellent points for how to assist the survival of your users when a project does finally fail:

  1. If you know your project is dead (funding pulled, etc.), say so; don’t draw things out
  2. Make sure your users can recover without your involvement (have a way to extract data, give them an escape path infrastructure-wise)
  3. Use standard data formats to minimize the migration harm when organizations have to move on

It was really great hearing these lessons. I know how painful it is to see a project you’ve put a lot of work into die; the ability to not only move on in a healthy way but to bring those lessons to a whole community during a keynote like this was commendable.

Thursday’s keynote by Nadia Eghbal was an interesting one that I haven’t seen a lot of public discussion around, Consider the Maintainer. In it she talked about the work that goes into being a maintainer of a project, which she defined as someone who is doing the work of keeping a project going: looking at the contributions coming in, actively responding to bug reports and handling any other interactions. This is a discussion that came up from time to time on some projects I’ve recently worked on, where we were striving to prevent scope creep. How can we balance the needs of the maintainers who are sticking around with the desire of new contributors to add features that benefit them? It’s a very important question that I was thrilled to see her talk about. To help address this, she proposed a twist on the Four Essential Freedoms of Software as defined by the FSF: The Four Freedoms of Open Source Producers. They were:

  • The freedom to decide who participates in your community
  • The freedom to say no to contributions or requests
  • The freedom to define the priorities and policies of the project
  • The freedom to step down or move on from a project, temporarily or permanently

The speaker dinner was beautiful and delicious, taking us up to Frogmore Creek Winery. There was a radio telescope in the background and the sunset over the vineyard was breathtaking. Plus, great company.

Other talks I went to trended toward the fun and community-focused. On Monday there was a WOOTConf; the entire playlist from the event is here. I caught a nice handful of talks, starting with Human-driven development, where aurynn shaw spoke about some of the toxic behaviors in our technical spaces, primarily how everyone is expected to know everything and how asking questions is not always acceptable. She implored us to work to make asking questions easier and more accepted, and to work toward asking your team what they need.

I learned about a couple of websites in a talk by Kate Andrews on Seeing the big picture – using open source images: TinEye Reverse Image Search to help find the source of an image to give credit, and sites like Unsplash where you can find freely licensed photos, in addition to various Creative Commons searches. Brenda Wallace’s Let’s put wifi in everything was a lot of fun, as she walked through various pieces of inexpensive hardware and open source tooling to build sensors to automate all kinds of little things around the house. I also enjoyed the talk by Kris Howard, Knit One, Compute One, where very strong comparisons were made between computer programming and knitting patterns, and a talk by Grace Nolan on the Condensed History of Lock Picking.

For my part, I gave a talk on Listening to the Needs of Your Global Open Source Community. This is similar to the talk I gave at FOSSCON back in August, where I walked through experiences I had in Ubuntu and OpenStack projects, along with in person LUGs and meetups. I had some great questions at the end, and I was excited to learn VM Brasseur was tweeting throughout and created a storify about it! The slides from the talk are available as a PDF here.


Thanks to VM Brasseur for the photo during my talk, source

The day concluded with Rikki Endsley’s Mamas Don’t Let Your Babies Grow Up to Be Rock Star Developers, which I really loved. She talked about the tendency to put “rock star” in job descriptions for developers, but when she went through the traits of actual rock stars, they weren’t what you want on your team. The call was for more Willie Nelson developers, and we were treated to a quick biography of Willie Nelson. In it she explained how he helped others, was always learning new skills, made himself available to his fans, and would innovate and lead. I also enjoyed that he actively worked to collaborate with a diverse mix of people and groups.

As the conference continued, I learned about the great work being done by Whare Hauora from Brenda Wallace and Amber Craig, and heard from Josh Simmons about building communities outside of major metropolitan areas, where he advocated for multidisciplinary meetups. Allison Randal spoke about the ways that open source accelerates innovation, and Karen Sandler dove into what happens to our software when we die, in a presentation punctuated by pictures of baby Tasmanian Devils to cheer us up. I also heard Chris Lamb give us the status of the Reproducible Builds project, and then Hamish Coleman on the work he’s done replacing ThinkPad keyboards and reverse engineering the tooling.

The final day wound down with a talk by VM (Vicky) Brasseur on working inside a company to support open source projects, where she talked about types of communities and the importance of having a solid open source plan, and quickly covered some of the most common pitfalls within companies.

This conference remains one of my favorite open source conferences in the world, and I’m very glad I was able to attend again. It’s great meeting up with all my Australian and New Zealand open source colleagues, along with some of the usual suspects who attend many of the same conferences I do. Huge thanks to the organizers for making it such a great conference.

All the videos from the conference were uploaded very quickly to YouTube and are available here: https://www.youtube.com/user/linuxconfau2017/videos

More photos from the conference at https://www.flickr.com/photos/pleia2/sets/72157679331149816/

by pleia2 at February 15, 2017 01:09 AM

February 13, 2017

Akkana Peck

Emacs: Initializing code files with a template

Part of being a programmer is having an urge to automate repetitive tasks.

Every new HTML file I create should include some boilerplate HTML, like <html><head></head><body></body></html>. Every new Python file I create should start with #!/usr/bin/env python, and most of them should end with an if __name__ == "__main__": clause. I get tired of typing all that, especially the dunderscores and slash-greater-thans.

Long ago, I wrote an emacs function called newhtml to insert the boilerplate code:

(defun newhtml ()
  "Insert a template for an empty HTML page"
  (interactive)
  (insert "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n"
          "<html>\n"
          "<head>\n"
          "<title></title>\n"
          "</head>\n\n"
          "<body>\n\n"
          "<h1></h1>\n\n"
          "<p>\n\n"
          "</body>\n"
          "</html>\n")
  (forward-line -11)
  (forward-char 7)
  )

The motion commands at the end move the cursor back to point in between the <title> and </title>, so I'm ready to type the page title. (I should probably have it prompt me, so it can insert the same string in title and h1, which is almost always what I want.)
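
If I ever do make it prompt, a minimal sketch might look something like this (untested, and the name newhtml-prompting is just a placeholder):

(defun newhtml-prompting (title)
  "Insert a template for an empty HTML page, prompting for TITLE."
  (interactive "sPage title: ")
  ;; Same boilerplate as newhtml, but with TITLE filled into both
  ;; the <title> and the <h1>.
  (insert "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n"
          "<html>\n"
          "<head>\n"
          "<title>" title "</title>\n"
          "</head>\n\n"
          "<body>\n\n"
          "<h1>" title "</h1>\n\n"
          "<p>\n\n"
          "</body>\n"
          "</html>\n"))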

That has worked for quite a while. But when I decided it was time to write the same function for python:

(defun newpython ()
  "Insert a template for an empty Python script"
  (interactive)
  (insert "#!/usr/bin/env python\n"
          "\n"
          "\n"
          "\n"
          "if __name__ == '__main__':\n"
          "\n"
          )
  (forward-line -4)
  )
... I realized that I wanted to be even more lazy than that. Emacs knows what sort of file it's editing -- it switches to html-mode or python-mode as appropriate. Why not have it insert the template automatically?

My first thought was to have emacs run the function upon loading a file. There's a function with-eval-after-load which supposedly can act based on file suffix, so something like (with-eval-after-load ".py" (newpython)) is documented to work. But I found that it was never called, and couldn't find an example that actually worked.

But then I realized that I have mode hooks for all the programming modes anyway, to set up things like indentation preferences. Inserting some text at the end of the mode hook seems perfectly simple:

(add-hook 'python-mode-hook
          (lambda ()
            (electric-indent-local-mode -1)
            (font-lock-add-keywords nil bad-whitespace)
            (if (= (buffer-size) 0)
                (newpython))
            (message "python hook")
            ))

The (= (buffer-size) 0) test ensures this only happens if I open a new file. Obviously I don't want to be auto-inserting code inside existing programs!

HTML mode was a little more complicated. I edit some files, like blog posts, that use HTML formatting, and hence need html-mode, but they aren't standalone HTML files that need the usual HTML template inserted. For blog posts, I use a different file extension, so I can use the elisp string-suffix-p to test for that:

  ;; string-suffix-p is like Python's endswith
  (if (and (= (buffer-size) 0)
           (string-suffix-p ".html" (buffer-file-name)))
      (newhtml) )
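
For completeness, that fragment goes inside the html-mode hook the same way the Python version does; roughly like this (a sketch, leaving out whatever else the hook sets up):

(add-hook 'html-mode-hook
          (lambda ()
            ;; Only auto-insert the template into new, empty .html files;
            ;; blog posts use a different extension and are left alone.
            (if (and (= (buffer-size) 0)
                     (string-suffix-p ".html" (buffer-file-name)))
                (newhtml))))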

I may eventually find other files that don't need the template; if I need to, it's easy to add other tests, like the directory where the new file will live.

A nice timesaver: open a new file and have a template automatically inserted.

February 13, 2017 04:52 PM

February 09, 2017

Jono Bacon

HackerOne Professional, Free for Open Source Projects

For some time now I have been working with HackerOne to help them shape and grow their hacker community. It has been a pleasure working with the team: they are doing great work, have fantastic leadership (including my friend, Mårten Mickos), are seeing consistent growth, and recently closed a $40 million round of funding. It is all systems go.

For those of you unfamiliar with HackerOne, they provide a powerful vulnerability coordination platform and a global community of hackers. Put simply, a company or project (such as Starbucks, Uber, GitHub, the US Army, etc) invite hackers to hack their products/services to find security issues, and HackerOne provides a platform for the submission, coordination, dupe detection, and triage of these issues, and other related functionality.

You can think of HackerOne in two pieces: a powerful platform for managing security vulnerabilities and a global community of hackers who use the platform to make the Internet safer and, in many cases, make money. This effectively crowd-sources security using the same “given enough eyeballs, all bugs are shallow” principle from open source: with enough eyeballs, all security issues are shallow too.

HackerOne and Open Source

HackerOne unsurprisingly are big fans of open source. The CEO, Mårten Mickos, has led a number of successful open source companies including MySQL and Eucalyptus. The platform itself is built on top of chunks of open source, and HackerOne is a key participant in the Internet Bug Bounty program that helps to ensure core pieces of technology that power the Internet are kept secure.

One of the goals I have had in my work with HackerOne is to build an even closer bridge between HackerOne and the open source community. I am delighted to share the next iteration of this.

HackerOne for Open Source Projects

While not formally announced yet (this is coming soon), I am pleased to share the availability of HackerOne Community Edition.

Put simply, HackerOne is providing their HackerOne Professional service for free to open source projects.

This provides features such as a security page, vulnerability submission/coordination, duplicate detection, hacker reputation, a comprehensive API, analytics, CVEs, and more.

This not only provides a great platform for open source projects to gather vulnerability reports and manage them, but also opens your project up to thousands of security researchers who can help identify security issues and make your code more secure.

Which projects are eligible?

To be eligible for this free service projects need to meet the following criteria:

  1. Open Source projects – projects in scope must only be Open Source projects that are covered by an OSI license.
  2. Be ready – projects must be active and at least 3 months old (age is defined by shipped releases/code contributions).
  3. Create a policy – you add a SECURITY.md in your project root that provides details for how to submit vulnerabilities (example).
  4. Advertise your program – display a link to your HackerOne profile from either the primary or secondary navigation on your project’s website.
  5. Be active – you maintain an initial response to new reports of less than a week.

If you meet these criteria and would like to apply, just see the HackerOne Community Edition page and click the button to apply.

Of course, let me know if you have any questions!

The post HackerOne Professional, Free for Open Source Projects appeared first on Jono Bacon.

by Jono Bacon at February 09, 2017 10:20 PM

February 06, 2017

Elizabeth Krumbach

Rogue One and Carrie Fisher

Back in December I wasn’t home in San Francisco very much. Most of my month was spent back east at our townhouse in Philadelphia, and I spent a few days in Salt Lake City for a conference, but the one week I was in town was the week that Rogue One: A Star Wars Story came out! I was traveling to Europe when tickets went on sale, but fortunately for me our local theater swapped most of its screens over to showing the film opening night. I was able to snag tickets once I realized they were on sale.

And that’s how I continued my tradition of seeing all the new films (1-3, 7) opening night! MJ and I popped over to the Metreon, just a short walk from home, to see it. For this showing I didn’t do IMAX or 3D or anything fancy, just a modern AMC theater and a late night showing.

The movie was great. They did a really nice job of looping the story in with the past films and preserving the feel of Star Wars for me, which was absent in the prequels that George Lucas made. Clunky technology, the good guys achieving victories in the face of incredible odds and yet, quite a bit of heartbreak. Naturally, I saw it a second time later in the month while staying in Philadelphia for the holidays. It was great the second time too!

My hope is that the quality of the films will remain high while in the hands of Disney, and I’m really looking forward to The Last Jedi coming out at the end of this year.

Alas, the year wasn’t all good for a Star Wars fan like me. Back in August we lost Kenny Baker, the man behind my beloved R2-D2. Then on December 23rd we learned that Carrie Fisher had a heart attack on a flight from London. On December 27th she passed away.

Now, I am typically not one to write about the death of a celebrity in my blog. It’s pretty rare that I’m upset about the death of a celebrity at all. But this was Carrie Fisher. She was not on my radar for passing (only 60!) and she is the actress who played one of my all-time favorite characters, in case it wasn’t obvious from the domain name this blog is on.

The character of Princess Leia impacted my life in many ways, and at age 17 caused me to choose PrincessLeia2 (PrincessLeia was taken), and later pleia2, as my online handle. She was a princess of a mysterious world that was destroyed. She was a strong character who didn’t let people get in her way as she covertly assisted, then openly joined, the rebel alliance because of what she believed in. She was also a character who showed considerable kindness and compassion. In the Star Wars universe, and in the 1980s when I was a kid, she was often a shining beacon of what I aspired to. Her reprise of the character, returning as General Leia Organa, in Episode VII brought me to tears. I have a figure of her on my desk.


Halloween 2005, Leia costume!

The character she played aside, she was also a champion of de-stigmatizing mental illness. I have suffered from depression for over 20 years and have worked to treat my condition with over a dozen doctors, from primary care to neurologists and psychiatrists. Still, I haven’t found an effective medication-driven treatment that won’t conflict with my other atypical neurological conditions (migraines and seizures). Her outspokenness on the topic of both mental illness and the difficulty in treating it even when you have access to resources was transformational for me. I had a guilt lifted from me about not being "better" in spite of my access to treatment, and became generally more inclined to tackle the topic of mental illness in public.

Her passing was hard for me.

I was contacted by BBC Radio 5 Live on the day she passed away and interviewed by Chris Warburton for their show that would air the following morning. They reached out to me as a known fan, asking me about what her role as Leia Organa meant to me growing up, her critical view of the celebrity world and then her work in the space of mental illness. It meant a lot that they reached out to me, but I was also pained by what it brought up; it turns out that the day of her passing was the one day in my life I didn’t feel like talking about her work and legacy.

It’s easier today as I reflect upon her impact. I’m appreciative of the character she brought to life for me. Appreciative of the woman she became and shared with us in so many memorable, funny and self-deprecating books, which line my shelves. Thank you, Carrie Fisher, for being such an inspiration and an advocate.

by pleia2 at February 06, 2017 08:17 AM

Akkana Peck

Rosy Finches

Los Alamos is having an influx of rare rosy-finches (which apparently are supposed to be hyphenated: they're rosy-finches, not finches that are rosy).

[Rosy-finches] They're normally birds of the snowy high altitudes, like the top of Sandia Crest, and quite unusual in Los Alamos. They're even rarer in White Rock, and although I've been keeping my eyes open I haven't seen any here at home; but a few days ago I was lucky enough to be invited to the home of a birder in town who's been seeing great flocks of rosy-finches at his feeders.

There are four types, of which only three have ever been seen locally, and we saw all three. Most of the flock was brown-capped rosy-finches, with two each of black rosy-finches and gray-capped rosy-finches. The upper bird at right, I believe, is one of the blacks, but it might be a gray-capped. They're a bit hard to tell apart. In any case, pretty birds, sparrow sized with nice head markings and a hint of pink under the wing, and it was fun to get to see them.

[Roadrunner] The local roadrunner also made a brief appearance, and we marveled at the combination of high-altitude snowbirds and a desert bird here at the same place and time. White Rock seems like much better roadrunner territory, and indeed they're sometimes seen here (though not, so far, at my house), but they're just as common up in the forests of Los Alamos. Our host said he only sees them in winter; in spring, just as they start singing, they leave and go somewhere else. How odd!

Speaking of birds and spring, we have a juniper titmouse determinedly singing his ray-gun song, a few house sparrows are singing sporadically, and we're starting to see cranes flying north. They started a few days ago, and I counted several hundred of them today, enjoying the sunny and relatively warm weather as they made their way north. Ironically, just two weeks ago I saw a group of about sixty cranes flying south -- very late migrants, who must have arrived at the Bosque del Apache just in time to see the first northbound migrants leave. "Hey, what's up, we just got here, where ya all going?"

A few more photos: Rosy-finches (and a few other nice birds).

We also have a mule deer buck frequenting our yard, sometimes hanging out in the garden just outside the house to drink from the heated birdbath while everything else is frozen. (We haven't seen him in a few days, with the warmer weather and most of the ice melted.) We know it's the same buck coming back: he's easy to recognize because he's missing a couple of tines on one antler.

The buck is a welcome guest now, but in a month or so when the trees start leafing out I may regret that as I try to find ways of keeping him from stripping all the foliage off my baby apple tree, like some deer did last spring. I'm told it helps to put smelly soap shavings, like Irish Spring, in a bag and hang it from the branches, and deer will avoid the smell. I will try the soap trick but will probably combine it with other measures, like a temporary fence.

February 06, 2017 02:39 AM

January 28, 2017

Nathan Haines

We're looking for Ubuntu 17.04 wallpapers right now!

Ubuntu is a testament to the power of sharing, and we use the default selection of desktop wallpapers in each release as a way to celebrate the larger Free Culture movement. Talented artists across the globe create media and release it under licenses that don't simply allow, but cheerfully encourage sharing and adaptation. This cycle's Free Culture Showcase for Ubuntu 17.04 is now underway!

We're halfway to the next LTS, and we're looking for beautiful wallpaper images that will literally set the backdrop for new users as they use Ubuntu 17.04 every day. Whether on the desktop, phone, or tablet, your photo or illustration can be the first thing Ubuntu users see whenever they are greeted by the ubiquitous Ubuntu welcome screen or access their desktop.

Submissions will be handled via Flickr at the Ubuntu 17.04 Free Culture Showcase - Wallpapers group, and the submission window begins now and ends on March 5th.

More information about the Free Culture Showcase is available on the Ubuntu wiki at https://wiki.ubuntu.com/UbuntuFreeCultureShowcase.

I'm looking forward to seeing the 10 photos and 2 illustrations that will ship on all graphical Ubuntu 17.04-based systems and devices on April 13th!

January 28, 2017 08:08 AM

January 27, 2017

Akkana Peck

Making aliases for broken fonts

A web page I maintain (originally designed by someone else) specifies Times font. On all my Linux systems, Times displays impossibly tiny, at least two sizes smaller than any other font that's ostensibly the same size. So the page is hard to read. I'm forever tempted to get rid of that font specifier, but I have to assume that other people in the organization like the professional look of Times, and that this pathologic smallness of Times and Times New Roman is just a Linux font quirk.

In that case, a better solution is to alias it, so that pages that use Times will choose some larger, more readable font on my system. How to do that was in this excellent, clear post: How To Set Default Fonts and Font Aliases on Linux .

It turned out Times came from the gsfonts package, while Times New Roman came from msttcorefonts:

$ fc-match Times
n021003l.pfb: "Nimbus Roman No9 L" "Regular"
$ dpkg -S n021003l.pfb
gsfonts: /usr/share/fonts/type1/gsfonts/n021003l.pfb
$ fc-match "Times New Roman"
Times_New_Roman.ttf: "Times New Roman" "Normal"
$ dpkg -S Times_New_Roman.ttf
dpkg-query: no path found matching pattern *Times_New_Roman.ttf*
$ locate Times_New_Roman.ttf
/usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf
(dpkg -S doesn't find the file because msttcorefonts is a package that downloads a bunch of common fonts from Microsoft. Debian can't distribute the font files directly due to licensing restrictions.)

Removing gsfonts fonts isn't an option; aside from some documents and web pages possibly not working right (if they specify Times or Times New Roman and don't provide a fallback), removing gsfonts takes gnumeric and abiword with it, and I do occasionally use gnumeric. And I like having the msttcorefonts installed (hey, gotta have Comic Sans! :-) ). So aliasing the font is a better bet.

Following Chuan Ji's page, linked above, I edited ~/.config/fontconfig/fonts.conf (I already had one, specifying fonts for the fantasy and cursive web families), and added these stanzas:

    <match>
        <test name="family"><string>Times New Roman</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>
    <match>
        <test name="family"><string>Times</string></test>
        <edit name="family" mode="assign" binding="strong">
            <string>DejaVu Serif</string>
        </edit>
    </match>

The page says to log out and back in, but I found that restarting firefox was enough. Now I can load up a page that specifies Times or Times New Roman and the text is easily readable.

January 27, 2017 09:47 PM

January 26, 2017

Elizabeth Krumbach

CLSx at LCA 2017

Last week I was in Hobart, Tasmania for LCA 2017. I’ll write broader blog post about the whole event soon, but I wanted to take some time to write this focused post about the CLSx (Community Leadership Summit X) event organized by VM Brasseur. I’d been to the original CLS event at OSCON a couple times, first in 2013 and again in 2015. This was the first time I was attending a satellite event, but with VM Brasseur at the helm and a glance at the community leadership talent in the room I knew we’d have a productive event.


VM Brasseur introduces CLSx

The event began with an introduction to the format and the schedule. As an unconference, CLS event topics are brainstormed, and the schedule organized, by the attendees. It started with people in the room sharing topics they’d be interested in, and then we worked through the list to combine similar suggestions and reduce it down to just 9 topics:

  • Non-violent communication for diffusing charged situations
  • Practical strategies for fundraising
  • Rewarding community members
  • Reworking old communities
  • Increasing diversity: multi-factor
  • Recruiting a core
  • Community metrics
  • Community cohesion: retention
  • How to Participate When You Work for a Corporate Vendor

Or, if you’d rather, the whiteboard of topics!

The afternoon was split into four sessions, three of which were used to discuss the topics, with three topics being covered simultaneously by separate groups in each session slot. The final session of the day was reserved for the wrap-up of the event where participants shared summaries of each topic that was discussed.

The first session I participated in was the one I proposed, on Rewarding Community Members. The first question I asked the group was whether we should reward community members at all, just to make sure we were all starting with the same ideas. This quickly transitioned into what counts as a reward: were we talking physical gifts like stickers and t-shirts, or recognition in the community? Some communities “reward” community members by giving them free or discounted entrance to conferences related to the project, or discounts on services with partners.

Simple recognition of work was a big topic for this session. We spent some time talking about how we welcome community members. Does your community have a mechanism for welcoming, even if it’s automated? Or is there a more personal touch to reaching out? We also covered whether projects have a path to go from new contributor to trusted committer, or the “internal circle” of a project, noting that if that path doesn’t exist, it could be discouraging to new contributors. Gamification was touched upon as a possible way to recognize contributors in a more automated fashion, but it was clear that you want to reward certain positive behaviors and not focus so strictly on statistics that can be cheated without bringing any actual value to the project or community.

What I found most valuable in this session was learning some of the really successful tips for rewards. It was interesting how far the personal touch goes when sending physical rewards to contributors, like including a personalized note along with stickers. It was also clear that metrics are not the full story: in every community the leaders, evangelists and advocates need to be very involved so they can identify contributors in a more qualitative way in order to recognize or reward them. Maybe someone is particularly helpful and friendly, or is making contributions in ways that are not easily tracked by solid metrics. The one warning here was to avoid personal bias: make sure you aren’t being more critical of contributions from minorities in your community, and that you aren’t ignoring folks who don’t boast about their contributions, which happens a lot.

Full notes from Rewarding Contributors, thanks go to Deirdré Straughan for taking notes during the session.

The next session brought me to a gathering to discuss Community Building, Cohesion and Retention. I’ve worked in very large open source communities for over a decade now, and as I embark on my new role at Mesosphere where the DC/OS community is largely driven by paid contributors from a single company today, I’m very much interested in making sure we work to attract more outside contributors.

One of the big topics of this session was the fragmentation of resources across platforms (mailing lists, Facebook, IRC, Slack, etc) and how we have very little control over this. Pulling from my own experience, we saw this in the Xubuntu user community, where people would create unofficial channels on various resources, and so as an outreach team we had to seek these users out and begin engaging with them “where they lived” on these platforms. One of the things I learned from my work here was that we could reduce our own burden by making some of these “unofficial” resources into official resources, thus having an official presence but leaving the folks who were passionate about the platform and community there in control, though we did ask for admin credentials for one person on the Xubuntu team to help with the bus factor.

Some other tips for building cohesion were making sure introductions were done during meetings and in-person gatherings so that newcomers felt welcome, or offering a specific newcomer track so that no one felt like they were the only new person in the room, which can be very isolating. Similarly, making sure there were communication channels available before in-person events could be helpful for getting people comfortable with a community before meeting. One of the interesting proposals was also making sure there was a more official, announce-focused channel for communication so that people who were loosely interested could subscribe to that and not be burdened with an overly chatty communication channel if they’re only interested in important news from the community.

Full notes from Community building, cohesion and retention, with thanks to Josh Simmons for taking notes during this session.


Thanks to VM Brasseur for this photo of our Community Building, Cohesion and Retention session (source)

The last session of the day I attended was around Community Metrics and held particular interest for me as the team I’m on at Mesosphere starts drilling down into statistics for our young community. One of the early comments in this session was that our teams need to be aware that metrics can help drive value for your team within a company and in the project. You should make sure you’re collecting metrics and that you’re measuring the right things. It’s easy for those of us who are more technically inclined to “geek out” over numbers and statistics, which can lead to gathering too much data and drawing conclusions that may not necessarily be accurate.

Some attendees found value in surveys of community members, which was interesting for me to learn. I haven’t had great luck with surveys, but it was suggested that making sure people know why they should spend their time replying and sharing information, and how it will be used to improve things, makes them more inclined to participate. It was also suggested to have staggered surveys targeted at specific contributors: perhaps one survey for newcomers, and another, about the process challenges they’ve faced, for people who have succeeded in becoming core contributors. Surveys also help gather some of the more qualitative data that is essential for properly tracking the health of a community. It’s not just numbers.

Drilling down specifically into what provides value to the community, the following approaches beyond surveys were found to be helpful:

  • Less focus on individuals and specific metrics in a silo, instead looking at trends and aggregations
  • Visitor count to the web pages on your site and specific blog posts
  • Metrics about community diversity in terms of the number of organizations contributing, geographic distribution and human metrics (gender, race, age, etc.), since all these types of diversity have proven to be indicators of project and team success.
  • Recruitment numbers linked to contributions, whether that’s how many people your company hires from the community or how many the involved companies hire in general if the project has many companies participating (recruitment is expensive, so you can bring real value here)

The consensus in the group was that it was difficult to correlate metrics like retweets, GitHub stars and other social media metrics to sales, so even though there may be value with regard to branding and excitement about your community, they may not help much to justify the existence of your team within a company. We didn’t talk much about metrics gathering tools, but I was OK with this, since it was nice to get a more general view into what we should be collecting rather than how.

Full notes from Community Metrics, which we can thank Andy Wingo for.

The event concluded with the note-taker from each group giving a five-minute summary of that group’s discussion. This was the only recorded portion of the event; you can watch it on YouTube here: Community Leadership Summit Summary.

Discussion notes from all the sessions can be found here: https://linux.conf.au/wiki/conference/miniconfs/clsx_at_lca/#wiki-toc-group-discussion-notes.

I really got a lot out of this event, and I hope others gained from my experience and perspectives as well. Huge thanks to the organizers and everyone who participated.

by pleia2 at January 26, 2017 02:58 AM

January 24, 2017

Jono Bacon

Endless Code and Mission Hardware Demo

Recently, I have had the pleasure of working with a fantastic company called Endless who are building a range of computers and a Linux-based operating system called Endless OS.

My work with them has primarily involved the community and product development for an initiative in which they are integrating functionality into the operating system that teaches you how to code. This provides a powerful platform where you can learn to code and easily hack on applications within it.

If this sounds interesting to you, I created a short video demo where I show off their Mission hardware as well as run through a demo of Endless Code in action. You can see it below:

I would love to hear what you think and how Endless Code can be improved in the comments below.

The post Endless Code and Mission Hardware Demo appeared first on Jono Bacon.

by Jono Bacon at January 24, 2017 12:35 PM

January 23, 2017

Akkana Peck

Testing a GitHub Pull Request

Several times recently I've come across someone with a useful fix to a program on GitHub, for which they'd filed a GitHub pull request.

The problem is that GitHub doesn't give you any link on the pull request to let you download the code in that pull request. You can get a list of the checkins inside it, or a list of the changed files so you can view the differences graphically. But if you want the code on your own computer, so you can test it, or use your own editors and diff tools to inspect it, it's not obvious how. That this is a problem is easily seen with a web search for something like download github pull request -- there are huge numbers of people asking how, and most of the answers are vague and unclear.

That's a shame, because it turns out it's easy to pull a pull request. You can fetch it directly with git into a new branch as long as you have the pull request ID. That's the ID shown on the GitHub pull request page:

[GitHub pull request screenshot]

Once you have the pull request ID, choose a new name for your branch, then fetch it:

git fetch origin pull/PULL-REQUEST_ID/head:NEW-BRANCH-NAME
git checkout NEW-BRANCH-NAME

Then you can view diffs with something like git difftool NEW-BRANCH-NAME..master

Easy! GitHub should give a hint of that on its pull request pages.

Fetching a Pull Request diff to apply it to another tree

But shortly after I learned how to apply a pull request, I had a related but different problem in another project. There was a pull request for an older repository, but the part it applied to had since been split off into a separate project. (It was an old pull request that had fallen through the cracks, and as a new developer on the project, I wanted to see if I could help test it in the new repository.)

You can't pull a pull request that's for a whole different repository. But what you can do is go to the pull request's page on GitHub. There are 3 tabs: Conversation, Commits, and Files changed. Click on Files changed to see the diffs visually.

That works if the changes are small and only affect a few files (which fortunately was the case this time). It's not so great if there are a lot of changes or a lot of files affected. I couldn't find any "Raw" or "download" button that would give me a diff I could actually apply. You can select all and then paste the diffs into a local file, but you have to do that separately for each file affected. It might be, if you have a lot of files, that the best solution is to check out the original repo, apply the pull request, generate a diff locally with git diff, then apply that diff to the new repo. Rather circuitous. But with any luck that situation won't arise very often.

Update: thanks very much to Houz for the solution! (In the comments, below.) Just append .diff or .patch to the pull request URL, e.g. https://github.com/OWNER/REPO/pull/REQUEST-ID.diff which you can view in a browser or fetch with wget or curl.
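
If you'd rather script that download step than reach for wget or curl, here's a minimal Python sketch of the same idea; OWNER, REPO and the pull request number 1234 below are placeholders for illustration, not a real pull request:

try:
    from urllib.request import urlretrieve   # Python 3
except ImportError:
    from urllib import urlretrieve           # Python 2

# Download the pull request as a unified diff (OWNER/REPO/1234 are placeholders):
urlretrieve("https://github.com/OWNER/REPO/pull/1234.diff", "pr-1234.diff")

Once downloaded, the diff can be applied in the target repository with git apply pr-1234.diff.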

January 23, 2017 09:34 PM

January 19, 2017

Akkana Peck

Plotting Shapes with Python Basemap without Shapefiles

In my article on Plotting election (and other county-level) data with Python Basemap, I used ESRI shapefiles for both states and counties.

But one of the election data files I found, OpenDataSoft's USA 2016 Presidential Election by county, had embedded county shapes, available either as CSV or as GeoJSON. (I used the CSV version, but inside the CSV the geo data are encoded as JSON so you'll need JSON decoding either way. But that's no problem.)

Just about all the documentation I found on coloring shapes in Basemap assumed that the shapes were defined as ESRI shapefiles. How do you draw shapes if you have latitude/longitude data in a more open format?

As it turns out, it's quite easy, but it took a fair amount of poking around inside Basemap to figure out how it worked.

In the loop over counties in the US in the previous article, the end goal was to create a matplotlib Polygon and use that to add a Basemap patch. But matplotlib's Polygon wants map coordinates, not latitude/longitude.

If m is your basemap (i.e. you created the map with m = Basemap( ... )), you can translate coordinates like this:

    (mapx, mapy) = m(longitude, latitude)

So once you have a region as a list of (longitude, latitude) coordinate pairs, you can create a colored, shaped patch like this:

    # Convert each (longitude, latitude) pair in the region to map coordinates in place:
    for coord_pair in region:
        coord_pair[0], coord_pair[1] = m(coord_pair[0], coord_pair[1])
    # Build a matplotlib.patches.Polygon from the converted points and add it as a patch:
    poly = Polygon(region, facecolor=color, edgecolor=color)
    ax.add_patch(poly)

Working with the OpenDataSoft data file was actually a little harder than that, because the list of coordinates was JSON-encoded inside the CSV file, so I had to decode it with json.loads(county["Geo Shape"]). Once decoded, it had some counties as a Polygon, a list of lists (allowing for discontiguous outlines), and others as a MultiPolygon, a list of lists of lists (I'm not sure why, since the Polygon format already allows for discontiguous boundaries).
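
Here's a minimal sketch of that decoding, assuming the standard GeoJSON layout for Polygon and MultiPolygon (the helper name is just for illustration):

    import json

    def rings_for_county(county):
        '''Normalize a county's "Geo Shape" JSON into a flat list of rings,
           where each ring is a list of [longitude, latitude] pairs.
        '''
        shape = json.loads(county["Geo Shape"])
        if shape["type"] == "Polygon":
            # A Polygon is a list of rings (the outline plus any holes).
            return shape["coordinates"]
        if shape["type"] == "MultiPolygon":
            # A MultiPolygon is a list of polygons, each a list of rings.
            rings = []
            for poly in shape["coordinates"]:
                rings.extend(poly)
            return rings
        return []

Each ring can then be converted with m() and turned into a Polygon patch just like the region example above.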

[Blue-red-purple 2016 election map]

And a few counties were missing, so there were blanks on the map, which show up as white patches in this screenshot. The counties missing data either have inconsistent formatting in their coordinate lists, or they have only one coordinate pair, and they include Washington, Virginia; Roane, Tennessee; Schley, Georgia; Terrell, Georgia; Marshall, Alabama; Williamsburg, Virginia; and Pike, Georgia; plus Oglala Lakota (which is clearly meant to be Oglala, South Dakota), and all of Alaska.

One thing about crunching data files from the internet is that there are always a few special cases you have to code around. And I could have gotten those coordinates from the census shapefiles; but as long as I needed the census shapefile anyway, why use the CSV shapes at all? In this particular case, it makes more sense to use the shapefiles from the Census.

Still, I'm glad to have learned how to use arbitrary coordinates as shapes, freeing me from the proprietary and annoying ESRI shapefile format.

The code: Blue-red map using CSV with embedded county shapes

January 19, 2017 04:36 PM

Nathan Haines

UbuCon Summit at SCALE 15x Call for Papers

UbuCons are a remarkable achievement from the Ubuntu community: a network of conferences across the globe, organized by volunteers passionate about Open Source and about collaborating, contributing, and socializing around Ubuntu. UbuCon Summit at SCALE 15x is the next in the impressive series of conferences.

UbuCon Summit at SCALE 15x takes place in Pasadena, California on March 2nd and 3rd during the first two days of SCALE 15x. Ubuntu will also have a booth at SCALE's expo floor from March 3rd through 5th.

We are putting together the conference schedule and are announcing a call for papers. While we have some amazing speakers and an always-vibrant unconference schedule planned, it is the community, as always, who make UbuCon what it is—just as the community sets Ubuntu apart.

Interested speakers who have Ubuntu-related topics can submit their talk to the SCALE call for papers site. UbuCon Summit has a wide range of both developers and enthusiasts, so any interesting topic is welcome, no matter how casual or technical. The SCALE CFP form is available here:

http://www.socallinuxexpo.org/scale/15x/cfp

Over the next few weeks we’ll be sharing more details about the Summit, revamping the global UbuCon site and updating the SCALE schedule with all relevant information.

http://www.ubucon.org/

About SCaLE:

SCALE 15x, the 15th Annual Southern California Linux Expo, is the largest community-run Linux/FOSS showcase event in North America. It will be held from March 2-5 at the Pasadena Convention Center in Pasadena, California. For more information on the expo, visit https://www.socallinuxexpo.org

January 19, 2017 10:12 AM

January 14, 2017

Akkana Peck

Plotting election (and other county-level) data with Python Basemap

After my arduous search for open 2016 election data by county, as a first test I wanted one of those red-blue-purple charts of how Democratic or Republican each county's vote was.

I used the Basemap package for plotting. It used to be part of matplotlib, but it's been split off into its own toolkit, grouped under mpl_toolkits: on Debian, it's available as python-mpltoolkits.basemap, or you can find Basemap on GitHub.

It's easiest to start with the fillstates.py example that shows how to draw a US map with different states colored differently. You'll need the three shapefiles (because of ESRI's silly shapefile format): st99_d00.dbf, st99_d00.shp and st99_d00.shx, available in the same examples directory.

Of course, to plot counties, you need county shapefiles as well. The US Census has county shapefiles at several different resolutions (I used the 500k version). Then you can plot state and counties outlines like this:

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

def draw_us_map():
    # Set the lower left and upper right limits of the bounding box:
    lllon = -119
    urlon = -64
    lllat = 22.0
    urlat = 50.5
    # and calculate a centerpoint, needed for the projection:
    centerlon = float(lllon + urlon) / 2.0
    centerlat = float(lllat + urlat) / 2.0

    m = Basemap(resolution='i',  # crude, low, intermediate, high, full
                llcrnrlon = lllon, urcrnrlon = urlon,
                lon_0 = centerlon,
                llcrnrlat = lllat, urcrnrlat = urlat,
                lat_0 = centerlat,
                projection='tmerc')

    # Read state boundaries.
    shp_info = m.readshapefile('st99_d00', 'states',
                               drawbounds=True, color='lightgrey')

    # Read county boundaries
    shp_info = m.readshapefile('cb_2015_us_county_500k',
                               'counties',
                               drawbounds=True)

if __name__ == "__main__":
    draw_us_map()
    plt.title('US Counties')
    # Get rid of some of the extraneous whitespace matplotlib loves to use.
    plt.tight_layout(pad=0, w_pad=0, h_pad=0)
    plt.show()
[Simple map of US county borders]

Accessing the state and county data after reading shapefiles

Great. Now that we've plotted all the states and counties, how do we get a list of them, so that when I read out "Santa Clara, CA" from the data I'm trying to plot, I know which map object to color?

After calling readshapefile('st99_d00', 'states'), m has two new members, both lists: m.states and m.states_info.

m.states_info[] is a list of dicts mirroring what was in the shapefile. For the Census state list, the useful keys are NAME, AREA, and PERIMETER. There's also STATE, which is an integer (not restricted to 1 through 50) but I'll get to that.

If you want the shape for, say, California, iterate through m.states_info[] looking for the one where m.states_info[i]["NAME"] == "California". Note the index i; the shape coordinates will be in m.states[i] (in basemap map coordinates, not latitude/longitude).
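
For example, a minimal sketch of that lookup (the helper name is just for illustration) might look like this:

    def state_shape(m, name):
        '''Return the first shape in m.states whose record matches name.'''
        for i, info in enumerate(m.states_info):
            if info["NAME"] == name:
                # Map coordinates; states with islands have several entries,
                # and this returns only the first one found.
                return m.states[i]
        return None

    california = state_shape(m, "California")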

Correlating states and counties in Census shapefiles

County data is similar, with county names in m.counties_info[i]["NAME"]. Remember that STATE integer? Each county has a STATEFP, m.counties_info[i]["STATEFP"] that matches some state's m.states_info[i]["STATE"].

But doing that search every time would be slow. So right after calling readshapefile for the states, I make a table of states. Empirically, STATE in the state list goes up to 72. Why 72? Shrug.

    MAXSTATEFP = 73
    states = [None] * MAXSTATEFP
    for state in m.states_info:
        statefp = int(state["STATE"])
        # Many states have multiple entries in m.states (because of islands).
        # Only add it once.
        if not states[statefp]:
            states[statefp] = state["NAME"]

That'll make it easy to look up a county's state name quickly when we're looping through all the counties.

Calculating colors for each county

Time to figure out the colors from the Deleetdk election results CSV file. Reading lines from the CSV file into a dictionary is superficially easy enough:

    fp = open("tidy_data.csv")
    reader = csv.DictReader(fp)

    # Make a dictionary of all "county, state" and their colors.
    county_colors = {}
    for county in reader:
        # What color is this county?
        pop = float(county["votes"])
        blue = float(county["results.clintonh"])/pop
        red = float(county["Total.Population"])/pop
        county_colors["%s, %s" % (county["name"], county["State"])] \
            = (red, 0, blue)

But in practice, that wasn't good enough, because the county names in the Deleetdk data didn't always match the official Census county names.

Fuzzy matches

For instance, the CSV file had no results for Alaska or Puerto Rico, so I had to skip those. Non-ASCII characters were a problem: "Doña Ana" county in the census data was "Dona Ana" in the CSV. I had to strip off " County", " Borough" and similar terms: "St Louis" in the census data was "St. Louis County" in the CSV. Some names were capitalized differently, like PLYMOUTH vs. Plymouth, or Lac Qui Parle vs. Lac qui Parle. And some names were just different, like "Jeff Davis" vs. "Jefferson Davis".

To get around that I used SequenceMatcher to look for fuzzy matches when I couldn't find an exact match:

from difflib import SequenceMatcher

def fuzzy_find(s, slist):
    '''Try to find a fuzzy match for s in slist.
    '''
    best_ratio = -1
    best_match = None

    ls = s.lower()
    for ss in slist:
        r = SequenceMatcher(None, ls, ss.lower()).ratio()
        if r > best_ratio:
            best_ratio = r
            best_match = ss
    if best_ratio > .75:
        return best_match
    return None

Correlate the county names from the two datasets

It's finally time to loop through the counties in the map to color and plot them.

Remember STATE vs. STATEFP? It turns out there are a few counties in the census county shapefile with a STATEFP that doesn't match any STATE in the state shapefile. Mostly they're in the Virgin Islands and I don't have election data for them anyway, so I skipped them for now. I also skipped Puerto Rico and Alaska (no results in the election data) and counties that had no corresponding state: I'll omit that code here, but you can see it in the final script, linked at the end.

    for i, county in enumerate(m.counties_info):
        countyname = county["NAME"]
        try:
            statename = states[int(county["STATEFP"])]
        except IndexError:
            print countyname, "has out-of-index statefp of", county["STATEFP"]
            continue

        countystate = "%s, %s" % (countyname, statename)
        try:
            ccolor = county_colors[countystate]
        except KeyError:
            # No exact match; try for a fuzzy match
            fuzzyname = fuzzy_find(countystate, county_colors.keys())
            if fuzzyname:
                ccolor = county_colors[fuzzyname]
                county_colors[countystate] = ccolor
            else:
                print "No match for", countystate
                continue

        countyseg = m.counties[i]
        poly = Polygon(countyseg, facecolor=ccolor)  # edgecolor="white"
        ax.add_patch(poly)

Moving Hawaii

Finally, although the CSV didn't have results for Alaska, it did have Hawaii. To display it, you can move it when creating the patches:

    countyseg = m.counties[i]
    if statename == 'Hawaii':
        countyseg = list(map(lambda (x,y): (x + 5750000, y-1400000), countyseg))
    poly = Polygon(countyseg, facecolor=ccolor)
    ax.add_patch(poly)
The offsets are in map coordinates and are empirical; I fiddled with them until Hawaii showed up at a reasonable place. [Blue-red-purple 2016 election map]

Well, that was a much longer article than I intended. Turns out it takes a fair amount of code to correlate several datasets and turn them into a map. But a lot of the work will be applicable to other datasets.

Full script on GitHub: Blue-red map using Census county shapefile

January 14, 2017 10:10 PM

Elizabeth Krumbach

Holidays in Philadelphia

In December MJ and I spent a couple weeks on the east coast in the new townhouse. It was the first long stay we’ve had there together, and though the holidays limited how much we could get done, particularly when it came to contractors, we did have a whole bunch to do.

First, I continued my quest to go through boxes of things that almost exclusively belonged to MJ’s grandparents. Unpacking, cataloging and deciding what pieces stay in Pennsylvania and what we’re sending to California. In the course of this I also had a deadline creeping up on me, as I needed to find the menorah before Hanukkah began on the evening of December 24th. The timing of Hanukkah landing right alongside Christmas and New Year’s worked out well for us: MJ had some time off, which made the timing of the visit even more of a no-brainer. Plus, we were able to celebrate the entire eight-night holiday there in Philadelphia rather than breaking it up between there and San Francisco.

The most amusing thing about finding the menorah was that it’s nearly identical to the one we have at home. MJ had mentioned that it was similar when I picked it out, but I had no idea that it was almost identical. Nothing wrong with the familiar, it’s a beautiful menorah.

House-wise MJ got the garage door opener installed and shelves put up in the powder room. With the help of his friend Tim, he also got the coffee table put together and the television mounted over the fireplace on New Years Eve. The TV was up in time to watch some of the NYE midnight broadcasts! We got the mail handling, trash schedule and cleaning sorted out with relatives who will be helping us with that, so the house will be well looked after in our absence.

I put together the vacuum and used it for the first time as I did the first thorough tidying of the house since we’d moved everything in from storage. I got my desk put together in the den, even though it’s still surrounded by boxes and will be until we ship stuff out to California. I was able to finally unpack some things we had actually ordered the last time I was in town but never got to put around the house, like a bunch of trash cans for various rooms and some kitchen goodies from ThinkGeek (Death Star waffle maker! R2-D2 measuring cups!). We also ordered a pair of counter-height chairs for the kitchen and they arrived in time for me to put them together just before we left, so the kitchen is also coming together even though we still need to go shopping for pots and pans.

Family-wise, we did a lot of visiting. On Christmas Eve we went to the nearby Samarkand restaurant, featuring authentic Uzbek food. It was wonderful. We also did various lunches and dinners. A couple days were also spent going down to the city to visit a relative who is recovering in the hospital.

I didn’t see everyone I wanted to see but we did also get to visit with various friends. I saw my beloved Rogue One: A Star Wars Story a second time and met up with Danita to see Moana, which was great. I’ve now listened to the Moana soundtrack more than a few times. We met up with Crissi and her boyfriend Henry at Grand Lux Cafe in King of Prussia, where we also had a few errands to run and I was able to pick up some mittens at L.L. Bean. New Year’s Eve was spent with our friends Tim and Colleen, where we ordered pizza and hung the aforementioned television. They also brought along some sweet bubbly for us to enjoy at midnight.

We also had lots of our favorite foods! We celebrated together at MJ’s favorite French cuisine inspired Chinese restaurant in Chestnut Hill, CinCin. We visited some of our standard favorites, including The Continental and Mad Mex. Exploring around our new neighborhood, we indulged in some east coast Chinese, made it to a Jewish deli where I got a delicious hoagie, found a sushi place that has an excellent roll list. We also went to Chickie’s and Pete’s crab house a couple of times, which, while being a Philadelphia establishment, I’d never actually been to. We also had a dinner at The Melting Pot, where I was able to try some local beers along with our fondue, and I’m delighted to see how much the microbrewery scene has grown since I moved away. We also hit a few diners during our stay, and enjoyed some eggnog from Wawa, which is some of the best eggnog ever made.

Unfortunately it wasn’t all fun. I’ve been battling a nasty bout of bronchitis for the past couple months. This continued ailment led to a visit to urgent care to get it looked at, and an x-ray to confirm I didn’t have pneumonia. A pile of medication later, my bronchitis lingered, and later in the week I spontaneously developed hives on my neck, which confounded the doctor. In the midst of health woes, I also managed to cut my foot on some broken glass while I was unpacking. It bled a lot, and I was a bit hobbled for a couple days while it healed. Thankfully MJ cleaned it out thoroughly (ouch!) once the bleeding had subsided and it has healed up nicely.

As the trip wound down I found myself missing the cats and eager to get home where I’d begin my new job. Still, it was with a heavy heart that we left our beautiful new vacation home, family and friends on the east coast.

by pleia2 at January 14, 2017 07:32 AM

January 12, 2017

Akkana Peck

Getting Election Data, and Why Open Data is Important

Back in 2012, I got interested in fiddling around with election data as a way to learn about data analysis in Python. So I went searching for results data on the presidential election. And got a surprise: it wasn't available anywhere in the US. After many hours of searching, the only source I ever found was at the UK newspaper, The Guardian.

Surely in 2016, we're better off, right? But when I went looking, I found otherwise. There's still no official source for US election results data; there isn't even a source as reliable as The Guardian this time.

You might think Data.gov would be the place to go for official election results, but no: searching for 2016 election on Data.gov yields nothing remotely useful.

The Federal Election Commission has an election results page, but it only goes up to 2014 and only includes the Senate and House, not presidential elections. Archives.gov has popular vote totals for the 2012 election but not the current one. Maybe in four years, they'll have some data.

After striking out on official US government sites, I searched the web. I found a few sources, none of them even remotely official.

Early on I found Simon Rogers, How to Download County-Level Results Data, which leads to GitHub user tonmcg's County Level Election Results 12-16. It's a comparison of Democratic vs. Republican votes in the 2012 and 2016 elections (I assume that means votes for that party's presidential candidate, though the field names don't make that entirely clear), with no information on third-party candidates.

KidPixo's Presidential Election USA 2016 on GitHub is a little better: the fields make it clear that it's recording votes for Trump and Clinton, but still no third party information. It's also scraped from the New York Times, and it includes the scraping code so you can check it and have some confidence in the source of the data.

Kaggle claims to have election data, but you can't download their datasets or even see what they have without signing up for an account. Ben Hamner has some publicly available Kaggle data on GitHub, but only for the primary. I also found several companies selling election data, and several universities that had datasets available for researchers with accounts at that university.

The most complete dataset I found, and the only open one that included third party candidates, was through OpenDataSoft. Like the other two, this data is scraped from the NYT. It has data for all the minor party candidates as well as the majors, plus lots of demographic data for each county in the lower 48, plus Hawaii, but not the territories, and the election data for all the Alaska counties is missing.

You can get it either from a GitHub repo, Deleetdk's USA.county.data (look in inst/ext/tidy_data.csv). If you want a larger version with geographic shape data included, clicking through several other opendatasoft pages eventually gets you to an export page, USA 2016 Presidential Election by county, where you can download CSV, JSON, GeoJSON and other formats.

The OpenDataSoft data file was pretty useful, though it had gaps (for instance, there's no data for Alaska). I was able to make my own red-blue-purple plot of county voting results (I'll write separately about how to do that with python-basemap), and to play around with statistics.

Implications of the lack of open data

But the point my search really brought home: By the time I finally found a workable dataset, I was so sick of the search, and so relieved to find anything at all, that I'd stopped being picky about where the data came from. I had long since given up on finding anything from a known source, like a government site or even a newspaper, and was just looking for data, any data.

And that's not good. It means that a lot of the people doing statistics on elections are using data from unverified sources, probably copied from someone else who claimed to have scraped it, using unknown code, from some post-election web page that likely no longer exists. Is it accurate? There's no way of knowing.

What if someone wanted to spread fake news and misinformation? There's a hunger for data, particularly on something as important as a US Presidential election. Looking at Google's suggested results and "Searches related to" made it clear that it wasn't just me: there are a lot of people searching for this information and not being able to find it through official sources.

If I were a foreign power wanting to spread disinformation, providing easily available data files -- to fill the gap left by the US Government's refusal to do so -- would be a great way to mislead people. I could put anything I wanted in those files: there's no way of checking them against official results since there are no official results. Just make sure the totals add up to what people expect to see. You could easily set up an official-looking site and put made-up data there, and it would look a lot more real than all the people scraping from the NYT.

If our government -- or newspapers, or anyone else -- really wanted to combat "fake news", they should take open data seriously. They should make datasets for important issues like the presidential election publicly available, as soon as possible after the election -- not four years later when nobody but historians cares any more. Without that, we're leaving ourselves open to fake news and fake data.

January 12, 2017 11:41 PM

January 09, 2017

Akkana Peck

Snowy Winter Days, and an Elk Visit

[Snowy view of the Rio Grande from Overlook]

The snowy days here have been so pretty, the snow contrasting with the darkness of the piñons and junipers and the black basalt. The light fluffy crystals sparkle in a rainbow of colors when they catch the sunlight at the right angle, but I've been unable to catch that effect in a photo.

We've had some unusual holiday visitors, too, culminating in this morning's visit from a huge bull elk.

[bull elk in the yard] Dave came down to make coffee and saw the elk in the garden right next to the window. But by the time I saw him, he was farther out in the yard. And my DSLR batteries were dead, so I grabbed the point-and-shoot and got what I could through the window.

Fortunately for my photography the elk wasn't going anywhere in any hurry. He has an injured leg, and was limping badly. He slowly made his way down the hill and into the neighbors' yard. I hope he returns. Even with a limp that bad, an elk that size has no predators in White Rock, so as long as he stays off the nearby San Ildefonso reservation (where hunting is allowed) and manages to find enough food, he should be all right. I'm tempted to buy some hay to leave out for him.

[Sunset light on the Sangre de Cristos] Some of the sunsets have been pretty nice, too.

A few more photos.

January 09, 2017 02:48 AM

January 08, 2017

Akkana Peck

Using virtualenv to replace the broken pip install --user

Python's installation tool, pip, has some problems on Debian.

The obvious way to use pip is as root: sudo pip install packagename. If you hang out in Python groups at all, you'll quickly find that this is strongly frowned upon. It can lead to your pip-installed packages intermingling with the ones installed by Debian's apt-get, possibly causing problems during apt system updates.

The second most obvious way, as you'll see if you read pip's man page, is pip install --user packagename. This installs the package with only user permissions, not root, under a directory called ~/.local. Python automatically checks .local as part of its PYTHONPATH, and you can add ~/.local/bin to your PATH, so this makes everything transparent.

Or so I thought until recently, when I discovered that pip install --user ignores system-installed packages when it's calculating its dependencies, so you could end up with a bunch of incompatible versions of packages installed. Plus it takes forever to re-download and re-install dependencies you already had.

Pip has a clear page describing how pip --user is supposed to work, and that isn't what it's doing. So I filed pip bug 4222; but since pip has 687 open bugs filed against it, I'm not terrifically hopeful of that getting fixed any time soon. So I needed a workaround.

Use virtualenv instead of --user

Fortunately, it turned out that pip install works correctly in a virtualenv if you include the --system-site-packages option. I had thought virtualenvs were for testing, but quite a few people on #python said they used virtualenvs all the time, as part of their normal runtime environments. (Maybe due to pip's deficiencies?) I had heard people speak deprecatingly of --user in favor of virtualenvs but was never clear why; maybe this is why.

So, what I needed was to set up a virtualenv that I can keep around all the time and use by default every time I log in. I called it ~/.pythonenv when I created it:

virtualenv --system-site-packages $HOME/.pythonenv

Normally, the next thing you do after creating a virtualenv is to source a script called bin/activate inside the venv. That sets up your PATH, PYTHONPATH and a bunch of other variables so the venv will be used in all the right ways. But activate also changes your prompt, which I didn't want in my normal runtime environment. So I stuck this in my .zlogin file:

VIRTUAL_ENV_DISABLE_PROMPT=1 source $HOME/.pythonenv/bin/activate

Now I'll activate the venv once, when I log in (and once in every xterm window since I set XTerm*loginShell: true in my .Xdefaults). I see my normal prompt, I can use the normal Debian-installed Python packages, and I can install additional PyPI packages with pip install packagename (no --user, no sudo).

January 08, 2017 06:37 PM

January 05, 2017

Elizabeth Krumbach

The Girard Avenue Line

While I was in Philadelphia over the holidays a friend clued me into the fact that one of the historic streetcars (trolleys) on the Girard Avenue Line was decorated for the holidays. This line, SEPTA Route 15, is the last historic trolley line in Philadelphia and I had never ridden it before. This was the perfect opportunity!

I decided that I’d make the whole day about trains, so that morning I hopped on the SEPTA West Trenton Line regional rail, which has a stop near our place north of Philadelphia. After a cheesesteak lunch near Jefferson Station, it was on to the Market-Frankford Line subway/surface train to get up to Girard Station.

My goal for the afternoon was to see and take pictures of the holiday car, number 2336. So, with the friend I dragged along on this crazy adventure, we started waiting. The first couple trolleys weren’t decorated, so we hopped on another to get out of the chilly weather for a bit. Got off that trolley and waited for a few more, in both directions. This was repeated a couple times until we finally got a glimpse of the decorated trolley heading back to Girard Station. Now on our radar, we hopped on the next one and followed that trolley!


The non-decorated, but still lovely, 2335

We caught up with the decorated trolley after the turnaround at the end of the line and got on just after Girard Station. From there we took it all the way to the end of the line in west Philadelphia at 63rd St. There we had to disembark, and I took a few pictures of the outside.

We were able to get on again after the driver took a break, which allowed us to take it all the way back.

The car was decorated inside and out, with lights, garland and signs.

At the end the driver asked if we’d just been on it to take a ride. Yep! I came just to see this specific trolley! Since it was getting dark anyway, he was kind enough to turn the outside lights on for me so I could get some pictures.

Since this was my first time riding this line, I was able to make some observations about how these cars differ from the PCCs that run in San Francisco. In the historic fleet of San Francisco streetcars, the 1055 has the same livery as the trolleys that run in Philadelphia today. Most of the PCCs in San Francisco’s fleet actually came from SEPTA in Philadelphia and this one is no exception, originally numbered 2122 while in service there. However, taking a peek inside, it’s easy to see that it’s a bit different from the ones that run in Philadelphia today:


Inside the 1055 in San Francisco

The inside of this one looks shiny compared to the inside of the ones still running in Philadelphia. It’s all metal versus the plastic interior in Philadelphia, and the walls of the car are much thinner in San Francisco. I suspect this is all due to climate control requirements. In San Francisco we don’t really have seasons and the temperature stays pretty comfortable, so while there is a little climate control, it’s nothing compared to what the cars in Philadelphia need in the summer and winter. You can also see a difference from the outside: the entire top of the Philadelphia cars has a raised portion, which seems to be climate control, but on the San Francisco cars it’s only a small section at the center:


Outside the 1055 in San Francisco

Finally, the seats and wheelchair accessibility are different. The seats are all plastic in San Francisco, whereas they have fabric in Philadelphia. The raised platforms themselves and a portable metal platform serve as wheelchair access in San Francisco, whereas Philadelphia has an actual operating lift since there are many street-level stops.

To wrap up the trolley adventure, we hopped on a final one to get us to Broad Street where we took the Broad Street Line subway down to dinner at Sazon on Spring Garden Street, where we had a meal that concluded with some of the best hot chocolate I’ve ever had. Perfect to warm us up after spending all afternoon chasing trolleys in Philadelphia December weather.

Dinner finished, I took one last train, the regional rail to head back to the suburbs.

More photos from the trolleys on the Girard Avenue Line here: https://www.flickr.com/photos/pleia2/albums/72157676838141261

by pleia2 at January 05, 2017 08:47 AM

January 04, 2017

Akkana Peck

Firefox "Reader Mode" and NoScript

A couple of days ago I blogged about using Firefox's "Delete Node" to make web pages more readable. In a subsequent Twitter discussion someone pointed out that if the goal is to make a web page's content clearer, Firefox's relatively new "Reader Mode" might be a better way.

I knew about Reader Mode but hadn't used it. It only shows up on some pages, as a little "open book" icon to the right of the URL bar, just left of the Refresh/Stop button. It did show up on the Pogue Yahoo article; but when I clicked it, I just got a big blank page with an icon of a circle with a horizontal dash; no text.

It turns out that to see Reader Mode content when using NoScript, you must explicitly enable JavaScript from about:reader.

There are some reasons it's not automatically whitelisted: see discussions in bug 1158071 and bug 1166455 -- so enable it at your own risk. But it's nice to be able to use Reader Mode, and I'm glad the Twitter discussion spurred me to figure out why it wasn't working.

January 04, 2017 06:37 PM

January 02, 2017

Akkana Peck

Firefox's "Delete Node" eliminates pesky content-hiding banners

It's trendy among web designers today -- the kind who care more about showing ads than about the people reading their pages -- to use fixed banner elements that hide part of the page. In other words, you have a header, some content, and maybe a footer; and when you scroll the content to get to the next page, the header and footer stay in place, meaning that you can only read the few lines sandwiched in between them. But at least you can see the name of the site no matter how far you scroll down in the article! Wouldn't want to forget the site name!

Worse, many of these sites don't scroll properly. If you Page Down, the content moves a full page up, which means that the top of the new page is now hidden under that fixed banner and you have to scroll back up a few lines to continue reading where you left off. David Pogue wrote about that problem recently and it got a lot of play when Slashdot picked it up: These 18 big websites fail the space-bar scrolling test.

It's a little too bad he concentrated on the spacebar. Certainly it's good to point out that hitting the spacebar scrolls down -- I was flabbergasted to read the Slashdot discussion and discover that lots of people didn't already know that, since it's been my most common way of paging since browsers were invented. (Shift-space does a Page Up.) But the Slashdot discussion then veered off into a chorus of "I've never used the spacebar to scroll so why should anyone else care?", when the issue has nothing to do with the spacebar: the issue is that Page Down doesn't work right, whichever key you use to trigger that page down.

But never mind that. Fixed headers that don't scroll are bad even if the content scrolls the right amount, because it wastes precious vertical screen space on useless cruft you don't need. And I'm here to tell you that you can get rid of those annoying fixed headers, at least in Firefox.

[Article with intrusive Yahoo headers]

Let's take Pogue's article itself, since Yahoo is a perfect example of annoying content that covers the page and doesn't go away. First there's that enormous header -- the bottom row of menus ("Tech Home" and so forth) disappear once you scroll, but the rest stay there forever. Worse, there's that annoying popup on the bottom right ("Privacy | Terms" etc.) which blocks content, and although Yahoo! scrolls the right amount to account for the header, it doesn't account for that privacy bar, which continues to block most of the last line of every page.

The first step is to call up the DOM Inspector. Right-click on the thing you want to get rid of and choose Inspect Element:

[Right-click menu with Inspect Element]


That brings up the DOM Inspector window, which looks like this (click on the image for a full-sized view):

[DOM Inspector]

The upper left area shows the hierarchical structure of the web page.

Don't Panic! You don't have to know HTML or understand any of this for this technique to work.

Hover your mouse over the items in the hierarchy. Notice that as you hover, different parts of the web page are highlighted in translucent blue.

Generally, whatever element you started on will be a small part of the header you're trying to eliminate. Move up one line, to the element's parent; you may see that a bigger part of the header is highlighted. Move up again, and keep moving up, one line at a time, until the whole header is highlighted, as in the screenshot. There's also a dark grey window telling you something about the HTML, if you're interested; if you're not, don't worry about it.

Eventually you'll move up too far, and some other part of the page, or the whole page, will be highlighted. You need to find the element that makes the whole header blue, but nothing else.

Once you've found that element, right-click on it to get a context menu, and look for Delete Node (near the bottom of the menu). Clicking on that will delete the header from the page.

Repeat for any other part of the page you want to remove, like that annoying bar at the bottom right. And you're left with a nice, readable page, which will scroll properly and let you read every line, and will show you more text per page so you don't have to scroll as often.

[Article with intrusive Yahoo headers]

It's a useful trick. You can also use Inspect/Delete Node for many of those popups that cover the screen telling you "subscribe to our content!" It's especially handy if you like to browse with NoScript, so you can't dismiss those popups by clicking on an X. So happy reading!

Addendum on Spacebars

By the way, in case you weren't aware that the spacebar did a page down, here's another tip that might come in useful: the spacebar also advances to the next slide in just about every presentation program, from PowerPoint to Libre Office to most PDF viewers. I'm amazed at how often I've seen presenters squinting with a flashlight at the keyboard trying to find the right-arrow or down-arrow or page-down or whatever key they're looking for. These are all ways of advancing to the next slide, but they're all much harder to find than that great big spacebar at the bottom of the keyboard.

January 02, 2017 11:23 PM

Elizabeth Krumbach

The adventures of 2016

2016 was filled with professional successes and exciting adventures, but also various personal struggles. I exhausted myself finishing two books, navigated some complicated parts of my marriage, experienced my whole team getting laid off from a job we loved, handled an uptick in migraines and a continuing bout of bronchitis, and am still coming to terms with the recent loss.

It’s been difficult to maintain perspective, but it actually was an incredible year. I succeeded in having two books come out, my travels took me to some new, amazing places, we bought a vacation house, and all my blood work shows that I’m healthier than I was at this time last year.


Lots more running in 2016 led to a healthier me!

Some of the tough stuff has even been good. I have succeeded in strengthening bonds with my husband and several people in my life who I care about. I’ve worked hard to worry less and enjoy time with friends and family, which may explain why this year ended up being the one of the group selfie. I paused to capture happy moments with my loved ones a lot more often.

So without further ado, the more quantitative year roundup!

The 9th edition of the The Official Ubuntu Book came out in July. This is the second edition I’ve been part of preparing. The book has updates to bring us up to the 16.04 release and features a whole new chapter covering “Ubuntu, Convergence, and Devices of the Future” which I was really thrilled about adding. My work with Matthew Helmke and José Antonio Rey was also very enjoyable. I wrote about the release here.

I also finished the first book I was the lead author on, Common OpenStack Deployments. Writing a book takes a considerable amount of time and effort: I spent many long nights and weekends testing and tweaking configurations largely written by my contributing author, Matt Fischer, writing copy for the book and integrating feedback from our excellent fleet of reviewers and other contributors. In the end, we released a book that takes the reader from knowing nothing about OpenStack to doing sample deployments using the same Puppet-driven tooling that enterprises use in their environments. The book came out in September; I wrote about it on my own blog here and maintain a blog about the book at DeploymentsBook.com.


Book adventures at the Ocata OpenStack Summit in Barcelona! Thanks to Nithya Ruff for taking a picture of me with my book at the Women of OpenStack area of the expo hall (source) and Brent Haley for getting the picture of Lisa-Marie and I (source).

This year also brought a new investment to our lives, we bought a vacation home in Pennsylvania! It’s a new construction townhouse, so we spent a fair amount of time on the east coast the second half of this year searching for a place, picking out the details and closing. We then spent the winter holidays here, spending a full two weeks away from home to really settle in. I wrote more about our new place here.

I keep saying I won’t travel as much, but 2016 turned out to have more travel than ever, with over 100,000 miles of flights again.


Feeding a kangaroo, just outside of Melbourne, Australia

At the Jain Temple in Mumbai, India

We had lots of beers in Germany! Photo in the center by Chris Hoge (source)

Barcelona is now one of my favorite places, and its Sagrada Familia Basilica was breathtaking

Most of these conferences and events had a speaking component for me, but I also did a fair number of local talks and at some conferences I spoke more than once. The following is a rundown of all these talks I did in 2016, along with slides.


Photo by Masayuki Igawa (source) from Linux Conf AU in Geelong

Photo by Johanna Koester (source) from my keynote at the Ocata OpenStack Summit

MJ and I have also continued to enjoy our beloved home city of San Francisco, both with just the two of us and with various friends and family. We saw a couple Giants baseball games, along with one of the Sharks playoff games! Sampled a variety of local drinks and foods, visited lots of local animals and took in some amazing local sights. We went to the San Francisco Symphony for the first time, enjoyed a wonderful time together over Labor Day weekend and I’ve skipped out at times to visit museum exhibits and the zoo.


Dinner at Luce in San Francisco, celebrating MJ’s new job

This year I also geeked out over trains – in four states and five countries! In May MJ and I traveled to Maine to spend some time with family, and a couple days of that trip were spent visiting the Seashore Trolley Museum in Kennebunkport and the Narrow Gauge Railroad Museum in Portland, I wrote about it here. I also enjoyed MUNI Heritage Weekend with my friend Mark at the end of September, where we got to see some of the special street cars and ride several vintage buses, read about that here. I also went up to New York City to finally visit the famous New York Transit Museum in Brooklyn and accompanying holiday exhibit at the Central Station with my friend David, details here. In Philadelphia I enjoyed the entire Girard Avenue line (Route 15), which is populated by historic PCC streetcars (trolleys), including one decorated for the holidays; I have a pile of pictures here. I also got a glimpse of a car on the historic streetcar/trolley line in Melbourne and my buddy Devdas convinced me to take a train in Mumbai, and I visited the amazing Chhatrapati Shivaji Terminus there too. MJ also helped me plan some train adventures in the Netherlands and Germany as I traveled from airports for events.


From the Seashore Trolley Museum barn

As I enter into 2017 I’m thrilled to report that I’ll be starting a new job. Travel continues as I have trips to Australia and Los Angeles already on my schedule. I’ll also be spending time getting settled back into my life on the west coast, as I have spent 75% of my time these past couple months elsewhere.

by pleia2 at January 02, 2017 03:19 PM

December 27, 2016

Elizabeth Krumbach

OpenStack Days Mountain West 2016

A couple weeks ago I attended my last conference of the year, OpenStack Days Mountain West. After much flight shuffling following a seriously delayed flight, I arrived late on the evening prior to the conference with plenty of time to get settled in and feel refreshed for the conference in the morning.

The event kicked off with a keynote from OpenStack Foundation COO Mark Collier who spoke on the growth and success of OpenStack. His talk strongly echoed topics he touched upon at the recent OpenStack Summit back in October as he cited several major companies who are successfully using OpenStack in massive, production deployments including Walmart, AT&T and China Mobile. In keeping with the “future” theme of the conference he also talked about organizations who are already pushing the future potential of OpenStack by betting on the technology for projects that will easily exceed the capacity of what OpenStack can handle today.

Also that morning, Lisa-Marie Namphy moderated a panel on the future of OpenStack with John Dickinson, K Rain Leander, Bruce Mathews and Robert Starmer. She dove right in with the tough questions by having panelists speculate as to why the three major cloud providers don’t run OpenStack. There was also discussion about who the actual users of OpenStack were (consensus was: infrastructure operators), which got into the question of whether app developers were OpenStack users today (perhaps not, app developers don’t want a full Linux environment, they want a place for their app to live). They also discussed the expansion of other languages beyond Python in the project.

That afternoon I saw a talk by Mike Wilson of Mirantis on "OpenStack in the post Moore’s Law World" where he reflected on the current status of Moore’s Law and how it relates to cloud technologies and the projects that are part of OpenStack. He talked about how the major cloud players outside of OpenStack are helping drive innovation for their own platforms by working directly with chip manufacturers to create hardware specifically tuned to their needs. There’s a question of whether anyone in the OpenStack community is doing anything similar, and it seems that perhaps they should, so that OpenStack can have a competitive edge.

My talk was next, speaking on "The OpenStack Project Continuous Integration System", where I gave a tour of our CI system and explained how we’ve been tracking project growth and the steps we’ve taken to scale the system going into the future. Slides from the talk are available here (PDF). At the end of my talk I gave away several copies of Common OpenStack Deployments, which I also took the chance to sign. I’m delighted that one of the copies will be going to the San Diego OpenStack Meetup and another to one right there in Salt Lake City.

Later I attended Christopher Aedo’s "Transforming Organizations with OpenStack" where he walked the audience through hands-on training his team did about the OpenStack project’s development process and tooling for IBM teams around the world. The lessons learned from working with these teams, and getting them to love open processes once they could be explained in person, were inspiring. Tassoula Kokkoris wrote a great summary of the talk here: Collaborative Culture Spotlight: OpenStack Days Mountain West. I rounded off the day by going to David Medberry’s "Private Cloud Cattle and Pet Wrangling" talk where he drew experience from the private cloud at Charter Communications to discuss the move from treating servers like pets to treating them like cattle and how that works in a large organization with departments that have varying needs.

The next day began with a talk by OpenStack veteran, and now VP of Solutions at SUSE, Joseph George. He gave a talk on the state of OpenStack, with a strong message about staying on the path we set forth, which he compared to his own personal transformation to lose a significant amount of weight. In this talk, he outlined three main points that we must keep in mind in order to succeed:

  1. Clarity on the Goal and the Motivation
  2. Staying Focused During the “Middle” of the Journey
  3. Constantly Learning and Adapting

He wrote a more extensive blog post about it here which fleshes out how each of these related to himself and how they map to OpenStack: OpenStack, Now and Moving Ahead: Lessons from My Own Personal Transformation.

The next talk was a fun one from Lisa-Marie Namphy and Monty Taylor with the theme of being a naughty or nice list for the OpenStack community. They walked through various decisions, aspects of the project, and more to paint a picture of where the successes and pain points of the project are. They did a great job, managing to pull it off with humor, wit, and charm, all while also being actually informative. The morning concluded with a panel titled “OpenStack: Preferred Platform For PaaS Solutions” which had some interesting views. The panelists brought their expertise to the table to discuss what developers seeking to write to a platform wanted, and where OpenStack was weak and strong. It certainly seems to me that OpenStack is strongest as IaaS rather than PaaS, and it makes sense for OpenStack to continue focusing on being what they’ve called an “integration engine” to tie components together rather than focus on writing a PaaS solution directly. There was some talk about this on the panel, where some stressed that they did want to see OpenStack hooking into existing PaaS software offerings.


Great photo of Lisa and Monty by Gary Kevorkian, source

Lunch followed the morning talks, and I haven’t mentioned it, but the food at this event was quite good. In fact, I’d go as far as to say it was some of the best conference-supplied meals I’ve had. Nice job, folks!

Huge thanks to the OpenStack Days Mountain West crew for putting on the event. Lots of great talks and I enjoyed connecting with folks I knew, as well as meeting members of the community who haven’t managed to make it to one of the global events I’ve attended. It’s inspiring to meet with such passionate members of local groups like I found there.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157676117696131

by pleia2 at December 27, 2016 03:02 PM

December 25, 2016

Akkana Peck

Photographing Farolitos (and other night scenes)

Excellent Xmas to all! We're having a white Xmas here..

Dave and I have been discussing how "Merry Christmas" isn't alliterative like "Happy Holidays". We had trouble coming up with a good C or K adjective to go with Christmas, but then we hit on the answer: Have an Excellent Xmas! It also has the advantage of inclusivity: not everyone celebrates the birth of Christ, but Xmas is a secular holiday of lights, family and gifts, open to people of all belief systems.

Meanwhile: I spent a couple of nights recently learning how to photograph Xmas lights and farolitos.

Farolitos, a New Mexico Christmas tradition, are paper bags, weighted down with sand, with a candle inside. Sounds modest, but put a row of them alongside a roadway or along the top of a typical New Mexican adobe or faux-dobe and you have a beautiful display of lights.

They're also known as luminarias in southern New Mexico, but Northern New Mexicans insist that a luminaria is a bonfire, and the little paper bag lanterns should be called farolitos. They're pretty, whatever you call them.

Locally, residents of several streets in Los Alamos and White Rock set out farolitos along their roadsides for a few nights around Christmas, and the county cooperates by turning off streetlights on those streets. The display on Los Pueblos in Los Alamos is a zoo, a slow exhaust-choked parade of cars that reminds me of the Griffith Park light show in LA. But here in White Rock the farolito displays are a lot less crowded, and this year I wanted to try photographing them.

Canon bugs affecting night photography

I have a little past experience with night photography. I went through a brief astrophotography phase in my teens (in the pre-digital phase, so I was using film and occasionally glass plates). But I haven't done much night photography for years.

That's partly because I've had problems taking night shots with my current digital SLR camera, a Rebel XSi (known outside the US as a Canon 450D). It's old and modest as DSLRs go, but I've resisted upgrading since I don't really need more features.

Except maybe when it comes to night photography. I've tried shooting star trails, lightning shots and other nocturnal time exposures, and keep hitting a snag: the camera refuses to take a photo. I'll be in Manual mode, with my aperture and shutter speed set, with the lens in Manual Focus mode with Image Stabilization turned off. Plug in the remote shutter release, push the button ... and nothing happens except a lot of motorized lens whirring noises. Which shouldn't be happening -- in MF and non-IS mode the lens should be just sitting there inert, not whirring its motors. I couldn't seem to find a way to convince it that the MF switch meant that, yes, I wanted to focus manually.

It seemed to be primarily a problem with the EF-S 18-55mm kit lens; the camera will usually condescend to take a night photo with my other two lenses. I wondered if the MF switch might be broken, but then I noticed that in some modes the camera explicitly told me I was in manual focus mode.

I was almost to the point of ordering another lens just for night shots when I finally hit upon the right search terms and found, if not the reason it's happening, at least an excellent workaround.

Back Button Focus

I'm so sad that I went so many years without knowing about Back Button Focus. It's well hidden in the menus, under Custom Functions #10.

Normally, the shutter button does a bunch of things. When you press it halfway, the camera both autofocuses (sadly, even in manual focus mode) and calculates exposure settings.

But there's a custom function that lets you separate the focus and exposure calculations. In the Custom Functions menu option #10 (the number and exact text will be different on different Canon models, but apparently most or all Canon DSLRs have this somewhere), the heading says: Shutter/AE Lock Button. Following that is a list of four obscure-looking options:

  • AF/AE lock
  • AE lock/AF
  • AF/AF lock, no AE lock
  • AE/AF, no AE lock

The text before the slash indicates what the shutter button, pressed halfway, will do in that mode; the text after the slash is what happens when you press the * or AE lock button on the upper right of the camera back (the same button you use to zoom out when reviewing pictures on the LCD screen).

The first option is the default: press the shutter button halfway to activate autofocus; the AE lock button calculates and locks exposure settings.

The second option is the revelation: pressing the shutter button halfway will calculate exposure settings, but does nothing for focus. To focus, press the * or AE button, after which focus will be locked. Pressing the shutter button won't refocus. This mode is called "Back button focus" all over the web, but not in the manual.

Back button focus is useful in all sorts of cases. For instance, if you want to autofocus once then keep the same focus for subsequent shots, it gives you a way of doing that. It also solves my night focus problem: even with the bug (whether it's in the lens or the camera) that the lens tries to autofocus even in manual focus mode, in this mode, pressing the shutter won't trigger that. The camera assumes it's in focus and goes ahead and takes the picture.

Incidentally, the other two modes in that menu apply to AI SERVO mode when you're letting the focus change constantly as it follows a moving subject. The third mode makes the * button lock focus and stop adjusting it; the fourth lets you toggle focus-adjusting on and off.

Live View Focusing

There's one other thing that's crucial for night shots: live view focusing. Since you can't use autofocus in low light, you have to do the focusing yourself. But most DSLRs' focusing screens aren't good enough that you can look through the viewfinder and get a reliable focus on a star or even a string of holiday lights or farolitos.

Instead, press the SET button (the one in the middle of the right/left/up/down buttons) to activate Live View (you may have to enable it in the menus first). The mirror locks up and a preview of what the camera is seeing appears on the LCD. Use the zoom button (the one to the right of that */AE lock button) to zoom in; there are two levels of zoom in addition to the un-zoomed view. You can use the right/left/up/down buttons to control which part of the field the zoomed view will show. Zoom all the way in (two clicks of the + button) to fine-tune your manual focus. Press SET again to exit live view.

It's not as good as a fine-grained focusing screen, but at least it gets you close. Consider using relatively small apertures, like f/8, since that will give you more latitude for focus errors. You'll be doing time exposures on a tripod anyway, so a narrow aperture just means your exposures have to be a little longer than they otherwise would have been.

After all that, my Xmas Eve farolitos photos turned out mediocre. We had a storm blowing in, so a lot of the candles had blown out. (In the photo below you can see how the light string on the left is blurred, because the tree was blowing around so much during the 30-second exposure.) But I had fun, and maybe I'll go out and try again tonight.


An excellent X-mas to you all!

December 25, 2016 07:30 PM

Elizabeth Krumbach

The Temples and Dinosaurs of SLC

A few weeks ago I was in Salt Lake City for my last conference of the year. I was only there for a couple days, but I had some flexibility in my schedule. I was able to see most of the conference and still make time to sneak out to see some sights before my flight home at the conclusion of the conference.

The conference was located right near Temple Square. In spite of a couple flurries here and there, and the accompanying cold, I made time to visit during lunch on the first day of the conference. This square is where the most famous temple of The Church of Jesus Christ of Latter-day Saints resides, the Salt Lake Temple. Since I’d never been to Salt Lake City before, this landmark was the most obvious one to visit, and they had decorated it for Christmas.

While I don’t share their faith, it was worthy of my time. The temple is beautiful, everyone I met was welcoming and friendly, and there is important historical significance to the story of that church.

The really enjoyable time was that evening though. After some time at The Beer Hive I went for a walk with a couple colleagues through the square again, but this time all lit up with the Christmas lights! The lights were everywhere and spectacular.

And I’m sure regardless of the season, the temple itself at night is a sight to behold.

More photos from Temple Square here: https://www.flickr.com/photos/pleia2/albums/72157677633463925

The conference continued the next day and I departed in the afternoon to visit the Natural History Museum of Utah. Utah is a big deal when it comes to fossil hunting in the US, so I was eager to visit their dinosaur fossil exhibit. In addition to a variety of crafted scenes, it also features the “world’s largest display of horned dinosaur skulls” (source).

Unfortunately upon arrival I learned that the museum was without power. They were waving people in, but explained that there was only emergency lighting and some of the sections of the museum were completely closed. I sadly missed out on their very cool looking exhibit on poisons, and it was tricky seeing some of the areas that were open with so little light.

But the dinosaurs.

Have you ever seen dinosaur fossils under just emergency lighting? They were considerably more impactful and scary this way. Big fan.

I really enjoyed some of the shadows cast by their horned dinosaur skulls.

More photos from the museum here: https://www.flickr.com/photos/pleia2/sets/72157673744906273/

There should totally be a planned event where the fossils are showcased in this way. Alas, since this was unplanned, the staff decided in the late afternoon to close the museum early. This sent me on my way much earlier than I’d hoped. Still, I was glad I got to spend some time with the dinosaurs and hadn’t wasted much time elsewhere in the museum. If I’m ever in Salt Lake City again I would like to go back though; it was tricky to read the signs in such low light, and I would like to have the experience as it was intended. Besides, I’ll rarely pass up the opportunity to see a good dinosaur exhibit. I haven’t been to the Salt Lake City Zoo yet; if it had been warmer I might have considered it – next time!

With that, my trip to Salt Lake City pretty much concluded. I made my way to the airport to head home that evening. This trip rounded almost a full month of being away from home, so I was particularly eager to get home and spend some time with MJ and the kitties.

by pleia2 at December 25, 2016 04:32 PM

December 22, 2016

Akkana Peck

Tips on Developing Python Projects for PyPI

I wrote two recent articles on Python packaging: Distributing Python Packages Part I: Creating a Python Package and Distributing Python Packages Part II: Submitting to PyPI. I was able to get a couple of my programs packaged and submitted.

Ongoing Development and Testing

But then I realized all was not quite right. I could install new releases of my package -- but I couldn't run it from the source directory any more. How could I test changes without needing to rebuild the package for every little change I made?

Fortunately, it turned out to be fairly easy. Set PYTHONPATH to a directory that includes all the modules you normally want to test. For example, inside my bin directory I have a python directory where I can symlink any development modules I might need:

mkdir ~/bin/python
ln -s ~/src/metapho/metapho ~/bin/python/

Then add the directory at the beginning of PYTHONPATH:

export PYTHONPATH=$HOME/bin/python

With that, I could test from the development directory again, without needing to rebuild and install a package every time.
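To double-check which copy of the module Python is actually picking up, a quick test like this (using the metapho symlink from the example above) should print a path under ~/src rather than a system or site-packages location:

import metapho
# With PYTHONPATH set as above, this should point at the development
# checkout (~/src/metapho), not an installed copy of the package.
print(metapho.__file__)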

Cleaning up files used in building

Building a package leaves some extra files and directories around, and git status will whine at you since they're not version controlled. Of course, you could gitignore them, but it's better to clean them up after you no longer need them.

To do that, you can add a clean command to setup.py.

import os
from setuptools import Command

class CleanCommand(Command):
    """Custom clean command to tidy up the project root."""
    user_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        os.system('rm -vrf ./build ./dist ./*.pyc ./*.tgz ./*.egg-info ./docs/sphinxdoc/_build')
(Obviously, that includes file types beyond what you need for just cleaning up after package building. Adjust the list as needed.)

Then in the setup() function, add these lines:

      cmdclass={
          'clean': CleanCommand,
      }

Now you can type

python setup.py clean

and it will remove all the extra files.
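If you'd rather not shell out to rm, here's a rough sketch of the same cleanup using Python's own shutil and glob, which run() could call instead of os.system (it covers the same paths as the command above):

import glob
import os
import shutil

def clean_build_artifacts():
    """Remove the same build leftovers as the rm command above."""
    for tree in ['./build', './dist', './docs/sphinxdoc/_build']:
        shutil.rmtree(tree, ignore_errors=True)
    for pattern in ['./*.pyc', './*.tgz', './*.egg-info']:
        for path in glob.glob(pattern):
            if os.path.isdir(path):      # *.egg-info is usually a directory
                shutil.rmtree(path, ignore_errors=True)
            else:
                os.remove(path)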

Keeping version strings in sync

It's so easy to update the __version__ string in your module and forget that you also have to do it in setup.py, or vice versa. Much better to make sure they're always in sync.

I found several versions of that using system("grep..."), but I decided to write my own that doesn't depend on system(). (Yes, I should do the same thing with that CleanCommand, I know.)

def get_version():
    '''Read the pytopo module versions from pytopo/__init__.py'''
    with open("pytopo/__init__.py") as fp:
        for line in fp:
            line = line.strip()
            if line.startswith("__version__"):
                parts = line.split("=")
                if len(parts) > 1:
                    # strip whitespace and any quotes around the version string
                    return parts[1].strip().strip('"\'')

Then in setup():

      version=get_version(),

Much better! Now you only have to update __version__ inside your module and setup.py will automatically use it.
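For reference, get_version() above assumes the module keeps its version on a single line, something like this (the version number here is just a placeholder):

# pytopo/__init__.py
__version__ = "1.2"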

Using your README for a package long description

setup() takes a long_description argument for the package, but you probably already have some sort of README in your package. You can use it for your long description this way:

# Utility function to read the README file.
# Used for the long_description.
def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname)).read()

Then in setup():

    long_description=read('README'),
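Putting that piece into context, a stripped-down setup.py using the README might look roughly like this (the package name and metadata here are placeholders):

import os
from setuptools import setup

def read(fname):
    # Read a file (e.g. the README) that sits next to setup.py.
    return open(os.path.join(os.path.dirname(__file__), fname)).read()

setup(
    name="yourpackage",              # placeholder
    version="0.1",                   # or version=get_version() from the section above
    description="One-line summary of the package",
    long_description=read('README'),
)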

December 22, 2016 05:15 PM

Jono Bacon

Recommendations Requested: Building a Smart Home

Early next year Erica, the scamp, and I are likely to be moving house. As part of the move we would both love to turn this house into a smart home.

Now, when I say “smart home”, I don’t mean this:

Dogogram. It is the future.

We don’t need any holographic dogs. We are however interested in having cameras, lights, audio, screens, and other elements in the house connected and controlled in different ways. I really like the idea of the house being naturally responsive to us in different scenarios.

In other houses I have seen people with custom lighting patterns (e.g. work, party, romantic dinner), sensors on gates that trigger alarms/notifications, audio that follows you around the house, notifications on visible screens, and other features.

Obviously we will want all of this to be (a) secure, (b) reliable, and (c) simple to use. While we want a smart home, I don’t particularly want to have to learn a million details to set it up.

Can you help?

So, this is what we would like to explore.

Now, I would love to ask you folks two questions:

  1. What kind of smart-home functionality and features have you implemented in your house (in other words, what neat things can you do)?
  2. What hardware and software do you recommend for rigging a home up as a smart home? I would ideally like to keep re-wiring to a minimum. Assume I have nothing already, so recommendations for cameras, light switches, hubs, and anything else are much appreciated.

If you have something you would like to share, please plonk it into the comment box below. Thanks!

The post Recommendations Requested: Building a Smart Home appeared first on Jono Bacon.

by Jono Bacon at December 22, 2016 04:00 PM

December 19, 2016

Jono Bacon

Building Better Teams With Asynchronous Workflow

One of the core principles of open source and innersource communities is asynchronous workflow. That is, participants/employees should be able to collaborate together with ubiquitous access, from anywhere, at any time.

As a practical example, at a previous company I worked at, pretty much everything lived in GitHub. Not just the code for the various products, but also material and discussions from the legal, sales, HR, business development, and other teams.

This offered a number of benefits for both employees and the company:

  • History – all projects, discussions, and collaboration were recorded. This provided a wealth of material for understanding prior decisions, work, and relationships.
  • Transparency – transparency is something most employees welcome, and that was the case here: employees felt a sense of connection to work across the company.
  • Communication – with everyone using the same platform, it was easier for people to communicate clearly and consistently and to see the full scope of a discussion/project when pulled in.
  • Accountability – sunlight is the best disinfectant, and having all projects, discussions, and work items/commitments available in the platform ensured people were accountable in both their work and commitments.
  • Collaboration – this platform made it easier for people to not just collaborate (e.g. issues and pull requests) but also to bring in other employees by referencing their username (e.g. @jonobacon).
  • Reduced Silos – the above factors reduced the silos in the company and resulted in wider cross-team collaboration.
  • Untethered Working – because everything was online and not buried in private meetings and notes, this meant employees could be productive at home, on the road, or outside of office hours (often when riddled with jetlag at 3am!)
  • Internationally Minded – this also made it easier to work with an international audience, crossing different timezones and geographical regions.

While asynchronous workflow is not perfect, it offers clear benefits for a company and is a core component for integrating open source methodology and workflows (also known as innersource) into an organization.

Asynchronous workflow is a common area in which I work with companies. As such, I thought I would write up some lessons learned that may be helpful for you folks.

Designing Asynchronous Workflow

Many of you reading this will likely want to bring in the above benefits to your own organization too. You likely have an existing workflow which will be a mixture of (a) in-person meetings, (b) remote conference/video calls, (c) various platforms for tracking tasks, and (d) various collaboration and communication tools.

As with any organizational change and management, culture lies at the core. Putting platforms in place is the easy bit: adapting those platforms to the needs, desires, and uncertainties that live in people is where the hard work lies.

In designing asynchronous workflow you will need to make the transition from your existing culture and workflow to a new way of working. Ultimately this is about designing a workflow that encourages the behaviors we want to see (e.g. collaboration, open discussion, efficient working) and deters the behaviors we don’t (e.g. silos, land-grabbing, power-plays, etc.).

Influencing these behaviors will include platforms, processes, relationships, and more. You will need to take a gradual, thoughtful, and transparent approach in designing how these different pieces fit together and how you make the change in a way that teams are engaged in.

I recommend you manage this in the following way (in order):

  1. Survey the current culture – first, you need to understand your current environment. How technically savvy are your employees? How dependent on meetings are they? What are the natural connections between teams, and where are the divisions? With a mixture of (a) employee surveys, and (b) observational and quantitative data, summarize these dynamics into lists of “Behaviors to Improve” and “Behaviors to Preserve”. These lists will give us a sense of how we want to build a workflow that is mindful of these behaviors and adjusts them where we see fit.
  2. Design an asynchronous environment – based on this research, put together a proposed plan for the changes you want to make to be more asynchronous. This should cover platform choices, changes to processes/policies, and a roll-out plan. Divide the plan up in priority order so it is clear which pieces you want to deliver first.
  3. Get buy-in – next we need to build buy-in from senior management, team leads, and employees. Ideally this process should be as open as possible, with a final call for input from the wider employee base. This is a key part of making teams feel part of the process.
  4. Roll out in phases – now, based on your defined priorities in the design, gradually roll out the plan. As you do so, provide regular updates on this work across the company (you should include metrics of the value this work is driving in these updates).
  5. Regularly survey users – at regular check-points survey the users of the different systems you put in place. Give them express permission to be critical – we want this criticism to help us refine and make changes to the plan.

Of course, this is a simplification of the work that needs to happen, but it covers the key markers that need to be in place.

Asynchronous Principles

The specific choices in your own asynchronous workflow plan will be very specific to your organization. Every org is different, has different drivers, people, and focus, so it is impossible to make a generalized set of strategic, platform, and process recommendations. Of course, if you want to discuss your organization’s needs specifically, feel free to get in touch.

For the purposes of this piece though, and to serve as many of you as possible, I want to share the core asynchronous principles you should consider when designing your asynchronous workflow. These principles are pretty consistent across most organizations I have seen.

Be Explicitly Permissive

A fundamental principle of asynchronous working (and more broadly in innersource) is that employees have explicit permission to (a) contribute across different projects/teams, (b) explore new ideas and creative solutions to problems, and (c) challenge existing norms and strategy.

Now, this doesn’t mean it is a free for all. Employees will have work assigned to them and milestones to accomplish, but being permissive about the above areas will crisply define the behavior the organization wants to see in employees.

In some organizations the senior management team hands down said permission and expects it to stick. While this top-down permission and validation is key, it is also critical that team leads, middle managers, and others support this permissive principle in day-to-day work.

People change and cultures develop by others delivering behavioral patterns that become accepted in the current social structure. Thus, you need to encourage people to work across projects, explore new ideas, and challenge the norm, and validate that behavior publicly when it occurs. This is how we make culture stick.

Default to Open Access

Where possible, teams and users should default to open visibility for projects, communication, issues, and other material. Achieving this requires not just that default access controls be open, but also setting the cultural and organizational expectation that material should be open for all employees.

Of course, you should trust your employees to use their judgement too. Some efforts will require private discussions and work (e.g. security issues). Also, some discussions may need to be confidential (e.g. HR). So, default to open, but be mindful of the exceptions.

Platforms Need to be Accessible, Rich, and Searchable

There are myriad platforms for asynchronous working. GitHub, GitLab, Slack, Mattermost, Asana, Phabricator, to name just a few.

When evaluating platforms it is key to ensure that they can be made (a) securely accessible from anywhere (e.g. desktop/mobile support, available outside the office), (b) provide a rich and efficient environment for collaboration (e.g. rich discussions with images/videos/links, project management, simple code collaboration and review), (c) and any material is easily searchable (finding previous projects/discussions to learn from them, or finding new issues to focus on).

Always Maintain History and Never Delete, but Archive

You should maintain history in everything you do. This should include discussions, work/issue tracking, code (revision control), releases, and more.

On a related note, you should never, ever permanently delete material. Instead, that material should be archived. As an example, if you file an issue for a bug or problem that is no longer pertinent, archive the issue so it doesn’t come up in popular searches, but still make it accessible.

Consolidate Identity and Authentication

Having a single identity for each employee on asynchronous infrastructure is important. We want to make it easy for people to reference individual employees, so a unique username/handle is key here. This is not just important technically, but also for building relationships – that username/handle will be a part of how people collaborate, build their reputations, and communicate.

A complex challenge with deploying asynchronous infrastructure is with identity and authentication. You may have multiple different platforms that have different accounts and authentication providers.

Where possible, invest in single sign-on and centralized authentication. While it requires a heavier up-front lift, consolidating multiple accounts further down the line is a nightmare you want to avoid.

Validate, Incentivize, and Reward

Human beings need validation. We need to know we are on the right track, particularly when joining new teams and projects. As such, you need to ensure people can easily validate each other (e.g. likes and +1s, simple peer review processes) and encourage a culture of appreciation and thanking others (e.g. manager and leaders setting an example to always thank people for contributions).

Likewise, people often respond well to being incentivized and often enjoy the rewards of that work. Be sure to identify what a good contribution looks like (e.g. in software development, a merged pull request) and incentivize and reward great work via both baked-in features and specific campaigns.

Be Mindful of Uncertainty, so Train, Onboard, and Support

Moving to a more asynchronous way of working will cause uncertainty in some. Not only are people often reluctant to change, but operating in a very open and transparent manner can make people squeamish about looking stupid in front of their colleagues.

So, be sure to provide extensive training as part of the transition, onboard new staff members, and provide a helpdesk where people can always get help and their questions answered.


Of course, I am merely scratching the surface of how we build asynchronous workflow, but hopefully this will get you started and generate some ideas and thoughts about how you bring this to your organization.

Of course, feel free to get in touch if you want to discuss your organization’s needs in more detail. I would also love to hear additional ideas and approaches in the comments!

The post Building Better Teams With Asynchronous Workflow appeared first on Jono Bacon.

by Jono Bacon at December 19, 2016 03:54 PM

December 17, 2016

Akkana Peck

Distributing Python Packages Part II: Submitting to PyPI

In Part I, I discussed writing a setup.py to make a package you can submit to PyPI. Today I'll talk about better ways of testing the package, and how to submit it so other people can install it.

Testing in a VirtualEnv

You've verified that your package installs. But you still need to test it and make sure it works in a clean environment, without all your developer settings.

The best way to test is to set up a "virtual environment", where you can install your test packages without messing up your regular runtime environment. I shied away from virtualenvs for a long time, but they're actually very easy to set up:

virtualenv venv
source venv/bin/activate

That creates a directory named venv under the current directory, which it will use to install packages. Then you can pip install packagename or pip install /path/to/packagename-version.tar.gz

Except -- hold on! Nothing in Python packaging is that easy. It turns out there are a lot of packages that won't install inside a virtualenv, and one of them is PyGTK, the library I use for my user interfaces. Attempting to install pygtk inside a venv gets:

********************************************************************
* Building PyGTK using distutils is only supported on windows. *
* To build PyGTK in a supported way, read the INSTALL file.    *
********************************************************************

Windows only? Seriously? PyGTK works fine on both Linux and Mac; it's packaged on every Linux distribution, and on Mac it's packaged with GIMP. But for some reason, whoever maintains the PyPI PyGTK packages hasn't bothered to make it work on anything but Windows, and PyGTK seems to be mostly an orphaned project so that's not likely to change.

(There's a package called ruamel.venvgtk that's supposed to work around this, but it didn't make any difference for me.)

The solution is to let the virtualenv use your system-installed packages, so it can find GTK and other non-PyPI packages there:

virtualenv --system-site-packages venv
source venv/bin/activate

I also found that if I had a ~/.local directory (where packages normally go if I use pip install --user packagename), sometimes pip would install to .local instead of the venv. I never did track down why this happened sometimes and not others, but when it happened, a temporary mv ~/.local ~/old.local fixed it.

Test your Python package in the venv until everything works. When you're finished with your venv, you can run deactivate and then remove it with rm -rf venv.

Tag it on GitHub

Is your project ready to publish?

If your project is hosted on GitHub, you can have pypi download it automatically. In your setup.py, set

download_url='https://github.com/user/package/tarball/tagname',

Check that in. Then make a tag and push it:

git tag 0.1 -m "Name for this tag"
git push --tags origin master

Try to make your tag match the version you've set in setup.py and in your module.

Push it to pypitest

Register a new account and password on both pypitest and on pypi.

Then create a ~/.pypirc that looks like this:

[distutils]
index-servers =
  pypi
  pypitest

[pypi]
repository=https://pypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

[pypitest]
repository=https://testpypi.python.org/pypi
username=YOUR_USERNAME
password=YOUR_PASSWORD

Yes, those passwords are in cleartext. Incredibly, there doesn't seem to be a way to store an encrypted password or even have it prompt you. There are tons of complaints about that all over the web but nobody seems to have a solution. You can specify a password on the command line, but that's not much better. So use a password you don't use anywhere else and don't mind too much if someone guesses.

Update: Apparently there's a newer method called twine that solves the password encryption problem. Read about it here: Uploading your project to PyPI. You should probably use twine instead of the setup.py commands discussed in the next paragraph.
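Roughly, the twine flow replaces the upload commands below with something like this (the -r name refers to a section in the ~/.pypirc above; check the twine documentation for the exact options):

pip install twine
python setup.py sdist
twine upload -r pypitest dist/*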

Now register your project and upload it:

python setup.py register -r pypitest
python setup.py sdist upload -r pypitest

Wait a few minutes: it takes pypitest a little while before new packages become available. Then go to your venv (to be safe, maybe delete the old venv and create a new one, or at least pip uninstall) and try installing:

pip install -i https://testpypi.python.org/pypi YourPackageName

If you get "No matching distribution found for packagename", wait a few minutes then try again.

If it all works, then you're ready to submit to the real pypi:

python setup.py register -r pypi
python setup.py sdist upload -r pypi

Congratulations! If you've gone through all these steps, you've uploaded a package to pypi. Pat yourself on the back and go tell everybody they can pip install your package.

Some useful reading

Some pages I found useful:

A great tutorial except that it forgets to mention signing up for an account: Python Packaging with GitHub

Another good tutorial: First time with PyPI

Allowed PyPI classifiers -- the categories your project fits into. Unfortunately there aren't very many of those, so you'll probably be stuck with 'Topic :: Utilities' and not much else.

Python Packages and You: not a tutorial, but a lot of good advice on style and designing good packages.

December 17, 2016 11:19 PM

December 12, 2016

Eric Hammond

How Much Does It Cost To Run A Serverless API on AWS?

Serving 2.1 million API requests for $11

Folks tend to be curious about how much real projects cost to run on AWS, so here’s a real example with breakdowns by AWS service and feature.

This article walks through the AWS invoice for charges accrued in November 2016 by the TimerCheck.io API service which runs in the us-east-1 (Northern Virginia) region and uses the following AWS services:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • Route 53
  • CloudFront
  • SNS (Simple Notification Service)
  • CloudWatch Logs
  • CloudWatch Metrics
  • CloudTrail
  • S3
  • Network data transfer
  • CloudWatch Alarms

During this month, the TimerCheck.io service processed over 2 million API requests. Every request ran an AWS Lambda function that read from and/or wrote to a DynamoDB table.

This AWS account is older than 12 months, so any first year free tier specials are no longer applicable.

Total Cost Overview

At the very top of the AWS invoice, we can see that the total AWS charges for the month of November add up to $11.12. This is the total bill for processing the 2.1 million API requests and all of the infrastructure necessary to support them.

Invoice: Summary

Service Overview

The next part of the invoice lists the top level services and charges for each. You can see that two thirds of the month’s cost was in API Gateway at $7.47, with a few other services coming together to make up the other third.

Invoice: Service Overview

API Gateway

Current API Gateway pricing is $3.50 per million requests, plus data transfer. As you can see from the breakdown below, the requests are the bulk of the expense at $7.41. The responses from TimerCheck.io probably average in the hundreds of bytes, so there’s only $0.06 in data transfer cost.

You currently get a million requests at no charge for the first 12 months, which was not applicable to this invoice, but does end up making API Gateway free for the development of many small projects.

Invoice: API Gateway
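As a rough sanity check on those numbers (the request count below is an approximation of the "over 2 million" figure above):

requests = 2.1e6          # approximate API requests for the month
rate_per_million = 3.50   # API Gateway price per million requests, USD
print(requests / 1e6 * rate_per_million)   # ~7.35; the $7.41 billed implies slightly more requests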

CloudTrail

I don’t remember enabling CloudTrail on this account, but at some point I must have done the right thing, as this is something that should be active for every AWS account. There were almost 400,000 events recorded by CloudTrail, but since the first trail is free, there is no charge listed here.

Note that there are some charges associated with the storage of the CloudTrail event logs in S3. See below.

Invoice: CloudTrail

CloudWatch

The CloudWatch costs for this service come from logs being sent to CloudWatch Logs and the storage of those logs. These logs are being generated by AWS Lambda function execution and by API Gateway execution, so you can consider them as additional costs of running those services. You can control some of the logs generated by your AWS Lambda function, so a portion of these costs are under your control.

There are also charges for CloudWatch Alarms, but for some reason, those are listed under EC2 (below) instead of here under CloudWatch.

Invoice: CloudWatch

Data Transfer

Data transfer costs can be complex as they depend on where the data is coming from and going to. Fortunately for TimerCheck.io, there is very little network traffic and most of it falls into the free tiers. What little we are being charged for here amounts to a measly $0.04 for 4 GB of data transferred between availability zones. I presume this is related to AWS services talking to each other (e.g., AWS Lambda to DynamoDB) because there are no EC2 instances in this stack.

Note that this is not the entirety of the data transfer charges, as some other services break out their own network costs.

Invoice: Data Transfer

DynamoDB

The DynamoDB pricing includes a permanent free tier of up to 25 write capacity units and 25 read capacity units. The TimerCheck.io service has a single DynamoDB table that is set to a capacity of 25 write and 25 read units, so there are no charges for capacity.

The TimerCheck.io DynamoDB database size falls well under the 25 GB free tier, so that has no charge either.

Invoice: DynamoDB

Elastic Compute Cloud

The TimerCheck.io service does not use EC2 and yet there is a section in the invoice for EC2. For some reason this section lists the CloudWatch Alarm charges.

Each CloudWatch Alarm costs $0.10 per month and this account has eight for a total of $0.80/month. But, for some reason, I was only billed $0.73. *shrug*

Invoice: EC2

This AWS account has four AWS billing alarms that will email me whenever the cumulative charges for the month pass $10, $20, $30, and $40.

There is one CloudWatch alarm that emails me if the AWS Lambda function invocations are being throttled (more than 100 concurrent functions being executed).

There are two CloudWatch alarms that email me if the consumed read and write capacity units are trending so high that I should look at increasing the capacity settings of the DynamoDB table. We are nowhere near that at current usage volume.

Yes, that leaves one CloudWatch alarm, which was a duplicate. I have since removed it.

AWS Lambda

Since most of the development of the TimerCheck.io API service focuses on writing the 60 lines of code for the AWS Lambda function, this is where I was expecting the bulk of the charges to be. However, the AWS Lambda costs only amount to $0.22 for the month.

There were 2.1 million AWS Lambda function invocations, one per external consumer API request, same as the API Gateway. The first million AWS Lambda function calls are free. The rest are charged at $0.20 per million.

The permanent free tier also includes 400,000 GB-seconds of compute time per month. At an average of 0.15 GB-seconds per function call, we stayed within the free tier at a total of 320,000 GB-seconds.

Invoice: AWS Lambda
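Plugging in the approximate figures above, the back-of-the-envelope math looks like this:

invocations = 2.1e6              # one Lambda invocation per API request
billable = invocations - 1e6     # the first million invocations are free
print(billable / 1e6 * 0.20)     # ~0.22 USD in request charges, matching the invoice

gb_seconds = invocations * 0.15  # ~0.15 GB-seconds per call, per the estimate above
print(gb_seconds)                # ~315,000, under the 400,000 GB-second free tier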

I have the AWS Lambda function configuration cranked up to the max 1536 MB memory so that it will run as fast as possible. Since the charges are rounded up in units of 100ms, we could probably save GB-seconds by scaling down the memory once we exceed the free tier. Most of the time is probably spent in DynamoDB calls anyway, so this should not affect API performance much.

Route 53

Route 53 charges $0.50 per hosted zone (domain). I have two domains hosted in Route 53, the expected timercheck.io plus the extra timercheck.com. The timercheck.com domain was supposed to redirect to timercheck.io, but I apparently haven’t gotten around to tossing in that feature yet. These two hosted zones account for $1 in charges.

There were 1.1 million DNS queries to timercheck.io and www.timercheck.io, but since those resolve to aliases for the API Gateway, there is no charge.

The other $0.09 comes from the 226,000 DNS queries to random timercheck.io and timercheck.com hostnames. These would include status.timercheck.io, which is a page displaying the uptime of TimerCheck.io as reported by StatusCake.

Invoice: Route 53

Simple Notification Service

During the month of November, there was one post to an SNS topic and one email delivery from an SNS topic. These were both for the CloudWatch alert notifying me that the charges on the account had exceeded $10 for the month. There were no charges for this.

Invoice: SNS

Simple Storage Service

The S3 costs in this account are entirely for storing the CloudTrail events. There were 222 GET requests ($0) and 13,000 requests of other types ($0.07). There was no charge for the 0.064 GB-Mo of actual data stored. Has Amazon started rounding fractional pennies down instead of up in some services?

Invoice: S3

External Costs

The domains timercheck.io and timercheck.com are registered through other registrars. Those cost about $33 and $9 per year, respectively.

The SSL/TLS certificate for https support costs around $10-15 per year, though this should drop to zero once CloudFront distributions created with API Gateway support certificates with ACM (AWS Certificate Manager) #awswishlist

Not directly obvious from the above is the fact that I have spent no time or money maintaining the TimerCheck.io API service post-launch. It’s been running for 1.5 years and I haven’t had to upgrade software, apply security patches, replace failing hardware, recover from disasters, or scale up and down with demand. By using AWS services like API Gateway, AWS Lambda, and DynamoDB, Amazon takes care of everything.

Notes

Your Mileage May Vary.

For entertainment use only.

This is just one example from one month for one service architected one way. Your service running on AWS will not cost the same.

Though 2 million TimerCheck.io API requests in November cost about $11, this does not mean that an additional million would cost another $5.50. Some services would cost significantly more and some would cost about the same, probably averaging out to significantly more.

If you are reading this after November 2016, then the prices for these AWS services have certainly changed and you should not use any of the above numbers in making decisions about running on AWS.

Conclusion

Amazon, please lower the cost of the API Gateway; or provide a simpler, cheaper service that can trigger AWS Lambda functions with https endpoints. Thank you!

Original article and comments: https://alestic.com/2016/12/aws-invoice-example/

December 12, 2016 10:00 AM

Elizabeth Krumbach

Trains in NYC

I’ve wanted to visit the New York Transit Museum ever since I discovered it existed. Housed in the retired Court station in Brooklyn, even the museum venue had “transit geek heaven” written all over it. I figured I’d visit it some day when work brought me to the city, but then I learned about the 15th Annual Holiday Train Show at their Annex and Store at Grand Central going on now. I’d love to see that! I ended up going up to NYC from Philadelphia with my friend David last Sunday morning and made a day of it. Even better, we parked in New Jersey so we had a full-on transit experience from there into Manhattan and Brooklyn and back as the day progressed.

Our first stop was Grand Central Station via the 5 subway line. Somehow I’d never been there before. Enjoy the obligatory station selfie.

From there it was straight down to the Annex and Store run by the transit museum. The holiday exhibit had glittering signs hanging from the ceiling of everything from buses to transit cards to subway cars and snowflakes. The big draw though was the massive o-gauge model train setup, as the site explains:

This year’s Holiday Train Show display will feature a 34-foot-long “O gauge” model train layout with Lionel’s model Metro-North, New York Central, and vintage subway trains running on eight separate loops of track, against a backdrop featuring graphics celebrating the Museum’s 40th anniversary by artist Julia Rothman.

It was quite busy there, but folks were very clearly enjoying it. I’m really glad I went, even if the whole thing made me pine for my future home office train set all the more. Some day! It’s also worth noting that this shop is the one to visit transit-wise. The museum in Brooklyn also had a gift shop, but it was smaller and had fewer things, so I highly recommend picking things up here. I ended up going back after the transit museum to get something I wanted.

We then hopped on the 4 subway line into Brooklyn to visit the actual transit museum. As advertised, it’s in a retired subway station, so the entrance looks like any other subway entrance and you take stairs underground. You enter and buy your ticket and then are free to explore both levels of the museum. The first level had several rotating exhibits, including one about Coney Island and another providing a history of crises in New York City (including 9/11 and Hurricane Sandy) and how the transit system and operators responded to them. They also had displays of a variety of turnstiles throughout the years, and exhibits talking about street car (trolley) lines and the introduction of the bus systems.

The exhibits were great, but it was downstairs that things got really fun. They have functioning rails where the subway trains used to run through where they’ve lined up over a dozen cars from throughout transit history in NYC for visitors to explore, inside and out.

The evolution of seat designs and configurations was interesting to witness and feel, as you could sit on the seats to get the full experience. Each car also had an information sign next to it, so you could learn about the era and the place of that car in it. Transitions from wood to metal were showcased, along with paired (and …tripled?) cars and a bunch of standalone interchangeable cars. I also enjoyed seeing a caboose, though I didn’t quite recognize it at first (“is this for someone to live in?”).

A late lunch was due following the transit museum. We ended up at Sottocasa Pizzeria right there in Brooklyn. It got great reviews and I enjoyed it a lot, but it was definitely on the fancy pizza side. They also had a selection of Italian beers, of which I chose the delicious Nora by Birra Baladin. Don’t worry, next time I’m in New York I’ll go to a great, not fancy, pizza place.

It was then back to Manhattan to spend a bit more time at Grand Central and for an evening walk through the city. We started by going up 5th Avenue to see Rockefeller Square at night during the holidays. I hadn’t been to Manhattan since 2013, when I went with my friend Danita, and I’d never seen the square all decked out for the holidays. I didn’t quite think it through though: it’s probably the busiest time of the year there, so the whole neighborhood was insanely crowded for blocks. After seeing the skating rink and tree, we escaped northwest and made our way through the crowds up to Central Park. It was cold, but all the walking was fun even with the crowds. For dinner we ended up at Jackson Hole for some delicious burgers. I went with the Guacamole Burger.

The trip back to north Jersey took us through the brand new World Trade Center Transportation Hub to take the PATH. It’s a very unusual space. It’s all bright white with tons of marble shaped in a modern look, and has a shopping mall with a surreal amount of open space. The trip back on the PATH that night was as smooth as expected. In all, a very enjoyable day of public transit stuff!

More photos from Grand Central Station and the Transit Museum here: https://www.flickr.com/photos/pleia2/albums/72157677457519215

Epilogue: I received incredibly bad news the day after this visit to NYC. It cast a shadow over it for me. I went back and forth about whether I should write about this visit at all and how I should present it if I did. I decided to present it as it was that day. It was a great day of visiting the city and geeking out over trains, enjoyed with a close friend, and detached from whatever happened later. I only wish I could convince my mind to do the same.

by pleia2 at December 12, 2016 01:29 AM

UbuCon EU 2016

Last month I had the opportunity to travel to Essen, Germany to attend UbuCon EU 2016. Europe has had UbuCons before, but the goal of this one was to make it a truly international event, bringing in speakers like me from all corners of the Ubuntu community to share our experiences with the European Ubuntu community. Getting to catch up with a bunch of my Ubuntu colleagues who I knew would be there and visiting Germany as the holiday season began were also quite compelling reasons for me to attend.

The event formally kicked off Saturday morning with a welcome and introduction by Sujeevan Vijayakumaran, who reported that 170 people had registered for the event and shared other statistics about the number of countries attendees were from. He also introduced a member of the UBports team, Marius Gripsgård, who announced the USB docking station for Ubuntu Touch devices they were developing; more information is in this article on their website: The StationDock.

Following these introductions and announcements, we were joined by Canonical CEO Jane Silber who provided a tour of the Ubuntu ecosystem today. She highlighted the variety of industries where Ubuntu was key, progress with Ubuntu on desktops/laptops, tablets, phones and venturing into the smart Internet of Things (IoT) space. Her focus was around the amount of innovation we’re seeing in the Ubuntu community and from Canonical, and talked about specifics regarding security, updates, the success in the cloud and where Ubuntu Core fits into the future of computing.

I also loved that she talked about the Ubuntu community. The strength of local meetups and events, the free support community that spans a variety of resources, ongoing work by the various Ubuntu flavors. She also spoke to the passion of Ubuntu contributors, citing comics and artwork that community members have made, including the stunning series of release animal artwork by Sylvia Ritter from right there in Germany, visit them here: Ubuntu Animals. I was also super thrilled that she mentioned the Ubuntu Weekly Newsletter as a valuable resource for keeping up with the community, a very small group of folks works very hard on it and that kind of validation is key to sustaining motivation.

The next talk I attended was by Fernando Lanero Barbero on Linux is education, Linux is science. Ubuntu to free educational environments. Fernando works at a school district in Spain where he has deployed Ubuntu across hundreds of computers, reaching over 1200 students in the three years he’s been doing the work. The talk outlined the strengths of the approach, explaining that there were cost savings for his school and also how Ubuntu and open source software are more in line with the school’s values. One of the key takeaways from his experience was one that I know a lot about from our own Linux in schools experiences here in the US at Partimus: focus on the people, not the technologies. We’re technologists who love Linux and want to promote it, but without engagement, understanding and buy-in from teachers, deployments won’t be successful. A lot of time needs to be spent making assessments of their needs and doing roll-outs slowly and methodically so that the change doesn’t happen too abruptly and leave them in the lurch. He also stressed the importance of consistency with the deployments. Don’t get super creative across machines: use the same flavor for everything, even the same icon set. Not everyone is as comfortable with variation as we are, and you want to make the transition as easy as possible across all the systems.

Laura Fautley (Czajkowski) spoke at the next talk I went to, on Supporting Inclusion & Involvement in a Remote Distributed Team. The Ubuntu community itself is distributed across the globe, so drawing on her experience from her work there and later at several jobs where she’s had to help manage communities, she had a great list of recommendations for building out such a team. She talked about being sensitive to time zones, acknowledging that decisions are sometimes made in social situations rather than in official channels, and that you need to somehow document and share these decisions with the broader community. She was also eager to highlight how you need to acknowledge and promote the achievements in your team, both within the team and to the broader organization and project, to make sure everyone feels valued and so that everyone knows the great work you’re doing. Finally, it was interesting to hear some thoughts about remote member on-boarding, stressing the need to have a process so that new contributors and team mates can quickly get up to speed and feel included from the beginning.

I went to a few other talks throughout the two day event, but one of the big reasons for me attending was to meet up with some of my long-time friends in the Ubuntu community and finally meet some other folks face to face. We’ve had a number of new contributors join us since we stopped doing Ubuntu Developer Summits and today UbuCons are the only Ubuntu-specific events where we have an opportunity to meet up.


Laura Fautley, Elizabeth K. Joseph, Alan Pope, Michael Hall

Of course I was also there to give a pair of talks. I first spoke on Contributing to Ubuntu on Desktops (slides) which is a complete refresh of a talk I gave a couple of times back in 2014. The point of that talk was to pull people back from the hype-driven focus on phones and clouds for a bit and highlight some of the older projects that still need contributions. I also spoke on Building a career with Ubuntu and FOSS (slides) which was definitely the more popular talk. I’ve given a similar talk for a couple UbuCons in the past, but this one had the benefit of being given while I’m between jobs. This most recent job search as I sought out a new role working directly with open source again gave a new dimension to the talk, and also made for an amusing intro, “I don’t have a job at this very moment …but without a doubt I will soon!” And in fact, I do have something lined up now.


Thanks to Tiago Carrondo for taking this picture during my talk! (source)

The venue for the conference was a kind of artists’ space, which made it a bit quirky, but I think it worked out well. We had a couple social gatherings there at the venue, and buffet lunches were included in our tickets, which meant we didn’t need to go far or wait on food elsewhere.

I didn’t have a whole lot of time for sight-seeing this trip because I had a lot going on stateside (like having just bought a house!) but I did get to enjoy the beautiful Christmas Market in Essen a few nights while I was there.

For those of you not familiar with German Christmas Markets (I wasn’t), they close roads downtown and pop up streets of wooden shacks that sell everything from Christmas ornaments and cookies to hot drinks, beers and various hot foods. The first night I was in town we met up with several fellow conference-goers and got some fries with mayonnaise, grilled mushrooms with Bearnaise sauce, my first taste of German Glühwein (mulled wine) and hot chocolate. The next night was a quick walk through the market that landed us at a steakhouse where we had a late dinner and a couple beers.

The final night we didn’t stay out late, but did get some much anticipated Spanish churros, which inexplicably had sugar rather than the cinnamon I’m used to, as well as a couple more servings of Glühwein, this time in commemorative Christmas mugs shaped like boots!


Clockwise from top left: José Antonio Rey, Philip Ballew, Michael Hall, John and Laura Fautley, Elizabeth K. Joseph

The next morning I was up bright and early to catch a 6:45AM train that started me on my three train journey back to Amsterdam to fly back to Philadelphia.

It was a great little conference and a lot of fun. Huge thanks to Sujeevan for being so incredibly welcoming to all of us, and thanks to all the volunteers who worked for months to make the event happen. Also thanks to Ubuntu community members who donate to the community fund since I would have otherwise had to self-fund to attend.

More photos from the event (and the Christmas Market!) here: https://www.flickr.com/photos/pleia2/albums/72157676958738915

by pleia2 at December 12, 2016 12:03 AM

December 11, 2016

Akkana Peck

Distributing Python Packages Part I: Creating a Python Package

I write lots of Python scripts that I think would be useful to other people, but I've put off learning how to submit to the Python Package Index, PyPI, so that my packages can be installed using pip install.

Now that I've finally done it, I see why I put it off for so long. Unlike programming in Python, packaging is a huge, poorly documented hassle, and it took me days to get a working package. Maybe some of the hints here will help other struggling Pythonistas.

Create a setup.py

The setup.py file is the file that describes the files in your project and other installation information. If you've never created a setup.py before, Submitting a Python package with GitHub and PyPI has a decent example, and you can find lots more good examples with a web search for "setup.py", so I'll skip the basics and just mention some of the parts that weren't straightforward.

Distutils vs. Setuptools

However, there's one confusing point that no one seems to mention: setup.py examples all rely on a predefined function called setup, but some examples start with

from distutils.core import setup
while others start with
from setuptools import setup

In other words, there are two different versions of setup! What's the difference? I still have no idea. The setuptools version seems to be a bit more advanced, and I found that using distutils.core, sometimes I'd get weird errors when trying to follow suggestions I found on the web. So I ended up using the setuptools version.

But I didn't initially have setuptools installed (it's not part of the standard Python distribution), so I installed it from the Debian package:

apt-get install python-setuptools python-wheel

The python-wheel package isn't strictly needed, but I found I got assorted warnings from pip install later in the process ("Cannot build wheel") unless I installed it, so I recommend you install it from the start.
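
For reference, a bare-bones setup.py using the setuptools version might look something like this (all of the names and metadata below are placeholders, not from the author's project):

# Minimal setup.py sketch using setuptools; every value here is a placeholder.
from setuptools import setup, find_packages

setup(
    name="yourpackage",            # the name pip will install it under
    version="0.1.0",
    description="One-line description of what the package does",
    author="Your Name",
    packages=find_packages(),      # picks up yourpackage/ and any subpackages
)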

Including scripts

setup.py has a scripts option to include scripts that are part of your package:

    scripts=['script1', 'script2'],

But when I tried to use it, I had all sorts of problems, starting with scripts not actually being included in the source distribution. There isn't much support for using scripts -- it turns out you're actually supposed to use something called console_scripts, which is more elaborate.

First, you can't have a separate script file, or even a __main__ inside an existing class file. You must have a function, typically called main(), so you'll have something like this:

def main():
    # do your script stuff

if __name__ == "__main__":
    main()

Then add something like this to your setup.py:

      entry_points={
          'console_scripts': [
              'script1=yourpackage.filename:main',
              'script2=yourpackage.filename2:main'
          ]
      },

There's a secret undocumented alternative that a few people use for scripts with graphical user interfaces: use 'gui_scripts' rather than 'console_scripts'. It seems to work when I try it, but the fact that it's not documented and none of the Python experts even seem to know about it scared me off, and I stuck with 'console_scripts'.
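
For reference, the gui_scripts variant uses the same 'name=module:function' syntax, just under a different group name. A sketch, reusing the same hypothetical package names as above, would be:

      entry_points={
          'gui_scripts': [
              'yourguiapp=yourpackage.gui:main',
          ]
      },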

Including data files

One of my packages, pytopo, has a couple of files it needs to install, like an icon image. setup.py has a provision for that:

      data_files=[('/usr/share/pixmaps',      ["resources/appname.png"]),
                  ('/usr/share/applications', ["resources/appname.desktop"]),
                  ('/usr/share/appname',      ["resources/pin.png"]),
                 ],

Great -- except it doesn't work. None of the files actually gets added to the source distribution.

One solution people mention to a "files not getting added" problem is to create an explicit MANIFEST file listing all files that need to be in the distribution. Normally, setup generates the MANIFEST automatically, but apparently it isn't smart enough to notice data_files and include those in its generated MANIFEST.

I tried creating a MANIFEST listing all the .py files plus the various resources -- but it didn't make any difference. My MANIFEST was ignored.

The solution turned out to be creating a MANIFEST.in file, which is used to generate a MANIFEST. It's easier than creating the MANIFEST itself: you don't have to list every file, just patterns that describe them:

include setup.py
include packagename/*.py
include resources/*

If you have any scripts that don't use the extension .py, don't forget to include them as well. This may have been why scripts= didn't work for me earlier, but by the time I found out about MANIFEST.in I had already switched to using console_scripts.

Testing setup.py

Once you have a setup.py, use it to generate a source distribution with:

python setup.py sdist
(You can also use bdist to generate a binary distribution, but you'll probably only need that if you're compiling C as part of your package. Source dists are apparently enough for pure Python packages.)

Your package will end up in dist/packagename-version.tar.gz so you can use tar tf dist/packagename-version.tar.gz to verify what files are in it. Work on your setup.py until you don't get any errors or warnings and the list of files looks right.
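
If you'd rather inspect the tarball from Python than with tar, a quick equivalent check (the filename below is a placeholder; substitute your real package name and version) is:

# List the contents of the sdist, equivalent to "tar tf dist/packagename-version.tar.gz".
import tarfile

with tarfile.open("dist/packagename-0.1.0.tar.gz") as sdist:
    for member in sdist.getnames():
        print(member)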

Congratulations -- you've made a Python package! I'll post a followup article in a day or two about more ways of testing, and how to submit your working package to PyPI.

Update: Part II is up: Distributing Python Packages Part II: Submitting to PyPI.

December 11, 2016 07:54 PM

December 08, 2016

Nathan Haines

UbuCon Europe 2016

UbuCon Europe 2016

Nathan Haines enjoying UbuCon Europe

If there is one defining aspect of Ubuntu, it's community. All around the world, community members and LoCo teams get together not just to work on Ubuntu, but also to teach, learn, and celebrate it. UbuCon Summit at SCALE was a great example of an event that was supported by the California LoCo Team, Canonical, and community members worldwide coming together to make an event that could host presentations on the newest developer technologies in Ubuntu, community discussion roundtables, and a keynote by Mark Shuttleworth, who answered audience questions thoughtfully, but also hung around in the hallway and made himself accessible to chat with UbuCon attendees.

Thanks to the Ubuntu Community Reimbursement Fund, the UbuCon Germany and UbuCon Paris coordinators were able to attend UbuCon Summit at SCALE, and we were able to compare notes, so to speak, as they prepared to expand by hosting the first UbuCon Europe in Germany this year. Thanks to the community fund, I also had the immense pleasure of attending UbuCon Europe. After I arrived, Sujeevan Vijayakumaran picked me up from the airport and we took the train to Essen, where we walked around the newly-opened Weihnachtsmarkt along with Philip Ballew and Elizabeth Joseph from Ubuntu California. I acted as official menu translator, so there were no missed opportunities for bratwurst, currywurst, glühwein, or beer. Happily fed, we called it a night and got plenty of sleep so that we would last the entire weekend long.

Zeche Zollverein, a UNESCO World Heritage site

UbuCon Europe was a marvelous experience. Friday started things off with social events so everyone could mingle and find shared interests. About 25 people attended the Zeche Zollverein Coal Mine Industrial Complex for a guided tour of the last operating coal extraction and processing site in the Ruhr region; it was a fascinating look at the region’s defining industry for a century. After that, about 60 people joined in a special dinner at Unperfekthaus, a unique location that is part creative studio, part art gallery, part restaurant, and all experience. With a buffet, large soda fountains, and a hot coffee/chocolate machine, dinner was another chance to mingle as we took over a dining room and pushed all the tables together in a snaking chain. It was there that some Portuguese attendees first recognized me as the default voice for uNav, which was something I had to get used to over the weekend. There’s nothing like a good dinner to get people comfortable together, and the Telegram channel that was established for UbuCon Europe attendees was spread around.

Sujeevan Vijayakumaran addressing the UbuCon Europe attendees

The main event began bright and early on Saturday. Attendees were registered on the fifth floor of Unperfekthaus and received their swag bags full of cool stuff from the event sponsors. After some brief opening statements from Sujeevan, Marcus Gripsgård announced an exciting new Kickstarter campaign that will bring an easier convergence experience to not just most Ubuntu phones, but many Android phones as well. Then, Jane Silber, the CEO of Canonical, gave a keynote that went into detail about where Canonical sees Ubuntu in the future, how convergence and snaps will factor into future plans, and why Canonical wants to see one single Ubuntu on the cloud, server, desktop, laptop, tablet, phone, and Internet of Things. Afterward, she spent some time answering questions from the community, and she impressed me with her willingness to answer questions directly. Later on, she was chatting with a handful of people and it was great to see the consideration and thought she gave to those answers as well. Luckily, she also had a little time to just relax and enjoy herself without the third degree before she had to leave later that day. I was happy to have a couple minutes to chat with her.

Nathan Haines and Jane Silber

Microsoft Deutschland GmbH sent Malte Lantin to talk about Bash on Ubuntu on Windows and how the Windows Subsystem for Linux works, and while jokes about Microsoft and Windows were common all weekend, everyone kept their sense of humor and the community showed the usual respect that’s made Ubuntu so wonderful. While being able to run Ubuntu software natively on Windows makes many nervous, it also excites others. One thing is for sure: it’s convenient, and the prospect of having a robust terminal emulator built right in to Windows seemed to be something everyone could appreciate.

After that, I ate lunch and gave my talk, Advocacy for Advocates, where I gave advice on how to effectively recommend Ubuntu and other Free Software to people who aren’t currently using it or aren’t familiar with the concept. It was well-attended and I got good feedback. I also had a chance to speak in German for a minute, as the ambiguity of the term Free Software in English disappears in German, where freie Software is clear and not confused with kostenlose Software. It’s a talk I’ve given before and will definitely give again in the future.

After the talks were over, there was a raffle and then a UbuCon quiz show where the audience could win prizes. I gave away signed copies of my book, Beginning Ubuntu for Windows and Mac Users, in the raffle, and in fact I won a “xenial xeres” USB drive that looks like an origami squirrel as well as a Microsoft t-shirt. Afterwards was a dinner that was not only delicious, with apple crumble for dessert, but also came with free beer and wine, which rarely detracts from any meal.

Marcos Costales and Nathan Haines before the uNav presentation

Sunday was also full of great talks. I loved Marcos Costales’s talk on uNav, and as the video shows, I was inspired to jump up as the talk was about to begin and improvise the uNav-like announcement “You have arrived at the presentation.” With the crowd warmed up from the joke, Marcos took us on a fascinating journey of the evolution of uNav and finished with tips and tricks for using it effectively. I also appreciated Olivier Paroz’s talk about Nextcloud and its goals, as I run my own Nextcloud server. I was sure to be at the UbuCon Europe feedback and planning roundtable and was happy to hear that next year UbuCon Europe will be held in Paris. I’ll have to brush up on my restaurant French before then!

Nathan Haines contemplating tools with a Neanderthal

That was the end of UbuCon, but I hadn’t been to Germany in over 13 years so it wasn’t the end of my trip! Sujeevan was kind enough to put up with me for another four days, and he accompanied me on a couple shopping trips as well as some more sightseeing. The highlight was a trip to the Neanderthal Museum in the aptly-named Neandertal, Germany, and then afterward we met his friend (and UbuCon registration desk volunteer!) Philipp Schmidt in Düsseldorf at their Weihnachtsmarkt, where we tried Feuerzangenbowle, in which mulled wine is improved by soaking a block of sugar in rum, setting it over the wine and lighting the sugarloaf on fire so it drips into the wine. After that, we went to the Brauerei Schumacher where I enjoyed not only Schumacher Alt beer, but also the Rhein-style sauerbraten that has been on my to-do list for a decade and a half. (Other variations of sauerbraten—not to mention beer—remain on the list!)

I’d like to thank Sujeevan for his hospitality, on top of the tremendous effort that he, the German LoCo, and the French LoCo put in to make the first UbuCon Europe a stunning success. I’d also like to thank everyone who contributed to the Ubuntu Community Reimbursement Fund for helping out with my travel expenses, and everyone who attended, because of course we put everything together for you to enjoy.

December 08, 2016 05:04 AM

December 05, 2016

Elizabeth Krumbach

Vacation Home in Pennsylvania

This year MJ and I embarked on a secret mission: Buy a vacation home in Pennsylvania.

It was a decision we’d mulled over for a couple years, and the state of the real estate market, along with where we are in our lives and careers and our frequent visits back to the Philadelphia area, finally made the stars align to make it happen. With the help of family local to the area, including one who is a real estate agent, we spent the past few trips taking time to look at houses and make some decisions. In August we started signing the paperwork to take possession of a new home in November.

With the timing of our selection, we were able to pick out cabinets, counter tops and some of the other non-architectural options in the home. Admittedly none of that is my thing, but it’s still nice that we were able to put our touch on the end result. As we prepared for the purchase, MJ spent a lot of time making plans for taking care of the house and handling things like installations, deliveries and the move of our items from storage into the house.

In October we also bought a car that we’d be keeping at the house in Philadelphia, though we did enjoy it in California for a few weeks.

On November 15th we met at the title company office and signed the final paperwork.

The house was ours!

The next day I flew to Germany for a conference and MJ headed back to San Francisco. I enjoyed the conference and a few days in Germany, but I was eager to get back to the house.

Upon my return we had our first installation. Internet! And backup internet.

MJ came back into town for Thanksgiving which we enjoyed with family. The day after was the big move from storage into the house. Our storage units not only had our own things that we’d left in Pennsylvania, but everything from MJ’s grandparents, which included key contents of their own former vacation home which I never saw. We moved his grandmother into assisted care several years ago and had been keeping their things until we got a larger home in California. With the house here in Pennsylvania we decided to use some of the pieces to furnish the house here. It also meant I have a lot of boxes to go through.

Before MJ left to head back to work in San Francisco we did get a few things unpacked, including Champagne glasses, which meant on Saturday night following the move day we were able to pick up a proper bottle of Champagne and spend the evening together in front of the fireplace to celebrate.

I’d been planning on taking some time off following the layoff from my job as I consider new opportunities in the coming year. It ended up working well since I’ve been able to do that, plus spend the past week here in the Philadelphia house unpacking and getting the house set up. Several of the days I’ve also had to be here at the house to receive deliveries and be present for installs of all kinds to make sure the house is ready and secure (cameras!) for us to properly enjoy as soon as possible. Today is window blinds day. I am getting to enjoy it some too: between all these tasks I’ve spent time with local friends and family, had some time reading in front of the fireplace, and enjoyed a beautiful new Bluetooth speaker playing music all day. The house doesn’t have a television yet, but I have also curled up to watch a few episodes on my tablet here and there in the evenings as well.

There have also been some great sunsets in the neighborhood. I sure missed the Pennsylvania autumn sights and smells.

And not all the unpacking has been laborious. I found MJ’s telescope from years ago in storage and I was able to set that up the other night. Looking forward to a clear night to try it out.

Tomorrow I’m flying off yet again for a conference and then to spend at least a week at home back in San Francisco. We’ll be back very soon though, planning on spending at least the eight days of Hanukkah here, and possibly flying in earlier if we can line up some of the other work we need to get done.

by pleia2 at December 05, 2016 07:21 PM

December 04, 2016

Elizabeth Krumbach

Breathtaking Barcelona

My father once told me that Madrid was his favorite city and that he generally loved Spain. When my aunt shipped me a series of family slides last year I was delighted to find ones from Madrid in the mix, so I uploaded the album: Carl A. Krumbach – Spain 1967. I wish I had asked him why he loved Madrid, but in October I had the opportunity to learn for myself why I now love Spain.

I landed in Barcelona the last week of October. First, it was a beautiful time to visit. Nice weather that wasn’t too hot or too cold. It rained overnight a couple times and a bit some days, but not enough to deter activities, and I was busy with a conference during most of the days anyway. It was also warm enough to go swimming in the Mediterranean, though I failed to avail myself of this opportunity. The day I got in I met up with a couple friends to go to the aquarium and walk around the coastline, and I was able to touch the sea for the first time. That evening I also had my first of three seafood paellas that I enjoyed throughout the week. So good.

The night life was totally a thing. Many places would offer tapas along with drinks, so one night a bunch of us went out and just ate and drank our way through the Gothic Quarter. The restaurants also served late, often not even starting dinner service until 8PM. One night at midnight we found ourselves at a steakhouse dining on a giant steak that served the table and drinking a couple bottles of cava. Oh the cava, it was plentiful and inexpensive. As someone who lives in California these days I felt a bit bad about betraying my beloved California wine, but it was really good. I also enjoyed the sangrias.

A couple mornings after evenings when I didn’t let the drinks get the better of me, I also went out for a run. Running along the beaches in Barcelona was a tiny slice of heaven. It was also wonderful to just go sit by the sea one evening when I needed some time away from conference chaos.


Seafood paella lunch for four! We also had a couple beers.

All of this happened before I even got out to do much tourist stuff. Saturday was my big day for seeing the famous sights. Early in the week I reserved tickets to see the Sagrada Familia Basilica. I like visiting religious buildings when I travel because they tend to be on the extravagant side. Plus, back at the OpenStack Summit in Paris we heard from a current architect of the building and I’ve since seen a documentary about the building and nineteenth century architect Antoni Gaudí. I was eager to see it, but nothing quite prepared me for the experience. I had tickets for 1:30PM and was there right on time.


Sagrada Familia selfie!

It was the most amazing place I’ve ever been.

The architecture sure is unusual but once you let that go and just enjoy it, everything comes together in a calming way that I’ve never quite experienced before. The use of every color through the hundreds of stained glass windows was astonishing.

I didn’t do the tower tour on this trip because once I realized how special this place was I wanted to save something new to do there the next time I visit.

The rest of my day was spent taking one of the tourist buses around town to get a taste of a bunch of the other sights. I got a glimpse of a couple more buildings by Gaudí. In the middle of the afternoon I stopped at a tapas restaurant across from La Monumental, a former bullfighting ring. They outlawed bullfighting several years ago, but the building is still used for other events and is worth seeing, even just from the outside, for its beautiful tiled exterior.

I also walked through the Arc de Triomf and made my way over to the Barcelona Cathedral. After the tour bus brought me back to the stop near my hotel I spent the rest of the late afternoon enjoying some time at the beach.

That evening I met up with my friend Clint to do one last wander around the area. We stopped at the beach and had some cava and cheese. From there we went to dinner where we split a final paella and bottle of cava. Dessert was a Catalan cream, which is a lot like a crème brûlée but with cinnamon, yum!

As much as I wanted to stay longer and enjoy the gorgeous weather, the next morning I was scheduled to return home.

I loved Barcelona. It stole my heart like no other European city ever has and it’s now easily one of my favorite cities. I’ll be returning, hopefully sooner than later.

More photos from my adventures in Barcelona here: https://www.flickr.com/photos/pleia2/albums/72157674260004081

by pleia2 at December 04, 2016 03:18 AM

December 02, 2016

Elizabeth Krumbach

OpenStack book and Infra team at the Ocata Summit

At the end of October I attended the OpenStack Ocata Summit in beautiful Barcelona. My participation in this one was bittersweet. It was the first summit following the release of our Common OpenStack Deployments book, and OpenStack Infrastructure tooling was featured in a short keynote on Wednesday morning, making for quite the exciting summit. Unfortunately it also marked my last week with HPE and an uncertain future with regard to my continued full-time participation with the OpenStack Infrastructure team. It was also the last OpenStack Summit where the conference and design summit were hosted together, so the next several months will be worth keeping an eye on community-wise. Still, I largely took the position of assuming I’d continue to be able to work on the team, just with more caution in regard to work I was signing up for.

The first thing that I discovered during this summit was how amazing Barcelona is. The end of October presented us with some amazing weather for walking around and the city doesn’t go to sleep early, so we had plenty of time in the evenings to catch up with each other over drinks and scrumptious tapas. It worked out well since there were fewer sponsored parties in the evenings at this summit and attendance seemed limited at the ones that existed.

The high point for me at the summit was having the OpenStack Infrastructure tooling for handling our fleet of compute instances featured in a keynote! Given my speaking history, I was picked from the team to be up on the big stage with Jonathan Bryce to walk through a demonstration where we removed one of our US cloud providers and added three more in Europe. While the change was landing and tests started queuing up we also took time to talk about how tests are done against OpenStack patch sets across our various cloud providers.


Thanks to Johanna Koester for taking this picture (source)

It wasn’t just me presenting though. Clark Boylan and Jeremy Stanley were sitting in the front row making sure the changes landed and everything went according to plan during the brief window that this demonstration took up during the keynote. I’m thrilled to say that this live demonstration was actually the best run we had of all the testing; seeing all the tests start running on our new providers live on stage in front of such a large audience was pretty exciting. The team has built something really special here, and I’m glad I had the opportunity to help highlight that in the community with a keynote.


Mike Perez and David F. Flanders sitting next to Jeremy and Clark as they monitor demonstration progress. Photo credit for this one goes to Chris Hoge (source)

The full video of the keynote is available here: Demoing the World’s Largest Multi-Cloud CI Application

A couple of conference talks were presented by members of the Infrastructure team as well. On Tuesday Colleen Murphy, Paul Belanger and Ricardo Carrillo Cruz presented on the team’s Infra-Cloud. As I’ve written about before, the team has built a fully open source OpenStack cloud using the community Puppet modules and donated hardware and data center space from Hewlett Packard Enterprise. This talk outlined the architecture of that cloud, some of the challenges they’ve encountered, statistics from how it’s doing now and future plans. Video from their talk is here: InfraCloud, a Community Cloud Managed by the Project Infrastructure Team.

James E. Blair also gave a talk during the conference, this time on Zuul version 3. This version of Zuul has been under development for some time, so this was a good opportunity to update the community on the history of the Zuul project in general and why it exists, status of ongoing efforts with an eye on v3 and problems it’s trying to solve. I’m also in love with his slide deck, it was all text-based (including some “animations”!) and all with an Art Deco theme. Video from his talk is here: Zuul v3: OpenStack and Ansible Native CI/CD.

As usual, the Infrastructure team also had a series of sessions related to ongoing work. As a quick rundown, we have Etherpads for all the sessions (read-only links provided):

Friday concluded with a Contributors Meetup for the Infrastructure team in the afternoon where folks split off into small groups to tackle a series of ongoing projects together. I was also able to spend some time with the Internationalization (i18n) team that Friday afternoon. I dragged along Clark so someone else on the team could pick up where I left off in case I have less time in the future. We talked about the pending upgrade of Zanata and plans for a translations checksite, making progress on both fronts, especially when we realized that there’s a chance we could get away with just running a development version of Horizon itself, with a more stable back end.


With the i18n team!

Finally, the book! It was the first time I was able to see Matt Fischer, my contributing author, since the book came out. Catching up with him and signing a book together was fun. Thanks to my publisher I was also thrilled to donate the signed copies I brought along to the Women of OpenStack Speed Mentoring event on Tuesday morning. I wasn’t able to attend the event, but they were given out on my behalf, thanks to Nithya Ruff for handling the giveaway.


Thanks to Nithya Ruff for taking a picture of me with my book at the Women of OpenStack area of the expo hall (source) and Brent Haley for getting the picture of Lisa-Marie and me (source).

I was also invited to sit down with Lisa-Marie Namphy to chat about the book and changes to the OpenStack Infrastructure team in the Newton cycle. The increase in capacity to over 2000 test instances this past cycle was quite the milestone so I enjoyed talking about that. The full video is up on YouTube: OpenStack® Project Infra: Elizabeth K. Joseph shares how test capacity doubled in Newton

In all, it was an interesting summit with a lot of change happening in the community and with partner companies. The people that make the community are still there though and it’s always enjoyable spending time together. My next OpenStack event is coming up quickly: next week I’ll be speaking at OpenStack Days Mountain West on The OpenStack Project Continuous Integration System. I’ll also have a pile of books to give away at that event!

by pleia2 at December 02, 2016 02:58 PM

December 01, 2016

Elizabeth Krumbach

A Zoo and an Aquarium

When I was in Ohio last month for the Ohio LinuxFest I added a day on to my trip to visit the Columbus Zoo. A world-class zoo, it’s one of the few northern state zoos that has manatees and their African savanna exhibit is worth visiting. I went with a couple friends I attended the conference with, one of whom was a local and offered to drive (thanks again Svetlana!).

We arrived mid-day, which was in time to see their cheetah run, where they give one of their cheetahs some exercise by having it run a quick course around what had, just moments before, been the hyena habitat. I also learned recently via ZooBorns that the Columbus Zoo is one that participates in cheetah-puppy pairing from a young age. The dogs keep these big cats feeling secure with their calmness in an uncertain world; there’s an adorable article from the site here: A Cheetah and His Dog

Much to my delight, they were also selling Cheetah-and-Dog pins after the run to raise money. Yes, please!

As I said, I really enjoyed their African Savanna exhibit. It was big and sprawling and had a nice mixture of animals. The piles of lions they have were also quite the sight to behold.

Their kangaroo enclosure was open to walk through, so you could get quite close to the kangaroos, just like I did at the Perth Zoo. There was also a trio of baby tigers and some mountain lions that were adorable. And then there were the manatees. I love manatees!

I’m really glad I took the time to stay longer in Columbus, I’d likely go again if I found myself in the area.

More photos from the zoo, including a tiger napping on his back, and those mountain lions here: https://www.flickr.com/photos/pleia2/albums/72157671610835663

Just a couple weeks later I found myself on another continent, and at the Barcelona Aquarium with my friends Julia and Summer. It was a sizable aquarium and really nicely laid out. Their selection of aquatic animals was diverse and interesting. In this aquarium I liked some of the smallest critters the most. Loved their seahorses.

And the axolotls.

There was also an octopus that was awake and wandering around the tank, much to the delight of the crowd.

They also had penguins, a great shark tube and tank with a moving walkway.

More photos from the Barcelona Aquarium: https://www.flickr.com/photos/pleia2/albums/72157675629122655

Barcelona also has a zoo, but in my limited time there in the city I didn’t make it over there. It’s now on my very long list of other things to see the next time I’m in Barcelona, and you bet there will be a next time.

by pleia2 at December 01, 2016 03:57 AM

November 30, 2016

Eric Hammond

Amazon Polly Text To Speech With aws-cli and Twilio

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices.

Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family.

This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech.

The basic approach is:

  1. Generate the speech audio using Amazon Polly.

  2. Upload the resulting audio file to S3.

  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1- Generate Speech Audio With Amazon Polly

Here’s a simple example of how to turn text into speech, using the latest aws-cli:

text="Hello. This speech is generated using Amazon Polly. Enjoy!"
audio_file=speech.mp3

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text "$text" \
  $audio_file

You can listen to the resulting output file using your favorite audio player:

mpg123 -q $audio_file
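
If you prefer Python to the aws-cli, a rough boto3 equivalent of the synthesize-speech call above might look like this (a sketch, not from the original post, assuming boto3 is installed and configured with the same credentials):

# Rough boto3 equivalent of the aws-cli synthesize-speech example above (a sketch).
import boto3

polly = boto3.client("polly")
response = polly.synthesize_speech(
    OutputFormat="mp3",
    VoiceId="Salli",
    Text="Hello. This speech is generated using Amazon Polly. Enjoy!",
)

# AudioStream is a streaming body; write it out as the mp3 file.
with open("speech.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())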

2- Upload Audio to S3

Create or re-use an S3 bucket to store the audio files temporarily.

s3bucket=YOURBUCKETNAME
aws s3 mb s3://$s3bucket

Upload the generated speech audio file to the S3 bucket. I use a long, random key for a touch of security:

s3key=audio-for-twilio/$(uuid -v4 -FSIV).mp3
aws s3 cp --acl public-read $audio_file s3://$s3bucket/$s3key

For easy cleanup, you can use a bucket with a lifecycle that automatically deletes objects after a day or thirty. See instructions below for how to set this up.

3- Initiate Call With Twilio

Once you have set up an account with Twilio (see pointers below if you don’t have one yet), here are sample commands to initiate a phone call and play the Amazon Polly speech audio:

from_phone="+1..." # Your Twilio allocated phone number
to_phone="+1..."   # Your phone number to call

TWILIO_ACCOUNT_SID="..." # Your Twilio account SID
TWILIO_AUTH_TOKEN="..."  # Your Twilio auth token

speech_url="http://s3.amazonaws.com/$s3bucket/$s3key"
twimlet_url="http://twimlets.com/message?Message%5B0%5D=$speech_url"

curl -XPOST https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Calls.json \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=$from_phone" \
  --data-urlencode "To=to_phone" \
  --data-urlencode "Url=$twimlet_url"

The Twilio web service will return immediately after queuing the phone call. It could take a few seconds for the call to be initiated.

Make sure you listen to the phone call as soon as you answer, as Twilio starts playing the audio immediately.

The ringspeak Command

For your convenience (actually for mine), I’ve put together a command line program that turns all the above into a single command. For example, I can now type things like:

... || ringspeak --to +1NUMBER "Please review the cron job failure messages"

or:

ringspeak --at 6:30am \
  "Good morning!" \
  "Breakfast is being served now in Venetian Hall G.." \
  "Werners keynote is at 8:30."

Twilio credentials, default phone numbers, S3 bucket configuration, and Amazon Polly voice defaults can be stored in a $HOME/.ringspeak file.

Here is the source for the ringspeak command:

https://github.com/alestic/ringspeak

Tip: S3 Setup

Here is a sample command to configure an S3 bucket with automatic deletion of all keys after 1 day:

aws s3api put-bucket-lifecycle \
  --bucket "$s3bucket" \
  --lifecycle-configuration '{
    "Rules": [{
        "Status": "Enabled",
        "ID": "Delete all objects after 1 day",
        "Prefix": "",
        "Expiration": {
          "Days": 1
        }
  }]}'

This is convenient because you don’t have to worry about knowing when Twilio completes the phone call to clean up the temporary speech audio files.

Tip: Twilio Setup

This isn’t the place for an entire Twilio howto, but I will say that it is about this simple to set up:

  1. Create a Twilio account

  2. Reserve a phone number through Twilio.

  3. Find the ACCOUNT SID and AUTH TOKEN for use in Twilio API calls.

When you are using the Twilio free trial, it requires you to verify phone numbers before calling them. To call arbitrary numbers, enter your credit card and fund the minimum of $20.

Twilio will only charge you for what you use (about a dollar a month per phone number, about a penny per minute for phone calls, etc.).

Closing

A lot is possible when you start integrating Twilio with AWS. For example, my daughter developed an Alexa skill that lets her speak a message for a family member and have it delivered by phone. Alexa triggers her AWS Lambda function, which invokes the Twilio API to deliver the message by voice call.

With Amazon Polly, these types of voice applications can sound better than ever.

Original article and comments: https://alestic.com/2016/11/amazon-polly-text-to-speech/

November 30, 2016 06:30 PM

Elizabeth Krumbach

Ohio LinuxFest 2016

Last month I had the pleasure of finally attending an Ohio LinuxFest. The conference has been on my radar for years, but every year I seemed to have some kind of conflict. When my Tour of OpenStack Deployment Scenarios was accepted I was thrilled to finally be able to attend. My employer at the time also pitched in to the conference as a Bronze sponsor and by sending along a banner that showcased my talk, and my OpenStack book!

The event kicked off on Friday and the first talk I attended was by Jeff Gehlbach on What’s Happening with OpenNMS. I’ve been to several OpenNMS talks over the years and played with it some, so I knew the background of the project. This talk covered several of the latest improvements. Of particular note were some of their UI improvements, including both a website refresh and some stunning improvements to the WebUI. It was also interesting to learn about Newts, the time-series data store they’ve been developing to replace RRDtool, which they struggled to scale with their tooling. Newts is decoupled from the visualization tooling so you can hook in your own, like if you wanted to use Grafana instead.

I then went to Rob Kinyon’s Devs are from Mars, Ops are from Venus. He had some great points about communication between ops, dev and QA, starting with being aware and understanding of the fact that you all have different goals, which sometimes conflict. Pausing to make sure you know why different teams behave the way they do and knowing that they aren’t just doing it to make your life difficult, or because they’re incompetent, makes all the difference. He implored the audience to assume that we’re all smart, hard-working people trying to get our jobs done. He also touched upon improvements to communication, making sure you repeat requests in your own words so misunderstandings don’t occur due to differing vocabularies. Finally, he suggested that some cross-training happen between roles. A developer may never be able to take over full time for an operator, or vice versa, but walking a mile in someone else’s shoes helps build the awareness and understanding that he stresses is important.

The afternoon keynote was given by Catherine Devlin on Hacking Bureaucracy with 18F. She works for the government in the 18F digital services agency. Their mandate is to work with other federal agencies to improve their digital content, from websites to data delivery. Modeled after a startup, the agency tries not to over-plan the way many government organizations do, which can lead to failure; instead they want to fail fast and keep iterating. She also said their team has a focus on hiring good people and understanding the needs of the people they serve, rather than focusing on raw technical talent and the tools. Their practices center around an open by default philosophy (see: 18F: Open source policy), so much of their work is open source and can be adopted by other agencies. They also make sure they understand the culture of organizations they work with so that the tools they develop together will actually be used, as well as respecting the domain knowledge of the teams they’re working with. Slides from her talk are here and include lots of great links to agency tooling they’ve worked on: https://github.com/catherinedevlin/olf-2016-keynote


Catherine Devlin on 18F

That evening folks gathered in the expo hall to meet and eat! That’s where I caught up with my friends from Computer Reach. This is the non-profit I went to Ghana with back in 2012 to deploy Ubuntu-based desktops. I spent a couple weeks there with Dave, Beth Lynn and Nancy (alas, unable to come to OLF) so it was great to see them again. I learned more about the work they’re continuing to do, having switched to using mostly Xubuntu on new installs which was written about here. On a personal level it was a lot of fun connecting with them too, we really bonded during our adventures over there.


Tyler Lamb, Dave Sevick, Elizabeth K. Joseph, Beth Lynn Eicher

Saturday morning began with a keynote from Ethan Galstad on Becoming the Next Tech Entrepreneur. Ethan is the founder of Nagios, and in his talk he traced some of the history of his work on getting Nagios off the ground as a proper project and company and his belief in why technologists make good founders. In his work he drew on his industry and market expertise as a technologist and was able to play to the niche he was focused on. He also suggested that folks look to what other founders have done that has been successful, and recommended some books (notably Founders at Work and Work the System). Finally, he walked through some of what can be done to get started, including the stages of idea development, a basic business plan (don’t go crazy), a rough 1.0 release that you can have some early customers test and get feedback from, and then into marketing, documenting and focused product development. He concluded by stressing that open source project leaders are already entrepreneurs and the free users of your software are your initial market.

Next up was Robert Foreman’s Mixed Metaphors: Using Hiera with Foreman where he sketched out the work they’ve done that preserves usage of Hiera’s key-value store system but leverages Foreman for the actual orchestration. The mixing of provisioning and orchestration technologies is becoming more common, but I hadn’t seen this particular mashup.

My talk was A Tour of OpenStack Deployment Scenarios. This is the same talk I gave at FOSSCON back in August, walking the audience through a series of ways that OpenStack could be configured to provide compute instances, metering and two types of storage. For each I gave a live demo using DevStack. I also talked about several other popular components that could be added to a deployment. Slides from my talk are here (PDF), which also link to a text document with instructions for how to run the DevStack demos yourself.


Thanks to Vitaliy Matiyash for taking a picture during my talk! (source)

At lunch I met up with my Ubuntu friends to catch up. We later met at the booth where they had a few Ubuntu phones and tablets that gained a bunch of attention throughout the event. This event was also my first opportunity to meet Unit193 and Svetlana Belkin in person, both of whom I’ve worked with on Ubuntu for years.


Unit193, Svetlana Belkin, José Antonio Rey, Elizabeth K. Joseph and Nathan Handler

After lunch I went over to see David Griggs of Dell give us “A Look Under the Hood of Ohio Supercomputer Center’s Newest Linux Cluster.” Supercomputers are cool and it was interesting to learn about the system it was replacing, the planning that went into the replacement and workload cut-over and see in-progress photos of the installation. From there I saw Ryan Saunders speak on Automating Monitoring with Puppet and Shinken. I wasn’t super familiar with the Shinken monitoring framework, so this talk was an interesting and very applicable demonstration of the benefits.

The last talk I went to before the closing keynotes was from my Computer Reach friends Dave Sevick and Tyler Lamb. They presented their “Island Server” imaging server that’s now being used to image all of the machines that they re-purpose and deploy around the world. With this new imaging server they’re able to image both Mac and Linux PCs from one Macbook Pro rather than having a different imaging server for each. They were also able to do a live demo of a Mac and Linux PC being imaged from the same Island Server at once.


Tyler and Dave with the Island Server in action

The event concluded with a closing keynote by a father and daughter duo, Joe and Lily Born, on The Democratization of Invention. Joe Born first found fame in the 90s when he invented the SkipDoctor CD repair device, and is now the CEO of Aiwa, which produces highly rated Bluetooth speakers. Lily Born invented the tip-proof Kangaroo Cup. The pair reflected on their work and how getting an idea into a product in the hands of customers has changed in the past twenty years. While the path to selling the SkipDoctor had a very high barrier to entry, globalization, crowd-funding, 3D printers, internet-driven word of mouth and greater access to the press all played a part in the success of Lily’s Kangaroo Cup and the new Aiwa Bluetooth speakers. While I have no plans to invent anything any time soon (so much else to do!) it was inspiring to hear how the barriers have been lowered and inventors today have a lot more options. Also, I just bought an Aiwa Exos-9 Bluetooth Speaker; it’s pretty sweet.

My conference adventures concluded with a dinner with my friends José, Nathan and David, all three of whom I also spent time with at FOSSCON in Philadelphia the month before. It was fun getting together again, and we wandered around downtown Columbus until we found a nice little pizzeria. Good times.

More photos from the Ohio LinuxFest here: https://www.flickr.com/photos/pleia2/albums/72157674988712556

by pleia2 at November 30, 2016 06:29 PM

November 29, 2016

Jono Bacon

Luma Giveaway Winner – Garrett Nay

A little while back I kicked off a competition to give away a Luma Wifi Set.

The challenge? Share a great community that you feel does wonderful work. The most interesting one, according to yours truly, gets the prize.

Well, I am delighted to share that Garrett Nay bags the prize for sharing the following in his comment:

I don’t know if this counts, since I don’t live in Seattle and can’t be a part of this community, but I’m in a new group in Salt Lake City that’s modeled after it. The group is Story Games Seattle: http://www.meetup.com/Story-Games-Seattle/. They get together on a weekly+ basis to play story games, which are like role-playing games but have a stronger emphasis on giving everyone at the table the power to shape the story (this short video, featuring members of the group, gives a good introduction to what story games are all about):

Story Games from Candace Fields on Vimeo.

Story games seem to scratch a creative itch that I have, but it’s usually tough to find friends who are willing to play them, so a group dedicated to them is amazing to me.

Getting started in RPGs and story games is intimidating, but this group is very welcoming to newcomers. The front page says that no experience with role-playing is required, and they insist in their FAQ that you’ll be surprised at what you’ll be able to create with these games even if you’ve never done it before. We’ve tried to take this same approach with our local group.

In addition to playing published games, they also regularly playtest games being developed by members of the group or others. As far as productivity goes, some of the best known story games have come from members of this and sister groups. A few examples I’m aware of are Microscope, Kingdom, Follow, Downfall, and Eden. I’ve personally played Microscope and can say that it is well designed and very polished, definitely a product of years of playtesting.

They’re also productive and engaging in that they keep a record on the forums of all the games they play each week, sometimes including descriptions of the stories they created and how the games went. I find this very useful because I’m always on the lookout for new story games to try out. I kind of wish I lived in Seattle and could join the story games community, but hopefully we can get our fledgling group in Salt Lake up to the standard they have set.

What struck me about this example was that it gets to the heart of what community should be and often is – providing a welcoming, supportive environment for people with like-minded ideas and interests.

While much of my work focuses on the complexities of building collaborative communities with the intricacies of how people work together, we should always remember the huge value of what I refer to as read communities where people simply get together to have fun with each other. Garrett’s example was a perfect summary of a group doing great work here.

Thanks everyone for your suggestions, congratulations to Garrett for winning the prize, and thanks to Luma for providing the prize. Garrett, your Luma will be in the mail soon!

The post Luma Giveaway Winner – Garrett Nay appeared first on Jono Bacon.

by Jono Bacon at November 29, 2016 12:08 AM

November 23, 2016

Elizabeth Krumbach

Holiday cards 2016!

Every year I send out a big batch of winter-themed holiday cards to friends and acquaintances online.

Reading this? That means you! Even if you’re outside the United States!

Send me an email at lyz@princessleia.com with your postal mailing address. Please put “Holiday Card” in the subject so I can filter it appropriately. Please do this even if I’ve sent you a card in the past, I won’t be reusing the list from last year.

Typical disclaimer: My husband is Jewish and we celebrate Hanukkah, but the cards are non-religious, with some variation of “Happy Holidays” or “Season’s Greetings” on them.

by pleia2 at November 23, 2016 07:06 PM

Jono Bacon

Microsoft and Open Source: A New Era?

Last week the Linux Foundation announced Microsoft becoming a Platinum member.

In the eyes of some, hell finally froze over. For many though, myself included, this was not an entirely surprising move. Microsoft are becoming an increasingly active member of the open source community, and they deserve credit for this continual stream of improvements.

When I first discovered open source in 1998, the big M were painted as a bit of a villain. This accusation was largely fair. The company went to great lengths to discredit open source, including comparing Linux to a cancer, patent litigation, and campaigns formed of misinformation and FUD. This rightly left a rather sour taste in the mouths of open source supporters.

The remnants of that sour taste are still strong in some. These folks will likely never trust the Redmond mammoth, their decisions, or their intent. While I am not condoning these prior actions from the company, I would argue that the steady stream of forward progress means that…and I know this will be a tough pill to swallow for some of you…it is time to forgive and forget.

Forward Progress

This forward progress is impressive. They released their version of FreeBSD for Azure. They partnered with Canonical to bring Ubuntu user-space to Windows (as well as supporting Debian on Azure and even building their own Linux distribution, the Azure Cloud Switch). They supported an open source version of .NET, known as Mono, later buying Xamarin who led this development and open sourced those components. They brought .NET core to Linux, started their own Linux certification, released a litany of projects (including Visual Studio Code) as open source, founded the Microsoft Open Technologies group, and then later merged the group into the wider organization as openness was a core part of the company.

Microsoft's Satya Nadella seems to have fallen in love.

Satya Nadella, seemingly doing a puppet show, without the puppet.

My personal experience with them has reflected this trend. I first got to know the company back in 2001 when I spoke at a DeveloperDeveloperDeveloper day in the UK. Over the years I flew out to Redmond to provide input on initiatives such as .NET, got to know the Microsoft Open Technologies group, and most recently signed the company as a client where I am helping them to build the next generation of their MVP and RD community. Microsoft are not begrudgingly supporting open source, they are actively pursuing it.

As such, this recent announcement from the Linux Foundation wasn’t a huge surprise to me, but was an impressive formal articulation of Microsoft’s commitment to open source. Leaders at Microsoft and the Linux Foundation should be both credited with this additional important step in the right direction, not just for Microsoft, but for the wider acceptance and growth of open source and collaboration.

Work In Progress

Now, some of the critics will be reading this and will cite many examples of Microsoft still acting as the big bad wolf. You are perfectly right to do so. So, let me zone in on this.

I am not suggesting they are perfect. They aren’t. Companies are merely vessels of people, some of which will still continue to have antiquated perspectives. Microsoft will be no different here. Of course, for all the great steps forward, sometimes there will be some steps back.

What I try to focus on however is the larger story and trends. I would argue that Microsoft is trending in the right direction based on many of their recent moves, including the ones I cited above.

Let’s not forget that this is a big company to turn around. With 114,000 employees and 40+ years of cultural heritage and norms, change understandably takes time. The challenge with change is that it doesn’t just require strategic, product, and leadership focus, but the real challenge is cultural change.

Culture at Microsoft seems to be changing.

Culture is something of an amorphous concept. It isn’t a specific thing you can point to. Culture is instead the aggregate of the actions and intent of the many. You can often make strategic changes that result in new products, services, and projects, but those achievements could be underpinned by a broken, divisive, and ugly culture.

As such, culture is hard to change and requires a mix of top-down leadership and bottom-up action.

From my experience of working with Microsoft, the move to a more open company is not merely based on product-based decisions, but it has percolated in the core culture of how the company operates. I have seen this in my day to day interactions with the company and with my consulting work there. I credit Satya Nadella (and likely many others) for helping to trigger these positive forward motions.

So, are they perfect? No. Are they an entirely different company? No. But are they making a concerted thoughtful effort to really understand and integrate openness into the company? I think so. Is the work complete? No. But do they deserve our support as fellow friends in the open source community? I believe so, yes.

The post Microsoft and Open Source: A New Era? appeared first on Jono Bacon.

by Jono Bacon at November 23, 2016 04:00 PM

November 22, 2016

Jono Bacon

2017 Community Leadership Events: An Update

This week I was delighted to see that we could take the wraps off a new event that I am running in conjunction with my friends at the Linux Foundation called the Community Leadership Conference. The event will be part of the Open Source Summit which was previously known as LinuxCon and I will be running it in Los Angeles from 11th – 13th Sep 2017 and Prague from 23rd – 25th Oct 2017.

Now, some of you may be wondering if this replaces or is different to the Community Leadership Summit in Portland/Austin. Let me add some clarity.

The Community Leadership Summit

The Community Leadership Summit takes place each year the weekend before OSCON. I can confirm that there will be another Community Leadership Summit in 2017 in Austin. We plan to announce this soon formally.

The Community Leadership Summit has the primary goal of bringing together community managers from around the world to discuss and debate community leadership principles. The event is an unconference and is focused on discussions as opposed to formal presentations. As such, and as with any unconference, the thrill of the event is the organic schedule and conversations that follow. Thus, CLS is a great event for those who are interested in playing an active role in furthering the art and science of community leadership more broadly in an organic way.

The Community Leadership Conference

The Community Leadership Conference, which will be part of the Open Source Summit in Los Angeles and Prague, has a slightly different format and focus.

CLC will instead be a traditional conference. My goal here is to bring together speakers from around the world to deliver presentations, panels, and other material that shares best practices, methods, and approaches in community leadership, specific to open source. CLC is not intended to shape the future of community leadership, but more to present best practices and principles for consumption, tailored to the needs of open source projects and organizations.

In Summary

So, in summary, the Community Leadership Conference is designed to be a place to consume community leadership best practices and principles via carefully curated presentations, panels, and networking. The Community Leadership Summit is designed to be more of an informal roll-up-your-sleeves summit where attendees discuss and debate community leadership to help shape how it evolves and grows.

As regular readers will know, I am passionate about evolving the art and science of community leadership and while CLS has been an important component in this evolution, I felt we needed to augment it with CLC. These two events, combined with the respective audiences of their shared conferences, and bolstered by my wonderful friends at O’Reilly and the Linux Foundation, are going to help us to evolve this art and science faster and more efficiently than ever.

I hope to see you all at either or both of these events!

The post 2017 Community Leadership Events: An Update appeared first on Jono Bacon.

by Jono Bacon at November 22, 2016 06:12 AM

November 21, 2016

Eric Hammond

Watching AWS CloudFormation Stack Status

live display of current event status for each stack resource

Would you like to be able to watch the progress of your new CloudFormation stack resources like this?

That’s what the output of the new aws-cloudformation-stack-status command looks like when I launch a new AWS Git-backed Static Website CloudFormation stack.

It shows me in real time which resources have completed, which are still in progress, and which, if any, have experienced problems.

Background

AWS provides a few ways to look at the status of resources in a CloudFormation stack including the stream of stack events in the Web console and in the aws-cli.

Unfortunately, these displays show multiple events for each resource (e.g., CREATE_IN_PROGRESS, CREATE_COMPLETE) and it’s difficult to match up all of the resource events by hand to figure out which resources are incomplete and still in progress.

Solution

I created a bit of wrapper code that goes around the aws cloudformation describe-stack-events command. It performs these operations (a rough sketch of the approach follows the list):

  1. Cuts the output down to the few fields that matter: status, resource name, type, event time.

  2. Removes all but the most recent status event for each stack resource.

  3. Sorts the output to put the resources with the most recent status changes at the top.

  4. Repeatedly runs this command so that you can see the stack progress live and know exactly which resource is taking the longest.
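To make the idea concrete, here is a rough Python/boto3 sketch of the same approach (a simplified illustration only; the actual script wraps the aws-cli and is linked below):

import time
import boto3

def watch_stack_events(stack_name, region, interval=5):
    """Print the most recent event for each stack resource, newest first."""
    cloudformation = boto3.client("cloudformation", region_name=region)
    while True:
        # Events come back newest first; pagination is ignored for brevity.
        events = cloudformation.describe_stack_events(StackName=stack_name)["StackEvents"]
        latest = {}
        for event in events:
            latest.setdefault(event["LogicalResourceId"], event)
        rows = sorted(latest.values(), key=lambda e: e["Timestamp"], reverse=True)
        print("\033[H\033[J", end="")  # crude terminal clear between refreshes
        for e in rows:
            print("{:<22} {:<30} {:<40} {:%H:%M:%S}".format(
                e["ResourceStatus"], e["LogicalResourceId"],
                e["ResourceType"], e["Timestamp"]))
        time.sleep(interval)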

I tossed the simple script up here in case you’d like to try it out:

GitHub: aws-cloudformation-stack-status

You can run it to monitor your CloudFormation stack with this command:

aws-cloudformation-stack-status --watch --region $region --stack-name $stack

Interrupt with Ctrl-C to exit.

Note: You will probably need to start your terminal out wider than 80 columns for a clean presentation.

Note: This does use the aws-cli, so installing and configuring that is a prerequisite.

Stack Delete Example

Here’s another example terminal session watching a stack-delete operation, including some skipped deletions (because of a retention policy). It finally ends with a “stack not found” error, which is exactly what we hope for after a stack has been deleted successfully. Again, the resources with the most recent state change events are at the top.

Note: These sample terminal replays cut out almost 40 minutes of waiting for the creation and deletion of the CloudFront distributions. You can see the real timestamps in the rightmost columns.

Original article and comments: https://alestic.com/2016/11/aws-cloudformation-stack-status/

November 21, 2016 09:00 AM

November 14, 2016

Eric Hammond

Optional Parameters For Pre-existing Resources in AWS CloudFormation Templates

stack creates new AWS resources unless user specifies pre-existing

Background

I like to design CloudFormation templates that create all of the resources necessary to implement the desired functionality without requiring a lot of separate, advanced setup. For example, the AWS Git-backed Static Website creates all of the interesting pieces including a CodeCommit Git repository, S3 buckets for web site content and logging, and even the Route 53 hosted zone.

Creating all of these resources is great if you are starting from scratch on a new project. However, you may sometimes want to use a CloudFormation template to enhance an existing account where one or more of the AWS resources already exist.

For example, consider the case where the user already has a CodeCommit Git repository and a Route 53 hosted zone for their domain. They still want all of the enhanced functionality provided in the Git-backed static website CloudFormation stack, but would rather not have to fork and edit the template code just to fit it in to the existing environment.

What if we could use the same CloudFormation template for different types of situations, sometimes plugging in pre-existing AWS resources, and other times letting the stack create the resources for us?

Solution

With assistance from Ryan Scott Brown, the Git-backed static website CloudFormation template now allows the user to optionally specify a number of pre-existing resources to be integrated into the new stack. If any of those parameters are left empty, then the CloudFormation template automatically creates the required resources.

Let’s walk through relevant pieces of the CloudFormation template code using the CodeCommit Git repository as an example of an optional resource. [Note: Code excerpts below may have been abbreviated and slightly altered for article clarity.]

In the CloudFormation template Parameters section, we allow the user to pass in the name of a CodeCommit Git repository that was previously created in the AWS account. If this parameter is specified, then the CloudFormation template uses the pre-existing repository in the new stack. If the parameter is left empty when the template is run, then the CloudFormation stack will create a new CodeCommit Git repository.

Parameters:
  PreExistingGitRepository:
    Description: "Optional Git repository name for pre-existing CodeCommit repository. Leave empty to have CodeCommit Repository created and managed by this stack."
    Type: String
    Default: ""

We add an entry to the Conditions section in the CloudFormation template that will indicate whether or not a pre-existing CodeCommit Git repository name was provided. If the parameter is empty, then we will need to create a new repository.

Conditions:
  NeedsNewGitRepository: !Equals [!Ref PreExistingGitRepository, ""]

In the Resources section, we create a new CodeCommit Git repository, but only on the condition that we need a new one (i.e., the user did not specify one in the parameters). If a pre-existing CodeCommit Git repository name was specified in the stack parameters, then this resource creation will be skipped entirely.

Resources:
  GitRepository:
    Condition: NeedsNewGitRepository
    Type: "AWS::CodeCommit::Repository"
    Properties:
      RepositoryName: !Ref GitRepositoryName
    DeletionPolicy: Retain

We then come to parts of the CloudFormation template where other resources need to refer to the CodeCommit Git repository. We need to use an If conditional to refer to the correct resource, since it might be a pre-existing one passed in a parameter or it might be one created in this stack.

Here’s an example where the CodePipeline resource needs to specify the Git repository name as the source of a pipeline stage.

Resources:
  CodePipeline:
    Type: "AWS::CodePipeline::Pipeline"
    [...]
      RepositoryName: !If [NeedsNewGitRepository, !Ref GitRepositoryName, !Ref PreExistingGitRepository]

We use the same conditional to place the name of the Git repository in the CloudFormation stack outputs so that the user can easily find out what repository is being used by the stack.

Outputs:
  GitRepositoryName:
    Description: Git repository name
    Value: !If [NeedsNewGitRepository, !Ref GitRepositoryName, !Ref PreExistingGitRepository]

We also want to show the URL for cloning the repository. If we created the repository in the stack, this is an easy attribute to query. If a pre-existing repository name was passed in, we can’t determine the correct URL, so we just output that it is not available and hope the user remembers how to access the repository they created in the past.

Outputs:
  GitCloneUrlHttp:
    Description: Git https clone endpoint
    Value: !If [NeedsNewGitRepository, !GetAtt GitRepository.CloneUrlHttp, "N/A"]

Read more from Amazon about the AWS CloudFormation Conditions that are used in this template.
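For completeness, here is a rough sketch of how such a parameter might be supplied when creating the stack with boto3. The stack name, template URL, and repository name here are hypothetical, and the CAPABILITY_IAM capability is an assumption based on the stack creating IAM resources:

import boto3

cloudformation = boto3.client("cloudformation")

cloudformation.create_stack(
    StackName="git-backed-static-website",  # hypothetical stack name
    TemplateURL="https://s3.amazonaws.com/my-bucket/static-website.yml",  # hypothetical location
    Parameters=[
        # Name of a pre-existing CodeCommit repository; leave the value empty
        # to have the stack create and manage a new repository instead.
        {"ParameterKey": "PreExistingGitRepository",
         "ParameterValue": "my-existing-repo"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # assumed: the template creates IAM resources
)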

Replacing a Stack Without Losing Important Resources

You may have noticed in the above code that we specify a DeletionPolicy of Retain for the CodeCommit Git repository. This keeps the repository from being deleted if and when the CloudFormation stack is deleted.

This prevents the accidental loss of what may be the master copy of the website source. It may still be deleted manually if you no longer need it after deleting the stack.

A number of resources in the Git-backed static website stack are retained, including the Route53 hosted zone, various S3 buckets, and the CodeCommit Git repository. Not coincidentally, all of these retained resources can be subsequently passed back into a new stack as pre-existing resources!

Though CloudFormation stacks can often be updated in place, sometimes I like to replace them with completely different templates. It is convenient to leave foundational components in place while deleting and replacing the other stack resources that connect them.

Original article and comments: https://alestic.com/2016/11/aws-cloudformation-optional-resources/

November 14, 2016 11:00 AM

November 13, 2016

Eric Hammond

Alestic.com Blog Infrastructure Upgrade

publishing new blog posts with “git push”

For the curious, the Alestic.com blog has been running for a while on the Git-backed Static Website CloudFormation stack using the AWS Lambda Static Site Generator Plugin for Hugo.

Not much has changed in the design because I had been using Hugo before. However, Hugo is now automatically run inside of an AWS Lambda function triggered by updates to a CodeCommit Git repository.

It has been a pleasure writing with transparent review and publication processes enabled by Hugo and AWS:

  • When I save a blog post change in my editor (written using Markdown), a local Hugo process on my laptop automatically detects the file change, regenerates static pages, and refreshes the view in my browser.

  • When I commit and push blog post changes to my CodeCommit Git repository, the Git-backed Static Website stack automatically regenerates the static blog site using Hugo and deploys to the live website served by AWS.

Blog posts I don’t want to go public yet can be marked as “draft” using Hugo’s content metadata format.

Bigger site changes can be developed and reviewed in a Git feature branch and merged to “master” when completed, automatically triggering publication.

I love it when technology gets out of my way and lets me focus on being productive.

Original article and comments: https://alestic.com/2016/11/alestic-blog-stack/

November 13, 2016 03:10 AM

November 07, 2016

Eric Hammond

Running aws-cli Commands Inside An AWS Lambda Function

even though aws-cli is not available by default in AWS Lambda

The AWS Lambda environments for each programming language (e.g., Python, Node, Java) already have the AWS client SDK packages pre-installed for those languages. For example, the Python AWS Lambda environment has boto3 available, which is ideal for connecting to and using AWS services in your function.

This makes it easy to use AWS Lambda as the glue for AWS. A function can be triggered by many different service events, and can respond by reading from, storing to, and triggering other services in the AWS ecosystem.

However, there are a few things that aws-cli currently does better than the AWS SDKs alone. For example, the following command is an efficient way to take the files in a local directory and recursively update a website bucket, uploading (in parallel) only the files that have changed, while setting important object attributes, including guessed MIME types:

aws s3 sync --delete --acl public-read LOCALDIR/ s3://BUCKET/

The aws-cli software is not currently pre-installed in the AWS Lambda environment, but we can fix that with a little effort.

Background

The key to solving this is to remember that aws-cli is available as a Python package. Mitch Garnaat reminded me of this when I was lamenting the lack of aws-cli in AWS Lambda, causing me to smack my virtual forehead. Amazon has already taught us how to install most Python packages, and we can apply the same process for aws-cli, though a little extra work is required, because a command line program is involved.

NodeJS/Java/Go developers: Don’t stop reading! We are using Python to install aws-cli, true, but this is a command line program. Once the command is installed in the AWS Lambda environment, you can invoke it using the system-command execution functions in your respective languages.

Steps

Here are the steps I followed to add aws-cli to my AWS Lambda function. Adjust to suit your particular preferred way of building AWS Lambda functions.

Create a temporary directory to work in, including paths for a temporary virtualenv, and an output ZIP file:

tmpdir=$(mktemp -d /tmp/lambda-XXXXXX)
virtualenv=$tmpdir/virtual-env
zipfile=$tmpdir/lambda.zip

Create the virtualenv and install the aws-cli Python package into it using a subshell:

(
  virtualenv $virtualenv
  source $virtualenv/bin/activate
  pip install awscli
)

Copy the aws command file into the ZIP file, but adjust the first (shebang) line so that it will run with the system python command in the AWS Lambda environment, instead of assuming python is in the virtualenv on our local system. This is the valuable nugget of information buried deep in this article!

rsync -va $virtualenv/bin/aws $tmpdir/aws
perl -pi -e '$_ = "#!/usr/bin/python\n" if $. == 1' $tmpdir/aws
(cd $tmpdir; zip -r9 $zipfile aws)

Copy the Python packages required for aws-cli into the ZIP file:

(cd $virtualenv/lib/python2.7/site-packages; zip -r9 $zipfile .)

Copy in your AWS Lambda function, other packages, configuration, and other files needed by the function code. These don’t need to be in Python.

cd YOURLAMBDADIR
zip -r9 $zipfile YOURFILES

Upload the ZIP file to S3 (or directly to AWS Lambda) and clean up:

aws s3 cp $zipfile s3://YOURBUCKET/YOURKEY.zip
rm -r $tmpdir

In your Lambda function, you can invoke aws-cli commands. For example, in Python, you might use:

import subprocess
command = ["./aws", "s3", "sync", "--acl", "public-read", "--delete",
           source_dir + "/", "s3://" + to_bucket + "/"]
print(subprocess.check_output(command, stderr=subprocess.STDOUT))

Note that you will need to specify the location of the aws command with a leading "./" or you could add /var/task (cwd) to the $PATH environment variable.
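For example, a small sketch of the PATH approach from inside a Python handler (adapt to your own function):

import os
import subprocess

# /var/task is where AWS Lambda unpacks the function ZIP file; adding it to
# PATH lets subprocess find the bundled "aws" command without the "./" prefix.
os.environ["PATH"] += ":/var/task"

print(subprocess.check_output(["aws", "--version"], stderr=subprocess.STDOUT))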

Example

This approach is used to add the aws-cli command to the AWS Lambda function used by the AWS Git-backed Static Website CloudFormation stack.

You can see the code that builds the AWS Lambda function ZIP file here, including the installation of the aws-cli command:

https://github.com/alestic/aws-git-backed-static-website/blob/master/build-upload-aws-lambda-function

Notes

  • I would still love to see aws-cli pre-installed on all the AWS Lambda environments. This simple change would remove quite a bit of setup complexity and would even let me drop my AWS Lambda function inline in the CloudFormation template. Eliminating the external dependency and having everything in one file would be huge!

  • I had success building awscli on Ubuntu for use in AWS Lambda, probably because all of the package requirements are pure Python. This approach does not always work. It is recommended you build packages on Amazon Linux so that they are compatible with the AWS Lambda environment.

  • The pip install -t DIR approach did not work for aws-cli when I tried it, which is why I went with virtualenv. Tips welcomed.

  • I am not an expert at virtualenv or Python, but I am persistent when I want to figure out how to get things to work. The above approach worked. I welcome improvements and suggestions from the experts.

Original article and comments: https://alestic.com/2016/11/aws-lambda-awscli/

November 07, 2016 03:00 PM

November 02, 2016

Jono Bacon

Luma Wifi Review and Giveaway

For some reason, wifi has always been the bane of my technological existence. Every house, every router, every cable provider…I have always suffered from bad wifi. I have tried to fix it and in most cases failed.

As such, I was rather excited when I discovered the Luma a little while ago. Put simply, the Luma is a wifi access point, but it comes in multiple units to provide repeaters around your home. The promise of Luma is that this makes it easier to bathe your home in fast and efficient wifi, and comes with other perks such as enhanced security, access controls and more.

So, I pre-ordered one and it arrived recently.

I rather like the Luma so I figured I would write up some thoughts. Stay tuned though, because I am also going to give one away to a lucky reader.

Setup

When it arrived I set it up and followed the configuration process. This was about as simple as you can imagine. The set came with three of these:

[Photo: Luma unit on a kitchen counter]

I plugged each one in turn and the Android app detected each one and configured it. It even recommended where in the house I should put them.

So, I plonked the different Lumas around my house and I was getting pretty respectable speeds.

Usage

Of course, the very best wifi routers blend into the background and don’t require any attention. After a few weeks of use, this has been the case with the Luma. They just sit there working and we have had great wifi across the house.

There are though some interesting features in the app that are handy to have.

Firstly, device management is simple. You can view, remove, and block Internet to different devices and even group devices by person. So, for example, if your neighbors use your Internet from time to time, you can group their devices and switch them off if you need precious bandwidth.

Viewing these devices from an app and not an archaic admin panel also makes auditing devices simple. For example, I saw two weird-looking devices on our network and after some research they turned out to be Kindles. Thanks, Amazon, for not having descriptive identifiers for the devices, by the way. 😉

Another neat feature is content filtering. If you have kids and don’t want them to see some naughty content online, you can filter by device or across the whole network. You could also switch off their access when dinner is ready.

So, overall, I am pretty happy with the Luma. Great hardware, simple setup, and reliable execution.

Win a Luma

I want to say a huge thank-you to the kind folks at Luma, because they provided me with an additional Luma to give away here!

Participating is simple. As you know, my true passion in life is building powerful, connected, and productive communities. So, unsurprisingly, I have a question that relates to community:

What is the most interesting, productive, and engaging community you have ever seen?

To participate, simply share your answer as a comment on this post. Be sure to tell me which community you are nominating, share pertinent links, and tell me why that community is doing great work. These don’t have to be tech communities – they can be anything: crafts, arts, sports, charities, or anything else. I want you to sell me on why the community is interesting and does great work.

Please note: if you include a lot of links, or haven’t posted here before, sometimes comments get stuck in the moderation queue. Rest assured though, I am regularly reviewing the queue and your comment will appear – please don’t submit multiple comments that are the same!

The deadline for submissions is 12pm Pacific time on Fri 18th Nov 2016.

I will then pick my favorite answer and announce the winners. My decision is final and based on what I consider to be the most interesting submission, so no complaining, people. Thanks again to Luma for the kind provision of the prize!

The post Luma Wifi Review and Giveaway appeared first on Jono Bacon.

by Jono Bacon at November 02, 2016 03:25 PM

October 31, 2016

Eric Hammond

AWS Lambda Static Site Generator Plugins

starting with Hugo!

A week ago, I presented a CloudFormation template for an AWS Git-backed Static Website stack. If you are not familiar with it, please click through and review the features of this complete Git + static website CloudFormation stack.

This weekend, I extended the stack to support a plugin architecture to run the static site generator of your choosing against your CodeCommit Git repository content. You specify the AWS Lambda function at stack launch time using CloudFormation parameters (ZIP location in S3).

The first serious static site generator plugin is for Hugo, but others can be added with or without my involvement and used with the same unmodified CloudFormation template.

The Git-backed static website stack automatically invokes the static site generator whenever the site source is updated in the CodeCommit Git repository. It then syncs the generated static website content to the S3 bucket where the stack serves it over a CDN using https with DNS served by Route 53.

I have written three AWS Lambda static site generator plugins to demonstrate the concept and to serve as templates for new plugins:

  1. Identity transformation plugin - This copies the entire Git repository content to the static website with no modifications. This is currently the default plugin for the static website CloudFormation template.

  2. Subdirectory plugin - This plugin is useful if your Git repository has files that should not be included as part of the static site. It publishes a specified subdirectory (e.g., “htdocs” or “public-html”) as the static website, keeping the rest of your repository private.

  3. Hugo plugin - This plugin runs the popular Hugo static site generator. The Git repository should include all source templates, content, theme, and config.

You are welcome to use any of these plugins when running an AWS Git-backed Static Website stack. The documentation in each of the above plugin repositories describes how to set the CloudFormation template parameters on stack create.

You may also write your own AWS Lambda function static site generator plugin using one of the above as a starting point. Let me know if you write plugins; I may add new ones to the list above.

The sample AWS Lambda handler plugin code takes care of downloading the source and uploading the resulting site, and can be copied as is. All you have to do is fill in the “generate_static_site” code to generate the site from the source.

The plugin code for Hugo is basically this:

import shlex
import subprocess

def generate_static_site(source_dir, site_dir, user_parameters):
    command = ["./hugo", "--source=" + source_dir, "--destination=" + site_dir]
    if user_parameters.startswith("-"):
        command.extend(shlex.split(user_parameters))
    print(subprocess.check_output(command, stderr=subprocess.STDOUT))
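For comparison, the identity transformation plugin described above needs little more than a recursive copy. Here is a rough sketch of that idea (a simplified illustration, not the plugin's actual code):

import os
import shutil

def generate_static_site(source_dir, site_dir, user_parameters):
    # Publish the Git repository content unchanged as the static website.
    if os.path.exists(site_dir):
        shutil.rmtree(site_dir)
    shutil.copytree(source_dir, site_dir)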

I have provided build scripts so that you can build the sample AWS Lambda functions yourself, because you shouldn’t trust other people’s black-box code if you can help it. That said, I have also made it easy to use pre-built AWS Lambda function ZIP files to try this out.

These CloudFormation template and AWS Lambda functions are very new and somewhat experimental. Please let me know where you run into issues using them and I’ll update documentation. I also welcome pull requests, especially if you work with me in advance to make sure the proposed changes fit the vision for this stack.

Original article and comments: https://alestic.com/2016/10/aws-static-site-generator-plugins/

October 31, 2016 09:41 AM

October 24, 2016

Eric Hammond

AWS Git-backed Static Website

with automatic updates on changes in CodeCommit Git repository

A number of CloudFormation templates have been published that generate AWS infrastructure to support a static website. I’ll toss another one into the ring with a feature I haven’t seen yet.

In this stack, changes to the CodeCommit Git repository automatically trigger an update to the content served by the static website. This automatic update is performed using CodePipeline and AWS Lambda.

This stack also includes features like HTTPS (with a free certificate), www redirect, email notification of Git updates, complete DNS support, web site access logs, infinite scaling, zero maintenance, and low cost.

One of the most exciting features is the launch-time ability to specify an AWS Lambda function plugin (ZIP file) that defines a static site generator to run on the Git repository site source before deploying to the static website. A sample plugin is provided for the popular Hugo static site generator.

Here is an architecture diagram outlining the various AWS services used in this stack. The arrows indicate the major direction of data flow. The heavy arrows indicate the flow of website content.

CloudFormation stack architecture diagram

Sure, this does look a bit complicated for something as simple as a static web site. But remember, this is all set up for you with a simple aws-cli command (or AWS Web Console button push) and there is nothing you need to maintain except the web site content in a Git repository. All of the AWS components are managed, scaled, replicated, protected, monitored, and repaired by Amazon.

The input to the CloudFormation stack includes:

  • Domain name for the static website

  • Email address to be notified of Git repository changes

The output of the CloudFormation stack includes:

  • DNS nameservers for you to set in your domain registrar

  • Git repository endpoint URL

Though I created this primarily as a proof of concept and demonstration of some nice CloudFormation and AWS service features, this stack is suitable for use in a production environment if its features match your requirements.

Speaking of which, no CloudFormation template meets everybody’s needs. For example, this one conveniently provides complete DNS nameservers for your domain. However, that also means that it assumes you only want a static website for your domain name and nothing else. If you need email or other services associated with the domain, you will need to modify the CloudFormation template, or use another approach.

How to run

To fire up an AWS Git-backed Static Website CloudFormation stack, you can click this button and fill out a couple input fields in the AWS console:

Launch CloudFormation stack

I have provided copy+paste aws-cli commands in the GitHub repository. The GitHub repository provides all the source for this stack including the AWS Lambda function that syncs Git repository content to the website S3 bucket:

AWS Git-backed Static Website GitHub repo

If you have aws-cli set up, you might find it easier to use the provided commands than the AWS web console.

When the stack starts up, two email messages will be sent to the address associated with your domain’s registration and one will be sent to your AWS account address. Open each email and approve these:

  • ACM Certificate (2)
  • SNS topic subscription

The CloudFormation stack will be stuck until the ACM certificates are approved. The CloudFront distributions are created afterwards and can take over 30 minutes to complete.

Once the stack completes, get the nameservers for the Route 53 hosted zone, and set these in your domain’s registrar. Get the CodeCommit endpoint URL and use this to clone the Git repository. There are convenient aws-cli commands to perform these functions in the project’s GitHub repository linked above.
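If you would rather script those two lookups than copy them from the console output, a rough boto3 equivalent looks something like this (the hosted zone ID and repository name are hypothetical; the stack outputs report the real values):

import boto3

route53 = boto3.client("route53")
zone = route53.get_hosted_zone(Id="Z1234567890ABC")  # hypothetical zone ID
print(zone["DelegationSet"]["NameServers"])  # set these at your domain registrar

codecommit = boto3.client("codecommit")
repo = codecommit.get_repository(repositoryName="my-site")  # hypothetical repository name
print(repo["repositoryMetadata"]["cloneUrlHttp"])  # use this URL to clone the repository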

AWS Services

The stack uses a number of AWS services including:

  • CloudFormation - Infrastructure management.

  • CodeCommit - Git repository.

  • CodePipeline - Passes Git repository content to AWS Lambda when modified.

  • AWS Lambda - Syncs Git repository content to S3 bucket for website

  • S3 buckets - Website content, www redirect, access logs, CodePipeline artifacts

  • CloudFront - CDN, HTTPS management

  • Certificate Manager - Creation of free certificate for HTTPS

  • CloudWatch - AWS Lambda log output, metrics

  • SNS - Git repository activity notification

  • Route 53 - DNS for website

  • IAM - Manage resource security and permissions

Cost

As far as I can tell, this CloudFormation stack currently costs around $0.51 per month in a new AWS account with nothing else running, a reasonable amount of storage for the web site content, and up to 5 Git users. This minimal cost is due to there being no free tier for Route 53 at the moment.

If you have too many GB of content, too many tens of thousands of requests, etc., you may start to see additional pennies being added to your costs.

If you stop and start the stack, it will cost an additional $1 each time because of the odd CodePipeline pricing structure. See the AWS pricing guides for complete details, and monitor your account spending closely.

Notes

  • This CloudFormation stack will only work in regions that have all of the required services and features available. The only one I’m sure about is us-east-1. Let me know if you get it to work elsewhere.

  • This CloudFormation stack uses an AWS Lambda function that is installed from the run.alestic.com S3 bucket provided by Eric Hammond. You are welcome to use the provided script to build your own AWS Lambda function ZIP file, upload it to S3, and specify the location in the launch parameters.

  • Git changes are not reflected immediately on the website. It takes a minute for CodePipeline to notice the change; a minute to get the latest Git branch content, ZIP, upload to S3; and a minute for the AWS Lambda function to download, unzip, and sync the content to the S3 bucket. Then the CloudFront CDN TTL may prevent the changes from being seen for another minute. Or so.

Thanks

Thanks to Mitch Garnaat for pointing me in the right direction for getting the aws-cli into an AWS Lambda function. This was important because “aws s3 sync” is much smarter than the other currently available options for syncing website content with S3.

Thanks to AWS Community Hero Onur Salk for pointing me in the direction of CodePipeline for triggering AWS Lambda functions off of CodeCommit changes.

Thanks to Ryan Brown for already submitting a pull request with lots of nice cleanup of the CloudFormation template, teaching me a few things in the process.

Some other resources you might find useful:

Creating a Static Website Using a Custom Domain - Amazon Web Services

S3 Static Website with CloudFront and Route 53 - AWS Sysadmin

Continuous Delivery with AWS CodePipeline - Onur Salk

Automate CodeCommit and CodePipeline in AWS CloudFormation - Stelligent

Running AWS Lambda Functions in AWS CodePipeline using CloudFormation - Stelligent

You are welcome to use, copy, and fork this repository. I would recommend contacting me before spending time on pull requests, as I have specific limited goals for this stack and don’t plan to extend its features much more.

[Update 2016-10-28: Added Notes section.]

[Update 2016-11-01: Added note about static site generation and Hugo plugin.]

Original article and comments: https://alestic.com/2016/10/aws-git-backed-static-website/

October 24, 2016 10:00 AM

October 23, 2016

Akkana Peck

Los Alamos Artists Studio Tour

[JunkDNA Art at the LA Studio Tour] The Los Alamos Artists Studio Tour was last weekend. It was a fun and somewhat successful day.

I was borrowing space in the studio of the fabulous scratchboard artist Heather Ward, because we didn't have enough White Rock artists signed up for the tour.

Traffic was sporadic: we'd have long periods when nobody came by (I was glad I'd brought my laptop, and managed to get some useful development done on track management in pytopo), punctuated by bursts where three or four groups would show up all at once.

It was fun talking to the people who came by. They all had questions about both my metalwork and Heather's scratchboard, and we had a lot of good conversations. Not many of them were actually buying -- I heard the same thing afterward from most of the other artists on the tour, so it wasn't just us. But I still sold enough that I more than made back the cost of the tour. (I hadn't realized, prior to this, that artists have to pay to be in shows and tours like this, so there's a lot of incentive to sell enough at least to break even.) Of course, I'm nowhere near covering the cost of materials and equipment. Maybe some day ...

[JunkDNA Art at the LA Studio Tour]

I figured snacks are always appreciated, so I set out my pelican snack bowl -- one of my first art pieces -- with brownies and cookies in it, next to the business cards.

It was funny how wrong I was in predicting what people would like. I thought everyone would want the roadrunners and dragonflies; in practice, scorpions were much more popular, along with a sea serpent that had been sitting on my garage shelf for a month while I tried to figure out how to finish it. (I do like how it eventually came out, though.)

And then after selling both my scorpions on Saturday, I rushed to make two more on Saturday night and Sunday morning, and of course no one on Sunday had the slightest interest in scorpions. Dave, who used to have a foot in the art world, tells me this is typical, and that artists should never make what they think the market will like; just go on making what you like yourself, and hope it works out.

Which, fortunately, is mostly what I do at this stage, since I'm mostly puttering around for fun and learning.

Anyway, it was a good learning experience, though I was a little stressed getting ready for it and I'm glad it's over. Next up: a big spider for the front yard, before Halloween.

October 23, 2016 02:17 AM

October 22, 2016

Elizabeth Krumbach

Simcoe’s October Checkup

On October 13th MJ and I took Simcoe in to the vet for her quarterly checkup. The last time she had been in was back in June.

As usual, she wasn’t thrilled about this vet visit plan.

This time her allergies were flaring up and we were preparing to increase her dosage of Atopica to fight back on some of the areas she was scratching and breaking out. The poor thing continues to suffer from constipation, so we’re continuing to try to give her wet food with pumpkin or fiber mixed in, but it’s not easy since food isn’t really her thing. We also have been keeping an eye on her weight and giving her an appetite stimulant here and there when I’m around to monitor her. Back in June her weight was at 8.4lbs, and this time she’s down to 8.1. I hope to spend more time giving her the stimulant after my next trip.

Sadly her bloodwork related to kidney values continues to worsen. Her CRE levels are the worst we’ve ever seen, with them shooting up higher than when she first crashed and we were notified of her renal failure back in 2011, almost five years ago. From 5.5 in June, she’s now at a very concerning 7.0.

Her BUN has stayed steady at 100, the same as it was in June.

My travel has been pretty hard on her, and I feel incredibly guilty about this. She’s more agitated and upset than we’d like to see so the vet prescribed a low dose of Alprazolam that she can be given during the worst times. We’re going to reduce her Calcitriol, but otherwise are continuing with the care routine.

It’s upsetting to see her decline in this way, and I have noticed a slight drop in energy as well. I’m still hoping we have a lot more time with my darling kitten-cat, but she turns ten next month and these values are definitely cause for concern.

But let’s not leave it on a sad note. The other day she made herself at home in a box that had the sun pointed directly inside it. SO CUTE!

She also tried to go with MJ on a business trip this week.

I love this cat.

by pleia2 at October 22, 2016 02:24 AM

October 20, 2016

Jono Bacon

All Things Open Next Week – MCing, Talks, and More

Last year I went to All Things Open for the first time and did a keynote. You can watch the keynote here.

I was really impressed with All Things Open last year and have subsequently become friends with the principal organizer, Todd Lewis. I loved how the team put together a show with the right balance of community and corporation, great content, exhibition and more.

All Things Open 2016 is happening next week and I will be participating in a number of areas:

  • I will be MCing the keynotes for the event. I am looking forward to introducing such a tremendous group of speakers.
  • Jeremy King, CTO of Walmart Labs, and I will be having a fireside chat. I am looking forward to delving into the work they are doing.
  • I will also be participating in a panel about openness and collaboration, and delivering a talk about building a community exoskeleton.
  • It is looking pretty likely I will be doing a book signing with free copies of The Art of Community to be given away thanks to my friends at O’Reilly!

The event takes place in Raleigh, and if you haven’t registered yet, do so right here!

Also, a huge thanks to Red Hat and opensource.com for flying me out. I will be joining the team for a day of meetings prior to All Things Open – looking forward to the discussions!

The post All Things Open Next Week – MCing, Talks, and More appeared first on Jono Bacon.

by Jono Bacon at October 20, 2016 08:20 PM

October 17, 2016

Elizabeth Krumbach

Seeking a new role

Today I was notified that I am being laid off from the upstream OpenStack Infrastructure job I have through HPE. It’s a workforce reduction and our whole team at HPE was hit. I love this job. I work with a great team on the OpenStack Infrastructure team. HPE has treated me very well, supporting travel to conferences I’m speaking at, helping to promote my books (Common OpenStack Deployments and The Official Ubuntu Book, 9th and 8th editions) and other work. I spent almost four years there and I’m grateful for what they did for my career.

But now I have to move on.

I’ve worked as a Linux Systems Administrator for the past decade and I’d love to continue that. I live in San Francisco so there are a lot of ops positions around here that I can look at, but I really want to find a place where my expertise with open source, writing, and public speaking will be used and appreciated. I’d also be open to a more Community or Developer Evangelist role that leverages my systems and cloud background.

Whatever I end up doing next, the tl;dr (too long; didn’t read) version of what I need in my next role is as follows:

  • Most of my job to be focused on open source
  • Support for travel to conferences where I speak (6-12 per year)
  • Work from home
  • Competitive pay

My resume is over here: http://elizabethkjoseph.com

Now the long version, and a quick note about what I do today.

OpenStack project Infrastructure Team

I’ve spent nearly four years working full time on the OpenStack project Infrastructure Team. We run all the services that developers on the OpenStack project interact with on a daily basis, from our massive Continuous Integration system to translations and the Etherpads. I love it there. I also just wrote a book about OpenStack.

HPE has paid me to do this upstream OpenStack project Infrastructure work full time, but we have team members from various companies. I’d love to find a company in the OpenStack ecosystem willing to pay for me to continue this and support me like HPE did. All the companies who use and contribute to OpenStack rely upon the infrastructure our team provides, and as a root/core member of this team I have an important role to play. It would be a shame for me to have to leave.

However, I am willing to move on from this team and this work for something new. During my career thus far I’ve spent time working on both the Ubuntu and Debian projects, so I do have experience with other large open source projects, and with reducing my involvement in them as my life dictates.

Most of my job to be focused on open source

This is extremely important to me. I’ve spent the past 15 years working intensively in open source communities, from Linux Users Groups to small and large open source projects. Today I work on a team where everything we do is open source. All system configs, Puppet modules, everything but the obvious private data that needs to be private for the integrity of the infrastructure (SSH keys, SSL certificates, passwords, etc). While I’d love a role where this is also the case, I realize how unrealistic it is for a company to have such an open infrastructure.

An alternative would be a position where I’m one of the ops people who understands the tooling (probably from gaining an understanding of it internally) and then going on to help manage the projects that have been open sourced by the team. I’d make sure best practices are followed for the open sourcing of things, that projects are paid attention to and contributors outside the organization are well-supported. I’d also go to conferences to present on this work, write about it on a blog somewhere (company blog? opensource.com?) and be encouraging and helping other team members do the same.

Support for travel to conferences where I speak (6-12 per year)

I speak a lot and I’m good at it. I’ve given keynotes at conferences in Europe, South America and right here in the US. Any company I go to work for will need to support me in this by giving me the time to prepare and give talks, and by compensating me for travel for conferences where I’m speaking.

Work from home

I’ve been doing this for the past ten years and I’d really struggle to go back into an office. Since operations, open source, and travel don’t require me to be in an office, I’d prefer to stick with the flexibility and time working from home gives me.

For the right job I may be willing to consider going into an office or visiting client/customer sites (SF Bay Area is GREAT for this!) once a week, or some kind of arrangement where I travel to a home office for a week here and there. I can’t relocate for a position at this time.

Competitive pay

It should go without saying, but I do live in one of the most expensive places in the world and need to be compensated accordingly. I love my work, I love open source, but I have bills to pay and I’m not willing to compromise on this at this point in my life.

Contact me

If you think your organization would be interested in someone like me and can help me meet my requirements, please reach out via email at lyz@princessleia.com

I’m pretty sad today about the passing of what’s been such a great journey for me at HPE and in the OpenStack community, but I’m eager to learn more about the doors this change is opening up for me.

by pleia2 at October 17, 2016 11:23 PM

October 11, 2016

Akkana Peck

New Mexico LWV Voter Guides are here!

[Vote button] I'm happy to say that our state League of Women Voters Voter Guides are out for the 2016 election.

My grandmother was active in the League of Women Voters most of her life (at least after I was old enough to be aware of such things). I didn't appreciate it at the time -- and I also didn't appreciate that she had been born in a time when women couldn't legally vote, and the 19th amendment, giving women the vote, was ratified just a year before she reached voting age. No wonder she considered the League so important!

The LWV continues to work to extend voting to people of all genders, races, and economic groups -- especially important in these days when the Voting Rights Act is under attack and so many groups are being disenfranchised. But the League is important for another reason: local LWV chapters across the country produce detailed, non-partisan voter guides for each major election, which are distributed free of charge to voters. In many areas -- including here in New Mexico -- there's no equivalent of the "Legislative Analyst" who writes the lengthy analyses that appear on California ballots weighing the pros, cons and financial impact of each measure. In the election two years ago, not that long after Dave and I moved here, finding information on the candidates and ballot measures wasn't easy, and the LWV Voter Guide was by far the best source I saw. It's the main reason I joined the League, though I also appreciate the public candidate forums and other programs they put on.

LWV chapters are scrupulous about collecting information from candidates in a fair, non-partisan way. Candidates' statements are presented exactly as they're received, and all candidates are given the same specifications and deadlines. A few candidates ignored us this year and didn't send statements despite repeated emails and phone calls, but we did what we could.

New Mexico's state-wide voter guide -- the one I was primarily involved in preparing -- is at New Mexico Voter Guide 2016. It has links to guides from three of the four local LWV chapters: Los Alamos, Santa Fe, and Central New Mexico (Albuquerque and surrounding areas). The fourth chapter, Las Cruces, is still working on their guide and they expect it soon.

I was surprised to see that our candidate information doesn't include links to websites or social media. Apparently that's not part of the question sheet they send out, and I got blank looks when I suggested we should make sure to include that next time. The LWV does a lot of important work but they're a little backward in some respects. That's definitely on my checklist for next time, but for now, if you want a candidate's website, there's always Google.

I also helped a little on Los Alamos's voter guide, making suggestions on how to present it on the website (I maintain the state League website but not the Los Alamos site), and participated in the committee that wrote the analysis and pro and con arguments for our contentious charter amendment proposal to eliminate the elective office of sheriff. We learned a lot about the history of the sheriff's office here in Los Alamos, and about state laws and insurance rules regarding sheriffs, and I hope the important parts of what we learned are reflected in both sides of the argument.

The Voter Guides also have a link to a Youtube recording of the first Los Alamos LWV candidate forum, featuring NM House candidates, DA, Probate judge and, most important, the debate over the sheriff proposition. The second candidate forum, featuring US House of Representatives, County Council and County Clerk candidates, will be this Thursday, October 13 at 7 (refreshments at 6:30). It will also be recorded thanks to a contribution from the AAUW.

So -- busy, busy with election-related projects. But I think the work is mostly done (except for the one remaining forum), the guides are out, and now it's time to go through and read the guides. And then the most important part of all: vote!

October 11, 2016 10:08 PM