Planet Ubuntu California

December 05, 2016

Elizabeth Krumbach

Vacation Home in Pennsylvania

This year MJ and I embarked on a secret mission: Buy a vacation home in Pennsylvania.

It was a decision we’d mulled over for a couple of years, and the state of the real estate market, along with where we are in our lives and careers and our frequent visits back to the Philadelphia area, finally made the stars align to make it happen. With the help of family local to the area, including one who is a real estate agent, we spent the past few trips looking at houses and making decisions. In August we started signing the paperwork to take possession of a new home in November.

With the timing of our selection, we were able to pick out cabinets, counter tops and some of the other non-architectural options in the home. Admittedly none of that is my thing, but it’s still nice that we were able to put our touch on the end result. As we prepared for the purchase, MJ spent a lot of time making plans for taking care of the house and handling things like installations, deliveries and the move of our items from storage into the house.

In October we also bought a car that we’d be keeping at the house in Philadelphia, though we did enjoy it in California for a few weeks.

On November 15th we met at the title company office and signed the final paperwork.

The house was ours!

The next day I flew to Germany for a conference and MJ headed back to San Francisco. I enjoyed the conference and a few days in Germany, but I was eager to get back to the house.

Upon my return we had our first installation. Internet! And backup internet.

MJ came back into town for Thanksgiving, which we enjoyed with family. The day after was the big move from storage into the house. Our storage units held not only our own things that we’d left in Pennsylvania, but also everything from MJ’s grandparents, including key contents of their former vacation home, which I never saw. We moved his grandmother into assisted care several years ago and had been keeping their things until we got a larger home in California. With the house here in Pennsylvania, we decided to use some of those pieces to furnish it. It also means I have a lot of boxes to go through.

Before MJ left to head back to work in San Francisco we did get a few things unpacked, including Champagne glasses, which meant that on the Saturday night following move day we were able to pick up a proper bottle of Champagne and spend the evening together in front of the fireplace to celebrate.

I’d been planning on taking some time off following the layoff from my job as I consider new opportunities in the coming year. It ended up working well, since I’ve been able to do that, plus spend the past week here at the Philadelphia house unpacking and getting things set up. Several of the days I’ve also had to be at the house to receive deliveries and be present for installs of all kinds, to make sure the house is ready and secure (cameras!) for us to properly enjoy as soon as possible. Today is window blinds day. I’m getting to enjoy it some too: between all these tasks I’ve spent time with local friends and family, done some reading in front of the fireplace, and enjoyed a beautiful new Bluetooth speaker playing music all day. The house doesn’t have a television yet, but I’ve also curled up to watch a few episodes on my tablet here and there in the evenings.

There have also been some great sunsets in the neighborhood. I sure missed the Pennsylvania autumn sights and smells.

And not all the unpacking has been laborious. I found MJ’s telescope from years ago in storage and was able to set it up the other night. I’m looking forward to a clear night to try it out.

Tomorrow I’m flying off yet again for a conference and then to spend at least a week at home back in San Francisco. We’ll be back very soon though, planning on spending at least the eight days of Hanukkah here, and possibly flying in earlier if we can line up some of the other work we need to get done.

by pleia2 at December 05, 2016 07:21 PM

December 04, 2016

Elizabeth Krumbach

Breathtaking Barcelona

My father once told me that Madrid was his favorite city and that he generally loved Spain. When my aunt shipped me a series of family slides last year I was delighted to find ones from Madrid in the mix, and I uploaded the album: Carl A. Krumbach – Spain 1967. I wish I had asked him why he loved Madrid, but in October I had the opportunity to discover for myself why I now love Spain.

I landed in Barcelona the last week of October. It was a beautiful time to visit, with weather that wasn’t too hot or too cold. It rained overnight a couple of times and a bit some days, but not enough to deter activities, and I was busy with a conference during most of the days anyway. It was also warm enough to go swimming in the Mediterranean, though I failed to avail myself of this opportunity. The day I got in I met up with a couple of friends to go to the aquarium and walk around the coastline, and I was able to touch the sea for the first time. That evening I also had the first of three seafood paellas that I enjoyed throughout the week. So good.

The night life was totally a thing. Many places would offer tapas along with drinks, so one night a bunch of us went out and just ate and drank our way through the Gothic Quarter. The restaurants also served late, often not even starting dinner service until 8PM. One night at midnight we found ourselves at a steakhouse, dining on a giant steak that served the table and drinking a couple of bottles of cava. Oh, the cava: it was plentiful and inexpensive. As someone who lives in California these days I felt a bit bad about betraying my beloved California wine, but it was really good. I also enjoyed the sangrias.

On a couple of mornings, after evenings when I didn’t let the drinks get the better of me, I also went out for a run. Running along the beaches in Barcelona was a tiny slice of heaven. It was also wonderful to just go sit by the sea one evening when I needed some time away from the conference chaos.


Seafood paella lunch for four! We also had a couple beers.

All of this happened before I even got out to do much tourist stuff. Saturday was my big day for seeing the famous sights. Early in the week I had reserved tickets to see the Sagrada Familia Basilica. I like visiting religious buildings when I travel because they tend to be on the extravagant side. Plus, back at the OpenStack Summit in Paris we heard from a current architect of the building, and I’ve since seen a documentary about the building and its nineteenth-century architect, Antoni Gaudí. I was eager to see it, but nothing quite prepared me for the experience. I had tickets for 1:30PM and was there right on time.


Sagrada Familia selfie!

It was the most amazing place I’ve ever been.

The architecture sure is unusual, but once you let that go and just enjoy it, everything comes together in a calming way that I’ve never quite experienced before. The use of every color through the hundreds of stained glass windows was astonishing.

I didn’t do the tower tour on this trip because once I realized how special this place was I wanted to save something new to do there the next time I visit.

The rest of my day was spent taking one of the tourist buses around town to get a taste of a bunch of the other sights. I got a glimpse of a couple more buildings by Gaudí. In the middle of the afternoon I stopped at a tapas restaurant across from La Monumental, a former bullfighting ring. Bullfighting was outlawed there several years ago, but the building is still used for other events and is worth seeing for its beautiful tiled exterior.

I also walked through the Arc de Triomf and made my way over to the Barcelona Cathedral. After the tour bus brought me back to the stop near my hotel I spent the rest of the late afternoon enjoying some time at the beach.

That evening I met up with my friend Clint to do one last wander around the area. We stopped at the beach and had some cava and cheese. From there we went to dinner where we split a final paella and bottle of cava. Dessert was a Catalan cream, which is a lot like a crème brûlée but with cinnamon, yum!

As much as I wanted to stay longer and enjoy the gorgeous weather, the next morning I was scheduled to return home.

I loved Barcelona. It stole my heart like no other European city ever has and it’s now easily one of my favorite cities. I’ll be returning, hopefully sooner rather than later.

More photos from my adventures in Barcelona here: https://www.flickr.com/photos/pleia2/albums/72157674260004081

by pleia2 at December 04, 2016 03:18 AM

December 02, 2016

Elizabeth Krumbach

OpenStack book and Infra team at the Ocata Summit

At the end of October I attended the OpenStack Ocata Summit in beautiful Barcelona. My participation in this one was bittersweet. It was the first summit following the release of our Common OpenStack Deployments book, and OpenStack Infrastructure tooling was featured in a short keynote on Wednesday morning, making for quite the exciting summit. Unfortunately it also marked my last week with HPE and an uncertain future with regard to my continued full-time participation with the OpenStack Infrastructure team. It was also the last OpenStack Summit where the conference and design summit would be hosted together, so the next several months will be worth keeping an eye on community-wise. Still, I largely took the position of assuming I’d continue to be able to work on the team, just with more caution regarding the work I was signing up for.

The first thing that I discovered during this summit was how amazing Barcelona is. The end of October presented us with some amazing weather for walking around and the city doesn’t go to sleep early, so we had plenty of time in the evenings to catch up with each other over drinks and scrumptious tapas. It worked out well since there were fewer sponsored parties in the evenings at this summit and attendance seemed limited at the ones that existed.

The high point for me at the summit was having the OpenStack Infrastructure tooling for handling our fleet of compute instances featured in a keynote! Given my speaking history, I was picked from the team to be up on the big stage with Jonathan Bryce to walk through a demonstration where we removed one of our US cloud providers and added three more in Europe. While the change was landing and tests started queuing up, we also took time to talk about how tests are run against OpenStack patch sets across our various cloud providers.


Thanks to Johanna Koester for taking this picture (source)

It wasn’t just me presenting though. Clark Boylan and Jeremy Stanley were sitting in the front row making sure the changes landed and everything went according to plan during the brief window that this demonstration took up during the keynote. I’m thrilled to say that this live demonstration was actually the best run we had of all the testing; seeing all the tests start running on our new providers live on stage in front of such a large audience was pretty exciting. The team has built something really special here, and I’m glad I had the opportunity to help highlight that in the community with a keynote.


Mike Perez and David F. Flanders sitting next to Jeremy and Clark as they monitor demonstration progress. Photo credit for this one goes to Chris Hoge (source)

The full video of the keynote is available here: Demoing the World’s Largest Multi-Cloud CI Application

A couple of conference talks were presented by members of the Infrastructure team as well. On Tuesday Colleen Murphy, Paul Belanger and Ricardo Carrillo Cruz presented on the team’s Infra-Cloud. As I’ve written about before, the team has built a fully open source OpenStack cloud using the community Puppet modules and donated hardware and data center space from Hewlett Packard Enterprise. This talk outlined the architecture of that cloud, some of the challenges they’ve encountered, statistics on how it’s doing now, and future plans. Video from their talk is here: InfraCloud, a Community Cloud Managed by the Project Infrastructure Team.

James E. Blair also gave a talk during the conference, this time on Zuul version 3. This version of Zuul has been under development for some time, so this was a good opportunity to update the community on the history of the Zuul project in general and why it exists, the status of ongoing efforts with an eye on v3, and the problems it’s trying to solve. I’m also in love with his slide deck: it was all text-based (including some “animations”!) and done in an Art Deco theme. Video from his talk is here: Zuul v3: OpenStack and Ansible Native CI/CD.

As usual, the Infrastructure team also had a series of sessions related to ongoing work. As a quick rundown, we have Etherpads for all of the sessions (read-only links provided).

Friday concluded with a Contributors Meetup for the Infrastructure team in the afternoon where folks split off into small groups to tackle a series of ongoing projects together. I was also able to spend some time with the Internationalization (i18n) team that Friday afternoon. I dragged along Clark so someone else on the team could pick up where I left off in case I have less time in the future. We talked about the pending upgrade of Zanata and plans for a translations checksite, making progress on both fronts, especially when we realized that there’s a chance we could get away with just running a development version of Horizon itself, with a more stable back end.


With the i18n team!

Finally, the book! It was the first time I was able to see Matt Fischer, my contributing author, since the book came out. Catching up with him and signing a book together was fun. Thanks to my publisher I was also thrilled to donate the signed copies I brought along to the Women of OpenStack Speed Mentoring event on Tuesday morning. I wasn’t able to attend the event, but they were given out on my behalf, thanks to Nithya Ruff for handling the giveaway.


Thanks to Nithya Ruff for taking a picture of me with my book at the Women of OpenStack area of the expo hall (source) and Brent Haley for getting the picture of Lisa-Marie and me (source).

I was also invited to sit down with Lisa-Marie Namphy to chat about the book and changes to the OpenStack Infrastructure team in the Newton cycle. The increase in capacity to over 2000 test instances this past cycle was quite the milestone, so I enjoyed talking about that. The full video is up on YouTube: OpenStack® Project Infra: Elizabeth K. Joseph shares how test capacity doubled in Newton

In all, it was an interesting summit, with a lot of change happening in the community and with partner companies. The people who make the community are still there though, and it’s always enjoyable spending time together. My next OpenStack event is coming up quickly: next week I’ll be speaking at OpenStack Days Mountain West on The OpenStack Project Continuous Integration System. I’ll also have a pile of books to give away at that event!

by pleia2 at December 02, 2016 02:58 PM

December 01, 2016

Elizabeth Krumbach

A Zoo and an Aquarium

When I was in Ohio last month for the Ohio LinuxFest I added a day onto my trip to visit the Columbus Zoo. A world-class zoo, it’s one of the few zoos in the northern states that has manatees, and its African savanna exhibit is worth visiting. I went with a couple of friends from the conference, one of whom was a local and offered to drive (thanks again Svetlana!).

We arrived mid-day, in time to see their cheetah run, where they give one of the cheetahs some exercise by having it run a quick course around what had, just moments before, been the hyena habitat. I also learned recently via ZooBorns that the Columbus Zoo is one of the zoos that pairs cheetahs with puppies from a young age. The dogs keep these big cats feeling secure with their calmness in an uncertain world; there’s an adorable article about it on the site: A Cheetah and His Dog

Much to my delight, they were also selling Cheetah-and-Dog pins after the run to raise money. Yes, please!

As I said, I really enjoyed their African savanna exhibit. It was big and sprawling and had a nice mixture of animals. The piles of lions they have were also quite the sight to behold.

Their kangaroo enclosure was open to walk through, so you could get quite close to the kangaroos, just like I did at the Perth Zoo. There was also a trio of baby tigers, along with some adorable mountain lions. And then there were the manatees. I love manatees!

I’m really glad I took the time to stay longer in Columbus, and I’d likely go again if I found myself in the area.

More photos from the zoo, including a tiger napping on his back, and those mountain lions here: https://www.flickr.com/photos/pleia2/albums/72157671610835663

Just a couple weeks later I found myself on another continent, and at the Barcelona Aquarium with my friends Julia and Summer. It was a sizable aquarium and really nicely laid out. Their selection of aquatic animals was diverse and interesting. In this aquarium I liked some of the smallest critters the most. Loved their seahorses.

And the axolotls.

There was also an octopus that was awake and wandering around the tank, much to the delight of the crowd.

They also had penguins, a great shark tube, and a tank with a moving walkway.

More photos from the Barcelona Aquarium: https://www.flickr.com/photos/pleia2/albums/72157675629122655

Barcelona also has a zoo, but in my limited time in the city I didn’t make it over there. It’s now on my very long list of things to see the next time I’m in Barcelona, and you can bet there will be a next time.

by pleia2 at December 01, 2016 03:57 AM

November 30, 2016

Eric Hammond

Amazon Polly Text To Speech With aws-cli and Twilio

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices.

Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family.

This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech.

The basic approach is:

  1. Generate the speech audio using Amazon Polly.

  2. Upload the resulting audio file to S3.

  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1- Generate Speech Audio With Amazon Polly

Here’s a simple example of how to turn text into speech, using the latest aws-cli:

text="Hello. This speech is generated using Amazon Polly. Enjoy!"
audio_file=speech.mp3

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text "$text" \
  $audio_file

You can listen to the resulting output file using your favorite audio player:

mpg123 -q $audio_file
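
Polly can also accept the SSML mentioned earlier by setting the text type. Here’s a minimal variation on the command above; the specific break and emphasis tags are my own illustration, not from the original examples:

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text-type "ssml" \
  --text '<speak>Hello. <break time="500ms"/>This is <emphasis>synthesized</emphasis> speech.</speak>' \
  ssml-speech.mp3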

2- Upload Audio to S3

Create or re-use an S3 bucket to store the audio files temporarily.

s3bucket=YOURBUCKETNAME
aws s3 mb s3://$s3bucket

Upload the generated speech audio file to the S3 bucket. I use a long, random key for a touch of security:

s3key=audio-for-twilio/$(uuid -v4 -FSIV).mp3
aws s3 cp --acl public-read $audio_file s3://$s3bucket/$s3key

For easy cleanup, you can use a bucket with a lifecycle that automatically deletes objects after a day or thirty. See instructions below for how to set this up.

3- Initiate Call With Twilio

Once you have set up an account with Twilio (see pointers below if you don’t have one yet), here are sample commands to initiate a phone call and play the Amazon Polly speech audio:

from_phone="+1..." # Your Twilio allocated phone number
to_phone="+1..."   # Your phone number to call

TWILIO_ACCOUNT_SID="..." # Your Twilio account SID
TWILIO_AUTH_TOKEN="..."  # Your Twilio auth token

speech_url="http://s3.amazonaws.com/$s3bucket/$s3key"
twimlet_url="http://twimlets.com/message?Message%5B0%5D=$speech_url"

curl -XPOST https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Calls.json \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=$from_phone" \
  --data-urlencode "To=to_phone" \
  --data-urlencode "Url=$twimlet_url"

The Twilio web service will return immediately after queuing the phone call. It could take a few seconds for the call to be initiated.

Make sure you listen to the phone call as soon as you answer, as Twilio starts playing the audio immediately.

The ringspeak Command

For your convenience (actually for mine), I’ve put together a command line program that turns all the above into a single command. For example, I can now type things like:

... || ringspeak --to +1NUMBER "Please review the cron job failure messages"

or:

ringspeak --at 6:30am \
  "Good morning!" \
  "Breakfast is being served now in Venetian Hall G.." \
  "Werners keynote is at 8:30."

Twilio credentials, default phone numbers, S3 bucket configuration, and Amazon Polly voice defaults can be stored in a $HOME/.ringspeak file.

Here is the source for the ringspeak command:

https://github.com/alestic/ringspeak

Tip: S3 Setup

Here are sample commands to configure an S3 bucket with automatic deletion of all keys after 1 day:

aws s3api put-bucket-lifecycle \
  --bucket "$s3bucket" \
  --lifecycle-configuration '{
    "Rules": [{
      "Status": "Enabled",
      "ID": "Delete all objects after 1 day",
      "Prefix": "",
      "Expiration": {
        "Days": 1
      }
    }]
  }'

This is convenient because you don’t have to worry about knowing when Twilio completes the phone call to clean up the temporary speech audio files.

Tip: Twilio Setup

This isn’t the place for an entire Twilio howto, but I will say that it is about this simple to set up:

  1. Create a Twilio account

  2. Reserve a phone number through Twilio.

  3. Find the ACCOUNT SID and AUTH TOKEN for use in Twilio API calls.

When you are using the Twilio free trial, it requires you to verify phone numbers before calling them. To call arbitrary numbers, enter your credit card and fund the minimum of $20.

Twilio will only charge you for what you use (about a dollar a month per phone number, about a penny per minute for phone calls, etc.).

Closing

A lot is possible when you start integrating Twilio with AWS. For example, my daughter developed an Alexa skill that lets her speak a message for a family member and have it delivered by phone. Alexa triggers her AWS Lambda function, which invokes the Twilio API to deliver the message by voice call.

With Amazon Polly, these types of voice applications can sound better than ever.

Original article and comments: https://alestic.com/2016/11/amazon-polly-text-to-speech/

November 30, 2016 06:30 PM

Elizabeth Krumbach

Ohio LinuxFest 2016

Last month I had the pleasure of finally attending an Ohio LinuxFest. The conference has been on my radar for years, but every year I seemed to have some kind of conflict. When my Tour of OpenStack Deployment Scenarios was accepted I was thrilled to finally be able to attend. My employer at the time also pitched in as a Bronze sponsor of the conference and sent along a banner that showcased my talk, and my OpenStack book!

The event kicked off on Friday and the first talk I attended was by Jeff Gehlbach on What’s Happening with OpenNMS. I’ve been to several OpenNMS talks over the years and played with it some, so I knew the background of the project. This talk covered several of the latest improvements. Of particular note were some of their UI improvements, including both a website refresh and some stunning improvements to the WebUI. It was also interesting to learn about Newts, the time-series data store they’ve been developing to replace RRDtool, which they struggled to scale with their tooling. Newts is decoupled from the visualization tooling, so you can hook in your own, such as Grafana.

I then went to Rob Kinyon’s Devs are from Mars, Ops are from Venus. He had some great points about communication between ops, dev and QA, starting with being aware of, and understanding, the fact that you all have different goals, which sometimes conflict. Pausing to make sure you know why different teams behave the way they do, and knowing that they aren’t just doing it to make your life difficult or because they’re incompetent, makes all the difference. He implored the audience to assume that we’re all smart, hard-working people trying to get our jobs done. He also touched upon improvements to communication, like making sure you repeat requests in your own words so misunderstandings don’t occur due to differing vocabularies. Finally, he suggested that some cross-training happen between roles. A developer may never be able to take over full time for an operator, or vice versa, but walking a mile in someone else’s shoes helps build the awareness and understanding that he stresses is important.

The afternoon keynote was given by Catherine Devlin on Hacking Bureaucracy with 18F. She works for the government in the 18F digital services agency. Their mandate is to work with other federal agencies to improve their digital content, from websites to data delivery. Modeled after a startup, she explained, they try not to over-plan the way many government organizations do, which can lead to failure; instead they want to fail fast and keep iterating. She also said their team has a focus on hiring good people and understanding the needs of the people they serve, rather than focusing on raw technical talent and the tools. Their practices center around an open-by-default philosophy (see: 18F: Open source policy), so much of their work is open source and can be adopted by other agencies. They also make sure they understand the culture of the organizations they work with so that the tools they develop together will actually be used, as well as respecting the domain knowledge of the teams they’re working with. Slides from her talk are here, and include lots of great links to agency tooling they’ve worked on: https://github.com/catherinedevlin/olf-2016-keynote


Catherine Devlin on 18F

That evening folks gathered in the expo hall to meet and eat! That’s where I caught up with my friends from Computer Reach, the non-profit I went to Ghana with back in 2012 to deploy Ubuntu-based desktops. I spent a couple of weeks there with Dave, Beth Lynn and Nancy (alas, unable to come to OLF), so it was great to see them again. I learned more about the work they’re continuing to do, having switched to using mostly Xubuntu on new installs, which was written about here. On a personal level it was a lot of fun connecting with them too; we really bonded during our adventures over there.


Tyler Lamb, Dave Sevick, Elizabeth K. Joseph, Beth Lynn Eicher

Saturday morning began with a keynote from Ethan Galstad on Becoming the Next Tech Entrepreneur. Ethan is the founder of Nagios, and in his talk he traced some of the history of his work on getting Nagios off the ground as a proper project and company, and his belief in why technologists make good founders. In his work he drew on the industry and market expertise he gained as a technologist and was able to play to the niche he was focused on. He also suggested that folks look to what other successful founders have done, and recommended some books (notably Founders at Work and Work the System). Finally, he walked through some of what can be done to get started, including the stages of idea development, a basic business plan (don’t go crazy), a rough 1.0 release that you can have some early customers test and get feedback from, and then marketing, documentation and focused product development. He concluded by stressing that open source project leaders are already entrepreneurs and the free users of your software are your initial market.

Next up was Robert Foreman’s Mixed Metaphors: Using Hiera with Foreman where he sketched out the work they’ve done that preserves usage of Hiera’s key-value store system but leverages Foreman for the actual orchestration. The mixing of provisioning and orchestration technologies is becoming more common, but I hadn’t seen this particular mashup.

My talk was A Tour of OpenStack Deployment Scenarios. This is the same talk I gave at FOSSCON back in August, walking the audience through a series of ways that OpenStack could be configured to provide compute instances, metering and two types of storage. For each I gave a live demo using DevStack. I also talked about several other popular components that could be added to a deployment. Slides from my talk are here (PDF), which also link to a text document with instructions for how to run the DevStack demos yourself.


Thanks to Vitaliy Matiyash for taking a picture during my talk! (source)

At lunch I met up with my Ubuntu friends to catch up. We later met at the booth where they had a few Ubuntu phones and tablets that gained a bunch of attention throughout the event. This event was also my first opportunity to meet Unit193 and Svetlana Belkin in person, both of whom I’ve worked with on Ubuntu for years.


Unit193, Svetlana Belkin, José Antonio Rey, Elizabeth K. Joseph and Nathan Handler

After lunch I went over to see David Griggs of Dell give us “A Look Under the Hood of Ohio Supercomputer Center’s Newest Linux Cluster.” Supercomputers are cool, and it was interesting to learn about the system it was replacing, the planning that went into the replacement and workload cut-over, and to see in-progress photos of the installation. From there I saw Ryan Saunders speak on Automating Monitoring with Puppet and Shinken. I wasn’t super familiar with the Shinken monitoring framework, so this talk was an interesting and very applicable demonstration of the benefits.

The last talk I went to before the closing keynotes was from my Computer Reach friends Dave Sevick and Tyler Lamb. They presented their “Island Server” imaging server, which is now being used to image all of the machines that they re-purpose and deploy around the world. With this new imaging server they’re able to image both Mac and Linux PCs from one MacBook Pro rather than having a different imaging server for each. They were also able to do a live demo of a Mac and a Linux PC being imaged from the same Island Server at once.


Tyler and Dave with the Island Server in action

The event concluded with a closing keynote by a father and daughter duo, Joe and Lily Born, on The Democratization of Invention. Joe Born first found fame in the 90s when he invented the SkipDoctor CD repair device, and is now the CEO of Aiwa, which produces highly rated Bluetooth speakers. Lily Born invented the tip-proof Kangaroo Cup. The pair reflected on their work and how the journey from idea to product in the hands of customers has changed in the past twenty years. While the path to selling the SkipDoctor had a very high barrier to entry, globalization, crowd-funding, 3D printers, internet-driven word of mouth and greater access to the press all played a part in the success of Lily’s Kangaroo Cup and the new Aiwa Bluetooth speakers. While I have no plans to invent anything any time soon (so much else to do!) it was inspiring to hear how the barriers have been lowered and how inventors today have a lot more options. Also, I just bought an Aiwa Exos-9 Bluetooth speaker, and it’s pretty sweet.

My conference adventures concluded with a dinner with my friends José, Nathan and David, all three of whom I also spent time with at FOSSCON in Philadelphia the month before. It was fun getting together again, and we wandered around downtown Columbus until we found a nice little pizzeria. Good times.

More photos from the Ohio LinuxFest here: https://www.flickr.com/photos/pleia2/albums/72157674988712556

by pleia2 at November 30, 2016 06:29 PM

November 29, 2016

Jono Bacon

Luma Giveaway Winner – Garrett Nay

A little while back I kicked off a competition to give away a Luma Wifi Set.

The challenge? Share a great community that you feel does wonderful work. The most interesting one, according to yours truly, gets the prize.

Well, I am delighted to share that Garrett Nay bags the prize for sharing the following in his comment:

I don’t know if this counts, since I don’t live in Seattle and can’t be a part of this community, but I’m in a new group in Salt Lake City that’s modeled after it. The group is Story Games Seattle: http://www.meetup.com/Story-Games-Seattle/. They get together on a weekly+ basis to play story games, which are like role-playing games but have a stronger emphasis on giving everyone at the table the power to shape the story (this short video gives a good introduction to what story games are all about, featuring members of the group: Story Games from Candace Fields on Vimeo).

Story games seem to scratch a creative itch that I have, but it’s usually tough to find friends who are willing to play them, so a group dedicated to them is amazing to me.

Getting started in RPGs and story games is intimidating, but this group is very welcoming to newcomers. The front page says that no experience with role-playing is required, and they insist in their FAQ that you’ll be surprised at what you’ll be able to create with these games even if you’ve never done it before. We’ve tried to take this same approach with our local group.

In addition to playing published games, they also regularly playtest games being developed by members of the group or others. As far as productivity goes, some of the best known story games have come from members of this and sister groups. A few examples I’m aware of are Microscope, Kingdom, Follow, Downfall, and Eden. I’ve personally played Microscope and can say that it is well designed and very polished, definitely a product of years of playtesting.

They’re also productive and engaging in that they keep a record on the forums of all the games they play each week, sometimes including descriptions of the stories they created and how the games went. I find this very useful because I’m always on the lookout for new story games to try out. I kind of wish I lived in Seattle and could join the story games community, but hopefully we can get our fledgling group in Salt Lake up to the standard they have set.

What struck me about this example was that it gets to the heart of what community should be and often is – providing a welcoming, supportive environment for people with like-minded ideas and interests.

While much of my work focuses on the complexities of building collaborative communities and the intricacies of how people work together, we should always remember the huge value of what I refer to as read communities, where people simply get together to have fun with each other. Garrett’s example was a perfect summary of a group doing great work here.

Thanks everyone for your suggestions, congratulations to Garrett for winning the prize, and thanks to Luma for providing the prize. Garrett, your Luma will be in the mail soon!

The post Luma Giveaway Winner – Garrett Nay appeared first on Jono Bacon.

by Jono Bacon at November 29, 2016 12:08 AM

November 23, 2016

Elizabeth Krumbach

Holiday cards 2016!

Every year I send out a big batch of winter-themed holiday cards to friends and acquaintances online.

Reading this? That means you! Even if you’re outside the United States!

Send me an email at lyz@princessleia.com with your postal mailing address. Please put “Holiday Card” in the subject so I can filter it appropriately. Please do this even if I’ve sent you a card in the past; I won’t be reusing the list from last year.

Typical disclaimer: My husband is Jewish and we celebrate Hanukkah, but the cards are non-religious, with some variation of “Happy Holidays” or “Season’s Greetings” on them.

by pleia2 at November 23, 2016 07:06 PM

Jono Bacon

Microsoft and Open Source: A New Era?

Last week the Linux Foundation announced Microsoft becoming a Platinum member.

In the eyes of some, hell finally froze over. For many though, myself included, this was not an entirely surprising move. Microsoft are becoming an increasingly active member of the open source community, and they deserve credit for this continual stream of improvements.

When I first discovered open source in 1998, the big M were painted as a bit of a villain. This accusation was largely fair. The company went to great lengths to discredit open source, including comparing Linux to a cancer, pursuing patent litigation, and running campaigns of misinformation and FUD. This rightly left a rather sour taste in the mouths of open source supporters.

The remnants of that sour taste are still strong in some. These folks will likely never trust the Redmond mammoth, their decisions, or their intent. While I am not condoning these prior actions from the company, I would argue that the steady stream of forward progress means that…and I know this will be a tough pill to swallow for some of you…it is time to forgive and forget.

Forward Progress

This forward progress is impressive. They released their version of FreeBSD for Azure. They partnered with Canonical to bring the Ubuntu user-space to Windows (as well as supporting Debian on Azure and even building their own Linux distribution, the Azure Cloud Switch). They supported an open source version of .NET, known as Mono, later buying Xamarin, who led this development, and open sourcing those components. They brought .NET Core to Linux, started their own Linux certification, released a litany of projects (including Visual Studio Code) as open source, founded the Microsoft Open Technologies group, and later merged the group back into the wider organization as openness became a core part of the company.

Satya Nadella, seemingly doing a puppet show, without the puppet.

My personal experience with them has reflected this trend. I first got to know the company back in 2001 when I spoke at a DeveloperDeveloperDeveloper day in the UK. Over the years I flew out to Redmond to provide input on initiatives such as .NET, got to know the Microsoft Open Technologies group, and most recently signed the company as a client where I am helping them to build the next generation of their MVP and RD community. Microsoft are not begrudgingly supporting open source, they are actively pursuing it.

As such, this recent announcement from the Linux Foundation wasn’t a huge surprise to me, but was an impressive formal articulation of Microsoft’s commitment to open source. Leaders at Microsoft and the Linux Foundation should be both credited with this additional important step in the right direction, not just for Microsoft, but for the wider acceptance and growth of open source and collaboration.

Work In Progress

Now, some of the critics will be reading this and will cite many examples of Microsoft still acting as the big bad wolf. You are perfectly right to do so. So, let me zone in on this.

I am not suggesting they are perfect. They aren’t. Companies are merely vessels of people, some of whom will continue to have antiquated perspectives. Microsoft will be no different here. Of course, for all the great steps forward, sometimes there will be some steps back.

What I try to focus on however is the larger story and trends. I would argue that Microsoft is trending in the right direction based on many of their recent moves, including the ones I cited above.

Let’s not forget that this is a big company to turn around. With 114,000 employees and 40+ years of cultural heritage and norms, change understandably takes time. The challenge with change is that it doesn’t just require strategic, product, and leadership focus, but the real challenge is cultural change.

Culture at Microsoft seems to be changing.

Culture is something of an amorphous concept. It isn’t a specific thing you can point to. Culture is instead the aggregate of the actions and intent of the many. You can often make strategic changes that result in new products, services, and projects, but those achievements could be underpinned by a broken, divisive, and ugly culture.

As such, culture is hard to change and requires a mix of top-down leadership and bottom-up action.

From my experience of working with Microsoft, the move to a more open company is not merely based on product-based decisions, but it has percolated in the core culture of how the company operates. I have seen this in my day to day interactions with the company and with my consulting work there. I credit Satya Nadella (and likely many others) for helping to trigger these positive forward motions.

So, are they perfect? No. Are they an entirely different company? No. But are they making a concerted thoughtful effort to really understand and integrate openness into the company? I think so. Is the work complete? No. But do they deserve our support as fellow friends in the open source community? I believe so, yes.

The post Microsoft and Open Source: A New Era? appeared first on Jono Bacon.

by Jono Bacon at November 23, 2016 04:00 PM

November 22, 2016

Jono Bacon

2017 Community Leadership Events: An Update

This week I was delighted to see that we could take the wraps off a new event that I am running in conjunction with my friends at the Linux Foundation, called the Community Leadership Conference. The event will be part of the Open Source Summit, previously known as LinuxCon, and I will be running it in Los Angeles from 11th – 13th Sep 2017 and in Prague from 23rd – 25th Oct 2017.

Now, some of you may be wondering if this replaces or is different to the Community Leadership Summit in Portland/Austin. Let me add some clarity.

The Community Leadership Summit

The Community Leadership Summit takes place each year on the weekend before OSCON. I can confirm that there will be another Community Leadership Summit in 2017 in Austin. We plan to formally announce this soon.

The Community Leadership Summit has the primary goal of bringing together community managers from around the world to discuss and debate community leadership principles. The event is an unconference and is focused on discussions as opposed to formal presentations. As such, and as with any unconference, the thrill of the event is the organic schedule and the conversations that follow. Thus, CLS is a great event for those who are interested in playing an active role in furthering the art and science of community leadership more broadly in an organic way.

The Community Leadership Conference

The Community Leadership Conference, which will be part of the Open Source Summit in Los Angeles and Prague, has a slightly different format and focus.

CLC will instead be a traditional conference. My goal here is to bring together speakers from around the world to deliver presentations, panels, and other material that shares best practices, methods, and approaches in community leadership, specific to open source. CLC is not intended to shape the future of community leadership, but more to present best practices and principles for consumption, tailored to the needs of open source projects and organizations.

In Summary

So, in summary, the Community Leadership Conference is designed to be a place to consume community leadership best practices and principles via carefully curated presentations, panels, and networking. The Community Leadership Summit is designed to be more of an informal, roll-your-sleeves-up summit where attendees discuss and debate community leadership to help shape how it evolves and grows.

As regular readers will know, I am passionate about evolving the art and science of community leadership and while CLS has been an important component in this evolution, I felt we needed to augment it with CLC. These two events, combined with the respective audiences of their shared conferences, and bolstered by my wonderful friends at O’Reilly and the Linux Foundation, are going to help us to evolve this art and science faster and more efficiently than ever.

I hope to see you all at either or both of these events!

The post 2017 Community Leadership Events: An Update appeared first on Jono Bacon.

by Jono Bacon at November 22, 2016 06:12 AM

November 21, 2016

Eric Hammond

Watching AWS CloudFormation Stack Status

live display of current event status for each stack resource

Would you like to be able to watch the progress of your new CloudFormation stack resources as they are created?

That’s what the output of the new aws-cloudformation-stack-status command looks like when I launch a new AWS Git-backed Static Website CloudFormation stack.

It shows me in real time which resources have completed, which are still in progress, and which, if any, have experienced problems.

Background

AWS provides a few ways to look at the status of resources in a CloudFormation stack, including the stream of stack events in the Web console and in the aws-cli.

Unfortunately, these displays show multiple events for each resource (e.g., CREATE_IN_PROGRESS, CREATE_COMPLETE) and it’s difficult to match up all of the resource events by hand to figure out which resources are incomplete and still in progress.

Solution

I created a bit of wrapper code that goes around the aws cloudformation describe-stack-events command. It performs these operations:

  1. Cuts the output down to the few fields that matter: status, resource name, type, event time.

  2. Removes all but the most recent status event for each stack resource.

  3. Sorts the output to put the resources with the most recent status changes at the top.

  4. Repeatedly runs this command so that you can see the stack progress live and know exactly which resource is taking the longest.
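
The core of that logic can be sketched in a few lines of shell. This is a rough approximation of the approach, not the actual script; the field order, refresh interval, and dedup trick are my own choices:

while true; do
  clear
  aws cloudformation describe-stack-events \
    --region "$region" --stack-name "$stack" --output text \
    --query 'StackEvents[].[Timestamp,ResourceStatus,ResourceType,LogicalResourceId]' |
    sort -r |           # newest status changes first
    awk '!seen[$NF]++'  # keep only the most recent event per resource
  sleep 5
done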

I tossed the simple script up here in case you’d like to try it out:

GitHub: aws-cloudformation-stack-status

You can run it to monitor your CloudFormation stack with this command:

aws-cloudformation-stack-status --watch --region $region --stack-name $stack

Interrupt with Ctrl-C to exit.

Note: You will probably need to start your terminal out wider than 80 columns for a clean presentation.

Note: This does use the aws-cli, so installing and configuring that is a prerequisite.

Stack Delete Example

Here’s another example terminal session watching a stack-delete operation, including some skipped deletions (because of a retention policy). It finally ends with a “stack not found” error, which is exactly what we hope for after a stack has been deleted successfully. Again, the resources with the most recent state change events are at the top.

Note: These sample terminal replays cut out almost 40 minutes of waiting for the creation and deletion of the CloudFront distributions. You can see the real timestamps in the rightmost columns.

Original article and comments: https://alestic.com/2016/11/aws-cloudformation-stack-status/

November 21, 2016 09:00 AM

November 14, 2016

Eric Hammond

Optional Parameters For Pre-existing Resources in AWS CloudFormation Templates

stack creates new AWS resources unless user specifies pre-existing

Background

I like to design CloudFormation templates that create all of the resources necessary to implement the desired functionality without requiring a lot of separate, advanced setup. For example, the AWS Git-backed Static Website creates all of the interesting pieces including a CodeCommit Git repository, S3 buckets for web site content and logging, and even the Route 53 hosted zone.

Creating all of these resources is great if you were starting from scratch on a new project. However, you may sometimes want to use a CloudFormation template to enhance an existing account where one or more of the AWS resources already exist.

For example, consider the case where the user already has a CodeCommit Git repository and a Route 53 hosted zone for their domain. They still want all of the enhanced functionality provided in the Git-backed static website CloudFormation stack, but would rather not have to fork and edit the template code just to fit it in to the existing environment.

What if we could use the same CloudFormation template for different types of situations, sometimes plugging in pre-existing AWS resources, and other times letting the stack create the resources for us?

Solution

With assistance from Ryan Scott Brown, the Git-backed static website CloudFormation template now allows the user to optionally specify a number of pre-existing resources to be integrated into the new stack. If any of those parameters are left empty, then the CloudFormation template automatically creates the required resources.

Let’s walk through the relevant pieces of the CloudFormation template code using the CodeCommit Git repository as an example of an optional resource. [Note: Code excerpts below may have been abbreviated and slightly altered for article clarity.]

In the CloudFormation template Parameters section, we allow the user to pass in the name of a CodeCommit Git repository that was previously created in the AWS account. If this parameter is specified, then the CloudFormation template uses the pre-existing repository in the new stack. If the parameter is left empty when the template is run, then the CloudFormation stack will create a new CodeCommit Git repository.

Parameters:
  PreExistingGitRepository:
    Description: "Optional Git repository name for pre-existing CodeCommit repository. Leave empty to have CodeCommit Repository created and managed by this stack."
    Type: String
    Default: ""

We add an entry to the Conditions section in the CloudFormation template that will indicate whether or not a pre-existing CodeCommit Git repository name was provided. If the parameter is empty, then we will need to create a new repository.

Conditions:
  NeedsNewGitRepository: !Equals [!Ref PreExistingGitRepository, ""]

In the Resources section, we create a new CodeCommit Git repository, but only on the condition that we need a new one (i.e., the user did not specify one in the parameters). If a pre-existing CodeCommit Git repository name was specified in the stack parameters, then this resource creation will be skipped entirely.

Resources:
  GitRepository:
    Condition: NeedsNewGitRepository
    Type: "AWS::CodeCommit::Repository"
    Properties:
      RepositoryName: !Ref GitRepositoryName
    DeletionPolicy: Retain

We then come to parts of the CloudFormation template where other resources need to refer to the CodeCommit Git repository. We need to use an If conditional to refer to the correct resource, since it might be a pre-existing one passed in a parameter or it might be one created in this stack.

Here’s an example where the CodePipeline resource needs to specify the Git repository name as the source of a pipeline stage.

Resources:
  CodePipeline:
    Type: "AWS::CodePipeline::Pipeline"
    [...]
      RepositoryName: !If [NeedsNewGitRepository, !Ref GitRepositoryName, !Ref PreExistingGitRepository]

We use the same conditional to place the name of the Git repository in the CloudFormation stack outputs so that the user can easily find out what repository is being used by the stack.

Outputs:
  GitRepositoryName:
    Description: Git repository name
    Value: !If [NeedsNewGitRepository, !Ref GitRepositoryName, !Ref PreExistingGitRepository]

We also want to show the URL for cloning the repository. If we created the repository in the stack, this is an easy attribute to query. If a pre-existing repository name was passed in, we can’t determine the correct URL; so we just output that it is not available and hope the user remembers how to access the repository they created in the past.

Outputs:
  GitCloneUrlHttp:
    Description: Git https clone endpoint
    Value: !If [NeedsNewGitRepository, !GetAtt GitRepository.CloneUrlHttp, "N/A"]

Read more from Amazon about the AWS CloudFormation Conditions that are used in this template.

Replacing a Stack Without Losing Important Resources

You may have noticed in the above code that we specify a DeletionPolicy of Retain for the CodeCommit Git repository. This keeps the repository from being deleted if and when the CloudFormation stack is deleted.

This prevents the accidental loss of what may be the master copy of the website source. It may still be deleted manually if you no longer need it after deleting the stack.

A number of resources in the Git-backed static website stack are retained, including the Route53 hosted zone, various S3 buckets, and the CodeCommit Git repository. Not coincidentally, all of these retained resources can be subsequently passed back into a new stack as pre-existing resources!

Though CloudFormation stacks can often be updated in place, sometimes I like to replace them with completely different templates. It is convenient to leave foundational components in place while deleting and replacing the other stack resources that connect them.

Original article and comments: https://alestic.com/2016/11/aws-cloudformation-optional-resources/

November 14, 2016 11:00 AM

November 13, 2016

Eric Hammond

Alestic.com Blog Infrastructure Upgrade

publishing new blog posts with “git push”

For the curious, the Alestic.com blog has been running for a while on the Git-backed Static Website CloudFormation stack, using the AWS Lambda Static Site Generator Plugin for Hugo.

Not much has changed in the design because I had been using Hugo before. However, Hugo is now automatically run inside of an AWS Lambda function triggered by updates to a CodeCommit Git repository.

It has been a pleasure writing with transparent review and publication processes enabled by Hugo and AWS:

  • When I save a blog post change in my editor (written using Markdown), a local Hugo process on my laptop automatically detects the file change, regenerates static pages, and refreshes the view in my browser.

  • When I commit and push blog post changes to my CodeCommit Git repository, the Git-backed Static Website stack automatically regenerates the static blog site using Hugo and deploys to the live website served by AWS.

Blog posts I don’t want to go public yet can be marked as “draft” using Hugo’s content metadata format.
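
For example, the YAML front matter at the top of a post’s Markdown file might look like this, with the draft flag being the relevant piece (the title and date here are just illustrative):

---
title: "Alestic.com Blog Infrastructure Upgrade"
date: 2016-11-13
draft: true
---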

Bigger site changes can be developed and reviewed in a Git feature branch and merged to “master” when completed, automatically triggering publication.

I love it when technology gets out of my way and lets me focus on being productive.

Original article and comments: https://alestic.com/2016/11/alestic-blog-stack/

November 13, 2016 03:10 AM

November 07, 2016

Eric Hammond

Running aws-cli Commands Inside An AWS Lambda Function

even though aws-cli is not available by default in AWS Lambda

The AWS Lambda environments for each programming language (e.g., Python, Node, Java) already have the AWS client SDK packages pre-installed for those languages. For example, the Python AWS Lambda environment has boto3 available, which is ideal for connecting to and using AWS services in your function.

This makes it easy to use AWS Lambda as the glue for AWS. A function can be triggered by many different service events, and can respond by reading from, storing to, and triggering other services in the AWS ecosystem.

However, there are a few things that aws-cli currently does better than the AWS SDKs alone. For example, the following command is an efficient way to take the files in a local directory and recursively update a website bucket, uploading (in parallel) files that have changed, while setting important object attributes, including guessed MIME types:

aws s3 sync --delete --acl public-read LOCALDIR/ s3://BUCKET/

The aws-cli software is not currently pre-installed in the AWS Lambda environment, but we can fix that with a little effort.

Background

The key to solving this is to remember that aws-cli is available as a Python package. Mitch Garnaat reminded me of this when I was lamenting the lack of aws-cli in AWS Lambda, causing me to smack my virtual forehead. Amazon has already taught us how to install most Python packages, and we can apply the same process for aws-cli, though a little extra work is required, because a command line program is involved.

NodeJS/Java/Go developers: Don’t stop reading! We are using Python to install aws-cli, true, but this is a command line program. Once the command is installed in the AWS Lambda environment, you can invoke it using whatever facility your language provides for running system commands.

Steps

Here are the steps I followed to add aws-cli to my AWS Lambda function. Adjust to suit your particular preferred way of building AWS Lambda functions.

Create a temporary directory to work in, including paths for a temporary virtualenv, and an output ZIP file:

tmpdir=$(mktemp -d /tmp/lambda-XXXXXX)
virtualenv=$tmpdir/virtual-env
zipfile=$tmpdir/lambda.zip

Create the virtualenv and install the aws-cli Python package into it using a subshell:

(
  virtualenv $virtualenv
  source $virtualenv/bin/activate
  pip install awscli
)

Copy the aws command file into the ZIP file, but adjust the first (shebang) line so that it will run with the system python command in the AWS Lambda environment, instead of assuming python is in the virtualenv on our local system. This is the valuable nugget of information buried deep in this article!

rsync -va $virtualenv/bin/aws $tmpdir/aws
perl -pi -e '$_ = "#!/usr/bin/python\n" if $. == 1' $tmpdir/aws
(cd $tmpdir; zip -r9 $zipfile aws)

Copy the Python packages required for aws-cli into the ZIP file:

(cd $virtualenv/lib/python2.7/site-packages; zip -r9 $zipfile .)

Copy in your AWS Lambda function, other packages, configuration, and other files needed by the function code. These don’t need to be in Python.

cd YOURLAMBDADIR
zip -r9 $zipfile YOURFILES

Upload the ZIP file to S3 (or directly to AWS Lambda) and clean up:

aws s3 cp $zipfile s3://YOURBUCKET/YOURKEY.zip
rm -r $tmpdir

In your Lambda function, you can invoke aws-cli commands. For example, in Python, you might use:

import subprocess

# source_dir and to_bucket are assumed to be defined earlier in your function.
command = ["./aws", "s3", "sync", "--acl", "public-read", "--delete",
           source_dir + "/", "s3://" + to_bucket + "/"]
print(subprocess.check_output(command, stderr=subprocess.STDOUT))

Note that you will need to specify the location of the aws command with a leading "./", or you could add /var/task (the function’s working directory) to the $PATH environment variable.
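
Putting the pieces together, a minimal AWS Lambda handler might look like the following. This is a hypothetical sketch, not code from the stack: the event keys are illustrative assumptions, and it demonstrates the $PATH alternative mentioned above.

import os
import subprocess

# Make the bundled aws command visible without the "./" prefix.
# /var/task is where AWS Lambda unpacks the function's ZIP file.
os.environ["PATH"] = "/var/task:" + os.environ["PATH"]

def lambda_handler(event, context):
    """Hypothetical sketch: sync a directory to a website bucket using
       the aws-cli binary bundled into this function's ZIP file."""
    source_dir = event["source_dir"]   # illustrative event key
    to_bucket = event["to_bucket"]     # illustrative event key
    command = ["aws", "s3", "sync", "--acl", "public-read", "--delete",
               source_dir + "/", "s3://" + to_bucket + "/"]
    # check_output raises CalledProcessError on a non-zero exit status,
    # which AWS Lambda reports as a function error.
    print(subprocess.check_output(command, stderr=subprocess.STDOUT))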

Example

This approach is used to add the aws-cli command to the AWS Lambda function used by the AWS Git-backed Static Website CloudFormation stack.

You can see the code that builds the AWS Lambda function ZIP file here, including the installation of the aws-cli command:

https://github.com/alestic/aws-git-backed-static-website/blob/master/build-upload-aws-lambda-function

Notes

  • I would still love to see aws-cli pre-installed on all the AWS Lambda environments. This simple change would remove quite a bit of setup complexity and would even let me drop my AWS Lambda function inline in the CloudFormation template. Eliminating the external dependency and having everything in one file would be huge!

  • I had success building awscli on Ubuntu for use in AWS Lambda, probably because all of the package requirements are pure Python. This approach does not always work for other packages: it is generally recommended that you build packages on Amazon Linux so that they are compatible with the AWS Lambda environment.

  • The pip install -t DIR approach did not work for aws-cli when I tried it, which is why I went with virtualenv. Tips welcomed.

  • I am not an expert at virtualenv or Python, but I am persistent when I want to figure out how to get things to work. The above approach worked. I welcome improvements and suggestions from the experts.

Original article and comments: https://alestic.com/2016/11/aws-lambda-awscli/

November 07, 2016 03:00 PM

November 02, 2016

Jono Bacon

Luma Wifi Review and Giveaway

For some reason, wifi has always been the bane of my technological existence. Every house, every router, every cable provider…I have always suffered from bad wifi. I have tried to fix it and in most cases failed.

As such, I was rather excited when I discovered the Luma a little while ago. Put simply, the Luma is a wifi access point, but it comes in multiple units that act as repeaters around your home. The promise of Luma is that this makes it easier to bathe your home in fast and efficient wifi, and it comes with other perks such as enhanced security, access controls and more.

So, I pre-ordered one and it arrived recently.

I rather like the Luma so I figured I would write up some thoughts. Stay tuned though, because I am also going to give one away to a lucky reader.

Setup

When it arrived I set it up and followed the configuration process. This was about as simple as you can imagine. The set came with three of these:

[Luma unit on the kitchen counter]

I plugged in each one in turn, and the Android app detected and configured each of them. It even recommended where in the house I should put them.

So, I plonked the different Lumas around my house and I was getting pretty respectable speeds.

Usage

Of course, the very best wifi routers blend into the background and don’t require any attention. After a few weeks of use, this has been the case with the Luma. They just sit there working and we have had great wifi across the house.

There are though some interesting features in the app that are handy to have.

Firstly, device management is simple. You can view, remove, and block Internet access for different devices, and even group devices by person. So, for example, if your neighbors use your Internet from time to time, you can group their devices and switch them off when you need precious bandwidth.

Viewing these devices from an app and not an archaic admin panel also makes auditing devices simple. For example, I saw two weird-looking devices on our network and after some research they turned out to be Kindles. Thanks, Amazon, for not having descriptive identifiers for the devices, by the way. 😉

Another neat feature is content filtering. If you have kids and don’t want them to see some naughty content online, you can filter by device or across the whole network. You could also switch off their access when dinner is ready.

So, overall, I am pretty happy with the Luma. Great hardware, simple setup, and reliable execution.

Win a Luma

I want to say a huge thank-you to the kind folks at Luma, because they provided me with an additional Luma to give away here!

Participating is simple. As you know, my true passion in life is building powerful, connected, and productive communities. So, unsurprisingly, I have a question that relates to community:

What is the most interesting, productive, and engaging community you have ever seen?

To participate simply share your answer as a comment on this post. Be sure to tell me which community you are nominating, share pertinent links, and tell me why that community is doing great work. These don’t have to be tech communities – they can be anything: craft, arts, sports, charities, or anything else. I want you to sell me on why the community is interesting and does great work.

Please note: if you include a lot of links, or haven’t posted here before, sometimes comments get stuck in the moderation queue. Rest assured though, I am regularly reviewing the queue and your comment will appear – please don’t submit multiple comments that are the same!

The deadline for submissions is 12pm Pacific time on Fri 18th Nov 2016.

I will then pick my favorite answer and announce the winner. My decision is final and based on what I consider to be the most interesting submission, so no complaining, people. Thanks again to Luma for the kind provision of the prize!

The post Luma Wifi Review and Giveaway appeared first on Jono Bacon.

by Jono Bacon at November 02, 2016 03:25 PM

October 31, 2016

Eric Hammond

AWS Lambda Static Site Generator Plugins

starting with Hugo!

A week ago, I presented a CloudFormation template for an AWS Git-backed Static Website stack. If you are not familiar with it, please click through and review the features of this complete Git + static website CloudFormation stack.

This weekend, I extended the stack to support a plugin architecture to run the static site generator of your choosing against your CodeCommit Git repository content. You specify the AWS Lambda function at stack launch time using CloudFormation parameters (ZIP location in S3).

The first serious static site generator plugin is for Hugo, but others can be added with or without my involvement and used with the same unmodified CloudFormation template.

The Git-backed static website stack automatically invokes the static site generator whenever the site source is updated in the CodeCommit Git repository. It then syncs the generated static website content to the S3 bucket where the stack serves it over a CDN using https with DNS served by Route 53.

I have written three AWS Lambda static site generator plugins to demonstrate the concept and to serve as templates for new plugins:

  1. Identity transformation plugin - This copies the entire Git repository content to the static website with no modifications. This is currently the default plugin for the static website CloudFormation template.

  2. Subdirectory plugin - This plugin is useful if your Git repository has files that should not be included as part of the static site. It publishes a specified subdirectory (e.g., “htdocs” or “public-html”) as the static website, keeping the rest of your repository private.

  3. Hugo plugin - This plugin runs the popular Hugo static site generator. The Git repository should include all source templates, content, theme, and config.

You are welcome to use any of these plugins when running an AWS Git-backed Static Website stack. The documentation in each of the above plugin repositories describes how to set the CloudFormation template parameters on stack create.

You may also write your own AWS Lambda function static site generator plugin using one of the above as a starting point. Let me know if you write plugins; I may add new ones to the list above.

The sample AWS Lambda handler plugin code takes care of downloading the source and uploading the resulting site, and can be copied as is. All you have to do is fill in the “generate_static_site” code to generate the site from the source.

The plugin code for Hugo is basically this:

import shlex
import subprocess

def generate_static_site(source_dir, site_dir, user_parameters):
    # Run the hugo binary bundled in the function's ZIP file.
    command = ["./hugo", "--source=" + source_dir, "--destination=" + site_dir]
    if user_parameters.startswith("-"):
        command.extend(shlex.split(user_parameters))
    print(subprocess.check_output(command, stderr=subprocess.STDOUT))
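
For comparison, the heart of a subdirectory-style plugin could be little more than a recursive copy. This is a hypothetical sketch rather than the actual plugin source; the “htdocs” default is an assumption for illustration:

import os
import shutil

def generate_static_site(source_dir, site_dir, user_parameters):
    """Hypothetical sketch: publish only a subdirectory of the repository."""
    subdirectory = user_parameters or "htdocs"   # assumed default
    top = os.path.join(source_dir, subdirectory)
    # site_dir may already exist as an empty directory, so copy into it.
    for entry in os.listdir(top):
        src = os.path.join(top, entry)
        dst = os.path.join(site_dir, entry)
        if os.path.isdir(src):
            shutil.copytree(src, dst)
        else:
            shutil.copy2(src, dst)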

I have provided build scripts so that you can build the sample AWS Lambda functions yourself, because you shouldn’t trust other people’s black-box code if you can help it. That said, I have also made it easy to use pre-built AWS Lambda function ZIP files to try this out.

These CloudFormation template and AWS Lambda functions are very new and somewhat experimental. Please let me know where you run into issues using them and I’ll update documentation. I also welcome pull requests, especially if you work with me in advance to make sure the proposed changes fit the vision for this stack.

Original article and comments: https://alestic.com/2016/10/aws-static-site-generator-plugins/

October 31, 2016 09:41 AM

October 24, 2016

Eric Hammond

AWS Git-backed Static Website

with automatic updates on changes in CodeCommit Git repository

A number of CloudFormation templates have been published that generate AWS infrastructure to support a static website. I’ll toss another one into the ring with a feature I haven’t seen yet.

In this stack, changes to the CodeCommit Git repository automatically trigger an update to the content served by the static website. This automatic update is performed using CodePipeline and AWS Lambda.

This stack also includes features like HTTPS (with a free certificate), www redirect, email notification of Git updates, complete DNS support, web site access logs, infinite scaling, zero maintenance, and low cost.

One of the most exciting features is the launch-time ability to specify an AWS Lambda function plugin (ZIP file) that defines a static site generator to run on the Git repository site source before deploying to the static website. A sample plugin is provided for the popular Hugo static site generator.

Here is an architecture diagram outlining the various AWS services used in this stack. The arrows indicate the major direction of data flow. The heavy arrows indicate the flow of website content.

CloudFormation stack architecture diagram

Sure, this does look a bit complicated for something as simple as a static web site. But remember, this is all set up for you with a simple aws-cli command (or AWS Web Console button push) and there is nothing you need to maintain except the web site content in a Git repository. All of the AWS components are managed, scaled, replicated, protected, monitored, and repaired by Amazon.

The input to the CloudFormation stack includes:

  • Domain name for the static website

  • Email address to be notified of Git repository changes

The output of the CloudFormation stack includes:

  • DNS nameservers for you to set in your domain registrar

  • Git repository endpoint URL

Though I created this primarily as a proof of concept and demonstration of some nice CloudFormation and AWS service features, this stack is suitable for use in a production environment if its features match your requirements.

Speaking of which, no CloudFormation template meets everybody’s needs. For example, this one conveniently provides complete DNS nameservers for your domain. However, that also means that it assumes you only want a static website for your domain name and nothing else. If you need email or other services associated with the domain, you will need to modify the CloudFormation template, or use another approach.

How to run

To fire up an AWS Git-backed Static Website CloudFormation stack, you can click this button and fill out a couple input fields in the AWS console:

Launch CloudFormation stack

I have provided copy+paste aws-cli commands in the GitHub repository, which also provides all of the source for this stack, including the AWS Lambda function that syncs Git repository content to the website S3 bucket:

AWS Git-backed Static Website GitHub repo

If you have aws-cli set up, you might find it easier to use the provided commands than the AWS web console.

When the stack starts up, two email messages will be sent to the address associated with your domain’s registration and one will be sent to your AWS account address. Open each email and approve these:

  • ACM Certificate (2)
  • SNS topic subscription

The CloudFormation stack will be stuck until the ACM certificates are approved. The CloudFront distributions are created afterwards and can take over 30 minutes to complete.

Once the stack completes, get the nameservers for the Route 53 hosted zone, and set these in your domain’s registrar. Get the CodeCommit endpoint URL and use this to clone the Git repository. There are convenient aws-cli commands to perform these functions in the project’s GitHub repository linked above.
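
If you prefer an SDK over the aws-cli, a small Boto3 sketch can pull the same information from the stack outputs. This is my illustration, not code from the repository; the stack name is a placeholder, and the exact output keys are defined by the template:

import boto3

cloudformation = boto3.client("cloudformation")
stack = cloudformation.describe_stacks(
    StackName="my-static-website")["Stacks"][0]   # placeholder stack name

# Print every stack output; the Route 53 nameservers and the CodeCommit
# repository endpoint URL are among them.
for output in stack.get("Outputs", []):
    print(output["OutputKey"], "=", output["OutputValue"])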

AWS Services

The stack uses a number of AWS services including:

  • CloudFormation - Infrastructure management.

  • CodeCommit - Git repository.

  • CodePipeline - Passes Git repository content to AWS Lambda when modified.

  • AWS Lambda - Syncs Git repository content to S3 bucket for website.

  • S3 buckets - Website content, www redirect, access logs, CodePipeline artifacts.

  • CloudFront - CDN, HTTPS management.

  • Certificate Manager - Creation of free certificate for HTTPS.

  • CloudWatch - AWS Lambda log output, metrics.

  • SNS - Git repository activity notification.

  • Route 53 - DNS for website.

  • IAM - Manage resource security and permissions.

Cost

As far as I can tell, this CloudFormation stack currently costs around $0.51 per month in a new AWS account with nothing else running, assuming a reasonable amount of storage for the web site content and up to 5 Git users. This minimal cost is due to there being no free tier for Route 53 at the moment.

If you have too many GB of content, too many tens of thousands of requests, etc., you may start to see additional pennies being added to your costs.

If you stop and start the stack, it will cost an additional $1 each time because of the odd CodePipeline pricing structure. See the AWS pricing guides for complete details, and monitor your account spending closely.

Notes

  • This CloudFormation stack will only work in regions that have all of the required services and features available. The only one I’m sure about is us-east-1. Let me know if you get it to work elsewhere.

  • This CloudFormation stack uses an AWS Lambda function that is installed from the run.alestic.com S3 bucket provided by Eric Hammond. You are welcome to use the provided script to build your own AWS Lambda function ZIP file, upload it to S3, and specify the location in the launch parameters.

  • Git changes are not reflected immediately on the website. It takes a minute for CodePipeline to notice the change; a minute to get the latest Git branch content, ZIP it, and upload it to S3; and a minute for the AWS Lambda function to download, unzip, and sync the content to the S3 bucket. Then the CloudFront CDN TTL may prevent the changes from being seen for another minute. Or so.

Thanks

Thanks to Mitch Garnaat for pointing me in the right direction for getting the aws-cli into an AWS Lambda function. This was important because “aws s3 sync” is much smarter than the other currently available options for syncing website content with S3.

Thanks to AWS Community Hero Onur Salk for pointing me in the direction of CodePipeline for triggering AWS Lambda functions off of CodeCommit changes.

Thanks to Ryan Brown for already submitting a pull request with lots of nice cleanup of the CloudFormation template, teaching me a few things in the process.

Some other resources you might find useful:

Creating a Static Website Using a Custom Domain - Amazon Web Services

S3 Static Website with CloudFront and Route 53 - AWS Sysadmin

Continuous Delivery with AWS CodePipeline - Onur Salk

Automate CodeCommit and CodePipeline in AWS CloudFormation - Stelligent

Running AWS Lambda Functions in AWS CodePipeline using CloudFormation - Stelligent

You are welcome to use, copy, and fork this repository. I would recommend contacting me before spending time on pull requests, as I have specific limited goals for this stack and don’t plan to extend its features much more.

[Update 2016-10-28: Added Notes section.]

[Update 2016-11-01: Added note about static site generation and Hugo plugin.]

Original article and comments: https://alestic.com/2016/10/aws-git-backed-static-website/

October 24, 2016 10:00 AM

October 23, 2016

Akkana Peck

Los Alamos Artists Studio Tour

[JunkDNA Art at the LA Studio Tour] The Los Alamos Artists Studio Tour was last weekend. It was a fun and somewhat successful day.

I was borrowing space in the studio of the fabulous scratchboard artist Heather Ward, because we didn't have enough White Rock artists signed up for the tour.

Traffic was sporadic: we'd have long periods when nobody came by (I was glad I'd brought my laptop, and managed to get some useful development done on track management in pytopo), punctuated by bursts where three or four groups would show up all at once.

It was fun talking to the people who came by. They all had questions about both my metalwork and Heather's scratchboard, and we had a lot of good conversations. Not many of them were actually buying -- I heard the same thing afterward from most of the other artists on the tour, so it wasn't just us. But I still sold enough that I more than made back the cost of the tour. (I hadn't realized, prior to this, that artists have to pay to be in shows and tours like this, so there's a lot of incentive to sell enough at least to break even.) Of course, I'm nowhere near covering the cost of materials and equipment. Maybe some day ...

[JunkDNA Art at the LA Studio Tour]

I figured snacks are always appreciated, so I set out my pelican snack bowl -- one of my first art pieces -- with brownies and cookies in it, next to the business cards.

It was funny how wrong I was in predicting what people would like. I thought everyone would want the roadrunners and dragonflies; in practice, scorpions were much more popular, along with a sea serpent that had been sitting on my garage shelf for a month while I tried to figure out how to finish it. (I do like how it eventually came out, though.)

And then after selling both my scorpions on Saturday, I rushed to make two more on Saturday night and Sunday morning, and of course no one on Sunday had the slightest interest in scorpions. Dave, who used to have a foot in the art world, tells me this is typical, and that artists should never make what they think the market will like; just go on making what you like yourself, and hope it works out.

Which, fortunately, is mostly what I do at this stage, since I'm mostly puttering around for fun and learning.

Anyway, it was a good learning experience, though I was a little stressed getting ready for it and I'm glad it's over. Next up: a big spider for the front yard, before Halloween.

October 23, 2016 02:17 AM

October 22, 2016

Elizabeth Krumbach

Simcoe’s October Checkup

On October 13th MJ and I took Simcoe in to the vet for her quarterly checkup. The last time she had been in was back in June.

As usual, she wasn’t thrilled about this vet visit plan.

This time her allergies were flaring up and we were preparing to increase her dosage of Atopica to fight back on some of the areas she was scratching and breaking out. The poor thing continues to suffer from constipation, so we’re continuing to try to give her wet food with pumpkin or fiber mixed in, but it’s not easy since food isn’t really her thing. We also have been keeping an eye on her weight and giving her an appetite stimulant here and there when I’m around to monitor her. Back in June her weight was at 8.4lbs, and this time she’s down to 8.1. I hope to spend more time giving her the stimulant after my next trip.

Sadly her bloodwork related to kidney values continues to worsen. Her CRE levels are the worst we’ve ever seen, with them shooting up higher than when she first crashed and we were notified of her renal failure back in 2011, almost five years ago. From 5.5 in June, she’s now at a very concerning 7.0.

Her BUN has stayed steady at 100, the same as it was in June.

My travel has been pretty hard on her, and I feel incredibly guilty about this. She’s more agitated and upset than we’d like to see so the vet prescribed a low dose of Alprazolam that she can be given during the worst times. We’re going to reduce her Calcitriol, but otherwise are continuing with the care routine.

It’s upsetting to see her decline in this way, and I have noticed a slight drop in energy as well. I’m still hoping we have a lot more time with my darling kitten-cat, but she turns ten next month and these values are definitely cause for concern.

But let’s not leave it on a sad note. The other day she made herself at home in a box that had the sun pointed directly inside it. SO CUTE!

She also tried to go with MJ on a business trip this week.

I love this cat.

by pleia2 at October 22, 2016 02:24 AM

October 20, 2016

Jono Bacon

All Things Open Next Week – MCing, Talks, and More

Last year I went to All Things Open for the first time and did a keynote. You can watch the keynote here.

I was really impressed with All Things Open last year and have subsequently become friends with the principal organizer, Todd Lewis. I loved how the team put together a show with the right balance of community and corporation, great content, exhibition and more.

All Things Open 2016 is happening next week and I will be participating in a number of areas:

  • I will be MCing the keynotes for the event. I am looking forward to introducing such a tremendous group of speakers.
  • Jeremy King, CTO of Walmart Labs, and I will be having a fireside chat. I am looking forward to delving into the work they are doing.
  • I will also be participating in a panel about openness and collaboration, and delivering a talk about building a community exoskeleton.
  • It is looking pretty likely I will be doing a book signing with free copies of The Art of Community to be given away thanks to my friends at O’Reilly!

The event takes place in Raleigh, and if you haven’t registered yet, do so right here!

Also, a huge thanks to Red Hat and opensource.com for flying me out. I will be joining the team for a day of meetings prior to All Things Open – looking forward to the discussions!

The post All Things Open Next Week – MCing, Talks, and More appeared first on Jono Bacon.

by Jono Bacon at October 20, 2016 08:20 PM

October 17, 2016

Elizabeth Krumbach

Seeking a new role

Today I was notified that I am being laid off from the upstream OpenStack Infrastructure job I have through HPE. It’s a workforce reduction and our whole team at HPE was hit. I love this job. I work with a great team on the OpenStack Infrastructure team. HPE has treated me very well, supporting travel to conferences I’m speaking at, helping to promote my books (Common OpenStack Deployments and The Official Ubuntu Book, 9th and 8th editions) and other work. I spent almost four years there and I’m grateful for what they did for my career.

But now I have to move on.

I’ve worked as a Linux Systems Administrator for the past decade and I’d love to continue that. I live in San Francisco so there are a lot of ops positions around here that I can look at, but I really want to find a place where my expertise with open source, writing and public speaking will be used and appreciated. I’d also be open to a more Community or Developer Evangelist role that leverages my systems and cloud background.

Whatever I end up doing next, the tl;dr (too long; didn’t read) version of what I need in my next role is as follows:

  • Most of my job to be focused on open source
  • Support for travel to conferences where I speak (6-12 per year)
  • Work from home
  • Competitive pay

My resume is over here: http://elizabethkjoseph.com

Now the long version, and a quick note about what I do today.

OpenStack project Infrastructure Team

I’ve spent nearly four years working full time on the OpenStack project Infrastructure Team. We run all the services that developers on the OpenStack project interact with on a daily basis, from our massive Continuous Integration system to translations and the Etherpads. I love it there. I also just wrote a book about OpenStack.

HPE has paid me to do this upstream OpenStack project Infrastructure work full time, but we have team members from various companies. I’d love to find a company in the OpenStack ecosystem willing to pay for me to continue this and support me like HPE did. All the companies who use and contribute to OpenStack rely upon the infrastructure our team provides, and as a root/core member of this team I have an important role to play. It would be a shame for me to have to leave.

However, I am willing to move on from this team and this work for something new. During my career thus far I’ve spent time working on both the Ubuntu and Debian projects, so I do have experience with other large open source projects, and with scaling back my involvement in them as my life dictates.

Most of my job to be focused on open source

This is extremely important to me. I’ve spent the past 15 years working intensively in open source communities, from Linux Users Groups to small and large open source projects. Today I work on a team where everything we do is open source. All system configs, Puppet modules, everything but the obvious private data that needs to be private for the integrity of the infrastructure (SSH keys, SSL certificates, passwords, etc). While I’d love a role where this is also the case, I realize how unrealistic it is for a company to have such an open infrastructure.

An alternative would be a position where I’m one of the ops people who understands the tooling (probably from gaining an understanding of it internally) and then going on to help manage the projects that have been open sourced by the team. I’d make sure best practices are followed for the open sourcing of things, that projects are paid attention to, and that contributors outside the organization are well-supported. I’d also go to conferences to present on this work, write about it on a blog somewhere (company blog? opensource.com?) and encourage and help other team members to do the same.

Support for travel to conferences where I speak (6-12 per year)

I speak a lot and I’m good at it. I’ve given keynotes at conferences in Europe, South America and right here in the US. Any company I go to work for will need to support me in this by giving me the time to prepare and give talks, and by compensating me for travel for conferences where I’m speaking.

Work from home

I’ve been doing this for the past ten years and I’d really struggle to go back into an office. Since operations, open source and travel don’t require me to be in an office, I’d prefer to keep the flexibility and time that working from home gives me.

For the right job I may be willing to consider going into an office or visiting client/customer sites (SF Bay Area is GREAT for this!) once a week, or some kind of arrangement where I travel to a home office for a week here and there. I can’t relocate for a position at this time.

Competitive pay

It should go without saying, but I do live in one of the most expensive places in the world and need to be compensated accordingly. I love my work, I love open source, but I have bills to pay and I’m not willing to compromise on this at this point in my life.

Contact me

If you think your organization would be interested in someone like me and can help me meet my requirements, please reach out via email at lyz@princessleia.com

I’m pretty sad today about the ending of what’s been such a great journey for me at HPE and in the OpenStack community, but I’m eager to learn more about the doors this change is opening up for me.

by pleia2 at October 17, 2016 11:23 PM

October 11, 2016

Akkana Peck

New Mexico LWV Voter Guides are here!

[Vote button] I'm happy to say that our state League of Women Voters Voter Guides are out for the 2016 election.

My grandmother was active in the League of Women Voters most of her life (at least after I was old enough to be aware of such things). I didn't appreciate it at the time -- and I also didn't appreciate that she had been born in a time when women couldn't legally vote, and the 19th amendment, giving women the vote, was ratified just a year before she reached voting age. No wonder she considered the League so important!

The LWV continues to work to extend voting to people of all genders, races, and economic groups -- especially important in these days when the Voting Rights Act is under attack and so many groups are being disenfranchised. But the League is important for another reason: local LWV chapters across the country produce detailed, non-partisan voter guides for each major election, which are distributed free of charge to voters. In many areas -- including here in New Mexico -- there's no equivalent of the "Legislative Analyst" who writes the lengthy analyses that appear on California ballots weighing the pros, cons and financial impact of each measure. In the election two years ago, not that long after Dave and I moved here, finding information on the candidates and ballot measures wasn't easy, and the LWV Voter Guide was by far the best source I saw. It's the main reason I joined the League, though I also appreciate the public candidate forums and other programs they put on.

LWV chapters are scrupulous about collecting information from candidates in a fair, non-partisan way. Candidates' statements are presented exactly as they're received, and all candidates are given the same specifications and deadlines. A few candidates ignored us this year and didn't send statements despite repeated emails and phone calls, but we did what we could.

New Mexico's state-wide voter guide -- the one I was primarily involved in preparing -- is at New Mexico Voter Guide 2016. It has links to guides from three of the four local LWV chapters: Los Alamos, Santa Fe, and Central New Mexico (Albuquerque and surrounding areas). The fourth chapter, Las Cruces, is still working on their guide and they expect it soon.

I was surprised to see that our candidate information doesn't include links to websites or social media. Apparently that's not part of the question sheet they send out, and I got blank looks when I suggested we should make sure to include that next time. The LWV does a lot of important work but they're a little backward in some respects. That's definitely on my checklist for next time, but for now, if you want a candidate's website, there's always Google.

I also helped a little on Los Alamos's voter guide, making suggestions on how to present it on the website (I maintain the state League website but not the Los Alamos site), and participated in the committee that wrote the analysis and pro and con arguments for our contentious charter amendment proposal to eliminate the elective office of sheriff. We learned a lot about the history of the sheriff's office here in Los Alamos, and about state laws and insurance rules regarding sheriffs, and I hope the important parts of what we learned are reflected in both sides of the argument.

The Voter Guides also have a link to a Youtube recording of the first Los Alamos LWV candidate forum, featuring NM House candidates, DA, Probate judge and, most important, the debate over the sheriff proposition. The second candidate forum, featuring US House of Representatives, County Council and County Clerk candidates, will be this Thursday, October 13 at 7 (refreshments at 6:30). It will also be recorded thanks to a contribution from the AAUW.

So -- busy, busy with election-related projects. But I think the work is mostly done (except for the one remaining forum), the guides are out, and now it's time to go through and read the guides. And then the most important part of all: vote!

October 11, 2016 10:08 PM

October 06, 2016

Nathan Haines

Winners of the Ubuntu 16.10 Free Culture Showcase

It's an exciting time for Ubuntu fans because next week will see the release of Ubuntu 16.10 and some interesting new features. But today we're going to talk about one exciting user-facing change: the community wallpapers that were selected from the Ubuntu Free Culture Showcase!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. For Ubuntu 16.10, hundreds of such wallpapers were submitted to the Ubuntu 16.10 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found.

But now the results are in: the top choices were voted on by certain members of the Ubuntu community, and I'm proud to announce the winning images that will be included in Ubuntu 16.10:

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers today at the links above, or in your desktop wallpaper list after you upgrade or install Ubuntu 16.10 on October 13th.

October 06, 2016 04:20 AM

October 05, 2016

Akkana Peck

Play notes, chords and arbitrary waveforms from Python

Reading Stephen Wolfram's latest discussion of teaching computational thinking (which, though I mostly agree with it, is more an extended ad for Wolfram Programming Lab than a discussion of what computational thinking is and why we should teach it) I found myself musing over ideas for future computer classes for Los Alamos Makers. Students, and especially kids, like to see something other than words on a screen. Graphics and games are good, or robotics when possible ... but another fun project a novice programmer can appreciate is music.

I found myself curious what you could do with Python, since I hadn't played much with Python sound generation libraries. I did discover a while ago that Python is rather bad at playing audio files, though I did eventually manage to write a music player script that works quite well. What about generating tones and chords?

A web search revealed that this is another thing Python is bad at. I found lots of people asking about chord generation, and a handful of half-baked ideas that relied on long-obsolete packages or external programs. But none of it actually worked, at least not without requiring Windows or relying on larger packages like fluidsynth (which looked worth exploring some day when I have more time).

Play an arbitrary waveform with Pygame and NumPy

But I did find one example based on a long-obsolete Python package called Numeric which, when rewritten to use NumPy, actually played a sound. You can take a NumPy array and play it using a pygame.sndarray object this way:

import pygame, pygame.sndarray

# The mixer must be initialized to match the arrays we build below
# (44100 Hz, signed 16-bit, mono) before make_sound() will work:
pygame.mixer.init(frequency=44100, size=-16, channels=1)

def play_for(sample_wave, ms):
    """Play the given NumPy array, as a sound, for ms milliseconds."""
    sound = pygame.sndarray.make_sound(sample_wave)
    sound.play(-1)
    pygame.time.delay(ms)
    sound.stop()

Then you just need to calculate the waveform you want to play. NumPy can generate sine waves on its own, while scipy.signal can generate square and sawtooth waves. Like this:

import numpy
import scipy.signal

sample_rate = 44100

def sine_wave(hz, peak, n_samples=sample_rate):
    """Compute N samples of a sine wave with given frequency and peak amplitude.
       Defaults to one second.
    """
    length = sample_rate / float(hz)
    omega = numpy.pi * 2 / length
    xvalues = numpy.arange(int(length)) * omega
    onecycle = peak * numpy.sin(xvalues)
    return numpy.resize(onecycle, (n_samples,)).astype(numpy.int16)

def square_wave(hz, peak, duty_cycle=.5, n_samples=sample_rate):
    """Compute N samples of a square wave with given frequency and peak amplitude.
       Defaults to one second.
    """
    t = numpy.linspace(0, 1, 500 * 440/hz, endpoint=False)
    wave = scipy.signal.square(2 * numpy.pi * 5 * t, duty=duty_cycle)
    wave = numpy.resize(wave, (n_samples,))
    return (peak / 2 * wave.astype(numpy.int16))

# Play A (440Hz) for 1 second as a sine wave:
play_for(sine_wave(440, 4096), 1000)

# Play A-440 for 1 second as a square wave:
play_for(square_wave(440, 4096), 1000)
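
The sawtooth_wave() function mentioned later can be built the same way with scipy.signal.sawtooth. Here is one plausible definition along the same lines (a sketch, not necessarily the exact code in the final script):

def sawtooth_wave(hz, peak, rising_ramp_width=1, n_samples=sample_rate):
    """Compute N samples of a sawtooth wave with given frequency and peak
       amplitude. Defaults to one second. A rising_ramp_width of 0.5
       gives a triangle wave.
    """
    t = numpy.linspace(0, 1, sample_rate, endpoint=False)
    wave = scipy.signal.sawtooth(2 * numpy.pi * hz * t, width=rising_ramp_width)
    wave = numpy.resize(wave, (n_samples,))
    return (peak / 2 * wave).astype(numpy.int16)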

Playing chords

That's all very well, but it's still a single tone, not a chord.

To generate a chord of two notes, you can add the waveforms for the two notes. For instance, 440Hz is concert A, and the A one octave above it is double the frequency, or 880 Hz. If you wanted to play a chord consisting of those two As, you could do it like this:

play_for(sum([sine_wave(440, 4096), sine_wave(880, 4096)]), 1000)

Simple octaves aren't very interesting to listen to. What you want is chords like major and minor triads and so forth. If you google for chord ratios Google helpfully gives you a few of them right off, then links to a page with a table of ratios for some common chords.

For instance, the major triad ratios are listed as 4:5:6. What does that mean? It means that for a C-E-G triad (the first C chord you learn in piano), the E's frequency is 5/4 of the C's frequency, and the G is 6/4 of the C.

You can pass that list, [4, 5, 6], to a function that will calculate the right ratios to produce the set of waveforms you need to add to get your chord:

def make_chord(hz, ratios):
    """Make a chord based on a list of frequency ratios."""
    sampling = 4096    # peak amplitude passed to sine_wave()
    chord = sine_wave(hz, sampling)
    for r in ratios[1:]:
        chord = sum([chord, sine_wave(hz * r / ratios[0], sampling)])
    return chord

def major_triad(hz):
    return make_chord(hz, [4, 5, 6])

play_for(major_triad(440), 1000)

Even better, you can pass in the waveform you want to use when you're adding instruments together:

def make_chord(hz, ratios, waveform=None):
    """Make a chord based on a list of frequency ratios
       using a given waveform (defaults to a sine wave).
    """
    sampling = 4096
    if not waveform:
        waveform = sine_wave
    chord = waveform(hz, sampling)
    for r in ratios[1:]:
        chord = sum([chord, waveform(hz * r / ratios[0], sampling)])
    return chord

def major_triad(hz, waveform=None):
    return make_chord(hz, [4, 5, 6], waveform)

play_for(major_triad(440, square_wave), 1000)

There are still some problems. For instance, sawtooth_wave() works fine individually or for pairs of notes, but triads of sawtooths don't play correctly. I'm guessing something about the sampling rate is making their overtones cancel out part of the sawtooth wave. Triangle waves (in scipy.signal, that's a sawtooth wave with rising ramp width of 0.5) don't seem to work right even for single tones. I'm sure these are solvable, perhaps by fiddling with the sampling rate. I'll probably need to add graphics so I can look at the waveform for debugging purposes.

In any case, it was a fun morning hack. Most chords work pretty well, and it's nice to know how to play any waveform I can generate.

The full script is here: play_chord.py on GitHub.

October 05, 2016 05:29 PM

October 02, 2016

Elizabeth Krumbach

Autumn in Philadelphia

I spent this past week in Philadelphia. For those of you following along at home, I was there just a month before, for FOSSCON and other time with friends. This time our trip was also purposeful: we were in town for the gravestone unveiling for MJ’s grandmother, to celebrate my birthday with Philly friends and to work on a secret mission (secret until November).

Before all that, I spent some time enjoying the city upon arrival. The first morning I was there I got in early and a friend picked me up at the airport. After breakfast we headed toward the Philadelphia Zoo. We killed some time with a walk before making our way up to the zoo itself, where I insisted we spend a bit of time watching the street cars (they call them trolleys) on the SEPTA Route 15 that goes right past the zoo. These SEPTA PCC cars are direct relatives to the ones that run in San Francisco, in fact, San Francisco bought a large portion of their PCC fleet directly from SEPTA several decades ago. Almost all the PCC cars you see running on the F-line in San Francisco are from Philadelphia. It was fun to have some time to hang out and enjoy the cars in their home city.

And of course the zoo! I’ve been to the zoo a few times, but it’s a nice sized one and I always enjoy going. I don’t remember them having an aye-aye exhibit, so it was nice to see that, particularly since the one at the San Francisco Zoo has been closed for some time. The penguins are always great to see, and the red pandas are super adorable.


Humboldt penguins at the Philadelphia Zoo

More photos from the zoo here: https://www.flickr.com/photos/pleia2/albums/72157673262888271

Tuesday I spent working and spending time with my friend Danita. Camped out on her couch I got a pile of work done and later in the evening we went out to do a bit of shopping. That evening MJ arrived in Philadelphia from a work trip he was on and picked me up to grab some dinner.

Wednesday morning was the gravestone unveiling. According to Jewish tradition, this ceremony is completed approximately a year after the passing of your loved one and coincides with the conclusion of the year of mourning. We had 10 people there, and though the weather did threaten rain, it held out as we made our way through some words about her life, prayers and quiet moments together. Afterwards the family attending all went out to lunch together.

Thursday’s big event was my 35th birthday! In the morning I went out to Core Creek Park, a few miles from our hotel, for a run. The weather wasn’t entirely cooperative, but I wasn’t about to be put off by a hint of drizzle. It was totally the right thing to do: I parked near the lake in the park and did a run/walk of a couple miles on a trail around that edge of the park. I saw a deer, lots of birds and was generally pleased with the sights. I love autumn in Philadelphia and this was such a perfect way to experience it.

That night MJ drove us down to the city and met up with a whole pile of friends (Danita, Crissi, Mike and Jess, Jon, David, Tim, Mike and Heidi, Walt, and Paul) for a birthday party at The Continental near Penn’s Landing. I love this place. We had our wedding party dinner here, and we eat here, or at the mid-town location, almost every time we’re in town. MJ and Danita had reserved a private room which allowed for mingling throughout the night. Danita helped me pick out some killer shoes that I had fun wearing with my awesome dress and I drank a lot of Twizzle martinis (Smirnoff citrus, strawberry puree, lemon, red licorice wheel) along with all the spectacular food they brought to our tables through the night. There was also a delicious walnut-free carrot cake …with only 5 candles, which was appreciated, hah! Did I mention I drank a lot of martinis? It was an awesome night, my friends are the best.

Late Friday and into Saturday were secret mission days, but I took some time for work like every day and we also got to see friends and family both days. I also was able to get down to the hotel gym on Saturday morning to visit the treadmill and spend some time in the pool.

Our flight took us home to our kitties on Saturday evening. I’ve been incredibly stressed out lately with a lot going on with my career (work, book, other open source things) and personally (where to begin…), but it was a very good trip over all.

Rosh Hashanah begins tonight and means a day of observation tomorrow too. Tuesday and Wednesday are packed with work and spending evenings with MJ before I fly off again on Thursday. This time to Ohio for the Ohio LinuxFest in Columbus where my talk is A Tour of OpenStack Deployment Scenarios. While I’m there I also have plans to meet up with my Ubuntu community friends (including going to the Columbus Zoo!) and most of the crew I went to Ghana with in 2012.

by pleia2 at October 02, 2016 03:49 PM

MUNI Heritage Weekend

Before heading to Philadelphia last weekend I took time to spend Saturday with my friend Mark at MUNI Heritage Weekend. As an active transit advocate in San Francisco, Mark is a fun person to attend such an event with. I like to think I know a fair amount about things on rails in San Francisco, but he’s much more knowledgeable about transit in general.

I was pretty excited about this day: I was all decked out in my cable car earrings and Seashore Trolley Museum t-shirt.

The day began with a walk down Market to meet Mark near the railway museum, which was the center of all the activity for the day. I arrived a bit early and spent my time snapping pictures of all the interesting streetcars and buses coming around. When we met up our first adventure was to take a ride on our first vintage bus of the day, the 5300!

Now, as far as vintage goes, the 5300 doesn’t go very far back in history. This bus was an electric from 1975 and it had a good run, still riding around the city just over a decade ago. This was a long ride, taking us down Howard, South Van Ness, all the way down to Mission and 25th street, then back to the railway museum. It took about 45 minutes, during which time Mark and I had lots of time to catch up.

We then had some time to walk around a bit and see what else was out. Throughout the day we saw one of the Blackpool “boat” streetcars, the Melbourne streetcar (which I still haven’t ridden in!) the Number 1 streetcar and more.

Next up was a ride on the short 042 from 1938! This was a fun one, it’s the oldest bus in the fleet and the blog post about the event had this to say:

A surprise participant was Muni’s oldest bus, the 042, built in 1938 by the White Motor Company. Its engine had given up the ghost, but the top-notch mechanics at Woods Motor Coach Division swapped it out for one in a White bus Market Street Railway’s Paul Wells located in the Santa Cruz Mountains and repatriated. The 042 operated like a dream looping around Union Square all weekend, as did 1970 GMC “fishbowl” 3287, shown behind it

Pretty cool! As the quote suggests, it was not electric so it was able to do its own thing in the Union Square loop, in a ride that only took about 20 minutes.

Then, more viewing of random cars. I think the highlight of my time then was getting to see the 578 “dinky” close up. Built in 1896, this street car looks quite a bit like a cable car, making it a distinctive sight among all the other street cars.

By then we were well into the late afternoon and decided to grab some late lunch. Continuing our transit-related day, I took him up Howard Street to get a view of the progress on the new Transbay Transit Center. After walking past it on street level, we went up to the roof deck where we live to get some views and pictures from up above.

This was definitely a bus-heavy heritage day for me, but it was fun. Lots more photos from the day here: https://www.flickr.com/photos/pleia2/albums/72157674240825576

That evening it was time for me to get off the buses and rails to take another form of transportation: I was off to Philadelphia on a plane!

by pleia2 at October 02, 2016 02:49 PM

October 01, 2016

Akkana Peck

Zsh magic: remove all raw photos that don't have a corresponding JPEG

Lately, when shooting photos with my DSLR, I've been shooting raw mode but with a JPEG copy as well. When I triage and label my photos (with pho and metapho), I use only the JPEG files, since they load faster and there's no need to index both. But that means that sometimes I delete a .jpg file while the huge .cr2 raw file is still on my disk.

I wanted some way of removing these orphaned raw files: in other words, for every .cr2 file that doesn't have a corresponding .jpg file, delete the .cr2.

That's an easy enough shell function to write: loop over *.cr2, change the .cr2 extension to .jpg, check whether that file exists, and if it doesn't, delete the .cr2.
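
Spelled out longhand (here in Python, as a hypothetical sketch just for comparison), the logic is simply:

import glob, os

# Remove every .cr2 file that has no corresponding .jpg file.
for cr2 in glob.glob('*.cr2'):
    jpg = os.path.splitext(cr2)[0] + '.jpg'
    if not os.path.exists(jpg):
        os.remove(cr2)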

But as I started to write the shell function, it occurred to me: this is just the sort of magic trick zsh tends to have built in.

So I hopped on over to #zsh and asked, and in just a few minutes, I had an answer:

rm *.cr2(e:'[[ ! -e ${REPLY%.cr2}.jpg ]]':)

Yikes! And it works! But how does it work? It's cheating to rely on people in IRC channels without trying to understand the answer so I can solve the next similar problem on my own.

Most of the answer is in the zshexpn man page, but it still took some reading and jumping around to put the pieces together.

First, we take all files matching the initial wildcard, *.cr2. We're going to apply to them the filename generation code expression in parentheses after the wildcard. (I think you need EXTENDED_GLOB set to use that sort of parenthetical expression.)

The variable $REPLY is set to the filename the wildcard expression matched; so it will be set to each .cr2 filename, e.g. img001.cr2.

The expression ${REPLY%.cr2} removes the .cr2 extension. Then we tack on a .jpg: ${REPLY%.cr2}.jpg. So now we have img001.jpg.

[[ ! -e ${REPLY%.cr2}.jpg ]] checks for the existence of that jpg filename, just like in a shell script.

So that explains the quoted shell expression. The final, and hardest part, is how to use that quoted expression. That's in section 14.8.7 Glob Qualifiers. (estring) executes string as shell code, and the filename will be included in the list if and only if the code returns a zero status.

The colons -- after the e and before the closing parenthesis -- are just separator characters. Whatever character immediately follows the e will be taken as the separator, and anything from there to the next instance of that separator (the second colon, in this case) is taken as the string to execute. Colons seem to be the character to use by convention, but you could use anything. This is also the part of the expression responsible for setting $REPLY to the filename being tested.

So why the quotes inside the colons? They're because some of the substitutions being done would be evaluated too early without them: "Note that expansions must be quoted in the string to prevent them from being expanded before globbing is done. string is then executed as shell code."

Whew! Complicated, but awfully handy. I know I'll have lots of other uses for that.

One additional note: section 14.8.5, Approximate Matching, in that manual page caught my eye. zsh can do fuzzy matches! I can't think offhand what I need that for ... but I'm sure an idea will come to me.

October 01, 2016 09:28 PM

September 28, 2016

Jono Bacon

Bacon Roundup – 28th September 2016

Here we are with another roundup of things I have been working on, complete with a juicy foray into the archives too. So, sit back, grab a cup of something delicious, and enjoy.

To gamify or not to gamify community (opensource.com)

In this piece I explore whether gamification is something we should apply to building communities. I also pull from my experience building a gamification platform for Ubuntu called Ubuntu Accomplishments.

The GitLab Master Plan (gitlab.com)

Recently I have been working with GitLab. The team has been building their vision for conversational development and I MCed their announcement of their plan. You can watch the video of the announcement for convenience:

[Video: The GitLab Master Plan announcement]

Social Media: 10 Ways To Not Screw It Up (jonobacon.org)

Here I share 10 tips and tricks that I have learned over the years for doing social media right. This applies to tooling, content, distribution, and more. I would love to learn your tips too, so be sure to share them in the comments!

Linux, Linus, Bradley, and Open Source Protection (jonobacon.org)

Recently there was something of a spat in the Linux kernel community about when is the right time to litigate companies who misuse the GPL. As a friend of both sides of the debate, this was my analysis.

The Psychology of Report/Issue Templates (jonobacon.org)

As many of you will know, I am something of a behavioral economics fan. In this piece I explore the interesting human psychology behind issue/report templates. It is subtle nudges like this that can influence the behavioral patterns you want to see.

My Reddit AMA

It would be remiss without sharing a link to my recent reddit AMA where I was asked a range of questions about community leadership, open source, and more. Thanks to all of you who joined and asked questions!

Looking For Talent

I also posted a few pieces about some companies I am working with who want to hire smart, dedicated, and talented community leaders. If you are looking for a new role, be sure to see these:

From The Archives

Dan Ariely on Building More Human Technology, Data, Artificial Intelligence, and More (forbes.com)

My Forbes piece on the impact of behavioral economics on technologies, including an interview with Dan Ariely, TED speaker, and author of many books on the topic.

Advice for building a career in open source (opensource.com)

In this piece I share some recommendations I have developed over the years for those of you who want to build a career in open source. Of course, I would love to hear your tips and tricks too!

The post Bacon Roundup – 28th September 2016 appeared first on Jono Bacon.

by Jono Bacon at September 28, 2016 03:00 PM

Elizabeth Krumbach

Yak Coloring

A couple cycles ago I asked Ronnie Tucker, artist and creator of Full Circle Magazine, to create a werewolf coloring page for the 15.10 release (details here). He then created another for Xenial Xerus, see here.

He’s now created one for the upcoming Yakkety Yak release! So if you’re sick of all the yak shaving you’re doing as we prepare for this release, you may consider giving yak coloring a try.

But that’s not the only yak! We have Tom Macfarlane of the Canonical Design Team to thank once again for sending me the SVG to update the Animal SVGs section of the Official Artwork page on the Ubuntu wiki. They’re sticking with a kind of origami theme this time for our official yak.

Download the SVG version for printing from the wiki page or directly here.

by pleia2 at September 28, 2016 12:43 AM

September 26, 2016

Akkana Peck

Unclaimed Alcoholic Beverages

Dave was reading New Mexico laws regarding a voter guide issue we're researching, and he came across this gem in Section 29-1-14 G of the "Law Enforcement: Peace Officers in General: Unclaimed Property" laws:

Any alcoholic beverage that has been unclaimed by the true owner, is no longer necessary for use in obtaining a conviction, is not needed for any other public purpose and has been in the possession of a state, county or municipal law enforcement agency for more than ninety days may be destroyed or may be utilized by the scientific laboratory division of the department of health for educational or scientific purposes.

We can't decide which part is more fun: contemplating what the "other public purposes" might be, or musing on the various "educational or scientific purposes" one might come up with for a month-old beverage that's been sitting in the storage locker ... I'm envisioning a room surrounded by locked chain-link, with dusty shelves holding rows of half-full martini and highball glasses.

September 26, 2016 05:04 PM

Eric Hammond

Deleting a Route 53 Hosted Zone And All DNS Records Using aws-cli

fast, easy, and slightly dangerous recursive deletion of a domain’s DNS

Amazon Route 53 currently charges $0.50/month per hosted zone for your first 25 domains, and $0.10/month for additional hosted zones, even if they are not getting any DNS requests. I recently stopped using Route 53 to serve DNS for 25 domains and wanted to save on the $150/year these were costing.

Amazon’s instructions for using the Route 53 Console to delete Record Sets and a Hosted Zone make it look simple. I started in the Route 53 Console clicking into a hosted zone, selecting each DNS record set (but not the NS or SOA ones), clicking delete, clicking confirm, going back a level, selecting the next domain, and so on. This got old quickly.

Being lazy, I decided to spend a lot more effort figuring out how to automate this process with the aws-cli, and pass the savings on to you.

Steps with aws-cli

Let’s start by putting the hosted zone domain name into an environment variable. Do not skip this step! Do make sure you have the right name! If this is not correct, you may end up wiping out DNS for a domain that you wanted to keep.

domain_to_delete=example.com

Install the jq json parsing command line tool. I couldn’t quite get the normal aws-cli --query option to get me the output format I wanted.

sudo apt-get install jq

Look up the hosted zone id for the domain. This assumes that you only have one hosted zone for the domain. (It is possible to have multiple, in which case I recommend using the Route 53 console to make sure you delete the right one.)

hosted_zone_id=$(
  aws route53 list-hosted-zones \
    --output text \
    --query 'HostedZones[?Name==`'$domain_to_delete'.`].Id'
)
echo hosted_zone_id=$hosted_zone_id

Use list-resource-record-sets to find all of the current DNS entries in the hosted zone, then delete each one with change-resource-record-sets.

aws route53 list-resource-record-sets \
  --hosted-zone-id "$hosted_zone_id" |
jq -c '.ResourceRecordSets[]' |
while read -r resourcerecordset; do
  type=$(jq -r '.Type' <<<"$resourcerecordset")
  # Route 53 requires the zone's NS and SOA records to remain until the
  # hosted zone itself is deleted, so skip those.
  if [ "$type" != "NS" ] && [ "$type" != "SOA" ]; then
    aws route53 change-resource-record-sets \
      --hosted-zone-id "$hosted_zone_id" \
      --change-batch '{"Changes":[{"Action":"DELETE","ResourceRecordSet":
          '"$resourcerecordset"'
        }]}' \
      --output text --query 'ChangeInfo.Id'
  fi
done

Finally, delete the hosted zone itself:

aws route53 delete-hosted-zone \
  --id $hosted_zone_id \
  --output text --query 'ChangeInfo.Id'

As written, the above commands output the change ids. You can monitor the background progress using a command like:

change_id=...
aws route53 wait resource-record-sets-changed \
  --id "$change_id"

GitHub repo

To make it easy to automate the destruction of your critical DNS resources, I’ve wrapped the above commands into a command line tool and tossed it into a GitHub repo here:

https://github.com/alestic/aws-route53-wipe-hosted-zone

You are welcome to use as is, fork, add protections, rewrite with Boto3, and generally knock yourself out.

Alternative: CloudFormation

A colleague pointed out that a better way to manage all of this (in many situations) would be to simply toss my DNS records into a CloudFormation template for each domain. Benefits include:

  • Easy to store whole DNS definition in revision control with history tracking.

  • Single command creation of the hosted zone and all record sets.

  • Single command updating of all changed record sets, no matter what has changed since the last update.

  • Single command deletion of the hosted zone and all record sets (my current challenge).

This doesn’t work as well for hosted zones where different records are added, updated, and deleted by automated processes (e.g., instance startup), but for simple, static domain DNS, it sounds ideal.
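
For a sense of what this alternative looks like, here is a minimal sketch of such a template (hypothetical example.com records, not from a real zone). A single aws cloudformation delete-stack call would then remove the hosted zone and every record set in one step:

AWSTemplateFormatVersion: '2010-09-09'
Description: DNS for example.com (illustrative sketch only)
Resources:
  HostedZone:
    Type: AWS::Route53::HostedZone
    Properties:
      Name: example.com.
  WwwRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref HostedZone  # Ref on a hosted zone returns its id
      Name: www.example.com.
      Type: A
      TTL: '300'
      ResourceRecords:
        - 192.0.2.10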

How do you create, update, and delete DNS in Route 53 for your domains?

Original article and comments: https://alestic.com/2016/09/aws-route53-wipe-hosted-zone/

September 26, 2016 09:30 AM

Jono Bacon

Looking for a data.world Director of Community

data.world

Some time ago I signed an Austin-based data company called data.world as a client. The team are building an incredible platform where the community can store data, collaborate around the shape/content of that data, and build an extensive open data commons.

As I wrote about previously I believe data.world is going to play an important role in opening up the potential for finding discoveries in disparate data sets and helping people innovate faster.

I have been working with the team to help shape their community strategy and they are now ready to hire a capable Director of Community to start executing these different pieces. The role description is presented below. The data.world team are an incredible bunch with some strong heritage in the leadership of Brett Hurt, Matt Laessig, Jon Loyens, Bryon Jacob, and others.

As such, I am looking to find the team some strong candidates. If I know you, I would invite you to confidentially share your interest in this role by filling in my form here. This way I can get a good sense of who is interested and also recommend people I personally know and can vouch for. I will then reach out to those of you for whom this seems a good potential fit and play a supporting role in brokering the conversation.

This role will require candidates to either be based in Austin or be willing to relocate to Austin. This is a great opportunity, and feel free to get in touch with me if you have any questions.

Director of Community Role Description

data.world is building a world-class data commons, management, and collaboration platform. We believe that data.world is the very best place to build great data communities that can make data science fun, enjoyable, and impactful. We want to ensure we can provide the very best support, guidance, and engagement to help these communities be successful. This will involve engagement in workflow, product, outreach, events, and more.

As Director of Community, you will lead, coordinate, and manage our global community development initiatives. You will use your community leadership experience to shape our community experience and infrastructure, feed into the product roadmap with community needs and requirements, build growth and engagement, and more. You will help connect, celebrate, and amplify the existing communities on data.world and assist new ones as they form. You will help our users to think bigger, be the best they can be, and succeed more. You’ll work across teams within data.world to promote the community’s voice within our different internal teams. You should be a content expert, superb communicator, and humble facilitator.

Typical activities for this role include:

  • Building and executing programs that grow communities on data.world and empower them to do great work.
  • Taking a structured approach to community roles, on-boarding, and working with our teams to ensure community members have a simple and powerful experience.
  • Developing content that promotes the longevity and sustainability of fast growing, organically built data communities with high impact outcomes.
  • Building relationships within the industry and community, acting as data.world’s representative and helping people engage, be successful, and deliver great work and collaboration.
  • Working with product, user operations, and marketing teams on product roadmap for community features and needs.
  • Being a data.world representative and spokesperson at conferences, events, and within the media and external data communities.
  • Always challenging our assumptions and our culture, and staying singularly focused on delivering the very best data community platform in the world.

Experience with the following is required:

  • 5-7 years of experience participating in and building communities, preferably data-based or technical in nature.
  • Experience with working in open source, open data, and other online communities.
  • Public speaking, blogging, and content development.
  • Facilitating complex and sensitive community management situations with humility, judgment, tact, and humor.
  • Integrating company brand, voice, and messaging into developed content.
  • Working independently and autonomously, managing multiple competing priorities.

Experience with any of the following preferred:

  • Data science experience and expertise.
  • 3-5 years of experience leading community management programs within a software or Internet-based company.
  • Media training and experience in communicating with journalists, bloggers, and other media on a range of technical topics.
  • Existing network from a diverse set of communities and social media platforms.
  • Software development capabilities and experience.

The post Looking for a data.world Director of Community appeared first on Jono Bacon.

by Jono Bacon at September 26, 2016 04:16 AM

September 25, 2016

Elizabeth Krumbach

Beer and trains in Germany

I spent most of this past week in Germany with the OpenStack Infrastructure and QA teams doing a sprint at the SAP offices in Walldorf; I wrote about it here.

The last (and first!) time I was in Germany was for the same purpose, a sprint, that time in Darmstadt, where I snuck in a tiny amount of touristing, but due to troubles with my gallbladder I couldn’t have any fried foods or beer. Well, I had one beer to celebrate Germany winning the World Cup, but I regretted it big time.

This time was different: finally I could have liters of German beer! And I did. The first night there I even had some wiener schnitzel (fried veal!), even if we were all too tired from our travels to leave the hotel that night. We went out to beer gardens every other night after that, taking in the beautiful late summer weather and enjoying great beers.


Photo in the center by Chris Hoge (source)

But I have a confession to make: I don’t like pilsners, and that makes Belgium my favorite beer country in Europe. Still, Germany has quite the title. Fortunately, while pilsners are the default here, they were not my only option. I indulged in dark lagers and hefeweizens all week. On our evening in Heidelberg I also had an excellent Oktoberfest Märzen by Heidelberger, which was probably my favorite beer of the whole trip.

Now I’m getting ahead of myself, because I was excited about all the beer. I arrived on Sunday, sadly much later than I had intended. My original flights had been rescheduled, so I ended up meeting my colleague Clark at the Frankfurt airport around 4PM to catch our trains to Walldorf. The train station is right there in the airport, and clear signs meant a no-fuss transfer halfway through our journey to the next train. We were on the trains for about an hour before arriving at Wiesloch-Walldorf station. A ten Euro cab ride then got us to the hotel, where we met up with several other colleagues for drinks.

Of course we were there to work, so that’s what we spent 9-5 each day doing, but the evenings were ours to explore our little corner of Germany. The first night we just walked into Walldorf after work and enjoyed drinks and food until the sun went down. Walldorf is a very cute little town, and the outdoor seating at the beer garden we went to was a wonderful treat, especially since the weather was so warm and clear. We spent Wednesday night in Walldorf too.

More photos from Walldorf here: https://www.flickr.com/photos/pleia2/sets/72157670828593814/

Tuesday night was our big night out. We all headed out to the nearby Heidelberg for a big group dinner. After parking, we had a lovely short walk to the restaurant which took me by a shop that sold post cards! I picked up a trio of cards for my mother and sisters, as I typically do when traveling. The walk also gave a couple of us time to take pictures of the city before the sun went down.

Dinner was at Zum Weissen Schwanen (The White Swan). That was my four beer night.

After the meal several of us took a nice walk around the city a bit more. We got to look up and see the massive, lit-up Heidelberg Castle. It’s a pretty exceptional place; I’d love to properly visit some time. The post cards I sent to family all included the castle.

The drive back to the hotel was fun too. I got a tiny taste of the German autobahn as we got up to 220 kilometers per hour on our way back to the hotel before our exit came up. Woo!

My pile of Heidelberg photos are here: https://www.flickr.com/photos/pleia2/albums/72157674174957385

Thursday morning was my big morning of trains. I flew into Frankfurt like everyone else, but I flew home out of Dusseldorf because it was several hundred dollars cheaper. The problem is that Walldorf and Dusseldorf aren’t exactly close, but I could spend a couple hours on a European ICE (Inter-City Express) and get there. MJ highly recommended I try it out since I like trains, and with the simplicity of the routing he convinced me to take a route from Mannheim all the way to Dusseldorf Airport with one simple connection, which just required walking across the platform.

I’m super thankful he convinced me to take the trains. The ticket wasn’t very expensive and I really do like trains. In addition to being reasonably priced, they’re fast, on time and all the signs were great so I didn’t feel worried about getting lost or ending up in the wrong place. The signs even report where each coach will show up on the platform so I had no trouble figuring out where to stand to get to my assigned seat.

I took a few more pictures while on my train adventure, here: https://www.flickr.com/photos/pleia2/albums/72157670930346613

And so I spent a couple hours on my way to Dusseldorf. I was a bit tired since my first train left the station at 7:36AM, so I mostly just listened to music and stared out the window. My flight out of Dusseldorf was uneventful, and since it was a direct flight to San Francisco I was able to come home to my kitties in the early evening. Unfortunately MJ had left home the day before, so I’ll have to wait until we’re both in Philadelphia next week to see him.

by pleia2 at September 25, 2016 12:16 AM

September 24, 2016

Elizabeth Krumbach

OpenStack QA/Infrastructure Meetup in Walldorf

I spent this week in the lovely town of Walldorf, Germany with about 25 of my OpenStack Quality Assurance and Infrastructure colleagues. We were there for a late-cycle sprint, where we all huddled in a couple of rooms for three days to talk, script and code our way through some challenges that are much easier to tackle when all the key players are in a room together. QA and Infra have always been a good match for an event like this since we’re so tightly linked: things QA works on are supported by and tested in the Continuous Integration system we run.

Our venue this time around was the SAP offices in Walldorf. They graciously donated the space to us for this event, and kept us blessedly fed, hydrated and caffeinated throughout the day.

Each day we enjoyed a lovely walk to and from the hotel many of us stayed at. We lucked out and there wasn’t any rain while we were there, so we got to take in the best of late summer weather in Germany. Our walk took us through a corn field and past flowers, gave us a nice glimpse of the town of Walldorf on the other side of the highway, and then began the approach to the SAP buildings, of which there are many.

The first day began with an opening from our host at the SAP offices, Marc Koderer, and from the QA project lead Ken’ichi Ohmichi. From there we went through the etherpad for the event to figure out where to begin. A big chunk of the Infrastructure team went to their own room to chat about Zuulv3 and some of the work on Ansible, and a couple of us hung back with the QA team to move some of their work along.

Spending time with the QA folks I learned about future plans for a more useful series of bugday graphs. I also worked with Spencer Krum and Matt Treinish to land a few patches related to the new Firehose service. Firehose is an MQTT-based unified message bus that seeks to encompass all the developer-facing infra alerts and updates in a single stream. This includes job results from Gerrit, updates on bugs from Launchpad, specific logs that are processed by logstash and more. At the beginning of the sprint only Gerrit was feeding into it using germqtt, but by the end of Monday we had Launchpad bugs submitting events over email via lpmqtt. The work was mostly centered around setting up Cyrus with Exim and then configuring the accounts and MX records, and trying to do this all in a way that the rest of the team would be happy with. All seems to have worked out, and at the end of the day Matt sent out an email announcing it: Announcing firehose.openstack.org.
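
To give a flavor of how one might consume Firehose, here is a minimal Python subscriber sketch using the paho-mqtt library; the anonymous read access on the standard MQTT port and the topic layout shown are assumptions for illustration, so check the service documentation for the real details:

import paho.mqtt.client as mqtt

# Print every event that arrives as "topic payload".
def on_message(client, userdata, message):
    print(message.topic, message.payload.decode('utf-8'))

client = mqtt.Client()
client.on_message = on_message
client.connect('firehose.openstack.org', 1883)
client.subscribe('#')  # '#' matches everything; a narrower filter also works
client.loop_forever()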

That evening we gathered in the little town of Walldorf to have a couple beers, dinner, and relax in a lovely beer garden for a few hours as the sun went down. It was really nice to catch up with some of my colleagues that I have less day to day contact with. I especially enjoyed catching up with Yolanda and Gema, both of whom I’ve known for years through their past work at Canonical on Ubuntu. The three of us also were walk buddies back to the hotel, before which I demanded a quick photo together.

Tuesday morning we started off by inviting Khai Do over to give a quick demo of the Gerrit verify plugin. Now, Khai is one of us, so what do I mean by “come over”? Of all the places and times in the world, Khai was also at the SAP offices in Walldorf, Germany, but he was there for a Gerrit Hackathon. He brought along another Gerrit contributor and showed us how the verify plugin would replace the somewhat hacked-into-place JavaScript that we currently have on our review pages to give a quick view into the test results. It also offers the ability in the web UI to run rechecks on tests, and will provide a page including the history of all results through all the patchsets and queues. They’ve done a great job on it, and I was thrilled to see upstream Gerrit working with us to solve some of our problems.


Khai demos the Gerrit verify plugin

After Khai’s little presentation, I plugged my laptop into the projector and brought up the etherpad so we could spend a few minutes going over work that was done on Monday. A Zuulv3 etherpad had been worked on to capture a lot of the work from the Infrastructure team on Monday. Updates were added to our main etherpad about things other people worked on and reviews that were now pending to complete the work.

Groups then split off again, this time I followed most of the rest of the Infrastructure team into a room where we worked on infra-cloud, our infra-spun, fully open source OpenStack deployment that we started running a chunk of our CI tests on a few weeks ago. The key folks working on it gave a quick introduction and then we dove right into debugging some performance problems that were causing failed initial launches. This took us through poking at the Glance image service, rules in Neutron and defaults in the Puppet modules. A fair amount of multi-player (using screen) debugging was done up on the projector as we shifted around options, took the cloud out of the pool of servers for some time, and spent some time debugging individual compute nodes and instances as we watched what they did when they came up for the first time. In addition to our “vanilla” region, Ricardo Carrillo Cruz also made progress that day on getting our “chocolate” region working (next up: strawberry!).

I also was able to take some time on Tuesday to finally get notice and alert notifications going to our new @openstackinfra Twitter account. Monty Taylor had added support for this months ago, but I had just set up the account and written the patches to land it a few days before. We ran into one snafu, but a quick patch (thanks Andreas Jaeger!) got us on our way to automatically sending out our first Tweet. This will be fun, and I can stop being the unofficial Twitter status bot.

That evening we all piled into cars to head over to the nearby city of Heidelberg for dinner and drinks at Zum Weissen Schwanen (The White Swan). This ended up being our big team dinner. Lots of beers, great conversation and catching up on some discussions we didn’t have during the day. I had a really nice time and during our walk back to the car I got to see Heidelberg Castle light up at night as it looms over the city.

Wednesday kicked off once again at 9AM. For me this day was a lot of talking and chasing down loose ends while I had key people in the room. I also worked on some more Firehose stuff, this time working our way down the path to get logstash also sending data to Firehose. In the midst of this, we embarrassingly brought down our cluster due to a failure to quote strings in the config file, but we did get it back online, and more progress was made after everyone got home on Friday. Still, it was good to get part of the way there during the sprint, and we all learned about the amount of logging (in this case, not much!) our tooling for all this MQTT stuff was providing for us to debug. Never hurts to get a bit more familiar with logstash either.

The final evening was spent once again in Walldorf, this time at the restaurant just across the road from the one we went to on Monday. We weren’t there long enough to grow tired of the limited selection, so we all had a lovely time. My early morning to catch a train meant I stuck to a single beer and left shortly after 8PM with a colleague, but that was plenty late for me.


Photo courtesy of Chris Hoge (source)

Huge thanks to Marc and SAP for hosting us. The spaces worked out really well for everything we needed to get done. I also have to say I really enjoyed my time. I work with some amazing people, and come Thursday morning all I could think was “What a great week! But I better get home so I can get back to work.” Hey! This all was work! Also thanks to Jeremy Stanley, our fearless Infrastructure Project Team Leader who sat this sprint out and kept things going on the home front while we were all focused on the sprint.

A few more photos from our sprint here: https://www.flickr.com/photos/pleia2/albums/72157674174936355

by pleia2 at September 24, 2016 03:30 PM

September 20, 2016

Eric Hammond

Developing CloudStatus, an Alexa Skill to Query AWS Service Status -- an interview with Kira Hammond by Eric Hammond

Interview conducted in writing July-August 2016.

[Eric] Good morning, Kira. It is a pleasure to interview you today and to help you introduce your recently launched Alexa skill, “CloudStatus”. Can you provide a brief overview about what the skill does?

[Kira] Good morning, Papa! Thank you for inviting me.

CloudStatus allows users to check the service availability of any AWS region. On opening the skill, Alexa says which (if any) regions are experiencing service issues or were recently having problems. Then the user can inquire about the services in specific regions.

This skill was made at my dad’s request. He wanted to quickly see how AWS services were operating, without needing to open his laptop. As well as summarizing service issues for him, my dad thought CloudStatus would be a good opportunity for me to learn about retrieving and parsing web pages in Python.

All the data can be found in more detail at status.aws.amazon.com. But with CloudStatus, developers can hear AWS statuses with their Amazon Echo. Instead of scrolling through dozens of green checkmarks to find errors, users of CloudStatus listen to which services are having problems, as well as how many services are operating satisfactorily.

CloudStatus is intended for anyone who uses Amazon Web Services and wants to know about current (and recent) AWS problems. Eventually it might be expanded to talk about other clouds as well.

[Eric] Assuming I have an Amazon Echo, how do I install and use the CloudStatus Alexa skill?

[Kira] Just say “Alexa, enable CloudStatus skill”! Ask Alexa to “open CloudStatus” and she will give you a summary of regions with problems. An example of what she might say on the worst of days is:

“3 out of 11 AWS regions are experiencing service issues: Mumbai (ap-south-1), Tokyo (ap-northeast-1), Ireland (eu-west-1). 1 out of 11 AWS regions was having problems, but the issues have been resolved: Northern Virginia (us-east-1). The remaining 7 regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Or on most days:

“All 62 regional services in the 12 AWS regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Request any AWS region you are interested in, and Alexa will present you with current and recent service issues in that region.

Here’s the full recording of an example session: http://pub.alestic.com/alexa/cloudstatus/CloudStatus-Alexa-Skill-sample-20160908.mp3

[Eric] What technologies did you use to create the CloudStatus Alexa skill?

[Kira] I wrote CloudStatus using AWS Lambda, a service that manages servers and scaling for you. Developers need only pay for their servers when the code is called. AWS Lambda also displays metrics from Amazon CloudWatch.

Amazon CloudWatch gives statistics from the last couple weeks, such as the number of invocations, how long they took, and whether there were any errors. CloudWatch Logs is also a very useful service. It allows me to see all the errors and print() output from my code. Without it, I wouldn’t be able to debug my skill!

I used Amazon EC2 to build the Python modules necessary for my program. The modules (Requests and LXML) download and parse the AWS status page, so I can get the data I need. The Python packages and my code files are zipped and uploaded to AWS Lambda.
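
As a rough illustration of that download-and-parse step (an illustrative guess at the page structure, not the skill’s actual code), Requests and LXML fit together something like this:

import requests
from lxml import html

# Fetch the AWS status page and pull the visible text out of its tables.
# The XPath below is an illustrative guess at the page's HTML structure.
page = requests.get('https://status.aws.amazon.com/')
tree = html.fromstring(page.content)
for row in tree.xpath('//table//tr'):
    cells = [cell.text_content().strip() for cell in row.xpath('./td')]
    if cells:
        print(cells)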

Fun fact: My Lambda function is based in us-east-1. If AWS Lambda stops working in that region, you can’t use CloudStatus to check if Northern Virginia AWS Lambda is working! For that matter, CloudStatus will be completely dysfunctional.

[Eric] Why do you enjoy programming?

[Kira] Programming is so much fun and so rewarding! I enjoy making tools so I can be lazy.

Let’s rephrase that: Sometimes I’m repeatedly doing a non-programming activity—say, making a long list of equations for math practice. I think of two “random” numbers between one and a hundred (a human can’t actually come up with a random set of numbers) and pick an operation: addition, subtraction, multiplication, or division. After doing this several times, the activity begins to tire me. My brain starts to shut off and wants to do something more interesting. Then I realize that I’m doing the same thing over and over again. Hey! Why not make a program?

Computers can do so much in so little time. Unlike humans, they are capable of picking completely random items from a list. And they aren’t going to make mistakes. You can tell a computer to do the same thing hundreds of times, and it won’t be bored.

Finish the program, type in a command, and voila! Look at that page full of math problems. Plus, I can get a new one whenever I want, in just a couple seconds. Laziness in this case drives a person to put time and effort into ever-changing problem-solving, all so they don’t have to put time and effort into a dull, repetitive task. See http://threevirtues.com/.
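
A toy version of the kind of generator described here (purely illustrative, not Kira’s actual program) can be just a few lines of Python:

import random

# Print a page of practice equations with random operands and operators.
for _ in range(20):
    a = random.randint(1, 100)
    b = random.randint(1, 100)
    op = random.choice(['+', '-', '*', '/'])
    print('{} {} {} = ?'.format(a, op, b))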

But programming isn’t just for tools! I also enjoy making simple games and am learning about websites.

One downside to having computers do things for you: You can’t blame a computer for not doing what you told it to. It did do what you told it to; you just didn’t tell it to do what you thought you did.

Coding can be challenging (even frustrating) and it can be tempting to give up on a debug issue. But, oh, the thrill that comes after solving a difficult coding problem!

The problem-solving can be exciting even when a program is nowhere near finished. My second Alexa program wasn’t coming along that well when—finally!—I got her to say “One plus one is eleven.” and later “Three plus four is twelve.” Though it doesn’t seem that impressive, it showed me that I was getting somewhere and the next problem seemed reasonable.

[Eric] How did you get started programming with the Alexa Skills Kit (ASK)?

[Kira] My very first Alexa skill was based on an AWS Lambda blueprint called Color Expert (alexa-skills-kit-color-expert-python). A blueprint is a sample program that AWS programmers can copy and modify. In the sample skill, the user tells Alexa their favorite color and Alexa stores the color name. Then the user can ask Alexa what their favorite color is. I didn’t make many changes: maybe Alexa’s responses here and there, and I added the color “rainbow sparkles.”

I also made a skill called Calculator in which the user gets answers to simple equations.

Last year, I took a music history class. To help me study for the test, I created a trivia game from Reindeer Games, an Alexa Skills Kit template (see https://developer.amazon.com/public/community/post/TxDJWS16KUPVKO/New-Alexa-Skills-Kit-Template-Build-a-Trivia-Skill-in-under-an-Hour). That was a lot of fun and helped me to grow in my knowledge of how Alexa works behind the scenes.

[Eric] How does Alexa development differ from other programming you have done?

[Kira] At first Alexa was pretty overwhelming. It was so different from anything I’d ever done before, and there were lines and lines of unfamiliar code written by professional Amazon people.

I found the ASK blueprints and templates extremely helpful. Instead of just being a functional program, the code is commented so developers know why it’s there and are encouraged to play around with it.

Still, the pages of code can be scary. One thing new Alexa developers can try: Before modifying your blueprint, set up the skill and ask Alexa to run it. Everything she says from that point on is somewhere in your program! Find her response in the program and tweak it. The variable name is something like “speech_output” or “speechOutput.”

It’s a really cool experience making voice apps. You can make Alexa say ridiculous things in a serious voice! Because CloudStatus started with the Color Expert blueprint, my first successful edit ended with our Echo saying, “I now know your favorite color is Northern Virginia. You can ask me your favorite color by saying, ‘What’s my favorite color?’.”

Voice applications involve factors you never need to deal with in a text app. When the user is interacting through text, they can take as long as they want to read and respond. Speech must be concise so the listener understands the first time. Another challenge is that Alexa doesn’t necessarily know how to pronounce technical terms and foreign names, but the software is always improving.

One plus side to voice apps is not having to build your own language model. With text-based programs, I spend a considerable amount of time listing all the ways a person can answer “yes,” or request help. Luckily, with Alexa I don’t have to worry too much about how the user will phrase their sentences. Amazon already has an algorithm, and it’s constantly getting smarter! Hint: If you’re making your own skill, use some built-in Amazon intents, like AMAZON.YesIntent or AMAZON.HelpIntent.
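
For readers curious what using those built-in intents looks like in a Python Lambda handler, here is a minimal dispatch sketch in the general style of the ASK blueprints; the helper function names are hypothetical:

def lambda_handler(event, context):
    request = event['request']
    if request['type'] == 'LaunchRequest':
        return get_welcome_response()              # hypothetical helper
    if request['type'] == 'IntentRequest':
        intent_name = request['intent']['name']
        if intent_name == 'AMAZON.HelpIntent':     # built-in Amazon intent
            return get_help_response()             # hypothetical helper
        if intent_name == 'AMAZON.YesIntent':      # built-in Amazon intent
            return get_yes_response()              # hypothetical helper
        return get_region_status(request['intent'])  # skill-specific logic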

[Eric] What challenges did you encounter as you built the CloudStatus Alexa skill?

[Kira] At first, I edited the code directly in the Lambda console. Pretty soon though, I needed to import modules that weren’t built in to Python. Now I keep my code and modules in the same directory on a personal computer. That directory gets zipped and uploaded to Lambda, so the modules are right there sitting next to the code.

One challenge of mine has been wanting to fix and improve everything at once. Naturally, there is an error practically every time I upload my code for testing. Isn’t that what testing is for? But when I modify everything instead of improving bit by bit, the bugs are more difficult to sort out. I’m slowly learning from my dad to make small changes and update often. “Ship it!” he cries regularly.

During development, I grew tired of constantly opening my code, modifying it, zipping it and the modules, uploading it to Lambda, and waiting for the Lambda function to save. Eventually I wrote a separate Bash program that lets me type “edit-cloudstatus” into my shell. The program runs unit tests and opens my code files in the Atom editor. After that, it calls the command “fileschanged” to automatically test and zip all the code every time I edit something or add a Python module. That was exciting!

I’ve found that the Alexa speech-to-text conversions aren’t always what I think they will be. For example, if I tell CloudStatus I want to know about “Northern Virginia,” it sends my code “northern Virginia” (lowercase then capitalized), whereas saying “Northern California” turns into “northern california” (all lowercase). To at least fix the capitalization inconsistencies, my dad suggested lowercasing the input and mapping it to the standardized AWS region code as soon as possible.
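
A sketch of that normalization step (with an abbreviated, illustrative mapping rather than the skill’s real table) might look like this:

# Lowercase the spoken region name immediately, then map it to the
# canonical AWS region code.
REGION_CODES = {
    'northern virginia': 'us-east-1',
    'northern california': 'us-west-1',
    'ireland': 'eu-west-1',
    'tokyo': 'ap-northeast-1',
}

def to_region_code(spoken_name):
    return REGION_CODES.get(spoken_name.strip().lower())

print(to_region_code('Northern Virginia'))  # -> us-east-1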

[Eric] What Alexa skills do you plan on creating in the future?

[Kira] I will probably continue to work on CloudStatus for a while. There’s always something to improve, a feature to add, or something to learn about—right now it’s Speech Synthesis Markup Language (SSML). I don’t think it’s possible to finish a program for good!

My brother and I also want to learn about controlling our lights and thermostat with Alexa. Every time my family leaves the house, we say basically the same thing: “Alexa, turn off all the lights. Alexa, turn the kitchen light to twenty percent. Alexa, tell the thermostat we’re leaving.” I know it’s only three sentences, but wouldn’t it be easier to just say: “Alexa, start Leaving Home” or something like that? If I learned to control the lights, I could also make them flash and turn different colors, which would be super fun. :)

In August a new ASK template was released for decision tree skills. I want to make some sort of dichotomous key with that. https://developer.amazon.com/public/community/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill

[Eric] Do you have any advice for others who want to publish an Alexa skill?

[Kira]

  • Before submitting your skill for certification, make sure you read through the submission checklist. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-submission-checklist#submission-checklist

  • Remember to check your skill’s home cards often. They are displayed in the Alexa App. Sometimes the text that Alexa pronounces should be different from the reader-friendly card content. For example, in CloudStatus, “N. Virginia (us-east-1)” might be easy to read, but Alexa is likely to pronounce it “En Virginia, Us [as in ‘we’] East 1.” I have to tell Alexa to say “northern virginia, u.s. east 1,” while leaving the card readable for humans.

  • Since readers can process text at their own pace, the home card may display more details than Alexa speaks, if necessary.

  • If you don’t want a card to accompany a specific response, remove the ‘card’ item from your response dict. Look for the function build_speechlet_response() or buildSpeechletResponse(); there is a sketch of what that function can look like after this list.

  • Never point your live/public skill at the $LATEST version of your code. The $LATEST version is for you to edit and test your code, and it’s where you catch errors.

  • If the skill raises errors frequently, don’t be intimidated! It’s part of the process of coding. To find out exactly what the problem is, read the “log streams” for your Lambda function. To print debug information to the logs, print() the information you want (Python) or use a console.log() statement (JavaScript/Node.js).

  • It helps me to keep a list of phrases to try, including words that the skill won’t understand. Make sure Alexa doesn’t raise an error and exit the skill, no matter what nonsense the user says.

  • Many great tips for designing voice interactions are on the ASK blog. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-voice-design-best-practices

  • Have fun!
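
For reference, here is a simplified Python sketch of the response builder mentioned in the list above, modeled on the shape used in the ASK blueprints (not Kira’s exact code):

def build_speechlet_response(title, output, reprompt_text, should_end_session):
    response = {
        'outputSpeech': {'type': 'PlainText', 'text': output},
        'card': {'type': 'Simple', 'title': title, 'content': output},
        'reprompt': {'outputSpeech': {'type': 'PlainText', 'text': reprompt_text}},
        'shouldEndSession': should_end_session,
    }
    # To suppress the home card for a specific response, drop the 'card' item:
    # del response['card']
    return response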

In The News

Amazon had early access to this interview and to Kira, and wrote an article about her in the Alexa Blog:

14-Year-Old Girl Creates CloudStatus Alexa Skill That Benefits AWS Developers

which was then picked up by VentureBeat:

A 14-year-old built an Alexa skill for checking the status of AWS

Original article and comments: https://alestic.com/2016/09/alexa-skill-aws-cloudstatus/

September 20, 2016 04:15 AM

Akkana Peck

Frogs on the Rio, and Other Amusements

Saturday, a friend led a group hike for the nature center from the Caja del Rio down to the Rio Grande.

The Caja (literally "box", referring to the depth of White Rock Canyon) is an area of national forest land west of Santa Fe, just across the river from Bandelier and White Rock. Getting there involves a lot of driving: first to Santa Fe, then out along increasingly dicey dirt roads until the road looks too daunting and it's time to get out and walk.

[Dave climbs the Frijoles Overlook trail] From where we stopped, it was only about a six mile hike, but the climb out is about 1100 feet and the day was unexpectedly hot and sunny (a mixed blessing: if it had been rainy, our Rav4 might have gotten stuck in mud on the way out). So it was a notable hike. But well worth it: the views of Frijoles Canyon (in Bandelier) were spectacular. We could see the lower Bandelier Falls, which I've never seen before, since Bandelier's Falls Trail washed out below the upper falls the summer before we moved here. Dave was convinced he could see the upper falls too, but no one else was convinced, though we could definitely see the red wall of the maar volcano in the canyon just below the upper falls.

[Canyon Tree Frog on the Rio Grande] We had lunch in a little grassy thicket by the Rio Grande, and we even saw a few little frogs, well camouflaged against the dirt: you could even see how their darker brown spots imitated the pebbles in the sand, and we wouldn't have had a chance of spotting them if they hadn't hopped. I believe these were canyon treefrogs (Hyla arenicolor). It's always nice to see frogs -- they're not as common as they used to be. We've heard canyon treefrogs at home a few times on rainy evenings: they make a loud, strange ratcheting noise which I managed to record on my digital camera. Of course, at noon on the Rio the frogs weren't making any noise: just hanging around looking cute.

[Chick Keller shows a burdock leaf] Sunday we drove around the Pojoaque Valley following their art tour, then after coming home I worked on setting up a new sandblaster to help with making my own art. The hardest and least fun part of welded art is cleaning the metal of rust and paint, so it's exciting to finally have a sandblaster to help with odd-shaped pieces like chains.

Then tonight was a flower walk in Pajarito Canyon, which is bursting at the seams with flowers, especially purple aster, goldeneye, Hooker's evening primrose and bahia. Now I'll sign off so I can catalog my flower photos before I forget what's what.

September 20, 2016 02:17 AM

September 19, 2016

Jono Bacon

Looking For Talent For ClusterHQ

clusterhq_logo

Recently I signed ClusterHQ as a client. If you are unfamiliar with them, they provide a neat technology for managing data as part of the overall lifecycle of an application. You can learn more about them here.

I will be consulting with ClusterHQ to help them (a) build their community strategy, (b) find a great candidate as Senior Developer Evangelist, and (c) mentor that person in their role to be successful.

If you are looking for a new career, this could be a good opportunity. ClusterHQ are doing some interesting work, and if this role is a good fit for you, I will also be there to help you work within a crisply defined strategy and be successful in the execution. Think of it as having a friend on the inside. 🙂

You can learn more in the job description, but you should have these skills:

  • You are a deep full-stack cloud technologist. You have a track record of building distributed applications end-to-end.
  • You either have a Bachelor’s in Computer Science or are self-motivated and self-taught such that you don’t need one.
  • You are passionate about containers, data management, and building stateful applications in modern clusters.
  • You have a history of leadership and service in developer and DevOps communities, and you have a passion for making applications work.
  • You have expertise in lifecycle management of data.
  • You understand how developers and IT organizations consume cloud technologies, and are able to influence enterprise technology adoption outcomes based on that understanding.
  • You have great technical writing skills demonstrated via documentation, blog posts and other written work.
  • You are a social butterfly. You like meeting new people on and offline.
  • You are a great public speaker and are sought after for your expertise and presentation style.
  • You don’t mind charging your laptop and phone in airport lounges, so you are willing and eager to travel anywhere our developer communities live, and to stay productive and professional on the road.
  • You like your weekend and evening time to focus on your outside-of-work passions, but don’t mind working irregular hours and weekends occasionally (as the job demands) to support hackathons, conferences, user groups, and other developer events.

ClusterHQ are primarily looking for help with:

  • Creating high-quality technical content for publication on our blog and other channels to show developers how to implement specific stateful container management technologies.
  • Spreading the word about container data services by speaking and sharing your expertise at relevant user groups and conferences.
  • Evangelizing stateful container management and ClusterHQ technologies to the Docker Swarm, Kubernetes, and Mesosphere communities, as well as to DevOps/IT organizations chartered with operational management of stateful containers.
  • Promoting the needs of developers and users to the ClusterHQ product & engineering team, so that we build the right products in the right way.
  • Supporting developers building containerized applications wherever they are, on forums, social media, and everywhere in between.

Pretty neat opportunity.

Interested?

If you are interested in this role, there are a few options for next steps:

  1. You can apply directly by clicking here.
  2. Alternatively, if I know you, I would invite you to confidentially share your interest in this role by filling in my form here. This way I can get a good sense of who is interested and also recommend people I personally know and can vouch for. I will then reach out to those of you for whom this seems a good potential fit and play a supporting role in brokering the conversation.

By the way, there are going to be a number of these kinds of opportunities shared here on my blog. So, be sure to subscribe to my posts if you want to keep up to date with the latest opportunities.

The post Looking For Talent For ClusterHQ appeared first on Jono Bacon.

by Jono Bacon at September 19, 2016 07:25 PM

September 17, 2016

Elizabeth Krumbach

Kubrick, Typeface to Interface and the Zoo

I’ve been home for almost three weeks, and now I’m back in an airport. For almost two weeks of that MJ has been on a business trip overseas and I’ve kept myself busy with work, the book release and meeting up with friends and acquaintances. The incredibly ambitious plans I had for this time at home weren’t fully realized, but with everything we have going on I’m kind of glad I was able to spend some time at home.

Mornings have changed some for me during these three weeks. Coming off of trips from Mumbai and Philadelphia in August, my sleep schedule was heavily shifted, and I decided to take advantage of that by going out running in the mornings. I’d been meaning to get back into it, and my doctor has gotten a bit more insistent of late based on some results from blood work, and she’s right. Instead of doing proper C25K this time I’ve just been doing interval run/walks. I walk about a half mile, do a pretty even run/walk for two miles, and then a half mile back. It’s not a lot, but I’ll up the difficulty level as the run/walk I have going starts to feel easier. I have been going out 4-5 days a week, and so far it feels great and seems sustainable. Fingers crossed for keeping this up during my next few weeks of travel.

With MJ out of town I’ve made plans with a bunch of local friends. Meals with my friends James, Emma, Sabice and Amanda last week were all a lot of fun and reversed my at-home trend of being a hermit. Last weekend I made my way over to Stanley Kubrick: The Exhibition. It opened in June and I’ve been interested in going, but sorting out timing and who to go with has been impossible. I finally just went by myself last Saturday after having some sushi for lunch nearby.

I wouldn’t say I’m a huge Kubrick fan, but I have enjoyed a lot of his work. The exhibit does a really exceptional job showcasing his work, with bright walls throughout and really nicely laid out scripts, cameras, costumes and props from the films. I had just recently seen Eyes Wide Shut again, but the exhibit made me want to go back and watch the rest, and ones I haven’t seen (Lolita, Spartacus). I particularly enjoyed the bits about my favorite movies of his, 2001: A Space Odyssey and Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb.

Some photos from the exhibition here: https://www.flickr.com/photos/pleia2/albums/72157670417890794

I did get out to a movie recently with my friend mct. We saw Complete Unknown which was OK, but not as good as I had hoped. Dinner at a nearby brewery rounded off the evening nicely.

With the whirlwind week of my book release, preparations for the OpenStack QA/Infrastructure sprint (which I’m on my way to now) and other things, I called it a day early on Thursday and met up with my friend Atul for some down time to visit the San Francisco Zoo. He’s been in town for several weeks doing a rotation for work, and we kept missing each other between other plans and my travel schedule. We got to the zoo in time to spend about 90 minutes there before they closed, making it around to most of the exhibits. We got a picture together by the giraffes, and they’ve opened exhibits for the Mexican wolves and sifaka lemurs since I last visited! It was fun to finally see both of those. I have some more zoo visits in my future too, hoping to visit the Philadelphia Zoo when I’m there next weekend and then the Columbus Zoo after the Ohio LinuxFest in early October.

More zoo pictures here: https://www.flickr.com/photos/pleia2/albums/72157670651372724

Thursday night I met up with my friend Steve to go to the San Francisco Museum of Modern Art to see the Typeface to Interface exhibit. This museum is just a block from where I live, and it recently reopened after a few years of massive renovations. They’re open until 9PM on Thursdays and we got there around 7:30 to quite the crowd of people, so these later hours seem to be really working for them. Unfortunately I’ve never been much of a fan of modern art. This exhibit interested me though, and I’m really glad I went. It walks you through the beginning of bringing typeface work into the digital realm, presenting you with the IBM Selectric, which had a replaceable typing-element ball for different fonts. You see a variety of digital-inspired posters and other art, as well as the New York Transit Authority Graphics Standards Manual. It was fun going with Steve too, since his UX expertise meant that he actually knew a thing or two about these things outside of the geeky computer context I was approaching it with. I think they could have done a bit more to tie the exhibit together, but it’s probably the best one I’ve seen there.

We spent the rest of the evening before closing walking through several of the other galleries in the museum. Nothing really grabbed my interest, and for a lot of it I found it difficult to understand why it was in a museum. I do understand the aesthetically pleasing nature of much abstract art, but when it starts being really simple (a panel of solid magenta) or really eclectic I struggle with understanding the appeal. Dinner was great though; both of us are east coasters by origin, and we went to my favorite fish place in SOMA for east coast oysters, mussels, lobster rolls and strawberry shortcake.

Yesterday afternoon MJ got home from his work trip. In the midst of packing and laundry we were able to catch up and spend some precious time together, including a wonderful dinner at Fogo de Chão. Now I’m off to Germany for work. I had time to write this post because the first flight I had was delayed by an astonishing 6 hours, sailing past catching my connection. I’ve now been rebooked on a two stop itinerary that’s getting me in 5 hours later than I had expected. Sadly, this means I’m missing most of the tourist day in Heidelberg I had planned with colleagues on Sunday, but I expect we’ll still be able to get out for drinks in the evening before work on Monday morning.

by pleia2 at September 17, 2016 04:53 PM

September 13, 2016

Elizabeth Krumbach

Common OpenStack Deployments released!

Back in the fall of 2014 I signed a contract with Prentice Hall that began my work on my second book, Common OpenStack Deployments. This was the first book I wrote from scratch and the first where I was the lead author (the first books I was a co-author on were the 8th and 9th editions of The Official Ubuntu Book). That contract started me on a nearly two-year journey to write this book about OpenStack, which I talk a lot about here: How the book came to be.

Along the way I recruited my excellent contributing author Matt Fischer, who, in addition to his Puppet and OpenStack expertise, shares a history with me in the Ubuntu community and Mystery Science Theater 3000 fandom (he had a letter read on the show once!). In short, he’s pretty awesome.

A lot of work and a lot of people went into making this book a reality, so I’m excited and happy to announce that the book has been officially released as of last week, and yesterday I got my first copy direct from the printer!

As I was putting the finishing touches on it in the spring, the dedication came up. I decided to dedicate the book to the OpenStack community, with a special nod to the Puppet OpenStack team.

Text:

This book is dedicated to the OpenStack community. Of the community, I’d also like to specifically call out the help and support received from the Puppet OpenStack Team, whose work directly laid the foundation for the deployment scenarios in this book.

Huge thanks to everyone who participated in making this book a reality, whether they diligently tested all of our Puppet manifests, lent their OpenStack or systems administration experience to reviewing, or gave me support as I worked my way through the tough parts of the book (my husband was particularly supportive during some of the really grim moments). This is a really major thing for me and I couldn’t have done it without all of you.

I’ll be continuing to write about updates to the book over on the blog that lives on the book’s website: DeploymentsBook.com (RSS). You can also follow updates on Twitter via @deploymentsbook, if that’s your thing.

If you’re interested in getting your hands on a copy, it’s sold by all the usual book sellers and available on Safari. The publisher’s website also routinely has sales and deals, especially if you buy the paper and digital copies together, so keep an eye out. I’ll also be speaking at conferences over the next few months and will be giving out signed copies. Check out my current speaking engagements here to see where I’ll be and I will have a few copies at the upcoming OpenStack Summit in Barcelona.

by pleia2 at September 13, 2016 06:53 PM

September 12, 2016

Akkana Peck

Art on display at the Bandelier Visitor Center

As part of the advertising for next month's Los Alamos Artists Studio Tour (October 15 & 16), the Bandelier Visitor Center in White Rock has a display case set up, and I have two pieces in it.

[my art on display at Bandelier]

The Velociraptor on the left and the hummingbird at right in front of the sweater are mine. (Sorry about the reflections in the photo -- the light in the Visitor Center is tricky.)

The turtle at front center is my mentor David Trujillo's, and I'm pretty sure the rabbit at far left is from Richard Swenson.

The lemurs just right of center are some of Heather Ward's fabulous scratchboard work. You may think of scratchboard as a kids' toy (I know I used to), but Heather turns it into an amazing medium for wildlife art. I'm lucky enough to get to share her studio for the art tour: we didn't have a critical mass of artists in White Rock, just two of us, so we're borrowing space in Los Alamos for the tour.

September 12, 2016 04:38 PM

September 09, 2016

iheartubuntu

How to Read RSS on Ubuntu with FeedReader


One of my favorite apps on Linux is FeedReader. Long ago when I used Google products they had a great RSS reader built in, but they phased it out and got rid of it. If you are a fan of reading blogs but don't want all of the emails for new articles piling into your inbox, FeedReader is a very nice app for organizing and handling all of your RSS news.

Let's get right to it and install it, which is very easy with a PPA in the terminal. Cut and paste each line into a terminal and press Enter...

sudo apt-add-repository ppa:eviltwin1/feedreader-stable

sudo apt-get update && sudo apt-get install feedreader

The only caveat is that this is not a true RSS manager but more of an RSS client. You will need to create an external account on one of the (usually free) RSS web services like InoReader, Feedly, or Tiny Tiny RSS. The beauty of this is that your blog feeds will all be synced up.

FeedReader has some nice features such as sharing an article to your Pocket, Instapaper or Readability accounts, as well as desktop notifications of new articles, keyboard shortcuts, and tagging of articles. As seen in my FeedReader image above, subscribed blogs are listed in one area on the left side; however, if you have a lot of blogs you read, you can group them into categories. Linux blogs, home improvement blogs, science blogs, whatever your desires.

Another bonus is you can even add YouTube channels to your feed! For example, go to a channel's YouTube page, open its "Videos" page, and that's the link you would plug into FeedReader. Here's an example of a YouTube link...

https://www.youtube.com/user/StarTrekContinues/videos

(that's a fan-created Star Trek series in the spirit of the original Star Trek TV show)

by iheartubuntu (noreply@blogger.com) at September 09, 2016 07:30 PM

New Version of Elementary OS Has Landed

The newest version of elementary OS 0.4, dubbed Loki, has landed today and it is impressive. More than a year in development, elementary OS ships with a carefully curated selection of apps that cater to every day needs so you can spend more time using your computer and less time cleaning up bloatware.

One of the major new features of elementary OS is their new open source app store, the AppCenter. It brings handcrafted apps and extensions directly to your desktop. Quickly discover new apps and easily update the ones you already have. No license keys, no subscriptions, no trial periods.

Loki is built on the Ubuntu 16.04 LTS repositories, which means it comes with Gtk 3.18, Vala 0.32, and Linux 4.4, as well as a multitude of other updated libraries. Loki also replaces Ubuntu's Ayatana indicators with a brand new set of wingpanel indicators for a better experience.

From their website...

"elementary OS is Open Source... Our code is available for review, scrutiny, modification, and redistribution by anyone. We dont make advertising deals and we don't collect sensitive personal data. Our only income is directly from our users. elementary OS is Safe & Secure. We're built on Linux: the same software powering the U.S Department of Defense, the Bank of China, and more"

If you want an impressive, beautiful and snappy linux desktop based on Ubuntu, give elementary OS a try.

by iheartubuntu (noreply@blogger.com) at September 09, 2016 06:06 PM

September 06, 2016

Elizabeth Krumbach

Labor Day Weekend in SF

Labor Day weekend was a busy one for us this year, as the last weekend MJ and I will be properly spending together for several weeks. Tomorrow he flies off to India for work. He’ll get home mid-day the following Friday, and Saturday morning I’ll be off to Germany. When I get home he’ll be in New York, directly from there we’ll meet at the end of September in Philadelphia for a couple events we have planned with family and friends. October will only be marginally better. But I already have three trips planned and he has at least one.

Saturday morning we went to services and then lunch in Japantown. We had a bit of a medical adventure in the afternoon (all is well) before going off to dinner. MJ started a new job at the beginning of August while I was in Mumbai and we hadn’t had a chance to properly celebrate. After doing some searching for a nice place to go where getting reservations wasn’t too much of a hassle, we ended up at Luce. They have a Michelin star, reservations were easy to get, they offer a tasting menu Tuesday through Saturday and they’re located just a few blocks from home.

I really enjoyed the ambiance of the restaurant, in spite of it being just off the lobby of the InterContinental. It was spacious and cool, and not very busy for a Saturday night, so we had privacy to talk and didn’t feel crowded. The food was good, with the portions being just right for a tasting menu. There were a lot of fish dishes (menu here), which suited me just fine; both the salmon and the halibut were amazing. I skipped the foie gras, but I did have the duck course, which I’m not usually that keen on, but it was not as tender and rare as duck is usually served, so I was OK with it. The desserts were light, cold and fruity, making for a really nice ending to the meal that wasn’t at all heavy. I also opted for the wine pairing, which MJ dipped into throughout the meal. There was only one red among the selection, with the rest being a series of dry and sweet whites.


Selections from the tasting menu at Luce

Sunday we had a long lunch over at Waterbar on the Embarcadero. We’d been to the restaurant next door, EPIC Steak, many times but this was our first time snagging reservations over at Waterbar. They have an amazing oyster list and the views of the Bay Bridge there are stunning, especially on a day as beautiful as Sunday was.

After lunch we went to the Exploratorium. We’d been there a few times before, so this visit was specifically to see their exhibit Strandbeest: The Dream Machines of Theo Jansen.

I knew about the Strandbeests after seeing a video about them online. These mechanical animals are created by a Dutch artist and propel themselves along beaches. He makes them out of a type of plastic piping and has come up with a whole evolution through the various animals he’s made since beginning this work in 1990. They all have names, and the exhibit walked through the various iterations, some of which were on display. It talked about the “nerves” and “muscles” that the Strandbeests have, and how some are propelled by wind but many also have mechanisms for limited self-propulsion. We also sat through a 32-minute video about them.

What was most striking about the exhibit was how strange it all was. This artist devoted a nice chunk of his life to this work and it’s kind of an unusual thing, but the Strandbeests are amazing. They are mechanical but appear so lifelike when they move, in spite of being so very obviously built from plastic as they wander along beaches. More photos from the exhibit here: https://www.flickr.com/photos/pleia2/albums/72157673332520306

Monday didn’t have so many adventures. As we prepare for all this overlapping travel, we had a lot to get squared away at home and in preparation for our trips (like booking one of them! And packing!). Though we did have time to sneak out to a nice brunch together at the nearby Red Dog Restaurant.

by pleia2 at September 06, 2016 05:14 AM

Akkana Peck

The Taos Earthships (and a lovely sunset)

We drove up to Taos today to see the Earthships.

[Taos Earthships] Earthships are sustainable, completely off-the-grid houses built of adobe and recycled materials. That was pretty much all I knew about them, except that they were weird looking; I'd driven by on the highway a few times (they're on highway 64 just west of the beautiful Rio Grande Gorge Bridge) but never stopped and paid the $7 admission for the self-guided tour.

[Earthship construction] Seeing them up close was fun. The walls are made of old tires packed with dirt, then covered with adobe. The result is quite strong, though like all adobe structures it requires regular maintenance if you don't want it to melt away. For non-load-bearing walls, they pack adobe around old recycled bottles or cans.

The houses have a passive solar design, with big windows along one side that make a greenhouse for growing food and freshening the air, as well as collecting warmth in cold weather. Solar panels provide power -- supposedly along with windmills, but I didn't see any windmills in operation, and the ones they showed in photos looked too tiny to offer much help. To help make the most of the solar power, the house is wired for DC, and all the lighting, water pumps and so forth run off low voltage DC. There's even a special DC refrigerator. They do include an AC inverter for appliances like televisions and computer equipment that can't run directly off DC.

Water is supposedly self-sustaining too, though I don't see how that could work in drought years. As long as there's enough rainfall, water runs off the roof into a cistern and is used for drinking, bathing etc., after which it's run through filters and then pumped into the greenhouse. Waste water from the greenhouse is used for flushing toilets, after which it finally goes to the septic tank.

All very cool. We're in a house now that makes us very happy (and has excellent passive solar, though we do plan to add solar panels and a greywater system some day) but if I was building a house, I'd be all over this.

We also discovered an excellent way to get there without getting stuck in traffic-clogged Taos (it's a lovely town, but you really don't want to go near there on a holiday, or a weekend ... or any other time when people might be visiting). There's a road from Pilar that crosses the Rio Grande and then ascends to the mesa high above the river, continuing up to highway 64 right near the earthships. We'd been a little way up that road once, on a petroglyph-viewing hike, but never all the way through. The map said it was dirt from the Rio all the way up to 64, and we were in the Corolla, since the Rav4's battery started misbehaving a few days ago and we haven't replaced it yet.

So we were hesitant. But the nice folks at the Rio Grande Gorge visitor center at Pilar assured us that the dirt section ended at the top of the mesa and any car could make it ("it gets bumpy -- a New Mexico massage! You'll get to the top very relaxed"). They were right: the Corolla made it with no difficulty and it was a much faster route than going through Taos.

[Nice sunset clouds in White Rock] We got home just in time for the rouladen I'd left cooking in the crockpot, and then finished dinner just in time for a great sunset sky.

A few more photos: Earthships (and a great sunset).

September 06, 2016 03:05 AM

September 01, 2016

Jono Bacon

The Psychology of Report/Issue Templates

This week HackerOne, who I have been working with recently, landed Report Templates.

In a nutshell, a report template is a configurable chunk of text that can be pre-loaded into the vulnerability submission form instead of a blank white box. For example:

[Screenshot: a report template pre-loaded into the vulnerability submission form]
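In rough outline, a report template might read something like the sketch below; these sections are just an illustration, not HackerOne's defaults:

Summary:
A one-paragraph description of the vulnerability and its impact.

Steps to reproduce:
1.
2.
3.

Affected URL or endpoint:

Supporting material:
Attach logs, screenshots, or proof-of-concept code.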

The goal of a report template is two-fold. Firstly, it helps security teams to think about what specific pieces of information they require in a vulnerability report. Secondly, it provides a useful way of ensuring a hacker provides all of these different pieces of information when they submit a report.

While a simple feature, this should improve the overall quality of reports submitted to HackerOne customers, improve the success of hackers by ensuring their vulnerability reports match the needs of their security teams, and result in better quality engagement on the platform.

Similar kinds of templates can be seen in platforms such as Discourse, GitLab, GitHub, and elsewhere. Simple as the feature is, there are some subtle underlying psychological components that I thought could be interesting to share.

The Psychology Behind the Template

When I started working with HackerOne the first piece of work I did was to (a) understand the needs/concerns of hackers and customers and then, based on this, (b) perform a rigorous assessment of the typical community workflow to ensure that it mapped to these requirements. My view is simple: if you don’t have a simple and effective workflow, it doesn’t matter how much outreach you do: people will get confused and give up.

This view fits into a wider narrative that has accompanied my work over the years that at the core of great community leadership is intentionally influencing the behavior we want to see in our community participants.

When I started talking to the HackerOne team about Report Templates (an idea that had already been bounced around), building this intentional influence was my core strategic goal. Customers on HackerOne clearly want high quality reports. Low quality reports suck up their team’s time, compromise the value of the platform, and divert resources from other areas. Similarly, hackers should be set up for success. A core metric for a hacker is Signal, and many of the private programs that operate on HackerOne set a signal threshold.

In my mind, Report Templates were a logical area to focus on for a few reasons.

Firstly, as with almost everything in life, the root of most problems is misaligned expectations. Think about spats with your boss/spouse, frustrations with your cable company, and other annoyances as examples of this.

A template provides an explicit tool for the security team to state exactly what they need. This reduces ambiguity, which in turn reduces uncertainty, which has proven to be a psychological blocker and is particularly dangerous in communities.

There has also been some interesting research into temptation, and one of the findings is that people often make irrational choices when they are in a state of temptation or arousal. Thus, when people are in a state of temptation, it is critical for us to build systems that can responsibly deliver positive results for them. Otherwise, people feel tempted, initiate an action, do not receive the rewards they expected (e.g. validation/money in this case), and then feel discomfort at the outcome.

Every platform plays to this sense of temptation. Whether it is the temptation to buy something on Amazon, to download and try a new version of Ubuntu, to respond to that annoying political post from your Aunt on Facebook, or to submit a vulnerability report in HackerOne, we need to make sure the results of the action, at this most delicate moment, are indeed positive.

Report Templates (or Issue/Post Templates in other platforms) play this important role. They are triggered at the moment the user decides to act. If we simply give the user a blank white box to type into, we run the risk of that temptation not resulting in said suitable reward. Thus, the Report Template greases the wheels, particularly within the expectations-setting piece I outlined above.

Finally, and as it relates to temptation, I have become a strong believer in influencing behavioral patterns at the point of action. In other words, when someone decides to do something, it is better to tune that moment to influence the behavior you want rather than try to prime people to make a sensible decision before they do so.

In the Report Templates example, we could have alternatively written oodles and oodles of documentation, provided training, and delivered webinars/seminars and other content to encourage hackers to write great reports. There is, though, no guarantee that this would have influenced their behavior. Because a Report Template is presented at the point of action (and temptation), we can influence the right kind of behavior at the right time. This generally delivers better results.

This is why I love what I do for a living. There are so many fascinating underlying attributes, patterns, and factors that we can learn from and harness. When we do it well, we create rewarding, successful, impactful communities. While the Report Templates feature may be a small piece of this jigsaw, it, combined with similar efforts, can join together to create a pretty rewarding picture.

The post The Psychology of Report/Issue Templates appeared first on Jono Bacon.

by Jono Bacon at September 01, 2016 08:50 PM

August 30, 2016

Elizabeth Krumbach

Layover “at” Heathrow

While on my way back from Mumbai several weeks ago I ended up with a seven hour layover at Heathrow. I had planned on just camping out in an airport lounge for that time and catching up on email, open source stuff, work stuff. Then my friend Laura reached out to see if I wanted her to pick me up at the airport so we could escape for a few hours and grab breakfast.

I’d never left the airport on a layover like this, but the chance to catch up with a good friend, and being able to take advantage of not needing to plan for a visa to enter, were too good to pass up. Passing through immigration was fun, having to explain that I’d only be out of the airport for five hours. And so, with seven hours between flights, I properly entered England for the second time in my life.

We ended up at Wetherspoon’s pub in Woking for breakfast. Passing on the pork-heavy English breakfast, I had a lovely smoked salmon benedict and some tea.

The weather that morning was beautiful, so after breakfast we wandered around town and stopped in a shop or two. We got some more tea (this time with cake!) and generally caught up. It was really nice to chat about our latest career stuff, geek out about open source and fill each other in on our latest life plans.

Definitely the best layover I’ve ever had; I’m super glad I didn’t just stay in the airport lounge! I’ll remind myself of this the next time the opportunity arises.

A handful of other photos here: https://www.flickr.com/photos/pleia2/albums/72157671052472560

by pleia2 at August 30, 2016 03:52 PM

Jono Bacon

My Reddit AMA Is Live – Join Us

Just a quick note that my Reddit Ask Me Anything discussion is live. Be sure to head over to this link and get your questions in!

All and any questions are absolutely welcome!

The post My Reddit AMA Is Live – Join Us appeared first on Jono Bacon.

by Jono Bacon at August 30, 2016 03:43 PM

Linux, Linus, Bradley, and Open Source Protection

Last week a bun-fight kicked off on the Linux kernel mailing list that led to some interesting questions about how and when we protect open source projects from bad actors. It also shone a light on some interesting community dynamics.

The touchpaper was lit when Bradley Kuhn, president of the Software Freedom Conservancy (an organization that provides legal and administrative services for free software and open source projects) posted a reply to Greg KH on the Linux kernel mailing list:

I observe now that the last 10 years brought something that never occurred before with any other copylefted code. Specifically, with Linux, we find both major and minor industry players determined to violate the GPL, on purpose, and refuse to comply, and tell us to our faces: “you think that we have to follow the GPL? Ok, then take us to Court. We won’t comply otherwise.” (None of the companies in your historical examples ever did this, Greg.) And, the decision to take that position is wholly in the hands of the violators, not the enforcers.

He went on to say:

In response, we have two options: we can all decide to give up on the GPL, or we can enforce it in Courts.

This rather ruffled the feathers of Linus, who feels that lawyers are more a part of the problem than the solution:

The fact is, the people who have created open source and made it a success have been the developers doing work – and the companies that we could get involved by showing that we are not all insane crazy people like the FSF. The people who have destroyed projects have been lawyers that claimed to be out to “save” those projects.

What followed has been a long and quite interesting discussion that is still rumbling on.

In a nutshell, this rather heated (and at times unnecessarily personal) debate has focused on when is the right time to defend the rights granted by the GPL. Bradley is of the view that these rights should be intrinsically defended, as they are as important (if not more important) than the code. Linus is of the view that, given the practicalities of the software industry, sending in the lawyers can potentially have an even more damaging effect, as companies will tense up and choose to stay away.

Ethics and Pragmatism

Now, I have no dog in this race. I am a financial supporter of the Software Freedom Conservancy and the Free Software Foundation. I have an active working relationship with the Linux Foundation and I am friends with all the main players in this discussion: Linus, Greg, Bradley, Karen, Matthew, and Jeremy. I am not on anyone’s “side” here and I see value in the different perspectives brought to the table.

With that said, the core of this debate is the balance of ethics and pragmatism, something which has existed in open source and free software for a long time.

Linus and Bradley are good examples of either side of the aisle.

Linus has always been a pragmatic guy, and his stewardship of Linux has demonstrated that. Linus prioritizes the value of the GPL for practical software engineering and community-building purposes more so than for wider ideological free software ambitions. With Linus, practicality and tangible output come first.

Bradley is different. For Bradley, software freedom is first and foremost a moral issue. Bradley’s talents and interests lie with the legal and copyright aspects more so than software engineering, so naturally his work has focused on licensing, copyright, and protection.

Now, this is not to suggest Linus doesn’t have ethics or that Bradley isn’t pragmatic, but their priorities are drawn in different areas. This results in differences in expectations, tone, and approach, with this debate being a good example.

Linus and Bradley are not alone here. For a long time there have been differences between organizations such as the Linux Foundation, the Free Software Foundation, and the Open Source Initiative. Again, each of these organizations draws its ethical and pragmatic priorities differently, and they attract supporters who commonly share those similar lines in the sand.

I am a supporter of all of these organizations. I believe the Linux Foundation has had an unbelievably positive effect in normalizing and bridging the open source culture, methodology, and mindset to the wider business world. The Open Source Initiative have done wonderful work as stewards of licenses that thousands of organizations depend on. The Free Software Foundation has laid out a core set of principles around software freedom that are worthy for us all to strive for.

As such, I often take the view that everyone is bringing value, but everyone is also somewhat blinded by their own priorities and biases.

My Conclusion

Unsurprisingly, I see value in both sides of the debate.

Linus rightly raises the practicalities of the software industry. This is an industry that is driven by a wide range of different forcing functions and pressures: politics, competition, supply/demand, historical precedent, cultural norms, and more. Many of these companies do great things, and some do shitty things. That is human beings for you.

As such, and like any industry, nothing is black and white. This isn’t as simple as Company A licenses code under the GPL and if they don’t meet the expectations of the license they should face legal consequences until they do. Each company has a delicate mix of these driving forces and Linus is absolutely right that a legal recourse could potentially have the inverse effect of reducing participation rather than improving it.

On the other hand, the GPL (or another open source license) does have to have meaning. As we have seen in countless societies in history, if rules are not enforced, humans will naturally try to break the rules. This always starts as small infractions but then ultimately grows more and more as the waters are tested. So, Bradley raises an important point, and while we should take a realistic and pragmatic approach to the norms of the industry, we do need people who are willing and able to enforce open source licenses.

The subtlety is in how we handle this. We need to lead with nuance and negotiation and not with antagonistic legal implications. The lawyers have to be a last resort, and we should all be careful not to imply an overblown legal recourse for organizations that skirt the requirements of these licenses.

Anyone who has been working in this industry knows that the way you get things done in an organization is via a series of indirect nudges. We change organizations and industries with relationships, trust, and collaboration, and providing a supporting function to accomplish the outcome we want.

Of course, sometimes there have to be legal consequences, but this has to genuinely be a last resort. We should not be under the illusion that legal action is an isolated act of protection. While legal action may protect the GPL in that specific scenario, it will also freak out lots of people watching it unfold. Thus, it is critical that we consider the optics of legal action as much as the practical benefits within that specific case.

The solution here, as is always the case, is more dialog that is empathetic to the views of those we disagree with. Linus, Bradley, and everyone else embroiled in this debate are on the right side of history. We just need to work together to find common ground and strategies: I am confident they are there.

What do you think? Do I have an accurate read on this debate? Am I missing something important? Share your thoughts below in the comments!

The post Linux, Linus, Bradley, and Open Source Protection appeared first on Jono Bacon.

by Jono Bacon at August 30, 2016 05:43 AM

August 29, 2016

Jono Bacon

Join my Reddit AMA Tomorrow


Just a short reminder that tomorrow, Tuesday 30th August 2016 at 9am Pacific (see other time zone times here) I will be doing a Reddit AMA about community strategy/management, developer relations, open source, music, and anything else you folks want to ask about.

Want to ask questions about Canonical/GitHub/XPRIZE? Questions about building great communities? Questions about open source? Questions about politics or music? All questions are welcome!

To join, simply do the following:

  • Be sure to have a Reddit account. If you don’t have one, head over here and sign up.
  • On Tuesday 30th August 2016 at 9am Pacific (see other time zone times here) I will share the link to my AMA on Twitter (I am not allowed to share it until we run the AMA). You can look for this tweet by clicking here.
  • Click the link in my tweet to go to the AMA and then click the text box to add your question(s).
  • Now just wait until I respond. Feel free to follow up, challenge my response, and otherwise have fun!

I hope to see you all tomorrow!

The post Join my Reddit AMA Tomorrow appeared first on Jono Bacon.

by Jono Bacon at August 29, 2016 03:00 PM

Nathan Haines

Announcing the Ubuntu 16.10 Free Culture Showcase!

It’s time once again for the Ubuntu Free Culture Showcase!

The Ubuntu Free Culture Showcase is a way to celebrate the Free Culture movement, where talented artists across the globe create media and release it under licenses that encourage sharing and adaptation. We're looking for content which shows off the skill and talent of these amazing artists and will greet Ubuntu 16.10 users.

More information about the Free Culture Showcase is available at https://wiki.ubuntu.com/UbuntuFreeCultureShowcase

This cycle, we're looking for beautiful wallpaper images that will literally set the backdrop for new users as they experience Ubuntu 16.10 for the first time.  Submissions will be handled via Flickr at https://www.flickr.com/groups/ubuntu-fcs-1610/

I'm looking forward to seeing the next round of entrants, and to the difficult time of picking final choices to ship with Ubuntu 16.10.

August 29, 2016 11:13 AM

August 28, 2016

Elizabeth Krumbach

Local Sights

I’ve been going running along the Embarcadero here in San Francisco lately. These runs afford me fresh air coming off the bay, stunning views of the bay itself, a chance to run under the beautiful Bay Bridge and down to AT&T Park. I run past palm trees and E-Line street cars, and the weather is cool and clear enough to pretty much do it every day. In short, it sometimes feels like we live in paradise.

Naturally, we like to share that with friends and family who visit. I’ve had a fun year of local touristing as cousins, sisters and friends have been in town visiting. Our favorite place to take them is Fort Baker. It’s almost always less chaotic than the lookout point at the north side of the bridge, and you actually get to walk around a fair amount to get some views of both the Golden Gate Bridge and San Francisco itself. It’s also where I got my head shots done, including the header image I’ve used for this blog for several years. I’m a big fan of the city skyline from there.

Back in April we made a visit up to The Marine Mammal Center, which I wrote about here. We took an alternate route back due to a closed tunnel, and that’s how we ended up looking down at the Golden Gate Bridge from the northwest edge, the one view I hadn’t seen yet. It’s a pretty exceptional one, getting to see the undeveloped hilly area on the north side and then the San Francisco city skyline in the far distance. I probably could have sat there all day.

Alas, I didn’t have all day. I had only taken the morning off from work and I had to grab a bite before catching the ferry back to San Francisco from Sausalito while MJ took everyone else on to Muir Woods. Now, I’d taken a ferry in the bay before, once to Alcatraz to do some tourist visiting, and another time to Alameda and back when visiting a potential location for a Partimus computer lab deployment. It’s always been a beautiful ride, but the ride from Sausalito to San Francisco lands in exceptional territory. You get views of several islands, both of San Francisco’s bridges, Alcatraz, Sausalito and the city. I was so happy on this ferry ride that I even had a conversation with a couple who were in town visiting from Canada and answered piles of questions about what we were seeing. This is something that shy, introverted me hardly ever does.

We also take folks up to Twin Peaks. How many cities in the world are there where you can climb a hill and look at downtown? In San Francisco, you can go up to Twin Peaks. It’s breathtaking.

Nice bay, right? We have an ocean too. I spent my youth on the coast of Maine. I didn’t sneak out to late night parties when I was a teenager, I snuck out to go to the park and sit by the ocean. My head clearing spot? The ocean. Needed cheering up when I was depressed? Trip to the ocean. First kiss? Happened right there on the rocks by the ocean. My love for being near the coast is a pretty deep part of who I am.

From the Cliff House on the western side of the city you get some great views of the beach stretching south.

Looking north you can see the ruins of the Sutro Baths, which opened in 1896 and lasted through the middle of the 20th century, and beyond them to the other side of the Golden Gate.

Further views we caught this spring are in a pair of albums on Flickr, by month: April and June

by pleia2 at August 28, 2016 10:07 PM

August 27, 2016

kdub

Mir 0.24 Release

Mir 0.24 was just released this week!

We’ve reworked a few things internally and fixed a fair number of bugs. Notably, our buffer swapping system and our input keymapping system were reworked (Alt-Gr should now work for international keyboards). There were also some improvements to the server API to make window management better.

I’m most excited about the internal buffer swapping mechanism changes, as it’s what I’ve been working to release for a while now. The internal changes get us ready for Vulkan [1], improve our multimedia support [2], improve our WiDi support, and reduce latency in nested server scenarios [3].

This is prep work for releasing some new client API functions (perhaps in 0.25, depending on how the trade winds are blowing… they’re currently gated in non-public project directories here). More on that once the headers are released.

[1]
Vulkan is a new rendering API from Khronos designed to give finer-grained GPU control and more parallel operation between the CPU and the GPU.

[2]
Especially multimedia decoding and encoding, which need more arbitrary buffer control.

[3]
“Unity8” runs in a nested server configuration for multiuser support (among other reasons). unity-system-compositor controls the framebuffer, unity8 sessions connect to unity-system-compositor, and clients connect to the appropriate unity8 session. More fine-grained buffer submissions allow us to forward buffers more creatively, making sure the clients have zero-copy more often.

by kdub at August 27, 2016 07:19 AM

August 26, 2016

Akkana Peck

More map file conversions: ESRI Shapefiles and GeoJSON

I recently wrote about Translating track files between mapping formats like GPX, KML, KMZ and UTM. But there's one common mapping format that keeps coming up that's hard to handle using free software, and tricky to translate to other formats: ESRI shapefiles.

ArcGIS shapefiles are crazy. Typically they come as an archive that includes many different files, with the same base name but different extensions: filename.sbn, filename.shx, filename.cpg, filename.sbx, filename.dbf, filename.shp, filename.prj, and so forth. Which of these are important and which aren't?

To be honest, I don't know. I found this description in my searches: "A shape file map consists of the geometry (.shp), the spatial index (.shx), the attribute table (.dbf) and the projection metadata file (.prj)." Poking around, I found that most of the interesting metadata (trail name, description, type, access restrictions and so on) was in the .dbf file.

You can convert the whole mess into other formats using the ogr2ogr program. On Debian it's part of the gdal-bin package. Pass it the .shp filename, and it will look in the same directory for files with the same basename and other shapefile-related extensions. For instance, to convert to KML:

 ogr2ogr -f KML output.kml input.shp

Unfortunately, most of the metadata -- comments on trail conditions and access restrictions that were in the .dbf file -- didn't make it into the KML.

GPX was even worse. ogr2ogr knows how to convert directly to GPX, but that printed a lot of errors like "Field of name 'foo' is not supported in GPX schema. Use GPX_USE_EXTENSIONS creation option to allow use of the <extensions> element." So I tried ogr2ogr -f "GPX" -dsco GPX_USE_EXTENSIONS=YES output.gpx input.shp but that just led to more errors. It did produce a GPX file, but it had almost no useful data in it, far less than the KML did. I got a better GPX file by using ogr2ogr to convert to KML, then using gpsbabel to convert that KML to GPX.
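Spelled out, that two-step conversion looks something like this (the file names here are just placeholders):

 ogr2ogr -f KML tracks.kml input.shp
 gpsbabel -i kml -f tracks.kml -o gpx -F tracks.gpx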

Use GeoJSON instead to preserve the metadata

But there is a better way: GeoJSON.

ogr2ogr -f "GeoJSON" -t_srs crs:84 output.geojson input.shp

That preserved most, maybe all, of the metadata in the .dbf file and gave me a nicely formatted file. The only problem was that I didn't have any programs that could read GeoJSON ...

[PyTopo showing metadata from GeoJSON converted from a shapefile]

But JSON is a nice straightforward format, easy to read and easy to parse, and it took surprisingly little work to add GeoJSON parsing to PyTopo. Now, at least, I have a way to view the maps converted from shapefiles, click on a trail and see the metadata from the original shapefile.
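To give a feel for how little code that takes, here's a minimal standalone sketch (not PyTopo's actual code) using nothing but Python's standard json module; the file name is whatever you told ogr2ogr to produce:

import json

# Load the GeoJSON that ogr2ogr produced
with open("output.geojson") as f:
    data = json.load(f)

# A FeatureCollection holds a list of features; each feature pairs a
# geometry with a properties dictionary carrying the metadata that
# came from the shapefile's .dbf file.
for feature in data["features"]:
    geom = feature.get("geometry")
    props = feature.get("properties") or {}
    if geom is None:
        continue
    coords = geom.get("coordinates", [])
    print("%s with %d coordinate entries" % (geom["type"], len(coords)))
    for key, value in props.items():
        print("    %s = %s" % (key, value))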


August 26, 2016 06:11 PM

August 25, 2016

Jono Bacon

Social Media: 10 Ways To Not Screw It Up

Social media is everywhere. Millions of users, seemingly almost as many networks, and many agencies touting that they have mastered the zen-like secrets to social media and can bring incredible traction.

While social media has had undeniable benefits for many, it has also been contorted and twisted in awkward ways. For every elegant, well-delivered social account there are countless blatant attention-grabbing efforts.

While I am by no means a social media expert, over the years I have picked up some techniques and approaches that I have found useful with the communities, companies, and clients I have worked with. My goal has always been to strike a good balance between quality, engagement, and humility.

I haven’t always succeeded, but here are 10 things I recommend you do if you want to do social media well:

1. Focus on Your Core Networks

There are loads of social media networks out there. For some organizations there is an inherent temptation to grow an audience on all of them. More audiences mean more people, right?

Well, not really.

As with most things in life, it is better to have focus and deliver quality than to spread yourself too thin. So, pick a few core networks and focus on them. Focus on delivering great content, growing your audience, and engaging well.

My personal recommendations are to focus on Twitter and Facebook for sure, as they have significant traction, but Instagram and Google+ are good targets too. It is really up to you, though, to decide what works best for your organization/goals.

2. Configure Your Accounts Well

Every social media network has some options for choosing an avatar, banner, and adding a little text. It is important to get this right.

Put yourself in the position of your audience. Imagine they don’t know who you are and they stumble on your profile. Sure, a picture of a care bear and a quote from The Big Lebowski may look cool, but it doesn’t help the reader.

Their reading of this content is going to result in a judgement call about you. So, reflect yourself accurately. Want to be a professional? Look and write professionally. Want to be a movie fan who believes in magical bears? Well, erm, I guess you know what to do.

It is also important to do this for SEO (Search Engine Optimization). If you want more Google juice for your name/organization, be sure to incorporate it in your profiles and content.

3. Quality vs. Quantity

A while back I spent a bit of time working with some folks who were really into social media. They had all kinds of theories about how the Facebook and Twitter algorithms prioritize content, hide it from users, and only display certain types of content to others. Of course this is not an exact science as these algorithms are typically confidential to those networks.

There is no doubt that social networks have to make some kind of judgement on what to show – there is just too much material to show it all. So, we want to be mindful of these restrictions, but also be wary that a lot of this is guessing.

The trick here is simple: focus on delivering high quality content and just don’t overdo it. Posting 50 tweets in a day is not going to help – it will be too much and probably not high quality (likely due to the quantity). Even if your audience sees it all, it will just seem spammy.

Now, you may be asking what high quality content would look like? Fundamentally I see it as understanding your audience, how they communicate, and mirroring those interests and tonality. Some examples:

  • Well written content that is concise, loose, and fun.
  • Interesting thoughts, ideas, and discussions.
  • Links to interesting articles, data, and other material.
  • Interesting embedded pictures, videos, and other content.

Speaking of embedding…

4. Embed Smartly

All the networks allow you to embed pictures and videos in your social posts.

Where possible, always embed something. It typically results in higher performing posts both in terms of views and click-rate.

Video has proven to do very well on social media networks. People are naturally curious and click the video to see it. Be mindful here though – posting a 45-minute documentary isn’t going to work well. A 2-minute clip, though, will work great.

Also, check how different networks display videos. For example, on Twitter and Google+, YouTube videos get a decent sized thumbnail and are simple to play. On Facebook though, YouTube videos are noticeably smaller (likely because Facebook doesn’t want people embedding YouTube videos). So, when posting on Facebook, uploading a native video might be best.

Pictures are an interesting one. A few tips:

  • Square pictures work especially well. They resize well in most social interfaces to take up the maximum amount of space.
  • The ideal size is 505×505 pixels on Facebook. I have found this size to work well on other networks too.
  • Images that work particularly well are high contrast and have large letters. They stand out more in a feed and make people want to click them. An example of an image I am using for my Reddit AMA next week:

[Example image: a high-contrast, large-lettered graphic promoting the Reddit AMA]

5. Be Authentic

Authenticity is essential in any human communication. As humans we are constantly advertised to, sold, and marketed at, and thus evolution has increasingly expanded our bullshit radar.

This radar gets triggered when we see inauthentic content. Examples of this include content trying to be overly peppy, material that requires too many commitments (e.g. registrations), or clickbait. A classic example from our friends at Microsoft:

[Example image: an inauthentic, clickbait-style promotion from Microsoft]

Social media is fundamentally about sharing and discussion and representing content and tonality that matches your audience. Make sure that you do both authentically.

Share openly, and discuss openly. Act and talk like a human, not a business book, don’t try to be someone you are not, and you will find your audience enjoys your content and finds your efforts rewarding.

6. Connect and Schedule Your Content

Managing all these social media networks is a pain. Of course, there are many tools that you can use for extensive analytics, content delivery, and team collaboration. While these are handy for professional social media people, for many people they are not particularly necessary.

What I do recommend for everyone though is Buffer.

The idea is simple. Buffer lets you fill a giant bucket full of social media posts that will hit the major networks such as Twitter, Facebook, Google+ (pages), and Instagram. You then set a schedule for when these posts should go out and Buffer will take care of sending them for you at an optimal chosen time.

Part of the reason I love this is that if you have a busy week and forget to post on social media, you know that you are always sharing content. Speaking personally, I often line up my posts on a Sunday night and then periodically post during the week.

Speaking of optimal times…

7. Timing Is Everything

If you want your content to get a decent number of views and clicks, there are definitely better times than others to post.

Much of this depends on your audience and where you are geographically. As an example, while I have a fairly global audience for my work, a significant number of people are based in the US. As such, I have found that the best time for my content is in the morning between 8am and 9am Pacific. This then still hits Europe and out towards India.

To figure out the best time for you, post some social posts and look at the analytics to see which times work best. Each social network has analytics available and Buffer provides a nice analytics view too, although the nicer stats require a professional plan.

Knowing what is the best time to post combined with the scheduled posting capabilities of Buffer is a great combo.

8. Deliver Structured Campaigns

You might also want to explore some structured campaigns for your social media efforts. These are essentially themed campaigns designed to get people interested or involved.

A few examples:

  • Twitter Chats – here you simply choose a hashtag and some guests, announce the chat, and then invite your guests to answer questions via Twitter and the audience to respond. They can be rather fun.
  • Calls For Action – again, choose a hashtag, and ask your audience for feedback on certain questions. The responses could include questions, suggestions, content, and more.
  • Thematic Content – here you post a series of posts with similar images or videos attached.

You are only limited by your imagination, but remember, be authentic. Social media is riddled with cheesy last-breath attempts at engagement. Don’t be one of those people.

9. Don’t Take Yourself too Seriously

There have been various studies suggesting that social media encourages narcissism. There is certainly observational evidence that backs this up.

You should be proud of your work, proud of your projects, and focus on doing great things. Always try to ensure that you are down to earth though, and demonstrate a grounded demeanor in your posts. No one likes ego, and it is more tempting than ever to use social media as a platform for a confidence boost and to post increasingly ego-driven, narcissistic content.

Let’s be honest, we have all made this mistake from time to time. I know I have. We are human beings, after all.

As I mentioned earlier, you always want to try to match your tonality to your audience. For some global audiences though, it can be tempting to err on the side of caution and be a little too buttoned up. This often ends up being just boring. Be professional, sure, but surprise your audience with your humanity and your humility, and show that there is a real person behind the tweet or post.

10. What Not To Do

Social media can be a lot of fun and with some simple steps (such as these) you can perform some successful and rewarding work. There are a few things I would recommend you don’t do though:

  • Unless you want to be a professional provocateur, avoid deliberately fighting with your audience. You will almost certainly disagree with many of your followers on some political stances – picking fights won’t get you anywhere.
  • Don’t go and follow everyone for the purposes of getting followed back. When I see that Joe Bloggs has 5,434 followers and is following 5,654 people, it smacks of this behavior. 😉
  • Don’t be overtly crass. I know some folks online, and even worked with some people, who just can’t help dropping F bombs, crass jokes, and more online. Be fun, be a little edgy, but keep it classy, people.

So, that’s it. Just a few little tips and tricks I have learned over the years. I hope some of this helps. If you found it handy, click those social buttons on the side and practice what you preach and share this post. 🙂

I would love to learn from you though. What approaches, methods, and techniques have you found for doing social media better? Share your ideas in the comment box and let’s have a discussion…

The post Social Media: 10 Ways To Not Screw It Up appeared first on Jono Bacon.

by Jono Bacon at August 25, 2016 03:00 PM

August 23, 2016

Elizabeth Krumbach

FOSSCON 2016

Last week I was in Philadelphia, which was fun and I got to do some Ubuntu stuff but I was actually there to speak at FOSSCON. It’s not the largest open source conference, but it is in my adopted home city of Philadelphia and I have piles of friends, mentors and family there. I love attending FOSSCON because I get to catch up with so many people, making it a very hug-heavy conference. I sadly missed it last year, but I made sure to come out this year.

They also invited me to give a closing keynote. After some back and forth about topics, I ended up with a talk on “Listening to the Needs of Your Global Open Source Community” but more on that later.

I kicked off my morning by visiting my friends at the Ubuntu booth, and meeting up with my OpenStack and HPE colleague Ma Dong who had flown in from Beijing to join us. I made sure we got our picture taken by the beautiful Philadelphia-themed banner that the HPE open source office designed and sent for the event.

At 11AM I gave my regular track talk, “A Tour Of OpenStack Deployment Scenarios.” My goal here was to provide a gentle introduction, with examples, of the basics of OpenStack and how it may be used by organizations. My hope is that the live demos of launching instances from the Horizon web UI and the OpenStack client were particularly valuable in making the connection between the concepts of building a cloud and the actual tooling you might use. The talk was well-attended and I had some interesting chats later in the day. I learned that a number of the attendees are currently using proprietary cloud offerings and looking for options to bring some of that in-house.

The demos were very similar to the tutorial I gave at SANOG earlier this month, but the talk format was different. Notes from demos here and slides (219K).
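For a taste of the command-line side of those demos, launching an instance with the OpenStack client looks roughly like this (the image, flavor, and key names are placeholders rather than the ones from my demo):

 openstack server create --image ubuntu-16.04 --flavor m1.small --key-name demo-key my-first-instance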


Thanks to Ma Dong for taking a picture during my talk! (source)

For lunch I joined other sponsors at the sponsor lunch over at the wonderful White Dog Cafe just a couple blocks from the venue. Then it was a quick dash back to the venue for Ma Dong’s talk on “Continuous Integration And Delivery For Open Source Development.”

He outlined some of the common mechanisms for CI/CD in open source projects, and how the OpenStack project has solved them for a project that eclipses most others in size, scale and development pace. Obviously it’s a topic I’m incredibly familiar with, but I appreciated his perspective as a contributor who comes from an open source CI background and has now joined us doing QA in OpenStack.


Ma Dong on Open Source CI/CD

After his talk it was also nice to sit down for a bit to chat about some of the latest changes in the OpenStack Infrastructure. We were able to catch up about the status of our Zuul tooling and general direction of some of our other projects and services. The day continued with some chats about Jenkins, Nodepool and how we’ve played around with infrastructure tooling to cover some interesting side cases. It was really fun to meet up with some new folks doing CI things to swap tips and stories.

Just before my keynote I attended the lightning talks for a few minutes, but had to depart early to get set up in the big room.

The keynote on “Listening to the Needs of Your Global Open Source Community” was a completely new talk for me. I wrote the abstract for it a few weeks ago for another conference CFP at the suggestion of my boss. The talk walked through eight tips for facilitating the collection of feedback from your community as one of the project leaders or infrastructure representatives.

  • Provide a simple way for contributors to contact project owners
  • Acknowledge every piece of feedback
  • Stay calm
  • Communicate potential changes and ask for feedback
  • Check in with teams
  • Document your processes
  • Read between the lines
  • Stick to your principles

With each of these, I gave some examples from my work, mostly in the Ubuntu and OpenStack communities. Some of the examples were pretty funny, and likely very familiar to any systems folks who interface with users. The Q&A at the end of the presentation was particularly interesting; I was very focused on open source projects since that’s where my expertise lies, but members of the audience felt that my suggestions were more broadly applicable. In those moments after my talk I was invited to speak on a podcast and encouraged to write a series of articles related to my talk. Now I’m aiming to write some OpenSource.com content over the next couple of weeks.

Slides from the talk are here (7.3M pdf).


And thanks to Josh, José, Vincent and Nathan for snapping some photos of the talk too!

The conference wound down following the keynote with a raffle, and we then went our separate ways. For me, it was time to spend time with friends over a martini.

A handful of other photos from the conference here: https://www.flickr.com/photos/pleia2/albums/72157671843605132

by pleia2 at August 23, 2016 09:01 PM

Jono Bacon

Bacon Roundup – 23rd August 2016

Well, hello there, people. I am back with another Bacon Roundup which summarizes some of the various things I have published recently. Don’t forget to subscribe to get the latest posts right to your inbox.

Also, don’t forget that I am doing a Reddit AMA (Ask Me Anything) on Tues 30th August 2016 at 9am Pacific. Find out the details here.

Without further ado, the roundup:

Building a Career in Open Source (opensource.com)
A piece I wrote about how to build a successful career in open source. It delves into finding opportunity, building a network, always learning/evolving, and more. If you aspire to work in open source, be sure to check it out.

Cutting the Cord With Playstation Vue (jonobacon.org)
At home we recently severed ties with DirecTV (for lots of reasons, this being one), and moved our entertainment to a Playstation 4 and Playstation Vue for TV. Here’s how I did it, how it works, and how you can get in on the action.

Running a Hackathon for Security Hackers (jonobacon.org)
Recently I have been working with HackerOne, and we ran a hackathon for some of the best hackers in the world to hack popular products and services for fun and profit. Here’s what happened, how it looked, and what went down.

Opening Up Data Science with data.world (jonobacon.org)
Recently I have also been working with data.world who are building a global platform and community for data, collaboration, and insights. This piece delves into the importance of data, the potential for data.world, and what the future might hold for a true data community.

From The Archive

To round out this roundup, here are a few pieces I published from the archive. As usual, you can find more here.

Using behavioral patterns to build awesome communities (opensource.com)
Human beings are pretty irrational a lot of the time, but irrational in predictable ways. These traits can provide a helpful foundation in which we build human systems and communities. This piece delves into some practical ways in which you can harness behavioral economics in your community or organization.

Atom: My New Favorite Code Editor (jonobacon.org)
Atom is an extensible text editor that provides a thin and sleek core and a raft of community-developed plugins for expanding it into the editor you want. Want it like vim? No worries. Want it like Eclipse? No worries. Here’s my piece on why it is neat and recommendations for which plugins you should install.

Ultimate unconference survival guide (opensource.com)
Unconferences, for those who are new to them, are conferences in which the attendees define the content on the fly. They provide a phenomenal way to bring fresh ideas to the surface. They can though, be a little complicated to figure out for attendees. Here’s some tips on getting the most out of them.

Stay up to date and get the latest posts direct to your email inbox with no spam and no nonsense. Click here to subscribe.

The post Bacon Roundup – 23rd August 2016 appeared first on Jono Bacon.

by Jono Bacon at August 23, 2016 01:48 PM

Elizabeth Krumbach

Wandering around Philadelphia

Philadelphia is my figurative (and may soon be literal…) second home. Visits are always filled with activities, events, friends and family. This trip was considerably less structured. I flew in several days before the conference I was attending and stayed in my friend’s guest room, and didn’t take much time off from work, instead working from a couch most of the week with my little dog friend Blackie.

I did have some time for adventuring throughout the week though, taking a day off to check out The Science Behind Pixar exhibit down at The Franklin Institute with a friend. On our way down we stopped at Pudge’s in Conshohocken to satisfy my chicken cheesesteak craving. It hit the spot.

Then we were off to the city! The premise of the exhibit seemed to be encouraging youth into STEM fields by way of the creative processes and interesting jobs at a company like Pixar. As such, it walked you through various phases of production of Pixar films and had hands-on exhibits that let you simply play around with the themes of what professionals in the industry do. It’s probably a good idea to encourage interest, even if a museum exhibit can’t begin to tackle the complexity of these fields; as a technologist I agree that the work is ultimately fun and exciting.

But let’s be honest, I’m an adult who already has a STEM career and I’ve been a Pixar fan since the beginning. I was there so I could get selfies with Wall-E (and Buzz, Sully and Mike, Edna Mode, Dory…).

A few more photos from the exhibit here: https://www.flickr.com/photos/pleia2/albums/72157671629547292

We had the whole afternoon, so I also got to see the Lost Egypt exhibit, which was fun to see after the Egypt exhibit I saw at the de Young last month. We went to a couple of planetarium shows and also got my nostalgia on as I revisited all the standing exhibits. Like the trains. I love the trains. The Franklin Institute is definitely one of my favorite museums.

That evening I also got to check out the new Hive76 location. The resurgence of hackerspaces had just started when I left Philly, and while I was never super involved, I did host a few “PLUG into Hive” meetings there when I was coordinating the LUG and had friends at Hive. It was nice getting to see their new space. After dinner I had the amusing experience of going to catch Pokémon in a park after dark, along with several other folks who were there for the same reason. There really is something to be said for a game that gets people out of their house at night to go for walks and socialize over augmented reality. Even if I didn’t catch any new Pokémon. Hah!

Wednesday and Thursday nights I spent time with my best buddies Danita and Crissi. Dinner, drinks, lots of good chatting. It had absolutely been too long since we’d spent time together; catching up was just the thing I needed. I’ll have to make sure I don’t let so much time pass between get-togethers in the future.

More photos from various wanderings this past week (including dinosaurs!) here: https://www.flickr.com/photos/pleia2/albums/72157671629567332

And then MJ and I spent Friday and Sunday on a secret mission before flying home. I’ll write more about that once it becomes unclassified.

by pleia2 at August 23, 2016 02:35 AM

August 22, 2016

Elizabeth Krumbach

Ubuntu in Philadelphia

Last week I traveled to Philadelphia to spend some time with friends and speak at FOSSCON. While I was there, I noticed a Philadelphia area Linux Users Group (PLUG) meeting would land during that week and decided to propose a talk on Ubuntu 16.04.

But first I happened to be out getting my nails done with a friend on Sunday before my talk. Since I was there, I decided to Ubuntu theme things up again. Drawing freehand, the manicurist gave me some lovely Ubuntu logos.

Girly nails aside, that’s how I ended up at The ATS Group on Monday evening for a PLUG West meeting. They had a very nice welcome sign for the group. Danita and I arrived shortly after 7PM for the Q&A portion of the meeting. This pre-presentation time gave me the opportunity to pass around my BQ Aquaris M10 tablet running Ubuntu. After the first unceremonious pass, I sent it around a second time with more of an introduction, along with the Bluetooth keyboard and mouse combo so people could see convergence in action by switching between the tablet and desktop views. Unlike at my previous presentations, I was traveling, so I didn’t have my bag of laptops and extra tablet; that was the extent of the demos.

The meeting was very well attended and the talk went well. It was nice to have folks chiming in on a few of the topics (like the transition to systemd) and there were good questions. I also was able to give away a copy of our The Official Ubuntu Book, 9th Edition to an attendee who was new to Ubuntu.

Keith C. Perry shared a video of the talk on G+ here. Slides are similar to past talks, but I added a couple since I was presenting on a Xubuntu system (rather than Ubuntu) and didn’t have pure Ubuntu demos available: slides (7.6M PDF, lots of screenshots).

After the meeting we all had an enjoyable time at The Office, which I hadn’t been to since moving away from Philadelphia almost seven years ago.

Thanks again to everyone who came out, it was nice to meet a few new folks and catch up with a bunch of people I haven’t seen in several years.

Saturday was FOSSCON! The Ubuntu Pennsylvania LoCo team showed up to have a booth, staffed by long-time LoCo member Randy Gold.

They had Ubuntu demos, giveaways from the Ubuntu conference pack (lanyards, USB sticks, pins), and I dropped off a copy of the Ubuntu book for people to browse, along with some discount coupons for folks who wanted to buy it. My Ubuntu tablet also spent time at the table so people could play around with it.


Thanks to Randy for the booth photo!

At the conference closing, we had three Ubuntu books to raffle off! They seemed to go to people who appreciated them, and since both José and I attended the conference, the raffle winners had two of the book's three authors there to sign their copies.


My co-author, José Antonio Rey, signing a copy of our book!

by pleia2 at August 22, 2016 07:53 PM

A lecture, a symphony and a lot of street cars

My local July adventures weren't confined to mummies, baseball and food. I also attended a few shows and lectures.

On July 14th I met up with a friend to see Kevin Kelly speak on The Next 30 Digital Years, put on by The Long Now Foundation. This lecture covered a series of trends (not specific technologies) that Kelly felt would drive the future. This included the proliferation of "screens" on a variety of surfaces to meet our ever-increasing desire to be connected to the media we now depend on in our work and lives. He also talked about the rise of augmented reality, increased tracking for increased personalization of services (with a sidebar about privacy), and the growing sharing economy, where access continues to replace ownership.

What I enjoyed most about this talk was how optimistic he was. Even while tackling difficult topics like privacy in a very connected world, he was incredibly positive about what our future holds in store for us. This held true even when questions from the audience expressed more pessimistic views.

In a weekend that revolved around events near City Hall, the very next evening I went to the San Francisco Symphony for the first time. As a SciFi fan with a sidebar love for movie scores, my introduction to the symphony here was appropriately Star Trek: The Ultimate Voyage — A 50th Anniversary Celebration (article). The event featured the full symphony, with a screen above them that showed clips and a narrated exploration through the Star Trek universe as they played scores from the movies and selections from each series. They definitely focused on TOS and TNG, but there was decent representation of the rest. I also learned that SF trekkies really like Janeway. Me too. It was a really fun night.

We also went to an event put on by the Western Neighborhoods Project (WNP), Streetcar San Francisco: Transit Tales of the City in Motion at Balboa Theatre.

The event featured short films and clips of historic streetcars and expertise from folks over at Market Street Railway (which may have been how I heard about it). The clips covered the whole city, including a lot of downtown, as they walked us through some of the milestones and transit campaigns in the history of the city. It was particularly interesting to learn about the streetcars on the west side of the city, where they used to have a line that ran up around Land's End, and some neat (or tacky) hanging "sky-trams" which took you from the Cliff House to Point Lobos; there's an article about them here: A Brief History of San Francisco's Long-Lost Sky Tram, which also references the WNP page about them.

This event also clued me in to the existence of OpenSF History by WNP. They’re going through a collection of historic San Francisco photos that have been donated and are now being digitized, indexed and shared online. Very fun to browse through, and there are great pictures of historic streetcars and other transit.

by pleia2 at August 22, 2016 01:56 PM

August 18, 2016

Jono Bacon

Opening Up Data Science with data.world

Earlier this year when I was in Austin, my friend Andy Sernovitz introduced me to a new startup called data.world.

What caught my interest is that they are building a platform to make data science and discovery easier, more accessible, and more collaborative. I love these kinds of big juicy challenges!

Recently I signed them up as a client to help them build their community, and I want to share a few words about why I think they are important, not just for data science fans, but from a wider scientific discovery perspective.

Armchair Discovery

Data plays a critical role in the world. Buried in rows and rows of seemingly flat content are patterns, trends, and discoveries that can help us to learn, explore new ideas, and work more effectively.

The work that leads to these discoveries often involves bringing together different data sets to explore and reach new conclusions. As an example, traffic accident data for a single town is interesting, but when we combine it with data sets for national/international traffic accidents, insurance claims, drink driving, and more, we can often find patterns that can help us to influence and encourage new behavior and technology.
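
To make that concrete, here is a minimal sketch of the kind of data-set stitching I mean, in Python with pandas. The file names and columns are hypothetical, purely for illustration; this isn't data.world's API, just the sort of manual joining a data scientist does today.

    # A minimal sketch of joining two accident data sets with pandas.
    # File names and column names are hypothetical, for illustration only.
    import pandas as pd

    # Local accident records and a national reference data set.
    town = pd.read_csv("town_accidents.csv")          # date, location, severity
    national = pd.read_csv("national_accidents.csv")  # date, region, severity, weather

    # Normalize the join key so the two sources line up.
    town["date"] = pd.to_datetime(town["date"])
    national["date"] = pd.to_datetime(national["date"])

    # Enrich local incidents with national context.
    combined = town.merge(national, on="date", suffixes=("_town", "_national"))

    # One simple pattern to look for: does local severity track weather?
    print(combined.groupby("weather")["severity_town"].mean())

Even in a toy example like this, most of the effort goes into finding the files and lining up the keys and formats, which is exactly the friction a shared platform can remove.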

Many of these discoveries are hiding in plain sight. Sadly, while talented data scientists are able to pull together these different data sets, it is often hard and laborious work. Surely if we make this work easier, more accessible, consistent, and available to all, we can speed up innovation and discovery?

Exactly.

As history has taught us, the right mixture of access, tooling, and community can have a tremendous impact. We have seen examples of this in open source (e.g. GitLab / GitHub), funding (e.g. Kickstarter / Indiegogo), and security (e.g. HackerOne).

data.world are doing this for data.

Data Science is Tough

There are four key areas where I think data.world can make a potent impact:

  1. Access – while there is lots of data in the world, access is inconsistent. Data is often spread across different sites and formats, and accessible only to certain people. We can bring this data together into a consistent platform, available to everyone.
  2. Preparation – much of the work data scientists perform is learning and prepping datasets for use. This work should be simplified, done once, and then shared with everyone, as opposed to being performed by each person who consumes the data (see the sketch after this list).
  3. Collaboration – a lot of data science is fairly ad-hoc in how people work together. In much the same way open source has helped create common approaches for code, there is potential to do the same with data.
  4. Community – there is a great opportunity to build a diverse global community, not just of data scientists, but also organizations, charities, activists, and armchair sleuths who, armed with the right tools and expertise, could make many meaningful discoveries.
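
To make the preparation point concrete, here is a minimal sketch of the "done once, then shared" idea: the cleaning steps are captured as code that can be published alongside the data set, instead of every consumer redoing them. The column names are my own hypothetical example, not anything data.world prescribes.

    # A minimal sketch of reusable data preparation with pandas.
    # Column names ("date", "severity") are hypothetical.
    import pandas as pd

    def prepare_accidents(raw: pd.DataFrame) -> pd.DataFrame:
        """Clean a raw accident data set into a consistent, shareable form."""
        df = raw.copy()
        # Normalize column names so downstream joins are predictable.
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        # Parse dates; drop rows whose dates cannot be parsed.
        df["date"] = pd.to_datetime(df["date"], errors="coerce")
        df = df.dropna(subset=["date"])
        # Fill missing severity with 0 and use a consistent integer type.
        df["severity"] = df["severity"].fillna(0).astype(int)
        return df.drop_duplicates()

Publish a function like this next to the data set and everyone downstream starts from the same clean, documented baseline.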

This is what data.world is building and I find the combination of access, platform, and network effects of data and community particularly exciting.

Unlocking Curiosity

If we look at the most profound impacts technology has had in recent years, it is in bringing people's curiosity and creativity to the surface.

When we build community-based platforms that tap into this curiosity and creativity, we generate new ideas and approaches. New ideas and approaches then become the foundation for changing how the world thinks and operates.

As one such example, open source tapped the curiosity and creativity of developers to produce a rich patchwork of software and tooling, but more importantly, a culture of openness and collaboration. While it is easy to see the software as the primary outcome, the impact of open source has been much deeper and impacted skills, education, career opportunities, business, collaboration, and more.

Enabling the same curiosity and creativity with the wealth of data we have in the world is going to be an exciting journey. Stay tuned.

The post Opening Up Data Science with data.world appeared first on Jono Bacon.

by Jono Bacon at August 18, 2016 03:00 PM