Planet Ubuntu California

January 24, 2015

Elizabeth Krumbach

Remembering Eric P. Scott (eps)

Last night I learned the worst kind of news: my friend Eric P. Scott (eps), a valued member of the Linux community here in San Francisco, recently passed away.

In an excerpt from a post by Chaz Boston Baden, he cites the news from Ron Hipschman:

I hate to be the bearer of bad news, but It is my sad duty to inform you that Eric passed away sometime in the last week or so. After a period of not hearing from Eric by phone or by email, Karil Daniels (another friend) and I became concerned that something might be more serious than a lost phone or a trip to a convention, so I called his property manager and we met at Eric’s place Friday night. Unfortunately, the worst possible reason for his lack of communication was what we found. According to the medical examiner, he apparently died in his sleep peacefully (he was in bed). Eric had been battling a heart condition. We may learn more next week when they do an examination.

He was a good friend, hugely supportive of any local events I concocted for the Ubuntu California community, and also the kind of man who would spontaneously give me thoughtful gifts. Sometimes they were related to an idea he had for promoting Ubuntu, like a new kind of candy we could use for our candy dishes at the Southern California Linux Expo, a toy penguin we could use at booths, or a foldable origami-like street car he thought could inspire a similar giveaway to promote the latest animal associated with an Ubuntu LTS release.

He also went beyond having ideas: we spent time together on several occasions scouring local shops for giveaway booth candy, and once met at Costco to buy cookies and chips in bulk for an Ubuntu release party last spring, which he then helped me cart home on a bus! Sometimes after the monthly Ubuntu Hours, which he almost always attended, we’d go out to explore options for candy to include at booth events, pursuing another amusing idea of his: candy dishes that came together to form the Ubuntu logo.

In 2012 we filled the dishes with M&Ms:

The next year we became more germ conscious and he suggested we go with individually wrapped candies, so we searched the city for ones that tasted good and weren’t too expensive. Plus, he found a California-shaped bowl which fit our Ubuntu California theme astonishingly well!

He also helped with Partimus, often coming out to hardware triage and installfests we’d have at the schools.


At a Partimus-supported school, back row, middle

As a friend, he was also always willing to share his knowledge with others. Upon learning that I don’t cook, he gave me advice on some quick and easy things I could do at home, which culminated in the gift of a plastic container built for cooking pasta in the microwave. Though I was skeptical of all things microwave, it’s actually something I now use routinely when I’m eating alone; I even happened to use it last night before learning of his passing.

He was a rail fan and advocate for public transportation, so I could always count on him for the latest transit news, or just a pure geek out about trains in general, which often happened with other rail fans at our regular Bay Area Debian dinners. He had also racked up the miles on his favorite airline alliance, so there were plenty of air geek conversations around ticket prices, destinations and loyalty programs. And though I haven’t really connected with the local science fiction community here in San Francisco (so many hobbies, so little time!), we definitely shared a passion for scifi too.

This is a hard and shocking loss for me. I will deeply miss his friendship and support.

by pleia2 at January 24, 2015 08:10 PM

January 20, 2015

Elizabeth Krumbach

Stress, flu, Walt’s Trains and a scrap book

I’ve spent this month at home. Unfortunately, I’ve been pretty stressed out. Now that I’m finally home I have a ton to catch up on here: I’m getting back into the swing of things with the purely technical (not event, travel, or talk) part of my day job and have my book to work on. I know I haven’t backed off enough from projects I’m part of, even though I’ve made serious efforts to move away from a few leadership roles in 2014, so keeping up with everything remains challenging. Event-wise, I’ve managed to arrange my schedule so I only have 4 trips during this half of the year (down from 5, thanks to retracting a submission to one domestic conference), and 1-3 major local events that I’m either speaking at or hosting. It still feels like too much.

Perhaps adding to my stress was the complete loss of 5 days last week to the flu. I had some sniffles and a cough on Friday morning, which quickly turned into a fever that sent me to bed as soon as I wrapped up work in the early evening. Saturday through most of Tuesday is a bit of a blur; I attempted to get some things done but honestly should have just stayed in bed, because nothing I did was useful and it actually made it more difficult to pick up where I left off come late Tuesday and into Wednesday. I always forget how truly miserable having the flu is: sleep is the only escape, and even something as mind-numbing as TV isn’t easy when everything hurts. However, kitty snuggles are always wonderful.

Sickness aside, strict adherence to taking Saturdays off has helped with my stress. I really look forward to my Saturdays when I can relax for a bit, read, watch TV, play video games, visit an exhibit at a museum or make progress in learning how to draw. I’m finally at the point where I no longer feel guilty for taking this time, and it’s pretty refreshing to simply ignore all email and social media for a day, even if I do have the impulse to check both. It turns out it’s not so bad to disconnect for a weekend day, and I come back somewhat refreshed on Sunday. It ultimately does make me more productive during the rest of the week too, and less likely to just check out in the middle of the week in a guilt-ridden and poorly-timed evening of pizza, beer and television.

This Saturday MJ and I enjoyed the All Aboard: A Celebration of Walt’s Trains exhibit at the Walt Disney Family Museum. It was a fantastic exhibit. I’m a total sucker for the entrepreneurial American story of Walt Disney and I love trains, so the mix of the two was really inspiring, particularly as I find my own hobbies becoming as work-like and passion-driven as my actual work. Walt’s love of trains, and the train he built at his family home in order to have a hobby outside work, led to trains at Disney parks around the world. So cool.

No photos are allowed in the exhibit, but I did take some time around the buildings to capture some signs and the beautiful day in the Presidio: https://www.flickr.com/photos/pleia2/sets/72157650347931082/

One evening over these past few weeks I took time to put together a scrap book, which I’d been joking about for years (“ticket stub? I’ll keep it for my scrap book!”). Several months ago I dug through drawers and things to find all my “scrap book things” and put them into a bag, collecting everything from said ticket stubs to conference badges from the past 5 years. I finally swung by a craft store recently and picked up some rubber cement, good clear tape and an empty book made for the purpose. Armed with these tools, I spent about 3 hours gluing and taping things into the book one evening after work. The result is a mess, not at all beautiful, but one that I appreciate now that it exists.

I mentioned in my last “life” blog post that I was finishing a services migration from one of my old servers. That’s now done; I shut off my old VPS yesterday. It was a little sad when I realized I’d been using that VPS for 7 years, since back when the plan level I had offered a mere 360M of RAM (it’s up to 2G now), and I had gotten kind of attached! But that faded today when I did an upgrade on my new server and realized how much faster it is. On to bigger and better things! In other computer news, I’m really pushing hard on promoting the upcoming Ubuntu Global Jam here in the city and spent Wednesday evening of this week hosting a small Ubuntu Hour, thankful that it was the only event of the evening as I continued to need rest post-flu.

Today is a Monday, but a holiday in the US. I spent the morning catching up with work for Partimus and the afternoon on Ubuntu, and this evening I’m avoiding doing more work around the house by writing this blog post. I’m happy to say that we did get some tricky light bulbs replaced and whipped out the wood glue in an attempt to give some repair love to the bathroom cabinet. Now off to do some laundry and cat-themed chores before spending a bit more time on my book.

by pleia2 at January 20, 2015 02:07 AM

January 19, 2015

Elizabeth Krumbach

San Francisco Ubuntu Global Jam at Gandi.net on Sunday February 8th

For years Gandi.net has been a strong supporter of Open Source communities and non-profits. From their early support of Debian to their current support of Ubuntu via discounts for Ubuntu Members, they’ve been directly supportive of projects I’m passionate about. I was delighted when I heard they had opened an office in my own city of San Francisco, and they’ve generously offered to host the next Ubuntu Global Jam for the Ubuntu California team right here at their office in the city.

Gandi.net + Ubuntu = Jam!

What’s an Ubuntu Global Jam? From the FAQ on the wiki:

A world-wide online and face-to-face event to get people together to work on Ubuntu projects – we want to get as many people online working on things, having a great time doing so, and putting their brick in the wall for free software as possible. This is not only a great opportunity to really help Ubuntu, but to also get together with other Ubuntu fans to make a difference together, either via your LoCo team, your LUG, other free software group, or just getting people together in your house/apartment to work on Ubuntu projects and have a great time.

The event will take place on Sunday, February 8th from noon – 5PM at the Gandi offices on 2nd street, just south of Mission.

Community members will gather to do some Quality Assurance testing on Xubuntu ISOs and packages for the upcoming release, Vivid Vervet, using the trackers built for this purpose. We’re focusing on Xubuntu because that’s the project I volunteer with and I can help put us into contact with the developers as we test the ISOs and submit bugs. The ISO tracker and package tracker used for Xubuntu are used for all recognized flavors of Ubuntu, so what you learn from this event will transfer into testing for Ubuntu, Kubuntu, Ubuntu GNOME and all the rest.

No experience with Testing or Quality Assurance is required, and Quality Assurance is not as boring as it sounds, honest :) Plus, one of the best things about doing testing on your own hardware is that bugs are found and submitted before release, significantly increasing the chances that any bugs specific to your hardware get fixed before the release ships!

The event will begin with a presentation that gives a tour of how manual testing is done on Ubuntu releases. From there we’ll be able to do Live Testing, Package Testing and Installation Testing as we please, working together to confirm bugs and to help each other when we get stuck. Installation Testing is the only one that requires changes to the laptop you bring along, so feel free to bring one you can do Live and Package testing on if you’re not able to do installations on your hardware.

I’ll also have two laptops available for folks to do testing on if they aren’t able to bring along a laptop.

I’ll also be bringing along DVDs and USB sticks with the latest daily builds for testing, plus some notes about how to go about submitting bugs.

Please RSVP here (full address also available at this link):

http://loco.ubuntu.com/events/ubuntu-california/2984-ubuntu-california-san-francisco-qa-jam/

Or email me at lyz@ubuntu.com if you’re interested in attending and have trouble with or don’t wish to RSVP through the site. Also please feel free to contact me if you’re interested in helping out (it’s ok if you don’t know about QA, I need logistical and promotional help too!).

Food and drinks will be provided; the current menu is a platter of sandwiches and some pizzas, so please let me know if you have dietary restrictions so we can place orders accordingly. I’d hate to exclude folks because of our menu, so I’m happy to accommodate vegan, gluten-free, whatever you need; I just need to know :)

Finally, there will be giveaways: Ubuntu stickers and pens for everyone, and a couple of Ubuntu books (hopefully signed by the authors!) for a few select attendees.

Somewhere other than San Francisco and interested in hosting or attending an event? The Ubuntu Global Jam is an international event with teams focusing on a variety of topics, details at: https://wiki.ubuntu.com/UbuntuGlobalJam. Events currently planned for this Jam can be found via this link: http://loco.ubuntu.com/events/global/2967/

by pleia2 at January 19, 2015 11:00 PM

Jono Bacon

Bridging Marketing and Community

In the last five years we have seen tremendous growth in community management. Organizations large and small are striving to build strong, empowered communities that contribute to and support their work. These efforts are focused on a new form of engagement, one that builds communities that become part of the fabric of an organization’s success.

This growth in community management has been disruptive. Engineering, governance, and other areas have been turned upside down by this new art and science. This disruption has been positive though, producing new cultures and relationships and adding another string to our collective bow in achieving our grander ambitions.

If there is one area where this disruption has struck hardest, it has been marketing and brand management.

Every year I run the Community Leadership Summit in Portland, and every year I hear the same feedback: concerns about the philosophical, strategic, and tactical differences between marketing and community managers. These concerns have also been shared with me in my work as a community strategy and management consultant.

The Community Leadership Summit

This worries me. When I see this feedback shared it tells a narrative of “us and them”, as if marketing and brand managers are people intent on standing in the way of successful communities.

This just isn’t true.

Marketing and brand managers are every bit as passionate and engaged about success as community managers. What we are seeing here is a set of strategic and tactical differences which can be bridged. To build unity though we need to first see and presume the good in people; we are all part of the same team, and we all want to do right by our organizations.

Philosophy

For most organizations, marketing operations are fairly crisply defined and controlled. You specify your brand and values and build multiple marketing campaigns to achieve the goals of brand awareness and engagement. The organization usually keeps brand, values, mission, and campaigns pretty tightly controlled, which is designed to assure consistency across brand, voice, and messaging, as well as legal protection for your marks.

This kind of brand marketing is critical. We live in a world dominated by brands, and brand managers have to walk a delicate line between authentic engagement and feckless shlepping of their wares. There is an art and science to brand marketing and many tremendous leaders in this area such as Brendon Burchard, Aaliyah Shafiq, and Gary Briggs. These fine people and others have guided organizations through challenging times and an increasingly over-subscribed audience with shorter and shorter attention spans.

Community management takes a similar but different approach. Community managers seek to build open-ended engagement in which you create infrastructure, process, and governance, and then you invite a wider diversity of people and groups to join a central mission. With this work we see passionate and inspired communities that span the world, bringing a diverse range of skills and talents. Philosophically this is very much a “let a thousand roses bloom” approach to engagement.

The Spider and the Starfish

I believe the strategic and tactical difference between many marketing and community managers can be best explained with the inspiring and excellent work of Ori Brafman and Rod Beckstrom in their seminal book, The Starfish and the Spider. The book outlines the differences between the centralized methods of organization (the spider), and the decentralized method (the starfish).

I don’t like spiders, so this was uncomfortable to add to this post.

In many traditional organizations the structure is very much like a spider. While there are multiple legs, there is a central body that is in charge. The body provides strategy, execution, and guidance from a small group of people in charge, and the outer legs serve those requirements.

Brand management commonly uses the spider model: the parameters of the brand, structure, values, and execution are typically defined by a central hand-picked team of people. While the brand may be open to multiple possibilities and opportunities, the center of the spider has to approve or reject new ideas. In many cases the center of the spider has oversight and approval over everything externally facing.

There are two core challenges with the spider model: innovation and agility. You may have the very best folks in the middle of that spider but like any group of human beings, they will reach the natural limits of their own creativity and innovation. Likewise, that team will face a limit in agility; there is only so much the center of the spider can do, and this will impact the spider as a whole.

The other organizational management model is the starfish. Here we empower teams to do great work and provide guidelines to help them be successful. This doesn’t mean a lack of accountability or quality; we achieve both by defining strong standards of quality and trusting the teams to execute within them. We then deal with suboptimal cases where appropriate. Many modern organizations work this way, such as YouTube, Wikipedia, and many start-ups, and this is the inherent model in the community management world.

Now, let’s be honest here. I am a community management guy. Much as I like to think I am an objective thinker and unbiased, everyone is biased in some way. You are probably expecting me to pronounce these spider-orientated marketing and brand organizations dead and to hail the new starfish king of community management.

Not at all.

As I said earlier, brand management is critical to our success. What we need to do is first understand we are all on the same team, and secondly bridge the agility of community management with the consistency of brand management.

Focus on the mission

In reality, we don’t want an entirely spider model or an entirely starfish model, we want a mixture of both; a spiderfish, if you will.

I am a strong believer in Covey’s philosophy of “begin with the end in mind”. We should sit down, dream a little, and then rigorously define our mission for our organization. With this mission in mind, every project, every initiative, and every idea should be assessed within the parameters of whether it furthers that mission. If it doesn’t, we should do something else.

Always think about where we want to get to.

When most organizations think with the end in mind they want their audience to feel a personal sense of connection to their work, and therefore their brand. The world of broadcast media is withering on the vine: We don’t just sit there and mindlessly devour content with a bag of Cheetos in hand. We want to engage, to interact, to be a part of that message and that content. If we are passionate about a brand, we want to play an active role in how we can make that brand successful. We want to transition from being a member of the audience to being a member of the team.

Most brand managers want this. All community managers want and should achieve this. Thus, brand and community managers are really singing from the same hymn sheet and connected to the same broader mission. Brand and community managers are simply people with different skill-sets putting different jigsaw pieces into the same puzzle.

So how do we strike that balance between brand and community? Well, I have some practical suggestions that may be useful:

1. Align strategy

Your marketing and community strategies need to be well understood and aligned. Both teams should have regular meetings and a clear understanding of what both teams are doing. This serves two key functions. Firstly, it means that everyone has an understanding of what everyone is working on. Secondly, it clearly demonstrates the importance and value of both teams, helps identify positive and negative touch points, and brings balance to them.

Now, this is easier said than done. Strategy will change and adapt and it can be tough to keep everyone in the loop at once. As such, at a minimum focus on connecting the team leads together; they can then communicate this to their respective teams.

2. Your future won’t be 100% of what you expect it to be

There is a great rule of thumb in project management: “you will achieve your goals, but the result will be different from what you expect”. We should always remind our brand and community managers that part of bridging two different skill sets and philosophies means that our work will be a little different than we may expect.

Our goal here is the consistency of an awesome brand manager with the engagement of an awesome community manager. This may mean that a community manager’s work may be a little more tempered and conservative and a brand manager’s work may be a little more agile and freeform. This will feel weird and awkward at first, but sends us in the right direction to achieve our broader mission in our organization.

3. Have a flexible brand/trademark policy and communicate it clearly

One of the key challenges in balancing brand and community management is that communities typically want to use brands themselves in their work in a freeform way. This can include printing signs for events, using the brand on websites and social media, printing t-shirts and merchandise, creating presentation slides, and more. The brand is our shared identity, both for the organization and the community.

It is important that we clearly define the lines of how the brand can and cannot be used. We want to empower our community to freely utilize the brand (and associated trade dress, fonts, colors, and more) to do amazing work, but we want to avoid our brand being cheapened and diluted.

To do this we should create a rigorous brand and trademark policy that outlines these freedoms and restrictions and clearly communicate it to the community.

A good example of this is the Ubuntu Trademark Policy; it crisply states these restrictions and freedoms and has resulted in a large and capable community and fantastic brand awareness.

4. Focus quality where it really matters

As I mentioned earlier, we really want to take a “spiderfish” approach to our organizations. This means that we centrally define some aspects of policy, but we focus those central pieces on the most valuable and important areas.

The trick is that we want to focus quality assurance on the right places. The way in which we assess the brand consistency of a keynote presentation that will be beamed around the world should be different to how we assess a small presentation given at a local community group. If we treat everything the same we will burn our teams out and limit agility and creativity.

Likewise, our assessment of quality should be around consistency as opposed to stylistic differences. We want to encourage different styles and voices: our community will present a multitude of different narratives and ideas. Our goal is to ensure that they feel consistent and connected to our central mission.

As such, focus your spider body on the most critical pieces. If you don’t, those teams will be overworked and stressed as opposed to creatively inspired and engaged.

5. Always focus on the mission

I know I have banged this drum a few times already in this article, but we have to focus on our mission every single day.

Covey teaches us that we should collaboratively define and share our missions and that these missions should guide our work every day, not just be shoved in a cupboard or stuck to a dusty wall, never to be seen again. We should assess every idea, every project, every motivation within the parameters of what we are here to do.

This is critical at a tactical level (“should project foo be something we invest in?”) but also at a strategic level (“how do we balance marketing and community management to further our mission?”).

Enforcing this is a key responsibility for senior executives. It is senior leadership that really defines the culture and tenor of our organizations so it can trickle down, and reminding and inspiring everyone about the bigger picture is essential.

I hope you find some of this useful. My primary goal with this article was to help bridge the divide between what I consider to be two critical roles in successful organizations: marketing and community management. While the cultures may be a little different, both have much to learn from each other, and much to bring to the world. I look forward to hearing from you all about your experiences and perspectives on how we continue to work together to do interesting and important work.

by jono at January 19, 2015 08:37 PM

January 18, 2015

Akkana Peck

Another stick figure in peril

One of my favorite categories of funny sign: "Stick figures in peril". This one was on one of those automated gates, where you type in a code and it rolls aside, and on the way out it automatically senses your car.

[Moving gate can cause serious injury or death]

January 18, 2015 05:19 PM

January 17, 2015

kdub

Saleae Logic 8 Review

Over the break, I got to play a bit with the Saleae Logic 8 logic analyzer. It’s the mid-range model from Saleae, and it works with Ubuntu. I wrote about the predecessor to the Logic 8 a while back, before Linux support was around. I finally got to do a bit of tinkering with the new device, under Ubuntu Vivid.

Logic 8

The device itself came packaged only in the provided carrying case. Inside the zippered carrying case were the Logic 8 itself, two 4×2 headers with 6-inch leads, 16 logic probes, a micro-USB cable, a postcard directing you to the support site, and a poster of Buzz Aldrin in the Apollo cockpit.
The Logic 8 is made out of machined anodized aluminum and is only about 2×2 inches. It’s sturdy-feeling, and the only ports are the micro-USB to connect to the computer and the 16 logic probe pins (8x ground+signal). There’s a blue LED on the top.

Package Contents

The test leads seem pretty good. I’m used to the J-hook type leads, and these have two pincers that come out. I’ve been able to get the leads into more places than I would have with a J-hook type logic probe.

Bonus Inspiration

Another really interesting feature is analog sampling: each of the Logic 8 test leads can capture analog as well as digital signals. The device can sample faster if you’re only using one analog channel: one channel can sample at 10M samples/second, while running all 8 drops the rate to 2.5M samples/second. According to the literature, frequencies above the Nyquist frequency of the sample rate get filtered out before hitting the onboard ADC. If you’re anything like me, most of my electronics tinkering doesn’t require looking at signals above these rates, and I could see using the oscilloscope less and using the Logic 8 for some analog signal work too.
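To put those numbers in perspective, the Nyquist frequency is simply half the sample rate. Here’s a rough back-of-the-envelope sketch in Python (using the rates quoted above, not figures from Saleae’s documentation) of the highest analog frequencies each configuration could faithfully represent:

# Nyquist frequency = sample rate / 2; rates are the ones quoted in this post.
sample_rates = {
    "1 analog channel": 10e6,    # 10 M samples/second
    "8 analog channels": 2.5e6,  # 2.5 M samples/second
}

for config, rate in sorted(sample_rates.items()):
    nyquist = rate / 2.0
    print("%s: %.1f MS/s -> Nyquist %.2f MHz" % (config, rate / 1e6, nyquist / 1e6))

So one channel gets you usable analog bandwidth up to roughly 5 MHz, and running all eight drops that to about 1.25 MHz.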

Underside of Logic 8

The Software:
The Logic 8 software is available (freeware, closed source) on the website and will simulate inputs if there’s no device connected, so you can get a pretty good feel for how the actual device will work. It was largely hassle-free, although I did have to unpack it in /opt because it wasn’t packaged. Overall, it was pretty intuitive to configure the sampling, set up triggers, and test my circuit. The look and feel of the software was much better than a lot of other electronics tools I’ve used.

Trying it out:
I was working on a pretty simple circuit that takes a sensor input and outputs to a single 7-segment display. It’s composed of a BCD-to-7-segment decoder chip and an ATtiny13 (easy enough to program with the Ubuntu packages ‘avrdude’ and ‘gcc-avr’; a rough sketch of that workflow follows the photo below).

Circuit Under Test (ATtiny13, a light sensor, and a BCD to 7 segment decoder)
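For anyone curious what building and flashing the ATtiny13 with those packages looks like, here is a minimal sketch of the usual compile-and-flash steps, wrapped in a little Python driver. The source file name (main.c) and the programmer (usbasp) are just placeholders; substitute whatever you actually use:

import subprocess

# Typical AVR toolchain steps for an ATtiny13 using the gcc-avr and avrdude
# packages (avr-objcopy comes from the binutils-avr dependency).
# "main.c" and "usbasp" are placeholders, not details of this particular build.
commands = [
    # Compile with size optimization for the ATtiny13
    ["avr-gcc", "-mmcu=attiny13", "-Os", "-o", "main.elf", "main.c"],
    # Convert the ELF into an Intel HEX image for flashing
    ["avr-objcopy", "-O", "ihex", "-R", ".eeprom", "main.elf", "main.hex"],
    # Flash the hex image; "t13" is avrdude's part id for the ATtiny13
    ["avrdude", "-c", "usbasp", "-p", "t13", "-U", "flash:w:main.hex:i"],
]

for cmd in commands:
    print("Running: " + " ".join(cmd))
    subprocess.check_call(cmd)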

It’s not electrically isolated from the circuit, but that’s to be expected at this price point. Just make sure that you don’t have any ground loops between your computer and the circuit under test. I don’t typically build circuits that really need an earth ground, so I don’t see that being much of an issue.

So, for my first run, I connected it to the GPIO pins on the AVR and varied the voltage from 0 to 2.5V on the ADC pin.

ADC to 4bit digital

Yay, my circuit (and avr program) was working.

I am pleased with the Logic 8, and am even more excited to have a hassle free way to measure logic and analog signals in Ubuntu!

by Kevin at January 17, 2015 08:53 PM

January 14, 2015

Jono Bacon

Your new Community Manager Hire: 5 Areas to Focus on

So, you have just hired that new community manager into your organization. Their remit is simple: build a community that wraps around your product/technology/service. You have an idea of what success looks like, but you are also not entirely sure exactly what this new hire will be doing at a tactical level.

Lots of people are in this position. Here are five things you should focus on to help ensure they are successful.

1. Think carefully about the reporting line

When a new community manager joins a company, the question is where they should report. In many cases they report into Marketing; in some cases (particularly at technology companies) they report to Engineering; and occasionally they report to the COO.

Much of this depends on what the community manager is doing. If they are managing social media and forums, marketing may be a good fit. If they are building a developer community, engineering may be a good fit.

If however they are building a full community with infrastructure, processes, governance, and more, they are going to be working cross-team in your organization. As such, having them report into a single team such as Marketing may not be a good idea: it may restrict their cross-functional capabilities and executive buy-in.

Also, and how do I say this delicately…there is often a philosophical difference between traditional marketing/brand managers and community managers. Think carefully about how open to community success your marketing manager is…if they are not very open, they may end up squashing the creativity of your new hire.

2. Build a strategic plan

A key part of success is setting expectations. With rare exceptions, right out of the gate the difference in expectations between senior execs and a new community manager is likely to be pretty significant. We want to reduce that gap.

To do this you need to gather stakeholder requirements, define crisp goals for what the community should look like and map out an annual strategic plan that outlines what the community manager will achieve to meet those goals as well as crisp success criteria. Summarize this into a simple deck to review with the exec team and other key leaders in the organization.

The community manager should make a point of socializing the strategy with the majority of the organization: it will help to smooth the lines to success.

3. Provide mentoring

There is a huge variance in what community managers actually do. Some take care of social media, some respond on forums, some fly to conferences to speak, and some build entire environments with infrastructure, process, governance, and on-ramps to help the community be successful.

I believe the latter is the true definition of a community manager. A community manager should have a vision for a community and be able to put all the infrastructure, process, and resources in place to achieve it.

This is tough. It requires balancing lots of different teams and resources, and your new hire may feel they are drowning. Find a good community manager who gets this kind of stuff and ask them for help. Encourage an environment and culture of learning: help them to help themselves to be successful.

4. Have an “essential travel only” policy

I see the same thing over and over again: a new community manager joins a company and the company spends thousands flying them to every conceivable conference to speak and hang out with attendees. This is usually with the rationale of “spreading the word”.

Here’s the deal. Every minute your community manager is on the road, at conferences, preparing talks, and mingling with people, they are not working on the wider community vision; they are working on the scope of that event. Travel is incredibly disruptive and conferences are very distracting, and at the beginning of a new community you really want your community manager putting the foundations of your community in place, which typically means them being sat at a computer and drinking plenty of coffee.

Now, don’t get me wrong, conferences and travel are critical for community success. My point is that you should pick conferences that match closely with the strategy you have defined. This keeps your costs lower, keeps your new hire more focused, and helps get things up and running quicker.

5. Train the rest of your employees

The word “community” means radically different things to different people. For some a community is a customer-base, for some it is engineering, for some it is a support function, for others it may be social media.

When your new community manager joins, your other staff will have their own interpretation of what “community” means. You should help to align the community manager’s focus and goals with the rest of the organization.

In many companies, the formation of a community is a key strategic change. It is often a new direction that is breaking some ground. In these cases, this step is particularly important. We want to ensure the wider team knows the organizational significance of a community, but also to get them bought into the value and opportunity it brings.

I hope this helps. If anyone has any questions I can help with, feel free to get in touch.

by jono at January 14, 2015 04:24 PM

January 13, 2015

Jono Bacon

Discourse: Saving forums from themselves

Many of us are familiar with discussion forums: webpages filled with chronologically ordered messages, each with a little avatar and varying degrees of cruft surrounding the content.

Forums are a common choice for community leaders and prove to be popular, largely due to their simplicity. The largest forum in the world, Gaia Online, an Anime community, has 27 million users and over 2,200,000,000 posts. They are not alone: it is common for forums to have millions of posts and hundreds of thousands of users.

So, they are a handy tool in the armory of the community leader.

The thing is, I don’t particularly like them.

While they are simple to use, most forums I have seen look like 1998 vomited into your web browser. They are often ugly, slow to navigate, have suboptimal categorization, and reward users based on the number of posts as opposed to the quality of content. They are commonly targeted by spammers and as they grow in size they invariably grow in clutter and decrease in usefulness.

I have been involved with and run many forums and while some are better, most are just similar incarnations of the same dated norms of online communication.

So…yes…not a fan. :-)

Enter Discourse

Fortunately a new forum is on the block and it is really very good: Discourse.

Created by Jeff Atwood, co-founder of Stack Overflow and the Stack Exchange Network, Discourse takes a familiar but uprooted approach to forums. They have rethought everything that is normal in forums and improved online communication significantly.

If you want to see it in action, see the XPRIZE Community, Bad Voltage Community, and Community Leadership Forum forums that I have set up.

Discourse is neat for a few reasons.

Firstly, it is simple to use and read. It presents a simple list of discussions with suitable categories, as opposed to cluttered sub-forums that divide discussions. It provides an easy and effective way to highlight and pin topics and identify active discussions. Users can even hide certain categories they are not interested in.

Creating and replying to topics is a beautiful experience. The editor supports Markdown as well as GUI controls and includes a built-in preview where you can embed videos, images, tweets, quotes, code, and more. It supports multiple headings, formatting styles, and more. I find that posts really come to life with Discourse as opposed to the limited fragments of text shown on other forums.

Discourse is also clever in how it encourages good behavior. It has a range of trust levels that reward users for good and regular participation in the forum. This is gamified with badges which encourages users to progress, but more importantly from a community leadership perspective, it provides a simple at-a-glance view of who the rock stars in the forum are. This provides a list of people I can now encourage and engage to be leaders. Now, before you get too excited, this is based on forum usage, not content, but I find the higher trust level people are generally better contributors anyway.

Discourse also makes identity pleasant. Users can configure their profiles in a similar way to Twitter with multiple types of imagery and details about who they are. Likewise, referencing other users is simple by pressing @ and then their username. This makes replies easier to spot in the notifications indicator and therefore keeps the discussion flowing.

Administrating and running the site is also simple. User and content management is a breeze, configuring the look and feel of most aspects of the forum is simple, and Discourse supports multiple login providers.

What’s more, you can install Discourse easily with docker and there are many hosting providers. While Jeff Atwood’s company has their own commercial service I ended up using DiscourseHosting who are excellent and pretty cheap.

To top things off, the Discourse community are responsive, polite, and incredibly enthusiastic about their work. Everything is Open Source and everything works like clockwork. I have never, not once, seen a bug impact a stable release.

All in all Discourse makes online discussions in a browser just better. It is better than previous forums I have used in pretty much every conceivable way. If you are running a community, I strongly suggest you check Discourse out; there simply is no competition.

by jono at January 13, 2015 05:31 AM

January 12, 2015

Jono Bacon

Announcing the Community Leadership Summit 2015!

I am delighted to announce the Community Leadership Summit 2015, now in its seventh year! This year it takes place on the 18th and 19th July 2015, the weekend before OSCON at the Oregon Convention Center. Thanks again to O’Reilly for providing the venue.

For those of you who are unfamiliar with the CLS, it is an entirely free event designed to bring together community leaders and managers and the projects and organizations that are interested in growing and empowering a strong community. The event provides an unconference-style schedule in which attendees can discuss, debate and explore topics. This is augmented with a range of scheduled talks, panel discussions, networking opportunities and more.

The heart of CLS is an event driven by the attendees, for the attendees.

The event provides an opportunity to bring together the leading minds in the field with new community builders to discuss topics such as governance, creating collaborative environments, conflict resolution, transparency, open infrastructure, social networking, commercial investment in community, engineering vs. marketing approaches to community leadership and much more.

The previous events have been hugely successful and a great way to connect people from different community backgrounds, share best practices, and make community management an art and science that is better understood and shared by us all.

I will be providing more details about the event closer to the time, but in the meantime be sure to register!

Mixing Things Up

For those who have been to CLS before, I want to ask your help.

This year I want to explore new ideas and methods for squeezing as much value out of CLS as possible for everyone. As such, I am looking for your input on areas in which we can improve, refine, and optimize CLS.

I ask that you head over to the Community Leadership Forum and share your feedback. Thanks!

by jono at January 12, 2015 06:27 PM

January 11, 2015

Grant Bowman

Next Billion Connected

Dell’s Next Billion video would inspire me more if Dell genuinely supported Linux for consumers instead of actively promoting Windows almost everywhere I see Dell.


by grantbow at January 11, 2015 10:24 PM

January 08, 2015

Jono Bacon

Bad Voltage and Ubuntu

I know many of my readers here are Ubuntu fans and I wanted to let you know of something neat.

For just over a year now I have been doing a podcast with Stuart Langridge, Bryan Lunduke, and Jeremy Garcia. It is a fun, loose, but informative show about Open Source and technology. It is called Bad Voltage.

Anyway, in the show that was released today, we did an interview with Michael Hall, a community manager over at Canonical (who used to work for me when I was there).

It is a fun and interesting interview about Ubuntu, phones, and release dates, and it even sets a challenge to convince Lunduke of the value of scopes over on the Bad Voltage Forum.

Go and listen to or download the show here and be sure to share your thoughts on the show in the community discussion.

The show also discusses the Soylent super-food, has predictions for 2015 (one of which involves Canonical), and more!

Finally, Bad Voltage will be doing our first live performance at SCALE in Los Angeles on Fri 20th Feb 2015. We hope to see you there!

by jono at January 08, 2015 11:18 PM

Akkana Peck

Accessing image metadata: storing tags inside the image file

A Slashdot discussion on image tagging and organization a while back got me thinking about putting image tags inside each image, in its metadata.

Currently, I use my MetaPho image tagger to update a file named Tags in the same directory as the images I'm tagging. Then I have a script called fotogr that searches for combinations of tags in these Tags files.

That works fine. But I have occasionally wondered if I should also be saving tags inside the images themselves, in case I ever want compatibility with other programs. I decided I should at least figure out how that would work, in case I want to add it to MetaPho.

I thought it would be simple -- add some sort of key in the image's EXIF tags. But no -- EXIF has no provision for tags or keywords. But JPEG (and some other formats) supports lots of tags besides EXIF. Was it one of the XMP tags?

Web searching only increased my confusion; it seems that there is no standard for this, but there have been lots of pseudo-standards over the years. It's not clear what tag most programs read, but my impression is that the most common is the "Keywords" IPTC tag.

Okay. So how would I read or change that from a Python program?

Lots of Python libraries can read EXIF tags, including Python's own PIL library -- I even wrote a few years ago about reading EXIF from PIL. But writing it is another story.
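For reference, reading EXIF with PIL looks roughly like this (the filename here is just an example; _getexif() is specific to JPEG images):

from PIL import Image
from PIL.ExifTags import TAGS

# "photo.jpg" is just an example filename
img = Image.open("photo.jpg")
exif = img._getexif() or {}   # _getexif() returns None when there's no EXIF block

for tag_id, value in exif.items():
    # TAGS maps numeric EXIF tag ids to human-readable names
    print("%s: %s" % (TAGS.get(tag_id, tag_id), value))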

Nearly everybody points to pyexiv2, a fairly mature library that even has a well-written pyexiv2 tutorial. Great! The only problem with it is that the pyexiv2 front page has a big red deprecation warning saying that it's being replaced by GExiv2. The warning links to a nonexistent page, Debian doesn't seem to have a package for GExiv2, and I couldn't find a tutorial on it anywhere.

Sigh. I have to say that pyexiv2 sounds like a much better bet for now even if it is supposedly deprecated.

Following the tutorial, I was able to whip up a little proof of concept that can look for an IPTC Keywords tag in an existing image, print out its value, add new tags to it and write it back to the file.

import sys
import pyexiv2

if len(sys.argv) < 2:
    print "Usage:", sys.argv[0], "imagename.jpg [tag ...]"
    sys.exit(1)

metadata = pyexiv2.ImageMetadata(sys.argv[1])
metadata.read()

newkeywords = sys.argv[2:]

keyword_tag = 'Iptc.Application2.Keywords'
if keyword_tag in metadata.iptc_keys:
    tag = metadata[keyword_tag]
    oldkeywords = tag.value
    print "Existing keywords:", oldkeywords
    if not newkeywords:
        sys.exit(0)
    for newkey in newkeywords:
        oldkeywords.append(newkey)
    tag.value = oldkeywords
else:
    print "No IPTC keywords set yet"
    if not newkeywords:
        sys.exit(0)
    metadata[keyword_tag] = pyexiv2.IptcTag(keyword_tag, newkeywords)

tag = metadata[keyword_tag]
print "New keywords:", tag.value

metadata.write()
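For reference, a hypothetical invocation of the proof of concept above (the script name and tag words are made up):

python iptc_keywords.py photo.jpg hiking sunset

Run with just an image name, it prints the existing Keywords (or a note that none are set) and exits; given extra arguments, it appends them and writes the updated metadata back to the file.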

Does that mean I'm immediately adding it to MetaPho? No. To be honest, I'm not sure I care very much, since I don't have any other software that uses that IPTC field and no other MetaPho user has ever asked for it. But it's nice to know that if I ever have a reason to add it, I can.

January 08, 2015 05:28 PM

January 06, 2015

Grant Bowman

Scheduling Algorithms

One learns a lot about scheduling when working with many different schedules and hoping for a harmonious and predictable result. My background in mathematics and my studies of project management scheduling algorithms have served me well here. I apologize in advance for some of the specific vagueness.

Very inflexible schedule elements make everything feel much more difficult. Whether the inflexible elements are imposed in a seemingly arbitrary way or arise from contingencies along a chain of events, the consequences of a missed segment can feel disappointing. Sometimes the bad effects can cascade. Opportunity costs must be weighed against the measures taken to make things work as planned.

Sometimes the result is simply too unpredictable, so the system must be treated as a black box. The costs of planning can actually exceed the time saved. Best efforts and adapting/pivoting are required. Results must be accepted for what they are: the best result given the amount of variability in the system. That is good enough, so use energy where it will yield a better return on investment.

I consider myself a patient person, yet recent exercises have given me a new appreciation for the patience and planning required when dealing with these particular complex systems.

P. S. Thanks for reading. I have been focusing on writing in other places but I intend to write more frequently on this blog now.


by grantbow at January 06, 2015 02:58 AM

Elizabeth Krumbach

New year, Snowpiercer, Roads of Arabia and projects

I’ve been home for a week now, and strongly resisted the temptation to go complete hermit and stay home to work furiously on my personal projects as the holidays brought several Day Job days off. On New Year’s Eve MJ and I went over to ring in the new year with my friend Mark and his beautiful kitty. On Friday I met up with my friend mct to see Snowpiercer over at the Castro Theater. I’m a huge fan of that theater, but until now had only ever gone to see hybrid live+screen shows related to MST3K, first a Rifftrax show and then a Cinematic Titanic show. It was a nice theater to see a movie in; they make it very dark and then slowly bring up the lights at the end through the credits to welcome you back to the world. A gentle welcome was much needed for Snowpiercer: it was very good but very intense, and after watching it I didn’t have it in me to stick around for the double feature (particularly not another one with a train!).

I have watched even less TV than usual lately, lacking time and patience for it (getting bored easily). But MJ and I did start watching Star Trek: Voyager. It turns out that it’s very Classic Trek feeling (meet new aliens every episode!) and I’m enjoying it a lot. I know people really love Deep Space Nine, and I did enjoy it too, but it was always a bit too dark and serious for my old fashioned Trek taste. Voyager is a nice journey back to the Trek style I love, plus, Captain Janeway is totally my hero.

This past Saturday MJ and I had a relaxing day, the highlight of which was the Roads of Arabia exhibit at the Asian Art Museum. It’s one of my favorite museums in the city, and I was really excited to see a full exhibit focused on the Middle East, particularly with my trip to Oman on the horizon. It also gave me ideas for my trip: I’d been advised that it’s common to buy frankincense while in Oman, but I already have what seems like a lifetime supply, so I’m now thinking I might try to find a pretty incense burner instead.


No photos allowed of the exhibit, with the exception of this statue, where they encouraged selfies

Our wedding photos have finally gotten some attention. It’s been over a year and a half and the preview of photos has been limited to what our photographer shared on Facebook. Sorry everyone. I’ve mostly gone through them now and just need to take some time to put together a website for them. Maybe over this weekend.

My book has also seen progress, but sometimes I also like to write on paper. While going through my huge collection of pens-from-conferences I decided that I write notes enough to treat myself to a nicer pen than these freebies. Through my explorations of pens on the internet, I came across the Preppy Plaisir fountain pen. I’d never used a fountain pen before, so I figured I’d give it a shot. Now, I won’t forsake all other pens moving forward, but I do have to admit that I quite like this pen.


Naturally, I got the pink one

I did manage to catch up on some personal “work” things. Got a fair amount of Ubuntu project work done, including securing a venue and sponsorship for an upcoming Ubuntu Global Jam here in San Francisco and working out travel for a couple of upcoming conferences. I’ve also almost completed the migration of websites and services from one of my old servers to a bigger, cheaper one, and satisfied the prerequisite of re-configuring monitoring and backups for all my servers in preparation for the new one. Now I’m just waiting for some final name propagation and holding out in case I forgot something on the old server. At least I have backups.

Work on Partimus has been very quiet in recent months. There’s been a movement locally to deploy Chromebooks in classrooms rather than traditional systems with full operating systems. These work well, as many of the tools teachers use, including some of the standardized testing tools, have moved online. This is something we noticed back when we were still deploying larger labs of Ubuntu-based systems, as we worked hard to tune the systems for optimal performance of Firefox with the latest Java and Flash. Our focus has now turned to education-focused community centers and groups seeking computers for more application- and programming-focused tasks, and I hope to have news about our newest projects in the works soon. I did have the opportunity last week to meet up with an accountant to go over our books; he was working pro bono and I was thankful for his time and his ability to confirm we’re doing everything correctly. I’m not fired as Treasurer, hooray!

by pleia2 at January 06, 2015 02:48 AM

January 05, 2015

Elizabeth Krumbach

Ubuntu California in 2014

Inspired by the post by Riccardo Padovani about the awesome year that Ubuntu Italy had, I welcome you to a similar one for Ubuntu California, covering the events I participated in.

The year kicked off with our annual support of the Southern California Linux Expo with SCaLE12x. The long weekend began with an Ubucon on Friday, and then a team booth on Saturday and Sunday in the expo hall. There were a lot of great presentations at Ubucon and a streamlined look to the Ubuntu booth with a great fleet of volunteers. I wrote about the Ubuntu-specific bits of SCaLE12x here. Unfortunately I have a scheduling conflict, but you can look for the team again at SCaLE this February with an Ubucon and Ubuntu booth in the main expo hall.


Ubuntu booth at SCaLE12x

In April, Ubuntu 14.04 LTS was released with much fanfare in San Francisco as we hosted a release party at a local company called AdRoll, which uses Ubuntu in their day to day operations. Attendees were treated to demos of a variety of flavors of Ubuntu, a couple of Nexus 7s with Ubuntu on them, book giveaways, a short presentation about the features of 14.04 and a pile of pizza and cookies, courtesy of Ubuntu Community Donations Funding.


Ubuntu release party in San Francisco

More details and photos from that party here.

In May, carrying the Ubuntu California mantle, I did a pair of presentations about 14.04 for a couple of local groups (basic slides here). The first was a bit of a drive down to Felton, California, where I was greeted at the firehouse by the always welcoming FeltonLUG members. In addition to my presentation, I was able to bring along several laptops running Ubuntu, Xubuntu and Lubuntu and a Nexus 7 tablet running Ubuntu for attendees to check out.


Ubuntu at FeltonLUG

Back up in San Francisco, I presented at Bay Area Linux Users Group and once again had the opportunity to show off my now well-traveled bag of 14.04 laptops and tablet.


Ubuntu at BALUG

As the year continued, my travel schedule picked up and I mostly worked on hosting regular Ubuntu Hours in San Francisco.

Some featuring a Unicorn…

And an Ubuntu Hour in December finally featuring a Vervet!

December 31st marked my last day as a member of the Ubuntu California leadership trio. I took on this role back in 2010 and in that time have seen a lot of maturity come out of our team and events, from the commitment of team members to host regular events to the refinement of our booths each year at the Southern California Linux Expo. I’m excited to see 2015 kick off with the election of an entirely new leadership trio, announced on January 1st: Nathan Haines (nhaines), Melissa Draper (elky) and Brendan Perrine (ianorlin). Congratulations! I know you’ll all do a wonderful job. Though I’ve stepped aside to make room for the new leadership team, I’ll still be active in the LoCo, with regular Ubuntu Hours in San Francisco and an Ubuntu Global Jam event coming up on February 8th, details here.

by pleia2 at January 05, 2015 02:08 AM

December 31, 2014

Elizabeth Krumbach

The Ubuntu Weekly Newsletter and other ways to contribute to Ubuntu

Today, the last day of 2014, I’ve taken some time to look back on some of my biggest accomplishments. There have been the big flashy things, lots of travel, lots of talks and the release of The Official Ubuntu Book, 8th Edition. What a great year!

Then there is the day to day stuff, one of which is the Ubuntu Weekly Newsletter.

Every week we work to collect news from around our community and the Internet to bring together a snapshot of that week in Ubuntu. I’ve used the Newsletter archive to glimpse into where we were 6 years ago, and many folks depend on the newsletter each week to get the latest dose of collected news.

In 2014 we released 49 issues. Each issue is the result of a team of contributors who collect links for the newsletter (typically Paul White and myself), followed by a weekend of writing summaries for many of the collected articles, where again Paul White has been an exceptional contributor, with several others pitching in here and there for a few issues. We then do some editorial review. Release takes place on Monday, where we post to several community resources (forums, discourse, mailing lists, fridge) and across our social media outlets (Twitter, Facebook, Google+); this is usually done by myself or José Antonio Rey. In all, I’d estimate that creating a newsletter takes about 6-8 hours of people time each week. Not a small investment! And one that is shared largely week by week among a core of three contributors.

We need your help.

Plus, contributing to open source is a great way to kick off the new year!

We specifically need folks to help write summaries over the weekend. All links and summaries are stored in a Google Doc, so you don’t need to learn any special documentation formatting or revision control software to participate. Plus, everyone who participates is encouraged to add their name to the credits.

Summary writers. Summary writers receive an email every Friday evening (or early Saturday) with a link to the collaborative news links document for the past week, which lists all the articles that need 2-3 sentence summaries. These people are vitally important to the newsletter. The time commitment is limited and it’s easy to get started from the very first weekend you volunteer. No need to be shy about your writing skills, we have style guidelines to help you on your way and all summaries are reviewed before publishing, so it’s easy to improve as you go.

Interested? Email editor.ubuntu.news@ubuntu.com and we’ll get you added to the list of folks who are emailed each week.

Finally, I grepped through our archives and want to thank the following people who’ve contributed this year:

  • Paul White
  • José Antonio Rey
  • Jim Connett
  • Emily Gonyer
  • Gim H
  • John Kim
  • Esther Schindler
  • Nathan Dyer
  • David Morfin
  • Tiago Carrondo
  • Diego Turcios
  • Penelope Stowe
  • Neil Oosthuizen
  • John Mahoney
  • Aaron Honeycutt
  • Mathias Hellsten
  • Stephen Michael Kellat
  • Sascha Manns
  • Walter Lapchynski

Thank you all!

Looking for some other way to contribute? I was fortunate in 2014 to speak at two Ubucons in the United States, at the Southern California Linux Expo and then at Fossetcon in Florida. At both these events I gave presentations on how to contribute to Ubuntu without any programming experience required, which I dove into more thoroughly here on my blog:

Want more? Explore community.ubuntu.com for a variety of other opportunities to contribute to the Ubuntu community.

by pleia2 at December 31, 2014 07:41 PM

The adventures of 2014

I had a great year in 2013, highlighted by getting married to MJ and starting a new job that I continue to be really happy with. 2014 ended up being characterized by how much travel I’ve done, along with a detour into having the first surgery of my life.

Travel-wise I broke the 100k in-flight miles barrier, with a total of 101,170 air miles. I traveled at least once a month and was able to add Australia to my continents list this year. The beaches in Perth were beautiful, and with January in the middle of their summer, it was certainly beach weather when I went!

Visiting family didn’t take a back seat this year; I spent a week up in Maine staying with my sister Annette and my nephew Xavier and visiting with my mother and her kitties. Plus, I got a nice dose of snow along with it! Very enjoyable when I’m in a warm home and don’t have to drive.

I also was able to tack family visits onto a couple of Florida trips, and we went to the weddings of MJ’s cousin and sister in the fall. But much of my travel was for work, with a variety of conferences this year:


Really enjoyed walking the streets of friendly Zagreb, Croatia

First time out of an airport in Germany during my visit to Darmstadt!

My first time in Paris, need I say more?

Jamaica was beautiful and relaxing

But it wasn’t all traveling to conferences, I did HP booth duty at the Open Business Conference in May (wrap-up post) and presented at PuppetConf in September (wrap-up post), both here in San Francisco. I also did some personal conference geekery with my friend Danita Fries by attending Google I/O for the first time in June (wrap-up post).

I also gave a number of talks, sometimes doubling or tripling up during a conference. I learned that doing 3 talks at a conference is 1-2 talks too many.


Thanks to Vedran Papeš for this photo from DORS/CLUC in Croatia, source

Plus, I had my first book published over the summer! Working with Matthew Helmke and José Antonio Rey, The Official Ubuntu Book, 8th Edition, was released in July.


The Official Ubuntu Book, 8th Edition, July 2014

2014 also made me one organ lighter as of July with the removal of my gallbladder after a few months of diagnostics and pain. It certainly complicated some of my travel, making me spend the shortest amount of time possible in both Croatia and Germany, both countries I wish I could have explored more during my trips.

So far I believe 2015 will be a slightly less busy year travel-wise, but my first two trips are international: the first to Brussels for my first FOSDEM, and then in February off to Oman for FOSSC Oman. Looking forward to a great, and healthier, year in 2015!

by pleia2 at December 31, 2014 04:24 PM

December 30, 2014

Elizabeth Krumbach

Tourist in St. Louis

This past long weekend I decided to take one final trip of the year. I admit, part of the reason for having any year-end trip was to hit 100k flight miles this year. This was purely about hitting that big round number; it doesn’t help me get any kind of status, since my miles are split between 2 alliances due to the US Airways split from Star Alliance during their American Airlines merger.

So I had a look at a map. Where have I never been, have friends I can crash with and is at least 1200 miles away? St. Louis!

I flew in on Christmas and met up with my friend Ryan who I’d be staying with. Options were limited for food, but we were able to snag some tickets at an AMC Dine-in theater so I could see the final Hobbit movie and get some food.

Friday Ryan had work, so I met up with my friend Eric and his wife Kristin for a day at the St. Louis Zoo. It’s routinely ranked among the top five in the country, so I was pretty excited to go. The zoo is also free, and the weather was exceptionally nice for the end of December in Missouri, with highs around 57 degrees. A perfect day for the zoo!


Eric and I with the giraffes

Some of the exhibits were closed for renovation (penguins!) but I really enjoyed the big cats and the primate house and herpetarium. They also did something really clever with their underwater exhibit: rather than having a shark tunnel that you can walk under, it’s a sea lion tunnel. The cool thing about sea lions is that they’re interactive, so zoo-goers learned that if you throw a ball (or baseball cap) around in the tunnel, the sea lions will chase it. So cute and fun!

More photos from the zoo here: https://www.flickr.com/photos/pleia2/sets/72157649609657700/

Saturday and Sunday I hung out with Ryan. First stop on Saturday was the City Museum. The website doesn’t really do the insanity of this place justice, and “big playground” doesn’t really do it either. We started off by going through the museum’s “caves”, where you walk and climb through all kinds of man-made caves, with dragons and other creatures carved into the walls. Once you get through those, you find yourself going up a series of metal stairways and landings with one goal: to get to the 10 story slide. I did it, and managed not to throw up afterwards (though it was a bit touch and go for a couple minutes!).


In the City Museum caves

The museum also features all kinds of eclectic collections, from massive carved stone pieces to doorknobs to every Lego train set ever made. For an additional fee, the second floor has the World Aquarium with a variety of animals, aquatic and not. I’d probably skip the aquarium next time; the exhibits were cramped and I wasn’t too impressed with the cage/tank sizes for most of the animals.

Finally, there’s the outdoor MonstroCity, described as: “A captivating collision of old and new, architectural castoffs and post-apocalyptic chaos, MonstroCity is at once interactive sculpture and playground. Comprised of wrought iron slinkies, fire trucks, stone turrets, airplane fuselages, slides of all sizes and shapes” – yep, that’s about right. Like the caves, there were all kinds of places to climb through, with adults having as much fun as the kids. The structures were quite wet and I was feeling very old at this point, so I kept my own explorations pretty conservative, taking walkways and stairways everywhere I went, including to both of the airplane fuselages. Even so, there were some scary moments as parts of the structure move slightly as you walk on them. I was a bit sore after my City Museum trip with all the climbing and head bumping (low ceilings!), but it really was a lot of fun.


MonstroCity

Also, pro-tip: I enjoyed taking photos throughout my visit, but holding on to a camera and phone while climbing everywhere was quite a challenge at times, even with my hoodie pockets. If you can do without having a photographic record of your visit, it may be more fun to leave the electronics in the car. More photos from City Museum here: https://www.flickr.com/photos/pleia2/sets/72157647687508774/

After City Museum, we headed over to Schlafly Bottleworks to do a brewery tour. As a big beer fan, I was excited to see one of the several craft breweries that sit in the shadow of giant Anheuser-Busch, which also calls St. Louis home. The tour was fun, and was followed by a tasting. They make some great ales, but I was particularly impressed with their Tasmanian IPA, which uses Australian Topaz and Tasmanian Galaxy hops for a nice, complex taste. We skipped lunch at the brewery to head over to Imo’s Pizza; with its super thin crust and Provel cheese, this St. Louis classic was a must. Yum!

Our evening was spent at the Three Sixty rooftop bar downtown, with a great view of the Arch, and then over to Bailey’s Chocolate Bar for some fantastic dessert.

Sunday! This being a short trip, I packed in as much as possible, so Sunday began with 10:30AM tickets for the Gateway Arch. Getting there early was a good move: by the time we left around 11:15 the line for security into the facility was quite long, and with the weather taking a turn for the colder (around 32 degrees!) it was nice not to have to wait in such a long line in the cold. The trip up to the top of the arch began with a ride in their little super 1960s-style trams:

At the top, 630 ft (63 stories) up, there are small 7″x27″ windows where you can see the Mississippi River and the city of St. Louis. Going to the top was definitely a must, but we didn’t stay up too long because it was quite busy and the views were limited with such small windows.

Also a must, an arch photo:

And arch tourist in this, my St. Louis blog post:

We had lunch over in Ballpark Village at the Budweiser Brew House before heading off on our last adventure of the trip: the Anheuser-Busch Brewery Tour. Now, I’m not actually a fan of Budweiser. It’s a rice-based beer and I don’t care for lagers in general, being more of an ale fan (and fan of hops!). But I was in St. Louis, I am a beer fan, and the idea of visiting the home of the biggest beer company in the world was compelling. We did the Day Fresh Brewery Tour. We got a couple of samples throughout the tour, and I have to admit I was pleasantly surprised by the Michelob AmberBock; while still quite mild for a Bock, it was smooth and didn’t have any unpleasant aftertaste. The tour gave us an opportunity to visit the Budweiser Clydesdales, who have a pretty amazing building to live in (beautiful, heated, wood paneling, nicer than most human houses!). From there we toured the Brew House and Clock Tower and finally the BEVO Packaging Plant, where they have their canning and bottling lines which run 24/7. But my favorite thing on the whole tour? The hop vine chandeliers in the historical brew house. Our tour guide told us the company bought them from the 1904 World’s Fair.

More photos from the brewery tour here: https://www.flickr.com/photos/pleia2/sets/72157647696859563/

And with that, my trip wound down. We snagged some roast beef sandwiches to enjoy with a movie before I went to bed early to be up at 4AM for my 5:50AM flight back home via Denver. Huge thanks to Ryan for putting me up in his guest room for the long weekend and driving me around town as we did our whirlwind tour of his city!

More generic St. Louis photos collected in this album: https://www.flickr.com/photos/pleia2/sets/72157647695764113/

by pleia2 at December 30, 2014 07:44 PM

December 24, 2014

Elizabeth Krumbach

I think I’ll go for a… oh bother

Last December I wrote about taking up running. I had some fantastic weeks; I was gaining stamina and finding actual value in my newfound ability to run (late to the train? I can run!). I never really grew to like it, and as I got up to 25 minutes of solid (even if slow) running I really had to push myself, but things were going well.

Then, in April, I got sick. This kicked off my whole gallbladder ordeal. Almost 4 months of constant pain, and changes in my diet to avoid triggering more pain. Running was out entirely; anything that bounced me around that much was not tolerable. The diet changes tended toward carbs, and away from meats and fats. The increased carbs and the death of my exercise routine were a disaster for me weight-wise. Add on the busiest travel year of my life and all the stress and poor eating choices that come with travel, and I’ve managed to put on 30lbs this year, landing me at the heaviest I’ve ever been.

I don’t feel good about this.

By September I was recovered enough to start running again, but sneaking exercise discipline into my travel schedule proved tricky. They also don’t tell you how much harder it is to exercise when you’re heavy – all that extra weight to carry around! Particularly as I run, soreness in my feet has been my key issue, where previously I’d only had trouble with joint (knee) pain here and there. I picked up running again for a couple of weeks in late November, but then the rain started in San Francisco. December has been unusually soggy. One of the reasons I picked running as my exercise of choice was because we actually have nice weather most of the time, so this was quite the disappointment.

But I haven’t given up! I did start Couch to 5k over, but so far it’s not nearly as hard as the first time around, so I didn’t lose all the ground I gained earlier in the year. Here’s to 2015 being a healthier year for me.

by pleia2 at December 24, 2014 02:05 AM

Simcoe’s December 2014 Checkup

Simcoe was diagnosed with Chronic Renal Failure (CRF) back in December of 2011, so it’s been a full three years since her diagnosis!

Still, she doesn’t enjoy the quarterly vet visits. We took her in on December 6th and she was determined to stay in her carrier and not look at me.


“I’m mad at you”

We’re keeping up with subcutaneous fluid injections every other day to keep her hydrated, and it has been keeping her pretty stable. This latest round of tests did show a slight decrease in her weight from 9.94lbs to 9.74lbs.

Weight

Her BUN level remained steady, and CRE rose a bit from 3.8 to 4.2.

BUN: 59 (normal range: 14-36)
CRE: 4.2 (normal range: .6-2.4)

Her calcium levels also came back a little high, so we scheduled some fasted blood work for this past weekend. We took the opportunity to also bring Caligula in for his annual exam.

Caligula is doing well, he just turned 11 years old and our only concern was some staining on his iris, which the vet took a look at and confirmed was just pigmentation changes that are common with aging. His blood work looks good, though also shows some slightly elevated calcium levels.


Simcoe was taken in the back with the carrier, Caligula got the leash

We still have one follow-up call with Simcoe’s vet to chat about the calcium levels, but the vet on duty who delivered the results didn’t seem concerned since they’ve been elevated for some time and are just slightly above normal.

The only other current struggle is supplies. Following some quality control issues with one of the manufacturers, the Lactated Ringer’s solution we give subcutaneously went through a period of severe shortage (article here). The market seems to be recovering, but we’re now navigating a world with different bag manufacturers and canceled out-of-stock orders from our pharmacy. I’m hoping 2015 will be a better year with regard to this shortage; it wasn’t only kitties who were impacted by this problem!

by pleia2 at December 24, 2014 01:21 AM

Jono Bacon

Happy Holidays

Just a quick note to wish all of you a happy, restful, and peaceful holiday season, however and whoever you spend it with. Take care, folks, and I look forward to seeing you in 2015!

by jono at December 24, 2014 12:24 AM

December 22, 2014

Akkana Peck

Passwordless ssh with a key: the part most tutorials skip

I'm working on my Raspberry Pi crittercam again. I got a battery, so it can be a standalone box -- it was such a hassle to set it up with two power cords dangling from it at all times -- and set it up to run automatically at boot time.

But there was one aspect of the camera that wasn't automated: if it's close enough to the house to see the wi-fi router, I want it to mount a filesystem from our server and store its image files there. That makes it a lot easier to check on its progress, and also saves wear on the Pi's SD card.

Only one problem: I was using sshfs to mount the disk remotely, and ssh always prompts me for a password.

Now, there are a gazillion tutorials on how to set up an ssh key. Just do a web search for ssh key or passwordless ssh key. They vary a bit in their details, but they're all the same in the important aspects. They're all the same in one other detail: none of them work for me. I generate a new key (various types) with no pass phrase, I copy it to the server's authorized keys file (several different ways, two possible filenames), I try to ssh -- and I'm prompted for a password.

After much flailing I finally found out what was missing. In addition to those two steps, you need to modify your .ssh/config file to tell it which key to use. This is especially critical if you have multiple keys on the client machine, or if you've named the file anything but the default id_dsa or id_rsa.

So here are the real steps for making an ssh key. Assume the server, the machine to which you want to ssh, is named "myserver". But these steps are all run on the client machine, the one from which you want to run ssh.

ssh-keygen -t rsa -C "Comment"
When it prompts you for a filename, give it a full pathname, e.g. ~/.ssh/id_rsa_myserver. Type in a pass phrase, or hit return twice if you want to be able to ssh without a password.
ssh-copy-id -i .ssh/id_rsa_myserver user@myserver
You can omit the user@ if you're using the same username on both machines. You'll have to type in your password on myserver.

Then edit ~/.ssh/config, and add an entry like this:

Host myserver
  User my_username
  IdentityFile ~/.ssh/id_rsa_myserver
The User line is optional, and refers to your username on myserver if it's different from the one on the client. For instance, on the Raspberry Pi, everything has to run as root because most of the hardware and camera libraries can't work any other way. But I want it using my user ID on the server side, not root.
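
With that Host entry in place, sshfs gets the passwordless behavior for free, since it just runs ssh underneath and picks up the same config. A minimal sketch, assuming the server exports a directory at /export/crittercam and an empty /mnt/crittercam mount point exists on the client (both paths are made up for illustration):

# sshfs goes through ssh, so the Host alias, username and key from
# ~/.ssh/config are all used automatically
sshfs myserver:/export/crittercam /mnt/crittercam

# unmount with fusermount when finished
fusermount -u /mnt/crittercam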

Eliminating strict host key checking

Of course, you can use this to go the other way too, and ssh to your Pi without needing to type a password every time. If you do that, and if you have several Pis, Beaglebones, plug computers or other little Linux gizmos which sometimes share the same IP address, you may run into the annoying whine ssh is prone to:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
The only way to get around this once it happens is by editing ~/.ssh/known_hosts, finding the line corresponding to the pi, and removing it (or just removing the whole file).
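
(As an aside, ssh-keygen can do that edit for you; something like this should remove the stale entries for a host named pi without touching the rest of the file, leaving the original behind as known_hosts.old:

ssh-keygen -R pi
)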

You're supposed to be able to turn off this check with StrictHostKeyChecking no, but it doesn't work. Fortunately, there's a trick I discovered several years ago and discussed in Three SSH tips. Here's how the Pi entry ends up looking in my desktop's ~/.ssh/config:

Host pipi
  HostName pi
  User pi
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  IdentityFile ~/.ssh/id_pi
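
For a quick one-off connection to a gadget that isn't worth a config entry, the same pair of options can be passed directly on the command line (the address here is just a placeholder):

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null pi@192.168.1.50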

December 22, 2014 11:25 PM

December 18, 2014

Akkana Peck

Firefox deprecates flash. How to get it back (on Debian).

Recently Firefox started refusing to run flash, including youtube videos (about the only flash I run). A bar would appear at the top of the page saying "This plug-in is vulnerable and should be upgraded". Apparently Adobe had another security bug. There's an "Update now" button in the Firefox bar, but it's a chimera: Firefox has never known how to install plug-ins for Linux (there are longstanding bugs filed on why it claims to be able to but can't), and it certainly doesn't know how to update a Debian package.

I use a Firefox downloaded from Mozilla.org, but flash from Debian's flashplugin-nonfree package. So I figured updating Debian -- apt-get update; apt-get dist-upgrade -- would fix it. Nope. I still got the same message.

A little googling found several pages recommending update-flashplugin-nonfree --install; I tried that but it didn't help either. It seemed to download a tarball, but as far as I could tell it never unpacked or installed the tarball it downloaded.

What finally did the trick was

apt-get install --reinstall flashplugin-nonfree
That downloaded a new tarball, AND unpacked and installed it. After restarting Firefox, I was able to view the video I'd been trying to watch.
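
If you want to double-check what actually got installed, the package's helper script has a status mode that should report the installed and upstream Flash versions (check its man page if your version differs), and Firefox's about:plugins page shows what the browser actually loaded:

update-flashplugin-nonfree --status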

December 18, 2014 10:21 PM

December 17, 2014

Jono Bacon

The Impact of One Person

I am 35 years old and people never cease to surprise me. My trip home from Los Angeles today was a good example of this.

It was a tortuous affair that should have been a quick hop from LA to Oakland, popping on BART, and then getting home for a cup of tea and an episode of The Daily Show.

It didn’t work out like that.

My flight was delayed. Then we sat on the tarmac for an hour. Then the new AirBart train was delayed. Then I was delayed at the BART station in Oakland for 30 minutes. Throughout this I was tired, it was raining, and my patience was wearing thin.

Through the duration of this chain of minor annoyances, I was reading about the horrifying school attack in Pakistan. As I read more, related articles were linked with other stories of violence, aggression, and rape, perpetrated by the dregs of our species.

As anyone who knows me will likely testify, I am a generally pretty positive guy who sees the good in people. I have baked my entire philosophy in life and focus in my career upon the core belief that people are good and the solutions to our problems and the doors to opportunity are created by good people.

On some days though, even the strongest sense of belief in people can be tested when reading about events such as this dreadful act of violence in Pakistan. My seemingly normal trip home from the office in LA just left me disappointed in people.

While standing at the BART station I decided I had had enough and called an Uber. I just wanted to get home and see my family. This is when my mood changed entirely.

Gerald

A few minutes later, my Uber arrived, and I was picked up by an older gentleman called Gerald. He put my suitcase in the trunk of his car and off we went.

We started talking about the Pakistan shooting. We both shared a desperate sense of disbelief at all those innocent children slaughtered. We questioned how anyone with any sense of humanity and emotion could even think about doing that, let alone going through with it. With a somber air filling the car, Gerald switched gears and started talking about his family.

He told me about his two kids, both of whom are in their mid-thirties. He doted on their accomplishments in their careers, their sense of balance and integrity as people, and his three beautiful grandchildren.

He proudly shared that he had shipped his grandkids’ Christmas presents off to them today (they are on the East Coast) so he wouldn’t miss the big day. He was excited about the joy he hoped the gifts would bring to them. His tone and sentiment were those of happiness and pride.

We exchanged stories about our families, our plans for Christmas, and how lucky we both felt to love and be loved.

While we were generations apart…our age, our experiences, and our differences didn’t matter. We were just proud husbands and fathers who were cherishing the moments in life that were so important to both of us.

We arrived at my home and I told Gerald that until I stepped in his car I was having a pretty shitty trip home and he completely changed that. We shook hands, shared Christmas best wishes, and parted ways.

Good People

What I was expecting to be a typical Uber ride home with me exchanging a few pleasantries and then doing email on my phone, instead really illuminated what is important in life.

We live in complex world. We live on a planet with a rich tapestry of people and perspectives.

Evil people do exist. I am not referring to a specific religious or spiritual definition of evil, but instead the extreme inverse of the good we see in others.

There are people who can hurt others, who can so violently shatter innocence and bring pain to hundreds, so brutally, and so unnecessarily. I can’t even imagine what the parents of those kids are going through right now.

It can be easy to focus on these tragedies and to think that our world is getting worse; to look at the full gamut of negative humanity, from the inconsequential, such as the miserable lady yelling at the staff at the airport, to the hateful, such as the violence directed at innocent children. It is easy to assume that our species is rotting from the inside out, to see poison in the well, and that the rot is spreading.

While it is easy to lose faith in people, I believe our wider humanity keeps us on the right path.

While there is evil in the world, there is an abundance of good. For every evil person screaming there is a choir of good people who drown them out. These good people create good things, they create beautiful things that help others to also create good things and be good people too.

Like many of you, I am fortunate to see many of these things every day. I see people helping the elderly in their local communities, many donating toys to orphaned kids over the holidays, others creating technology and educational resources that help people to create new content, art, music, businesses, and more. Every day millions devote hours to helping and inspiring others to create a brighter future.

What is most important about all of this is that every individual, every person, every one of you reading this, has the opportunity to have this impact. These opportunities may be small and localized, or they may be large and international, but we can all leave this planet a little better than when we arrived on it.

The simplest way of doing this is to share our humanity with others and to cherish the good in the face of evil. The louder our choir, the weaker theirs.

Gerald did exactly that tonight. He shared happiness and opportunity with a random guy he picked up in his car and I felt I should pass that spirit on to you folks too. Now it is your turn.

Thanks for reading.

by jono at December 17, 2014 07:35 AM

December 16, 2014

Elizabeth Krumbach

Recent time between travel

This year has pretty much been consumed by travel and events. I’ll dive into that more in a wrap-up post in a couple weeks, but for now I’ll just note that it’s been tiring and I’ve worked to value my time at home as much as possible.

It’s been uncharacteristically wet here in San Francisco since I came home from Jamaica. We’re fortunate to have the rain since we’re currently undergoing a pretty massive drought here in California, but I would have been happier if it hadn’t all come at once! There was some flooding in our basement garage at the beginning (fortunately a leak was found and fixed) and we had possibly the first power outage since I moved here almost five years ago. Internet has had outages too, which could be a bit tedious work-wise even with a backup connection. All because of a few inches of rain that we’d not think anything of back in Pennsylvania, let alone during the kinds of winter storms I grew up with in Maine.

On Thanksgiving I got ambitious about my time at home and decided to actually make a full dinner. We’d typically either gone out or picked up prepared food somewhere, so this was quite a change from the norm. I skipped the full turkey and went with cutlets I prepared in a pan; the rest of the menu included the usual suspects: gravy, stuffing, mashed potatoes, cranberry sauce, green beans and rolls. I had leftovers for days. I also made MJ suffer with me through a Mystery Science Theater 3000 Turkey Day marathon, hah!

I’ve spent a lot of time catching up with project work in the past few weeks, following up on a number of my Xubuntu tasks and working through my Partimus backlog. Xubuntu-wise we’re working on a few contributor incentives, so I’m receiving a box of Xubuntu stickers in the mail soon, courtesy of UnixStickers.com, which I’ll be sending out to select QA contributors in the coming months. We’re also working on a couple of polls that can give us a better idea of who our user base is and how to serve them better. I also spent an afternoon in Alameda recently to meet with an organization that Partimus may partner with, and met up with the Executive Director this past weekend for a board meeting where we identified some organizational work for the next quarter.

At home I’ve been organizing the condo and I’m happy to report that the boxes are gone, even if working from home means I still have too much stuff around all the time. MJ took some time to set up our shiny new PlayStation 4 and several antennas, so our TV now has channels and we can get AM and FM radio. I’ll finally be able to watch baseball at home! I also got holiday cards sent out and some Hanukkah lights put up, so it’s feeling quite comfortable here.

Having time at home has also meant I’ve been able to make time for friends who’ve come into town to visit lately. Laura Czajkowski, who I’ve worked with for years in the Ubuntu community, was recently in town and we met up for dinner. I also recently had dinner with my friend BJ, who I know from the Linux scene back in Philadelphia, though we’ve both moved since. Now I just need to make more time for my local friends.

The holiday season has afforded us some time to dress up and go out, like to a recent holiday party by MJ’s employer.

Plus I’ve had the typical things to keep me busy outside of work: an Ubuntu Hour and Debian Dinner last week, and the Ubuntu Weekly Newsletter, which will hit issue 400 early next year. Plus, I have work on my book, which I wish were going faster, but it is coming along.

I have one more trip coming this year, off to St. Louis late next week. I’ll be spending a few days visiting with friends and traveling around a city I’ve never been to! This trip will put me over 100k miles for the calendar year, which is a pretty big milestone for me, and one I’m not sure I’ll reach again. Plans are still firming up for how my travel schedule will look next year, but I do have a couple of big international trips on the horizon that I’m excited about.

by pleia2 at December 16, 2014 05:24 AM

December 12, 2014

Eric Hammond

Exploring The AWS Lambda Runtime Environment

In the AWS Lambda Shell Hack article, I present a crude hack that lets me run shell commands in the AWS Lambda environment to explore what might be available to Lambda functions running there.

I’ve added a wrapper that lets me type commands on my laptop and see the output of the command run in the Lambda function. This is not production quality software, but you can take a look at it in the alestic/lambdash GitHub repo.

For the curious, here are some results. Please note that this is running on a preview and is in no way a guaranteed part of the environment of a Lambda function. Amazon could change any of it at any time, so don’t build production code using this information.

The version of Amazon Linux:

$ lambdash cat /etc/issue
Amazon Linux AMI release 2014.03
Kernel \r on an \m

The kernel version:

$ lambdash uname -a
Linux ip-10-0-168-157 3.14.19-17.43.amzn1.x86_64 #1 SMP Wed Sep 17 22:14:52 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

The working directory of the Lambda function:

$ lambdash pwd
/var/task

which contains the unzipped contents of the Lambda function I uploaded:

$ lambdash ls -l
total 12
-rw-rw-r-- 1 slicer 497 5195 Nov 18 05:52 lambdash.js
drwxrwxr-x 5 slicer 497 4096 Nov 18 05:52 node_modules

The user running the Lambda function:

$ lambdash id
uid=495(sbx_user1052) gid=494 groups=494

which is one of one hundred sbx_userNNNN users in /etc/passwd. “sbx_user” presumably stands for “sandbox user”.
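
If you want to verify that count for yourself, the same hack will happily run grep inside the environment (the exact number could of course differ between deployments):

$ lambdash 'grep -c "^sbx_user" /etc/passwd'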

The environment variables (in a shell subprocess). This appears to be how AWS Lambda is passing the AWS credentials to the Lambda function.

$ lambdash env
 AWS_SESSION_TOKEN=[ELIDED]
LAMBDA_TASK_ROOT=/var/task
LAMBDA_CONSOLE_SOCKET=14
PATH=/usr/local/bin:/usr/bin:/bin
PWD=/var/task
AWS_SECRET_ACCESS_KEY=[ELIDED]
NODE_PATH=/var/runtime:/var/task:/var/runtime/node_modules
AWS_ACCESS_KEY_ID=[ELIDED]
SHLVL=1
LAMBDA_CONTROL_SOCKET=11
_=/usr/bin/env

The versions of various pre-installed software:

$ lambdash perl -v
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
[...]

$ lambdash python --version
Python 2.6.9

$ lambdash node -v
v0.10.32

Running processes:

$ lambdash ps axu
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
493          1  0.2  0.7 1035300 27080 ?       Ssl  14:26   0:00 node --max-old-space-size=0 --max-new-space-size=0 --max-executable-size=0 /var/runtime/node_modules/.bin/awslambda
493         13  0.0  0.0  13444  1084 ?        R    14:29   0:00 ps axu

The entire file system: 2.5 MB download

 $ lambdash ls -laiR /
 [click link above to download]

Kernel ring buffer: 34K download

$ lambdash dmesg
[click link above to download]

CPU info:

$ lambdash cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 62
model name  : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
stepping    : 4
microcode   : 0x416
cpu MHz     : 2800.110
cache size  : 25600 KB
physical id : 0
siblings    : 2
core id     : 0
cpu cores   : 1
apicid      : 0
initial apicid  : 0
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
bogomips    : 5600.22
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
[...]

Installed nodejs modules:

$ dirs=$(lambdash 'echo $NODE_PATH' | tr ':' '\n' | sort)
$ echo $dirs
/var/runtime /var/runtime/node_modules /var/task

$ lambdash 'for dir in '$dirs'; do echo $dir; ls -1 $dir; echo; done'
/var/runtime
node_modules

/var/runtime/node_modules
aws-sdk
awslambda
dynamodb-doc
imagemagick

/var/task # Uploaded in Lambda function ZIP file
lambdash.js
node_modules

[Update 2014-12-03]

We’re probably not on a bare EC2 instance. The standard EC2 instance metadata service is not accessible through HTTP:

$ lambdash curl -sS http://169.254.169.254:8000/latest/meta-data/instance-type
curl: (7) Failed to connect to 169.254.169.254 port 8000: Connection refused

Browsing the AWS Lambda environment source code turns up some nice hints about where the product might be heading. I won’t paste the copyrighted code here, but you can download into an “awslambda” subdirectory with:

$ lambdash 'cd /var/runtime/node_modules;tar c awslambda' | tar xv

[Update 2014-12-11]

There’s a half gig of writable disk space available under /tmp (when run with 256MB of RAM; does this scale up with memory?):

$ lambdash 'df -h 2>/dev/null'
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       30G  1.9G   28G   7% /
devtmpfs         30G  1.9G   28G   7% /dev
/dev/xvda1       30G  1.9G   28G   7% /
/dev/loop0      526M  832K  514M   1% /tmp

Anything else you’d like to see? Suggest commands in the comments on this article.

Original article: http://alestic.com/2014/11/aws-lambda-environment

by Eric Hammond at December 12, 2014 05:07 AM

December 10, 2014

Akkana Peck

Not exponential after all

We're saved! From the embarrassing slogan "Live exponentially", that is.

Last night the Los Alamos city council voted to bow to public opinion and reconsider the contract to spend $50,000 on a logo and brand strategy based around the slogan "Live Exponentially." Though nearly all the councilors (besides Pete Sheehey) said they still liked the slogan, and made it clear that the slogan isn't for residents but for people in distant states who might consider visiting as tourists, they now felt that basing a campaign around a theme nearly all of the residents revile was not the best idea.

There were quite a few public comments (mine included); everyone was civil and sensible and stuck well under the recommended 3-minute time limit.

Instead, the plan is to go ahead with the contract, but ask the ad agency (Atlas Services) to choose two of the alternate straplines from the initial list of eight that North Star Research had originally provided.

Wait -- eight options? How come none of the previous press or the previous meeting mentioned that there were options? Even in the 364 page Agenda Packets PDF provided for this meeting, there was no hint of that report or of any alternate strap lines.

But when they displayed the list of eight on the board, it became a little clearer why they didn't want to make the report public: they were embarrassed to have paid for work of this quality. Check out the list:

  • Where Everything is Elevated
  • High Intelligence in the High Desert
  • Think Bigger. Live Brighter.
  • Great. Beyond.
  • Live Exponentially
  • Absolutely Brilliant
  • Get to a Higher Plane
  • Never Stop Questioning What's Possible

I mean, really. Great Beyond? Are we all dead? High Intelligence in the High Desert? That'll certainly help with people who think this might be a bunch of snobbish intellectuals.

It was also revealed that at no point during the plan was there ever any sort of focus group study or other tests to see how anyone reacted to any of these slogans.

Anyway, after a complex series of motions and amendments and counter-motions and amendments and amendments to the amendments, they finally decided to ask Atlas to take the above list, minus "Live Exponentially"; add the slogan currently displayed on the rocks as you drive into town, "Where Discoveries are Made" (which came out of a community contest years ago and is very popular among residents); and ask Atlas to choose two from the list to make logos, plus one logo that has no slogan at all attached to it.

If we're lucky, Atlas will pick Discoveries as one of the slogans, or maybe even come up with something decent of their own.

The chicken ordinance discussion went well, too. They amended the ordinance to allow ten chickens (instead of six) and to try to allow people in duplexes and quads to keep chickens if there's enough space between the chickens and their neighbors. One commenter asked for the "non-commercial" clause to be struck because his kids sell eggs from a stand, like lemonade, which sounded like a very reasonable request (nobody's going to run a large commercial egg ranch with ten chickens); but it turned out there's a state law requiring permits and inspections to sell eggs.

So, folks can have chickens, and we won't have to live exponentially. I'm sure everyone's breathing a little more easily now.

December 10, 2014 11:27 PM

December 09, 2014

Eric Hammond

Persistence Of The AWS Lambda Environment Between Function Invocations

AWS Lambda functions are run inside of an Amazon Linux environment (presumably a container of some sort). Sequential calls to the same Lambda function could hit the same or different instantiations of the environment.

If you hit the same copy (I don’t want to say “instance”) of the Lambda function, then stuff you left in the environment from a previous run might still be available.

This could be useful (think caching) or hurtful (if your code incorrectly expects a fresh start every run).

Here’s an example using lambdash, a hack I wrote that sends shell commands to a Lambda function to be run in the AWS Lambda environment, with stdout/stderr being sent back through S3 and displayed locally.

$ lambdash 'echo a $(date) >> /tmp/run.log; cat /tmp/run.log'
a Tue Dec 9 13:54:50 PST 2014

$ lambdash 'echo b $(date) >> /tmp/run.log; cat /tmp/run.log'
a Tue Dec 9 13:54:50 PST 2014
b Tue Dec 9 13:55:00 PST 2014

$ lambdash 'echo c $(date) >> /tmp/run.log; cat /tmp/run.log'
a Tue Dec 9 13:54:50 PST 2014
b Tue Dec 9 13:55:00 PST 2014
c Tue Dec 9 13:55:20 PST 2014

As you can see in this example, the file in /tmp contains content from previous runs.
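
A quick way to make the cold-versus-warm distinction visible is to probe for the leftover file before writing anything; this is just ordinary shell run through the same lambdash hack, so a fresh environment should report itself as such:

$ lambdash 'test -f /tmp/run.log && echo warm environment || echo cold environment'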

These tests are being run in AWS Lambda Preview, and should not be depended on for long term or production plans. Amazon could change how AWS Lambda works at any time for any reason, especially when the behaviors are not documented as part of the interface. For example, Amazon could decide to clear out writable file system areas like /tmp after each run.

If you want to have a dependable storage that can be shared among multiple copies of an AWS Lambda function, consider using standard AWS services like DynamoDB, RDS, ElastiCache, S3, etc.

Original article: http://alestic.com/2014/12/aws-lambda-persistence

by Eric Hammond at December 09, 2014 10:07 PM

December 08, 2014

Eric Hammond

AWS Lambda: Pay The Same Price For Faster Execution

multiply the speed of compute-intensive Lambda functions without (much) increase in cost

Given:

  • AWS Lambda duration charges are proportional to the requested memory.

  • The CPU power, network, and disk are proportional to the requested memory.

One could conclude that the charges are proportional to the CPU power available to the Lambda function. If the function completion time is inversely proportional to the CPU power allocated (not entirely true), then the cost remains roughly fixed as you dial up power to make it faster.

If your Lambda function is primarily CPU bound and takes at least several hundred ms to execute, then you may find that you can simply allocate more CPU by allocating more memory, and get the same functionality completed in a shorter time period for about the same cost.

For example, if you allocate 128 MB of memory and your Lambda function takes 10 seconds to run, then you might be able to allocate 640 MB and see it complete in about 2 seconds.

At current AWS Lambda pricing, both of these would cost about $0.02 per thousand invocations, but the second one completes five times faster.
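
The back-of-the-envelope math behind that figure, assuming the preview-era duration price of roughly $0.00001667 per GB-second (the per-request charge adds only about $0.0002 per thousand invocations on top):

# 128 MB for 10 s: 0.125 GB x 10 s x 1000 invocations = 1250 GB-seconds
$ echo '128/1024 * 10 * 1000 * 0.00001667' | bc -l    # => ~0.0208

# 640 MB for 2 s: 0.625 GB x 2 s x 1000 invocations = the same 1250 GB-seconds
$ echo '640/1024 * 2 * 1000 * 0.00001667' | bc -l     # => ~0.0208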

Things that would cause the higher memory/CPU option to cost more in total include:

  • Time chunks are rounded up to the nearest 100 ms. If your Lambda function already runs near or under 100 ms at a lower memory setting, then increasing the CPU allocated will make it return faster, but the rounding up will cause the resulting cost to be higher.

  • Doubling the CPU allocated to a Lambda function does not necessarily cut the run time in half. The code might be accessing external resources (e.g., calling S3 APIs) or interacting with disk. If you double the requested CPU, then those fixed time actions will end up costing twice as much.

If you have a slow Lambda function, and it seems that most of its time is probably spent in CPU activities, then it might be worth testing an increase in requested memory to see if you can get it to complete much faster without increasing the cost by much.

I’d love to hear what practical test results people find when comparing different memory/CPU allocation values for the same Lambda function.

Original article: http://alestic.com/2014/11/aws-lambda-speed

by Eric Hammond at December 08, 2014 05:54 PM

Elizabeth Krumbach

My father passed away 10 years ago

It’s December 7th, which marks 10 years since my father passed away. In the past decade I’ve had much to reflect on about his life.

When he passed away I was 23 and had bought a house in the suburbs of Philadelphia. I had just transitioned from doing web development contract work to working various temp jobs to pay the bills. It was one of those temp jobs that I went to the morning after I learned my father had passed, because I didn’t know what else to do, I learned quickly that people tend to take a few days off when they have such a loss and why. The distance from home made it challenging to work through the loss, as is seen in my blog post from the week it happened, I felt pretty rutterless.

My father had been an inspiration for me. He was always making things, had a wood workshop where he’d build dollhouses, model planes, and even a stable for my My Little Ponies. He was also a devout Tolkien fan, making The Hobbit a more familiar story for me growing up than Noah’s Ark. I first saw and fell in love with Star Wars because he was a big scifi fan. My passion for technology was sparked when his brother at IBM shipped us our first computer and he told me stories about talking to people from around the world on his HAM radios. He was also an artist, with his drawings of horses being among my favorites growing up. Quite the Renaissance man. Just this year, when my grandmother passed, I was honored received several of his favorite things that she had kept, including a painting that hung in our house growing up, a video of his time at college and photos that highlighted his love of travel.

He was also very hard on me. Every time I excelled, he pushed harder. Unfortunately it felt like “I could never do good enough” when in fact I now believe he pushed me for my own good, I could usually take it and I’m ultimately better for it. I know he was also supremely disappointed that I never went to college, something that was very important to him. This all took me some time to reconcile, but deep down I know my father loved my sisters and I very much, and regardless of what we accomplished I’m sure he’d be proud of all of us.

And he struggled with alcoholism. It’s something I’ve tended to gloss over in most public discussions about him because it’s so painful. It’s had a major impact on my life, I’m pretty much as text book example of “eldest child of an alcoholic” as you can get. It also tore apart my family and inevitably lead to my father’s death from cirrhosis of the liver. For a long time I was angry with him. Why couldn’t he give it up for his family? Not even to save his own life? I’ve since come to understand that alcoholism is a terrible, destructive thing and for many people it’s a lifelong battle that requires a tremendous amount of support from family and community. While I may have gotten genetic fun bag of dyslexia, migraines and seizures from my father, I’m routinely thankful I didn’t inherit the predisposition toward alcoholism.

And so, on this sad anniversary, I won’t be having an drink to his life. Instead I think I’ll honor his memory by spending the evening working on one of the many projects that his legacy inspired and brings me so much joy. I love you, Daddy.

by pleia2 at December 08, 2014 01:49 AM

Akkana Peck

My Letter to the Editor: Make Your Voice Heard On 'Live Exponentially'

More on the Los Alamos "Live Exponentially" slogan saga: There's been a flurry of letters, all opposed to the proposed slogan, in the Los Alamos Daily Post these last few weeks.

And now the issue is back on the council agenda; apparently they're willing to reconsider the October vote to spend another $50,000 on the slogan.

But considering that only two people showed up to that October meeting, I wrote a letter to the Post urging people to speak before the council: Letter to the Editor: Attend Tuesday's Council Meeting To Make Your Voice Heard On 'Live Exponentially'.

I'll be there. I've never actually spoken at a council meeting before, but hey, confidence in public speaking situations is what Toastmasters is all about, right?

(Even though it means I'll have to miss an interesting sounding talk on bats that conflicts with the council meeting. Darn it!)

A few followup details that I had no easy way to put into the Post letter:

The page with the links to Council meeting agendas and packets is here: Los Alamos County Calendar.

There, you can get the short Agenda for Tuesday's meeting, or the full 364 page Agenda Packets PDF.

The branding section covers pages 93 - 287. But the graphics the council apparently found so compelling, which swayed several of them from initially not liking the slogan to deciding to spend a quarter million dollars on it, are in the final presentation from the marketing company, starting on p. 221 of the PDF.

In particular, a series of images like this one, with the snappy slogan:

Breathtaking raised to the power of you
LIVE EXPONENTIALLY

That's right: the advertising graphics that were so compelling they swayed most of the council are even dumber than the slogan by itself. Love the superscript on the you that makes it into an exponent. Get it ... exponentially? Oh, now it all makes sense!

There's also a sadly funny "Written Concept" section just before the graphics (pages 242- in the PDF) where they bend over backward to work in scientific-sounding words, in bold each time.

But there you go. Hopefully some of those Post letter writers will come to the meeting and let the council know what they think.

The council will also be discussing the much debated proposed chicken ordinance; that discussion runs from page 57 to 92 of the PDF. It's a non-issue for Dave and me since we're in a rural zone that already allows chickens, but I hope they vote to allow them everywhere.

December 08, 2014 01:05 AM

December 03, 2014

Elizabeth Krumbach

December 2014 OpenStack Infrastructure User Manual Sprint

Back in April, the OpenStack Infrastructure project created the Infrastructure User Manual. This manual sought to consolidate our existing documentation for Developers, Core Reviewers and Project Drivers, which was spread across wiki pages, project-specific documentation files and general institutional knowledge that was mostly just in our brains.

Books

In July, at our mid-cycle sprint, Anita Kuno drove a push to start getting this document populated. There was some success here, and we had a couple of new contributors. Unfortunately, after the mid-cycle, reviews only trickled in and vast segments of the manual remained empty.

At the summit, we had a session to plan out how to change this and announced an online sprint in the new #openstack-sprint channel (see here for scheduling: https://wiki.openstack.org/wiki/VirtualSprints). We hosted the sprint on Monday and Tuesday of this week.

Over these 2 days we collaborated on an etherpad so no one was duplicating work and we all did a lot of reviewing. Contributors worked to flesh out missing pieces of the guide and added a Project Creator’s section to the manual.

We’re now happy to report that, with the exception of the Third Party section of the manual (to be worked on collaboratively with the broader Third Party community at a later date), our manual is looking great!

The following are some stats about our sprint gleaned from Gerrit and Stackalytics:

Sprint start

  • Patches open for review: 10
  • Patches merged in total repo history: 13

Sprint end:

  • Patches open for review: 3, plus 2 WIP (source)
  • Patches merged during sprint: 30 (source)
  • Reviews: Over 200 (source)

We also have 16 patches for documentation in flight that were initiated or reviewed elsewhere in the openstack-infra project during this sprint, including the important reorganization of the git-review documentation (source).

Finally, thanks to the participants who joined me for this sprint, sorted chronologically by reviews: Andreas Jaeger, James E. Blair, Anita Kuno, Clark Boylan, Spencer Krum, Jeremy Stanley, Doug Hellmann, Khai Do, Antoine Musso, Stefano Maffulli, Thierry Carrez and Yolanda Robla.

by pleia2 at December 03, 2014 04:30 PM

Jono Bacon

Feedback Requested: Great Examples of Community

Folks, I need to ask for some help.

Like many, I have some go-to examples of great communities. This includes Wikipedia, OpenStreetmap, Ubuntu, Debian, Linux, and others. Many of these are software related, many of them are Open Source.

I would like to ask your feedback for other examples of great communities. These don’t have to be software-related…in fact I would love to see examples of great communities in other areas and disciplines.

They could be collaborative communities, communities that share a common interest, communities that process big chunks of data, communities that inspire and educate certain groups (e.g. kids, the under-privileged), or anything else.

I am looking for inspiring examples that get to the heart of what makes communities beautiful. These don’t have to be huge and elaborate communities, they just need to demonstrate the power of people sharing a mission or ethos and doing interesting things.

Please share your examples in the comments, and in doing so, please share the following:

  • The name of the community
  • A web address / contact person
  • Overview of the community, what it does, and why you feel it is special

Thanks!

by jono at December 03, 2014 06:13 AM

Eric Hammond

Before You Buy Amazon EC2 (New) Reserved Instances

understand the commitment you are making to pay for the entire 1-3 years

Amazon just announced a change in the way that Reserved Instances are sold. Instead of selling the old Reserved Instance types:

  • Light Utilization
  • Medium Utilization
  • Heavy Utilization

EC2 is now selling these new Reserved Instance types:

  • No Upfront
  • Partial Upfront
  • All Upfront

Despite the fact that they are still called “Reserved Instances” and that there are three plans which sound like increasing commitment, they are not equivalent and do not map 1-1 from old to new. In fact, the new Reserved Instance types do not even represent increasing levels of commitment.

You should forget what you knew about Reserved Instances and read all the fine print before making any further Reserved Instance purchases.

One of the big differences between the old and the new is that you are always committing to spend the entire 1-3 years of cost even if you are not running a matching instance during part of that time. This text is buried in the fine print in a “**” footnote towards the bottom of the pricing page:

When you purchase a Reserved Instance, you are billed for every hour during the entire Reserved Instance term that you select, regardless of whether the instance is running or not.

As I pointed out in the 2012 article titled Save Money by Giving Away Unused Heavy Utilization Reserved Instances, this was also true of Heavy Utilization Reserved Instances, but with the old Light and Medium Utilization Reserved Instances you stopped spending money by stopping or terminating your instance.

Let’s walk through an example with the new EC2 Reserved Instance prices. Say you expect to run a c3.2xlarge for a year. Here are some options at the prices when this article was published:

Pricing Option      Cost Structure                Yearly Cost      Savings over On Demand
On Demand           $0.420/hour                   $3,679.20/year
No Upfront RI       $213.16/month                 $2,557.92/year   30%
Partial Upfront RI  $1,304 once + $75.92/month    $2,215.04/year   40%
All Upfront RI      $2,170 once                   $2,170.00/year   41%

There’s a big jump in yearly savings from On Demand to the Reserved Instances, and then an increasing (but sometimes small) savings the more of the total cost you pay up front. The percentage savings varies by instance type, so read up on the pricing page.
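
The yearly figures in the table are easy to reproduce from the quoted rates, for example with bc:

$ echo '0.420 * 24 * 365' | bc      # On Demand: 3679.200 per year
$ echo '213.16 * 12' | bc           # No Upfront RI: 2557.92 per year
$ echo '1304 + 75.92 * 12' | bc     # Partial Upfront RI: 2215.04 per year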

The big difference is that you can stop paying the On Demand price if you decide you don’t need that instance running, or you figure out that the application can work better on a larger (or smaller) instance type.

With all new Reserved Instance pricing options, you commit to paying the entire year’s cost. The only difference is how much of it you pay up front and how much you pay over the next 12 months.

If you purchase a Reserved Instance and decide you don’t need it after a while, you may be able to sell it (perhaps at some loss) on the Reserved Instance Marketplace, but your odds of completing a sale and the money you get back from that are not guaranteed.

Original article: http://alestic.com/2014/12/ec2-reserved-instances

by Eric Hammond at December 03, 2014 12:23 AM

December 02, 2014

Akkana Peck

Ripping a whole CD on Linux

I recently discovered that my ancient stereo turntable didn't survive our move. So all those LPs I brought along, intending to rip to mp3 when I had more time, will never see bits.

So I need to buy new versions of some of that old music. In particular, I'd lately been wanting to listen to my old Flanders and Swann albums. Flanders and Swann were a terrific comedy music duo (think Tom Lehrer only less scientifically oriented) from the 1960s.

So I ordered a CD of The Complete Flanders & Swann, which contains all three of the albums I inherited from my parents. Woohoo! I ran a little script I have that rips a whole CD to a directory of separate MP3 songs, and I was all set.

Until I listened to it. It turns out that when the LP album was turned into a CD, they put the track breaks in the wrong place. These albums are recordings of live performances. Each song has a spoken intro, giving a little context for the song that follows. On the CD, each track starts with a song, and ends with the spoken intro for the next song. That's no problem if you always listen to whole albums in order. But I like to play individual tracks, or listen to music on random play. So this wasn't going to work at all.

I tried using audacity to copy the intro from the end of one track and paste it onto the beginning of another. That worked, but it was tedious and fiddly. A little research showed me a much better way.

First: Rip the whole CD

First I needed to rip the whole CD as one gigantic track. My script had been running cdparanoia tracknumber filename.wav. But it took some study of the cdparanoia manual before I finally found the way to rip a whole CD to one track: you can specify a range of tracks, starting at 0 and omitting the end track.

cdparanoia 0- outfile.wav

Use Audacity to split and save the tracks

Now what's the best way to split a recording into separate tracks? Fortunately the Audacity manual has a nice page on that very subject: Splitting a recording into separate tracks.

Mostly, the issue is setting labels -- with Tracks->Add Label at Selection or Tracks->Add Label at Playback Position. Use Ctrl-1 to zoom as much as you need to see where the short pauses are. Then listen to the audio, pausing or clicking and setting labels appropriately.

It's a bit fiddly. For instance, if you pause your listening to set a label, you might want to save the audacity project so you don't lose the label positions you've set so far. But you can't save unless you Stop the playback; and that loses the current playback position which you may not yet have set a label for. Even if you have set a label for it, you'll need to click to set the selection to the label you just made if you want to continue playing from where you left off. It all seems a little silly and unintuitive ... but after a few tries you'll find a routine that works for you.

When all your labels are set, then File->Export Multiple.... You will have to go through a bunch of dialogs involving metadata for each track; just hit return, since audacity ignores any metadata you type in and won't actually write it to the MP3 file. I have no idea why it always prompts for metadata then doesn't use it, but you can use a program like id3tool later to add proper metadata to the tracks.
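For example, tagging one of the exported tracks afterward might look something like this. It's only a sketch: the option names and the tag values below are assumptions, so check id3tool --help for the flags your version actually supports.

# Hypothetical example (verify flags with id3tool --help):
# set basic ID3 tags on one exported track
id3tool --set-artist="Flanders and Swann" \
        --set-album="At the Drop of a Hat" \
        --set-title="The Gnu Song" \
        04-the-gnu-song.mp3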

So, no, the tools aren't perfect. On the other hand, I now have a nice set of Flanders and Swann tracks, and can listen to Misalliance, Ill Wind and The GNU Song complete with their proper introductions.

December 02, 2014 08:35 PM

December 01, 2014

Eric Hammond

S3 Bucket Notification to SQS/SNS on Object Creation

A fantastic new and oft-requested AWS feature was released during AWS re:Invent, but has gotten lost in all the hype about AWS Lambda functions being triggered when objects are added to S3 buckets. AWS Lambda is currently in limited Preview mode and you have to request access, but this related feature is already available and ready to use.

I’m talking about automatic S3 bucket notifications to SNS topics and SQS queues when new S3 objects are added.

Unlike AWS Lambda, with S3 bucket notifications you do need to maintain the infrastructure to run your code, but you’re already running EC2 instances for application servers and job processing, so this will fit right in.

To detect and respond to S3 object creation in the past, you needed to either have every process that uploaded to S3 subsequently trigger your back end code in some way, or you needed to poll the S3 bucket to see if new objects had been added. The former adds code complexity and tight coupling dependencies. The latter can be costly in performance and latency, especially as the number of objects in the bucket grows.

With the new S3 bucket notification configuration options, the addition of an object to a bucket can send a message to an SNS topic or to an SQS queue, triggering your code quickly and effortlessly.

Here’s a working example of how to set up and use S3 bucket notification configurations to send messages to SNS on object creation and update.

Setup

Replace parameter values with your preferred names.

region=us-east-1
s3_bucket_name=BUCKETNAMEHERE
email_address=YOURADDRESS@EXAMPLE.COM
sns_topic_name=s3-object-created-$(echo $s3_bucket_name | tr '.' '-')
sqs_queue_name=$sns_topic_name

Create the test bucket.

aws s3 mb \
  --region "$region" \
  s3://$s3_bucket_name

Create an SNS topic.

sns_topic_arn=$(aws sns create-topic \
  --region "$region" \
  --name "$sns_topic_name" \
  --output text \
  --query 'TopicArn')
echo sns_topic_arn=$sns_topic_arn

Allow S3 to publish to the SNS topic for activity in the specific S3 bucket.

aws sns set-topic-attributes \
  --topic-arn "$sns_topic_arn" \
  --attribute-name Policy \
  --attribute-value '{
      "Version": "2008-10-17",
      "Id": "s3-publish-to-sns",
      "Statement": [{
              "Effect": "Allow",
              "Principal": { "AWS" : "*" },
              "Action": [ "SNS:Publish" ],
              "Resource": "'$sns_topic_arn'",
              "Condition": {
                  "ArnLike": {
                      "aws:SourceArn": "arn:aws:s3:*:*:'$s3_bucket_name'"
                  }
              }
      }]
  }'

Add a notification to the S3 bucket so that it sends messages to the SNS topic when objects are created (or updated).

aws s3api put-bucket-notification \
  --region "$region" \
  --bucket "$s3_bucket_name" \
  --notification-configuration '{
    "TopicConfiguration": {
      "Events": [ "s3:ObjectCreated:*" ],
      "Topic": "'$sns_topic_arn'"
    }
  }'

Test

You now have an S3 bucket that is going to post a message to an SNS topic when objects are added. Let’s give it a try by connecting an email address listener to the SNS topic.

Subscribe an email address to the SNS topic.

aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol email \
  --notification-endpoint "$email_address"

IMPORTANT! Go to your email inbox now and click the link to confirm that you want to subscribe that email address to the SNS topic.

Upload one or more files to the S3 bucket to trigger the SNS topic messages.

aws s3 cp [SOMEFILE] s3://$s3_bucket_name/testfile-01

Check your email for the notification emails in JSON format, containing attributes like:

{ "Records":[  
    { "eventTime":"2014-11-27T00:57:44.387Z",
      "eventName":"ObjectCreated:Put", ...
      "s3":{
        "bucket":{ "name":"BUCKETNAMEHERE", ... },
        "object":{ "key":"testfile-01", "size":5195, ... }
}}]}

Notification to SQS

The above example connects an SNS topic to the S3 bucket notification configuration. Amazon also supports having the bucket notifications go directly to an SQS queue, but I do not recommend it.

Instead, send the S3 bucket notification to SNS and have SNS forward it to SQS. This way, you can easily add other listeners to the SNS topic as desired. You can even have multiple SQS queues subscribed, which is not possible when using a direct notification configuration.

Here are some sample commands that create an SQS queue and connect it to the SNS topic.

Create the SQS queue and get the ARN (Amazon Resource Name). Some APIs need the SQS URL and some need the SQS ARN. I don’t know why.

sqs_queue_url=$(aws sqs create-queue \
  --queue-name $sqs_queue_name \
  --attributes 'ReceiveMessageWaitTimeSeconds=20,VisibilityTimeout=300'  \
  --output text \
  --query 'QueueUrl')
echo sqs_queue_url=$sqs_queue_url

sqs_queue_arn=$(aws sqs get-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attribute-names QueueArn \
  --output text \
  --query 'Attributes.QueueArn')
echo sqs_queue_arn=$sqs_queue_arn

Give the SNS topic permission to post to the SQS queue.

sqs_policy='{
    "Version":"2012-10-17",
    "Statement":[
      {
        "Effect":"Allow",
        "Principal": { "AWS": "*" },
        "Action":"sqs:SendMessage",
        "Resource":"'$sqs_queue_arn'",
        "Condition":{
          "ArnEquals":{
            "aws:SourceArn":"'$sns_topic_arn'"
          }
        }
      }
    ]
  }'
sqs_policy_escaped=$(echo $sqs_policy | perl -pe 's/"/\\"/g')
sqs_attributes='{"Policy":"'$sqs_policy_escaped'"}'
aws sqs set-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attributes "$sqs_attributes"

Subscribe the SQS queue to the SNS topic.

aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol sqs \
  --notification-endpoint "$sqs_queue_arn"

You can upload another test file to the S3 bucket, which will now generate both the email and a message to the SQS queue.

aws s3 cp [SOMEFILE] s3://$s3_bucket_name/testfile-02

Read the S3 bucket notification message from the SQS queue:

aws sqs receive-message \
  --queue-url $sqs_queue_url

The output of that command is not quite human readable as it has quoted JSON inside quoted JSON inside JSON, but your queue processing software should be able to decode it and take appropriate actions.
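If you want to poke at a message by hand, one way to unwrap the nesting is with jq. This is just a sketch, and it assumes the default SNS envelope (i.e., raw message delivery is not enabled on the subscription):

# Pull the S3 object key out of the SNS-wrapped notification in the SQS message
aws sqs receive-message \
  --queue-url "$sqs_queue_url" \
  --output json |
  jq -r '.Messages[0].Body | fromjson | .Message | fromjson | .Records[0].s3.object.key'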

You can tell the SQS queue that you have “processed” the message by grabbing the “ReceiptHandle” value from the above output and deleting the message.

sqs_receipt_handle=...
aws sqs delete-message \
  --queue-url "$sqs_queue_url" \
  --receipt-handle "$sqs_receipt_handle"

You only have a limited amount of time to process the message and delete it before SQS tosses it back in the queue for somebody else to process. This test queue gives you 5 minutes (VisibilityTimeout=300). If you go past this timeout, simply read the message from the queue and try again.
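If five minutes turns out to be too short for your processing, you could raise the queue’s default visibility timeout. A minimal sketch (the value is in seconds and applies to messages received after the change):

# Raise the default visibility timeout from 5 minutes to 15 minutes
aws sqs set-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attributes VisibilityTimeout=900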

Cleanup

Delete the SQS queue:

aws sqs delete-queue \
  --queue-url "$sqs_queue_url"

Delete the SNS topic (and all subscriptions).

aws sns delete-topic \
  --region "$region" \
  --topic-arn "$sns_topic_arn"

Delete test objects in the bucket:

aws s3 rm s3://$s3_bucket_name/testfile-01
aws s3 rm s3://$s3_bucket_name/testfile-02

Delete the bucket, but only if it was created for this test!

aws s3 rb s3://$s3_bucket_name

Note: There is currently no way that I’ve found to use the aws-cli to remove an S3 bucket notification configuration if you want to keep the bucket. This must be done through the S3 API or AWS console.

History / Future

If the concept of an S3 bucket notification sounds a bit familiar, it’s because AWS S3 has had it for years, but the only supported event type was “s3:ReducedRedundancyLostObject”, triggered when S3 lost an RRS object. Given the way that this feature was designed, we all assumed that Amazon would eventually add more useful events like “S3 object created”, which indeed they released a couple weeks ago.

I would continue to assume/hope that Amazon will eventually support an “S3 object deleted” event because it just makes too much sense for applications that need to keep track of the keys in a bucket.

Original article: http://alestic.com/2014/12/s3-bucket-notification-to-sqssns-on-object-creation

by Eric Hammond at December 01, 2014 06:16 PM

November 29, 2014

Elizabeth Krumbach

My Smart Watch

I wear a watch.

Like many people, I went through a period where I thought my phone was enough. However, when my travel schedule picked up, I often found myself on planes with my phone off in an effort to save its battery for whatever exotic land I found myself in next. I also found it was nice to have a clock I could adjust, so I knew what time it was in the foreign land before I got there. Enter the mechanical watch.

When I learned I’d be receiving an Android Wear device at Google I/O, I was skeptical that I’d have a real use for it, but amused and happy to give it a chance. I didn’t have high hopes though. Another device to charge? Would interaction with my phone through a tiny device actually be that useful?

I’m happy to report that my skepticism was unnecessary. I have the Samsung Gear Live and I couldn’t be happier.

The battery lasts me a couple of days, which is plenty of time to get me to my next destination, and I turn it off at night if I’m really concerned about not getting to an outlet (or am just too lazy to do so).

And usefulness? It sends alerts to my watch, so at a glance I can see Twitter mentions and replies, and quickly favorite or retweet them from my watch. Perhaps my favorite feature is the ability to control Google Play Music via the watch: walking around town, I no longer need to dig my phone out of my purse to change the song (or now, adjust volume!). As an added bonus, the watch also has an icon for when it’s disconnected from my phone, so if I walk out the door and don’t remember if I grabbed my phone? Check my watch.

In addition to all this, it’s also much less distracting: I can feel in touch with people trying to contact me without having my face rudely buried in my phone all the time. I only need to pull out my phone when I actually have something to act on, which is pretty rare.

It seems I’m not alone. I was delighted to read this piece in Smithsonian Magazine several months ago: The Pocket Watch Was the World’s First Wearable Tech Game Changer. Unless some other, more convenient and socially acceptable wearable tech comes out, I’m hoping smart watches will catch on.

Perhaps the only caveat is how it looks. When I’m attending a wedding or a nice dinner, I’m not going to strap on my giant black Gear Live; I switch back to my pretty mechanical watch. So I’m looking forward to the market opening up and giving us more options device-wise. In addition to something more feminine, a hybrid of mechanical and digital like the upcoming Kairos watches would be a lot of fun.

by pleia2 at November 29, 2014 12:34 AM

November 27, 2014

Eric Hammond

lambdash: AWS Lambda Shell Hack

I spent the weekend learning just enough JavaScript and nodejs to hack together a Lambda function that runs arbitrary shell commands in the AWS Lambda environment.

This hack allows you to explore the current file system, learn what versions of Perl and Python are available, and discover what packages might be installed.

If you’re interested in seeing the results, then read the following article, which uses this AWS Lambda shell hack to examine the inside of the AWS Lambda runtime environment.

Exploring The AWS Lambda Runtime Environment

Now on to the hack…

Setup

Define the basic parameters.

# Replace with your bucket name
bucket_name=lambdash.alestic.com

function=lambdash
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
log_group_name=/aws/lambda/$function

Create the IAM role that will be used by the Lambda function when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
      }]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do and access: log to CloudWatch and upload files to a specific S3 bucket/location.

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": [ "logs:*" ],
          "Resource": "arn:aws:logs:*:*:*"
      }, {
          "Effect": "Allow",
          "Action": [ "s3:PutObject" ],
          "Resource": "arn:aws:s3:::'$bucket_name'/'$function'/*"
      }]
  }'

Grab the current Lambda function JavaScript from the Alestic lambdash GitHub repository, create the ZIP file, and upload the new Lambda function.

wget -q -O$function.js \
  https://raw.githubusercontent.com/alestic/lambdash/master/lambdash.js
npm install async fs tmp
zip -r $function.zip $function.js node_modules
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --runtime nodejs \
  --mode event \
  --handler "$function.handler" \
  --role "$lambda_execution_role_arn" \
  --timeout 60 \
  --memory-size 256

Invoke the Lambda function with the desired command and S3 output locations. Adjust the command and repeat as desired.

cat > $function-args.json <<EOM
{
    "command": "ls -laiR /",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"
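For example, to answer the “what version of Python is available?” question from the top of this post, rewrite the args file with a different command and invoke again. Note that Python 2 prints its version to stderr, so the result lands in stderr.txt:

cat > $function-args.json <<EOM
{
    "command": "python -V",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"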

Look at the Lambda function log output in CloudWatch.

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Get the command output.

aws s3 cp s3://$bucket_name/$function/stdout.txt .
aws s3 cp s3://$bucket_name/$function/stderr.txt .
less stdout.txt stderr.txt

Clean up

If you are done with this example, you can delete the created resources. Or, you can leave the Lambda function in place ready for future use. After all, you aren’t charged unless you use it.

aws s3 rm s3://$bucket_name/$function/stdout.txt
aws s3 rm s3://$bucket_name/$function/stderr.txt
aws lambda delete-function \
  --function-name "$function"
aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_execution_role_name"
aws logs delete-log-group \
  --log-group-name "$log_group_name"

Requests

What command output would you like to see in the Lambda environment?

Original article: http://alestic.com/2014/11/aws-lambda-shell

by Eric Hammond at November 27, 2014 02:33 AM

November 26, 2014

Akkana Peck

Yam-Apple Casserole

Yams. I love 'em. (Actually, technically I mean sweet potatoes, since what we call "yams" here in the US aren't actual yams, but the root from a South American plant, Ipomoea batatas, related to the morning glory. I'm not sure I've ever had an actual yam, a tuber from an African plant of the genus Dioscorea).

But what's up with the way people cook them? You take something that's inherently sweet and yummy -- and then cover it with brown sugar and marshmallows and maple syrup and who knows what else. Do you sprinkle sugar on apples before you eat them?

Normally, I bake a yam for about an hour in the oven, or, if time is short (which it usually is), microwave it for about four and a half minutes, then finish up with 20-40 minutes in a toaster oven at 350°. The oven part seems to be necessary: it brings out the sweetness and the nice crumbly texture in a way that the microwave doesn't. You can read about some of the science behind this at this Serious Eats discussion of cooking sweet potatoes: it's because sweet potatoes have an odd enzyme, beta amylase, that breaks down carbohydrates into sugars, thus bringing out the vegetable's sweetness, but that enzyme only works in a limited temperature range, so if you heat up a sweet potato too fast the enzyme doesn't have time to work.

But Thanksgiving is coming up, and for a friend's dinner party, I wanted to make something a little more festive (and more easily parceled out) than whole baked yams.

A web search wasn't much help: nearly everything I found involved either brown sugar or syrup. The most interesting casserole recipes I saw fell into two categories: sweet and spicy yams with chile powder and cayenne pepper (and brown sugar), and yam-apple casserole (with brown sugar and lemon juice). As far as I can tell it has never occurred to anyone, before me, to try either of these without added sugar. So I bravely volunteered myself as test subject.

I was very pleased with the results. The combination of the tart apples, the sweet yams and the various spices made a lovely combination. And it's a lot healthier than the casseroles with all the sugary stuff piled on top.

Yam-Apple Casserole without added sugar

Ingredients:

  • Yams, as many as needed.
  • Apples: 1-2 apples per yam. Use a tart variety, like granny smith.
  • chile powder
  • sage
  • rosemary or thyme
  • cumin
  • nutmeg
  • ginger powder
  • salt
(Your choice whether to use all of these spices, just some, or different ones.)

Peel and dice yams and apples into bite-sized pieces, inch or half-inch cubes. (Peeling the yams is optional.)

Drizzle a little olive oil over the yam and apple pieces, then sprinkle spices. Your call as to which spices and how much. Toss it all together until the pieces are all evenly coated with oil and the spices look evenly distributed.

Lay out in a casserole dish or cake pan and bake at 350°F until the yam pieces are soft. This takes at least an hour, two if you made big pieces or layered the pieces thickly in the pan. The apples will mostly disintegrate into little mushy bits between the pieces of yam, but that's fine -- they're there for flavor, not consistency.

Note: After reading about beta-amylase and its temperature range, I had the bright idea that it would be even better to do this in a crockpot. Long cooking at low temps, right? Wrong! The result was terrible, almost completely tasteless. Stick to using the oven.

I'm going to try adding some parsnips, too, though parsnips seem to need to cook longer than sweet potatoes, so it might help to pre-cook the parsnips a few minutes in the microwave before tossing them in with the yams and apples.

November 26, 2014 02:07 AM

November 25, 2014

Eric Hammond

AWS Lambda Walkthrough Command Line Companion

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:

# Change to your own unique S3 bucket name:
source_bucket=alestic-lambda-example

# Do not change this. Walkthrough code assumes this name
target_bucket=${source_bucket}resized

function=CreateThumbnail
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
lambda_invocation_role_name=lambda-$function-invocation
lambda_invocation_access_policy_name=lambda-$function-invocation-access
log_group_name=/aws/lambda/$function

Install some required software:

sudo apt-get install nodejs nodejs-legacy npm

Step 1.1: Create Buckets and Upload a Sample Object (walkthrough)

Create the buckets:

aws s3 mb s3://$source_bucket
aws s3 mb s3://$target_bucket

Upload a sample photo:

# by Hatalmas: https://www.flickr.com/photos/hatalmas/6094281702
wget -q -OHappyFace.jpg \
  https://c3.staticflickr.com/7/6209/6094281702_d4ac7290d3_b.jpg

aws s3 cp HappyFace.jpg s3://$source_bucket/

Step 2.1: Create a Lambda Function Deployment Package (walkthrough)

Create the Lambda function nodejs code:

# JavaScript code as listed in walkthrough
wget -q -O $function.js \
  http://run.alestic.com/lambda/aws-examples/CreateThumbnail.js

Install packages needed by the Lambda function code. Note that this is done under the local directory:

npm install async gm # aws-sdk is not needed

Put all of the required code into a ZIP file, ready for uploading:

zip -r $function.zip $function.js node_modules

Step 2.2: Create an IAM Role for AWS Lambda (walkthrough)

Create the IAM role that will be used by the Lambda function when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do and access. This is slightly tighter than the generic role policy created with the IAM console:

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "logs:*"
        ],
        "Resource": "arn:aws:logs:*:*:*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::'$source_bucket'/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:PutObject"
        ],
        "Resource": "arn:aws:s3:::'$target_bucket'/*"
      }
    ]
  }'

Step 2.3: Upload the Deployment Package and Invoke it Manually (walkthrough)

Upload the Lambda function, specifying the IAM role it should use and other attributes:

# Timeout increased from walkthrough based on experience
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --role "$lambda_execution_role_arn" \
  --mode event \
  --handler "$function.handler" \
  --timeout 30 \
  --runtime nodejs

Create fake S3 event data to pass to the Lambda function. The key here is the source S3 bucket and key:

cat > $function-data.json <<EOM
{  
   "Records":[  
      {  
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":"1970-01-01T00:00:00.000Z",
         "eventName":"ObjectCreated:Put",
         "userIdentity":{  
            "principalId":"AIDAJDPLRKLG7UEXAMPLE"
         },
         "requestParameters":{  
            "sourceIPAddress":"127.0.0.1"
         },
         "responseElements":{  
            "x-amz-request-id":"C3D13FE58DE4C810",
            "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
         },
         "s3":{  
            "s3SchemaVersion":"1.0",
            "configurationId":"testConfigRule",
            "bucket":{  
               "name":"$source_bucket",
               "ownerIdentity":{  
                  "principalId":"A3NL1KOZZKExample"
               },
               "arn":"arn:aws:s3:::$source_bucket"
            },
            "object":{  
               "key":"HappyFace.jpg",
               "size":1024,
               "eTag":"d41d8cd98f00b204e9800998ecf8427e",
               "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko"
            }
         }
      }
   ]
}
EOM

Invoke the Lambda function, passing in the fake S3 event data:

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-data.json"

Look in the target bucket for the converted image. It could take a while to show up since the Lambda function is running asynchronously:

aws s3 ls s3://$target_bucket
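Since the function runs asynchronously, you may prefer to poll until the thumbnail appears instead of re-running the listing by hand. A minimal sketch (the resized-HappyFace.jpg key matches the object removed in the clean up section below):

# Wait until the resized image exists in the target bucket
until aws s3api head-object \
        --bucket "$target_bucket" \
        --key resized-HappyFace.jpg >/dev/null 2>&1; do
  echo "waiting for thumbnail..."
  sleep 5
done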

Look at the Lambda function log output in CloudWatch:

aws logs describe-log-groups \
  --output text \
  --query 'logGroups[*].[logGroupName]'

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName')
echo log_stream_names="'$log_stream_names'"
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Step 3.1: Create an IAM Role for Amazon S3 (walkthrough)

This role may be assumed by S3.

lambda_invocation_role_arn=$(aws iam create-role \
  --role-name "$lambda_invocation_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringLike": {
              "sts:ExternalId": "arn:aws:s3:::*"
            }
          }
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_invocation_role_arn=$lambda_invocation_role_arn

S3 may invoke the Lambda function.

aws iam put-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "lambda:InvokeFunction"
         ],
         "Resource": [
           "*"
         ]
       }
     ]
   }'

Step 3.2: Configure a Notification on the Bucket (walkthrough)

Get the Lambda function ARN:

lambda_function_arn=$(aws lambda get-function-configuration \
  --function-name "$function" \
  --output text \
  --query 'FunctionARN'
)
echo lambda_function_arn=$lambda_function_arn

Tell the S3 bucket to invoke the Lambda function when new objects are created (or overwritten):

aws s3api put-bucket-notification \
  --bucket "$source_bucket" \
  --notification-configuration '{
    "CloudFunctionConfiguration": {
      "CloudFunction": "'$lambda_function_arn'",
      "InvocationRole": "'$lambda_invocation_role_arn'",
      "Event": "s3:ObjectCreated:*"
    }
  }'

Step 3.3: Test the Setup (walkthrough)

Copy your own jpg and png files into the source bucket:

myimages=...
aws s3 cp $myimages s3://$source_bucket/

Look for the resized images in the target bucket:

aws s3 ls s3://$target_bucket

Check out the environment

These handy commands let you review the related resources in your account:

aws lambda list-functions \
  --output text \
  --query 'Functions[*].[FunctionName]'

aws lambda get-function \
  --function-name "$function"

aws iam list-roles \
  --output text \
  --query 'Roles[*].[RoleName]'

aws iam get-role \
  --role-name "$lambda_execution_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies  \
  --role-name "$lambda_execution_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws iam get-role \
  --role-name "$lambda_invocation_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies  \
  --role-name "$lambda_invocation_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws s3api get-bucket-notification \
  --bucket "$source_bucket"

Clean up

If you are done with the walkthrough, you can delete the created resources:

aws s3 rm s3://$target_bucket/resized-HappyFace.jpg
aws s3 rm s3://$source_bucket/HappyFace.jpg
aws s3 rb s3://$target_bucket/
aws s3 rb s3://$source_bucket/

aws lambda delete-function \
  --function-name "$function"

aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_execution_role_name"

aws iam delete-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_invocation_role_name"

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  echo "deleting log-stream $log_stream_name"
  aws logs delete-log-stream \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name"
done

aws logs delete-log-group \
  --log-group-name "$log_group_name"

If you try these instructions, please let me know in the comments where you had trouble or experienced errors.

Original article: http://alestic.com/2014/11/aws-lambda-cli

by Eric Hammond at November 25, 2014 09:36 PM

November 24, 2014

Jono Bacon

Ubuntu Governance Reboot: Five Proposals

Sorry, this is long, but hang in there.

A little while back I wrote a blog post that seemed to inspire some people and ruffle the feathers of some others. It was designed as a conversation-starter for how we can re-energize leadership in Ubuntu.

When I kicked off the blog post, Elizabeth quite rightly gave me a bit of a kick in the spuds about not providing a place to have a discussion, so I amended the blog post with a link to this thread where I encourage your feedback and participation.

Rather unsurprisingly, there was some good feedback, before much of it started wandering off the point a little bit.

I was delighted to see that Laura posted that a Community Council meeting on the 4th Dec at 5pm UTC has been set up to further discuss the topic. Thanks, CC, for taking the time to evaluate and discuss the topic in-hand.

I plan on joining the meeting, but I wanted to post five proposed recommendations that we can think about. Again, please feel free to share feedback about these ideas on the mailing list.

1. Create our Governance Mission/Charter

I spent a bit of time trying to find the charter or mission statements for the Community Council and Technical Board and I couldn’t find anything. I suspect they are not formally documented as they were put together back in the early days, but other sub-councils have crisp charters (mostly based off the first sub-council, the Forum Council).

I think it could be interesting to define a crisp mission statement for Ubuntu governance. What is our governance here to do? What are the primary areas of opportunity? What are the priorities? What are the risks we want to avoid? Do we need both a CC and TB?

We already have the answers to some of these questions, but are the answers we have the right ones? Is there an opportunity to adjust our goals with our leadership and governance in the project?

Like many of the best mission statements, this should be a collaborative process. Not a mission defined by a single person or group, but an opportunity for multiple people to feed into so it feels like a shared mission. I would recommend that this be a process that all Ubuntu members can play a role in. Ubuntu members have earned their seat at the table via their contributions, and would be a wonderfully diverse group to pull ideas from.

This would give us a mission that feels shared, and feels representative of our community and culture. It would feel current and relevant, and help guide our governance and wider project forward.

2. Create an ‘Impact Constitution’

OK, I just made that term up, and yes, it sounds a bit buzzwordy, but let me explain.

The guiding principles in Ubuntu are the Ubuntu Promise. It puts in place a set of commitments that ensure Ubuntu always remains a collaborative Open Source project.

What we are missing though is a document that outlines the impact that Ubuntu gives you, others, and the wider world…the ways in which Ubuntu empowers us all to succeed, to create opportunity in our own lives and the life of others.

As an example:

Ubuntu is a Free Software platform and community. Our project is designed to create open technology that empowers individuals, groups, businesses, charities, and others. Ubuntu breaks down the digital divide, and brings together our collective energy into a system that is useful, practical, simple, and accessible.

Ubuntu empowers you to:

  1. Deploy an entirely free Operating System and archive of software to one or many computers in homes, offices, classrooms, government institutions, charities, and elsewhere.
  2. Learn a variety of programming and development languages and have the tools to design, create, test, and deploy software across desktops, phones, tablets, the cloud, the web, embedded devices and more.
  3. Have the tools for artistic creativity and expression in music, video, graphics, writing, and more.
  4. . . .

Imagine if we had a document with 20 or so of these impact statements that crisply show the power of our collective work. I think this will regularly remind us of the value of Ubuntu and provide a set of benefits that we as a wider community will seek to protect and improve.

I would then suggest that part of the governance charter of Ubuntu is that our leadership are there to inspire, empower, and protect the ‘impact constitution’; this then directly connects our governance and leadership to what we consider to be the primary practical impact of Ubuntu in making the world a better place.

3. Cross-Governance Strategic Meetings

Today we have CC meetings, TB meetings, FC meetings etc. I think it would be useful to have a monthly, or even quarterly meeting that brings together key representatives from each of the governance boards with a single specific goal – how do the different boards help further each other’s mission. As an example, how does the CC empower the TB for success? How does the TB empower the FC for success?

We don’t want governance that is either independent or dependent at the individual board level. We want governance that is inter-dependent with each other. This then creates a more connected network of leadership.

4. Annual In-Person Governance Summit

We have a community donations fund. I believe we should utilize it to get together key representatives across Ubuntu governance into the same room for two or three days to discuss (a) how to refine and optimize process, but also (b) how to further the impact of our ‘impact constitution’ and inspire wider opportunity in Ubuntu.

If Canonical could chip in, and potentially there were a few sponsors, we could bring all governance representatives together.

Now, it could be tempting to suggest we do this online. I think this would be a mistake. We want to get our leaders together to work together, socialize together, and bond together. The benefits of doing this in person significantly outweigh doing it online.

5. Optimize our community brand around “innovation”

Ubuntu has a good reputation for innovation. Desktop, Mobile, Tablet, Cloud…it is all systems go. Much of this innovation though is seen in the community as something that Canonical fosters and drives. There was a sentiment in the discussion after my last blog post that some folks feel that Canonical is in the driving seat of Ubuntu these days and there isn’t much the community can do to inspire and innovate. There was at times a jaded feeling that Canonical is standing in the way of our community doing great things.

I think this is a bit of an excuse. Yes, Canonical are primarily driving some key pieces…Unity, Mir, Juju for example…but there is nothing stopping anyone innovating in Ubuntu. Our archives are open, we have a multitude of toolsets people can use, we have extensive collaborative infrastructure, and an awesome community. Our flavors are a wonderful example of much of this innovation that is going on. There is significantly more in Ubuntu that is open than restricted.

As such, I think it could be useful to focus on this in our outgoing Ubuntu messaging and advocacy. As our ‘impact constitution’ could show, Ubuntu is a hotbed of innovation, and we could create some materials, messaging, taglines, imagery, videos, and more that inspires people to join a community that is doing cool new stuff.

This could be a great opportunity for designers and artists to participate, and I am sure the Canonical design team would be happy to provide some input too.

Imagine a world in which we see a constant stream of social media, blog posts, videos and more all thematically orientated around how Ubuntu is where the innovators innovate.

Bonus: Network of Ubucons

OK, this is a small extra one I would like to throw in for good measure. :-)

The in-person Ubuntu Developer Summits were a phenomenal experience for so many people, myself included. While the Ubuntu Online Summit is an excellent, well-organized online event, there is something to be said about in-person events.

I think there is a great opportunity for us to define two UbuCons that become the primary in-person events where people meet other Ubuntu folks. One would be focused on the US and one on Europe, and if we could get more (such as an Asian event), that would be awesome.

These would be driven by the community for the community. Again, I am sure the donations fund could help with the running costs.

In fact, before I left Canonical, this is something I started working on with the always-excellent Richard Gaskin who puts on the UbuCon before SCALE in LA each year.

This would be more than a LoCo Team meeting. It would be a formal Ubuntu event before another conference that brings together speakers, panel sessions, and more. It would be where Ubuntu people come to meet, share, learn, and socialize.

I think these events could be a tremendous boon for the community.


Well, that’s it. I hope this provided some food for thought for further discussion. I am keen to hear your thoughts on the mailing list!

by jono at November 24, 2014 10:35 PM

November 22, 2014

Elizabeth Krumbach

My Vivid Vervet has crazy hair

Keeping with my Ubuntu toy tradition, I placed an order for a vervet stuffed toy, available in the US via: Miguel the Vervet Monkey.

He arrived today!

He’ll be coming along to his first Ubuntu event on December 10th, a San Francisco Ubuntu Hour.

by pleia2 at November 22, 2014 02:57 AM

Vacation in Jamaica

This year I’ve traveled more than ever, but almost all of my trips have been for work. This past week, MJ and I finally snuck off for a romantic vacation together in Jamaica, where neither of us had been before.

Unfortunately we showed up a day late after I forgot my passport at home. I had removed it from my bag earlier in the day to get a copy of it for a visa application and left it on the scanner. I realized it an hour before our flight, and check-in closed 45 minutes prior to departure, not enough time for me to get home and back to the airport before the cutoff (but I did try!). I felt horrible. Fortunately, the day at home together before the trip did give us a little bit of breathing room instead of a mad dash from work to the airport.

Friday evening we got a flight! We sprung for First Class on our flights and thankfully all travel was uneventful. We got to Couples Negril around 3PM the following day after 2 flights, a 6 hour layover and a 90 minute van ride from Montego Bay to Negril.

It was beautiful. The rooms had recently been renovated and looked great. It was also nice that the room air conditioning was very good, so on those days when the humidity got to be a bit much I had a wonderful refuge. The resort was all-inclusive and we had confirmed ahead of time that the food was good, so there were no disappointments there. They had some low-key activities and little events and entertainment at lunch and later into the evening (including some ice carving and a great show by Dance Xpressionz). As a self-proclaimed not cool person I found it all to be the perfect atmosphere to relax and feel comfortable going to some of the events.

The view from our room (2nd floor Beachfront suite) was great too:

I had planned on going into deep Ian Fleming mode and getting a lot of writing done on my book, but I only ended up spending about 4 hours on it throughout the week. Upon arrival I realized how much I really needed the time off and took full advantage of it, which was totally the right decision. By Tuesday I was clear-headed and finally excited again about some of my work plans for the upcoming weeks, rather than feeling tired and overwhelmed by them.

Also, there were bottomless Strawberry Daiquiris.

Alas, it had to come to an end. We packed our things and were on our way on Thursday. Prior to the trip, MJ had looked into AirLink in order to take a 12 minute flight from Negril to Montego Bay rather than the 90 minute van ride. At $250 for the pair of us, I was happy to give it a go for the opportunity to ride in a Cessna and take some nice aerial shots. After getting our photo with the pilot, at 11AM the pair of us got into the Cessna with the pilot and co-pilot.

The views were everything I expected, and I was happy to get some nice pictures.

Jamaica is definitely now on my list for going back to. I really enjoyed our time there and it seemed to be a good season for it.

More photos from the week here (admittedly, mostly of the Cessna flight): https://www.flickr.com/photos/pleia2/sets/72157649408324165/

by pleia2 at November 22, 2014 02:32 AM

November 18, 2014

Akkana Peck

Unix "remind" file for US holidays

Am I the only one who's always confused about when holidays happen?

Partly it's software, I guess. In these days of everybody keeping their schedules on Google's or Apple's servers, maybe most people keep up on these things.

But being the dinosaur I am, I'm still resistant to keeping my schedule in the cloud on a public server. What if I need to check for upcoming events while I'm on a trip out in the remote desert somewhere? (Not to mention the obvious privacy considerations.) For years I used PalmOS PDAs, but when I switched to Android and discovered how poor the offline calendar options are, I decided that I should learn how to use the old Unix standby.

It's been pretty handy. I run remind ~/[remind-file-name] when I log in in the morning, and it gives me a nice summary of upcoming events:

DPU Solar surcharge meeting, 5:30-8:30 tomorrow
NMGLUG meeting in 2 days' time

Of course, I can also have it email me with reminders, or pop up a window, but so far I haven't felt the need.

I can also display a nice calendar showing upcoming events for this month or the next several months. I made a couple of aliases:

mycal () {
        months=$1 
        if [[ x$months = x ]]
        then
                months=1 
        fi
        remind -c$months ~/Docs/Lists/remind
}

mycalp () {
        months=$1 
        if [[ x$months = x ]]
        then
                months=2 
        fi
        remind -p$months ~/Docs/Lists/remind | rem2ps -e -l > /tmp/mycal.ps
        gv /tmp/mycal.ps &
}

The first prints an ascii calendar; the second displays a nice postscript calendar complete with little icons for phases of the moon.

But what about those holidays?

Okay, that gives me a good way of storing reminders about appointments. But I still don't know when holidays are. (I had that problem with the PalmOS scheduling program, too -- it never knew about holidays either.)

Web searching didn't help much. Unfortunately, "remind" is a terrible name in this age of search engines. If someone has already solved this problem, I sure wasn't able to find any evidence of it. So instead, I went to Wikipedia's list of US holidays, with the remind man page in another tab, and wrote remind stanzas for each one -- except Easter, which is much more complicated.

But wait -- it turns out that remind already has code to calculate Easter! It just needs a slightly more complicated stanza: instead of the standard form of

REM  1 Apr +1 MSG April Fool's Day %b

I need to use this form:

REM  [trigger(easterdate(today()))] +1 MSG Easter %b

The %b in each case is what gives you the notice of when the event is in your reminders, e.g. "Easter tomorrow" or "Easter in two days' time". The +1 is how far beforehand you want to be reminded of each event.

So here's my remind file for US holidays. I make no guarantees that every one is right, though I did check them for the next 12 months and they all seem to be working.

#
# US Holidays
#
REM      1 Jan    +3 MSG New Year's Day %b
REM Mon 15 Jan    +2 MSG MLK Day %b
REM      2 Feb       MSG Groundhog Day %b
REM     14 Feb    +2 MSG Valentine's Day %b
REM Mon 15 Feb    +2 MSG President's Day %b
REM     17 Mar    +2 MSG St Patrick's Day %b
REM      1 Apr    +9 MSG April Fool's Day %b
REM  [trigger(easterdate(today()))] +1 MSG Easter %b
REM     22 Apr    +2 MSG Earth Day %b
REM Fri  1 May -7 +2 MSG Arbor Day %b
REM Sun  8 May    +2 MSG Mother's Day %b
REM Mon  1 Jun -7 +2 MSG Memorial Day %b
REM Sun 15 Jun       MSG Father's Day
REM      4 Jul    +2 MSG 4th of July %b
REM Mon  1 Sep    +2 MSG Labor Day %b
REM Mon  8 Oct    +2 MSG Columbus Day %b
REM     31 Oct    +2 MSG Halloween %b
REM Tue  2 Nov    +4 MSG Election Day %b
REM     11 Nov    +2 MSG Veteran's Day %b
REM Thu 22 Nov    +3 MSG Thanksgiving %b
REM     25 Dec    +3 MSG Christmas %b

November 18, 2014 09:07 PM

November 14, 2014

Jono Bacon

Ubuntu Governance: Reboot?

For many years Ubuntu has had a comprehensive governance structure. At the top of the tree are the Community Council (community policy) and the Technical Board (technical policy).

Below those boards are sub-councils such as the IRC, Forum, and LoCo councils, and developer assessment boards.

The vast majority of these boards are populated by predominantly non-Canonical folks. I think this is a true testament to the openness and accessibility of governance in Ubuntu. There is no “Canonical needs to have people on half the board” shenanigans…if you are a good leader in the Ubuntu community, you could be on these boards if you work hard.

So, no-one is denying the openness of these boards, and I don’t question the intentions or focus of the people who join and operate them. They are good people who act in the best interests of Ubuntu.

What I do question is the purpose and effectiveness of these boards.

Let me explain.

From my experience, the charter and role of these boards has remained largely unchanged. The Community Council, for example, is largely doing much of the same work it did back in 2006, albeit with some responsibility delegated elsewhere.

Over the years though Ubuntu has changed, not just in terms of the product, but also the community. Ubuntu is no longer just platform contributors, but there are app and charm developers, a delicate balance between Canonical and community strategic direction, and a different market and world in which we operate.

Ubuntu governance has, as a general rule, been fairly reactive. In other words, items are added to a governance meeting by members of the community and the boards sit, review the topic, discuss it, and in some cases vote. In this regard I consider this method of governance not really leadership, but instead idea, policy, and conflict arbitration.

What saddens me is that when I see some of these meetings, much of the discussion seems to focus on paperwork and administrivia, and many of the same topics pop up over and over again. With no offense meant to the members of these boards, these meetings are rarely inspirational and rarely challenge the status quo of the community. In fact, from my experience, challenging the status quo with some of these boards has invariably been met with reluctance to explore, experiment, and try new ideas, and a preference to continue to enforce and protect existing procedures. Sadly, the result of this is more bureaucracy than I feel comfortable with.

Ubuntu is at a critical point in its history. Just look at the opportunity: we have a convergent platform that will run across phones, tablets, desktops and elsewhere, with a powerful SDK, secure application isolation, and an incredible developer community forming. We have a stunning cloud orchestration platform that spans all the major clouds, making the ability to spin up large or small scale services a cinch. In every part of this the code is open and accessible, with a strong focus on quality.

This is fucking awesome.

The opportunity is stunning, not just for Ubuntu but also for technology freedom.

Just think of how many millions of people can be empowered with this work. Kids can educate themselves, businesses can prosper, communities can form, all on a strong, accessible base of open technology.

Ubuntu is innovating on multiple fronts, and we have one of the greatest communities in the world at the core. The passion and motivation in the community is there, but it is untapped.

Our inspirational leader has typically been Mark Shuttleworth, but he is busy flying around the world working hard to move the needle forward. He doesn’t always have the time to inspire our community on a regular basis, and it is sorely missing.

As such, we need to look to our leadership…the Community Council, the Technical Board, and the sub-councils for inspiration and leadership.

I believe we need to transform and empower these governance boards to be inspirational vessels that our wider community look to for guidance and leadership, not for paper-shuffling and administrivia.

We need these boards to not be reactive but to be proactive…to constantly observe the landscape of the Ubuntu community…the opportunities and the challenges, and to proactively capitalize on protecting the community from risk while opening up opportunity to everyone. This will make our community stronger, more empowered, and have that important dose of inspiration that is so critical to focus our family on the most important reasons why we do this: to build a world of technology freedom across the client and the cloud, underlined by a passionate community.

To achieve this will require awkward and uncomfortable change. It will require a discussion to happen to modify the charter and purpose of these boards. It will mean that some people on the current boards will not be the right people for the new charter.

I do though think this is important and responsible work for the Ubuntu community to be successful: if we don’t do this, I worry that the community will slowly degrade from lack of inspiration and empowerment, and our wider mission and opportunity will be harmed.

I am sure this post may offend some members of these boards, but it is not meant to. This is not a reflection of the current staffing; this is a reflection of the charter and purpose of these boards. Our current board members do excellent work with good and strong intentions, but within that current charter.

We need to change that charter though, staff appropriately, and build an inspirational network of leaders that sets everyone in this great community up for success.

This, I believe, will transform Ubuntu into a new world of potential, a level of potential I have always passionately believed in.

I have kicked off a discussion on ubuntu-community-team where we can discuss this. Please share your thoughts and solutions there!

by jono at November 14, 2014 06:16 PM

Elizabeth Krumbach

Holiday cards 2014!

Every year I send out a big batch of wintertime holiday cards to friends and acquaintances online.

Reading this? That means you! Even if you’re outside the United States!

Just drop me an email at lyz@princessleia.com with your postal address, and please put "Holiday Card" in the subject so I can filter it appropriately. Please do this even if I’ve sent you a card in the past; I won’t be reusing the list from last year.

Typical disclaimer: My husband is Jewish and I’m not religious, the cards will say “Happy Holidays”

by pleia2 at November 14, 2014 04:38 PM

November 13, 2014

Elizabeth Krumbach

Wedding in Philadelphia

This past weekend MJ and I met in Philadelphia to attend his step-sister’s wedding on Sunday. My flight came in from Paris on Saturday, and unfortunately MJ was battling a cold so we had a pretty low key evening.

Sunday morning we were up early to dress and pick up a truck to drive his sister to the church. The wedding itself didn’t begin until 2PM, but since we were coordinating transportation for the wedding party, we had to meet everyone pretty early to make sure everyone got into their respective bus/car to make it to St. Stephen’s Orthodox Cathedral on time.

I’d never been to an Eastern Orthodox wedding, so it was an interesting ceremony to watch. It took about an hour, and we were all standing for the entire ceremony. There was a ring exchange in the back of the chapel, and then the bride and groom came up the center aisle together for the rest of their ceremony. I chose to keep my camera stashed away during the ceremony, but as soon as the priest had finished and was making some closing comments about the newlyweds I got one in real quick.

The weather in November can go either way in Philadelphia, but they got lucky with bright, clear skies and quite comfortable temperatures in the 60s.

The reception began at 4PM with a cocktail hour.

And we did manage to get a few minutes in with the beautiful bride, Irina :)

Big congratulations to Irina and Sam!

More photos here: https://www.flickr.com/photos/pleia2/sets/72157648832387979/

The trip was a short one, with us packing up on Monday to fly home that evening. I did manage to get in a quick lunch with my friend Crissi who made it down to the city for the occasion, so it was great to catch up with her. Our flights home were uneventful and I finally got to sleep in my own bed after 3 weeks on the road!

Tomorrow night we fly off to Jamaica for a proper vacation together, I’m very much looking forward to it.

by pleia2 at November 13, 2014 02:46 AM

Party in France

On Saturday November 1st I landed in Paris on a redeye flight from Miami. I didn’t manage to sleep much at all on the flight, but thankfully I was able to check into my hotel room around 8:30AM to drop off my bags and freshen up before going on a day of jetlag-battling tourism.

It was the right decision. Of all the days I spent in Paris, that Saturday was the most beautiful weather-wise. The sky was clear and blue, the temperature quite comfortable to be wandering around the city in a t-shirt. Since Saturday was one of my only 2 days to play the tourist in Paris, mixed in with some meetings with colleagues, I took the advice of my cousin Melissa and bought a ticket on one of the red hop-on, hop-off circuit buses that stopped at the various landmarks throughout the city.

The hotel I was staying at was not far from the Arc de Triomphe, so I was able to have a look at that and pick up a bus at that stop. I rode the bus until it reached the Eiffel Tower.

The line to take a lift up to the top of the tower was quite long and I wasn’t keen on waiting while battling jet lag, so I took a nice long walk around the tower and the grounds, snapping pictures along the way. I also found myself hungry so I picked up a surprisingly delicious chicken sandwich at a booth under the tower and enjoyed it there.

I hopped on the bus again and rode through the grounds of the Louvre museum, which is an astonishingly large complex. Due to the crowds and other things on my list for the day, I skipped actually going into the Louvre and contented myself with simply seeing the glass pyramid and making a mental note to return the next time I’m in Paris.

Soon after, my phone lit up with a notification from my friend and OpenStack colleague Chris Hoge saying that he was at Notre Dame and folks were welcome to join him. It was the next stop I was planning on making, so I made plans to meet up.

I adore old cathedrals, and Notre Dame is a special one for me. As funny as it sounds, Disney’s The Hunchback of Notre Dame is one of my favorite movies. It was released in 1996, so I must have just been finishing up my freshman year of high school, where one of my history classes had started diving into world religions. I was also growing my skeptic brain. I had also developed a habit at that time of seeing all Disney full-length animated features in theaters the day they were released because I was such a hopeless fan. The confluence of all these things made the movie hit me at the right time. It was a surprising tale of serious issues around compassion, religion and ethics for an animated film, and I was totally into it. Plus, they didn’t disappoint with the venue for the film; I fell in love with Notre Dame that summer and started developing a passion for cathedrals and stained glass, particularly rose windows.

I met up with Chris and we took the bell tower tour, which all told took us up 387 steps to the roof of the 226-foot cathedral. We stopped halfway up to walk between the towers and hear the bells ring, which is where I took this video (YouTube). If you’re still with me on the Disney film, it’s where the final battle between Frollo and Quasimodo takes place ;)

387 steps is a lot, and I have to admit getting a bit winded as we climbed the narrow spiral staircases, but it was totally worth it. I really enjoyed being so close to all the gargoyles and the view from the top of the cathedral was beautiful, not to mention a fantastic way to see the architecture of the cathedral from above.

After the tour, I was able to go inside the cathedral to take a good look at all those stunning stained glass windows!

After Notre Dame, I did a little shopping and made my way back to the bus and eventually the hotel for a meeting and dinner with my colleagues.

Sunday morning I managed to sleep in a bit and made my way out of the hotel shortly before 10AM so I could make it over to the Catacombs of Paris. The line for the catacombs is very long; the website warns that you could wait 3-4 hours. I had hoped that getting there early would mitigate some of that wait, but it still ended up taking 3 hours! I brought along my Nook so at least I got some reading done, but it was probably the longest I’ve ever waited in line.

I’d say that it was worth it though. I’d never been inside catacombs before, so it was a pretty exceptional experience. After walking down through a fair number of tunnels, you finally get to where they keep all the bones. So. Many. Bones. As you walk through the catacombs, the walls themselves are made of stacked bones: skulls and leg bones piled up to form the walls, with all kinds of other bones stacked on the tops of the piles.

I also decided to bring along a bit of modernity into the catacombs with a selfie. I’ll leave it to the reader to judge whether or not I have respect for the dead.

By the time I left the catacombs it was after 2PM and I made my way over to the Avenue des Champs-Élysées to do some shopping. Most worthy of note was my stop at Louis Vuitton flagship store where I bought a lovely wallet.

And with that, my tourism wound down. Sunday night I began getting into the swing of things with the OpenStack Summit as we had a team dinner (for certain values of “team” – we’re so many now that any meal is just a subset of us). I am looking forward to going again some day on a proper vacation with MJ; there are so many more things to see!

A couple hundred more photos from my travels around Paris here: https://www.flickr.com/photos/pleia2/sets/72157648830423229/

by pleia2 at November 13, 2014 02:31 AM

Akkana Peck

Crockpot Green Chile Posole Stew

Posole is a traditional New Mexican dish made with pork, hominy and chile. Most often it's made with red chile, but Dave and I are both green chile fans so that's how I make it. I make no claims as to the resemblance between my posole and anything traditional; but it sure is good after a cold, windy day like we had today.

Dave is leery of anything called "posole" -- I think the hominy reminds him visually of garbanzo beans, which he dislikes -- but he admits that they taste fine in this stew. I call it "green chile stew" rather than "posole" when talking to him, and then he gets enthusiastic.

Ingredients (all quantities very approximate):

  • pork, about a pound; tenderloin works well but cheaper cuts are okay too
  • about 10 medium-sized roasted green chiles, whatever heat you prefer (or 1 large or 2 medium cans diced green chile)
  • 1 can hominy
  • 1 large or two medium russet potatoes (or equivalent amount of other type)
  • 1 can chicken broth
  • 1 tsp salt
  • 1 tsp red chile powder
  • 1/2 tsp cumin
  • fresh garlic to taste
  • black pepper and hot sauce (I use Tapatio) to taste

Start the crockpot heating: I start it on high then turn it down later. Add broth.

Dice potato. At least half the potato should be in small pieces, say 1/4" cubes, or even shredded; the other half can be larger chunks. I leave the skin on.

Pre-cook diced potato in the microwave for 7 minutes or until nearly soft enough to eat, in a loosely covered bowl with maybe 1" of water in the bottom. (This will get messy and the water gets all over and you have to clean the microwave afterward. I haven't found a solution to that yet.) Dump cooked potato into crockpot.

Dice pork into stew-sized pieces, trimming fat as desired. Add to crockpot.

De-skin and de-seed the green chiles and cut into short strips. (Or use canned or frozen.) Add to crockpot.

Add spices: salt, chile powder, cumin, and hot sauce (if your chiles aren't hot enough -- we have a bulk order of mild chiles this year so I sprinkled liberally with Tapatio).

Cover, reduce heat to low.

Cook 6-7 hours, occasionally stirring, tasting and correcting the seasoning. (I always add more of everything after I taste it, but that's me.)

Serve with bread, tortillas, sopaipillas or similar. French bread baked from the refrigerated dough in the supermarket works well if you aren't brave enough to make sopaipillas (I'm not, yet).

November 13, 2014 12:49 AM

November 07, 2014

Elizabeth Krumbach

Final day of the OpenStack Kilo Summit

Today was the last day of the OpenStack Design Summit. It wrapped up with a change of pace this time around, each project had their own contributor meetup which was used to continue hashing out ideas and getting some work done. I think this was a really brilliant move. I was pretty tired by the time Friday rolled around (one of the reasons the later Ubuntu Developer Summits were shrunk to 4 days), so I’m not sure how useful I would have been in more discussion-driven sessions. The contributor meetup allowed us to chat about things we didn’t have time to run sessions on, or do in-person follow-ups to sessions we did have. We also had nice in-person time to collaborate on some things so that some of our projects got to a semi-working state before we all go home and take a vacation (my vacation starts next Thursday).

I spent my day meeting up with people to talk about our new translations tools and did the first couple drafts of the infrastructure specification to get that project started. Given the timeline, I anticipate that my real work on that won’t really begin until after I return from Jamaica on November 21st, but that seemed to sync up with the timeline of others on the team who are either taking some time off post-summit or have some dependencies blocking their action items.

There was also time spent on talking about the Infrastructure User Manual as a follow up to the session earlier in the week. We decided to host a 48 hour virtual sprint on the first couple days of December in order to collaborate on fleshing out the rest of the document (announcement here). As we all know, I love documentation, so I’m glad to see this coming together. I was also able to have a chat with a contributor later in the day who is also looking forward to seeing it finished so he can build upon it as the foundation for more project-specific developer documentation.

Also, the topic of third party testing came up during one of my chats and was overheard by someone nearby – which is how we learned there were at least three teams talking about creating a more automatic mechanism for determining the health of the third party testing systems. That’s approximately two teams too many. Kurt Taylor was able to get us all on an email thread together so I’m happy to say that a specification for that project should be coming together too.

Late in the afternoon James E. Blair did a demo for developers of gertty. I wrote about the tool back in September (here) and I’m a big fan of CLI-based code review, so it was fun to see others excited and asking questions about it.

As things wound down, I realized that this was probably the best OpenStack summit I’ve attended. The occasional snafu aside (like the over-crowded lunch on Thursday – I ate elsewhere), for a conference with over 4,600 attendees it felt well-managed. The Design Summit itself had a format I was really pleased with, as in addition to having the Friday work day, Tuesday was devoted to much-needed cross-project summit sessions. As OpenStack grows and matures, I’m really happy to see everyone working to fine tune the summits like this to keep pace.

Tonight I joined several of my OpenStack colleagues for an early dinner, retiring early to my room so I could re-pack my suitcase (and hope it’s not over 50lbs) and get some work done before my flight tomorrow morning. As exhausting as this trip was, it sure flew by fast and I am quite sad to be leaving Paris! Alas, my sister-in-law’s wedding in Philadelphia on Sunday awaits and I’m looking forward to it (and finally seeing my husband again after almost 2 weeks).

by pleia2 at November 07, 2014 09:36 PM

November 06, 2014

Elizabeth Krumbach

Kilo OpenStack Summit Days 3-4

As the OpenStack Summit continued for those of us on the development side, Wednesday and Thursday were full of design sessions.

First up for me on Wednesday was a great session about the Infrastructure User Manual led by Anita Kuno. A pile of work went into this while we were at our mid-cycle Infrastructure sprint in July, but many of the patches have since been sitting around. This session worked to make sure we had a shared vision for the manual and to get more core contributors both reviewing patches and submitting content for some of the more complicated, institutional knowledge type sections of the manual. The etherpad for the session is available here.

The session on AFS (Andrew File System) for the Infrastructure team was also on Wednesday. In spite of having a lot of storage space at our disposal and tools like Swift (which we’re slowly moving logs to), there are still some problems we’re seeking to solve that a distributed filesystem would be useful for; enter the AFS cell set up for the OpenStack project. The session went through some of the benefits of using AFS in our environment (such as read-only replicas of volumes, heavy client-side caching support and more comprehensive ACLs than standard Unix filesystem permissions). From there the discussion moved on to how it may be used; some of the popular proposals were our pypi mirror, the git repos and documentation. Detailed Etherpad here.

There were also a couple QA/Infra sessions, including one on Gating Relationships. At the QA/Infra mid-cycle meetup back in July we touched upon some of the possible “over-testing” that may be done when a change in one project really has no potential to impact another project, but we run the tests anyway, using up testing resources. However, there isn’t really any criteria to follow for determining which changes and project combinations should trigger tests, and it was noted that much of what seems like unnecessary testing was actually put in place at one point to address a particular pain point. The main result of this session was to try to develop some of this criteria, even if it’s manual and human-based for now. Detailed Etherpad here.

We also had a QA and CI After Merge session. Currently all of our tests are pre-merge, which makes sure all code that lands in the development repository has undergone all official tests that the OpenStack CI system has to offer. This session discussed whether heavier, less “central” tests for the projects should be run post-merge or as periodic tests, and there seemed to be some consensus: we do want to split out some of the current gated jobs. Several todo items to move this forward were defined at the bottom of the etherpad.

I also attended the “Stable branches” session (lively etherpad here). Icehouse’s support window is 15 months and the goal seems to be to support Juno for a similar time frame. Several representatives from distributions were attending and giving feedback about their own support needs, and there is hope that folks from the distros will commit to doing some of the maintenance work.

There were also a couple sessions about Tempest, the integration test suite. First there was “Tempest scope in the brave new world,” which focused on questions around what should be in Tempest moving forward and what the project should consider removing. Etherpad for the session here. There was also a “Tempest-lib moving forward” session, which discussed this library that was created last cycle and various ways to improve it in the coming cycle, details in the Etherpad here.

Wednesday evening I made my way over to the Core Reviewer party put on by HP at the near rooftop event space of Cité de l’Architecture et du Patrimoine. We were driven there by what was described as “iconic, old French cars” which turned out to be the terrifying Citroën 2CV. And our drivers were all INSANE in Paris traffic. Fortunately no one died and it was actually pretty fun (though I was happy to see buses would be taking us back to the conference venue!).

The night itself kicked off with a lecture on the architecture of the Sagrada Família Basílica in Barcelona by one of the people currently working on it, and which drew some loose parallels between our own development work (including the observation that Sagrada Família is not complete – a 140+ year release cycle!). They also brought in entertainment in the form of several opera singers who came in throughout the night. Some food was served, but I spent much of the night outside chatting with various of my OpenStack colleagues and drinking so much Champagne that the outdoor bartender learned to pull out the bottle as soon as he saw me coming. Hah!

My favorite part of the night was the stunning view of the Eiffel Tower. It’s a beautiful thing on its own at night, but at the top of the hour it also sparkles for 5 minutes in a pretty impressive show. I was so caught up in discussions that I didn’t manage to go on the museum tour that was offered, but I heard good things about it today.

Then it was on to today, Thursday! I had a great chat with Steve Weston about the third party dashboard we’re working on before Anita came to find me so I wouldn’t be late for my own session (oops).

My session, which I ran along with Andreas Jaeger (who I saved a seat for up front), was an infrastructure session on Translations Tools. We’re currently using Transifex but we need to move off of it now that they’ve transitioned to a closed source product. As I mentioned in my last post, we decided to go with Zanata so the session was primarily to firm up this decision with the rest of the infrastructure team and answer any questions from everyone involved. I have a lot of work to do during the Kilo cycle to finally get this going, but I’m really excited that all the work I did last cycle in getting demos set up and corralling the right talent for each component has finally culminated in a solid decision and action items for making the move. Next week I’ll start working on the spec for the transition. Etherpad here.

I attended a few other sessions, but the other big infrastructure one today was about Storyboard, the new task and bug tracker being written for the project to replace Launchpad. Michael Krotscheck has been doing an exceptional job on this project and the first decision of the session was whether it was ready for the OpenStack Infrastructure team to move to – yes! The rest of the session was spent outlining the key features that were needed to have really good support for infrastructure and to start supporting StackForge and OpenStack projects. The beautiful Etherpad that Michael created is here.

Tonight I went out with several of my OpenStack colleagues to dinner at La maison de Charly for delicious and stunningly arranged Moroccan food. I managed to get back to my room by 9PM so I could get an early night before the last day of the summit… but of course I got caught up in writing this, checking email and goofing off in IRC.

Tomorrow the summit wraps up with a working day with an open agenda for all the teams, so I’ll be spending my day in the Infra/QA/Release Management room.

by pleia2 at November 06, 2014 10:46 PM

Akkana Peck

New GIMP Save/Export plug-in: Saver

The split between Save and Export that GIMP introduced in version 2.8 has been a matter of much controversy. It's been over two years now, and people are still complaining on the gimp-users list.

Early on, I wrote a simple Python plug-in called Save-Export Clean, which saved over an image's current save or export filename regardless of whether the filename was XCF (save) or a different format (export). The idea was that you could bind Ctrl-S to the plug-in and not be pestered by needing to remember whether it was XCF, JPG or what.

Save-Export Clean has been widely cited, and I hope it's helped some people who were bothered by the Save/Export split. But personally I didn't like it very much. It wasn't very flexible -- there was no way to change the filename, for one thing, and it was awfully easy to overwrite an original image without knowing that you'd done it. I went back to using GIMP's separate Save and Export, but in the back of my mind I was turning over ideas, trying to understand my workflow and what I really wanted out of a GIMP Save plug-in.

[Screenshot: GIMP Saver-as... plug-in] The result of that was a new Python plug-in called Saver. I first wrote it a year ago, but I've been tweaking it and using it since then, with Ctrl-S bound to Saver and Ctrl-Shift-S bound to Saver as.... I wanted to make sure that it was useful and working reliably ... and somehow I never got around to writing it up and announcing it formally ... until now.

Saver, like Save/Export Clean, will overwrite your chosen filename, whether XCF or another format, and will mark the image as saved so GIMP won't pester you when you exit.

What's different? Mainly, three things:

  1. A Saver as... option so you can change the filename or file type.
  2. Merges multiple layers so they'll show up properly in your JPG or PNG image.
  3. An option to save as .xcf or .xcf.gz and, at the same time, export a copy in another format, possibly scaled down. So you can maintain your multi-layer XCF image but also update the JPG copy that you're going to put on the web.

I've been using Saver for nearly all my saving for the past year. If I'm just making a quick edit of a JPEG camera image, Ctrl-S overwrites it without questioning me. If I'm editing an elaborate multi-layer GIMP project, Ctrl-S overwrites the .xcf.gz. If I'm planning to export that image for the web, I Ctrl-Shift-S to bring up the Saver As... dialog, make sure the main filename is .xcf.gz, set a name (ending in .jpg) for the exported copy; and from then on, Ctrl-S will save both the XCF and the JPG copy.

Saver is available on my github page, with installation instructions here: GIMP Saver and Save/Export Clean Plug-ins. I hope you find it useful.

November 06, 2014 07:57 PM

Eric Hammond

When Are Your SSL Certificates Expiring on AWS?

If you uploaded SSL certificates to Amazon Web Services for ELB (Elastic Load Balancing) or CloudFront (CDN), then you will want to keep an eye on the expiration dates and renew the certificates well before they expire to ensure uninterrupted service.

If you uploaded the SSL certificates yourself, then of course at that time you set an official reminder to make sure that you remembered to renew the certificate. Right?

However, if you inherited an AWS account and want to review your company or client’s configuration, then here’s an easy command to get a list of all SSL certificates in IAM, sorted by expiration date.

aws iam list-server-certificates \
  --output text \
  --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]' \
  | sort

To get more information on an individual certificate, you might use something like:

certificate_name=...
aws iam get-server-certificate \
  --server-certificate-name $certificate_name \
  --output text \
  --query 'ServerCertificate.CertificateBody' \
| openssl x509 -text \
| less

That can let you review information like the DNS name(s) the SSL certificate is good for.

Exercise for the reader: Schedule an automated job that reviews SSL certificate expiration and generates messages to an SNS topic when certificates are near expiration. Subscribe email addresses and other alerting services to the SNS topic.
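As a rough starting point for that exercise, here is a minimal sketch of what such a scheduled job might look like in Python, assuming the boto3 SDK; the topic ARN and the 30-day threshold are placeholders you would replace with your own, and any AWS SDK (or the aws CLI itself) would work just as well:

#!/usr/bin/env python
# Sketch: warn via SNS about IAM server certificates nearing expiration.
# Assumes boto3 and an existing SNS topic (the ARN below is a placeholder).
import datetime
import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:cert-expiration"  # placeholder
WARN_DAYS = 30  # warn when a certificate expires within this many days

iam = boto3.client("iam")
sns = boto3.client("sns")

now = datetime.datetime.now(datetime.timezone.utc)
expiring = []
for cert in iam.list_server_certificates()["ServerCertificateMetadataList"]:
    days_left = (cert["Expiration"] - now).days
    if days_left <= WARN_DAYS:
        expiring.append("%s expires in %d days (%s)" % (
            cert["ServerCertificateName"], days_left,
            cert["Expiration"].isoformat()))

if expiring:
    # One message listing every certificate that is close to expiring.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="SSL certificates nearing expiration",
        Message="\n".join(expiring),
    )

Run something like this from cron or your scheduler of choice, and subscribe your email addresses and alerting services to the topic.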

Read more from Amazon on Managing Server Certificates.

Note: SSL certificates embedded in web server applications running on EC2 instances would have to be checked and updated separately from those stored in AWS.

Original article: http://alestic.com/2014/11/aws-iam-ssl-certificate-expiration

by Eric Hammond at November 06, 2014 12:35 AM

November 04, 2014

Elizabeth Krumbach

Kilo OpenStack Summit Days 1-2

Saturday morning I arrived in Paris. The weather was gorgeous and I had a wonderful tourist day visiting some of the key sights of the city. I will write about that once I’m home and can upload all my photos, for now I am going to talk about the first couple of days of the OpenStack Summit, which began on Monday.

Both days kicked off with keynotes. While my work focuses on the infrastructure for the OpenStack project itself and I’m not strictly building components of OpenStack that people are deploying, the keynotes are still an inspiration. Companies from around the world get up on the stage and share how they’re using OpenStack to enable their developers to be more innovative by getting them development environments more quickly or how they’re putting serious production load on them in the processing of big data. This year they had BBVA, BMW (along with a stunning i8 driven onto the stage), Time Warner Cable, CERN, Expedia and Tapjoy get up on stage to share their stories.

CERN’s story was probably my favorite (even if the BMW on stage was shiny and I want one). Like many in my field, I hold a hobbyist level interest in science and could geek out about the work being done at CERN for days. Plus, they’re solving some really exceptional problems around massive amounts of big data produced by the LHC using OpenStack and a pile of other open source software.


Tim Bell of CERN

It was exciting to learn that they’re currently running 4 clusters using the latest release of OpenStack, the largest of which has over 70,000 cores across over 3,000 servers. Pretty serious stuff! He also shared some great links during his talk.

I was also delighted to see Jim Zemlin, Executive Director of the Linux Foundation, get on stage on the first day to share his excitement about the success of OpenStack and to tell us all what we wanted to hear: we’re doing great work for open source and are on the right side of history.

In short, the keynotes spoke to both my professional pride in what we’re all working on and the humanitarian and democratization side of technology that so seriously drew me into the possibilities of open source in the first place.

All the keynotes for both days are already online; you can check them out in this YouTube playlist: OpenStack Summit Paris 2014 Keynote Presentations

Back to Monday, I headed over to the other venue to attend a session in the Ops Summit, “Top 10 Pain points from the user survey – how to fix them?” The session began by looking at results from the survey released that day: OpenStack User Survey Insights: November 2014. From that survey, they picked the top-cited issues that operators are having with OpenStack and worked to come up with some concrete issues that the operators could pass along to developers. Much of the discussion ended up focusing on problems with Neutron (including problems with the default configuration) and gaps in Documentation that made it difficult for operators to know that features existed or how to use them. The etherpad for the session goes further into depth about these and other issues raised and added during the session; see it here.

Monday afternoon I met up with Carlos Munoz of Red Hat and Andreas Jaeger of SUSE, who I’ve been working with over the past couple of months to do an in-depth exploration of our options for a new translations system. We have been evaluating both Pootle and Zanata, and though my preference had been Pootle because it is written in Python and apparently popular with other open source projects, the Translations team overwhelmingly preferred Zanata. As Andreas and I went through the Translations Infrastructure we currently have, it was also clear that Zanata was our best option. It was a great meeting, and I’m looking forward to the Translations Tools Session on Thursday at 11AM where we discuss these results with the rest of the Infrastructure team and work out some next steps.


Me, Carlos and Andreas!

From there I went down to the HP Sponsored track where lightning talks were being run during the last two sessions of the day. The room was packed! There were a lot of great presentations which I hope were recorded since I missed the first few. My talk was one of the last, and with a glowing introduction from my boss I gave a 5 minute whirlwind description of elastic-recheck. I fear the jetlag made my talk a bit weaker than I intended, but I was delighted to have 3 separate conversations about elastic-recheck and general failure tracking on CI systems that evening with people from different companies trying to do something similar. My slides are available here: Automated failure aggregation & detection with elastic-recheck slides (pdf).

On Tuesday morning I was up bright and early for the Women of OpenStack breakfast. Waking up with a headache made me tempted to skip it, but I’m glad I didn’t. The event kicked off with some stats from a recent poll of members of the Women of OpenStack LinkedIn group. It was nice to see that 50% of those who responded were OpenStack ATCs (Active Technical Contributor) and many of those who weren’t identified themselves as having other technical roles (not that I don’t value non-technical women in our midst, but the technical ones are My Tribe!).

Following the results summaries, we split into 4 groups to talk about some of the challenges facing us as a minority in the OpenStack community and came up with 4 problems and solutions: Coaching for building confidence, increasing profile and communication for and around the Women of OpenStack group, working to get more women in our community doing public speaking and helping women rejoin the community after a gap in involvement (bonus: this can directly help men too, but more women go through it when taking time off for children). The group decided on focusing on getting the word out about the community for now, seeking to improve our communication mechanisms and see about profiling some women in our community, as well as creating some space where we can put our basic information about what we’re working on and how to contact us. I was really happy with how this session went, kudos to all the amazing women who I got to interact with there, and sorry for being so shy!

After keynotes, I headed back over to the Design Summit venue to attend a couple cross-project testing-focused sessions: “DefCore, RefStack, Interoperability, and Tempest” (etherpad here) and Moving Functional Tests to Projects (etherpad here). One of the most valuable things I got out of these sessions was that projects really need to do a better job of communicating directly with each other. Currently so much is funneled through the Quality Assurance team (and Infrastructure team) because they run the test harness where things fail. Instead, it would be great to see some more direct communication between these projects, and splitting out some of the functional tests may be one way to help socially engineer this.

Following lunch and a quick meeting, I was off to “Changes to our Requirements Management Policy” (etherpad here) and then “Log Rationalization” (etherpad here). There seemed to be more work accomplished on the latter, which was nice to see since there’s a stalled specification up that it would be great to see moved along so that the project can come up with some guidelines for log levels. Operators have been reporting both that they often run logging at DEBUG level all the time so they can see even some of the more basic problems that crop up, AND are frustrated by some “non-issues” being promoted to WARNING and filling their logs with unnecessary stack traces.

Next up was the Gerrit third-party CI discussion session. I wasn’t sure what to expect from this session, but the self-selected group (many were more involved with OpenStack than was assumed, but they did come all the way to the summit…) was much more engaged than I had feared. Talk in the session centered around how to get more third party operators involved with the growing third party community, one suggestion being moving the meeting time to a more European friendly time every other week. There was also discussion around the need for improved documentation and I raised my hand about helping with a more dynamic dashboard for automatically determining the status of third party systems without manual notifications from operators. Etherpad here.

The last session of my very long day was “Translators / I18N team meetup” where the group sought to promote translations to grow the community and recognize translators, etherpad here. As I mentioned earlier, I’m working on some of the new tooling that the team will use, so in spite of only speaking English, I was able to chime in a bit on the technical side of making some of the recognitions and other statistics available once we switch back to an open source platform for translations.

Then it was off to the HP party at Musée des Arts Forains. Open for private events only, the venue hosts a collection of antique/vintage (dating from 1850-1950) games, rides and other fair-related objects. I played a couple of the games and enjoyed snacks and wine throughout the evening. It was certainly busy and some areas were quite loud and crowded, but it was easy to find large areas where the volume was quite conducive to conversations – of which I had many.

Social events and parties are not really my thing, but this one I really enjoyed. Transportation to the venue included an optional guided bus tour past many of the stunning sights of Paris at night. And they began running shuttles back to the conference center at 9PM – which I figured I’d catch then, but it was after 10PM before I made my way back to the bus. I think what I really don’t like are club-like parties with loud music and nothing interesting to occupy myself with when I find myself frequently wandering around solo (apparently I’m a lousy pack animal). The ability to stop and play games, explore the interesting food offerings and run into lots of people I know made the evening fly by.

Huge thanks to my friends and colleagues at HP for putting on such a comfortable and exciting event, this one will be hard to top in my awesome-events-at-conferences ledger.

Tomorrow we begin the hardcore part of the conference for me, kicking off with an Infrastructure session at 9AM and moving through various QA and Infrastructure sessions going on through the rest of the week. Since it’s nearing 1AM, I should get some sleep!

by pleia2 at November 04, 2014 11:47 PM

November 03, 2014

Jono Bacon

Dealing With Disrespect: The Video

A while back I wrote and released a free e-book called Dealing With Disrespect.

It is a book that provides a short, simple to read, free guide for handling personalized, mean-spirited, disrespectful, and in some cases, malicious feedback and criticism of your work. I wrote it because this kind of feedback is increasingly common in online communities and places such as YouTube, reddit, and elsewhere, and I am tired of seeing good people give up sharing their creative efforts because of this.

My goal with the book is that when someone reads mean-spirited feedback and criticism and feels demotivated, others can point them to the book as a means of helping to put things in perspective.

Well, to make this content even easier to consume, I recorded a presentation version of it and put it up on my YouTube channel:

Can’t see it? Watch it here!

by jono at November 03, 2014 10:13 PM

Elizabeth Krumbach

Wedding and week in Florida

All this travel is leaving me in the unfortunate position of having a growing pile of blog posts queuing up, which will only get worse as the OpenStack Summit continues this week, so I better get these out! I’m now in Paris for the summit, but last week I was in Florida for MJ’s cousin Stephanie’s wedding.

I arrived on Friday afternoon from Raleigh and MJ picked me up at the airport, getting us to the hotel just in time to get changed for a family and friends gathering the evening before the wedding.

Saturday we were able to enjoy the beach and pools at the hotel with some of MJ’s cousins. The weather was great, even the humidity was quite low, relative to what I tend to expect from Florida.

As the day wound down, we got ready for the wedding!

The ceremony and reception took place at a beautiful country club not far from the hotel. As an attendee, it seemed like everything went very well. The reception was fun, lots of great food, a fun, sparkly signature drink and some stunning centerpieces decorating the dinner tables. I even danced a little.

Unfortunately I picked up a cold somewhere along the way, and spent all of Sunday in bed while MJ spent more time with family and pools. By Monday I was feeling a bit better and was able to see MJ off and get moved over to the beach motel where I spent the rest of the week.

My beach motel wasn’t the greatest place, but it was inexpensive, clean and ultimately quite tolerable. The plan to stay in Florida, in spite of my general “I don’t like Florida” attitude, was to avoid going all the way back to California prior to my Paris trip. And I have to say, with nice October weather and the views at sunset, I think it was the right choice.

My days were spent catching up with work post-conference and preparing for the summit this week. Thankfully it wasn’t very hot out, so I was able to open the windows during the day and let fresh air into my rooms. I also made plans throughout the week to visit with family in the area, managing to meet up with my cousin Shannon and her family, my Aunt Pam, and my Aunt Meg and cousin Melissa.


At dinner with Shannon, Rich & Frankie

I also was able to take some long lunch breaks to enjoy a few quick dips in the ocean.

The San Francisco Giants won the World Series while I was in Florida too! I was able to watch the games in my room each night. I was disappointed not to be in town for the win, as the whole city explodes in celebration when there’s a win like this. My week wrapped up on Friday when I checked out of the motel and headed toward the airport for my redeye flight to Paris. And since I was also disappointed to be missing Halloween in San Francisco again, I dressed up for my flight, as Carmen Sandiego.

by pleia2 at November 03, 2014 09:57 PM

November 01, 2014

Elizabeth Krumbach

All Things Open 2014

From Oct 22-23rd I had the pleasure of speaking at and attending All Things Open in Raleigh, North Carolina. Of all the conferences I’ve attended this year, this conference is one of the most amazing when it comes to how well they treated their speakers. When I submitted my talk I received an email from the conference organizer thanking me for the submission. Frequent emails were sent keeping us informed about all the speaker-focused conference details. Leading up to the event I woke up one morning to this flattering profile on their news feed. A series of interviews was also published by the OpenSource.com folks featuring speakers. Once there, I was thanked about 100 times throughout the 2 day event. In short, they really did a remarkable job making me feel valued.

Thankfulness aside, the conference was top notch. Several months back I read The foundation for an open source city by Jason Hibbets, so I was excited to go to Raleigh (where much of the work Hibbets wrote about is centered) and doubly amused when Jason said hello to me and I got to say “hey, I read your book!” During the conference introduction they said the attendance last year (their first year) was around 700 and that they were looking at 1,100 this year. The conference was opened by Raleigh Mayor Nancy McFarlane, which was pretty exciting. I’d seen cities send CTOs or supervisors, but having the mayor herself show up was quite the showing of support.

After her keynote came Jeffrey Hammond, VP & Principal Analyst at Forrester Research. I really enjoyed the statistics his company put together regarding the amount of open source software being used today. For instance, of the developers surveyed, 4/5 are using open source software and 73% are programming outside of their paid job, 27% of them on open source.

Right after the keynotes I headed downstairs to give my talk, Open Source Systems Administration. A blending of my passion for open source and love of systems administration, this is one of my favorite talks to give; I really enjoy being able to present on how the OpenStack infrastructure itself is an open source project. It was a lot of fun chatting with people throughout the rest of the conference who had attended (or missed) my talk. While there is less surprise these days that a project would open source an infrastructure, there’s a lot of interest in learning that there are projects which actually have and how we’ve done it. Slides from my talk here: ATO-opensource_sysadmin.pdf (2.3M).


Giving my talk! Thanks to Charlie Reisinger for this photo.

The schedule made it hard to select talks, but I next decided to head over to the Design track to learn from Garth Braithwaite why Open Source Needs Design. I’ll start off by saying that it’s wonderful that there are some designers participating in open source these days, but as Garth points out in his talk they are generally: paid by a company as a designer to focus on the product (the open sourceyness of it doesn’t matter, it’s a job), a designer friend of someone in the project who is helping out, or a developer on the project who happens to have some design expertise (or is willing to get some in order to help the project). He explored some of the history of how developers made their way to open source and the tools we used, and explained that this “story” doesn’t exist for designers: why would they get involved? They’re not fixing a printer or solving some tricky problem. The tools for open collaboration for designers also don’t really exist; popular sites for design sharing like Dribbble don’t have source upload options and portfolio sites like BeHance lack any ability for collaboration. The new DesignOpen.org seeks to help change that, so it was interesting to learn about it. From there he detailed different types of design work (UX, IxD and UI) and the tools and deliverables for each type of work. As someone who has never really worked with design, it was an interesting tour of that space. His slides from the talk are available here: speakerdeck.com/garthdb/open-source-needs-design (the first few slides are mostly images, but stick with it, some great slides with bullet points come later!).

Then it was off to see Lessons Learned with Distributed Systems at bit.ly presented by Sean O’Connor (it was a pleasure to meet him and colleague Peter Herndon during the keynote earlier in the day). The talk centered around some of the concerns when architecting systems at scale, from time synchronization to having codebases that are debuggable. At bit.ly they adopted a codebase that is broken out into many small pieces, allowing ops to dig into and learn about specific components when something goes wrong, not necessarily having to learn everything all at once in order to do their job effectively. He also went into how they’ve broken their workload up into what has to be done synchronously and what can be shifted into an asynchronous job, which is preferred because it’s easier to do well. Finally, he talked some about how they deal with failure, starting off with actually having a plan for failure, and doing things like backoffs, where the retries end up spaced out over time rather than hammering the service constantly until it has returned.

After lunch I decided to check out the Messaging Standards and Systems – AMQP & RabbitMQ talk by Gavin M. Roy. I’ve used RabbitMQ a fair amount, but that doesn’t mean I’ve ever paid attention to AMQP (Advanced Message Queuing Protocol), so I was pretty surprised to learn that releases 0-8 and 0-9-1 are very different from the 1.0 release and are effectively overseen by different people, with many users still intentionally on 0-9-1. Good to know; I imagine that causes a ridiculous amount of confusion. He went through some of the architecture of how RabbitMQ can be used and things it does to “fix” issues encountered with the default AMQP 0-9-1. Slides from his talk here: speakerdeck.com/gmr/messaging-standards-and-systems-amqp-and-rabbitmq (the exchange slides about halfway through are quite helpful).

I was then off to Saving the World with Open Source and Science presented by Dr. Marcus Hanwell. Given my job working on OpenStack, I perhaps have the distinct benefit of being exposed to scientists who understand how to store, process and present big data, plus who understand open source. I assumed this was ubiquitous, so this talk was quite the wake up call. Not only are publicly-funded papers not available for free (perhaps a whole different rant), the papers often don’t have enough data for the results to be reproducible. Sources from which data was processed aren’t released (whether it be raw data, source code used to make computations or, seriously, an Excel spreadsheet with some data+formulas), and images are shrunk and stripped of all metadata, so it can be impossible to determine whether you’re actually seeing the same thing. Worse, most institutions have no way to index this source material at all, so something as simple as a hard drive failure on a laptop can mean loss of this precious data. Wow, how depressing. But the talk was actually a call for action in this space. As technologists there are things we can do to provide solutions to scientists, and scientists working in research can make social changes so that releasing full sources, code and more becomes something valued and validation of results is something that once again becomes central to all scientific research.

Day one completed with a keynote by Doug Cutting titled “Pax Data,” which was a fascinating look into the world we’re building where the collection of data is What We Do. He began by talking about how in most science fiction the collectors of data end up being the Bad Guys in a future dystopia, but the fact is that sectors from Education to Healthcare to Climate can benefit from the collection and analysis of big data. He posed the question to the audience: How do we do this without becoming those Bad Guys? He admitted not having a full answer, but provided some guidance on key things that would be required, including transparency, best practices around data handling, definition of data usage abuse so we can protect against it, and either government or industry oversight and/or regulation. A fascinating talk for me, particularly as I was in the middle of reading both a SciFi dystopia book where big data becomes really scary (The Circle by Dave Eggers) and a non-fiction book about our overuse of technology (Program or be Programmed).

Day 2! Keynotes began with a talk by James Pearce of Facebook. I know Facebook is pretty much built on open source (just like everyone else), but this talk was about the open source program he and his team have built within Facebook starting about a year ago. As is standard for many companies starting with open source, they’d just “throw things over the wall” and expect the code to be useful to the community. It wasn’t. So they then began seriously working to develop the code they were open sourcing, assigning people internally to be the caretakers of projects, and judging the health of projects based on metrics like forks and commits from community members outside of Facebook. They also run much of the same code internally as they release to the community. The Github profile for Facebook is here: https://github.com/facebook. Very nice work!

The next keynote was by DeLisa Alexander of Red Hat on Women in Open Source. She started out with a history lesson about how the first real programmers were women and stressed why diversity is important in our industry. Stories about how the most successful women in open source have had encouragement of some form from their peers, and how important it is that everyone in the audience seek to do that with newcomers to their community, particularly women. It was also interesting to hear her talk about how children now often think of computers as opaque black boxes that they can’t influence, so it’s important to engage children (including girls) at a young age to teach them that they can make changes to the software and platforms they use.

Alexander also hosted a panel at lunch which I participated in on this topic. I was really honored to be a part of the panel; it was packed with very successful women in tech and open source. Jen Wike Huger wrote up some of her notes in a great article here: Keys to diversity in tech are more simple than you think. My own biggest takeaway from the panel was the realization that everyone on the panel has spent a significant amount of time being a mentor in some formal capacity. We’ve all supported students and other women in technology via organizations that we either work or volunteer for, or run ourselves.

Getting back to sessions, I went to Steven Vaughan-Nichols’ talk on Open Source, Marketing, and Using the Press. Now, technically I’m the Marketing Lead for Xubuntu, but I somewhat joke to people that it’s “only because I know how to use Twitter.” Amusingly, during his talk he covered people just like myself, project contributors who end up with the Marketing role. I gained a number of great insights from this talk, including defining your marketing audience properly – there’s your community and then there’s the rest of the world. He gave tips on knowing your customer; maybe we should do a more formal survey in Xubuntu about some of the decisions we make rather than relying upon sporadic social media feedback and expecting users to participate in development discussions? He also drove home the importance of branding, which thanks to our logo designer Pasi Lallinaho I believe we have done a good job of. There was also a crash course in communicating with the press: know who you’re contacting and what their focus is, be clear and concise in emails, and explain the context in which your news is exciting. Oh, and be friendly and reply promptly when reporters contact you. I also realized I should add our press contact to our website, that’s a good idea! I have some updates to make to the Xubuntu Marketing blueprint this cycle.

Perhaps one of my favorite talks of the event was presented by Dr. Megan Squire: Case Study: We’re Watching You: How and Why Researchers Study Open Source And What We’ve Found So Far. I think what I found most interesting is that while I see polls from time to time put out by people claiming to do research on open source, I never see the results of that research. Using what I now know from Dr. Marcus Hanwell (many academic papers are locked behind journal pay walls), this suddenly makes sense. But Dr. Squire’s talk dove into the other side of research that doesn’t include polls: research done on data, or “artifacts,” that open source projects create. Artifacts are pretty much anything that is public as a result of a project existing, from obvious things like code to the public communication methods we use like IRC and mailing lists. This is what is at the heart of a duo of websites she runs, the first being FLOSSmole, which connects well-formatted data about projects with researchers interested in doing datamining against it, and FLOSShub, which is a collection of papers she’s gathered about open source so it’s all in one place and we can see what kind of research is being done. Aside from her great presentation style, I think what made this one of my favorites was the fact that I didn’t know this was happening. I make FOSS artifacts all day long, both in my day job and with my open source hobbies, and sure, I know it’s out there for anyone to find, piles of IRC logs, code reviews, emails, but learning that academics are actively processing them is another thing entirely. For instance, to take an example from a project I work on, I had no idea this existed: Estimating Development Effort in Free/Open Source Software Projects by Mining Software Repositories: A Case Study of OpenStack. It made me a bit tin-foil-hat for about 5 minutes until I once again realized that I’m not just fine, but happy to be putting my work out there. Huge thanks to her for doing this presentation and maintaining these really valuable websites.

Slides from her presentation are up on Google docs here and are well worth the browse for examples she uses to illustrate how our artifacts are being used in research.

After lunch I attended my last three talks for the conference, the first one being Software Development as a Civic Service presented by Ben Balter. I’ve attended a number of civic hacking focused talks at events over the past couple years, but this one wasn’t strictly talking about a specific project or organization in this space. Instead he focused on the challenges that confront governments and us as technologists as we attempt to enter the government space, and it led to one of my favorite (sad!) slides of the event, in which you will note that doing anything remotely modern (use of public package repositories, configuration management or source control) doesn’t factor in.

He talked about how some government organizations are simply blinded by proprietary sales talk and FUD around open source, while others actually are bound by specific governmental requirements in the software that industries have figured out, but open source projects don’t think to include (i.e. an open source CMS may get them 99% of the way there, but this company is offering something that satisfies everything because it’s their job to do so). He also talked some about the “Command and Control” structure inside of government and how transparency can often be seen as a liability rather than the strength that we’ve come to trust in within the open source community. He wrapped up with some success stories from the government, like petitions.whitehouse.gov and GOV.UK, and shared some stats about the increase of known government employees collaborating on Github.

The next talk was by Phil Shapiro on Open Sourcing the Public Library. He began his talk by discussing how open source has a major opportunity as libraries move from the analog to the digital space. He then moved on to a fact he wanted to stress: libraries are owned by all of us. There is an effort to transform them from the community “reading room” into the community “living room” where people share ideas and collaborate on projects, bringing in more educational resources in the form of classes and the building of maker spaces. I love this idea; I find Hackerspaces to be unintentionally hostile places for many young women, so providing a different option to accomplish similar goals is appealing to me. I think what struck me most about this was how “open sourcey” it felt, people coming together to build something new in the open in their community, it’s why I work on any of this at all. He shared a link of some collected writings about the future of Libraries here: https://sites.google.com/site/librarywritings/

The final talk of the day I attended was Your Company Culture is Awesome (But is Company Culture a Lie?) by Pamela Vickers. In her talk she identified the trend in technology of offering “perks” in lieu of an actually healthy work environment. These perks often end up masking real underlying unhappiness among employees and ultimately lead to a loss of talent. She suggested that companies take a step back from their pile of perks and make sure they’re actually meeting the core needs of their employees. Are your developers happy? How do you know? Are you asking them? You should, and your employees should trust you enough to be honest with you, knowing you’ll at least professionally acknowledge their feedback. She also highlighted some of the key places where companies fall down on keeping their developers happy: forcing them to use the wrong tools, upsetting a healthy work-life balance, giving them too much work or projects that don’t feel achievable, and assigning boring or unimportant projects.

To wrap this up, huge thanks to everyone who worked on and participated in this conference. As a conference sponsor, my employer (HP) had a booth, but unfortunately I was the only one able to attend. I spent breaks and lunches at the booth (leaving a friendly note when I was away) and had some great chats with folks looking for Python jobs or more generally interested in the work we’re doing in the open source space. It can still strike people as unusual that HP is so committed to open source, so it’s nice to be available not only to give numbers, but to be a living, breathing example of someone HP pays to contribute to open source.

by pleia2 at November 01, 2014 10:31 PM

October 31, 2014

Akkana Peck

Simulating a web page timeout

Today dinner was a bit delayed because I got caught up dealing with an RSS feed that wasn't feeding. The website was down, and Python's urllib2, which I use in my "feedme" RSS fetcher, has an inordinately long timeout.

That certainly isn't the first time that's happened, but I'd like it to be the last. So I started to write code to set a shorter timeout, and realized: how does one test that? Of course, the offending site was working again by the time I finished eating dinner, went for a little walk, then sat down to code.

I did a lot of web searching, hoping maybe someone had already set up a web service somewhere that times out for testing timeout code. No such luck. And discussions of how to set up such a site always seemed to center around installing elaborate heavyweight Java server-side packages. Surely there must be an easier way!

How about PHP? A web search for that wasn't helpful either. But I decided to try the simplest possible approach ... and it worked!

Just put something like this at the beginning of your HTML page (assuming, of course, your server has PHP enabled):

<?php sleep(500); ?>

Of course, you can adjust that 500 to however many seconds of delay you like.

Or you can even make the timeout adjustable, with a few more lines of code:

<?php
// Sleep for the number of seconds given in ?timeout=N, defaulting to 500.
if (isset($_GET['timeout']))
    sleep((int)$_GET['timeout']);
else
    sleep(500);
?>

Then surf to yourpage.php?timeout=6 and watch the page load after six seconds.

Simple once I thought of it, but it's still surprising no one had written it up as a cookbook formula. It certainly is handy. Now I just need to get some Python timeout-handling code working.
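For the record, here's a minimal sketch of the sort of timeout handling I have in mind (this assumes Python 2's urllib2, since that's what feedme uses; the fetch_feed name and the ten-second default are just placeholders, not feedme's actual code):

import socket
import urllib2

def fetch_feed(url, timeout=10):
    # Give up after `timeout` seconds instead of hanging indefinitely.
    try:
        return urllib2.urlopen(url, timeout=timeout).read()
    except socket.timeout:
        print "Timed out fetching", url
        return None
    except urllib2.URLError as e:
        # A timeout can also arrive wrapped inside a URLError.
        if isinstance(e.reason, socket.timeout):
            print "Timed out fetching", url
            return None
        raise

Pointed at the PHP page above, something like fetch_feed('http://example.com/yourpage.php?timeout=30', timeout=5) should give up after about five seconds instead of waiting out the whole sleep.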

October 31, 2014 01:38 AM

October 24, 2014

Akkana Peck

Partial solar eclipse, with amazing sunspots

[Partial solar eclipse, with sunspots] We had perfect weather for the partial solar eclipse yesterday. I invited some friends over for an eclipse party -- we set up a couple of scopes with solar filters, put out food and drink and had an enjoyable afternoon.

And what views! The sunspot group right at the center of the sun's disk was the largest and most complex I'd ever seen, and there were some much smaller, more subtle spots in the path of the eclipse. Meanwhile, the moon's limb gave us a nice show of mountains and crater rims silhouetted against the sun.

I didn't do much photography, but I did hold the point-and-shoot up to the eyepiece for a few shots about twenty minutes before maximum eclipse, and was quite pleased with the result.

An excellent afternoon. And I made too much blueberry bread and far too many oatmeal cookies ... so I'll have sweet eclipse memories for quite some time.

October 24, 2014 03:15 PM

Jono Bacon

Bad Voltage Turns 1

Today Bad Voltage celebrates our first birthday. We plan on celebrating it by having someone else blow out our birthday candles while we smash a cake and quietly defecate on ourselves.

For those of you unaware of the show, Bad Voltage is an Open Source, technology, and “other things we find interesting” podcast featuring Stuart Langridge (LugRadio, Shot Of Jaq), Bryan Lunduke (Linux Action Show), Jeremy Garcia (Linux Questions), and myself (LugRadio, Shot Of Jaq). The show takes a fun but informed look at various topics, and includes interviews, reviews, competitions, and challenges.

Over the last year we have covered quite the plethora of topics. This has included VR, backups, atheism, ElementaryOS, guns, bitcoin, biohacking, PS4 vs. XBOX, kids and coding, crowdfunding, genetics, Open Source health, 3D printed weapons, the GPL, work/life balance, Open Source political parties, the right to be forgotten, smart-watches, equality, Mozilla, tech conferences, tech on TV, and more.

We have interviewed some awesome guests including Chris Anderson (Wired), Tim O’Reilly (O’Reilly Media), Greg Kroah-Hartman (Linux), Miguel de Icaza (Xamarin/GNOME), Stormy Peters (Mozilla), Simon Phipps (OSI), Jeff Atwood (Discourse), Emma Marshall (System76), Graham Morrison (Linux Voice), Matthew Miller (Fedora), Ilan Rabinovitch (Southern California Linux Expo), Daniel Foré (Elementary), Christian Schaller (Red Hat), Matthew Garrett (Linux), Zohar Babin (Kaltura), Steven J. Vaughan-Nichols (ZDNet), and others.

…and then there are the competitions and challenges. We had a debate where we had to argue the opposite of what we actually think, we had a rocking poetry contest, challenged our listeners to mash up the shows to humiliate us, ran a selfie competition, and more. In many cases we punished each other when we lost, and we even tried to take on a sausage company.

It is all a lot of fun, and if you haven’t checked the show out, be sure to head over to www.badvoltage.org and load up on some shows.

One of the most awesome aspects of Bad Voltage is our community. Our pad is at community.badvoltage.org and we have a fantastically diverse community of different ideas, perspectives, and viewpoints. In many cases we have discussed a topic on the show and there has been a long, interesting (and always respectful) debate on the forum. It is so much fun to be around.

I just want to say a huge thank-you to everyone who has supported the show and stuck with us through our first year. We have a lot of fun doing it, but the Bad Voltage community makes every ounce of effort worthwhile. I also want to thank my fellow presenters, Bryan, Stuart, and Jeremy; it is a pleasure getting to shoot the proverbial with you guys every few weeks.

Live Voltage!

Before I wrap up, I need to share an important piece of information. The Bad Voltage team will be performing our very first live show at the Southern California Linux Expo on the evening of Friday 20th Feb 2015 in Los Angeles.

We can’t think of a better place to do our first live show than SCALE, and we hope to see you there!

by jono at October 24, 2014 04:39 AM