Planet Ubuntu California

August 27, 2015

Jono Bacon

Ubuntu, Canonical, and IP

Recently there has been a flurry of concerns relating to the IP policy at Canonical. I have not wanted to throw my hat into the ring, but I figured I would share a few simple thoughts.

Firstly, the caveat. I am not a lawyer. Far from it. So, take all of this with a pinch of salt.

The core issue here seems to be whether the act of compiling binaries provides copyright over those binaries. Some believe it does, some believe it doesn’t. My opinion: I just don’t know.

The issue here though is with intent.

In Canonical’s defense, and specifically Mark Shuttleworth’s defense, they set out with a promise at the inception of the Ubuntu project that Ubuntu would always be free. The promise was that there would not be a hampered community edition alongside a full-flavor enterprise edition. There would be one Ubuntu, available freely to all.

Canonical, and Mark Shuttleworth as a primary investor, have stuck to their word. They have not gone down the road of the community and enterprise editions, of per-seat licensing, or some other compromise in software freedom. Canonical has entered multiple markets where having separate enterprise and community editions could have made life easier from a business perspective, but they haven’t. I think we sometimes forget this.

Now, from a revenue side, this has caused challenges. Canonical has invested a lot of money in engineering, design, and marketing, and some companies have used Ubuntu without contributing even nominally to its development. Thus, Canonical has at times struggled to find the right balance between a free product for the open source community and revenue. We have seen efforts such as training services, Ubuntu One, and others; some have failed, some have succeeded.

Again though, Canonical has made their own life more complex with this commitment to freedom. When I was at Canonical I saw Mark very specifically reject notions of compromising on these ethics.

Now, I understand the rationale behind this IP policy from Canonical’s perspective. Canonical invests in staff and infrastructure to build binaries that are part of a free platform and that other free platforms can use. If someone else takes those binaries and builds a commercial product from them, I can understand Canonical being a bit miffed about that and asking the company to pay it forward and cover some of the costs.

But here is the rub. While I understand this, it goes against the grain of the Free Software movement and the culture of Open Source collaboration.

Putting the legal question of copyrightable binaries aside for one second, the current Canonical IP policy is just culturally awkward. I think most of us expect that Free Software code will result in Free Software binaries, and claiming that those binaries are limited or restricted in some way seems unusual and the antithesis of the wider movement. It feels, frankly, like an attempt to find a loophole in a collaborative culture whose connective tissue is freedom.

Thus, I see this whole thing from both angles. On one hand, Canonical is trying to find the right balance of revenue and software freedom; on the other, I sympathize with the critics that this IP approach feels like a pretty weak way to accomplish that balance.

So, I ask my humble readers this question: if Canonical reverts this IP policy and binaries are free to all, what do you feel is the best way for Canonical to derive revenue from their products and services while also committing to software freedom? Thoughts and ideas welcome!

by jono at August 27, 2015 11:59 PM

Elizabeth Krumbach

Travels in Peru: Machu Picchu

Our trip to Peru first took us to the cities of Lima and Cusco. We had a wonderful time in both, seeing the local sights and dining at some of their best restaurants. But if I’m honest, we left the most anticipated part of our journey for last: visiting Machu Picchu.

Before I talk about our trip to Machu Picchu, there are a few things worthy of note:

  1. I love history and ruins
  2. I’ve been fascinated by Peru since I was a kid
  3. Going to Machu Picchu has been a dream since I learned it existed

So, even being the world traveler that I am (I’d already been to Asia and Europe this year before going to South America), this was an exceptional trip for me. Growing up, our landlord was from Peru, and as a friend of his daughters I regularly got to see their home, which was full of Peruvian knickknacks and artifacts. As I dove into history during high school I learned about ancient ruins all over the world, from Egypt to Mexico and of course Machu Picchu in Peru. The mysterious city perched upon a mountaintop always held a special fascination for me. When the opportunity to go to Peru for a conference came up earlier this year, I agreed immediately and began planning. I had originally planned to go alone, but MJ decided to join me once I found a tour I wanted to book with. I’m so glad he did. Getting to share this experience with him meant the world to me.

Our trip from Cusco began very early on Friday morning in order to catch the 6:40AM train to Aguas Calientes, the village below Machu Picchu. Our tickets were for Peru Rail’s Vistadome train, and I was really looking forward to the ride. On the disappointing side, the windows were foggy for the Cusco half of the trip, and the glare on them generally made it difficult to take pictures. But as we descended in elevation my altitude headache went away, and so did the condensation on the windows. The glare was still an issue, but as I settled in I just enjoyed the sights and didn’t end up taking many photos. It was probably the most enjoyable train journey I’ve ever been on. At 3 hours it was long enough to feel settled in and relaxed watching the countryside, rivers and mountains go by, but not so long that I got bored. I brought along my Nook but didn’t end up reading at all.

Of course I did take some pictures, here: https://www.flickr.com/photos/pleia2/albums/72157657450179755

Once at Aguas Calientes our overnight bags (big suitcases were left at the hotel in Cusco, as is common) were collected and taken to the hotel. We followed the tour guide who met us with several others to take a bus up to Machu Picchu!

Our guide gave us a three hour tour of the site. At a medium pace, he took us to some of the key structures and took time for photo opportunities all around. Of particular interest to him was the Temple of the Sun (“J” shaped building, center of the photo below), which we saw from above and then explored around and below.

The hike up for these amazing views wasn’t very hard, but I was thankful for the stops along the way as he talked about the exploration and scientific discovery of the site in the early 20th century.

And then there were the llamas. Llamas were brought to Machu Picchu in modern times, some say to trim the grass and others say for the tourists. It seems to be a mix of the two, and there is still a full staff of groundskeepers to keep tidy what the llamas don’t manage. I managed to get this nice people-free photo of a llama nursing.

There seem to be all kinds of jokes about “selfies with llamas” and I was totally in for that. I didn’t get right next to a llama like some of my fellow selfie-takers, but I did get my lovely distance selfie with llamas.

Walking through what’s left of Machu Picchu is quite the experience: tall stone walls and stepped terraces make up the whole site, with lots of climbing and walking at various elevations throughout the mountaintop. Even going through the ruins in Mexico didn’t quite prepare me for what it’s like to be on top of a mountain like this. Amazing place.

We really lucked out with the weather, much of the day was clear and sunny, and quite warm (in the 70s). It made for good walking weather as well as fantastic photos. When the afternoon showers did come in, it was just in time for our tour to end and for us to have lunch just outside the gates. When lunch was complete the sun came out again and we were able to go back in to explore a bit more and take more pictures!

I feel like I should write more about Machu Picchu, it being such an epic event for me, but it was more of a visual experience, much better shared via photos. I uploaded over 200 more photos from our walk through Machu Picchu here: https://www.flickr.com/photos/pleia2/albums/72157657449734565

My photos were taken with a nice compact digital camera, but MJ brought along his DSLR camera. I’m really looking forward to seeing what he ended up with.

The park closes at 5PM, so close to that time we caught one of the buses back down to Aguas Calientes. I did a little shopping (went to Machu Picchu, got the t-shirt). We were then able to check into our hotel, the Casa Andina Classic, which ended up being my favorite hotel of the trip; it was a shame we were only there for one night! A hot, high-pressure shower, a comfortable bed, and a lovely view of the river that runs along the village:

I was actually so tired from all our early mornings and late evenings the rest of the trip that after taking a shower at the hotel that evening I collapsed onto the bed and instead of reading, zombied out to some documentaries on the History channel, after figuring out the magical incantation on the remote to switch to English. So much for being selective about the TV I watch! We also decided to take advantage of the dinner that was included with our booking and had a really low key, but enjoyable and satisfying meal there at the hotel.

The next morning we took things slow and did some walking around the village before lunch. Aguas Calientes is very small; it’s quite possible that we saw almost all of it. I took the opportunity to also buy some postcards to send to my mother and sisters, plus find stamps for them. Finding stamps is always an interesting adventure. Our hotel couldn’t post them for me (or sell me stamps), and being a Saturday we struck out at the actual post office, but we found a corner tourist goodie shop that sold them and a mailbox nearby so I could send them off.

For lunch we made our way past all the restaurants that were trying to get us in their doors by telling us about their deals and pushing menus our way, until we found what we were looking for: a strange little place called Indio Feliz. I found it first in the tour book I’d been lugging around, typical tourist that I am, and followed up with some online recommendations. The decor is straight up Caribbean pirate themed (what?) and, with a French owner, they specialize in Franco-Peruvian cuisine. We did the fixed menu where you pick an appetizer, entree and dessert, though it was probably too much for lunch! They also had the best beer menu I had yet seen in Peru; finally far from the altitude headache of Cusco, I had a Duvel and MJ went with a Chimay Red. Food-wise I began with an amazing avocado and papaya in lemon sauce. My entree was an exceptional skewer of beef with an orange sauce, and my meal concluded with coffee and apple pie that came with both custard and ice cream. While there we got to chat with some fellow diners from the US; they had just concluded the 4 day Inca Trail hike and regaled us with stories of rain and exhaustion as we swapped small talk about the work we do.

More photos from Aguas Calientes here: https://www.flickr.com/photos/pleia2/albums/72157657449826685

After our leisurely lunch, it was off to the train station. We were back on the wonderful Vistadome train, and on the way back to Cusco there was some culturally-tuned entertainment as well as a “fashion show” featuring local clothing they were selling, mostly of alpaca wool. It was a fun touch, as the ride back was longer (going up the mountains) and, it being wintertime, the last hour or so of the ride was in the dark.

We had our final night in Cusco, and Sunday was all travel. A quick flight from Cusco to Lima, where we had 7 hours before our next flight and took the opportunity to have one last meal in Lima. Unfortunately the timing of our stay meant that most restaurants were in their “closed between lunch and dinner” time, so we ended up at Larcomar, a shopping complex built into an oceanside cliff in Miraflores. We ate at Tanta, where we had a satisfying lunch with a wonderful ocean view!

Our late lunch concluded our trip; from there we went back to Lima airport and began our journey back home via Miami. I was truly sad to see the trip come to an end. Oftentimes I am eager to get home after such an adventurey vacation (particularly when it’s attached to a conference!), but I will miss Peru. The sights, the foods, the llamas and alpacas! It’s a beautiful country that I hope to visit again.

by pleia2 at August 27, 2015 02:50 AM

August 26, 2015

Akkana Peck

Switching to a Kobo e-reader

For several years I've kept a rooted Nook Touch for reading ebooks. But recently it's become tough to use. Newer epub books no longer work on any version of FBReader still available for the Nook's ancient Android 2.1, and the Nook's built-in reader has some fatal flaws: most notably, there's no way to browse books by subject tag, and it's painfully slow to navigate a library of 250 books when you have to start from the As and page slowly forward six books at a time to get to T.

The Kobo Touch

But with my Nook unusable, I borrowed Dave's Kobo Touch to see how it compared. I like the hardware: same screen size as the Nook, but a little brighter and sharper, with a smaller bezel around it, and a spring-loaded power button in a place where it won't get pressed accidentally when it's packed in a suitcase -- the Nook was always coming on while in its case, and I didn't find out until I pulled it out to read before bed and discovered the battery was too low.

The Kobo worked quite nicely as a reader, though it had a few of the same problems as the Nook. They both insist on justifying both left and right margins (Kobo has a preference for that, but it doesn't work in any book I tried). More important is the lack of subject tags. The Kobo has a "Shelves" option, called "Collections" in some versions, but adding books to shelves manually is tedious if you have a lot of books. (But see below.)

It also shared another Nook problem: it shows overall progress in the book, but not how far you are from the next chapter break. There's a choice to show either book progress or chapter progress, but not both; and chapter progress only works for books in Kobo's special "kepub" format (I'll write separately about that). I miss FBReader's progress bar that shows both book and chapter progress, and I can't fathom why that's not considered a necessary feature for any e-reader.

But mostly, Kobo's reader was better than the Nook's. Bookmarks weren't perfect, but they basically worked, and I didn't even have to spend half an hour reading the manual to use them (like I did with the Nook). The font selection was great, and the library navigation had one great advantage over the Nook: a slider so you could go from A to T quickly.

I liked the Kobo a lot, and promptly ordered one of my own.

It's not all perfect

There were a few disadvantages. Although the Kobo had a lot more granularity in its line spacing and margin settings, the smallest settings were still a lot less tight than I wanted. The Nook only offered a few settings but the smallest setting was pretty good.

Also, the Kobo can only see books at the top level of its microSD card. No subdirectories, which means that I can't use a program like rsync to keep the Kobo in sync with my ebooks directory on my computer. Not that big a deal, just a minor annoyance.

More important was the subject tagging, which is really needed in a big library. It was pretty clear Shelves/Collections were what I needed; but how could I get all my books into shelves without laboriously adding them all one by one on a slow e-ink screen?

It turns out Kobo's architecture makes it pretty easy to fix these problems.

Customizing Kobo

While the rooted Nook community has been stagnant for years -- it was a cute proof of concept that, in the end, no one cared about enough to maintain -- Kobo readers are a lot easier to hack, and there's a thriving Kobo community on MobileReads which has been trading tips and patches over the years -- apparently with Kobo's blessing.

The biggest key to Kobo's customizability is that you can mount it as a USB storage device, and one of the files it exposes is the device's database (an sqlite file). That means that well supported programs like Calibre can update shelves/collections on a Kobo, access its book list, and do other nifty tricks; and if you want more, you can write your own scripts, or even access the database by hand.
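To give a flavor of what "accessing the database by hand" can look like, here's a minimal Python sketch that lists the books on a mounted Kobo. Fair warning: the mount point, database path, and the "content" table and column names below are assumptions drawn from Kobo community documentation, not from anything official, and they can vary between firmware versions, so treat it as a starting point for poking around rather than a guaranteed recipe.

import sqlite3

# Assumed locations/names (verify on your own device and firmware):
# the Kobo mounts as USB storage, its database lives at
# .kobo/KoboReader.sqlite, and the "content" table carries
# Title and Attribution (author) columns.
DB = "/media/KOBOeReader/.kobo/KoboReader.sqlite"

conn = sqlite3.connect(DB)
try:
    query = ("SELECT Title, Attribution FROM content "
             "WHERE Title IS NOT NULL ORDER BY Title")
    for title, author in conn.execute(query):
        print("%s (%s)" % (title, author))
finally:
    conn.close()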

I'll write separately about some Python scripts I've written to display the database and add books to shelves, and I'll just say here that the process was remarkably straightforward and much easier than I usually expect when learning to access a new device.

There's lots of other customizing you can do. There are ways of installing alternative readers on the Kobo, or installing Python so you can write your own reader. I expected to want that, but so far the built-in reader seems good enough.

You can also patch the OS. Kobo updates are distributed as tarballs of binaries, and there's a very well designed, documented and supported (by users, not by Kobo) patching script distributed on MobileReads for each new Kobo release. I applied a few patches and was impressed by how easy it was. And now I have tight line spacing and margins, a slightly changed page number display at the bottom of the screen (still only chapter or book, not both), and a search that defaults to my local book collection rather than the Kobo store.

Stores and DRM

Oh, about the Kobo store. I haven't tried it yet, so I can't report on that. From what I read, it's pretty good as e-bookstores go, and a lot of Nook and Sony users apparently prefer to buy from Kobo. But like most e-bookstores, the Kobo store uses DRM, which makes it a pain (and is why I probably won't be using it much).

They use Adobe's DRM, and at least Adobe's Digital Editions app works in Wine under Linux. Amazon's app no longer does, and in case you're wondering why I didn't consider a Kindle, that's part of it. Amazon has a bad reputation for removing rights to previously purchased ebooks (as well as for spying on their customers' reading habits), and I've experienced it personally more than once.

Not only can I no longer use the Kindle app under Wine, but Amazon no longer lets me re-download the few Kindle books I've purchased in the past. I remember when my mother used to use the Kindle app on Android regularly; every few weeks all her books would disappear and she'd have to get on the phone again to Amazon to beg to have them back. It just isn't worth the hassle. Besides, Kindles can't read public library books (those are mostly EPUBs with Adobe DRM); and a Kindle would require converting my whole EPUB library to MOBI. I don't see any up side, and a lot of down side.

The Adobe scheme used by Kobo and Nook is better, but I still plan to avoid books with DRM as much as possible. It's not the stores' fault, and I hope Kobo does well, because they look like a good company. It's the publishers who insist on DRM. We can only hope that some day they come to their senses, like music publishers finally did with MP3 versus DRMed music. A few publishers have dropped DRM already, and if we readers avoid buying DRMed ebooks, maybe the message will eventually get through.

August 26, 2015 11:04 PM

Elizabeth Krumbach

Travels in Peru: Cusco

We started our Peruvian adventures in Lima. On Wednesday morning we took a very early flight to Cusco. The tour company had recommended an early flight so we could take a nap upon arrival to help adjust to the altitude; indeed, with Cusco over 2 miles high in elevation, I did find myself with a slight headache during our visit there. After our nap we met up with our fellow travelers for our city tour of Cusco.

The tour began by going up for a view of all of Cusco from the hillside, where I got my first selfie with an alpaca. We also visited San Pedro’s Market, a large market complex that had everything from tourist goodies to everyday produce, meats, cheeses and breads.

From there we made our way to Qurikancha, said to be the most important temple in the Inca Empire. When the Spanish arrived they built their Church of Santo Domingo on top of it, so only the foundation and some of the rooms remain. I was happy that the tour focused on the Inca aspects and largely ignored the Church, aside from some of the famous religious paintings contained within.

More photos from Qurikancha here: https://www.flickr.com/photos/pleia2/sets/72157657421208352

We then went to the Plaza de Armas, where the Cusco Cathedral lords over the square. No photos were allowed inside, but the Cathedral is notable for the Señor de los Temblores, a Jesus statue believed to have halted an earthquake in 1650, and for a huge, captivating painting by Marcos Zapata of a localized Last Supper where the participants are dining on guinea pig and chicha morada!

That evening we had the most exceptional dinner in Cusco, at MAP Café. It’s located inside Museo Arqueologico Peruano (MAP) which is run in association with the fantastic Museo Larco that we visited in Lima. Since this museum also had late hours, we had a wonderful time browsing their collection before dinner. Dinner itself was concluded with some amazing desserts, including a deconstructed lemon meringue pie accompanied by caramel ice cream.

More photos from the museum and dinner here: https://www.flickr.com/photos/pleia2/sets/72157655109721514

Thursday started off bright and early with a tour of a series of ruins outside of Cusco, in Saksaywaman. This was the first collection of ruins in Cusco we really got to properly climb, so with our tiny group of just four we were able to explore the citadel of Saksaywaman with a guide and then for a half hour on our own. In addition to the easy incline we took with the tour guide to walk on the main part of the ruins, which afforded our best view of Cusco, we walked up a multi-story staircase on the other side to get great panoramic views of the ruins. Plus, there were alpacas.

Beyond the main Saksaywaman sites, we visited other sites inside the park, seeing the fountains featured at Tambomachay, the amazing views from a quick stop at Puka Pukara and a near natural formation that had been carved for sacrifices at Q’enqo. The tour concluded by stopping at a local factory shop specializing in alpaca clothing.

More photos from throughout the morning here: https://www.flickr.com/photos/pleia2/albums/72157657034040428

We were on our own for the afternoon, so we began by finally visiting a Chifa (Peruvian-inspired Chinese) restaurant. I enjoyed their take on Sweet and Sour Chicken. We then did some browsing at local shops before finally ending up at the Center for Traditional Textiles. They featured a small museum sharing details about the types of traditional Peruvian textiles and the procedures for creating them, as well as live demonstrations of the techniques involved by master craftswomen and young trainees. While there we fell in love with a pair of pieces that we took home with us: a finely woven tapestry and a small blanket that we’ll need to get framed soon.

Our time in Cusco concluded with a meal at Senzo, which had been really hyped but didn’t quite live up to our expectations, especially after the meal we had the previous night at MAP Café; still, it was an enjoyable evening. We’d have one last night in Cusco following our trip to Machu Picchu, where we dined at Marcelo Batata, but the altitude sickness had hit me upon our return and I could only really enjoy the chicken soup. As a ginger, mint & lemongrass soup, though, it was the perfect match for my queasy stomach (even if it didn’t manage to cure me of it).

More photos from Cusco here: https://www.flickr.com/photos/pleia2/sets/72157657024948969

The next day brought an early morning train to Aguas Calientes and Machu Picchu!

by pleia2 at August 26, 2015 03:01 AM

August 24, 2015

Elizabeth Krumbach

Travels in Peru: Lima

After the UbuCon Latin America conference that I wrote about here, I had a day of work and personal project catch-up, with a dash of relaxation at my hotel, before MJ arrived that night. Monday morning we were picked up by the folks at Viajes Pacifico, with whom I had booked a tour of Lima and Cusco.

It was the first time I had used a group tour company; the price of the tour included all the hotels (selected by them) as well as transportation and entrance fees for the sites our tour went to. I definitely prefer the private driver we had in Mexico for our honeymoon, and we’re putting together our own itinerary for our trip to Japan in October, but given my schedule this year I simply didn’t have the time or energy to put together a schedule for Peru. The selected hotels were fine, but we likely would have gone to nicer ones if we had booked ourselves. The tours were kept small, with the largest group, one in Cusco, being maybe 14 of us and the smallest being only 4. I wasn’t a fan of the schedule execution: we had a loose schedule each day, but they wouldn’t contact us until the evening before with exact pickup times, and it was unclear how long the tours would last or which trains we’d be taking, which made making dinner reservations and the like a bit dicey. Still, it all worked out, and it was great to have someone else worry about the logistical details.

On Monday we were picked up from our hotel in the afternoon for the scheduled Lima city tour, which began at El Parque del Amor (Love Park), a beautiful seaside park in Miraflores with lots of flowers, a giant sculpture of a couple and a lovely view of the Pacific Ocean. From there the tour bus did a quick drive around the ruins of Huaca Pucllana, which I had really hoped to see beyond just the windows of a bus – alas! And then it was on to the rest of our tour, which took us to the main square in Lima, where we got a tour of the Basilica Cathedral of Lima, notable not only as the main cathedral but also as the tomb of the famous Spanish conqueror Francisco Pizarro. I learned that during excavations they discovered that his head was buried in a box separate from his body. The cathedral itself is beautiful.

More photos from the cathedral here: https://www.flickr.com/photos/pleia2/sets/72157657445084381

Our next stop was the Convent of Santo Domingo. Its claim to fame is the tombs and related accoutrements of both Saint Rose of Lima and Saint Martin de Porres. They had an impressive library that spanned not just religious books, but various topics in Spanish and Latin. The convent also had some nice gardens, and the history of these places is always interesting to learn about. I think we may have gotten more out of them if we were Catholic (or even Christian).

More photos from the convent here: https://www.flickr.com/photos/pleia2/sets/72157657445106491

That evening we met up with a friend of mine from high school who has lived in Lima for several years. It was fun to catch up over a nice Peruvian meal that included more ceviche and my first round of Pisco Sours.

Tuesday was our non-tour day in Lima, so I got up early for a walk down by the ocean and then up to the Artisan Markets of Miraflores (the “Inka Market”). I was able to pick up some tourist goodies, and on my way to the market I walked through Kennedy Park. We were told about this park on the tour the previous day: it’s full of cats! Cats in the flowers, cats on the lawn, cats on the benches. Given my love for cats, it was quite the enjoyable experience. I took a bunch of pictures: https://www.flickr.com/photos/pleia2/sets/72157657420782972

I made it back to our hotel shortly after noon, in time to meet up with MJ to go to our lunch reservation at the famous Astrid y Gaston. This was definitely one of the highlights of our trip to Lima. We partook of their tasting menu, which was made up of over a dozen small plates, each its own little work of art. It was easily one of the best meals I’ve ever had.

After lunch, which was a multiple hour affair, we made it to the ruins of Huaca Huallamarca just before closing. They have a small, single room museum that contains a mummy that was found on the site and some artifacts. They let you climb the mud brick “pyramid” that seems to have active archaeological digs going on (though no one was there when we visited). Definitely worth the stop as we rounded out our afternoon.

More photos of the site here: https://www.flickr.com/photos/pleia2/sets/72157657376822346

Our early evening plans were made partially by what was still open after 5PM, which is how we found ourselves at the gem that is Museo Larco. Beautifully manicured grounds with lots of flowering plants, a stunning array of Peruvian artifacts dating back several thousand years with descriptions in multiple languages, and a generally pleasant place to be. I particularly liked the exhibits with the cat themes, as the cat was an ancient symbol of the earth (with the bird for the heavens and the snake for the world below). Highly recommended, and they’re open until 10PM! We didn’t stay that late though; we had dinner reservations at Brujas de Cachiche back down in Miraflores. With a focus on seafood, the menu was massive and the food was good.

More photos from Museo Larco here: https://www.flickr.com/photos/pleia2/sets/72157657420797692

That meal wrapped up the Lima portion of our trip; we were up before the sun the next day for our flight to Cusco!

And more photos more generally around Lima are here: https://www.flickr.com/photos/pleia2/albums/72157657029669660

by pleia2 at August 24, 2015 06:00 AM

Nathan Haines

Ubuntu Free Culture Showcase submissions are now open!

Ubuntu 15.10 is coming up soon, and what better way to celebrate a new release than with beautiful new content to go with it? The Ubuntu Free Culture Showcase is a way to celebrate the Free Culture movement, where talented artists across the globe create media and release it under licenses that encourage sharing and adaptation. We’re looking for content which shows off the skill and talent of these amazing artists and will greet Ubuntu 15.10 users. We announced the showcase last week, and we are now accepting submissions in the Flickr, Vimeo, and SoundCloud groups. For more information, please visit the Ubuntu Free Culture Showcase page on the Ubuntu wiki.

August 24, 2015 03:40 AM

Making Hulu videos play in Ubuntu

A couple of weeks ago, Hulu made some changes to their video playback system to incorporate Adobe Flash DRM technology. Unfortunately, this meant that Hulu no longer functioned on Ubuntu: Adobe stopped supporting Flash on Linux several years ago, and Adobe’s DRM requires HAL, which was likewise obsoleted about 4 years ago and was dropped from Ubuntu in 13.10.

While Hulu began detecting Linux systems and displaying a link to Adobe’s support page when playback failed, and the Adobe site correctly identifies the lack of HAL support as the problem, the instructions given no longer function because HAL is no longer provided by Ubuntu.

Fortunately, Michael Blennerhassett has maintained a Personal Package Archive which rebuilds HAL so that it can be installed on Ubuntu. Adding this PPA and then installing the “hal” package will allow you to play Hulu content once again.

To do this, first open a Terminal window by searching for it in the Dash or by pressing Ctrl+Alt+T.

Next, type the following command at the command line and press Enter:

sudo add-apt-repository ppa:mjblenner/ppa-hal

You will be prompted for your password and then you will see a message from the PPA maintainer. Press Enter, and the PPA will be added to Ubuntu’s list of software sources. Next, have Ubuntu refresh its list of available software, which will now include this PPA, by typing the following and pressing Enter:

sudo apt update

Once this update finishes, you can then install HAL support on your computer by searching for “hal” in the Ubuntu Software Center and installing the “Hardware Abstraction Layer” software, or by typing the following command and pressing Enter:

sudo apt install hal

and confirming the installation when prompted by pressing Enter.

[Book cover: Beginning Ubuntu for Windows and Mac Users]

I explain more about how to install software on the command line in Chapter 5 and how to use PPAs in Chapter 6 of my upcoming book, Beginning Ubuntu for Windows and Mac Users, coming this October from Apress. This book was designed to help Windows and Mac users quickly and easily become productive on Ubuntu so they can get work done immediately, while providing a foundation for further learning and exploring once they are comfortable.

August 24, 2015 02:13 AM

August 21, 2015

Akkana Peck

Python module for reading EPUB e-book metadata

Three years ago I wanted a way to manage tags on e-books in a lightweight way, without having to maintain a Calibre database and fire up the Calibre GUI app every time I wanted to check a book's tags. I couldn't find anything, nor did I find any relevant Python libraries, so I reverse engineered the (simple, XML-based) EPUB format and wrote a Python script to show or modify epub tags.
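To give a sense of just how simple the format is, here's a small sketch of the same idea (this is not the script's actual API, just an illustration of what it does under the hood): an EPUB is a zip archive in which META-INF/container.xml points at an OPF file whose metadata block holds Dublin Core dc:subject tags.

import zipfile
import xml.etree.ElementTree as ET

def epub_subjects(path):
    """Return the dc:subject tags of an EPUB file."""
    ns = {'c': 'urn:oasis:names:tc:opendocument:xmlns:container'}
    dc_subject = '{http://purl.org/dc/elements/1.1/}subject'
    with zipfile.ZipFile(path) as z:
        # container.xml names the OPF ("rootfile") inside the archive
        container = ET.fromstring(z.read('META-INF/container.xml'))
        opf_path = container.find('.//c:rootfile', ns).get('full-path')
        # the OPF's <metadata> section holds the Dublin Core subject tags
        opf = ET.fromstring(z.read(opf_path))
        return [s.text for s in opf.iter(dc_subject)]

print(epub_subjects('book.epub'))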

I've been using that script ever since. It's great for Project Gutenberg books, which tend to be overloaded with tags that I don't find very useful for categorizing books ("United States -- Social life and customs -- 20th century -- Fiction") but lacking in tags that I would find useful ("History", "Science Fiction", "Mystery").

But it wasn't easy to include it in other programs. For the last week or so I've been fiddling with a Kobo ebook reader, and I wanted to write programs that could read epub and also speak Kobo-ese. (I'll write separately about the joys of Kobo hacking. It's really a neat little e-reader.)

So I've factored my epubtag script into a usable Python module, so as well as being a standalone program for viewing epub book data, it's easy to use from other programs. It's available on GitHub: epubtag.py: parse EPUB metadata and view or change subject tags.

August 21, 2015 02:27 AM

August 20, 2015

Jono Bacon

Talking with a Mythbuster and a Maker

Recently I started writing a column for Forbes.

My latest column covers the rise of the maker movement and in it I interviewed Jamie Hyneman from Mythbusters and Dale Dougherty from Make Magazine.

Go and give it a read right here.

by jono at August 20, 2015 04:48 PM

August 15, 2015

Nathan Haines

Reintroducing the Ubuntu Free Culture Showcase

In the past, we’ve had the opportunity to showcase some really fun, really incredible media in Ubuntu. Content creators who value free culture have offered beautiful photography for the desktop and entertaining videos and music.

Not only does this highlight the fantastic work that comes out of free culture on Ubuntu desktops worldwide, but the music and video selections also help show off Ubuntu’s fantastic multimedia support by providing content for the Ubuntu live images.

The wallpaper contest has continued from cycle to cycle, but the audio and video contests have fallen by the wayside. But Ubuntu is more popular than ever, and can now power phones and tablets as well as desktops and laptops. So as we move closer towards a goal of convergence, we’d like to bring back this opportunity for artists to share their work with millions of Ubuntu users around the world.

All content must be released under a Creative Commons Attribution-Sharealike or Creative Commons Attribution license. (The Creative Commons Zero waiver is okay, too!) Each entrant may only submit content they have created themselves, and all submissions must adhere to the Ubuntu Code of Conduct.

The winners will be featured in the Ubuntu 15.10 release in October!

We’re looking for work in one of three categories:

  • Wallpapers – we’ll choose 11 stunning photographs, but also one illustrative wallpaper that focuses on Ubuntu 15.10’s codename: wily werewolf.
  • Videos – we need to keep it small, so we’re looking for a 1280x720 video of about 30 seconds.
  • Audio – everybody loves music, so a song between 2-5 minutes will rock speakers around the world!

You’re limited to a total of two submissions in any single category.

We’re still cleaning out the cobwebs, and selecting a panel of judges from the Canonical and Ubuntu design teams and the Ubuntu community. So in the next week or two we’ll announce Flickr, Vimeo, and SoundCloud groups where you can submit your entries. But you’ll have plenty of time to prepare before the final submission deadline: September 15th, 2015 at 23:59 UTC.

There are more technical details for each category, so please see the Ubuntu Free Culture Showcase wiki page for the latest information and updates!

August 15, 2015 10:19 AM

August 10, 2015

kdub

Bad Metaphysics Costs Maintainability


I find myself doing a lot of metaphysical thinking in my day to day work as a coder. Objects that are cohesive and are valid metaphysical analogues to common experiences make it much easier to read, understand, and fix existing code.

Taking an example:

struct ServiceQueue
{
    void place_customer_at_back();
    void service_front_customer();
};

This class maps well to a physical problem we encounter frequently in our day to day lives: customer service at a bank teller window, perhaps, or ordering a hamburger at a fast food chain. Taking things to the realm of computers, ServiceQueue maps to many computer problems pretty well too. A packet arrives over the network, and we want to service the packet in a FIFO manner.

This class is cohesive and easily understood by someone new to the code because it maps well to a well understood concept: that of a queue. By making use of the shared metaphysical concept of physical queueing, we introduce the new coder to the abstract queue that’s implemented in software. The new coder can understand and verify the interface and implementation quickly.

Now let’s spice the interface up with a metaphysical error. Say that we have the above class, and we encounter an error condition that happens when the network becomes disconnected. One might be tempted to add a function like:

struct ServiceQueue
{
    void place_customer_at_back();
    void service_front_customer();
    void handle_network_disconnected();
};

The problem in adding this function is twofold; it makes the code harder to understand, and it makes writing a new object more difficult.

Rapid Understandability

The first issue is that the analogue to a real life concept is diluted, and with enough dilution, will eventually be lost. This makes it more difficult to rapidly understand the role an object implementing the interface has.

I bet the coder that added “handle_network_disconnected” saw that a lot of the implementation where the error could be handled from was “conveniently” in the class implementing the interface, and punched “handle_network_disconnected” in. But did you catch the metaphysical error? ServiceQueue is no longer named properly; it’s become a different object. It’s a ServiceQueueThatCanBeDisconnected, and the analogy to the physical queue is weakened. It takes a bit more explaining to a new coder to convey what sort of interface ServiceQueue is. This additional explanation makes it much more difficult to understand the object and the problem that is being solved. Consequently, it’s harder to maintain and takes longer to debug, because of the added cost of understanding.

Alternative Implementations

With each error like this, it becomes a bit harder to write an implementation for the interface that solves a similar problem. ServiceQueue with “handle_network_disconnected” fits the network-packet problem, but it’s been made more difficult to use this interface with the myriad other problems (like the bank teller problem).

Now, in the practical world of software, we’re used to seeing this all the time. We can mentally handle one metaphysical error per interface quite easily. The actual problem comes in much worse scenarios, where there are multiple holes punched through the interface. Eventually, it can get to the point where the object really has no physical manifestation and the interface gets renamed to something ambiguous, like “ServiceManager”. At this point, the object has sluggish understandability and is irreplaceable. We’ve found ourselves with some difficult to maintain software!

It might take a bit of refactoring to get things right, but in the end, it’s worth it; both practically and metaphysically.

This post originally appeared on kdubois.net, and is (c) Kevin DuBois 2015

by Kevin at August 10, 2015 10:27 AM

Akkana Peck

Bat Ballet above the Amaranths

This evening Dave and I spent quite a while clearing out amaranth (pigweed) that's been growing up near the house.

[Palmer's amaranth, pigweed] We'd been wondering about it for quite some time. It's quite an attractive plant when small, with pretty patterns on its leaves that remind me of some of the decorative houseplants we used to try to grow when I was a kid.

I've been working on an Invasive Plants page for the nature center, partly as a way to figure out myself which plants we need to pull and which are okay. For instance, Russian thistle (tumbleweed) -- everybody knows what it looks like when it's a dried-up tumbleweed, but by then it's too late, scattering its seeds all over. Besides, it's covered with spikes by then. The trick is to recognize and pull it when it's young, and the same is true of a lot of invasives, especially the ones with spiky seeds that stick to you, like stickseed and caltrops (goatheads).

A couple of the nature center experts have been sending me lists of invasive plants I should be sure to include, and one of them was a plant called redroot pigweed. I'd never heard of it, so I looked it up -- and it looked an awful lot like our mystery plant. A little more web searching on Amaranthus images eventually led me to Palmer's amaranth, which turns out to be aggressive and highly competitive, with sticky seeds.

Unfortunately the pretty little plants had had a month to grow by the time we realized the problem, and some of them had trunks an inch and a half across, so we had to go after them with a machete and a hand axe. But we got most of them cleared.

As we returned from dumping the last load of pigweed, a little after 8 pm, the light was fading, and we were greeted by a bat making rounds between our patio and the area outside the den. I stopped what I was doing and watched, entranced, as the bat darted into the dark den area then back out, followed a slalom course through the junipers, buzzed past my head and then out to make a sweep across the patio ... then back, around the tight corner and back to the den, over and over.

I stood watching for twenty minutes, with the bat sometimes passing within a foot of my head. (yay, bat -- eat some of these little gnats that keep whining by my ears and eyes!) It flew with spectacular maneuverability and grace, unsurpassed by anything save perhaps a hummingbird, changing direction constantly but always smoothly. I was reminded of the way a sea lion darts around underwater while it's hunting, except the bat is so much smaller, able to turn in so little space ... and of course maneuvering in the air, and in the dark, makes it all the more impressive.

I couldn't hear the bat's calls at all. Years ago, waiting for dusk at star parties on Fremont Peak, I used to hear the bats clearly. Are the bats here higher pitched than those California bats? Or am I just losing high frequencies as I get older? Maybe a combination of both.

Finally, a second bat, a little smaller than the first, appeared over the patio and both bats disappeared into the junipers. Of course I couldn't see either one well enough to tell whether the second bat was smaller because it was a different species, or a different gender of the same species. In Myotis bats, apparently the females are significantly larger than the males, so perhaps my first bat was a female Myotis and the male came to join her.

The two bats didn't reappear, and I reluctantly came inside.

Where are they roosting? In the trees? Or is it possible that one of them is using my bluebird house? I'm not going to check and risk disturbing anyone who might be roosting there.

I don't know if it's the same little brown bat I saw last week on the front porch, but it seems like a reasonable guess.

I've wondered how many bats there are flying around here, and how late they fly. I see them at dusk, but of course there's no reason to think they stop at dusk just because we're no longer able to see them. Perhaps I'll find out: I ordered parts for an Arduino-driven bat detector a few weeks ago, and they've been sitting on my desk waiting for me to find time to solder them together. I hope I find the time before summer ends and the bats fly off wherever they go in winter.

August 10, 2015 03:47 AM

August 09, 2015

Elizabeth Krumbach

UbuConLA 2015 in Lima

This week I had the honor of joining a couple hundred free software enthusiasts at UbuCon Latin America. I’d really been looking forward to it, even if I was a bit apprehensive about the language barrier, and the fact that mine was the only English talk on the schedule. But those fears melted away as the day began on Friday morning and I found myself loosely able to follow along with sessions with the help of slides, context and my weak understanding of Spanish (listening is much easier than speaking!).

The morning began by meeting a couple folks from Canonical and a fellow community member in the hotel lobby and getting a cab over to the venue. Upon arrival, we were brought into the conference speaker lounge to settle in before the event. Our badges had already been printed and were right there for us, along with bottles of water; it was quite the pleasant welcome.

José Antonio Rey kicked off the event at 10AM with a welcome, basic administrative notes about the venue, a series of thanks and schedule overview. Video (the audio in the beginning sounds like aliens descending, but it gets better by the end).

Immediately following him was a keynote by Pablo Rubianes, a contributor from Uruguay who I’ve known and worked with in the Ubuntu community for several years. As a member of the LoCo Council, he had a unique view into development and construction of LoCo (Local/Community) teams, which he shared in this talk. He talked some about how LoCos are organized, gave an overview of the types of events many of them do, like Ubuntu Hours, Global Jams and events in collaboration with other communities. I particularly enjoyed the photos he shared in his presentation. He left a lot of time for questions, which was needed as many people in the audience had questions about various aspects of LoCo teams. Also, I enjoyed the playful and good humored relationship they have with the title “LoCo” given the translation of the word into Spanish. Video.

My keynote was next, Building a Career in Free and Open Source Software (slides, English and Spanish). Based on audience reaction, I’m hopeful that a majority of the audience understood English well enough to follow along. For anyone who couldn’t, I hope there was value found in my bi-lingual slides. I had some great feedback following my talk both in person and on Twitter. Video (in English!).


Thanks to Pablo Rubianes for the photo (source)

For all the pre-conference jokes about a “cafeteria lunch” I was super impressed with my lunch yesterday. Chicken and spiced rice, some kind of potato-based side and a dessert of Chicha Morada pudding… which is what I called it until I learned the real name, Mazamorra Morada, a purple corn pudding that tastes like the drink I named it after. Yum!

After lunch we heard from Naudy Villaroel, who spoke about the value of making sure people of all kinds are included in technology, regardless of disability. He gave an overview of several accessibility applications available in Ubuntu and beyond, including the Orca screen reader, the Enable Viacam (eViacam) tool for controlling the mouse through movements on camera, and Dasher, which allows small movements to select words and letters displayed by algorithms that anticipate what the operator will want to write next, making them easy to form. He then went on to talk about other sites and tools that could be used. Video.

Following Naudy’s talk was one by Yannick Warnier, president of Chamilo, which produces open source educational software. His talk was a tour of how online platforms, both open source and hosted (MOOC-style), have evolved over the past couple decades. He concluded by speculating far into the future as to how online learning platforms will continue to evolve and how important education will continue to be. Video. The first day concluded with a duo of talks from JuanJo Ciarlante: the first about free software on clouds (video; the talk ran over, so it continues in the next link) and a second that covered some basics around using Python to do data crunching, including some of the concepts around MapReduce-type jobs and Python-based libraries to accomplish them (video, which includes the conclusion of the cloud talk; the last half is about Python).

The evening was spent with several of my fellow speakers at La Bistecca. I certainly can’t say I haven’t been eating well while I’ve been here!

I also recommend reading Jose’s post about the first day, giving you a glimpse into the work he’s done to organize the conference here: UbuConLA 2015: The other side of things. Day 1.

And with that, we were on to day 2!

The day began at 10AM with a talk about Snappy by Sergio Schvezov. I was happy to have read a blog post by Ollie Ries earlier in the week that walked through all the Snappy/core/phone related names that have been floating around, but this talk went over several of the definitions again so I’m sure the audience was appreciative to get them straightened out. He brought along a BeagleBone and Ubuntu tablet that he did some demos on as he deployed Ubuntu Core and introduced Snapcraft for making Snappy packages. Video.

Following his talk was one by Luis Michael Ibarra about the Linux container hypervisor, LXD. I learned that LXD was an evolution of lxc-tools, and in his talk he dug through the filesystem and system processes themselves to show how the containers he was launching worked. Unfortunately his talk was longer than his slot, so he didn’t get through all his carefully prepared slides; hopefully they’ll be published soon. Video.

Just prior to lunch, we enjoyed a talk by Sebastián Ferrari about Juju, where he went through the background of Juju, what it’s for and where it fits into the deployment and orchestration world. He gave demos of usage and the web interface for it on both Amazon and Google Compute Engine. He also provided an introduction to the Juju Charm Store, where charms for various applications are shared, and shared the Juju documentation for folks looking to get started. Video.

After lunch the first talk was by Neyder Achahuanco, who talked about building Computer Science curriculum for students using tools available in Ubuntu. He demonstrated Scratch, Juegos de Blockly (the Spanish version of Blockly Games), code.org (which is in many languages, see the bottom right of the site) and MIT App Inventor. Video.


Break, with Ubuntu and Kubuntu stickers!

As the afternoon continued, Pedro Muñoz del Río spoke on using Ubuntu as a platform for data analysis. Video. The talks concluded with Alex Aragon, who gave an introduction to 3D animation with Blender, where he played the delightful Monkaa film. He then talked about features and went through various settings. Video.

Gracias to all the organizers, attendees and folks who made me feel welcome. I had a wonderful time! And as we left, I snagged a selfie with the flags flying outside the University. Why those flags? Jose picked them out upon learning which countries people would be flying in from; the stars and stripes were flying for me!

More photos from UbuConLA here: https://www.flickr.com/photos/pleia2/sets/72157656475304230

by pleia2 at August 09, 2015 03:13 AM

August 07, 2015

Elizabeth Krumbach

Lima, dia uno y UbuConLA prep

Saying my Spanish is “weak” is being generous. I know lots of nouns and a smattering of verbs from “learning Spanish” in school, but it never quite stuck, and I lacked the immersive experience that leads to actually learning a language. So I was very thankful to spend yesterday with my friend and Ubuntu colleague José Antonio Rey as we navigated the city and picked up a SIM for my phone.

I’m staying at Hotel & Spa Golf Los Incas in Lima. Jose and his father were kind enough to meet me at the airport late on Wednesday night when my flight came in. The hotel itself is a bit of a drive from the airport, but it’s not far from the university where the conference is being held today, an 8 minute Uber ride yesterday evening in brisk traffic. They offer a free shuttle to a nearby mall, where I met up with Jose come morning. The day kicked off with the discovery that Lima has Dunkin’ Donuts, which I don’t have at home in San Francisco. Having already finished breakfast, I didn’t avail myself of the opportunity for a doughnut. We then searched the mall, waited in some lines, waited for processing and finally got a SIM for my phone! With the data plan along with it, I plan on taking lots of pictures of llamas when I reach Cusco and sharing them with everyone.

From the mall we took a bus down the main east-west avenue in Lima, Avenida Javier Prado, and then the Línea 1 del Metro de Lima, a train! The Metro goes north to south and was very speedy and new, if packed. We took it just a couple stops from La Cultura to Gamarra.

Gamarra is home to a shopping district with various open air markets and a lot of clothing and street food along the way. Our journey took us here to pick up the custom t-shirts that were printed for the staff and crew working the UbuConLA conference. The shirts look great.

It was then on to the train and bus again, which took us to Señor Limón for some amazing ceviche!

After lunch we went over to Universidad de Lima to get a tour of the campus and see how things were coming together. Jose met up with several of his fellow conference planners as they tested audio, video and streaming, and sorted out all kinds of other logistical things. We also picked up boxes of Ubuntu goodies from across campus and brought them over so setup of the tables could begin.

It was pretty fun to get a “behind the scenes” view of the pieces of the conference coming together. Huge thanks to everyone putting it together, it’s a real pleasure to be here.

My evening wound down at my hotel with a nice meal. At noon today I’ll be giving my keynote!

by pleia2 at August 07, 2015 02:40 PM

Meetup, baseball and kitties

I had fully intended on writing this before sitting in a hotel in Peru, but pre-trip tasks crept up, I had last minute things to finish with work (oh, leaving on a Wednesday!) and sitting on a plane all day is always much more exhausting than I expect it to be. So here we are!

Since returning from OSCON a couple weeks ago I’ve kept busy with other things, in addition to the continued work on my book. On the Thursday while OSCON was still going on, I attended my first Write/Speak/Code SF & Bay Area event. It was a work evening where several women met up at a little eatery in SOMA, chatted about their work, and each brought a project to work on. I had my keynote slides to perfect, and managed to do that and send them off to the friend who was translating them into Spanish. I also managed to talk about the work I’d been doing on my book and found a couple people who may be interested in doing some review. It was also great to learn that some of them were interested in supporting Grace Hopper Conference speakers, and there may be an event in September to gather some of us who live and work in the area to support each other and practice and fine-tune our talks.

The following Monday MJ and I met at AT&T Park downtown to attend a Giants baseball game on Jewish Heritage Night. It had been a couple years since I’d been to a Giants game (the season goes by so quickly!), so it was great to see a game again. Plus, the Kiddush cup they gave away as the special event gift now has a treasured spot in my home.

As the game began, I found myself sitting in front of the Rabbi for our congregation, who is a big baseball fan and is always fun to talk to about it. Since we bought tickets with other members we also found ourselves in the bleachers, which I’d never sat in before. It was a whole different angle and seating arrangement than I’m used to, but still lots of fun.

As an added bonus, it was a solid game that the Giants won. More photos from the game here: https://www.flickr.com/photos/pleia2/sets/72157654129506103

In other “while I’m home” life news, I also started the sessions with a trainer at the gym. I have five sessions paid for, and the first was a tour of pain, as advertised. I go running and have always been quite skilled at lifting things, but this trainer found muscles I’m not sure I’ve ever used. He also managed to put me in a state where it took me about 3 days to feel normal again, the first day of which I really struggled to walk down stairs! I’m sticking to it though, and while I may ask him to tone it down slightly for my next session, I already have it on my schedule upon my return from Peru and the OpenStack Operators Meetup in Palo Alto.

I then spent a lot of time on work and getting some loose ends tied off for my book as I prepared for this trip to Peru. We’ve also had some vet visits interspersed as poor Simcoe has battled a bacterial problem that caused some eye trouble. Thankfully she was almost all healed up by the time I flew out on Wednesday and you can hardly tell there was an issue. Fortunately none of these troubles impacted her bouncy nature.

Or whatever is in their nature that makes them want to sleep on our suitcases.

Sitting on my suitcases aside, I already miss my fluffy critters, and am thankful that my husband is joining me on Sunday. Still, I’m super excited for UbuCon Latin America tomorrow!

by pleia2 at August 07, 2015 03:15 AM

July 31, 2015

Eric Hammond

AWS SNS Outage: Effects On The Unreliable Town Clock

It took a while, but the Unreliable Town Clock finally lived up to its name. Surprisingly, the fault was not mine, but Amazon’s.

For several hours tonight, a number of AWS services in us-east-1, including SNS, experienced elevated error rates according to the AWS status page.

Successful, timely chimes were broadcast through the Unreliable Town Clock public SNS topic up to and including:

2015-07-31 05:00 UTC

and successful chimes resumed again at:

2015-07-31 08:00 UTC

Chimes in between were mostly unpublished, though SNS appears to have delivered a few chimes during that period up to several hours late and out of order.

I had set up Unreliable Town Clock monitoring and alerting through Cronitor.io. This worked perfectly and I was notified within 1 minute of the first missed chime, though it turned out there was nothing I could do but wait for AWS to correct the underlying issue with SNS.

Since we now know SNS has the potential to fail in a region, I have launched an Unreliable Town Clock public SNS Topic in a second region: us-west-2. The infrastructure in each region is entirely independent.

The public SNS topic ARNs for both regions are listed at the top of this page:

https://alestic.com/2015/05/aws-lambda-recurring-schedule/

You are welcome to subscribe to the public SNS topics in both regions to improve the reliability of invoking your scheduled functionality.

The SNS message content will indicate which region is generating the chime.
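
For example, subscribing an email address to the new topic with the AWS CLI might look like the sketch below; the topic ARN shown is a placeholder, so substitute the real ARN listed on the page linked above:

# Placeholder ARN -- replace it with the real topic ARN from the page linked above
aws sns subscribe \
  --region us-west-2 \
  --topic-arn arn:aws:sns:us-west-2:123456789012:unreliable-town-clock-topic \
  --protocol email \
  --notification-endpoint you@example.com

Repeat with the us-east-1 region and its topic ARN to be covered by both.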

Original article and comments: https://alestic.com/2015/07/aws-sns-outage/

July 31, 2015 09:55 AM

July 30, 2015

Akkana Peck

A good week for critters

It's been a good week for unusual wildlife.

[Myotis bat hanging just outside the front door] We got a surprise a few nights ago when flipping the porch light on to take the trash out: a bat was clinging to the wall just outside the front door.

It was tiny, and very calm -- so motionless we feared it was dead. (I took advantage of this to run inside and grab the camera.) It didn't move at all while we were there. The trash mission accomplished, we turned out the light and left the bat alone. Happily, it wasn't ill or dead: it was gone a few hours later.

We see bats fairly regularly flying back and forth across the patio early on summer evenings -- insects are apparently attracted to the light visible through the windows from inside, and the bats follow the insects. But this was the first close look I'd had at a stationary bat, and my first chance to photograph one.

I'm not completely sure what sort of bat it is: almost certainly some species of Myotis (mouse-eared bats), and most likely M. yumanensis, the Yuma myotis. It's hard to be sure, though, as there are at least six species of Myotis known in the area.

[Woodrat released from trap] We've had several woodrats recently try to set up house near the house or the engine compartment of our Rav4, so we've been setting traps regularly. Though woodrats are usually nocturnal, we caught one in broad daylight as it explored the area around our garden pond.

But the small patio outside the den seems to be a particular draw for them, maybe because it has a wooden deck with a nice dark space under it for a rat to hide. We have one who's been leaving offerings -- pine cones, twigs, leaves -- just outside the door (and less charming rat droppings nearby), so one night Dave set three traps all on that deck. I heard one trap clank shut in the middle of the night, but when I checked in the morning, two traps were sprung without any occupants and the third was still open.

But later that morning, I heard rattling from outside the door. Sure enough, the third trap was occupied and the occupant was darting between one end and the other, trying to get out. I told Dave we'd caught the rat, and we prepared to drive it out to the parkland where we've been releasing them.

[chipmunk caught in our rat trap] And then I picked up the trap, looked in -- and discovered it was a pretty funny looking woodrat. With a furry tail and stripes. A chipmunk! We've been so envious of the folks who live out on the canyon rim and are overloaded with chipmunks ... this is only the second time we've seen one here, and now it's probably too spooked to stick around.

We released it near the woodpile, but it ran off away from the house. Our only hope for its return is that it remembers the nice peanut butter snack it got here.

[Baby Great Plains skink] Later that day, we were on our way out the door, late for a meeting, when I spotted a small lizard in the den. (How did it get in?) Fast and lithe and purple-tailed, it skittered under the sofa as soon as it saw us heading its way.

But the den is a small room and the lizard had nowhere to go. After upending the sofa and moving a couple of tables, we cornered it by the door, and I was able to trap it in my hands without any damage to its tail.

When I let it go on the rocks outside, it calmed down immediately, giving me time to run for the camera. Its gorgeous purple tail doesn't show very well, but at least the photo was good enough to identify it as a juvenile Great Plains skink. The adults look more like Jabba the Hutt, nothing like the lovely little juvenile we saw. We actually saw an adult this spring (outside), when we were clearing out a thick weed patch and disturbed a skink from its hibernation. And how did this poor lizard get saddled with a scientific name of Eumeces obsoletus?

July 30, 2015 05:07 PM

July 27, 2015

Akkana Peck

Trackpad workarounds: using function keys as mouse buttons

I've had no end of trouble with my Asus 1015E's trackpad. A discussion of laptops on a mailing list -- in particular, someone's concerns that the nifty-looking Dell XPS 13, which is available preloaded with Linux, has had reviewers say that the trackpad doesn't work well -- reminded me that I'd never posted my final solution.

The Asus's trackpad has two problems. First, it's super sensitive to taps, so if any part of my hand gets anywhere near the trackpad while I'm typing, suddenly it sees a mouse click at some random point on the screen, and instead of typing into an emacs window suddenly I find I'm typing into a live IRC client. Or, worse, instead of typing my password into a password field, I'm typing it into IRC. That wouldn't have been so bad on the old style of trackpad, where I could just turn off taps altogether and use the hardware buttons; this is one of those new-style trackpads that doesn't have any actual buttons.

Second, two-finger taps don't work. Three-finger taps work just fine, but two-finger taps: well, I found when I wanted a right-click (which is what two-fingers was set up to do), I had to go TAP, TAP, TAP, TAP maybe ten or fifteen times before one of them would finally take. But by the time the menu came up, of course, I'd done another tap and that canceled the menu and I had to start over. Infuriating!

I struggled for many months with synclient's settings for tap sensitivity and right and left click emulation. I tried enabling syndaemon, which is supposed to disable clicks as long as you're typing then enable them again afterward, and spent months playing with its settings, but in order to get it to work at all, I had to set the timeout so long that there was an infuriating wait after I stopped typing before I could do anything.

I was on the verge of giving up on the Asus and going back to my Dell Latitude 2120, which had an excellent trackpad (with buttons) and the world's greatest 10" laptop keyboard. (What the Dell doesn't have is battery life, and I really hated to give up the Asus's light weight and 8-hour battery life.) As a final, desperate option, I decided to disable taps completely.

Disable taps? Then how do you do a mouse click?

I theorized that, with all of Linux's flexibility, there must be some way to get function keys to work like mouse buttons. And indeed there is. The easiest way seemed to be to use xmodmap (strange to find xmodmap being the simplest anything, but there you go). It turns out that a simple line like

  xmodmap -e "keysym F1 = Pointer_Button1"
is most of what you need. But to make it work, you need to enable "mouse keys":
  xkbset m

But for reasons unknown, mouse keys will expire after some set timeout unless you explicitly tell it not to. Do that like this:

  xkbset exp =m

Once that's all set up, you can disable single-finger taps with synclient:

  synclient TapButton1=0
Of course, you can disable 2-finger and 3-finger taps by setting them to 0 as well. I don't generally find them a problem (they don't work reliably, but they don't fire on their own either), so I left them enabled.
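
If you want to double-check what the driver thinks the tap settings are, synclient can also list its current values; the grep is just to trim the output:

  synclient -l | grep -i Tap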

I tried it and it worked beautifully for left click. Since I was still having trouble with that two-finger tap for right click, I put that on a function key too, and added middle click while I was at it. I don't use function keys much, so devoting three function keys to mouse buttons wasn't really a problem.

In fact, it worked so well that I decided it would be handy to have an additional set of mouse keys over on the other side of the keyboard, to make it easy to do mouse clicks with either hand. So I defined F1, F2 and F3 as one set of mouse buttons, and F10, F11 and F12 as another.

And yes, this all probably sounds nutty as heck. But it really is a nice laptop aside from the trackpad from hell; and although I thought Fn-key mouse buttons would be highly inconvenient, it took surprisingly little time to get used to them.

So this is what I ended up putting in my .config/openbox/autostart file. I wrap it in a test for hostname, since I like to be able to use the same configuration file on multiple machines, but I don't need this hack on any machine but the Asus.

if [ $(hostname) == iridum ]; then
  synclient TapButton1=0 TapButton2=3 TapButton3=2 HorizEdgeScroll=1

  xmodmap -e "keysym F1 = Pointer_Button1"
  xmodmap -e "keysym F2 = Pointer_Button2"
  xmodmap -e "keysym F3 = Pointer_Button3"

  xmodmap -e "keysym F10 = Pointer_Button1"
  xmodmap -e "keysym F11 = Pointer_Button2"
  xmodmap -e "keysym F12 = Pointer_Button3"

  xkbset m
  xkbset exp =m
else
  synclient TapButton1=1 TapButton2=3 TapButton3=2 HorizEdgeScroll=1
fi

July 27, 2015 02:54 AM

July 25, 2015

Elizabeth Krumbach

OSCON 2015

Following the Community Leadership Summit (CLS), which I wrote about here, I spent a couple of days at OSCON.

Monday kicked off with Jono Bacon’s Community Leadership workshop. I attended one of these a couple years ago, so it was really interesting to see how his advice has evolved along with the changes in tooling and the progress that communities in tech and beyond have made. I took a lot of notes, but everything I wanted to say here has been summarized by others in a series of great posts on opensource.com:

…hopefully no one else went to Powell’s to pick up the recommended books; I cleared out their stock of a couple of them.

That afternoon Jono joined David Planella of the Community Team at Canonical and Michael Hall, Laura Czajkowski and me of the Ubuntu Community Council to look through our CLS notes and come up with some talking points to discuss with the rest of the Ubuntu community, regarding everything from in-person events (is stronger centralized support of regional Ubucons needed?) to learning what inspires people about the active Ubuntu phone community and how we can make them feel more included in the broader community (and help them become leaders!). There was also some interesting discussion around the Open Source projects managed by Canonical and expectations for community members with regard to where they can get involved. There are some projects where part-time community contributors are wanted and welcome, and others where it’s simply not realistic due to a variety of factors, from the desire for in-person collaboration (a lot of design and UI stuff) to new projects with an exceptionally fast pace of development that makes it harder for part-time contributors (right now I’m thinking of anything related to Snappy). There are improvements that Canonical can make so that even these projects are more welcoming, but adjusting expectations about where contributions are most needed and wanted would be valuable to me. I’m looking forward to discussing these topics and more with the broader Ubuntu community.


Laura, David, Michael, Lyz

Monday night we invited members of the Oregon LoCo out and had an Out of Towners Dinner at Altabira City Tavern, the restaurant on top of the Hotel Eastlund where several of us were staying. Unfortunately the local Kubuntu folks had already cleared out of town for Akademy in Spain, but we were able to meet up with long-time Ubuntu member Dan Trevino, who used to be part of the Florida LoCo with Michael, and who I last saw at Google I/O last year. I enjoyed great food and company.

I wasn’t speaking at OSCON this year, so I attended with an Expo pass and after an amazing breakfast at Mother’s Bistro in downtown Portland with Laura, David and Michael (…and another quick stop at Powell’s), I spent Tuesday afternoon hanging out with various friends who were also attending OSCON. When 5PM rolled around the actual expo hall itself opened, and surprised me with how massive and expensive some of the company booths had become. My last OSCON was in 2013 and I don’t remember the expo hall being quite so extravagant. We’ve sure come a long way.

Still, my favorite part of the expo hall is always the non-profit/open source project/organization area where the more grass-roots tables are. I was able to chat with several people who are really passionate about what they do. As a former Linux Users Group organizer and someone who still does a lot of open source work for free as a hobby, these are my people.

Wednesday was my last morning at OSCON. I did another walk around the expo hall and chatted with several people. I also went by the HP booth and got a picture of myself… with myself. I remain very happy that HP continues to support my career in a way that allows me to work on really interesting open source infrastructure stuff and to travel the world to tell people about it.

My flight took me home Wednesday afternoon and with that my OSCON adventure for 2015 came to a close!

More OSCON and general Portland photos here: https://www.flickr.com/photos/pleia2/sets/72157656192137302

by pleia2 at July 25, 2015 12:27 AM

July 24, 2015

iheartubuntu

Linux Lite for older computers


At work I use several older desktops for various functions. By "older" I mean 2006 or so :) One system is used primarily for the internet if a customer needs internet access, another system is set up with a cheap live webcam to monitor the outdoor premises, and so on.

In looking for an easy to install OS that is Ubuntu/Debian based I have had MUCH success with Linux Lite. Linux Lite is a beginner-friendly Linux distribution based on Ubuntu 14.04 LTS and featuring the Xfce desktop.

Linux Lite is delightfully lightweight and runs fast & responsive on our old computers, which are single-core 3.0GHz Pentium 4 machines with 2GB of memory. I have had problems in the past with graphics while installing Xubuntu or Lubuntu, but not so with Linux Lite.

A ton of time is also saved with pre-installed programs like VLC, LibreOffice, GIMP, Firefox, Steam, and Thunderbird.

It also has its own built in program called Lite Software which makes life super easy for you to install other useful apps including: Chrome browser, Chromium browser, Dropbox, Ubuntu Games Pack, Pidgin chat, Google Talk plugin, Java, KeePassX password manager, PlayOnLinux for windows games, Ubuntu Restricted Extras, Skype, TeamViewer, Deluge torrent app, OpenShot video editor, VirtualBox, and XBMC.

If you have older computers and other distros are not working out for you, definitely give Linux Lite a try!

by iheartubuntu (noreply@blogger.com) at July 24, 2015 12:43 AM

July 21, 2015

Elizabeth Krumbach

Community Leadership Summit 2015

My Saturday kicked off with the Community Leadership Summit (CLS) here in Portland, Oregon.

CLS sign

Jono Bacon opened the event by talking about the growth of communities in the past several years as internet-connected communities of all kinds are springing up worldwide. Though this near-OSCON CLS is open source project heavy, he talked about communities that range from the Maker movement to political revolutions. While we work to develop best practices for all kinds of communities, it was nice to hear one of his key thoughts as we move forward in community building: “Community is not an extension of the Marketing department.”

The day continued with a series of plenaries, which were 15 minutes long and touched upon topics like empathy, authenticity and vulnerability in community management roles. The talks wrapped up with a Facilitation 101 talk to give tips on how to run the unconference sessions. We then did the session proposals and scheduling that would pick up after lunch.

CLS schedule

As mentioned in my earlier post, we had some discussion points from our experiences in the Ubuntu community that we wanted to get feedback on from the broader leadership community, so we proposed 4 sessions that lasted the afternoon.

Lack of new generation of leaders

The root of this session came from our current struggle in the Ubuntu community to find leaders, from those who wish to sit on councils and boards to leaders for the LoCo teams. In addition to several people who expressed similar problems in their own communities, there was some fantastic feedback from folks who attended, including:

  • Some folks don’t see themselves as “Leaders”, so using that word can be intimidating; if you find this is the case, shift to using different types of titles that do more to describe the role they are taking.
  • Document tasks that you do as a leader and slowly hand them off to people in your community to build a supportive group of people who know the ins and outs and can take a leadership role in the future.
  • Evaluate your community every few years to determine whether your leadership structure still makes sense, and make changes with every generation of community leaders if needed (and it often is!).
  • If you’re seeking to get more contributions from people who are employed to do open source, you may need to engage their managers to prioritize appropriately. Also, make sure credit is given to companies who are paying employees to contribute.
  • Set a clear set of responsibilities and expectations for leadership positions so people understand the role, commitment level and expectations of them.
  • Actively promote people who are doing good work, whether by expressing thanks on social media, in blog posts and through whatever other communications methods you employ, as well as by inviting them to speak at other events, funding them to attend events and directly engaging them. This will all serve to build satisfaction and their social capital in the community.
  • Casually mentor aspiring leaders, handing projects over to them once they’ve begun to grow and understand the steps required.

Making lasting friendships that are bigger than the project

This was an interesting session that was proposed because many of us found that we built strong relationships with people early on in Ubuntu, but have noticed fewer of those developing in the past few years. Many of us have these friendships which have lasted even as people leave the project, and even leave the tech industry entirely. For us Ubuntu wasn’t just an open source project; we were all building lasting relationships.

Recommendations included:

  • In person events are hugely valuable to this (what we used to get from Ubuntu Developer Summits). Empower local communities to host major events.
  • Find a way to have discussions that are not directly related to the project with your fellow project members, including creating a space where there’s a weekly topic, giving a space to share accomplishments, and perhaps not lumping it all together (some new off-topic threads on Discourse?)
  • Provide a space to have check-ins with members of and teams in your community, how is life going? Do you have the resources you need?
  • Remember that tangential interests are what bring people together on a personal level and seek to facilitate that

There was also some interesting discussion around handling contributors whose behavior has become disruptive (often due to personal things that have come up in their life), from making sure a Code of Conduct is in place to set expectations for behavior to approaching people directly to check in to make sure they’re doing all right and to discuss the change in their behavior.

Declining Community Participation

We proposed this session because we’ve seen a decline in community participation since before the Ubuntu Developer Summits ceased. We spent some time framing this problem in the space it’s in, with many Linux distributions and “core” components seeing similar decline and disinterest in involvement. It was also noted that when a project works well, people are less inclined to help because they don’t need to fix things, which may certainly be the case with a product like the Ubuntu server. In this vein, it was noted that 10 years ago the contributor to user ratio was much higher, since many people who used it got involved in order to file bugs and collaborate to fix things.

Some of the recommendations that came out of this session:

  • Host contests and special events to showcase new technologies to get people excited about involvement (made me think of Xubuntu testing with XMir, we had a lot of people testing it because it was an interesting new thing!)
  • In one company, the co-founder set a community expectation for companies who were making money from the product to give back 5% in development (or community management, or community support).
  • Put a new spin on having your code reviewed: it’s constructive criticism from programmers with a high level of expertise, you’re getting training while they chime in on reviews. Note that the community must have a solid code review community that knows how to help people and be kind to them in reviews.
  • Look at bright spots in your community and recreate them: Where has the community grown? (Ubuntu Phone) How can you bring excitement there to other parts of your project? Who are your existing contributors in the areas where you’ve seen a decline and how can you find more contributors like them?
  • Share stories about how your existing members got involved so that new contributors see a solid on-ramp for themselves, and know that everyone started somewhere.
  • Make sure you have clear, well-defined on-ramps for various parts of your project, it was noted that Mozilla does a very good job with this (Ubuntu does use Mozilla’s Asknot, but it’s hard to find!).

Barriers related to single-vendor control and development of a project

This session came about because of the obvious control that Canonical has in the direction of the Ubuntu project. We sought advice from other communities where there is single-vendor control. Perhaps unfortunately, the session trended heavily toward Ubuntu specifically, but we were able to get some feedback from other communities on how they handle decisions made in an ecosystem with both paid and volunteer contributors:

  • Decisions should happen in a public, organized space (not just an IRC log, Google Hangout or in person discussion, even if these things are made public). Some communities have used: Github repo, mailing list threads, Request For Comment system to gather feedback and discuss it.
  • Provide a space where community members can submit proposals that the development community can take seriously (we used to have brainstorm.ubuntu.com for this, but it wound down over the years and became less valuable).
  • Make sure the company counts contributions as real, tangible things that should be considered for monetary value (non-profits already do this for their volunteers).
  • Make sure the company understands the motivation of community members so they don’t accidentally undermine this.
  • Evaluate expectations in the community, are there some things the company won’t budge on? Are they honest about this and do they make this clear before community members make an investment? Ambiguity hurts the community.

I’m really excited to have further discussions in the Ubuntu community about how these insights can help us. Once I’m home I’ll be able to collect my thoughts and bring ideas, and perhaps even action items, to the ubuntu-community-team mailing list (which everyone is welcome to participate in).

This first day concluded with a feedback session for the summit itself, which brought up some great points. On to day two!

As with day one, we began the day with a series of plenaries. The first was presented by Richard Millington who talked about 10 “Social Psychology Hacks” that you can use to increase participation in your community. These included “priming” or using existing associations to encourage certain feelings, making sure you craft your story about your community, designing community rituals to make people feel included and using existing contributors to gain more through referrals. It was then time for Laura Czajkowski’s talk about “Making your Marketing team happy”. My biggest take-away from this one was that not only has she learned to use the tools the marketing team uses, but she now attends their meetings so she can stay informed of their projects and chime in when a suggestion has been made that may cause disruption (or worse!) in the community. Henrik Ingo then gave a talk where he did an analysis of the governance types of many open source projects. He found that all the “extra large” projects, developer/commit-wise, were run by a foundation, and that there seemed to be a limit as to how big single-vendor controlled projects could get. I had suspected this was the case, but it was wonderful to have his data to back up my suspicions. Finally, Gina Likins of Red Hat spoke about her work to get universities and open source projects working together. She began her talk by explaining how few college Computer Science majors are familiar with open source, and suggested that a kind of “dating site” be created to match up open source projects with professors looking to get their students involved. Brilliant! I attended her session related to it later in the afternoon.

My afternoon was spent first by joining Gina and others to talk about relationships between university professors and open source communities. Her team runs teachingopensource.org and it turns out I subscribed to their mailing list some time ago. She outlined several goals, from getting students familiar with open source tooling (IRC, mailing lists, revision control, bug trackers) all the way up to more active roles directly in open source projects where the students are submitting patches. I’m really excited to see where this goes and hope I can some day participate in working with some students beyond the direct mentoring through internships that I’m doing now.

Aside from substantial “hallway track” time where I got to catch up with some old friends and meet some people, I went to a session on having open and close-knit communities where people talked about various things, from reaching out to people when they disappear, the importance of conduct standards (and swift enforcement), and going out of your way to participate in discussions kicked off by newcomers in order to make them feel included. The last session I went to shared tips for organizing local communities, and drew from the off-line community organizing that has happened in the past. Suggestions for increasing participation for your group included cross-promotion of groups (either through sharing announcements or doing some joint meetups), not letting volunteers burn out/feel taken for granted and making sure you’re not tolerating poisonous people in your community.

The Community Leadership Summit concluded with a Question and Answer session. Many people really liked the format, keeping the morning pretty much confined to the set presentations and setting up the schedule, allowing us to take a 90 minute lunch (off-site) and come back to spend the whole afternoon in sessions. In all, I was really pleased with the event, kudos to all the organizers!

by pleia2 at July 21, 2015 05:10 AM

July 20, 2015

Eric Hammond

TimerCheck.io - Countdown Timer Microservice Built On Amazon API Gateway and AWS Lambda

deceptively simple web service with super powers

TimerCheck.io is a fully functional, fully scalable microservice built on the just-released Amazon API Gateway and increasingly popular AWS Lambda platforms.

TimerCheck.io is a public web service that maintains a practically unlimited number of countdown timers with one second resolution and no practical limit to the number of seconds each timer can run.

New timers can be created on a whim and each timer can be reset at any time to any number of seconds desired, whether it is still running or has already expired.

Synopsis

Let’s begin with an example to demonstrate the elegant simplicity of the TimerCheck.io interface.

1. Set timer - Any request of the following URL sets a timer named “YOURTIMERNAME” to start counting down immediately from 60 seconds:

https://timercheck.io/YOURTIMERNAME/60

You may click on that link now, or hit a URL of the same format with your own timer name and your chosen number of seconds. You may use a browser, a command like curl, or your favorite programming language.

2. Poll timer - The following URL requests the status of the above timer. Note that the only difference in the URL is that we have dropped the seconds count.

https://timercheck.io/YOURTIMERNAME

If the named timer is still running, TimerCheck.io will return HTTP Status code 200 OK, along with a JSON structure containing information like how many seconds are left.

If the timer has expired, TimerCheck.io will return an HTTP status code 504 Timeout.

That’s it!

No, really. That’s the entire API.
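
For the shell-inclined, here’s a minimal sketch of both calls using curl; YOURTIMERNAME is the same placeholder as above, and curl’s standard -w option prints the HTTP status code:

# Set (or reset) the timer to 60 seconds
curl -s https://timercheck.io/YOURTIMERNAME/60

# Poll the timer: 200 means still running, 504 means expired
status=$(curl -s -o /dev/null -w '%{http_code}' https://timercheck.io/YOURTIMERNAME)
echo "timer status: $status"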

And the whole service is implemented in about 60 lines of code, on top of a handful of powerful infrastructure services managed, protected, maintained, and scaled by Amazon.

Not Included

The TimerCheck.io service does not perform any action when a timer expires. The timer should be polled to find out if it has expired.

On first thought, this may cause you to wonder if this service might, in fact, be completely useless. Instead of polling TimerCheck.io, why not just have your code keep its own timer records or look at a clock and see if it’s time yet?

The answer is that TimerCheck.io is not created for situations where you can depend on your own code to be running and keeping track of things.

TimerCheck.io is designed for integration with existing third party software packages and services that already support a polling mechanism, but do not implement timers.

For example…

Event Monitoring

There are many types of monitoring software packages and free/commercial services that poll resources to see if they are healthy and alert you if there is a problem, but they have no way to alert you if an expected event does not occur. For example, you may want to ensure that a batch job runs every hour, or a message is posted to an SNS topic at least every 15 minutes.

The TimerCheck.io service can be the glue between the existing events you wish to monitor and your existing monitoring system. Here’s how it works:

1. Set timer - When your event runs, trigger a ping of TimerCheck.io to reset the timer. In the URL, specify the name of the timer and the number of seconds when your monitoring system should consider it a problem if no further event has run.

2. Poll timer - Add the TimerCheck.io polling URL for the same timer to your monitoring software, configuring it to alert you if the web request returns anything but success.

If your events keep resetting the timer before the timer expires, your monitoring system will stay happy and quiet, as the polling URL will always return success.

If the monitoring system polls the timer when no event has run in the specified number of seconds, then alarms sound, you will be woken up, and you can investigate why your batch job did not run on its expected schedule.

This is all possible using your existing monitoring system’s standard web check service, without any additional plugins or feature development.
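
For example, in a Nagios-compatible system, the stock check_http web check can poll the timer URL directly and go critical when the service starts returning 504. A sketch, assuming the standard monitoring-plugins flags:

# OK while the timer is running (HTTP 200); CRITICAL once it expires (HTTP 504)
check_http -H timercheck.io -S -u /YOURTIMERNAME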

Naming

TimerCheck.io has no registration, no authentication, and no authorization. If you don’t want somebody else resetting your timer accidentally or on purpose, you should pick a timer name that is unguessable even with brute force.

For example:

# A sensible timer name with some unguessable random bits
timer=https://timercheck.io/sample-timer-$(pwgen -s 22 1)
echo $timer

# (OR)
timer=https://timercheck.io/sample-timer-$(uuid -v4 -FSIV)
echo $timer

# Set the timer to 1 hour
seconds=3600
curl -s $timer/$seconds | jq .

# Check the timer
curl -s $timer | jq .

Cron Jobs

Say I have a cron job that runs once an hour. I don’t mind if it fails to complete successfully once, but if it fails to check in twice in a row, I want to be alerted.

This example will use a random number for the timer name. You should generate your own unique timer names (see previous section).

Here’s a sample crontab entry that runs my job, then resets the countdown timer using TimerCheck.io:

0 * * * * $HOME/bin/create-snapshots && curl -s https://timercheck.io/sample-cron-4/8100 >/dev/null

The timer is reset at the end of each job to 8100 seconds, which is two hours plus 15 minutes. The extra minutes give the hourly cron job some extra time to complete before we start sounding alarms.
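
A quick sanity check of that window (two hourly runs plus a 15-minute grace period):

echo $((2 * 3600 + 15 * 60))   # prints 8100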

All that’s left is to add the monitor poll URL to my monitoring service:

https://timercheck.io/sample-cron-4

Responses

Though you can ignore the response content from the TimerCheck.io web service, here are samples of what it returns.

If the timer has not yet expired because your events are running on schedule and resetting the countdown, then the monitoring URL returns a 200 success code along with the current state of the timer. This includes things like when the timer set URL was last requested, and how many seconds remain before the timer goes into an error state.

{
  "timer": "YOURTIMERNAME",
  "request_id": "501abe10-2dad-11e5-80c1-35cdcb449e41",
  "status": "ok",
  "now": 1437265810,
  "start_time": 1437265767,
  "start_seconds": 60,
  "seconds_elapsed": 43,
  "seconds_remaining": 17,
  "message": "Timer still running"
}

If the timer has expired and no event has been run to reset it, then the monitor URL returns a 504 timeout error code and an error message. Once I figure out how to get the API Gateway to return both an error code and some JSON content, I will expand this to include more details about when the timer expired.

{
  "errorMessage": "504: timer timed out"
}

When you call the event URL, passing in the number of seconds for resetting the timer, the API returns the previous state of the timer (as in the first example above) along with a note that it has set the new values.

{
  "timer": "YOURTIMERNAME",
  "request_id": "36a764b6-2dad-11e5-9318-f3b076dd2a3a",
  "status": "ok",
  "now": 1437265767,
  "start_time": 1437263674,
  "start_seconds": 60,
  "seconds_elapsed": 2093,
  "seconds_remaining": -2033,
  "message": "Timer countdown updated",
  "new_start_time": 1437265767,
  "new_start_seconds": 60
}

If this is the first time you have set the particular timer, the previous state keys will be missing.

Guarantees

There are none.

TimerCheck.io is a free public service intended, but not guaranteed, to be useful. It may return unexpected results. At any time and with no warning, it may become unavailable for short periods or forever.

Terms of Use

Don’t use TimerCheck.io in an abusive manner. If you are unsure if your use case might be considered abusive, ask.

Alternatives

I am not aware of any services that operate the same way as TimerCheck.io, with the ability to add dead man’s switch features to existing polling-based monitoring services, but here are a few services that are targeted specifically at event monitoring.

What do you use for monitoring and alerting? Are you using monitoring to make sure scheduled events are not missed?

Original article and comments: https://alestic.com/2015/07/timercheck-scheduled-events-monitoring/

July 20, 2015 11:26 AM

Akkana Peck

Plugging in those darned USB cables

I'm sure I'm not the only one who's forever trying to plug in a USB cable only to find it upside down. And then I flip it and try it the other way, and that doesn't work either, so I go back to the first side, until I finally get it plugged in, because there's no easy way to tell visually which way the plug is supposed to go.

It's true of nearly all of the umpteen variants of USB plug: almost all of them differ only subtly from the top side to the bottom.

[USB trident] And to "fix" this, USB cables are built so that they have subtly raised indentations which, if you hold them to the light just right so you can see the shadows, say "USB" or have the little USB trident on the top side:


In an art store a few weeks ago, Dave had a good idea.

[USB cables painted for orientation] He bought a white paint marker, and we've used it to paint the logo side of all our USB cables.

Tape the cables down on the desk -- so they don't flop around while the paint is drying -- and apply a few dabs of white paint to the logo area of each connector. If you're careful you might be able to fill in the lowered part so the raised USB symbol stays black; or to paint only the raised USB part. I tried that on a few cables, but after the fifth or so cable I stopped worrying about whether I was ending up with a pretty USB symbol and just started dabbing paint wherever was handy.

The paint really does make a big difference. It's much easier now to plug in USB cables, especially micro USB, and I never go through that "flip it over several times" dance any more.

July 20, 2015 02:37 AM

July 18, 2015

Elizabeth Krumbach

SF activities and arrival in Portland, OR

Time at home in San Francisco came to an end this week with a flight to Portland, OR on Friday for some open source gatherings around OSCON. This ended nearly 2 months without getting on a plane, the longest such stretch I’ve had in over 2 years. My initial intention with this time was to spend a lot of time on my book, which I have, but not nearly as much as I’d hoped because the work and creativity required isn’t something you can just turn on and off. It was nice getting to spend so much time with my husband though, and the kitties. The stretch at home also led me to join a gym again (I’d canceled my last month-to-month membership when a stretch of travel had me gone for over a month). Upon my return next week I have the first of four sessions with a trainer at the gym scheduled.

While I haven’t exactly had a full social calendar of late, I have been able to go to a few events. Last Wednesday I hosted an Ubuntu Hour and Bay Area Debian Dinner in San Francisco.

The day after, SwiftStack hosted probably the only OpenStack 5th birthday party I’ll be able to attend this year (I leave before the OSCON one, and will be in Peru for the HP one!). I got to see some familiar faces, meet some Swift developers and eat some OpenStack cake.

MJ had a friend in town last week too, which meant I had a lot of time to myself. In the spirit of not having to worry about my own meals during this time, I cooked up a pot of beef stew to enjoy through the week and learned quickly that I should have frozen at least half of it. Even a modest pot of stew is much more than I can eat by myself over the course of a week. I did enjoy it though; some day I’ll learn about spices so I can make one that’s not so bland.

I’ve also been running again, after a bit of a hiatus following the trip to Vancouver. Fortunately I didn’t lose much ground stamina-wise and was mostly able to pick up where I left off. It has been warmer than normal in San Francisco these past couple weeks though, so I’ve been playing around with the times of my runs, with early evenings as soon as the fog/coolness rolls in currently the winning time slot during the week. Sunday morning runs have been great too.

This week I made it out to a San Francisco DevOps meetup where Tom Limoncelli was giving a talk inspired by some of the less intuitive points in his book The Practice of Cloud Systems Administration. In addition to seeing Tom, it was nice to meet up with some of my local DevOps friends who I haven’t managed to connect with lately and meet some new people.

I had a busy week at home before my trip to Portland. Upon settling in to the hotel I’m staying at, I met up with my friend and fellow Ubuntu Community Council member Laura Czajkowski. We took the metro over the bridge to downtown Portland, and on the way she showed off her Ubuntu phone and its photo-taking app with a selfie together!

Since it was Laura’s first time in Portland, our first stop downtown was to Voodoo Doughnuts! I got my jelly-filled voodoo guy doughnut.

From there we made our way to Powell’s Books where we spent the rest of the afternoon, as you do with Powell’s. I picked up 3 books and learned that Powell’s Technical Books/Powell’s 2 has been absorbed into the big store, which was a little sad for me; it was fun to go to the store that just had science, transportation and engineering books. Still, it was a fun visit and I always enjoy introducing someone new to the store.

Then we headed back across the river to meet up with people for the Community Leadership Summit informal gathering event at the Double Tree. We had a really enjoyable time; I got to see Michael Hall of the Ubuntu Community Council and David Planella of the Community Team at Canonical, and we caught up with each other and chatted about Ubuntu things. Plus, I ran into people I know from the broader open source community. As an introvert, it was one of the more energizing social events I’ve been to in a long time.

Today the Community Leadership Summit that I’m in town for kicks off! Looking forward to some great discussions.

by pleia2 at July 18, 2015 03:17 PM

July 16, 2015

Elizabeth Krumbach

Ubuntu at the upcoming Community Leadership Summit

This weekend I have the opportunity to attend the Community Leadership Summit. While there, I’ll be able to take advantage of an opportunity that’s rare now: meeting up with my fellow Ubuntu Community Council members Laura Czajkowski and Michael Hall, along with David Planella of the community team at Canonical. At the Community Council meeting today, I was able to work with David on narrowing down a few topics that impact us, that we think would be of interest to other communities, and that we’ll propose for discussion at CLS:

  1. Declining participation
  2. Community cohesion
  3. Barriers related to [the perception of] company-driven control and development
  4. Lack of a new generation of leaders

Since CLS is an unconference, we’ll be submitting these ideas for discussion, and we’ll see how many of them gain the interest of enough people to warrant a session.


Community Leadership Summit 2015

Since we’ll all be together, we also managed to arrange some time together on Monday afternoon and Tuesday to talk about how these challenges impact Ubuntu specifically and get to any of the topics mentioned above that weren’t selected for discussion at CLS itself. By the end of this in person gathering we hope to have some action items, or at least some solidified talking points and ideas to bring to the ubuntu-community-team mailing list. I’ll also be doing a follow-up blog post where I share some of my takeaways.

What I need from you:

If you’re attending CLS join us for the discussions! If you just happen to be in the area for OSCON in general, feel free to reach out to me (email: lyz@ubuntu.com) to have a chat while I’m in town. I fly home Wednesday afternoon.

If you can’t attend CLS but are interested in these discussions, chime in on the ubuntu-community-team thread or send a message to the Community Council at community-council at lists.ubuntu.com with your feedback and we’ll work to incorporate it into the sessions. You’re also welcome to contact me directly and I’ll pass things along (anonymously if you’d like, just let me know).

Finally, a reminder that this time together is not a panacea. These are complicated concerns in our community that will not be solved over a weekend and a few members of the Ubuntu Community Council won’t be able to solve them alone. Like many of you, I’m a volunteer who cares about the Ubuntu community and am doing my best to find the best way forward. Please keep this in mind as you bring concerns to us. We’re all on the same team here.

by pleia2 at July 16, 2015 06:59 PM

July 15, 2015

Eric Hammond

Simple New Web Service: Testers Requested

Interested in adding scheduled job monitoring (dead man’s switch) to the existing monitoring and alerting framework you are already using (Nagios, Sensu, Zenoss, Zabbix, Monit, Pingdom, Montastic, Ruxit, and the like)?

Last month I wrote about how I use Cronitor.io to monitor scheduled events with an example using an SNS Topic and AWS Lambda.

This week I spent a few hours building a simple web service that enables any polling-based monitor software or service to automatically support alerting when a target event has not occurred in a desired timeframe.

The new web service is built on infrastructure technologies that are reliably maintained and scaled by Amazon:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • CloudFront
  • Route53
  • CloudWatch

The source code is about a page long and the web service API is as trivial as it gets; but the functionality it adds to monitoring services is quite powerful and hugely scalable.

Integration requires these simple steps:

Step 1: There is no step one! There is no registration, no setup, and no configuration of the new web service for your use.

Step 2: Hit one URL when your target event occurs.

Step 3: Tell your existing monitoring system to poll another URL and to alert you when it fails.

Result: When your scheduled task misses an appointment and doesn’t check in, the second URL monitored by your software will start returning a failure code, and you will be alerted.
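
As a concrete sketch of steps 2 and 3, using curl and the URL pattern documented in the TimerCheck.io post above (the timer name and the 15-minute window are placeholders):

# Step 2: hit this URL when your event occurs, resetting a 15-minute countdown
curl -s https://timercheck.io/YOURTIMERNAME/900

# Step 3: have your monitoring system poll this URL and alert on failure
curl -s -o /dev/null -w '%{http_code}' https://timercheck.io/YOURTIMERNAME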

Intrigued?

I’m still working on the blog post to introduce the web service, but would love to have some folks test it out this week and give feedback.

If you are interested, drop me an email and mention:

  • The monitoring/alerting frameworks you currently use

  • The type of scheduled activities you would like to monitor (cron job, SNS topic, Lambda function, web page view, email receipt, …)

  • The frequency of the target events (every 10 seconds, every 10 years, …)

Even if you don’t want to do testing this week, I’d love to hear your answers to the above three points, through email or in the comments below.

Original article and comments: https://alestic.com/2015/07/timercheck-testers-requested/

July 15, 2015 04:54 AM

July 14, 2015

Akkana Peck

Hummingbird Quidditch!

[rufous hummingbird] After months of at most one hummingbird at the feeders every 15 minutes or so, yesterday afternoon the hummingbirds here all suddenly went crazy. Since then, my patio has looked like a tiny Battle of Britain. There are at least four males involved in the fighting, plus a couple of females who sneak in to steal a sip whenever the principals retreat for a moment.

I posted that to the local birding list and someone came up with a better comparison: "it looks like a Quidditch game on the back porch". Perfect! And someone else compared the hummer guarding the feeder to "an avid fan at Wimbledon", referring to the way his head keeps flicking back and forth between the two feeders under his control.

Last year I never saw anything like this. There was a week or so at the very end of summer where I'd occasionally see three hummingbirds contending at the very end of the day for their bedtime snack, but no more than that. I think putting out more feeders has a lot to do with it.

All the dogfighting (or quidditch) is amazing to watch, and to listen to. But I have to wonder how these little guys manage to survive when they spend all their time helicoptering after each other and no time actually eating. Not to mention the way the males chase females away from the food when the females need to be taking care of chicks.

[calliope hummingbird]

I know there's a rufous hummingbird (shown above) and a broad-tailed hummingbird -- the broad-tailed makes a whistling sound with his wings as he dives in for the attack. I know there's a black-chinned hummer around because I saw his characteristic tail-waggle as he used the feeder outside the nook a few days before the real combat started. But I didn't realize until I checked my photos this morning that one of the combatants is a calliope hummingbird. They're usually the latest to arrive, and the rarest. I hadn't realized we had any calliopes yet this year, so I was very happy to see the male's throat streamers when I looked at the photo. So all four of the species we'd normally expect to see here in northern New Mexico are represented.

I've always envied places that have a row of feeders and dozens of hummingbirds all vying for position. But I would put out two feeders and never see them both occupied at once -- one male always keeps an eye on both feeders and drives away all competitors, including females -- so putting out a third feeder seemed pointless. But late last year I decided to try something new: put out more feeders, but make sure some of them are around the corner hidden from the main feeders. Then one tyrant can't watch them all, and other hummers can establish a beachhead.

It seems to be working: at least, we have a lot more activity so far than last year, even though I never seem to see any hummers at the fourth feeder, hidden up near the bedroom. Maybe I need to move that one; and I just bought a fifth, so I'll try putting that somewhere on the other side of the house and see how it affects the feeders on the patio.

I still don't have dozens of hummingbirds like some places have (the Sopaipilla Factory restaurant in Pojoaque is the best place I've seen around here to watch hummingbirds). But I'm making progress.

July 14, 2015 06:45 PM

July 09, 2015

Akkana Peck

Taming annoyances in the new Google Maps

For a year or so, I've been appending "output=classic" to any Google Maps URL. But Google disabled Classic mode last month. (There have been a few other ways to get classic Google maps back, but Google is gradually disabling them one by one.)

I have basically three problems with the new maps:

  1. If you search for something, the screen is taken up by a huge box showing you what you searched for; if you click the "x" to dismiss the huge box so you can see the map underneath, the box disappears but so does the pin showing your search target.
  2. A big swath at the bottom of the screen is taken up by a filmstrip of photos from the location, and it's an extra click to dismiss that.
  3. Moving or zooming the map is very, very slow: it relies on OpenGL support in the browser, which doesn't work well on Linux in general, or on a lot of graphics cards on any platform.

Now that I don't have the "classic" option any more, I've had to find ways around the problems -- either that, or switch to Bing maps. Here's how to make the maps usable in Firefox.

First, for the slowness: the cure is to disable webgl in Firefox. Go to about:config and search for webgl. Then double-click on the line for webgl.disabled to make it true.

For the other two, you can add userContent lines to tell Firefox to hide those boxes.

Locate your Firefox profile. Inside it, edit chrome/userContent.css (create that file if it doesn't already exist), and add the following two lines:

div#cards { display: none !important; }
div#viewcard { display: none !important; }
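
From a shell, creating the file might look like this sketch; the profile path is an assumption (check ~/.mozilla/firefox/profiles.ini if your profile directory doesn't end in .default). The last line shows an optional way to set the webgl preference from user.js instead of clicking around in about:config:

# Profile path is an assumption -- see ~/.mozilla/firefox/profiles.ini if unsure
profile=$(ls -d ~/.mozilla/firefox/*.default | head -1)
mkdir -p "$profile/chrome"
cat >> "$profile/chrome/userContent.css" <<'EOF'
div#cards { display: none !important; }
div#viewcard { display: none !important; }
EOF
echo 'user_pref("webgl.disabled", true);' >> "$profile/user.js"

Restart Firefox afterward so both changes take effect.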

Voilà! The boxes that used to hide the map are now invisible. Of course, that also means you can't use anything inside them; but I never found them useful for anything anyway.

July 09, 2015 04:54 PM

July 06, 2015

Elizabeth Krumbach

California Tourist

I returned from my latest conference on May 23rd, closing down what had been over 2 years of traveling every month to some kind of conference, event or family gathering. This was the longest stretch of travel I’ve done and I’ve managed to visit a lot of amazing places and meet some unforgettable people. However, with a book deadline creeping up and tasks at home piling up, I figured it was time to slow down for a bit. I didn’t travel in June and my next trip isn’t until the end of July when I’m going up to Portland for the Community Leadership Summit and a couple days of schmoozing with OSCON friends.

Complicated moods of late and continued struggles with migraines have made it so I’ve not been as productive as I’ve wanted, but I have made real progress on some things I’ve wanted to do, and my book is really finally coming together. In the spaces between work I’ve also managed a bit of much needed fun and relaxation.

A couple weekends ago MJ and I took a weekend trip up to an inn and spa in Sonoma to get some massages and soak in natural mineral water pools provided by on site springs. We had some amazing dinners at the inn, including one evening where we enjoyed s’mores at an outdoor fire pit. The time spent was amazingly relaxing and refreshing, and although it wasn’t a cure-all for the dip in my mood of late, it was some time well spent together.


Perfect weather, beautiful venue

On Sunday morning we checked out of the inn and enjoyed a fantastic brunch on the grounds that included lobster eggs benedict before venturing on. While in Sonoma, we decided to stop by a couple wineries that we were familiar with, starting with Imagery, which is the sister winery to the one we got engaged at, and our next stop, Benziger. At both we picked up several nice wines, which I’m looking forward to cracking open for Shabbats in the near future!

We also stopped by B.R. Cohn for a couple olive oils, and I picked up some delicious blackberry jam and some Chardonnay caramel sauce which has graced some bowls of ice cream since our return. On the trip back to San Francisco we made one final stop, at Jacuzzi Winery where we picked up several more interesting bottles of olive oil, which will soon make it into some salads, scrambled eggs and other dishes that we got recipe cards for.

Due to my backlog, I’ve been spending a lot of time at home and not much at local events, with the exception of a great gathering at the East Bay Linux Users Group a few weeks ago. In contrast with my professional colleagues who work on Linux full time as systems administrators, engineers and DevOps, it’s so refreshing to go to a LUG where I’m meeting with long-term tech hobbyists who still distro-hop and come up with interesting questions around the distros I’m most familiar with and the Linux ecosystem in general. This group has also had interest in Partimus lately, so it was nice to get some feedback about our on-going efforts and volunteer recruitment activities.

In an effort to get out of the house more, I picked up the book Historic Walks in San Francisco: 18 Trails Through the City’s Past and finally took it out for a spin this weekend. I went on the Financial District walk, which took me around what is essentially my own neighborhood but had me look at it with whole new eyes. I learned that the Hallidie Building had tricked me into believing it was a new building with its glass exterior, but it actually dates from 1917 and is one of the first American buildings to feature glass curtain walls.


Hallidie Building

One of my favorite buildings on the tour turned out to be the Kohl Building, which was built in 1901 and withstood the 1906 earthquake that leveled most of downtown San Francisco and so was used as a command post during the recovery. Erected for Alvinza Hayward, the “H” shape of the building is allegedly in honor of his last name.


Kohl Building

The tour had lots more fun landmarks and stories of recovery (or not) following the 1906 earthquake. Amusingly for my European friends, the young age of San Francisco itself and our shaky history mean that there was not much at all here 160 years ago, so “historical” for us means 50+ years. Go back over 110 years and you’re before the city was essentially leveled by the earthquake and fire, to some truly impressive, sturdy buildings. The oldest on the tour, dating from 1877, is the oldest standing building downtown; it now houses the Pacific Heritage Museum, which I hope to visit one of these days when it’s open.

More photos from my walk here: https://www.flickr.com/photos/pleia2/sets/72157655051173508

While on the topic of walking tours, doing this tour alone left something to be desired, even with Tony Bennett and company crooning in my ears. I think I might look up some of the free San Francisco Walking Tours for my next adventure.

My 4th of July weekend here has been pretty low-key. MJ has a friend in town, so they’ve been spending the days out and I’ll sometimes tag along for dinner. With an empty house, I got some reading done, plowed through several tasks on my to do list and started catching up on book related tasks. I still don’t feel like I got “enough” done, but there’s always tomorrow.

by pleia2 at July 06, 2015 01:23 AM

July 04, 2015

Akkana Peck

Create a signed app with Cordova

I wrote last week about developing apps with PhoneGap/Cordova. But there's one thing I didn't cover: when you type cordova build, you're building only a debug version of your app. If you want to release it, you have to sign it. Figuring out how to do that turned out to be a little tricky.

Most pages on the web say you can sign your apps by creating platforms/android/ant.properties with the same keystore information in it that you'd put in an ant build, then running cordova build android --release

But Cordova completely ignored my ant.properties file and went on creating a debug .apk file and no signed one.

I found various other purported solutions on the web, like creating a build.json file in the app's top-level directory ... but that just made Cordova die with a syntax error inside one of its own files. This is the only method that worked for me:

Create a file called platforms/android/release-signing.properties, and put this in it:

storeFile=/path/to/your-keystore.keystore
storeType=jks
keyAlias=some-key
# if you don't want to enter the password at every build, use this:
keyPassword=your-key-password
storePassword=your-store-password

Then cordova build android --release finally works, and creates a file called platforms/android/build/outputs/apk/android-release.apk
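
If you want to double-check that the resulting apk really is signed with your release key rather than a debug key, jarsigner (part of the JDK you already have) can verify it. Optional, but cheap insurance:

jarsigner -verify -verbose -certs \
    platforms/android/build/outputs/apk/android-release.apk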

July 04, 2015 12:02 AM

June 30, 2015

Elizabeth Krumbach

Contributing to the Ubuntu Weekly Newsletter

Superstar Ubuntu Weekly Newsletter contributor Paul White was recently reflecting upon his work with the newsletter and noted that he’s approaching 100 issues that he’s contributed to. Wow!

That caused me to look at how long I’ve been involved. Back in 2011 the newsletter went on a 6-month hiatus when the former editor had to step down due to obligations elsewhere. After much pleading for the return of the newsletter, I spent a few weeks working with Nathan Handler to improve the scripts used in the release process and doing an analysis of the value of each section of the newsletter in relation to how much work it took to produce each week. The result was a slightly leaner, but hopefully just as valuable newsletter, which now took about 30 minutes for an experienced editor to release rather than 2+ hours. This change was transformational for the team, allowing me to be involved for a whopping 205 consecutive issues.

If you’re not familiar with the newsletter, every week we work to collect news from around our community and the Internet to bring together a snapshot of that week in Ubuntu. It helps people stay up to date with the latest in the world of Ubuntu and the Newsletter archive offers a fascinating glimpse back through history.

But we always need help putting the newsletter together. We especially need people who can take some time out of their weekend to help us write article summaries.

Summary writers. Summary writers receive an email every Friday evening (or early Saturday) US time with a link to the collaborative news links document for the past week, which lists all the articles that need 2-3 sentence summaries. These people are vitally important to the newsletter. The time commitment is limited and it’s easy to get started from the first weekend you volunteer. No need to be shy about your writing skills; we have style guidelines to help you on your way, and all summaries are reviewed before publishing, so it’s easy to improve as you go.

Interested? Email editor.ubuntu.news@ubuntu.com and we’ll get you added to the list of folks who are emailed each week.

I love working on the newsletter. As I’ve had to reduce my commitment to some volunteer projects I’m working on, I’ve held on to the newsletter because of how valuable and enjoyable I find it. We’re a friendly team and I hope you can join us!

Still just interested in reading? You have several options:

And everyone is welcome to drop by #ubuntu-news on Freenode to chat with us or share links to news we may find valuable for the newsletter.

by pleia2 at June 30, 2015 02:29 AM

June 29, 2015

Akkana Peck

Chollas in bloom, and other early summer treats

[Bee in cholla blossom] We have three or four cholla cacti on our property. Impressive, pretty cacti, but we were disappointed last year that they never bloomed. They looked like they were forming buds ... and then one day the buds were gone. We thought maybe some animal ate them before the flowers had a chance to open.

Not this year! All of our chollas have gone crazy, with the early rain followed by hot weather. Last week we thought they were spectacular, but they just kept getting better and better. In the heat of the day, it's a bee party: they're aswarm with at least three species of bees and wasps (I don't know enough about bees to identify them, but I can tell they're different from one another) plus some tiny gnat-like insects.

I wrote a few weeks ago about the piñons bursting with cones. What I didn't realize was that these little red-brown cones are all the male, pollen-bearing cones. The ones that bear the seeds, apparently, are the larger bright green cones, and we don't have many of those. But maybe they're just small now, and there will be more later. Keeping fingers crossed. The tall spikes of new growth are called "candles" and there are lots of those, so I guess the trees are happy.

[Desert willow in bloom] Other plants besides cacti are blooming. Last fall we planted a desert willow from a local native plant nursery. The desert willow isn't actually native to White Rock -- we're around the upper end of its elevation range -- but we missed the Mojave desert willow we'd planted back in San Jose, and wanted to try one of the Southwest varieties here. Apparently they're all the same species, Chilopsis linearis.

But we didn't expect the flowers to be so showy! A couple of blossoms just opened today for the first time, and they're as beautiful as any of the cultivated flowers in the garden. I think that means our willow is a 'Rio Salado' type.

Not all the growing plants are good. We've been keeping ourselves busy pulling up tumbleweed (Russian thistle) and stickseed while they're young, trying to prevent them from seeding. But more on that in a separate post.

As I write this, a bluebird is performing short aerobatic flights outside the window. Curiously, it's usually the female doing the showy flying; there's a male out there too, balancing himself on a piñon candle, but he doesn't seem to feel the need to show off. Is the female catching flies, showing off for the male, or just enjoying herself? I don't know, but I'm happy to have bluebirds around. Still no definite sign of whether anyone's nesting in our bluebird box. We have ash-throated flycatchers paired up nearby too, and I'm told they use bluebird boxes more than the bluebirds do. They're both beautiful birds, and welcome here.

Image gallery: Chollas in bloom (and other early summer flowers).

June 29, 2015 01:38 AM

June 23, 2015

Akkana Peck

Cross-Platform Android Development Toolkits: Kivy vs. PhoneGap / Cordova

Although Ant builds have made Android development much easier, I've long been curious about the cross-platform phone development apps: you write a simple app in some common language, like HTML or Python, then run something that can turn it into apps on multiple mobile platforms, like Android, iOS, Blackberry, Windows Phone, Ubuntu, Firefox OS or Tizen.

Last week I tried two of the many cross-platform mobile frameworks: Kivy and PhoneGap.

Kivy lets you develop in Python, which sounded like a big plus. I went to a Kivy talk at PyCon a year ago and it looked pretty interesting. PhoneGap takes web apps written in HTML, CSS and Javascript and packages them like native applications. PhoneGap seems much more popular, but I wanted to see how it and Kivy compared. Both projects are free, open source software.

If you want to skip the gory details, skip to the summary: how do Kivy and PhoneGap compare?

PhoneGap

I tried PhoneGap first. It's based on Node.js, so the first step was installing that. Debian has packages for nodejs, so apt-get install nodejs npm nodejs-legacy did the trick. You need nodejs-legacy to get the "node" command, which you'll need for installing PhoneGap.

Now comes a confusing part. You'll be using npm to install ... something. But depending on which tutorial you're following, it may tell you to install and use either phonegap or cordova.

Cordova is an Apache project which is intertwined with PhoneGap. After reading all their FAQs on the subject, I'm as confused as ever about where PhoneGap ends and Cordova begins, which one is newer, which one is more open-source, whether I should say I'm developing in PhoneGap or Cordova, or even whether I should be asking questions on the #phonegap or #cordova channels on Freenode. (The one question I had, which came up later in the process, I asked on #phonegap and got a helpful answer very quickly.) Neither one is packaged in Debian.

After some searching for a good, comprehensive tutorial, I ended up on the Cordova tutorial rather than a PhoneGap one. So I typed:

sudo npm install -g cordova

Once it's installed, you can create a new app, add the android platform (assuming you already have android development tools installed) and build your new app:

cordova create hello com.example.hello HelloWorld
cordova platform add android
cordova build

Oops!

Error: Please install Android target: "android-22"

Apparently Cordova/Phonegap can only build with its own preferred version of Android, which currently is 22. Editing files to specify android-19 didn't work for me; it just gave errors at a different point.

So I fired up the Android SDK manager, selected android-22 for install, accepted the license ... and waited ... and waited. In the end it took over two hours to download the android-22 SDK; the system image is 13Gb! So that's a bit of a strike against PhoneGap.
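
For what it's worth, the standalone SDK tools can also kick off that download from the command line instead of the GUI manager. A sketch -- I haven't checked the exact filter name against every SDK tools release:

# install the android-22 platform without the GUI
android update sdk --no-ui --filter android-22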

While I was waiting for android-22 to download, I took a look at Kivy.

Kivy

As a Python enthusiast, I wanted to like Kivy best. Plus, it's in the Debian repositories: I installed it with sudo apt-get install python-kivy python-kivy-examples

They have a nice quickstart tutorial for writing a Hello World app on their site. You write it, run it locally in python to bring up a window and see what the app will look like. But then the tutorial immediately jumps into more advanced programming without telling you how to build and deploy your Hello World. For Android, that information is in the Android Packaging Guide. They recommend an app called Buildozer (cute name), which you have to pull from git, build and install.

buildozer init
buildozer android debug deploy run

The second command got started on building ... but then I noticed that it was attempting to download and build its own version of Apache Ant (sort of a Java version of make). I already have ant -- I've been using it for weeks for building my own Java Android apps. Why did it want a different version?

The file buildozer.spec in your project's directory lets you uncomment and customize variables like:

# (int) Android SDK version to use
android.sdk = 21

# (str) Android NDK directory (if empty, it will be automatically downloaded.)
# android.ndk_path = 

# (str) Android SDK directory (if empty, it will be automatically downloaded.)
# android.sdk_path = 

Unlike a lot of Android build packages, buildozer will not inherit variables like ANDROID_SDK, ANDROID_NDK and ANDROID_HOME from your environment; you must edit buildozer.spec.

But that doesn't help with ant. Fortunately, when I inspected the Python code for buildozer itself, I discovered there was another variable that isn't mentioned in the default spec file. Just add this line:

android.ant_path = /usr/bin

Next, buildozer gave me a slew of compilation errors:

kivy/graphics/opengl.c: No such file or directory
 ... many many more lines of compilation interspersed with errors
kivy/graphics/vbo.c:1:2: error: #error Do not use this file, it is the result of a failed Cython compilation.

I had to ask on #kivy to solve that one. It turns out that the current version of cython, 0.22, doesn't work with kivy stable. My choices were to uninstall kivy and pull the development version from git, or to uninstall cython and install version 0.21.2 via pip. I opted for the latter. Either way, there's no "make clean", so removing the dist and build directories let me start over with the new cython.

sudo apt-get purge cython
sudo pip install Cython==0.21.2
rm -rf ./.buildozer/android/platform/python-for-android/dist
rm -rf ./.buildozer/android/platform/python-for-android/build
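
Before re-running buildozer, it's worth a quick sanity check that python now picks up the downgraded Cython:

python -c 'import Cython; print(Cython.__version__)'   # should print 0.21.2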

Buildozer was now happy, and proceeded to download and build Python-2.7.2, pygame and a large collection of other Python libraries for the ARM platform. Apparently each app packages the Python language and all libraries it needs into the Android .apk file.

Eventually I ran into trouble because I'd named my python file hello.py instead of main.py; apparently this is something you're not allowed to change and they don't mention it in the docs, but that was easily solved. Then I ran into trouble again:

Exception: Unable to find capture version in ./main.py (looking for `__version__ = ['"](.*)['"]`)

The buildozer.spec file offers two types of versioning: by default "method 1" is enabled, but I never figured out how to get past that error with "method 1" so I commented it out and uncommented "method 2". With that, I was finally able to build an Android package.

The .apk file it created was quite large because of all the embedded Python libraries: for the little 77-line pong demo, /usr/share/kivy-examples/tutorials/pong in the Debian kivy-examples package, the apk came out 7.3Mb. For comparison, my FeedViewer native java app, roughly 2000 lines of Java plus a few XML files, produces a 44k apk.

The next step was to make a real mini app. But when I looked through the Kivy examples, they all seemed highly specialized, and I couldn't find any documentation that addressed issues like what widgets were available or how to lay them out. How do I add a basic text widget? How do I put a button next to it? How do I get the app to launch in portrait rather than landscape mode? Is there any way to speed up the very slow initialization?

I'd spent a few hours on Kivy and made a Hello World app, but I was having trouble figuring out how to do anything more. I needed a change of scenery.

PhoneGap, redux

By this time, android-22 had finally finished downloading. I was ready to try PhoneGap again.

This time,

cordova platforms add android
cordova build

worked fine. It took a long time, because it downloaded the huge gradle build system rather than using something simpler like ant. I already have a copy of gradle somewhere (I downloaded it for the OsmAnd build), but it's not in my path, and I was too beaten down by this point to figure out where it was and how to get cordova to point to it.

Cordova eventually produced a 1.8Mb "hello world" apk -- a quarter the size of the Kivy package, though 20 times as big as a native Java app. Deployed on Android, it initialized much faster than the Kivy app, and came up in portrait mode but rotated correctly if I rotated the phone.

Editing the HTML, CSS and Javascript was fairly simple. You'll want to replace pretty much all of the default CSS if you don't want your app monopolized by the Cordova icon.

The only tricky part was file access: opening a file:// URL didn't work. I asked on #phonegap and someone helpfully told me I'd need the file plugin. That was easy to find in the documentation, and I added it like this:

cordova plugin search file
cordova plugin add org.apache.cordova.file
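
If you want to confirm the plugin actually landed in your project, cordova can list what's installed:

cordova plugin list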

My final apk, for a small web app I use regularly on Android, was almost the same size as their hello world example: 1.8Mb. And it works great: phonegap had no problem playing an audio clip, something that was tricky when I was trying to do the same thing from a native Android java WebView class.

Summary: How do Kivy and PhoneGap compare?

This has been a long article, I know. So how do Kivy and PhoneGap compare, and which one will I be using?

They both need a large amount of disk space for the development environment. I wish I had good numbers to give you, but I was working with both systems at the same time, and their packages are scattered all over the disk so I haven't found a good way of measuring their size. I suspect PhoneGap is quite a bit bigger, because it uses gradle rather than ant and because it insists on android-22.

On the other hand, PhoneGap wins big on packaged application size: its .apk files are a quarter the size of Kivy's.

PhoneGap definitely wins on documentation. Kivy seemingly has lots of documentation, but its tutorials jumped around rather than following a logical sequence, and I had trouble finding answers to basic questions like "How do I display a text field with a button?" PhoneGap doesn't need that, because the UI is basic HTML and CSS -- limited though they are, at least most people know how to use them.

Finally, PhoneGap wins on startup speed. For my very simple test app, startup was more or less immediate, while the Kivy Hello World app required several seconds of startup time on my Galaxy S4.

Kivy is an interesting project. I like the ant-based build, the straightforward .spec file, and of course the Python language. But it still has some catching up to do in performance and documentation. For throwing together a simple app and packaging it for Android, I have to give the win to PhoneGap.

June 23, 2015 06:09 PM

June 19, 2015

Jono Bacon

Rebasing Ubuntu on Android?

NOTE: Before you read this, I want to clear up some confusion. This post shares an idea that is designed purely for some intellectual fun and discussion. I am not proposing we actually do this, nor advocating for this. So, don’t read too much into these words…

The Ubuntu phone is evolving step by step. The team has worked their socks off to build a convergent user interface, toolkit, and full SDK. The phone exposes an exciting new concept, scopes, that while intriguing in their current form, after some refinement (which the team are already working on) could redefine how we use devices and access content. It is all to play for.

There is one major stumbling block though: apps.

While scopes offer a way of getting access to content quickly, they don’t completely replace apps. There will always be certain apps that people are going to want. The common examples are Skype, WhatsApp, Uber, Google Maps, Fruit Ninja, and Temple Run.

Now this is a bit of a problem. The way new platforms usually solve this is by spending hundreds of thousands of dollars to pay those companies to create and support a port. This isn’t really an option for the Ubuntu phone (there is much more than just the phone being funded by Canonical).

So, it seems to me that the opportunity of the Ubuntu phone is a sleek and sexy user interface that converges and puts content first, but the stumbling block is the lack of apps, and the lack of apps may well have a dramatic impact on adoption.

So, I have an idea to share based on a discussion last night with a friend.

Why don’t we rebase the phone off Android?

OK, bear with me…

In other words, the Ubuntu phone would be an Android phone but instead of the normal user interface it would be a UI that looks and feels like the Ubuntu phone. It would have the messaging menu, scopes, and other pieces, and select Android API calls could be mapped to the different parts of the Unity UI such as the messaging menu and online account support.

The project could even operate like how we build Ubuntu today. Every six months upstream Android would be synced into Launchpad, where a patchset would live on patches.ubuntu.com and be applied to the codebase (in much the same way we do with Debian today).

This would mean that Ubuntu would continue to be an Open Source project, based on a codebase easily supported by hardware manufacturers (thus easier to ship), it would run all Android apps without requiring a kludgy porting/translation layer running on Ubuntu, it would look and feel like an Ubuntu phone, it would still expose scopes as a first-class user interface, the Ubuntu SDK would still be the main ecosystem play, Ubuntu apps would still stand out as more elegant and engaging apps, and it would reduce the amount of engineering required (I assume).

Now, the question is how this would impact a single convergent Operating System across desktop, phone, tablet, and TV. If Unity is essentially a UI that runs on top of Android and exposes a set of services, the convergence story should work well too, after all…it is all Linux. It may need different desktop, phone, tablet, and TV kernels, but I think we would need different kernels anyway.

So where does this put Debian and Ubuntu packages? Well, good question. I don’t know. The other unknown of course would be the impact of such a move on our flavors and derivatives, but then again I suspect the march towards snappy is going to put us in a similar situation if flavors/derivatives choose to stick with the Debian packaging system.

Of course, I am saying all this as someone who really only understands a small part of the picture, but this just strikes me as a logical step forward. I know there has been a reluctance to support Android apps on Ubuntu as it devalues the Ubuntu app ecosystem and people would just use Android apps, but I honestly think some kind of middle-ground is needed to get into the game, otherwise I worry we won’t even make it to the subs bench no matter how awesome our technology is.

Just a thought, would love to hear what everyone thinks, including if what I am suggesting is total nonsense. :-)

Again, remember, this is just an idea I am throwing out for the fun of the discussion; I am not suggesting we actually do this.

by jono at June 19, 2015 04:20 AM

June 18, 2015

Eric Hammond

lambdash: AWS Lambda Shell Hack: New And Improved!

easier, simpler, faster, better

Seven months ago I published the lambdash AWS Lambda Shell Hack that lets you run shell commands to explore the environment in which AWS Lambda functions are executed.

I also posted samples of command output that show fascinating properties of the AWS Lambda runtime environment.

In the last seven months, Amazon has released new features and enhancements that have made a completely new version of lambdash possible, with many benefits including:

  • Ability to use AWS CloudFormation to create all needed resources including the AWS Lambda function and the IAM role.

  • Ability to create AWS Lambda functions by referencing a ZIP file in an S3 bucket.

  • Simpler IAM role structure.

  • Increased AWS Lambda function memory limit, with corresponding faster execution.

  • Ability to invoke an AWS Lambda function synchronously.

This last point means that we no longer need to put the shell command output into an S3 bucket and poll the bucket from the local host. Instead, we can simply return the shell command output directly to the client that invoked the AWS Lambda function.

The above changes have made the lambdash code much simpler, much easier to install, and much, much faster to execute and get results.

You can browse the source here:

https://github.com/alestic/lambdash

There are three easy steps to get lambdash working:

1. CloudFormation Stack

Option 1: Here are sample steps to create the lambdash AWS Lambda function and to use a local command to invoke the function and output the results of commands run inside of Lambda:

git clone git@github.com:alestic/lambdash.git
cd lambdash
./lambdash-install

The lambdash-install script runs the aws-cli command aws cloudformation create-stack passing in the template file to create the AWS Lambda function in a CloudFormation stack.

The above assumes that you have installed aws-cli and have appropriate credentials configured.

Option 2: You may use the AWS Console to create a lambdash CloudFormation stack by pressing this button:

Launch Stack

Accept all the defaults, confirm the IAM role creation (after reading the CloudFormation template and verifying that I am not doing anything malicious), and perhaps add a Tag to help identify the lambdash CloudFormation stack.

2. Environment Variable

Since the CloudFormation stack creates the AWS Lambda function with a unique name, you need to find out what this name is before you can invoke it with the lambdash command.

If you ran the lambdash-install command, it printed the export statement you should use.

If you used the AWS Console, click on the lambdash CloudFormation stack’s [Output] tab and copy the export command listed there.

It will look something like this, with your own unique 12-character suffix:

export LAMBDASH_FUNCTION=lambdash-function-ABC123EXAMPL

Run this in your current shell and, perhaps, add it to your $HOME/.bashrc or equivalent.

3. Local lambdash Program

The previous step installs the AWS Lambda function in the AWS environment. You also need a complementary local command that will invoke the function with your requested command line, then receive and print the stdout and stderr content.

This is the lambdash program, which is now a small Python script that uses boto3.

You can either use the lambdash program in the GitHub repo you cloned above, or download it directly:

sudo curl -so/usr/local/bin/lambdash \
  https://raw.githubusercontent.com/alestic/lambdash/master/lambdash
sudo chmod +x /usr/local/bin/lambdash

This Python program requires boto3, so install it using your favorite method. This worked for me:

sudo -H pip install boto3

Now you’re ready to run shell commands on AWS Lambda.

Usage

You can now execute shell commands in the AWS Lambda environment and see the output. This command shows us that Amazon has upgraded the AWS Lambda environment from Amazon Linux 2014.03 when it was launched, to 2015.03 today:

$ lambdash cat /etc/issue
Amazon Linux AMI release 2015.03
Kernel \r on an \m

Nodejs has been upgraded from v0.10.32 to v0.10.36:

$ lambdash node -v
v0.10.36

Here’s a command I use to occasionally check in on changes in Amazon’s awslambda nodejs framework that runs our Lambda functions:

mkdir awslambda-source
lambdash tar cvzf - -C /var/runtime/node_modules/awslambda . | 
  tar xzf - -C awslambda-source

For example, the most recent change was to “log only 256K of errorMessage into customer’s cloudwatch”. Good to know.
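
One way to spot such changes (a sketch, not necessarily the author’s exact workflow) is to keep the previous extraction around under another name and diff against it:

mv awslambda-source awslambda-source-old
mkdir awslambda-source
# ... re-run the lambdash tar extraction above into awslambda-source ...
diff -r awslambda-source-old awslambda-source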

Cleanup

Deleting the lambdash CloudFormation stack removes all resources including the AWS Lambda function and the IAM role. You can do this by running this command in the GitHub repo:

./lambdash-uninstall

Or, you can delete the lambdash CloudFormation stack in the AWS Console.

Original article and comments: https://alestic.com/2015/06/aws-lambda-shell-2/

June 18, 2015 11:00 AM

June 15, 2015

Jono Bacon

New Forbes Column: From Piracy to Prosperity

My new Forbes column is published.

This article covers how technology has impacted how creatives, artists, and journalists create, distribute, and engage around their work.

For it I sat down with Mike Shinoda, co-founder of Grammy Award-winning Linkin Park, as well as Ali Velshi, host on Al Jazeera and former CNN Senior Business Correspondent.

Go and read the article here.

After that you may want to see my previous article where I interviewed Chris Anderson, founder of 3DR and author of The Long Tail, where we discuss building the open drone revolution. Read that article here.

by jono at June 15, 2015 04:49 PM

June 14, 2015

Elizabeth Krumbach

Weekends, street cars and red pandas

I’m home for the entire month of June! Looking back through my travel schedule, the last month I didn’t get on a plane was March of 2013. The travel-loving part of me is a little sad about breaking my streak, but given that it’s June and I’ve already given 8 presentations in 5 countries across 3 continents, I’m due for this break from travel. It’s not a break from work though, I’ve had to really hunker down on some projects I’m working on at work now that I have solid chunks of time to concentrate, and some serious due dates for my book are looming. I’ve also been tired, which prompted an extensive pile of blood work that had some troubling results that I’m now working with a specialist to get to the bottom of. I’m continuing to run and improve my diet by eating more fresh, green things which have traditionally helped bump my energy level because I’m treating my body better, but lately they both just make me more tired. And ultimately, tired means some evenings I spend more time watching True Blood and The Good Wife than I should with all these book deadlines creeping up. Don’t tell my editor ;)

I’m also getting lots of kitty snuggles as I remain at home, and lots of opportunities to take cute kitty pictures.

I continue to take Saturdays off, which continues to be my primary burnout protection mechanism. I’ve continued to evolve what this day off means. While it was originally inspired by the Jewish tradition of Shabbat, and we practice Shabbat rituals in our home (candles, challah, etc) and I continue to avoid work, the definition of work is in flux for me. Early on, I’d still check “personal” email and social media, until I discovered that there’s no such thing, with my open source volunteer work, open source day job and personal life so intertwined. Recently there have also been some considerable stresses related to my volunteer open source work, which I want a break from on my day off. So currently I work hard to avoid checking email and social media, even though it’s still a struggle. It’s made me realize how much of a slave I’ve become to my phone. It beeps, I leap for it. Having a day off has caused me to create discipline around my relationship with my phone, so even on days when I’m working, I’m less inclined to prioritize phone beeps over the work I’m currently engaged in, leading to a greater ability to focus. Sorry to people who randomly text or direct message me on Twitter/Facebook expecting an immediate response, it will rarely happen.

So currently, my Saturdays often include either:

  • Attending Synagogue services with MJ and having a lunch out together
  • Going to some museum, movie or cultural event with MJ
  • Staying home and reading, writing, catching up with some online classes or working on hobby projects

I had played around with avoiding computers entirely on Saturdays, but on home days I realized I’d get bored too easily if I was reading all day, and sometimes I’m really not in the mood for my offline activities. When I get bored, I end up napping or watching TV instead, neither of which is rejuvenating or satisfying, and I end up just feeling sad about wasting the day. So my criteria has shifted from “not work” to including fun, enriching projects that I likely don’t have time or energy for on my other six “working” days. I have struggled with whether these hobbies should be on my to do list or not, since putting them on my list adds a level of structure that can lead to stress, but my coping habit for task organization makes leaving them off a challenging mental exercise. Writing here in my blog also requires a computer, and these days off give me ample time for ideas to settle so I can finally have some quiet time to get my thoughts in order and write without distraction. Though I do have to admit that buying a vintage mechanical typewriter has crossed my mind more than a few times. Which reminds me, have any recommendations? Aside from divorce lawyers and a bigger home in the event that I drive MJ crazy. I also watch videos associated with various electronics projects and online classes I’m taking for fun (Arduinos! History and anthropology!), so a computer or tablet is regularly involved there.

It’s still not perfect. My stress levels have been high this year and we’ve booked a weekend at a beautiful inn and spa in Sonoma next weekend to unplug away from the random tasks that come from spending our weekends at home. I’m counting down the hours.

Last weekend was a lot of fun though, even if I was still stressed. On Saturday we went on a Blackpool Boat Tram Tour along the F-line. I’ve been looking for an opportunity to ride on this “topless” street car for years, but the charters always conflicted with my travel schedule, until last weekend! MJ and I booked tickets and at 1:30PM on Saturday we were on our way down Market Street.

As the title of the tour suggests, these unusually styled street cars come from Blackpool, England, a seaside town known for its attractions, including Blackpool Pleasure Beach, which now has the first Wallace and Gromit theme park ride, Wallace & Gromit’s Thrill-O-Matic! They also have a tramway, where these cars came from, and California now has three of them – two functioning ones operated here in the city by MUNI and maintained by the Market Street Railway non-profit, which I’m a member of and which conducted this charter.

We met at 1:15 to pick up our tickets, browse through the little SF Railway Museum and capture some pre-travel photos of the boat tram.

Upon boarding, we took seats at the back of the street car. The tour was in two parts, half of it guided by a representative from Market Street Railway who gave some history of the transportation lines themselves as we glided up Market Street along the standard F-line until we reached Castro, where a slightly different route was taken to turn back onto Market.

At the turnaround near Castro, the guides swapped places and we got a representative from San Francisco City Guides, who typically does walking tours of the city. As a local enthusiast he was able to give us details about the major landmarks along Market and up the Embarcadero as we made our way to Pier 39. I knew most of what both guides told us, but there were a few bits of knowledge I was excited to learn. I was also reminded of the ~12 minute A Trip Down Market Street, 1906, which was filmed just days before the 1906 earthquake that destroyed many of the buildings seen in the film. Fascinating stuff.

At Pier 39 we had the opportunity to get out of the car and take some pictures around it, including the obligatory pictures of ourselves!

The trip lasted a couple hours, and with the open top of the car I managed to get a bit of sunburn on my face, oops!

More photos from the tram tour can be found here: https://www.flickr.com/photos/pleia2/sets/72157654163687542

Sunday morning I took advantage of the de-stressing qualities of a visit to the zoo.

I finally got to see all three of the red pandas. It had been some time since I’d seen their exhibit, and last time only one of them was there. It was fun to see all three of them together, two of them climbing the trees (pictured below) and the third walking around the ground of the enclosure. I’m kind of jealous of their epic tree houses.

Also got to swing by the sea lions Henry and Silent Knight, with Henry playing king of the rock in the middle of their pool.

More photos here: https://www.flickr.com/photos/pleia2/sets/72157654194707041

In other miscellaneous life things, MJ and I made it out to see Mad Max: Fury Road recently. It’s been several months since I’d been to a theater, and probably over a year since MJ and I had gone to a movie together, so it was a nice change of pace. Plus, it was a fun, mind-numbing movie that took my mind off my ever-growing task list. MJ and I have also been able to spend several nice dinners together, including indulging in a Brazilian steakhouse one evening and fondue another night. In spite of these things, with running, improved breakfasts and lunches, and mostly skipping desserts, I’ve dropped 5lbs in the past month, which is not rapid weight loss but is being done in a way that’s sustainable without completely eliminating the things I love (including my craft beer hobby). Hooray!

I’ve cut back on events, sadly turning down invitations to local panels and presentations in favor of staying home and working on my book during my off-work hours. I did host an Ubuntu Hour this week though.

Next week I’m planning on popping over to a nearby Ubuntu/Juju Mine and Mingle. I’ll also be heading down to the south end of the east bay for an EBLUG meeting, where they’ve graciously offered to host space, time and expertise for an evening of discussing work on some servers that Partimus is planning on deploying in some of the schools we work with. It will be great to meet up and chat with some of the volunteers who I’ve largely only worked with online thus far, and to block off some of my own time for the raw technical tasks that Partimus needs to focus on but that I’ve struggled to find time for.

I really am looking forward to that spa weekend, but for now I’m rounding out my relaxing Saturday and preparing for get-things-done Sunday!

by pleia2 at June 14, 2015 01:38 AM

June 08, 2015

Akkana Peck

Adventure Dental

[Adventure Dental] This sign, in Santa Fe, always makes me do a double-take.

Would you go to a dentist or eye doctor named "Adventure Dental"?

Personally, I prefer that my dental and vision visits are as un-adventurous as possible.

June 08, 2015 02:54 PM

June 03, 2015

Eric Hammond

Monitor an SNS Topic with AWS Lambda and Cronitor.io

get alerted when an expected event does NOT happen

Last week I announced the availability of a public SNS Topic that may be used to run AWS Lambda functions on a recurring schedule. To encourage folks to realize the implications of a free community service maintained by an individual, I named it the “Unreliable Town Clock”.

Even with this understanding, some folks in the AWS community have (again) placed their faith in me and are already starting to depend on the Unreliable Town Clock public SNS Topic to drive their own AWS Lambda functions and SQS queues, and I want to make sure this service is as reliable as I can reasonably make it.

Here are some of the steps I have taken to increase the reliability of the Unreliable Town Clock:

  1. Runs in a dedicated AWS account. This helps prevent human error and accidents when working on other projects.

  2. Uses restrictive IAM roles/policies and good security practices. EC2 security groups don’t allow any incoming connections, not even ssh. I destroyed the root AWS account password and there are no IAM users.

  3. An Auto Scaling group is used to trigger automatic instance re-launch if a running instance fails. In my tests, this takes a matter of minutes.

  4. Built reproducibly using a CloudFormation template. This means it’s easy to re-create in the event of a complete disaster, though it would still be bad if the SNS Topic disappeared, as clients would need to resubscribe.

  5. The SNS Topic itself is protected from deletion even if a delete request were somehow submitted for the CloudFormation stack.

  6. The SNS Topic is constantly monitored using AWS Lambda and Cronitor.io. The first delayed or missed chime will trigger alerts to a human and will keep alerting until corrected.

The rest of this article elaborates on this last point of protection.

Delayed/Missing SNS Message Monitoring and Alerting

Most monitoring and alerting services are designed to poll your resources and sound the alarm when the polled resource fails to respond, reports an error, or exceeds some threshold.

This works great for finding out when your web site is down or your server is unpingable. It doesn’t work so well for letting you know when your hourly cron job didn’t run, your ETL aborted mid-stream, or your expected daily email was not received.

And normal monitoring and alerting also can’t tell you when it’s been more than 15 minutes since the last message was published to your SNS Topic, which is exactly what I need to know in order to respond quickly to any failures of the Unreliable Town Clock that aren’t automatically handled by the AWS architecture.

Fortunately, this is exactly the type of monitoring and alerting that Cronitor is designed for. Here’s how I set it up:

  1. Sign up on Cronitor.io and create a new monitor (first monitor is free with email alerts). In my case, I selected “Notify me if [time since run exceeds] [16] [minutes]“.

  2. Create a simple AWS Lambda function that does an HTTP GET on your monitor’s run URL (e.g., https://cronitor.link/d3x0/run). See the sample code below.

  3. Subscribe the AWS Lambda function to the SNS Topic. See example instructions on the Unreliable Town Clock post.

Now, if the SNS Topic goes longer than 16 minutes between chimes, I get personally alerted so I can go investigate and whip the Unreliable Town Clock back into shape.

Here’s some simplified AWS Lambda code that demonstrates how easy it is to ping a Cronitor.io monitor. The code I am running is slightly more involved with extra logging and parameterization of my monitor URL outside of the code, but this would do the job if you plugged in your own monitor run URL.

var request = require('request');
exports.handler = function(event, context) {
    request('https://cronitor.link/d3x0/run',
            function(error, response, body) {
                context.done(error, body);
            }
    );
};
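
One packaging note: the request module is not part of the AWS Lambda nodejs runtime, so it has to be bundled into the function’s ZIP before uploading. A sketch, assuming the handler above is saved as index.js with the function’s handler set to index.handler (the ZIP name is arbitrary):

# bundle the handler and its npm dependency for upload
npm install request
zip -r cronitor-ping.zip index.js node_modules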

Disclaimer: I am not a nodejs expert. I just Google what I want to do and try Stack Overflow answers until it seems to work. Ideas for improvement welcomed.

I suspect that I should be able to do some similar monitoring and alerting with CloudWatch Metrics and CloudWatch Alarms, and I may eventually work this out, but I still like to have some monitoring managed by an external party who is taking responsibility to make sure their system is running and who will notify me when mine is not.

I rarely plug non-AWS services on this blog, but I love the simple design and powerful functionality of Cronitor.io and think the service fills an important need. In my brief time using the service, August and Shane have been incredibly helpful, generous, and responsive to suggestions for improvements.

If you become a paying customer, don’t let me stop you from suggesting that Cronitor support direct SNS Topic monitoring (eliminating the AWS Lambda step above) if you think that would be something you would use ;-)

Oh, and in case it wasn’t completely obvious, you can use the procedure described in this article to directly monitor the Unreliable Town Clock public SNS Topic yourself and get your own alerts if it ever misses a chime. Or, you can use it to monitor the reliability of the AWS Lambda function you subscribe to the SNS Topic, making sure that it completes successfully as often as it is supposed to.

Original article and comments: https://alestic.com/2015/06/aws-lambda-sns-cronitor/

June 03, 2015 10:26 AM

June 02, 2015

Akkana Peck

Piñon cones!

[Baby piñon cones] I've been having fun wandering the yard looking at piñon cones. We went all last summer without seeing cones on any of our trees, which seemed very mysterious ... though the book I found on piñon pines said they follow a three-year cycle. This year, nearly all of our trees have little yellow-green cones developing.

[piñon spikes with no cones] A few of the trees look like most of our piñons last year: long spikes but no cones developing on any of them. I don't know if it's a difference in the weather this year, or that three-year cycle I read about in the book. I also see on the web that there's a 2-7 year interval between good piñon crops, so clearly there are other factors.

It's going to be fun to see them develop, and to monitor them over the next several years. Maybe we'll actually get some piñon nuts eventually (or piñon jays to steal the nuts). I don't know if baby cones now means nuts later this summer, or not until next summer. Time to check that book out of the library again ...

June 02, 2015 09:20 PM

May 30, 2015

Elizabeth Krumbach

Tourist in Vancouver

While in Vancouver for the OpenStack Summit, I made some time to visit some of the sights as well. Unfortunately I wasn’t able to do as much as I’d have liked: when I arrived early on Sunday I was sick and had to take it easy, so I missed the Women of OpenStack boat tour and happy hour. Then after a stunning week of sunny weather, the Saturday afternoon following the summit brought rain. But I did get out on Saturday to explore some anyway.

First thing on Saturday morning I laced up my running shoes and took advantage of the beautiful path around the waterfront to go for a run. Of all the places away from home I’ve run, there’s been a common theme: water. From Perth to Miami, and even here at home in San Francisco, there’s something about running along the water that defies the exhaustion otherwise brought on by travel and inspires me to get out there. It was a great run, one of my longer ones in recent memory.

While on my run I got to see the sea planes one last time. The next time I visit Vancouver, taking one of them to Victoria will definitely be on my list. I knew I’d regret not taking time on Saturday to do it, and I totally do! Vancouver isn’t that far away, I’ll have my chance some other time.

I then packed up and checked out of my hotel in time to meet a couple colleagues for lunch, and then I was off to Stanley Park to visit the Vancouver Aquarium. I’ve been to a lot of aquariums, and this one is definitely in my top 5. They had a Sea Monsters Revealed exhibit that I visited first, very similar to the Bodies exhibits that show the insides of people, these ones showed the inside of sea animals. Gross and cool.

Fish, frogs, jellyfish, but the big draw for me is always the marine mammals. I continue to have mixed feelings about keeping large animals like belugas in captivity, but they were amazing to see. While I got a glimpse of one of the dolphins from an underwater tank, the above-ground section was closed due to the other recovering from surgery, which I later learned was sadly unsuccessful; she passed away the next day. Then of course there were the sea otters, oh the adorable sea otters! I also got to see the penguins get some food from one of their caretakers, after which they were quite lively, waddling around their habitat and going for swims.

Great visit, highly recommended. The rest of Stanley Park was beautiful too, I should have taken more pictures!

More photos from the aquarium here: https://www.flickr.com/photos/pleia2/sets/72157651049343264

I then headed back down to Gastown, the historic district of Vancouver, for some shopping and browsing. I picked up some lovely First Nations-made goodies as well as some maple coffee, which may be a tourist gimmick, but it is one of the few coffees I’ve grown accustomed to drinking black, and it’s tricky to find south of the border. Gastown is also where the really cool steam-powered clock lives. While not historic, it is very steampunk.

And with that, the skies opened up and it began raining. I had planned for this and wore my new raincoat, supplied as the gift to OpenStack attendees (nice thinking in Vancouver!). It was good to break it in with some nice Vancouver rain, but I did get a bit soggy where I wasn’t covered by the raincoat while walking back to the hotel. I then had a drink with a colleague who was also escaping the rain; we enjoyed chatting and I wrote some postcards before heading to the airport.

by pleia2 at May 30, 2015 05:52 PM

May 29, 2015

Akkana Peck

Command-line builds for Android using ant

I recently needed to update an old Android app that I hadn't touched in years. My Eclipse setup is way out of date, and I've been hearing about more and more projects switching to using command-line builds. I wanted to ditch my fiddly, difficult to install Eclipse setup and switch to something easier to use.

Some of the big open-source packages, like OsmAnd, have switched to gradle for their Java builds. So I tried to install gradle -- and on Debian, apt-get install gradle wanted to pull in a total of 153 packages! Maybe gradle wasn't the best option to pursue.

But there's another option for command-line android builds: ant. When I tried apt-get install ant, since I already have Java installed (I think the relevant package is openjdk-7-jdk), it installed without needing a single additional package. For a small program, that's clearly a better way to go!

Then I needed to create a build directory and move my project into it. That turned out to be fairly easy, too -- certainly compared to the hours I spent setting up an Eclipse environment. Here's how to set up your ant Android build:

First install the Android "Stand-alone SDK Tools" from Installing the Android SDK. This requires a fair amount of clicking around, accepting licenses, and waiting for a long download.

Now install an SDK or two. Use android sdk to install new SDK versions, and android list targets to see what versions you have installed.

Create a new directory for your project, cd into it, and then:

android create project --name YourProject --path . --target android-19 --package tld.yourdomain.YourProject --activity YourProject

Adjust the Android target for the version you want to use.

When this is done, type ant with no arguments to make sure the directory structure was created properly. If it doesn't print errors, that's a good sign.

Check that local.properties has sdk.dir set correctly. It should have picked that up from your environment.
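
A quick way to check (the path shown is just an example):

grep sdk.dir local.properties
# expect something like: sdk.dir=/home/you/android-sdk-linux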

There will be a stub source file in src/tld/yourdomain/YourProject.java. Edit it as needed, or, if you're transferring a project from another build system such as eclipse, copy the existing .java files to that directory.

If you have custom icons for your project, or other resources like layout or menu files, put them in the appropriate directories under res. The directory structure is the same as in eclipse, but unlike an eclipse build, you can edit the files at any time without the build mysteriously breaking.

Signing your app

Now you'll need a key to sign your app. Eclipse generates a debugging key automatically, but ant doesn't. It's better to use a real key anyway, since debugging keys expire and need to be regenerated periodically.

If you don't already have a key, generate one with:

keytool -genkey -v -keystore my-key.keystore -alias mykey -keyalg RSA -sigalg SHA1withRSA -keysize 2048 -validity 10000

It will ask you for a password; be sure to use one you won't forget (or record it somewhere). You can use any filename you want instead of my-key.keystore, and any alias you want instead of mykey.

Now create a file called ant.properties containing these two lines:

key.store=/path/to/my-key.keystore
key.alias=mykey

Some tutorials tell you to put this in build.properties, but that's outdated and no longer works.

If you forget your key alias, you can find out with this command and the password:

keytool -list -keystore /path/to/my-key.keystore

Optionally, you can also include your key's password:

key.store.password=xxxx
key.alias.password=xxxx

If you don't, you'll be prompted twice for the password (which echoes on the terminal, so be aware of that if anyone is bored enough to watch over your shoulder as you build packages. I guess build-signing keys aren't considered particularly high security). Of course, you should make sure not to include both the private keystore file and the password in any public code repository.

Building

Finally, you're ready to build!

ant release

If you get an error like:

AndroidManifest.xml:6: error: Error: No resource found that matches the given name (at 'icon' with value '@drawable/ic_launcher').

it's because older eclipse builds wanted icons named icon.png, while ant wants them named ic_launcher.png. You can fix this either by renaming your icons to res/drawable-hdpi/ic_launcher.png (and the same for res/drawable-ldpi and -mdpi), or by removing everything under bin (rm -rf bin/*) and then editing AndroidManifest.xml. If you don't clear bin before rebuilding, bin/AndroidManifest.xml will take precedence over the AndroidManifest.xml in the root, so you might have to edit both files.
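
In shell terms, the rename fix looks something like this; adjust the list to whichever densities your project actually has:

for density in hdpi mdpi ldpi; do
  mv res/drawable-$density/icon.png res/drawable-$density/ic_launcher.png
done
rm -rf bin/*    # clear stale copies so bin/AndroidManifest.xml can't shadow yours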

After ant release, your binary will be in bin/YourProject-release.apk. If you have an adb connection, you can (re)install it with: adb install -r bin/YourProject-release.apk

Done! So much easier than eclipse, and you can use any editor you want, and check your files into any version control system.

That just leaves the coding part. If only Java development were as easy as Python or C ...

May 29, 2015 02:52 AM

May 27, 2015

Jono Bacon

#ISupportCommunity

So the Ubuntu Community Council has asked Jonathan Riddell to step down as a leader in the Kubuntu community. The reasoning for this can be broadly summarized as “poor conduct”.

Some members of the community have concluded that this is something of a hatchet job from the Community Council, and that Jonathan’s insistence on getting answers to tough questions (e.g. licensing, donations) has resulted in the Community Council booting him out.

I don’t believe this is true.

Just because the Community Council has not provided an extensive docket of evidence behind their decision does not equate to wrong-doing. It does not equate to corruption or malpractice.

I do sympathize with the critics though. I spent nearly eight years pretty close to the politics of Ubuntu and when I read the Community Council’s decision I understood and agreed with it. For all of Jonathan’s tremendously positive contributions to Kubuntu, I do believe his conduct and approach has sadly had a negative impact on parts of our community too.

This has nothing to do with the questions he raised, it was the way he raised them, and the inference and accusations he made in raising such questions. We can’t have our leaders behaving like that: it sets a bad example.

As such, I understood the Community Council’s decision because I have seen these politics both up front and behind the scenes due to my close affiliation with Ubuntu and Canonical. For those people who haven’t been so close to the coalface though, this decision from the CC feels heavy handed, without due evidence, and emotive in response.

Thus, in conclusion, I don’t believe the CC have acted inappropriately in making this decision, but I do believe that their decision needs to be illustrated further. The decision needs to feel complete and authoritative, but until we see further material, we are not going to improve the situation if everyone assumes the Community Council is some shadowy cabal against Jonathan and Kubuntu.

We are a community. We have more in common than what differs between us. Let’s put the hyperbole to one side and have a conversation about how we resolve this. There is an opportunity for a great outcome here: for better understanding and further improvement, but the first step is everyone understanding the perspectives of the people with opposing viewpoints.

As such #ISupportCommunity; our wider Ubuntu and Kubuntu family. Let’s work together, not against each other.

by jono at May 27, 2015 05:25 PM

May 26, 2015

Eric Hammond

Schedule Recurring AWS Lambda Invocations With The Unreliable Town Clock (UTC)

public SNS Topic with a trigger event every quarter hour

Scheduled execution of AWS Lambda functions on an hourly/daily/etc. basis has been a frequently requested feature ever since Amazon introduced the service at AWS re:Invent 2014.

Until Amazon releases a reliable, premium cron feature for AWS Lambda, I’m offering a community-built alternative which may be useful for some non-critical applications.

The public SNS Topic ARNs are:

us-east-1:

arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

us-west-2:

arn:aws:sns:us-west-2:522480313337:unreliable-town-clock-topic-N4N94CWNOMTH

Background

Beyond its event-driven convenience, the primary attraction of AWS Lambda is eliminating the need to maintain infrastructure to run and scale code. The AWS Lambda function code is simply uploaded to AWS and Amazon takes care of providing systems to run on, keeping it available, scaling to meet demand, recovering from infrastructure failures, monitoring, logging, and more.

The available methods to trigger AWS Lambda functions already include some powerful and convenient events like S3 object creation, DynamoDB changes, Kinesis stream processing, and my favorite: the all-purpose SNS Topic subscription.

Even so, there is a glaring gap for code that needs to run at regular intervals: time-triggered, recurring, scheduled event support for AWS Lambda. Attempts to do this yourself generally end up with you having to maintain your own supporting infrastructure, when your original goal was to eliminate infrastructure worries.

Unreliable Town Clock (UTC)

The Unreliable Town Clock (UTC) is a new, free, public SNS Topic (Amazon Simple Notification Service) that broadcasts a “chime” message every quarter hour to all subscribers. It can send the chimes to AWS Lambda functions, SQS queues, and email addresses.

You can use the chime attributes to run your code every fifteen minutes, or only run your code once an hour (e.g., when minute == "00") or once a day (e.g., when hour == "00" and minute == "00") or any other series of intervals.

You can even subscribe a function you want to run only once, at a specific time in the future: have the function ignore all invocations until the target time has passed. When it is time, it can perform its job, then unsubscribe itself from the SNS Topic.
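To make the chime filtering concrete, here is a minimal sketch of such a handler (Python; the event structure is the standard SNS-to-Lambda format, and do_hourly_work is a hypothetical placeholder for your own logic):

import json

def lambda_handler(event, context):
    # SNS delivers the chime as a JSON string in the Message field
    message = json.loads(event['Records'][0]['Sns']['Message'])
    # Only act on chime messages (see "Chime message" below)
    if message.get('type') != 'chime':
        return
    # Only do real work once an hour, on the top-of-the-hour chime
    if message['minute'] == '00':
        do_hourly_work(message)

def do_hourly_work(message):
    # hypothetical placeholder: your application logic goes here
    print("Chime received at %s" % message['timestamp'])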

Connecting your code to the Unreliable Town Clock is fast and easy. No application process or account creation is required:

Example: AWS Lambda Function

These commands subscribe an AWS Lambda function to the Unreliable Town Clock:

# AWS Lambda function
lambda_function_name=YOURLAMBDAFUNCTION
lambda_function_region=us-east-1
account=YOURACCOUNTID
lambda_function_arn="arn:aws:lambda:$lambda_function_region:$account:function:$lambda_function_name"

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Allow the SNS Topic to invoke the AWS Lambda function
aws lambda add-permission \
  --function-name "$lambda_function_name"  \
  --action lambda:InvokeFunction \
  --principal sns.amazonaws.com \
  --source-arn "$sns_topic_arn" \
  --statement-id $(uuidgen)

# Subscribe the AWS Lambda function to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol lambda \
  --notification-endpoint "$lambda_function_arn"

Example: Email Address

These commands subscribe an email address to the Unreliable Town Clock (useful for getting the feel, testing, and debugging):

# Email address
email=YOUREMAIL@YOURDOMAIN

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Subscribe the email address to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol email \
  --notification-endpoint "$email"

Example: SQS Queue

These commands subscribe an SQS queue to the Unreliable Town Clock:

# SQS Queue
sqs_queue_name=YOURQUEUE
account=YOURACCOUNTID
sqs_queue_arn="arn:aws:sqs:us-east-1:$account:$sqs_queue_name"
sqs_queue_url="https://queue.amazonaws.com/$account/$sqs_queue_name"

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Allow the SNS Topic to post to the SQS queue
sqs_policy='{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "*" },
    "Action": "sqs:SendMessage",
    "Resource": "'$sqs_queue_arn'",
    "Condition": {
      "ArnEquals": {
        "aws:SourceArn": "'$sns_topic_arn'"
}}}]}'
sqs_policy_escaped=$(echo $sqs_policy | perl -pe 's/"/\\"/g')
aws sqs set-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attributes '{"Policy":"'"$sqs_policy_escaped"'"}'

# Subscribe the SQS queue to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol sqs \
  --notification-endpoint "$sqs_queue_arn"

Chime message

The chime message includes convenient attributes like the following:

{
  "type" : "chime",
  "timestamp": "2015-05-26 02:15 UTC",
  "year": "2015",
  "month": "05",
  "day": "26",
  "hour": "02",
  "minute": "15",
  "day_of_week": "Tue",
  "unique_id": "2d135bf9-31ba-4751-b46d-1db6a822ac88",
  "region": "us-east-1",
  "sns_topic_arn": "arn:aws:sns:...",
  "reference": "...",
  "support": "...",
  "disclaimer": "UNRELIABLE SERVICE {ACCURACY,CONSISTENCY,UPTIME,LONGEVITY}"
}

You should only run your code’s primary function when the message type == "chime".

Other values are reserved for other message types which may include things like service notifications or alerts. Those message types may have different attributes.

It might make sense to forward non-chime messages to a human (e.g., post to an SNS Topic where you have an email address subscribed).
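A sketch of that forwarding idea (Python with boto3; the forward_if_not_chime name and the alert topic ARN are hypothetical placeholders for a topic you own that has an email subscription):

import json
import boto3

sns = boto3.client('sns')
# hypothetical: an SNS Topic in your own account with a human subscribed
ALERT_TOPIC_ARN = 'arn:aws:sns:us-east-1:YOURACCOUNTID:YOURALERTTOPIC'

def forward_if_not_chime(message):
    # pass service notifications and alerts along to a human-monitored topic
    if message.get('type') != 'chime':
        sns.publish(
            TopicArn=ALERT_TOPIC_ARN,
            Subject='Unreliable Town Clock: non-chime message',
            Message=json.dumps(message, indent=2),
        )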

Regions

The Unreliable Town Clock is currently available in the following AWS Regions:

  • us-east-1
  • us-west-2

You may create AWS Lambda functions in any AWS account in any AWS region and subscribe them to these SNS Topics.

Problems in one region will not affect the Unreliable Town Clock functionality in the other region. You may subscribe to both topics for additional reliability. [There was an AWS SNS us-east-1 outage on 2015-07-31 that caused the Unreliable Town Clock in that region to not broadcast chimes for almost 3 hours.]

Cost

The Unreliable Town Clock is free for unlimited “lambda” and “sqs” subscriptions.

Yes. Unlimited. Amazon takes care of the scaling and does not charge for sending to these endpoints through SNS.

You may currently add “email” subscriptions, especially to test and see the message format, but if there are too many email subscribers, new subscriptions may be disabled, as it costs the sending account $0.70/year for each address at the current chime frequency.

You are naturally responsible for any charges that occur in your own accounts.

Running an AWS Lambda function four times an hour for a year results in about 35,000 invocations (4 × 24 × 365 = 35,040), which is negligible if not free, but you need to take care what your functions do and what resources they consume, as they are running in your AWS account.

Source

The source code for the infrastructure of the Unreliable Town Clock is available on GitHub

https://github.com/alestic/alestic-unreliable-town-clock

You are welcome to run your own copy, but note that the current code marks the SNS Topic as public so that anybody can subscribe.

Support

The following Google Group mailing list can be used for discussion, questions, enhancement requests, and alerts about problems.

http://groups.google.com/d/forum/unreliable-town-clock

If you plan to use the Unreliable Town Clock, you should subscribe to this mailing list so that you receive service notifications (e.g., if the public SNS Topic ARN is going to change).

Disclaimer

The Unreliable Town Clock service is intended but not guaranteed to be useful. As the name explicitly states, you should consider it unreliable and should not use it for anything you consider important.

Here are some, but not all, of the dimensions in which it is unreliable:

  • Accuracy: The times messages are sent may not be the true times they indicate. Messages may be delayed, get sent early, or be duplicated.

  • Uptime: Chime messages may be skipped for short or long periods of time.

  • Consistency: The formats or contents of the messages may change without warning.

  • Longevity: The service may disappear without warning at any time.

There is no big company behind this service, just a human being. I have experience building and supporting public services used by individuals, companies, and other organizations around the world, but I’m still just one fellow, and this is just an experimental service for the time being.

Comments

What are you thinking of using recurring AWS Lambda invocations for?

Any other features you would like to see?

[Update 2015-07-19: Ok to subscribe across AWS regions]

[Update 2015-07-31: Added second public SNS topic in us-west-2 after AWS SNS outage in us-east-1.]

Original article and comments: https://alestic.com/2015/05/aws-lambda-recurring-schedule/

May 26, 2015 09:01 AM

May 25, 2015

Eric Hammond

Debugging AWS Lambda Invocations With An Echo Function

As I create architectures that include AWS Lambda functions, I find there are situations where I just want to know that the AWS Lambda function is getting invoked and to review the exact event data structure that is being passed in to it.

I found that a simple “echo” function can be dropped in to copy the AWS Lambda event to the console log (CloudWatch Logs). It’s easy to review this output to make sure the function is getting invoked at the right times and with the right data.
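For illustration, a minimal echo function along these lines might look like the following sketch (Python; the actual code in the repo may differ):

from __future__ import print_function
import json

def lambda_handler(event, context):
    # copy the entire incoming event to the console log (CloudWatch Logs)
    print(json.dumps(event, indent=2))
    return event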

There are probably dozens of debug/echo AWS Lambda functions floating around out there, but for my own future reference, I have created a GitHub repo with a four line echo function that does the trick for me. I’ve included a couple scripts to install and uninstall the AWS Lambda function in an account, including the required IAM role and policies.

Here’s the repo for the lambda-echo AWS Lambda function:

https://github.com/alestic/lambda-echo

The README.md provides instructions on how to install and test.

Note: Once you install an AWS Lambda function, there is no reason to delete it if you think it might be useful in the future. It costs nothing to let Amazon store it for you and keep it available for when you want to run it again.

Amazon has indicated that they may prune AWS Lambda functions that go unused for long periods of time, but I haven’t seen this happen in practice yet.

Is there a standard file structure yet for a directory with AWS Lambda function source and the related IAM role/policies? Should I convert this to the format expected by Mitch Garnaat’s kappa perhaps?

Original article and comments: https://alestic.com/2015/05/aws-lambda-echo/

May 25, 2015 08:03 AM

May 24, 2015

Elizabeth Krumbach

Liberty OpenStack Summit days 3-5

Summiting continued! The final three days of the conference offered two days of OpenStack Design Summit discussions and working sessions on specific topics, and Friday was spent doing a contributors meetup so we could have face time with people we’re working with on projects.

Wednesday began with a team breakfast, where over 30 of us descended upon a breakfast restaurant and had a lively morning. Unfortunately it ran a bit long and made us a bit late for the beginning of summit stuff, but the next Infrastructure work session was fully attended! The session sought to take some next steps with our activity tracking mechanisms, none of which are currently part of the OpenStack Infrastructure. There are several different types of stats being collected today: reviewstats, which is hosted by a community member and focuses specifically on reviews; those produced by Bitergia (here), which are somewhat generic but help compare OpenStack to other open source projects; and Stackalytics, which is crafted specifically for the OpenStack community. There seems to be value in hosting various metric types, mostly so comparisons can be made across platforms if they differ in any way. The consensus of the session was to first move forward with moving Stackalytics into our infrastructure, since so many projects find such value in it. Etherpad here: YVR-infra-activity-tracking


With this view from the work session room, it’s amazing we got anything done

Next up was QA: Testing Beyond the Gate. In OpenStack there is a test gate that all changes must pass in order to be merged. In the past cycle, periodic and post-merge tests have also been added, but it’s been found that if merging code isn’t dependent upon these passing, not many people pay attention to the additional tests. The result of the session is a proposed dashboard for tracking these tests so that there’s an easier view into what they’re doing and whether they’re failing, to empower developers to fix them up. Tracking of third-party testing in this, or a similar, tracker was also discussed as a proposal for once the infra-run tests are being accounted for. Etherpad here: YVR-QA-testing-beyond-the-gate

The QA: DevStack Roadmap session covered some of the general cleanup that typically needs to be done in DevStack, but then also went into some of the broader action items, including improving the reliability of the currently non-voting CentOS tests run against it, pulling some things out of DevStack to support them as plugins as we move into a Big Tent world, and working out how to move forward with Grenade. Etherpad here: YVR-QA-Devstack-Roadmap

I then attended QA: QA in the Big Tent. In the past cycle, OpenStack dropped the long process for being accepted as an official OpenStack project and streamlined it so that competing technologies are now all in the mix; we’re calling it the Big Tent, as we’re now including everyone. This session focused on how to support the QA needs now that OpenStack is not just a slim core of a few projects. The general idea from a QA perspective is that they can continue to support the things-everyone-uses (nova, neutron, glance… an organically evolving list) and improve pluggable support for projects beyond that so they can help themselves to the QA tools at their disposal. Etherpad here: YVR-QA-in-the-big-tent

With sessions behind me, I boarded a bus for the Core Reviewer Party, hosted at the Museum of Anthropology at UBC. As party venues go, this was a great one. The museum was open for us to explore, and they also offered tours. The main event took place outside, where they served design-your-own curry seafood dishes, bison, cheeses and salmon. Of course no OpenStack event would be complete without a few bars around serving various wines and beer. There was an adjacent small building where live music was playing, and there was a lot of space to walk around, catch the sunset and enjoy some gardens. I spent much of my early evening with friends from Time Warner Cable, and rounded things off with several of my buddies from HP. This ended up being a get-back-after-midnight event for me, but it was totally worth it to spend such a great time with everyone.

Thursday morning kicked off with a series of fishbowl sessions where the Infrastructure team discussed projects we have in the works. First up was Infrastructure: Zuul v3. Zuul is our pipeline-oriented project gating system, which currently works by facilitating the running of tests and automated tasks in response to Gerrit events. Right now it sends jobs off to Gearman for launching via Jenkins to our fleet of waiting nodes, but we’re really using Jenkins as a shim here, not taking advantage of the built-in features that Jenkins offers. We’re also in need of a system that better supports multi-tenancy and multi-node jobs and which can scale as OpenStack continues to grow, particularly with the Big Tent. This session discussed the end game of phasing out Jenkins in favor of a more Zuul-driven workflow, and more immediate changes that may be made to Nodepool and smaller projects like Zuul-merger to drive our vision. Etherpad here: YVR-infra-zuulv3

Everyone loves bug reporting and task tracking, right? In the next session, Infrastructure: Task tracking, that was our topic. We did an experiment with the creation of Storyboard as our homebrewed solution to bug and task tracking, but in spite of valiant efforts by the small team working on it, they were unable to gain more contributors and the job was simply too big for the size of the team doing the work. As a result, we’re now back to looking at solutions other than Canonical’s hosted Launchpad (which is currently used). The session went through some basic evaluation of a few tools, and at the end there was some consensus to work toward bringing up a more battle-hardened and Puppetized instance of Maniphest (from Phabricator) so that teams can see if it fits their needs. Etherpad here: YVR-infra-task-tracking

The morning continued with an Infrastructure: Infra-cloud session. The Infrastructure team has about 150 machines in a datacenter that have been donated to us by HP. The session focused on how we can put these to use as Nodepool instances by running OpenStack on our own and adding that “infra-cloud” to the providers in Nodepool. I’m particularly interested in this, given some of my history with getting TripleO into testing (so have deployed OpenStack many, many times!) and in general eager to learn even more about production OpenStack deployments. So it looks like I’ll be providing Infra-brains to Clint Byrum who is otherwise taking a lead here. To keep in sync with other things we host, we’ll be using Puppet to deploy OpenStack, so I’m thankful for the expertise of people like Colleen Murphy who just joined our team to help with that. Etherpad here: YVR-infra-cloud

Next up was the Infrastructure: Puppet testing session. It was great to have some of the OpenStack Puppet folks in the room so they could talk some about how they’re using beaker-rspec in our infra for testing the OpenStack modules themselves. Much of the discussion centered around whether we want to follow their lead, or do something else, leveraging our current system of node allocation to do our own module testing. We also have a much commented on spec up for proposal here. The result of the discussion was that it’s likely that we’ll just follow the lead of the OpenStack Puppet team. Etherpad here: kilo-infra-puppet-testing

That afternoon we had another Infrastructure: Work session, where we focused on the refactor of portions of the system-config OpenStack module puppet scripts, and some folks worked on standing up the testing infrastructure that was talked about earlier. I took the opportunity to do some reviews of the related patches and help a new contributor do some review – she even submitted a patch that was merged the next morning! Etherpad for the work session here: YVR-infra-puppet-openstackci

The last session I attended that day was QA: Liberty Priorities. It wasn’t one I strictly needed to be in, but I hadn’t attended a session in room 306 yet, and it was the famous gosling room! The room had a glass wall that looked out onto a roof where a couple of geese had their babies, and the geese would routinely walk by and interrupt the session because everyone would stop, coo and take pictures of them. So I finally got to see the babies! The actual session collected the pile of to-do list items generated at the summit and prioritized them, which I got roped into helping with. Oh, and they gave me a task to help with. I just wanted to see the geese! Etherpad with the priorities is here: YVR-QA-Liberty-Priorities


Photo by Thierry Carrez (source)

Thursday night I ended up having dinner with the moderator of our women of OpenStack panel, Beth Cohen. We went down to Gastown to enjoy a dinner of oysters and seafood and had a wonderful time. It was great to swap tech (and women in tech) stories and chat about our work.

Friday! The OpenStack conference itself ended on Thursday, so it was just ATCs (Active Technical Contributors) attending for the final day of the Design Summit. Things were much quieter and the agenda was full of contributors meetups. I spent the day in the Infrastructure, QA and Release management contributors meetup. We had a long list of things to work on, but I focused on the election tooling, which I ended up following up on on the mailing list, and later had a chat with the author of the proposed tooling. My afternoon was spent working on the translations infrastructure with Steve Kowalik, who works with me on OpenStack infra, and Carlos Munoz of the Zanata team. We were able to work through the outstanding Zanata bugs and make some progress on how we’re going to tackle everything. It was a productive afternoon, and it’s always a pleasure to get together with the folks I work with online every day.

That evening, as we left the closing conference center, I met up with several colleagues for an amazing sushi dinner in downtown Vancouver. A perfect, low-key ending to an amazing event!

by pleia2 at May 24, 2015 02:15 AM

May 21, 2015

Elizabeth Krumbach

Liberty OpenStack Summit day 2

My second day of the OpenStack summit came early, with the Women of OpenStack working breakfast at 7AM. It kicked off with a series of lightning talks that covered impostor syndrome, growing as a technical leader (get yourself out there, ask questions) and suggestions from a tech start-up founder about being an entrepreneur. From there we broke up into groups to discuss what we’d like to see from the Women of OpenStack group in the next year. The big take-aways were around mentoring women who are new to our community and starting to get involved with all the OpenStack tooling, and more generally giving voice to the women in our community.

Keynotes kicked off at 9AM with Mark Collier announcing the next OpenStack Summit venues: Austin for the spring 2016 summit and Barcelona for the fall 2016 summit. He then went into a series of chats and demos related to using containers, which may be the Next Big Thing in cloud computing. During the session we heard from a few companies who are already using OpenStack with containers (mostly Docker and Kubernetes) in production (video). The keynotes continued with one by Intel, where the speaker took time to talk about how valuable feedback from operators has been in the past year, and appreciation for the new diversity working group (video). The keynote from EBay/Paypal showed off the really amazing progress they’ve made with deploying OpenStack, which now runs on over 300k cores and pretty much powers Paypal at this point (video). Red Hat’s keynote focused on customer engagement as OpenStack matures (video). The keynotes wrapped up with one from NASA JPL, which mostly talked about the awesome Mars projects they’re working on and the massive data requirements therein (video).


OpenStack at EBay/Paypal

Following keynotes, Tuesday really kicked off the core OpenStack Design Summit sessions, where I focused on a series of Cross Project Workshops. First up was Moving our applications to Python 3. This session focused on the migration of Python 3 for functional and integration testing in OpenStack projects now that Oslo libraries are working in Python 3. The session mostly centered around strategy, how to incrementally move projects over and the requirements for the move (2.x dependencies, changes to Ubuntu required to effectively use Python 3.4 for gating, etc). Etherpad here: liberty-cross-project-python3. I then attended Functional Testing Show & Tell which was a great session where projects shared their stories about how they do functional (and some unit) testing in their projects. The Etherpad for this one is super valuable for seeing what everyone reports, it’s available here: liberty-functional-testing-show-tell.

My Design Summit sessions were broken up nicely with a lunch with my fellow panelists, and then the Standing Tall in the Room – Sponsored by the Women of OpenStack panel itself at 2PM (video). It was wonderful to finally meet my fellow panelists in person; the session itself was well-attended and we got a lot of positive feedback from it. I tackled a question about shyness with regard to giving presentations here at the OpenStack Summit, where I pointed at a webinar about submitting a proposal that the Women of OpenStack published in January. I also talked about difficulties related to the first time you write to the development mailing list, participate on IRC and submit code for review. I used an example of having to submit 28 revisions of one of my early patches, and audience member Steve Martinelli helpfully tweeted about a 63-revision change. Diving in to all these things helps, as does supporting the ideas of and doing code review for others in your community. Of course my fellow panelists had great things to say too, watch the video!


Thanks to Lisa-Marie Namphy for the photo!

Panel selfie by Rainya Mosher

Following the panel, it was back to the Design Summit. The In-team scaling session was an interesting one with regard to metrics. We’ve learned that regardless of project size, socially within OpenStack it seems difficult for any project to rise above 14 core reviewers while keeping enough common culture, focus and quality. The solutions presented during the session tended to be heavy on technology (changes to ACLs, splitting up the repo among trusted sub-groups). It’ll be interesting to see how the scaling actually pans out, as there seem to be many more social and leadership solutions to the problem of patches piling up and not having enough core folks to review them. There was also some discussion about the specs process, but the problems and solutions vary heavily between teams, so it seemed unlikely that a single solution to unprocessed specs would work for everyone, though the process does often seem valuable for certain things. Etherpad here: liberty-cross-project-in-team-scaling.

My last session of the day was OpenStack release model(s). Changing the time-based release model would require broader participation than a single session, so much of the discussion centered on the ability for projects to independently do intermediary releases outside of the release cycle and how that could be supported, but I think the jury is still out on a solution there. There was also talk about how to handle release tracking generally, as it’s difficult to predict what will land, so much so that people have stopped relying on the predictions, and that bled into a discussion about release content reporting (release changelogs). In all, an interesting session with some good ideas about how to move forward. Etherpad here: liberty-cross-project-release-models.

I spent the evening with friends and colleagues at the HP+Scality hosted party at Rocky Mountaineer Station. BBQ, food trucks and getting to see non-Americans/non-Canadians try s’mores for the first time, all kinds of fun! Fortunately I managed to make it back to my hotel at a reasonable hour.

by pleia2 at May 21, 2015 10:03 PM

May 20, 2015

Elizabeth Krumbach

Liberty OpenStack Summit day 1

This week I’m at the OpenStack Summit. It’s the most wonderful, exhausting and valuable-to-my-job event I go to, and it happens twice a year. This time it’s being held in the beautiful city of Vancouver, BC, and the conference venue is right on the water, so we get to enjoy astonishing views throughout the day.


OpenStack Summit: Clouds inside and outside!

Jonathan Bryce, Executive Director of the OpenStack Foundation, kicked off the event with an introduction to the summit, the success OpenStack has built in the Process, Store and Move digital economy, and some announcements, among which was the success found with federated identity support in Keystone, where Morgan Fainberg, PTL of Keystone, helped show off a demonstration. The first company keynote was presented by Digitalfilm Tree, who did a really fun live demo of shooting video at the summit here in Vancouver, using their OpenStack-powered cloud so it was accessible in Los Angeles for editorial review, and then retrieving and playing the resulting video. They shared that a recent show shot in Vancouver used this very process for the daily editing, and that they had previously used courier services and staff-hopping-on-planes to do the physical moving of digital content because it was too much for their previous systems. Finally, Comcast employees rolled onto the stage on a couch to chat about how they’ve expanded their use of OpenStack since presenting at the summit in Portland, Oregon. Video of all of this is available here.

Next up for keynotes was Walmart, who talked about how they moved to OpenStack and used it for all the load their sites experienced over the 2014 holiday season, and how OpenStack has met their needs, video here. Then came HP’s keynote, which really focused on the community and the choices available in OpenStack, where speaker Mark Interrante said “OpenStack should be simpler, you shouldn’t need a PhD to run it.” Bravo! He also pointed out that HP’s booth had a demonstration of OpenStack running on various hardware, an impressively inclusive step for a company that also sells hardware. Video for HP’s keynote here (I dig the Star Wars reference). Keynotes continued with one from TD Bank, which I became familiar with when they bought up the Commerce branches in the Philadelphia region, but have since learned is a major Canadian bank (oooh, TD stands for Toronto Dominion!). The most fascinating thing for me about their move to the cloud is how they’ve imposed a cloud-first policy across their infrastructure, where teams must have a really good reason and approval in order to do more traditional bare-metal, one-off deployments for their applications, so it’s rare, video. Cybera was the next keynote and perhaps the most inspiring from a humanitarian standpoint. One of the earliest OpenStack adopters, Cybera is a non-profit that seeks to improve access to the internet and the valuable resources therein, which presenter Robin Winsor stressed in his keynote is as important now as the physical infrastructure (railroads, highways, etc.) that was built in North America in the 19th and 20th centuries, video here. The final keynote was from Solidfire, who discussed the importance of solid storage as a basis of a successful deployment, video here.

Following the keynotes, I headed over to the Virtual Networking in OpenStack: Neutron 101 (video) where Kyle Mestery and Mark McClain gave a great overview of how Neutron works with various diagrams showing of the agents and improvements made in Kilo with various new drivers and plugins. The video is well worth the watch.

A chunk of my day was then reserved for translations. My role here is as the Infrastructure team contact for the translations tooling, so it’s also been a crash course in learning about translations workflows, since I only speak English. Each session, even those unrelated to the actual infrastructure-focused tooling, has been valuable learning. In the first translation team working session the focus was translations glossaries, which are used to help give context/meaning to certain English words where the meaning can be unclear or otherwise needs to be defined in terms of the project. There was representation from the Documentation team, which was valuable as they maintain a docs-focused glossary (here) which is better maintained and has a bigger team behind it than the proposed separate translations glossary would have. Interesting discussion, particularly as my knowledge of translations glossaries was limited. Etherpad here: Vancouver-I18n-WG-session.

I hosted the afternoon session on Building Translation Platform. We’re migrating the team to Zanata and have been fortunate to have Carlos Munoz, one of the developers on Zanata, join us at every summit since Atlanta. They’ve been one of the most supportive upstreams I’ve ever worked with, prioritizing our bug reports and really working with us to make sure our adoption is a success. The session itself reviewed the progress of our migration and set some deadlines for having translators begin the testing/feedback cycle. We also talked about hosting a Horizon instance in infra, refreshed daily, so that translators can actually see where translations are most needed via the UI and prioritize appropriately. Finally, it was a great opportunity to get feedback from translators about what they need from the new workflow, and to have Carlos there to answer questions and help prioritize bugs. Etherpad here: Vancouver-I18n-Translation-platform-session.

My last translations-related thing of the day was Here be dragons – Translating OpenStack (slides). This was a great talk by Łukasz Jernaś that began with some benefits of translations work and then went into best practices and tips for working with open source translations and OpenStack specifically. It was another valuable session for me as the tooling contact because it gave me insight into some of the pain points and how appropriate it would be to address these with tooling vs. social changes to translations workflows.

From there I went back to general talks, attending Building Clouds with OpenStack Puppet Modules by Emilien Macchi, Mike Dorman and Matt Fischer (video). The OpenStack Infrastructure team is looking at building our own infra-cloud (we have a session on it later this week) and the workflows and tips that this presentation gave would also be helpful to me in other work I’ve been focusing on.

The final session I wandered into was a series of Lightning Talks, put together by HP. They had a great lineup of speakers from various companies and organizations. My evening was then spent at an HP employee gathering, but given my energy level and planned attendance at the Women of OpenStack breakfast at 7AM the following morning I headed back to my hotel around 9PM.

by pleia2 at May 20, 2015 11:26 PM

May 18, 2015

Eric Hammond

Alestic.com Site Redesign

The Alestic.com web site has been redesigned. The old design was going on 8 years old. The new design is:

Ok, so I still have a little improvement remaining in the fast dimension, but at least the site is static now and served through a CDN.

Since fellow geeks care, here are the technologies currently employed:

Simple, efficient, and gets the job done.

The old site is available at http://old.alestic.com for a while.

Questions, comments, and suggestions in the comments below.

Original article and comments: https://alestic.com/2015/05/blog-redesign/

May 18, 2015 07:10 AM

May 16, 2015

Elizabeth Krumbach

Xubuntu sweatshirt, Wily, & Debian Jessie Release

People like shirts, stickers and goodies to show support for their favorite operating system, and though the Xubuntu project has been slower than our friends over at Kubuntu at offering them, we now have a decent line-up offered by companies we’re friendly with. Several months ago the Xubuntu team was contacted by Gabor Kum of HELLOTUX to see if we’d be interested in offering shirts through their site. We were indeed interested! So after he graciously sent our project lead a polo shirt to evaluate, we agreed to start offering his products on our site, alongside the others. See all products here.

Polos aren’t really my thing, so when the Xubuntu shirts went live I ordered the Xubuntu sweater. Now a language difference may be in play here, since I’d call it a sweatshirt with a zipper, or a light jacket, or a hoodie without a hood. But it’s a great shirt, I’ve been wearing it regularly since I got it in my often-chilly city of San Francisco. It fits wonderfully and the embroidery is top notch.

Xubuntu sweatshirt
Close-up of HELLOTUX Xubuntu embroidery

In other Ubuntu things, given my travel schedule Peter Ganthavorn has started hosting some of the San Francisco Ubuntu Hours. He hosted one last month that I wasn’t available for, and then another this week which I did attend. Wearing my trusty new Xubuntu sweatshirt, I also brought along my Wily Werewolf to his first Ubuntu Hour! I picked up this fluffy-yet-fearsome werewolf from Squishable.com, which is also where I found my Natty Narwhal.

When we wrapped up the Ubuntu Hour, we headed down the street to our favorite Chinese place for Linux meetings, where I was hosting a Bay Area Debian Meeting and Jessie Release Party! I was pretty excited about this: since the Toy Story character Jessie is a popular one, I jumped at the opportunity to pick up some party supplies to mark the occasion, and ended up with a collection of party hats and notepads:

There were a total of 5 of us there, long time BAD member Michael Paoli being particularly generous with his support of my ridiculous hats:

We had a fun time, welcoming a couple of new folks to our meeting as well. A few more photos from the evening here: https://www.flickr.com/photos/pleia2/sets/72157650542082473

Now I just need to actually upgrade my servers to Jessie!

by pleia2 at May 16, 2015 03:09 AM

May 15, 2015

Akkana Peck

Of file modes, umasks and fmasks, and mounting FAT devices

I have a bunch of devices that use VFAT filesystems. MP3 players, camera SD cards, SD cards in my Android tablet. I mount them through /etc/fstab, and the files always look executable, so when I ls -F them, they all have asterisks after their names. I don't generally execute files on these devices; I'd prefer the files to have a mode that doesn't make them look executable.

I'd like the files to be mode 644 (or 0644 in most programming languages, since it's an octal, or base 8, number). 644 in binary is 110 100 100, or as the Unix ls command puts it, rw-r--r--.

There's a directive, fmask, that you can put in fstab entries to control the mode of files when the device is mounted. (Here's Wikipedia's long umask article.) But how do you get from the mode you want the files to be, 644, to the mask?

The mask (which corresponds to the umask command) represents the bits you don't want to have set. So, for instance, if you don't want the world-execute bit (1) set, you'd put 1 in the mask. If you don't want the world-write bit (2) set, as you likely don't, put 2 in the mask. So that's already a clue that I'm going to want the rightmost octal digit to be 3: I don't want files mounted from my MP3 player to be either world-writable or executable.

But I also don't want to have to puzzle out the details of all nine bits every time I set an fmask. Isn't there some way I can take the mode I want the files to be -- 644 -- and turn them into the mask I'd need to put in /etc/fstab or set as a umask?

Fortunately, there is. It seemed like it ought to be straightforward, but it took a little fiddling to get it into a one-line command I can type. I made it a shell function in my .zshrc:

# What's the complement of a number, e.g. the fmask in fstab to get
# a given file mode for vfat files? Sample usage: invertmask 755
invertmask() {
    python -c "print '0%o' % (~(0777 & 0$1) & 0777)"
}

This takes whatever argument I give it -- $1 -- and keeps only the three rightmost octal digits of it, (0777 & 0$1). It takes the bitwise NOT of that, ~. But the result of that is a negative number, and we only want the three rightmost octal digits of the result, (result) & 0777, expressed as an octal number -- which we can do in python by printing it with %o. Whew!

Here's a shorter, cleaner looking alias that does the same thing, though it's not as clear about what it's doing:

invertmask1() {
    python -c "print '0%o' % (0777 - 0$1)"
}
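One caveat: both functions above assume python is Python 2 (print as a statement, 0777-style octal literals). On a system where python is Python 3, a sketch of an equivalent would be:

# Python 3 version: octal literals need the 0o prefix and print() is a function
invertmask3() {
    python3 -c "import sys; print('0%o' % (~int(sys.argv[1], 8) & 0o777))" "$1"
}

For example, invertmask3 644 prints 0133, which matches the fmask in the fstab line below.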

So now, for my MP3 player I can put this in /etc/fstab:

UUID=0000-009E /mp3 vfat user,noauto,exec,fmask=133,shortname=lower 0 0

May 15, 2015 04:27 PM

May 11, 2015

Elizabeth Krumbach

OpenStack events, anniversary & organization, a museum and some computers & cats

I’ve been home for just over 3 weeks. I thought things would be quieter event-wise, but I have attended 2 OpenStack meetups since getting home, the first right after getting off the plane from South Carolina. My colleague and Keystone PTL Morgan Fainberg was giving a presentation on Keystone, and I had the rare opportunity to finally meet a scholarship winner who I’ve been mentoring at work. It was great to meet up and see some of the folks who I only see at conferences, including other colleagues from HP. Plus, Morgan’s presentation on Keystone was great and the audience had a lot of good questions. Video of the presentation is here and slides are available here.


With my Helion mentee!

This past week I went to the second meetup, this time over at Walmart Labs, just a quick walk from the Sunnyvale Caltrain station. For this meetup I was on a mainstage panel where discussions covered improvements to OpenStack in the Kilo release (including the continued rise of third party testing, which I was able to speak to), the new Big Tent approach to OpenStack project adoption and how baremetal is starting to change the OpenStack landscape. I was also able to meet some of the really smart people working at Walmart Labs, and learned that all of walmart.com is running on top of OpenStack (this article from March talks about it and they’ll be doing a session on it at the upcoming OpenStack Summit in Vancouver).


Panel at Walmart Labs

In other professional news, the work I did in Oman earlier this year continues to bear fruit. On April 20th issue #313 of the Sultan Qaboos University Horizon newsletter was published with my interview, (8M PDF here). They were kind enough to send me a few paper copies which I received on Friday. The interview touched upon key points that I spoke on during my presentation back in February, focusing on personal and business reasons for open source contributions.

Personally, MJ and I celebrated our second wedding anniversary with a fantastic meal at Murray Circle Restaurant where we sat on the porch and enjoyed our dinner with a nighttime view of the Golden Gate Bridge. We also recently agreed to start a diet together, largely going back to our pre-wedding diet that we both managed to lose a lot of weight on. Health-wise I continue to go out running, but running isn’t enough to help me to lose weight. I’m largely replacing starches with vegetables and reducing the sugar in my diet. Finally, we’ve been hacking our way through a massive joint to do list that’s been haunting us for several months now. Most of the tasks are home-based, from things like painting we need to get done to storage clean-outs. I don’t love that we have so much to do (don’t other adults get to have fun on weekends?), but finally having it organized and a plan for tackling it has reduced my stress incredibly.


2nd anniversary dinner

We do actually get to have fun on weekends, Saturday at least. We’ve continued to take Saturdays off together to attend services, have a nice lunch together and spend some time relaxing, whether that’s catching up on some shows together or visiting a local museum. Last weekend we had the opportunity of finally going to the Cable Car Museum here in San Francisco. Given my love for all things rail, it’s astonishing that I never made it up there before. The core of the museum is the above-ground, in-building housing for the four cables that run the three cable car lines, and then exhibits are built around it. It’s a fantastic little museum, and entrance is free.

I also picked up some beautifully 3d printed cable car earrings and matching necklace produced by Freeform Ind. I loved their stuff so much that I found their shop online and picked up some other local landmark jewelry.

More photos from our trip to the Cable Car Museum are available here: https://www.flickr.com/photos/pleia2/sets/72157652325687332

We’ve had some computer fun lately. MJ has finally ordered a replacement 1U server for the old one that he has co-located in Fremont. Burn-in testing happened this weekend but there are some more harddrive-related pieces that we’re still waiting on to get it finished up. We’re aiming to get it installed at the datacenter in June. I also replaced the old Pentium 4 that I’ve been using as a monitoring server and backups machine. It was getting quite old and unusable as a second desktop, even when restricted to following social media accounts and watching videos here and there. It’s now been replaced with a refurbished HP DC6200 from 2011, which has an i3 processor, and I bumped it up to 8G of RAM that I had lying around from when I maxed out my primary desktop at 16G. So far so good, I moved over the harddrive from the old machine and it’s been running great.


HP DC6200

In the time between work and other things, I’ve been watching The Good Wife on my own and Star Trek: Voyager with MJ. Also, hanging out with my darling kitties. One evening I got this epic picture of Caligula:

This week I’m hosting an Ubuntu Hour and Debian Dinner where we’ll celebrate the release of Debian 8 “Jessie”. I’ve purchased Jessie (cowgirl from Toy Story 2 and 3) party hats to mark the occasion. At the break of dawn on Sunday I’ll be boarding a plane to go to the OpenStack Summit in Vancouver. I’ve never been to Vancouver, so I’m spending Sunday there and staying until late on the following Saturday night, so I hope to have time to see some of the city. After this trip, I’m staying home until July! Thank goodness, I can definitely use the down time to work on my book.

by pleia2 at May 11, 2015 03:07 AM

May 06, 2015

Akkana Peck

Tips for passing Google's "Mobile Friendly" tests

I saw on Slashdot that Google is going to start down-rating sites that don't meet its criteria of "mobile-friendly": Are you ready for Google's 'Mobilegeddon' on Tuesday?. And from the Slashdot discussion, it was pretty clear that Google's definition included some arbitrary hoops to jump through.

So I headed over to Google's Mobile-friendly test to check out some of my pages.

Now, most of my website seemed to me like it ought to be pretty mobile friendly. It's size agnostic: I don't specify any arbitrary page widths in pixels, so most of my pages can resize down as far as necessary (I was under the impression that was what "responsive design" meant for websites, though I've been doing it for many years and it seems now that "responsive design" includes a whole lot of phone-specific tweaks and elaborate CSS for moving things around based on size.) I also don't set font sizes that might make the page less accessible to someone with vision problems -- or to someone on a small screen with high pixel density. So I was pretty confident.

[Google's mobile-friendly test page] I shouldn't have been. Basically all of my pages failed. And in chasing down some of the problems I've learned a bit about Google's mobile rules, as well as about some weird quirks in how current mobile browsers render websites.

Basically, all of my pages failed with the same three errors:

  • Text too small to read
  • Links too close together
  • Mobile viewport not set

What? I wasn't specifying text size at all -- if the text is too small to read with the default font, surely that's a bug in the mobile browser, not a bug in my website. Same with links too close together, when I'm using the browser's default line spacing.

But it turned out that the first two points were meaningless. They were just a side effect of that third error: the mobile viewport.

The mandatory meta viewport tag

It turns out that any page that doesn't add a new meta tag, called "viewport", will automatically fail Google's mobile friendly test and be downranked accordingly. What's that all about?

Apparently it's originally Apple's fault. iPhones, by default, pretend their screen is 980 pixels wide instead of the actual 320 or 640, and render content accordingly: they shrink everything down by roughly a factor of 3 (980/320). They do this assuming that most website designers will set a hard limit of 980 pixels (which I've always considered to be bad design) ... and further assuming that their users care more about seeing the beautiful layout of a website than about reading the website's text.

And Google apparently felt, at some point during the Android development process, that they should copy Apple in this silly behavior. I'm not sure when Android started doing this; my Android 2.3 Samsung doesn't do it, so it must have happened later than that.

Anyway, after implementing this, Apple then introduced a meta tag you can add to an HTML file to tell iPhone browsers not to do this scaling, and to display the text at normal text size. There are various forms for this tag, but the most common is:

<meta name="viewport" content="width=device-width, initial-scale=1">
A lot of examples I found on the web at first suggested this: <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"> but don't do that -- it prevents people from zooming in to see more detail, and hurts the accessibility of the page, since people who need to zoom in won't be able to. Here's more on that: Stop using the viewport meta tag (until you know how to use it).

Just to be clear, Google is telling us that in order not to have our pages downgraded, we have to add a new tag to every page on the web to tell mobile browsers not to do something silly that they shouldn't have been doing in the first place, and which Google implemented to copy a crazy thing Apple was doing.

How width and initial-scale relate

Documentation on how width and initial-scale relate to each other, and which takes precedence, is scant. Apple's documentation on the meta viewport tag says that setting initial-scale=1 automatically sets width=device-width. That implies that the two are basically equivalent: they only differ if you want to do something else, like set a page width in pixels (use width=) or set the width to some ratio of the device width other than 1 (use initial-scale=).

That means that using initial-scale=1 should imply width=device-width -- yet nearly everyone on the web seems to use both. So I'm doing that, too. Apparently there was once a point to it: some older iPhones had a bug involving switching orientation to landscape mode, and specifying both initial-scale=1 and width=device-width helped, but supposedly that's long since been fixed.

initial-scale=2, by the way, sets the viewport to half what it would have been otherwise; so if the width would have been 320, it sets it to 160, so you'll see half as much. Why you'd want to set initial-scale to anything besides 1 in a web page, I don't know.

If the width specified by initial-scale conflicts with that specified by width, supposedly iOS browsers will take the larger of the two, while Android won't accept a width directive less than 320, according to Quirks mode: testing Meta viewport.

It would be lovely to be able to test this stuff; but my only Android device is running Android 2.3, which doesn't do all this silly zooming out. It does what a sensible small-screen device should do: it shows text at normal, readable size by default, and lets you zoom in or out if you need to.

(Only marginally related, but interesting if you're doing elaborate stylesheets that take device resolution into account, is A List Apart's discussion, A Pixel Identity Crisis.)

Control width of images

[Image with max-width 100%] Once I added meta viewport tags, most of my pages passed the test. But I was seeing something else on some of my photo pages, as well as blog pages where I have inline images:

  • Content wider than screen
  • Links too close together

Image pages are all about showing an image. Many of my images are wider than 320 pixels ... and thus get flagged as too wide for the screen. Note the scrollbars, and how you can only see a fraction of the image.

There's a simple way to fix this, and unlike the meta viewport thing, it actually makes sense. The solution is to force images to be no wider than the screen with this little piece of CSS:

<style type="text/css">
  img { max-width: 100%; height: auto; }
</style>

[Image with max-width 100%] I've been using similar CSS in my RSS reader for several months, and I know how much better it made the web, on news sites that insist on using 1600 pixel wide images inline in stories. So I'm happy to add it to my photo pages. If someone on a mobile browser wants to view every hair in a squirrel's tail, they can still zoom in to the page, or long-press on the image to view it at full resolution. Or rotate to landscape mode.

The CSS rule works for those wide page banners too. Or you can use overflow: hidden if the right side of your banner isn't all that important.

Anyway, that takes care of the "page too wide" problem. As for the "Links too close together" error that remained even after I added the meta viewport tag, that was just plain bad HTML and CSS, showing that I don't do enough testing at different window sizes. I fixed it so the buttons lay out better and don't draw on top of each other on super-narrow screens, which I should have done long ago. Likewise for some layout problems I found on my blog.

So despite my annoyance with the whole viewport thing, Google's mandate did make me re-examine some pages that really needed fixing, and should have improved my website quite a bit for anyone looking at it on a small screen. I'm glad of that.

It'll be a while before I have all my pages converted, especially that business of adding the meta tag to all of them. But readers, if you see usability problems with my site, whether on mobile devices or otherwise, please tell me about them!

May 06, 2015 09:48 PM

April 30, 2015

iheartubuntu

How To Install BitMessage


If you are ever concerned about private messaging, BitMessage offers an easy solution. Bitmessage is a P2P communications protocol used to send encrypted messages to another person or to many subscribers. It is decentralized and trustless, meaning that you need not inherently trust any entities like root certificate authorities. It uses strong authentication, which means that the sender of a message cannot be spoofed, and it aims to hide "non-content" data, like the sender and receiver of messages, from passive eavesdroppers like those running warrantless wiretapping programs. If Bitmessage is completely new to you, you may wish to start by reading the whitepaper:

https://bitmessage.org/bitmessage.pdf

Windows, Mac and Source Code available here:

https://bitmessage.org/wiki/Main_Page

A community-based forum for questions, feedback, and discussion is also available on the subreddit:

http://www.reddit.com/r/bitmessage/

To install BitMessage on Ubuntu (and other Linux distros), go to your terminal and type:

git clone git://github.com/Bitmessage/PyBitmessage.git

Once it's finished, run this:

python2.7 PyBitmessage/src/bitmessagemain.py

BitMessage should now be installed; launch it from the link in your menu or dash, or by running that last line in your terminal window again.

* You may need to install git and Python for the commands above to work.
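
On Ubuntu, something like this should take care of both:

sudo apt-get install git python2.7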

Give it a try and good luck!

by iheartubuntu (noreply@blogger.com) at April 30, 2015 09:42 PM

Akkana Peck

Stile style

On a hike a few weeks ago, we encountered an unusual, and amusing, stile across the trail.

[Normal stile] It isn't uncommon to see stiles along trails. There are lots of different designs, but their purpose is to allow humans, on foot, an easy way to cross a fence, while making it difficult for vehicles and livestock like cattle to pass through. A common design looks like this, with a break in the fence and "wings" so that anything small enough to make the sharp turn can pass through.

On a recent hike starting near Buckman, on the Rio Grande, we passed a few stiles with the "wings" design; but one of the stiles we came to had a rather less common design:

[Wrongly-built stile]

It was set up so that nothing could pass without climbing over the fence -- and one of the posts which was supposed to hold fence rails was just sitting by itself, with nothing attached to it. [Pathological stile]

I suspect someone gave a diagram to a welder, and the welder, not being an outdoor person and having no idea of the purpose of a stile, welded it up without giving it much thought. Not very functional ... and not very stilish, either!

I'm curious whether the error was in the spec, or in the welder's interpretation of it. But alas, I suspect I'll never learn the story behind the stile.

Giggling, we climbed over the fence and proceeded on our hike up to the very scenic Otowi Peak.

April 30, 2015 05:38 PM

April 21, 2015

Akkana Peck

Finding orphaned files on websites

I recently took over a website that's been neglected for quite a while. As well as some bad links, I noticed a lot of old files, files that didn't seem to be referenced by any of the site's pages. Orphaned files.

So I went searching for a link checker that also finds orphans. I figured that would be easy. It's something every web site maintainer needs, right? I've gotten by without one for my own website, but I know there are some bad links and orphans there and I've often wanted a way to find them.

An intensive search turned up only one possibility: linklint, which has a -orphan flag. Great! But, well, not really: after a few hours of fiddling with options, I couldn't find any way to make it actually find orphans. Either you run it on an http:// URL, and it says it's searching for orphans but doesn't find any (because it ignores any local directory you specify); or you can run it just on a local directory, in which case it finds a gazillion orphans that aren't actually orphans, because they're referenced by files generated with PHP or other web technology. Plus it flags all the bad links in all those supposed orphans, which get in the way of finding the real bad links you need to worry about.

I tried asking on a couple of technical mailing lists and IRC channels. I found a few people who had managed to use linklint, but only by spidering an entire website to local files (thus getting rid of any server-side dependencies like PHP, CGI or SSI) and then running linklint on the local directory. I'm sure I could do that one time, for one website. But if it's that much hassle, there's not much chance I'll keep using it to keep websites maintained.

What I needed was a program that could look at a website and local directory at the same time, and compare them, flagging any file that isn't referenced by anything on the website. That sounded like it would be such a simple thing to write.

So, of course, I had to try it. This is a tool that needs to exist -- and if for some bizarre reason it doesn't exist already, I was going to remedy that.

Naturally, I found out that it wasn't quite as easy to write as it sounded. Reconciling a URL like "http://mysite.com/foo/bar.html" or "../asdf.html" with the corresponding path on disk turned out to have a lot of twists and turns.

But in the end I prevailed. I ended up with a script called weborphans (on github). Give it both a local directory for the files making up your website, and the URL of that website, for instance:

$ weborphans /var/www/ http://localhost/
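
Internally, the trickiest part, reconciling a URL with the corresponding path on disk, boils down to something like this (a simplified sketch, not the actual weborphans code; the function and variable names are mine):

import os
from urllib.parse import urljoin, urlparse

def url_to_path(href, page_url, site_url, local_root):
    # Resolve relative references like "../asdf.html" against the page URL
    absolute = urljoin(page_url, href)
    parsed = urlparse(absolute)
    # A link that points off-site isn't one of our files at all
    if parsed.netloc != urlparse(site_url).netloc:
        return None
    rel = parsed.path.lstrip('/')
    # A directory URL corresponds to its index page on disk
    if rel == '' or rel.endswith('/'):
        rel += 'index.html'
    return os.path.normpath(os.path.join(local_root, rel))

# e.g. url_to_path("../asdf.html", "http://localhost/foo/bar.html",
#                  "http://localhost/", "/var/www") -> "/var/www/asdf.html"

The real script also has to cope with query strings, fragments and the other twists and turns mentioned above, but that's the basic idea.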

It's still a little raw, certainly not perfect. But it's good enough that I was able to find the 10 bad links and 606 orphaned files on this website I inherited.

April 21, 2015 08:55 PM

April 20, 2015

Elizabeth Krumbach

POSSCON 2015

This past week I had the pleasure of attending POSSCON in Columbia, the beautiful capital city of South Carolina. The great event kicked off with a social at Hickory Tavern, which I arranged to be at by tolerating a tight connection in Charlotte. It all worked out, and in spite of generally being really shy at these kinds of socials, I found some folks I knew and had a good time. Late in the evening several of us even had the opportunity to meet the Mayor of Columbia, who had come down to the event, and to talk about our work and the importance of open source in the economy today. It's really great to see that kind of support for open source in a city.

The next morning the conference actually kicked off. Organizer Todd Lewis opened the event and quickly handed things off to Lonnie Emard, the President of IT-oLogy. IT-oLogy is a non-profit that promotes initial and continued learning in technology through events targeting everyone from children in grade school to professionals seeking to extend their skill set; there's more on their About page. As a partner for POSSCON, they were a huge part of the event, even hosting the second day at their offices.

We then heard from the aforementioned Columbia Mayor, Steve Benjamin. A keynote from the city mayor was a real treat; taking time out of what I'm sure is a busy schedule showed a clear commitment to building technology in Columbia. It was really inspiring to hear him talk about the city: with political support and work from IT-oLogy, it sounds like an interesting place to build or grow a career in tech. There was then a welcome from Amy Love, the South Carolina Department of Commerce's Innovation Director. Talk about local support! Go South Carolina!

The next keynote was from Andy Hunt, speaking on "A New Look at Openness". It began with a history of how software development has progressed, from paying for licenses and compilers for proprietary development to the free and open source tool set, with its respective licenses, that we work with today. He talked about how this all progresses into the Internet of Things, where we can now build physical objects and track everything from keys to pets. Today's world for developers, he argued, is not about inventing but innovating, and he implored the audience to seek out this innovation by using the building blocks of open source as a foundation. In the idea space he proposed these steps for innovative thinking:

  1. Gather raw material
  2. Work it
  3. Forget the whole thing
  4. Eureka/My that’s peculiar
  5. Refine and develop
  6. Profit!

Directly following the keynote, I gave my talk on Tools for Open Source Systems Administration in the Operations/Back End track. It carried the themes of many of my previous talks on how the OpenStack Infrastructure team does systems administration in an open source way, but I refocused this talk to be directly about the tools we use to accomplish this as a geographically distributed team spread across several different companies. The talk went well and I had a great audience; huge thanks to everyone who came out for it. It was a real pleasure to talk with folks throughout the rest of the conference who had questions about specific parts of how we collaborate. Slides from my presentation are here (pdf).

The next talk in the Operations/Back End track was Converged Infrastructure with Sanoid by Jim Salter. With SANOID, he is seeking to bring enterprise-level predictability, minimal downtime and rapid recovery to small-to-medium-sized businesses. Using commodity components, from hardware through software, he's built a system that virtualizes all services and runs on ZFS on Linux to take hourly (by default) snapshots of running systems. When something goes wrong, from a bad upgrade to a LAN infected with a virus, he has the ability to quickly roll users back to the latest snapshot. It also has a system for easily creating on- and off-site backups, and uses Nagios for monitoring, which is how I learned about aNag, a Nagios client for Android; I'll have to check it out! I had the opportunity to spend more time with Jim as the conference went on, which included swinging by his booth for a SANOID demo. Slides from his presentation are here.
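
For the curious, the basic ZFS operations behind that kind of setup look something like this (pool, dataset and snapshot names invented for illustration):

# hourly snapshot of a virtualized fileserver's dataset
zfs snapshot tank/vms/fileserver@2015-04-15-1400

# roll the machine back to that snapshot after a bad upgrade
zfs rollback tank/vms/fileserver@2015-04-15-1400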

For lunch they served BBQ. I don’t really care for typical red BBQ sauce, so when I saw a yellow sauce option at the buffet I covered my chicken in that instead. I had discovered South Carolina Mustard BBQ sauce. Amazing stuff. Changed my life. I want more.

After lunch I went to see a talk by Isaac Christofferson on Assembling an Open Source Toolchain to Manage Public, Private and Hybrid Cloud Deployments. With a focus on automation, standardization and repeatability, he walked us through his usage of Packer, Vagrant and Ansible to interface with a variety of different clouds and VMs. I’m also apparently the last systems administrator alive who hadn’t heard of devopsbookmarks.com, but he shared the link and it’s a great site.

The rooms for the talks were spread around a very walkable area of downtown Columbia. I wasn't sure how I'd feel about this and worried it would be a problem, but with speakers staying on schedule, we were afforded a full 15 minutes between talks to switch tracks. The venue I spoke in was a Hilton, and the next talk I went to was in a bar! It made for enjoyable short walks outside between talks, and a diversity of venues that was a lot of fun.

That next talk was Open Source and the Internet of Things, presented by Erica Stanley. I had the pleasure of being on a panel with Erica back in October during All Things Open (see here for a great panel recap), so it was really great running into her at this conference as well. Her talk was a deluge of information about the Internet of Things (IoT) and how we can all be makers for it! She went into detail about the technology and ideas behind all kinds of devices, and on slides 41 and 42 she gave a quick tour of hardware and software tools that can be used to build for the IoT. She also went through some of the philosophy, guidelines and challenges of IoT development. Slides from her talk are online here; the wealth of knowledge packed into that slide deck is definitely worth spending some time with if you're interested in the topic.

The last pre-keynote talk I went to was by Tarus Balog, with a Guide to the Open Source Desktop. A self-confessed former Apple fanboy, he had quite the sense of humor about his past, where "everything was white and had an apple on it", and his move to using only open source software. As someone who has been using Linux and friends for almost a decade and a half, I wasn't at this talk to learn about the tools available, but instead to see how a long-time Mac user could actually make the transition. It's also interesting to me as a member of the Ubuntu and Xubuntu projects to see how newcomers view entrance into the world of Linux and how they evaluate and select tools. He walked the audience through the process he used to select a distro and desktop environment, and then all the applications: mail, calendar, office suite and more. Of particular interest, he showed a preference for Banshee (it reminded him of old iTunes), as well as digiKam for managing photos. Accounting-wise he is still tied to QuickBooks, but runs it either under Wine or over VNC from a Mac.

The day wound down with a keynote from Jason Hibbets. He wrote The foundation for an open source city and is a Project Manager for opensource.com. His keynote was all about stories, and why it's important to tell our open source stories. I've really been impressed with the development of opensource.com over the past year (disclaimer: I've written for them too); they've managed to find hundreds of inspirational and beneficial stories of open source adoption from around the world. In this talk he highlighted a few of these, including the work of my friend Charlie Reisinger at Penn Manor and Stu Keroff with students in the Asian Penguins computer club (check out a video from them here). How exciting! The evening wrapped up with an afterparty (I enjoyed a nice Palmetto Amber Ale) and a great speakers and sponsors dinner; huge thanks to the conference staff for putting on such a great event and making us feel so welcome.

The second day of the conference took place across the street from the South Carolina State House at the IT-oLogy office. The day consisted of workshops, so the sessions were much longer and more involved. But it also kicked off with a keynote, by Bradley Kuhn, who gave an introductory talk on licensing: Software Freedom Licensing: What You Must Know. He did a great job offering a balanced view of the licenses available and the importance of selecting one appropriate to your project and team from the beginning.

After the keynote I headed upstairs to learn about OpenNMS from Tarus Balog. I love monitoring, but as a systems administrator rather than a network administrator, I've mostly been using service-based monitoring tooling and hadn't really looked into OpenNMS. The workshop was an excellent tour of the basics of the project, including a short history and their current work. He walked us through the basic installation and setup, some of the configuration changes needed for SNMP, and the XML-based changes made to various other parts of the infrastructure. He also talked about static and auto-discovery mechanisms for a network, how events and alarms work, and details about setting up the notification system effectively. He wrapped up by showing off some interesting graphs and other visualizations that they're working to bring into the system for individuals in your organization who prefer to see the data presented in a less technical format.

The afternoon workshop I attended was put on by Jim Salter and covered Backing up Android using Open Source technologies. This workshop focused on backing up content, not the Android OS itself, but happily for me, that's exactly what I wanted to back up, as I otherwise run stock Android from Google (easy to install again from a generic source as needed). Now, Google will happily back up all your data, but what if you want to back it up locally and store it on your own system? Using rsync backup for Android, Jim demonstrated how to configure your phone to send backups to Linux, Windows and Mac using ssh+rsync. For Linux, at least, this is so far a fully open source solution, which I quite like and have started using at home. The next component makes it automatic, which is where we get into a proprietary bit of software, Llama – Location Profiles. Based on various types of criteria (battery level, location, time, and lots more), Llama lets you define when it runs certain actions, like automatically running rsync to do backups. In all, it was a great and informative workshop, and I'm happy to finally have a useful solution for periodically pulling photos and things off my phone without plugging it in and using MTP, which apparently I hate and so never do. Slides from Jim's talk, which also include specific instructions and tools for Windows and Mac, are online here.
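
The transfer itself is plain rsync over ssh; from the phone's side it amounts to something like this (paths and hostname invented for illustration):

# push the phone's camera folder to a desktop over ssh
rsync -av /sdcard/DCIM/Camera/ user@desktop.local:phone-backups/photos/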

The conference concluded with Todd Lewis sending more thanks all around. By this time in the day rain was coming down in buckets and there were no taxis to be seen, so I grabbed a ride from Aaron Crosman, who, I had been happy to learn earlier, was a local who had come from Philadelphia; we had great Philly tech and city-versus-country tech stories to swap.

More of my photos from the event available here: https://www.flickr.com/photos/pleia2/sets/72157651981993941/

by pleia2 at April 20, 2015 06:07 PM

Jono Bacon

Announcing Chimp Foot

I am delighted to share my new music project: Chimp Foot.

I am going to be releasing a bunch of songs, which are fairly upbeat rock and roll (no growly metal here). The first tune is called ‘Line In The Sand’ and is available here.

All of these songs are available under a Creative Commons Attribution ShareAlike license, which means you can download, share, remix, and sell them. I am also providing a karaoke version with vocals removed (great for background music) and all of the individual instrument tracks that I used to create each song. This should provide a pretty comprehensive archive of open material.

Please follow me on SoundCloud and/or on Twitter, Facebook, and Google+.

Shares of this would be much appreciated, and feedback on the music is welcome!

by jono at April 20, 2015 04:22 PM