Planet Ubuntu California

July 31, 2015

Eric Hammond

AWS SNS Outage: Effects On The Unreliable Town Clock

It took a while, but the Unreliable Town Clock finally lived up to its name. Surprisingly, the fault was not mine, but Amazon’s.

For several hours tonight, a number of AWS services in us-east-1, including SNS, experienced elevated error rates according to the AWS status page.

Successful, timely chimes were broadcast through the Unreliable Town Clock public SNS topic up to and including:

2015-07-31 05:00 UTC

and successful chimes resumed again at:

2015-07-31 08:00 UTC

Chimes in between were mostly unpublished, though SNS appears to have delivered a few chimes during that period up to several hours late and out of order.

I had set up Unreliable Town Clock monitoring and alerting through Cronitor.io. This worked perfectly and I was notified within 1 minute of the first missed chime, though it turned out there was nothing I could do but wait for AWS to correct the underlying issue with SNS.

Since we now know SNS has the potential to fail in a region, I have launched an Unreliable Town Clock public SNS Topic in a second region: us-west-2. The infrastructure in each region is entirely independent.

The public SNS topic ARNs for both regions are listed at the top of this page:

https://alestic.com/2015/05/aws-lambda-recurring-schedule/

You are welcome to subscribe to the public SNS topics in both regions to improve the reliability of invoking your scheduled functionality.

The SNS message content will indicate which region is generating the chime.
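For example, with the AWS CLI, subscribing an email endpoint to the new us-west-2 topic looks something like the following sketch. The topic ARN below is a placeholder; substitute the real ARN from the page linked above.

# Subscribe an email endpoint to the us-west-2 topic (placeholder ARN)
aws sns subscribe \
  --region us-west-2 \
  --topic-arn arn:aws:sns:us-west-2:ACCOUNT:unreliable-town-clock-topic \
  --protocol email \
  --notification-endpoint you@example.com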

Original article and comments: https://alestic.com/2015/07/aws-sns-outage/

July 31, 2015 09:55 AM

July 30, 2015

Akkana Peck

A good week for critters

It's been a good week for unusual wildlife.

[Myotis bat hanging just outside the front door] We got a surprise a few nights ago when flipping the porch light on to take the trash out: a bat was clinging to the wall just outside the front door.

It was tiny, and very calm -- so motionless we feared it was dead. (I took advantage of this to run inside and grab the camera.) It didn't move at all while we were there. The trash mission accomplished, we turned out the light and left the bat alone. Happily, it wasn't ill or dead: it was gone a few hours later.

We see bats fairly regularly flying back and forth across the patio early on summer evenings -- insects are apparently attracted to the light visible through the windows from inside, and the bats follow the insects. But this was the first close look I'd had at a stationary bat, and my first chance to photograph one.

I'm not completely sure what sort of bat it is: almost certainly some species of Myotis (mouse-eared bats), and most likely M. yumanensis, the "little brown bat". It's hard to be sure, though, as there are at least six species of Myotis known in the area.

[Woodrat released from trap] We've had several woodrats recently try to set up house near our house or in the engine compartment of our Rav4, so we've been setting traps regularly. Though woodrats are usually nocturnal, we caught one in broad daylight as it explored the area around our garden pond.

But the small patio outside the den seems to be a particular draw for them, maybe because it has a wooden deck with a nice dark space under it for a rat to hide. We have one who's been leaving offerings -- pine cones, twigs, leaves -- just outside the door (and less charming rat droppings nearby), so one night Dave set three traps all on that deck. I heard one trap clank shut in the middle of the night, but when I checked in the morning, two traps were sprung without any occupants and the third was still open.

But later that morning, I heard rattling from outside the door. Sure enough, the third trap was occupied and the occupant was darting between one end and the other, trying to get out. I told Dave we'd caught the rat, and we prepared to drive it out to the parkland where we've been releasing them.

[chipmunk caught in our rat trap] And then I picked up the trap, looked in -- and discovered it was a pretty funny looking woodrat. With a furry tail and stripes. A chipmunk! We've been so envious of the folks who live out on the canyon rim and are overloaded with chipmunks ... this is only the second time we've seen one here, and now it's probably too spooked to stick around.

We released it near the woodpile, but it ran off away from the house. Our only hope for its return is that it remembers the nice peanut butter snack it got here.

[Baby Great Plains skink] Later that day, we were on our way out the door, late for a meeting, when I spotted a small lizard in the den. (How did it get in?) Fast and lithe and purple-tailed, it skittered under the sofa as soon as it saw us heading its way.

But the den is a small room and the lizard had nowhere to go. After upending the sofa and moving a couple of tables, we cornered it by the door, and I was able to trap it in my hands without any damage to its tail.

When I let it go on the rocks outside, it calmed down immediately, giving me time to run for the camera. Its gorgeous purple tail doesn't show very well, but at least the photo was good enough to identify it as a juvenile Great Plains skink. The adults look more like Jabba the Hutt, nothing like the lovely little juvenile we saw. We actually saw an adult this spring (outside), when we were clearing out a thick weed patch and disturbed a skink from its hibernation. And how did this poor lizard get saddled with a scientific name like Eumeces obsoletus?

July 30, 2015 05:07 PM

July 27, 2015

Akkana Peck

Trackpad workarounds: using function keys as mouse buttons

I've had no end of trouble with my Asus 1015E's trackpad. A discussion of laptops on a mailing list -- in particular, someone's concerns that the nifty-looking Dell XPS 13, which is available preloaded with Linux, has had reviewers say that the trackpad doesn't work well -- reminded me that I'd never posted my final solution.

The Asus's trackpad has two problems. First, it's super sensitive to taps, so if any part of my hand gets anywhere near the trackpad while I'm typing, suddenly it sees a mouse click at some random point on the screen, and instead of typing into an emacs window suddenly I find I'm typing into a live IRC client. Or, worse, instead of typing my password into a password field, I'm typing it into IRC. That wouldn't have been so bad on the old style of trackpad, where I could just turn off taps altogether and use the hardware buttons; this is one of those new-style trackpads that doesn't have any actual buttons.

Second, two-finger taps don't work. Three-finger taps work just fine, but two-finger taps: well, I found when I wanted a right-click (which is what two-fingers was set up to do), I had to go TAP, TAP, TAP, TAP maybe ten or fifteen times before one of them would finally take. But by the time the menu came up, of course, I'd done another tap and that canceled the menu and I had to start over. Infuriating!

I struggled for many months with synclient's settings for tap sensitivity and right and left click emulation. I tried enabling syndaemon, which is supposed to disable clicks as long as you're typing then enable them again afterward, and spent months playing with its settings, but in order to get it to work at all, I had to set the timeout so long that there was an infuriating wait after I stopped typing before I could do anything.

I was on the verge of giving up on the Asus and going back to my Dell Latitude 2120, which had an excellent trackpad (with buttons) and the world's greatest 10" laptop keyboard. (What the Dell doesn't have is battery life, and I really hated to give up the Asus's light weight and 8-hour battery life.) As a final, desperate option, I decided to disable taps completely.

Disable taps? Then how do you do a mouse click?

I theorized that, with all of Linux's flexibility, there must be some way to get function keys to work like mouse buttons. And indeed there is. The easiest way seemed to be to use xmodmap (strange to find xmodmap being the simplest anything, but there you go). It turns out that a simple line like

  xmodmap -e "keysym F1 = Pointer_Button1"
is most of what you need. But to make it work, you need to enable "mouse keys":
  xkbset m

But for reasons unknown, mouse keys will expire after some set timeout unless you explicitly tell it not to. Do that like this:

  xkbset exp =m

Once that's all set up, you can disable single-finger taps with synclient:

  synclient TapButton1=0
Of course, you can disable 2-finger and 3-finger taps by setting them to 0 as well. I don't generally find them a problem (they don't work reliably, but they don't fire on their own either), so I left them enabled.
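If you want to sanity-check the new mappings, xev can show what the X server is actually delivering (a quick sketch; this assumes the xev utility is installed):

  # With the mouse pointer over the xev window, pressing F1 should now
  # report ButtonPress/ButtonRelease events instead of key events
  xev -event button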

I tried it and it worked beautifully for left click. Since I was still having trouble with that two-finger tap for right click, I put that on a function key too, and added middle click while I was at it. I don't use function keys much, so devoting three function keys to mouse buttons wasn't really a problem.

In fact, it worked so well that I decided it would be handy to have an additional set of mouse keys over on the other side of the keyboard, to make it easy to do mouse clicks with either hand. So I defined F1, F2 and F3 as one set of mouse buttons, and F10, F11 and F12 as another.

And yes, this all probably sounds nutty as heck. But it really is a nice laptop aside from the trackpad from hell; and although I thought Fn-key mouse buttons would be highly inconvenient, it took surprisingly little time to get used to them.

So this is what I ended up putting in my .config/openbox/autostart file. I wrap it in a test for hostname, since I like to be able to use the same configuration file on multiple machines, but I don't need this hack on any machine but the Asus.

if [ "$(hostname)" = "iridum" ]; then
  synclient TapButton1=0 TapButton2=3 TapButton3=2 HorizEdgeScroll=1

  xmodmap -e "keysym F1 = Pointer_Button1"
  xmodmap -e "keysym F2 = Pointer_Button2"
  xmodmap -e "keysym F3 = Pointer_Button3"

  xmodmap -e "keysym F10 = Pointer_Button1"
  xmodmap -e "keysym F11 = Pointer_Button2"
  xmodmap -e "keysym F12 = Pointer_Button3"

  xkbset m
  xkbset exp =m
else
  synclient TapButton1=1 TapButton2=3 TapButton3=2 HorizEdgeScroll=1
fi

July 27, 2015 02:54 AM

July 25, 2015

Elizabeth Krumbach

OSCON 2015

Following the Community Leadership Summit (CLS), which I wrote about here, I spent a couple of days at OSCON.

Monday kicked off by attending Jono Bacon’s Community leadership workshop. I attended one of these a couple of years ago, so it was really interesting to see how his advice has evolved along with the tooling and the ways communities in tech and beyond have changed. I took a lot of notes, but everything I wanted to say here has been summarized by others in a series of great posts on opensource.com:

…hopefully no one else went to Powell’s to pick up the recommended books; I cleared them out of a couple of them.

That afternoon Jono joined David Planella of the Community Team at Canonical and Michael Hall, Laura Czajkowski and me of the Ubuntu Community Council to look through our CLS notes and come up with some talking points to discuss with the rest of the Ubuntu community, regarding everything from in-person events (is stronger centralized support of regional Ubucons needed?) to learning what inspires people about the active Ubuntu phone community and how we can make them feel more included in the broader community (and help them become leaders!). There was also some interesting discussion around the open source projects managed by Canonical and expectations for community members with regard to where they can get involved. There are some projects where part-time community contributors are wanted and welcome, and others where it’s simply not realistic due to a variety of factors, from the desire for in-person collaboration (a lot of design and UI stuff) to new projects with an exceptionally fast pace of development that makes it harder for part-time contributors (right now I’m thinking of anything related to Snappy). There are improvements that Canonical can make so that even these projects are more welcoming, but adjusting expectations about where contributions are most needed and wanted would be valuable to me. I’m looking forward to discussing these topics and more with the broader Ubuntu community.


Laura, David, Michael, Lyz

Monday night we invited members of the Oregon LoCo out and had an Out of Towners Dinner at Altabira City Tavern, the restaurant on top of the Hotel Eastlund where several of us were staying. Unfortunately the local Kubuntu folks had already cleared out of town for Akademy in Spain, but we were able to meet up with long-time Ubuntu member Dan Trevino, who used to be part of the Florida LoCo with Michael, and who I last saw at Google I/O last year. I enjoyed great food and company.

I wasn’t speaking at OSCON this year, so I attended with an Expo pass and after an amazing breakfast at Mother’s Bistro in downtown Portland with Laura, David and Michael (…and another quick stop at Powell’s), I spent Tuesday afternoon hanging out with various friends who were also attending OSCON. When 5PM rolled around the actual expo hall itself opened, and surprised me with how massive and expensive some of the company booths had become. My last OSCON was in 2013 and I don’t remember the expo hall being quite so extravagant. We’ve sure come a long way.

Still, my favorite part of the expo hall is always the non-profit/open source project/organization area where the more grass-roots tables are. I was able to chat with several people who are really passionate about what they do. As a former Linux Users Group organizer and someone who still does a lot of open source work for free as a hobby, these are my people.

Wednesday was my last morning at OSCON. I did another walk around the expo hall and chatted with several people. I also went by the HP booth and got a picture of myself… with myself. I remain very happy that HP continues to support my career in a way that allows me to work on really interesting open source infrastructure stuff and to travel the world to tell people about it.

My flight took me home Wednesday afternoon and with that my OSCON adventure for 2015 came to a close!

More OSCON and general Portland photos here: https://www.flickr.com/photos/pleia2/sets/72157656192137302

by pleia2 at July 25, 2015 12:27 AM

July 24, 2015

iheartubuntu

Linux Lite for older computers


At work I use several older desktops for various functions. By "older" I mean 2006 or so :) One system is used primarily for the internet if a customer needs internet access, another is set up with a cheap live webcam to monitor the outdoor premises, and so on.

In looking for an easy to install OS that is Ubuntu/Debian based I have had MUCH success with Linux Lite. Linux Lite is a beginner-friendly Linux distribution based on Ubuntu 14.04 LTS and featuring the Xfce desktop.

Linux Lite is delightfully lightweight and runs fast & responsive on our old computers, which are single-core Pentium 4s at 3.0 GHz with 2GB of memory. I have had problems in the past with graphics while installing Xubuntu or Lubuntu, but not so with Linux Lite.

A ton of time is also saved with pre-installed programs like VLC, LibreOffice, GIMP, Firefox, Steam, and Thunderbird.

It also has its own built in program called Lite Software which makes life super easy for you to install other useful apps including: Chrome browser, Chromium browser, Dropbox, Ubuntu Games Pack, Pidgin chat, Google Talk plugin, Java, KeePassX password manager, PlayOnLinux for windows games, Ubuntu Restricted Extras, Skype, TeamViewer, Deluge torrent app, OpenShot video editor, VirtualBox, and XBMC.

If you have older computers and other distros are not working out for you, definitely give Linux Lite a try!

by iheartubuntu (noreply@blogger.com) at July 24, 2015 12:43 AM

July 21, 2015

Elizabeth Krumbach

Community Leadership Summit 2015

My Saturday kicked off with the Community Leadership Summit (CLS) here in Portland, Oregon.

CLS sign

Jono Bacon opened the event by talking about the growth of communities in the past several years as internet-connected communities of all kinds are springing up worldwide. Though this near-OSCON CLS is open source project heavy, he talked about communities that range from the Maker movement to political revolutions. While we work to develop best practices for all kinds of communities, it was nice to hear one of his key thoughts as we move forward in community building: “Community is not an extension of the Marketing department.”

The day continued with a series of plenaries, which were 15 minutes long and touched upon topics like empathy, authenticity and vulnerability in community management roles. The talks wrapped up with a Facilitation 101 talk to give tips on how to run the unconference sessions. We then did the session proposals and scheduling that would pick up after lunch.

CLS schedule

As mentioned in my earlier post we had some discussion points from our experiences in the Ubuntu community that we wanted to get feedback on from the broader leadership community so we proposed 4 sessions that lasted the afternoon.

Lack of new generation of leaders

The root of this session came from our current struggle in the Ubuntu community to find leaders, from those who wish to sit on councils and boards to leaders for the LoCo teams. In addition to several people who expressed similar problems in their own communities, there was some fantastic feedback from folks who attended, including:

  • Some folks don’t see themselves as “Leaders”, so using that word can be intimidating. If you find this is the case, shift to using different types of titles that do more to describe the role they are taking.
  • Document tasks that you do as a leader and slowly hand them off to people in your community to build a supportive group of people who know the ins and outs and can take a leadership role in the future.
  • Evaluate your community every few years to determine whether your leadership structure still makes sense, and make changes with every generation of community leaders if needed (and it often is!).
  • If you’re seeking to get more contributions from people who are employed to do open source, you may need to engage their managers to prioritize appropriately. Also, make sure credit is given to companies who are paying employees to contribute.
  • Set a clear set of responsibilities and expectations for leadership positions so people understand the role, commitment level and expectations of them.
  • Actively promote people who are doing good work, whether by expressing thanks on social media, in blog posts or in whatever other communications methods you employ, as well as by inviting them to speak at other events, funding them to attend events and directly engaging them. This will all serve to build satisfaction and their social capital in the community.
  • Casually mentor aspiring leaders, handing over projects for them to take over once they’ve begun to grow and understand the steps required.

Making lasting friendships that are bigger than the project

This was an interesting session, proposed because many of us found that we built strong relationships with people early on in Ubuntu, but have noticed fewer of those developing in the past few years. Many of us have these friendships which have lasted even as people leave the project, and even leave the tech industry entirely; for us, Ubuntu wasn’t just an open source project, we were all building lasting relationships.

Recommendations included:

  • In person events are hugely valuable to this (what we used to get from Ubuntu Developer Summits). Empower local communities to host major events.
  • Find a way to have discussions that are not directly related to the project with your fellow project members, including creating a space where there’s a weekly topic, giving a space to share accomplishments, and perhaps not lumping it all together (some new off-topic threads on Discourse?)
  • Provide a space to have check-ins with members of and teams in your community, how is life going? Do you have the resources you need?
  • Remember that tangential interests are what bring people together on a personal level and seek to facilitate that

There was also some interesting discussion around handling contributors whose behavior has become disruptive (often due to personal things that have come up in their life), from making sure a Code of Conduct is in place to set expectations for behavior to approaching people directly to check in to make sure they’re doing all right and to discuss the change in their behavior.

Declining Community Participation

We proposed this session because we’ve seen a decline in community participation since before the Ubuntu Developer Summits ceased. We spent some time framing this problem in the space it’s in, with many Linux distributions and “core” components seeing similar decline and disinterest in involvement. It was also noted that when a project works well, people are less inclined to help because they don’t need to fix things, which may certainly be the case with a product like the Ubuntu server. In this vein, it was noted that 10 years ago the contributor to user ratio was much higher, since many people who used it got involved in order to file bugs and collaborate to fix things.

Some of the recommendations that came out of this session:

  • Host contests and special events to showcase new technologies to get people excited about involvement (this made me think of Xubuntu testing with XMir; we had a lot of people testing it because it was an interesting new thing!)
  • In one company, the co-founder set a community expectation for companies who were making money from the product to give back 5% in development (or community management, or community support).
  • Put a new spin on having your code reviewed: it’s constructive criticism from programmers with a high level of expertise, you’re getting training while they chime in on reviews. Note that the community must have a solid code review community that knows how to help people and be kind to them in reviews.
  • Look at bright spots in your community and recreate them: Where has the community grown? (Ubuntu Phone) How can you bring excitement there to other parts of your project? Who are your existing contributors in the areas where you’ve seen a decline and how can you find more contributors like them?
  • Share stories about how your existing members got involved so that new contributors see a solid on-ramp for themselves, and know that everyone started somewhere.
  • Make sure you have clear, well-defined on-ramps for various parts of your project, it was noted that Mozilla does a very good job with this (Ubuntu does use Mozilla’s Asknot, but it’s hard to find!).

Barriers related to single-vendor control and development of a project

This session came about because of the obvious control that Canonical has over the direction of the Ubuntu project. We sought advice from other communities where there is single-vendor control. Perhaps unfortunately, the session trended heavily toward Ubuntu specifically, but we were able to get some feedback from other communities on how they handle decisions made in an ecosystem with both paid and volunteer contributors:

  • Decisions should happen in a public, organized space (not just an IRC log, Google Hangout or in person discussion, even if these things are made public). Some communities have used: Github repo, mailing list threads, Request For Comment system to gather feedback and discuss it.
  • Provide a space where community members can submit proposals that the development community can take seriously (we used to have brainstorm.ubuntu.com for this, but it wound down over the years and became less valuable).
  • Make sure the company counts contributions as real, tangible things that should be considered for monetary value (non-profits already do this for their volunteers).
  • Make sure the company understands the motivation of community members so they don’t accidentally undermine this.
  • Evaluate expectations in the community, are there some things the company won’t budge on? Are they honest about this and do they make this clear before community members make an investment? Ambiguity hurts the community.

I’m really excited to have further discussions in the Ubuntu community about how these insights can help us. Once I’m home I’ll be able to collect my thoughts and take them, and perhaps even some action items, to the ubuntu-community-team mailing list (which everyone is welcome to participate in).

This first day concluded with a feedback session for the summit itself, which brought up some great points. On to day two!

As with day one, we began the day with a series of plenaries. The first was presented by Richard Millington, who talked about 10 “Social Psychology Hacks” that you can use to increase participation in your community. These included “priming”, or using existing associations to encourage certain feelings, making sure you craft your story about your community, designing community rituals to make people feel included, and using existing contributors to gain more through referrals. It was then time for Laura Czajkowski’s talk about “Making the Marketing team happy”. My biggest take-away from this one was that not only has she learned to use the tools the marketing team uses, but she now attends their meetings so she can stay informed of their projects and chime in when a suggestion has been made that may cause disruption (or worse!) in the community. Henrik Ingo then gave a talk where he analyzed the governance types of many open source projects. He found that all the “extra large” projects, developer/commit-wise, were run by a foundation, and that there seemed to be a limit on how big single-vendor controlled projects could get. I had suspected this was the case, but it was wonderful to have his data to back up my suspicions. Finally, Gina Likins of Red Hat spoke about her work to get universities and open source projects working together. She began her talk by explaining how few college Computer Science majors are familiar with open source, and suggested that a kind of “dating site” be created to match up open source projects with professors looking to get their students involved. Brilliant! I attended her session related to it later in the afternoon.

My afternoon was spent first by joining Gina and others to talk about relationships between university professors and open source communities. Her team runs teachingopensource.org and it turns out I subscribed to their mailing list some time ago. She outlined several goals, from getting students familiar with open source tooling (IRC, mailing lists, revision control, bug trackers) all the way up to more active roles directly in open source projects where the students are submitting patches. I’m really excited to see where this goes and hope I can some day participate in working with some students beyond the direct mentoring through internships that I’m doing now.

Aside from substantial “hallway track” time where I got to catch up with some old friends and meet some new people, I went to a session on having open and close-knit communities, where people talked about various things, from reaching out to people when they disappear, to the importance of conduct standards (and swift enforcement), to going out of your way to participate in discussions kicked off by newcomers in order to make them feel included. The last session I went to shared tips for organizing local communities, and drew from the off-line community organizing that has happened in the past. Suggestions for increasing participation in your group included cross-promotion of groups (either through sharing announcements or doing some joint meetups), not letting volunteers burn out or feel taken for granted, and making sure you’re not tolerating poisonous people in your community.

The Community Leadership Summit concluded with a Question and Answer session. Many people really liked the format, keeping the morning pretty much confined to the set presentations and setting up the schedule, allowing us to take a 90 minute lunch (off-site) and come back to spend the whole afternoon in sessions. In all, I was really pleased with the event, kudos to all the organizers!

by pleia2 at July 21, 2015 05:10 AM

July 20, 2015

Eric Hammond

TimerCheck.io - Countdown Timer Microservice Built On Amazon API Gateway and AWS Lambda

deceptively simple web service with super powers

TimerCheck.io is a fully functional, fully scalable microservice built on the just-released Amazon API Gateway and increasingly popular AWS Lambda platforms.

TimerCheck.io is a public web service that maintains a practically unlimited number of countdown timers with one second resolution and no practical limit to the number of seconds each timer can run.

New timers can be created on a whim and each timer can be reset at any time to any number of seconds desired, whether it is still running or has already expired.

Synopsis

Let’s begin with an example to demonstrate the elegant simplicity of the TimerCheck.io interface.

1. Set timer - Any request of the following URL sets a timer named “YOURTIMERNAME” to start counting down immediately from 60 seconds:

https://timercheck.io/YOURTIMERNAME/60

You may click on that link now, or hit a URL of the same format with your own timer name and your chosen number of seconds. You may use a browser, a command like curl, or your favorite programming language.

2. Poll timer - The following URL requests the status of the above timer. Note that the only difference in the URL is that we have dropped the seconds count.

https://timercheck.io/YOURTIMERNAME

If the named timer is still running, TimerCheck.io will return HTTP Status code 200 OK, along with a JSON structure containing information like how many seconds are left.

If the timer has expired, TimerCheck.io will return an HTTP status code 504 Timeout.

That’s it!

No, really. That’s the entire API.
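Both calls are easy to try from a shell with curl (a quick sketch; curl’s -w option prints just the HTTP status code):

# Set (or reset) the timer to 60 seconds
curl -s https://timercheck.io/YOURTIMERNAME/60

# Poll the timer: prints 200 while it is running, 504 once it has expired
curl -s -o /dev/null -w "%{http_code}\n" https://timercheck.io/YOURTIMERNAME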

And the whole service is implemented in about 60 lines of code, on top of a handful of powerful infrastructure services managed, protected, maintained, and scaled by Amazon.

Not Included

The TimerCheck.io service does not perform any action when a timer expires. The timer should be polled to find out if it has expired.

On first thought, this may cause you to wonder if this service might, in fact, be completely useless. Instead of polling TimerCheck.io, why not just have your code keep its own timer records or look at a clock and see if it’s time yet?

The answer is that TimerCheck.io is not created for situations where you can depend on your own code to be running and keeping track of things.

TimerCheck.io is designed for integration with existing third party software packages and services that already support a polling mechanism, but do not implement timers.

For example…

Event Monitoring

There are many types of monitoring software packages and free/commercial services that poll resources to see if they are healthy and alert you if there is a problem, but they have no way to alert you if an expected event does not occur. For example, you may want to ensure that a batch job runs every hour, or a message is posted to an SNS topic at least every 15 minutes.

The TimerCheck.io service can be the glue between the existing events you wish to monitor and your existing monitoring system. Here’s how it works:

1. Set timer - When your event runs, trigger a ping of TimerCheck.io to reset the timer. In the URL, specify the name of the timer and the number of seconds when your monitoring system should consider it a problem if no further event has run.

2. Poll timer - Add the TimerCheck.io polling URL for the same timer to your monitoring software, configuring it to alert you if the web request returns anything but success.

If your events keep resetting the timer before the timer expires, your monitoring system will stay happy and quiet, as the polling URL will always return success.

If the monitoring system polls the timer when no event has run in the specified number of seconds, then alarms sound, you will be woken up, and you can investigate why your batch job did not run on its expected schedule.

This is all possible using your existing monitoring system’s standard web check service, without any additional plugins or feature development.

Naming

TimerCheck.io has no registration, no authentication, and no authorization. If you don’t want somebody else resetting your timer accidentally or on purpose, you should pick a timer name that is unguessable even with brute force.

For example:

# A sensible timer name with some unguessable random bits
timer=https://timercheck.io/sample-timer-$(pwgen -s 22 1)
echo $timer

# (OR)
timer=https://timercheck.io/sample-timer-$(uuid -v4 -FSIV)
echo $timer

# Set the timer to 1 hour
seconds=3600
curl -s $timer/$seconds | jq .

# Check the timer
curl -s $timer | jq .

Cron Jobs

Say I have a cron job that runs once an hour. I don’t mind if it fails to complete successfully once, but if it fails to check in twice in a row, I want to be alerted.

This example will use a random number for the timer name. You should generate your own unique timer names (see previous section).

Here’s a sample crontab entry that runs my job, then resets the countdown timer using TimerCheck.io:

0 * * * * $HOME/bin/create-snapshots && curl -s https://timercheck.io/sample-cron-4/8100 >/dev/null

The timer is being reset at the end of each job to 8100 seconds, which is two hours plus 15 minutes. The extra minutes give the hourly cron job some extra time to complete before we start sounding alarms.

All that’s left is to add the monitor poll URL to my monitoring service:

https://timercheck.io/sample-cron-4

Responses

Though you can ignore the response content from the TimerCheck.io web service, here are samples of what it returns.

If the timer has not yet expired because your events are running on schedule and resetting the countdown, then the monitoring URL returns a 200 success code along with the current state of the timer. This includes things like when the timer set URL was last requested, and how many seconds remain before the timer goes into an error state.

{
  "timer": "YOURTIMERNAME",
  "request_id": "501abe10-2dad-11e5-80c1-35cdcb449e41",
  "status": "ok",
  "now": 1437265810,
  "start_time": 1437265767,
  "start_seconds": 60,
  "seconds_elapsed": 43,
  "seconds_remaining": 17,
  "message": "Timer still running"
}

If the timer has expired and no event has run to reset it, then the monitor URL returns a 504 timeout error code and an error message. Once I figure out how to get API Gateway to return both an error code and some JSON content, I will expand this to include more details about when the timer expired.

{
  "errorMessage": "504: timer timed out"
}

When you call the event URL, passing in the number of seconds for resetting the timer, the API returns the previous state of the timer (as in the first example above) along with a note that it has set the new values.

{
  "timer": "YOURTIMERNAME",
  "request_id": "36a764b6-2dad-11e5-9318-f3b076dd2a3a",
  "status": "ok",
  "now": 1437265767,
  "start_time": 1437263674,
  "start_seconds": 60,
  "seconds_elapsed": 2093,
  "seconds_remaining": -2033,
  "message": "Timer countdown updated",
  "new_start_time": 1437265767,
  "new_start_seconds": 60
}

If this is the first time you have set the particular timer, the previous state keys will be missing.
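Since the responses are JSON, individual values are easy to pull out with a tool like jq (used in the examples above). For instance, to print just the remaining seconds:

# Print only the seconds remaining on the timer
curl -s https://timercheck.io/YOURTIMERNAME | jq -r .seconds_remaining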

Guarantees

There are none.

TimerCheck.io is a free public service intended, but not guaranteed, to be useful. It may return unexpected results. At any time and with no warning, it may become unavailable for short periods or forever.

Terms of Use

Don’t use TimerCheck.io in an abusive manner. If you are unsure if your use case might be considered abusive, ask.

Alternatives

I am not aware of any services that operate the same way as TimerCheck.io, with the ability to add dead man’s switch features to existing polling-based monitoring services, but here are a few services that are targeted specifically at event monitoring.

What do you use for monitoring and alerting? Are you using monitoring to make sure scheduled events are not missed?

Original article and comments: https://alestic.com/2015/07/timercheck-scheduled-events-monitoring/

July 20, 2015 11:26 AM

Akkana Peck

Plugging in those darned USB cables

I'm sure I'm not the only one who's forever trying to plug in a USB cable only to find it upside down. And then I flip it and try it the other way, and that doesn't work either, so I go back to the first side, until I finally get it plugged in, because there's no easy way to tell visually which way the plug is supposed to go.

It's true of nearly all of the umpteen variants of USB plug: almost all of them differ only subtly from the top side to the bottom.

[USB trident] And to "fix" this, USB cables are built so that they have subtly raised indentations which, if you hold them to the light just right so you can see the shadows, say "USB" or have the little USB trident on the top side:


In an art store a few weeks ago, Dave had a good idea.

[USB cables painted for orientation] He bought a white paint marker, and we've used it to paint the logo side of all our USB cables.

Tape the cables down on the desk -- so they don't flop around while the paint is drying -- and apply a few dabs of white paint to the logo area of each connector. If you're careful, you might be able to fill in the lowered part so the raised USB symbol stays black, or paint only the raised USB part. I tried that on a few cables, but after the fifth or so I stopped worrying about whether I was ending up with a pretty USB symbol and just started dabbing paint wherever was handy.

The paint really does make a big difference. It's much easier now to plug in USB cables, especially micro USB, and I never go through that "flip it over several times" dance any more.

July 20, 2015 02:37 AM

July 18, 2015

Elizabeth Krumbach

SF activities and arrival in Portland, OR

Time at home in San Francisco came to an end this week with a flight to Portland, OR on Friday for some open source gatherings around OSCON. This ended my nearly 2 months without getting on a plane, the longest stretch I’ve gone in over 2 years. My initial intention for this time was to spend a lot of it on my book, which I have, but not nearly as much as I’d hoped, because the work and creativity required isn’t something you can just turn on and off. It was nice getting to spend so much time with my husband though, and the kitties. The stretch at home also led me to join a gym again (I’d canceled my last month-to-month membership when a stretch of travel had me gone for over a month). Upon my return next week I have the first of four sessions with a trainer at the gym scheduled.

While I haven’t exactly had a full social calendar of late, I have been able to go to a few events. Last Wednesday I hosted an Ubuntu Hour and Bay Area Debian Dinner in San Francisco.

The day after, SwiftStack hosted probably the only OpenStack 5th birthday party I’ll be able to attend this year (leaving before the OSCON one, will be in Peru for the HP one!). I got to see some familiar faces, meet some Swift developers and eat some OpenStack cake.

MJ had a friend in town last week too, which meant I had a lot of time to myself. In the spirit of not having to worry about my own meals during this time, I cooked up a pot of beef stew to enjoy through the week, and learned quickly that I should have frozen at least half of it. Even a modest pot of stew is much more than I can eat by myself over the course of a week. I did enjoy it though; some day I’ll learn about spices so I can make one that’s not so bland.

I’ve also been running again, after a bit of a hiatus following the trip to Vancouver. Fortunately I didn’t lose much ground stamina-wise and was mostly able to pick up where I left off. It has been warmer than normal in San Francisco these past couple weeks though, so I’ve been playing around with the times of my runs, with early evenings as soon as the fog/coolness rolls in currently the winning time slot during the week. Sunday morning runs have been great too.

This week I made it out to a San Francisco DevOps meetup where Tom Limoncelli was giving a talk inspired by some of the less intuitive points in his book The Practice of Cloud Systems Administration. In addition to seeing Tom, it was nice to meet up with some of my local DevOps friends who I haven’t managed to connect with lately and meet some new people.

I had a busy week at home before my trip to Portland. Upon settling into the hotel I’m staying at, I met up with my friend and fellow Ubuntu Community Council member Laura Czajkowski. We took the metro over the bridge to downtown Portland, and on the way she showed off her Ubuntu phone, including its photo-taking app, which we used for a selfie together!

Since it was Laura’s first time in Portland, our first stop downtown was to Voodoo Doughnuts! I got my jelly-filled voodoo guy doughnut.

From there we made our way to Powell’s Books, where we spent the rest of the afternoon, as you do with Powell’s. I picked up 3 books and learned that Powell’s Technical Books/Powell’s 2 has been absorbed into the big store, which was a little sad for me; it was fun to go to the store that had just science, transportation and engineering books. Still, it was a fun visit and I always enjoy introducing someone new to the store.

Then we headed back across the river to meet up with people for the Community Leadership Summit informal gathering at the Double Tree. We had a really enjoyable time. I got to see Michael Hall of the Ubuntu Community Council and David Planella of the Community Team at Canonical, catch up with them both, and chat about Ubuntu things. Plus, I ran into people I know from the broader open source community. As an introvert, it was one of the more energizing social events I’ve been to in a long time.

Today the Community Leadership Summit that I’m in town for kicks off! Looking forward to some great discussions.

by pleia2 at July 18, 2015 03:17 PM

July 16, 2015

Elizabeth Krumbach

Ubuntu at the upcoming Community Leadership Summit

This weekend I have the opportunity to attend the Community Leadership Summit. While there, I’ll be able to take advantage of an opportunity that’s rare now: meeting up with my fellow Ubuntu Community Council members Laura Czajkowski and Michael Hall, along with David Planella of the community team at Canonical. At the Community Council meeting today, I was able to work with David on narrowing down a few topics that impact us and we think would be of interest to other communities and we’ll propose for discussion at CLS:

  1. Declining participation
  2. Community cohesion
  3. Barriers related to [the perception of] company-driven control and development
  4. Lack of a new generation of leaders

As an unconference, we’ll be submitting these ideas for discussion, and we’ll see how many of them gain the interest of enough people to have a discussion.


Community Leadership Summit 2015

Since we’ll all be together, we also managed to arrange some time together on Monday afternoon and Tuesday to talk about how these challenges impact Ubuntu specifically and get to any of the topics mentioned above that weren’t selected for discussion at CLS itself. By the end of this in person gathering we hope to have some action items, or at least some solidified talking points and ideas to bring to the ubuntu-community-team mailing list. I’ll also be doing a follow-up blog post where I share some of my takeaways.

What I need from you:

If you’re attending CLS join us for the discussions! If you just happen to be in the area for OSCON in general, feel free to reach out to me (email: lyz@ubuntu.com) to have a chat while I’m in town. I fly home Wednesday afternoon.

If you can’t attend CLS but are interested in these discussions, chime in on the ubuntu-community-team thread or send a message to the Community Council at community-council at lists.ubuntu.com with your feedback and we’ll work to incorporate it into the sessions. You’re also welcome to contact me directly and I’ll pass things along (anonymously if you’d like, just let me know).

Finally, a reminder that this time together is not a panacea. These are complicated concerns in our community that will not be solved over a weekend and a few members of the Ubuntu Community Council won’t be able to solve them alone. Like many of you, I’m a volunteer who cares about the Ubuntu community and am doing my best to find the best way forward. Please keep this in mind as you bring concerns to us. We’re all on the same team here.

by pleia2 at July 16, 2015 06:59 PM

July 15, 2015

Eric Hammond

Simple New Web Service: Testers Requested

Interested in adding scheduled job monitoring (dead man’s switch) to the existing monitoring and alerting framework you are already using (Nagios, Sensu, Zenoss, Zabbix, Monit, Pingdom, Montastic, Ruxit, and the like)?

Last month I wrote about how I use Cronitor.io to monitor scheduled events with an example using an SNS Topic and AWS Lambda.

This week I spent a few hours building a simple web service that enables any polling-based monitoring software or service to automatically support alerting when a target event has not occurred in a desired timeframe.

The new web service is built on infrastructure technologies that are reliably maintained and scaled by Amazon:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • CloudFront
  • Route53
  • CloudWatch

The source code is about a page long and the web service API is as trivial as it gets; but the functionality it adds to monitoring services is quite powerful and hugely scalable.

Integration requires these simple steps:

Step 1: There is no step one! There is no registration, no setup, and no configuration of the new web service for your use.

Step 2: Hit one URL when your target event occurs.

Step 3: Tell your existing monitoring system to poll another URL and to alert you when it fails.

Result: When your scheduled task misses an appointment and doesn’t check in, the second URL monitored by your software will start returning a failure code, and you will be alerted.

Intrigued?

I’m still working on the blog post to introduce the web service, but would love to have some folks test it out this week and give feedback.

If you are interested, drop me an email and mention:

  • The monitoring/alerting frameworks you currently use

  • The type of scheduled activities you would like to monitor (cron job, SNS topic, Lambda function, web page view, email receipt, …)

  • The frequency of the target events (every 10 seconds, every 10 years, …)

Even if you don’t want to do testing this week, I’d love to hear your answers to the above three points, through email or in the comments below.

Original article and comments: https://alestic.com/2015/07/timercheck-testers-requested/

July 15, 2015 04:54 AM

July 14, 2015

Akkana Peck

Hummingbird Quidditch!

[rufous hummingbird] After months of at most one hummingbird at the feeders every 15 minutes or so, yesterday afternoon the hummingbirds here all suddenly went crazy. Since then, my patio has looked like a tiny Battle of Britain. There are at least four males involved in the fighting, plus a couple of females who sneak in to steal a sip whenever the principals retreat for a moment.

I posted that to the local birding list and someone came up with a better comparison: "it looks like a Quidditch game on the back porch". Perfect! And someone else compared the hummer guarding the feeder to "an avid fan at Wimbledon", referring to the way his head keeps flicking back and forth between the two feeders under his control.

Last year I never saw anything like this. There was a week or so at the very end of summer where I'd occasionally see three hummingbirds contending at the very end of the day for their bedtime snack, but no more than that. I think putting out more feeders has a lot to do with it.

All the dogfighting (or quidditch) is amazing to watch, and to listen to. But I have to wonder how these little guys manage to survive when they spend all their time helicoptering after each other and no time actually eating. Not to mention the way the males chase females away from the food when the females need to be taking care of chicks.

[calliope hummingbird]

I know there's a rufous hummingbird (shown above) and a broad-tailed hummingbird -- the broad-tailed makes a whistling sound with his wings as he dives in for the attack. I know there's a black-chinned hummer around because I saw his characteristic tail-waggle as he used the feeder outside the nook a few days before the real combat started. But I didn't realize until I checked my photos this morning that one of the combatants is a calliope hummingbird. They're usually the latest to arrive, and the rarest. I hadn't realized we had any calliopes yet this year, so I was very happy to see the male's throat streamers when I looked at the photo. So all four of the species we'd normally expect to see here in northern New Mexico are represented.

I've always envied places that have a row of feeders and dozens of hummingbirds all vying for position. But I would put out two feeders and never see them both occupied at once -- one male always keeps an eye on both feeders and drives away all competitors, including females -- so putting out a third feeder seemed pointless. But late last year I decided to try something new: put out more feeders, but make sure some of them are around the corner hidden from the main feeders. Then one tyrant can't watch them all, and other hummers can establish a beachhead.

It seems to be working: at least, we have a lot more activity so far than last year, even though I never seem to see any hummers at the fourth feeder, hidden up near the bedroom. Maybe I need to move that one; and I just bought a fifth, so I'll try putting that somewhere on the other side of the house and see how it affects the feeders on the patio.

I still don't have dozens of hummingbirds like some places have (the Sopaipilla Factory restaurant in Pojoaque is the best place I've seen around here to watch hummingbirds). But I'm making progress.

July 14, 2015 06:45 PM

July 09, 2015

Akkana Peck

Taming annoyances in the new Google Maps

For a year or so, I've been appending "output=classic" to any Google Maps URL. But Google disabled Classic mode last month. (There have been a few other ways to get classic Google maps back, but Google is gradually disabling them one by one.)

I have basically three problems with the new maps:

  1. If you search for something, the screen is taken up by a huge box showing you what you searched for; if you click the "x" to dismiss the huge box so you can see the map underneath, the box disappears but so does the pin showing your search target.
  2. A big swath at the bottom of the screen is taken up by a filmstrip of photos from the location, and it's an extra click to dismiss that.
  3. Moving or zooming the map is very, very slow: it relies on OpenGL support in the browser, which doesn't work well on Linux in general, or on a lot of graphics cards on any platform.

Now that I don't have the "classic" option any more, I've had to find ways around the problems -- either that, or switch to Bing maps. Here's how to make the maps usable in Firefox.

First, for the slowness: the cure is to disable WebGL in Firefox. Go to about:config and search for webgl. Then double-click on the line for webgl.disabled to make it true.
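If you'd rather keep that setting in a file than flip it by hand, the same preference can go in a user.js file at the top of your Firefox profile (a sketch):

// user.js in your Firefox profile: disable WebGL at startup
user_pref("webgl.disabled", true);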

For the other two, you can add userContent lines to tell Firefox to hide those boxes.

Locate your Firefox profile. Inside it, edit chrome/userContent.css (create that file if it doesn't already exist), and add the following two lines:

div#cards { display: none !important; }
div#viewcard { display: none !important; }
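If you're not sure where your profile is, on Linux it's normally under ~/.mozilla/firefox. Here's a quick sketch, where YOUR_PROFILE stands in for the randomly-named profile directory you'll find there:

# List candidate profile directories; the active one is usually the
# *.default entry (or whatever profiles.ini points at)
ls -d ~/.mozilla/firefox/*/

# Create the chrome directory inside your profile if it doesn't exist yet
mkdir -p ~/.mozilla/firefox/YOUR_PROFILE/chrome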

Voilà! The boxes that used to hide the map are now invisible. Of course, that also means you can't use anything inside them; but I never found them useful for anything anyway.

July 09, 2015 04:54 PM

July 06, 2015

Elizabeth Krumbach

California Tourist

I returned from my latest conference on May 23rd, closing out what had been over 2 years of traveling every month to some kind of conference, event or family gathering. This was the longest stretch of travel I’ve done, and I managed to visit a lot of amazing places and meet some unforgettable people. However, with a book deadline creeping up and tasks at home piling up, I figured it was time to slow down for a bit. I didn’t travel in June, and my next trip isn’t until the end of July, when I’m going up to Portland for the Community Leadership Summit and a couple days of schmoozing with OSCON friends.

Complicated moods of late and continued struggles with migraines have meant I’ve not been as productive as I’ve wanted, but I have made real progress on some things I’ve wanted to tackle, and my book is really finally coming together. In the spaces between work I’ve also managed a bit of much-needed fun and relaxation.

A couple weekends ago MJ and I took a weekend trip up to an inn and spa in Sonoma to get some massages and soak in natural mineral water pools provided by on site springs. We had some amazing dinners at the inn, including one evening where we enjoyed s’mores at an outdoor fire pit. The time spent was amazingly relaxing and refreshing, and although it wasn’t a cure-all for the dip in my mood of late, it was some time well spent together.


Perfect weather, beautiful venue

On Sunday morning we checked out of the inn and enjoyed a fantastic brunch on the grounds, including lobster eggs benedict, before venturing on. While in Sonoma, we decided to stop by a couple of wineries that we were familiar with: first Imagery, which is the sister winery to the one we got engaged at, and then Benziger. At both we picked up several nice wines, which I’m looking forward to cracking open for Shabbats in the near future!

We also stopped by B.R. Cohn for a couple olive oils, and I picked up some delicious blackberry jam and some Chardonnay caramel sauce which has graced some bowls of ice cream since our return. On the trip back to San Francisco we made one final stop, at Jacuzzi Winery where we picked up several more interesting bottles of olive oil, which will soon make it into some salads, scrambled eggs and other dishes that we got recipe cards for.

Due to my backlog, I’ve been spending a lot of time at home and not much at local events, with the exception of a great gathering at the East Bay Linux Users Group a few weeks ago. In contrast with my professional colleagues who work on Linux full time as systems administrators, engineers and DevOps, it’s so refreshing to go to a LUG where I’m meeting with long-term tech hobbyists who still distro-hop and come up with interesting questions about the distros I’m most familiar with and the Linux ecosystem in general. This group has also had interest in Partimus lately, so it was nice to get some feedback about our on-going efforts and volunteer recruitment activities.

In an effort to get out of the house more, I picked up the book Historic Walks in San Francisco: 18 Trails Through the City’s Past and finally took it out for a spin this weekend. I went on the Financial District walk, which took me around what is essentially my own neighborhood but had me look at it with whole new eyes. I learned that the Hallidie Building had tricked me into believing it was a new building with its glass exterior, but it actually dates from 1917 and was one of the first American buildings to feature glass curtain walls.


Hallidie Building

One of my favorite buildings on the tour turned out to be the Kohl Building, which was built in 1901 and withstood the 1906 earthquake that leveled most of downtown San Francisco, and so was used as a command post during the recovery. The building was erected for Alvinza Hayward, and its “H” shape is allegedly in honor of his last name.


Kohl Building

The tour had lots more fun landmarks and stories of recovery (or not) following the 1906 earthquake. Amusingly for my European friends, the young age of San Francisco itself and our shaky history mean that there was not much at all here 160 years ago, so “historical” for us means 50+ years. Go back over 110 years and you reach the time before the city was essentially leveled by the earthquake and fire, and some truly impressive, sturdy buildings. The oldest on the tour, dating from 1877, is the oldest standing building downtown; it now houses the Pacific Heritage Museum, which I hope to visit one of these days when they’re open.

More photos from my walk here: https://www.flickr.com/photos/pleia2/sets/72157655051173508

While on the topic of walking tours, doing this tour alone left something to be desired, even with Tony Bennett and company crooning in my ears. I think I might look up some of the free San Francisco Walking Tours for my next adventure.

My 4th of July weekend here has been pretty low-key. MJ has a friend in town, so they’ve been spending the days out and I’ll sometimes tag along for dinner. With an empty house, I got some reading done, plowed through several tasks on my to do list and started catching up on book related tasks. I still don’t feel like I got “enough” done, but there’s always tomorrow.

by pleia2 at July 06, 2015 01:23 AM

July 04, 2015

Akkana Peck

Create a signed app with Cordova

I wrote last week about developing apps with PhoneGap/Cordova, but there was one thing I didn't cover: when you type cordova build, you're building only a debug version of your app. If you want to release it, you have to sign it. Figuring out how turned out to be a little tricky.

Most pages on the web say you can sign your apps by creating platforms/android/ant.properties with the same keystore information you'd put in an ant build, then running cordova build android --release.

But Cordova completely ignored my ant.properties file and went on creating a debug .apk file and no signed one.

I found various other purported solutions on the web, like creating a build.json file in the app's top-level directory ... but that just made Cordova die with a syntax error inside one of its own files. This is the only method that worked for me:

Create a file called platforms/android/release-signing.properties, and put this in it:

storeFile=/path/to/your-keystore.keystore
storeType=jks
keyAlias=some-key
# if you don't want to enter the passwords at every build, include these:
keyPassword=your-key-password
storePassword=your-store-password

Then cordova build android --release finally works, and creates a file called platforms/android/build/outputs/apk/android-release.apk.
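
To double-check that the release apk really did get signed, you can verify it with jarsigner, which ships with the JDK (a sanity check I'd suggest; the path below is the one Cordova used for me):

jarsigner -verify -verbose -certs \
  platforms/android/build/outputs/apk/android-release.apk

If all is well, the output ends with "jar verified."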

July 04, 2015 12:02 AM

June 30, 2015

Elizabeth Krumbach

Contributing to the Ubuntu Weekly Newsletter

Superstar Ubuntu Weekly Newsletter contributor Paul White was recently reflecting upon his work with the newsletter and noted that he was approaching 100 issues that he’s contributed to. Wow!

That caused me to look at how long I’ve been involved. Back in 2011 the newsletter went on a six-month hiatus when the former editor had to step down due to obligations elsewhere. After much pleading for the return of the newsletter, I spent a few weeks working with Nathan Handler to improve the scripts used in the release process and doing an analysis of the value of each section of the newsletter in relation to how much work it took to produce each week. The result was a slightly leaner, but hopefully just as valuable, newsletter, which now took an experienced editor about 30 minutes to release rather than 2+ hours. This change was transformational for the team, allowing me to be involved for a whopping 205 consecutive issues.

If you’re not familiar with the newsletter, every week we work to collect news from around our community and the Internet to bring together a snapshot of that week in Ubuntu. It helps people stay up to date with the latest in the world of Ubuntu and the Newsletter archive offers a fascinating glimpse back through history.

But we always need help putting the newsletter together. We especially need people who can take some time out of their weekend to help us write article summaries.

Summary writers. Summary writers receive an email every Friday evening (or early Saturday, US time) with a link to the collaborative news links document for the past week, which lists all the articles that need 2-3 sentence summaries. These people are vitally important to the newsletter. The time commitment is limited, and it’s easy to get started on the very first weekend you volunteer. No need to be shy about your writing skills: we have style guidelines to help you on your way, and all summaries are reviewed before publishing, so it’s easy to improve as you go.

Interested? Email editor.ubuntu.news@ubuntu.com and we’ll get you added to the list of folks who are emailed each week.

I love working on the newsletter. As I’ve had to reduce my commitment to some volunteer projects I’m working on, I’ve held on to the newsletter because of how valuable and enjoyable I find it. We’re a friendly team and I hope you can join us!

Still just interested in reading? You have several options:

And everyone is welcome to drop by #ubuntu-news on Freenode to chat with us or share links to news we may find valuable for the newsletter.

by pleia2 at June 30, 2015 02:29 AM

June 29, 2015

Akkana Peck

Chollas in bloom, and other early summer treats

[Bee in cholla blossom] We have three or four cholla cacti on our property. Impressive, pretty cacti, but we were disappointed last year that they never bloomed. They looked like they were forming buds ... and then one day the buds were gone. We thought maybe some animal ate them before the flowers had a chance to open.

Not this year! All of our chollas have gone crazy, with the early rain followed by hot weather. Last week we thought they were spectacular, but they just kept getting better and better. In the heat of the day, it's a bee party: they're aswarm with at least three species of bees and wasps (I don't know enough about bees to identify them, but I can tell they're different from one another) plus some tiny gnat-like insects.

I wrote a few weeks ago about the piñons bursting with cones. What I didn't realize was that these little red-brown cones are all the male, pollen-bearing cones. The ones that bear the seeds, apparently, are the larger bright green cones, and we don't have many of those. But maybe they're just small now, and there will be more later. Keeping fingers crossed. The tall spikes of new growth are called "candles" and there are lots of those, so I guess the trees are happy.

[Desert willow in bloom] Other plants besides cacti are blooming. Last fall we planted a desert willow from a local native plant nursery. The desert willow isn't actually native to White Rock -- we're around the upper end of its elevation range -- but we missed the Mojave desert willow we'd planted back in San Jose, and wanted to try one of the Southwest varieties here. Apparently they're all the same species, Chilopsis linearis.

But we didn't expect the flowers to be so showy! A couple of blossoms just opened today for the first time, and they're as beautiful as any of the cultivated flowers in the garden. I think that means our willow is a 'Rio Salado' type.

Not all the growing plants are good. We've been keeping ourselves busy pulling up tumbleweed (Russian thistle) and stickseed while they're young, trying to prevent them from seeding. But more on that in a separate post.

As I write this, a bluebird is performing short aerobatic flights outside the window. Curiously, it's usually the female doing the showy flying; there's a male out there too, balancing himself on a piñon candle, but he doesn't seem to feel the need to show off. Is the female catching flies, showing off for the male, or just enjoying herself? I don't know, but I'm happy to have bluebirds around. Still no definite sign of whether anyone's nesting in our bluebird box. We have ash-throated flycatchers paired up nearby too, and I'm told they use bluebird boxes more than the bluebirds do. They're both beautiful birds, and welcome here.

Image gallery: Chollas in bloom (and other early summer flowers).

June 29, 2015 01:38 AM

June 23, 2015

Akkana Peck

Cross-Platform Android Development Toolkits: Kivy vs. PhoneGap / Cordova

Although Ant builds have made Android development much easier, I've long been curious about the cross-platform phone development frameworks: you write a simple app in some common language, like HTML or Python, then run something that can turn it into apps on multiple mobile platforms, like Android, iOS, Blackberry, Windows Phone, UbuntuOS, FirefoxOS or Tizen.

Last week I tried two of the many cross-platform mobile frameworks: Kivy and PhoneGap.

Kivy lets you develop in Python, which sounded like a big plus. I went to a Kivy talk at PyCon a year ago and it looked pretty interesting. PhoneGap takes web apps written in HTML, CSS and Javascript and packages them like native applications. PhoneGap seems much more popular, but I wanted to see how it and Kivy compared. Both projects are free, open source software.

If you want to skip the gory details, jump ahead to the summary: how do Kivy and PhoneGap compare?

PhoneGap

I tried PhoneGap first. It's based on Node.js, so the first step was installing that. Debian has packages for nodejs, so apt-get install nodejs npm nodejs-legacy did the trick. You need nodejs-legacy to get the "node" command, which you'll need for installing PhoneGap.

Now comes a confusing part. You'll be using npm to install ... something. But depending on which tutorial you're following, it may tell you to install and use either phonegap or cordova.

Cordova is an Apache project which is intertwined with PhoneGap. After reading all their FAQs on the subject, I'm as confused as ever about where PhoneGap ends and Cordova begins, which one is newer, which one is more open-source, whether I should say I'm developing in PhoneGap or Cordova, or even whether I should be asking questions on the #phonegap or #cordova channels on Freenode. (The one question I had, which came up later in the process, I asked on #phonegap and got a helpful answer very quickly.) Neither one is packaged in Debian.

After some searching for a good, comprehensive tutorial, I ended up following the Cordova tutorial rather than a PhoneGap one. So I typed:

sudo npm install -g cordova

Once it's installed, you can create a new app, add the android platform (assuming you already have android development tools installed) and build your new app:

cordova create hello com.example.hello HelloWorld
cordova platform add android
cordova build

Oops!

Error: Please install Android target: "android-22"
Apparently Cordova/PhoneGap can only build with its own preferred version of Android, which is currently 22. Editing files to specify android-19 didn't work for me; it just gave errors at a different point.

So I fired up the Android SDK manager, selected android-22 for install, accepted the license ... and waited ... and waited. In the end it took over two hours to download the android-22 SDK; the system image is 13GB! So that's a bit of a strike against PhoneGap.

While I was waiting for android-22 to download, I took a look at Kivy.

Kivy

As a Python enthusiast, I wanted to like Kivy best. Plus, it's in the Debian repositories: I installed it with sudo apt-get install python-kivy python-kivy-examples.

They have a nice quickstart tutorial for writing a Hello World app on their site. You write it, run it locally in python to bring up a window and see what the app will look like. But then the tutorial immediately jumps into more advanced programming without telling you how to build and deploy your Hello World. For Android, that information is in the Android Packaging Guide. They recommend an app called Buildozer (cute name), which you have to pull from git, build and install.
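
For reference, the Hello World from their quickstart amounts to just a few lines of Python. This is a paraphrase rather than their exact code, but the class and widget names are straight from the Kivy docs:

from kivy.app import App
from kivy.uix.button import Button

class HelloApp(App):
    # build() returns the root widget of the app
    def build(self):
        return Button(text='Hello World')

if __name__ == '__main__':
    HelloApp().run()

Running that under plain python pops up a window with one big button. With Buildozer installed, packaging that Hello World for Android is supposed to be just: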

buildozer init
buildozer android debug deploy run
This got started on building ... but then I noticed that it was attempting to download and build its own version of apache ant (sort of a Java version of make). I already have ant -- I've been using it for weeks for building my own Java android apps. Why did it want a different version?

The file buildozer.spec in your project's directory lets you uncomment and customize variables like:

# (int) Android SDK version to use
android.sdk = 21

# (str) Android NDK directory (if empty, it will be automatically downloaded.)
# android.ndk_path = 

# (str) Android SDK directory (if empty, it will be automatically downloaded.)
# android.sdk_path = 

Unlike a lot of Android build packages, buildozer will not inherit variables like ANDROID_SDK, ANDROID_NDK and ANDROID_HOME from your environment; you must edit buildozer.spec.

But that doesn't help with ant. Fortunately, when I inspected the Python code for buildozer itself, I discovered there was another variable that isn't mentioned in the default spec file. Just add this line:

android.ant_path = /usr/bin

Next, buildozer gave me a slew of compilation errors:

kivy/graphics/opengl.c: No such file or directory
 ... many many more lines of compilation interspersed with errors
kivy/graphics/vbo.c:1:2: error: #error Do not use this file, it is the result of a failed Cython compilation.

I had to ask on #kivy to solve that one. It turns out that the current version of cython, 0.22, doesn't work with kivy stable. My choices were to uninstall kivy and pull the development version from git, or to uninstall cython and install version 0.21.2 via pip. I opted for the latter. Either way, there's no "make clean", so removing the dist and build directories let me start over with the new cython:

sudo apt-get purge cython
sudo pip install Cython==0.21.2
rm -rf ./.buildozer/android/platform/python-for-android/dist
rm -rf ./.buildozer/android/platform/python-for-android/build

Buildozer was now happy, and proceeded to download and build Python-2.7.2, pygame and a large collection of other Python libraries for the ARM platform. Apparently each app packages the Python language and all libraries it needs into the Android .apk file.

Eventually I ran into trouble because I'd named my python file hello.py instead of main.py; apparently this is something you're not allowed to change, and it isn't mentioned in the docs, but that was easily solved. Then I ran into trouble again:

Exception: Unable to find capture version in ./main.py (looking for `__version__ = ['"](.*)['"]`)
The buildozer.spec file offers two types of versioning: by default "method 1" is enabled, but I never figured out how to get past that error with "method 1" so I commented it out and uncommented "method 2". With that, I was finally able to build an Android package.

The .apk file it created was quite large because of all the embedded Python libraries: for the little 77-line pong demo, /usr/share/kivy-examples/tutorials/pong in the Debian kivy-examples package, the apk came out to 7.3MB. For comparison, my FeedViewer native Java app, roughly 2000 lines of Java plus a few XML files, produces a 44k apk.

The next step was to make a real mini app. But when I looked through the Kivy examples, they all seemed highly specialized, and I couldn't find any documentation that addressed issues like what widgets were available or how to lay them out. How do I add a basic text widget? How do I put a button next to it? How do I get the app to launch in portrait rather than landscape mode? Is there any way to speed up the very slow initialization?
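
For anyone with the same questions, the widget classes in Kivy's API reference do suggest answers. Here's an untested sketch of a text field with a button next to it, pieced together from the class names in the reference rather than from any tutorial:

from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button

class TextAndButtonApp(App):
    def build(self):
        # A horizontal box: a text field with a button next to it
        root = BoxLayout(orientation='horizontal')
        root.add_widget(TextInput(text='type here'))
        root.add_widget(Button(text='Go'))
        return root

if __name__ == '__main__':
    TextAndButtonApp().run()

But finding that shouldn't require digging through API reference pages just to get past Hello World.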

I'd spent a few hours on Kivy and made a Hello World app, but I was having trouble figuring out how to do anything more. I needed a change of scenery.

PhoneGap, redux

By this time, android-22 had finally finished downloading. I was ready to try PhoneGap again.

This time,

cordova platforms add android
cordova build
worked fine. It took a long time, because it downloaded the huge gradle build system rather than using something simpler like ant. I already have a copy of gradle somewhere (I downloaded it for the OsmAnd build), but it's not in my path, and I was too beaten down by this point to figure out where it was and how to get cordova to point to it.

Cordova eventually produced a 1.8MB "hello world" apk -- a quarter the size of the Kivy package, though roughly 40 times the size of a native Java app. Deployed on Android, it initialized much faster than the Kivy app, and came up in portrait mode but rotated correctly if I rotated the phone.

Editing the HTML, CSS and Javascript was fairly simple. You'll want to replace pretty much all of the default CSS if you don't want your app monopolized by the Cordova icon.

The only tricky part was file access: opening a file:// URL didn't work. I asked on #phonegap and someone helpfully told me I'd need the file plugin. That was easy to find in the documentation, and I added it like this:

cordova plugin search file
cordova plugin add org.apache.cordova.file
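
For completeness, here's roughly what using the file plugin then looks like in the app's Javascript. This is a sketch pieced together from the plugin's documentation rather than my exact code, and the filename is made up; the cordova.file.* path constants come from the plugin:

// Wait for the 'deviceready' event before calling plugin APIs.
window.resolveLocalFileSystemURL(
    cordova.file.applicationDirectory + 'www/data.txt',
    function(fileEntry) {
        fileEntry.file(function(file) {
            var reader = new FileReader();
            // this.result holds the file's contents as a string
            reader.onloadend = function() { console.log(this.result); };
            reader.readAsText(file);
        });
    },
    function(err) { console.log('File error: ' + err.code); }
);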

My final apk, for a small web app I use regularly on Android, was almost the same size as their hello world example: 1.8MB. And it works great: PhoneGap had no problem playing an audio clip, something that was tricky when I was trying to do the same thing from a native Android Java WebView class.

Summary: How do Kivy and PhoneGap compare?

This has been a long article, I know. So how do Kivy and PhoneGap compare, and which one will I be using?

They both need a large amount of disk space for the development environment. I wish I had good numbers to give you, but I was working with both systems at the same time, and their packages are scattered all over the disk so I haven't found a good way of measuring their size. I suspect PhoneGap is quite a bit bigger, because it uses gradle rather than ant and because it insists on android-22.

On the other hand, PhoneGap wins big on packaged application size: its .apk files are a quarter the size of Kivy's.

PhoneGap definitely wins on documentation. Kivy seemingly has lots of documentation, but its tutorials jumped around rather than following a logical sequence, and I had trouble finding answers to basic questions like "How do I display a text field with a button?" PhoneGap doesn't need that, because the UI is basic HTML and CSS -- limited though they are, at least most people know how to use them.

Finally, PhoneGap wins on startup speed. For my very simple test app, startup was more or less immediate, while the Kivy Hello World app required several seconds of startup time on my Galaxy S4.

Kivy is an interesting project. I like the ant-based build, the straightforward .spec file, and of course the Python language. But it still has some catching up to do in performance and documentation. For throwing together a simple app and packaging it for Android, I have to give the win to PhoneGap.

June 23, 2015 06:09 PM

June 19, 2015

Jono Bacon

Rebasing Ubuntu on Android?

NOTE: Before you read this, I want to clear up some confusion. This post shares an idea that is designed purely for some intellectual fun and discussion. I am not proposing we actually do this, nor advocating for this. So, don’t read too much into these words…

The Ubuntu phone is evolving step by step. The team has worked their socks off to build a convergent user interface, toolkit, and full SDK. The phone exposes an exciting new concept, scopes, that while intriguing in their current form, after some refinement (which the team are already working on) could redefine how we use devices and access content. It is all to play for.

There is one major stumbling block though: apps.

While scopes offer a way of getting access to content quickly, they don’t completely replace apps. There will always be certain apps that people are going to want. The common examples are Skype, WhatsApp, Uber, Google Maps, Fruit Ninja, and Temple Run.

Now this is a bit of a problem. The way new platforms usually solve this is by spending hundreds of thousands of dollars to pay those companies to create and support a port. This isn’t really an option for the Ubuntu phone (there is much more than just the phone being funded by Canonical).

So, it seems to me that the opportunity of the Ubuntu phone is a sleek and sexy user interface that converges and puts content first, but the stumbling block is the lack of apps, and the lack of apps may well have a dramatic impact on adoption.

So, I have an idea to share, based on a discussion last night with a friend.

Why don’t we rebase the phone off Android?

OK, bear with me…

In other words, the Ubuntu phone would be an Android phone but instead of the normal user interface it would be a UI that looks and feels like the Ubuntu phone. It would have the messaging menu, scopes, and other pieces, and select Android API calls could be mapped to the different parts of the Unity UI such as the messaging menu and online account support.

The project could even operate like how we build Ubuntu today. Every six months upstream Android would be synced into Launchpad where a patchset would live on patches.ubuntu.com and applied to the codebase (in much the same way we do with Debian today).

This would mean that Ubuntu would continue to be an Open Source project, based on a codebase easily supported by hardware manufacturers (thus easier to ship); it would run all Android apps without requiring a kludgy porting/translation layer running on Ubuntu; it would look and feel like an Ubuntu phone; it would still expose scopes as a first-class user interface; the Ubuntu SDK would still be the main ecosystem play; Ubuntu apps would still stand out as more elegant and engaging apps; and it would reduce the amount of engineering required (I assume).

Now, the question is how this would impact a single convergent Operating System across desktop, phone, tablet, and TV. If Unity is essentially a UI that runs on top of Android and exposes a set of services, the convergence story should work well too, after all…it is all Linux. It may need different desktop, phone, tablet, and TV kernels, but I think we would need different kernels anyway.

So where does this put Debian and Ubuntu packages? Well, good question. I don’t know. The other unknown of course would be the impact of such a move on our flavors and derivatives, but then again I suspect the march towards snappy is going to put us in a similar situation if flavors/derivatives choose to stick with the Debian packaging system.

Of course, I am saying all this as someone who really only understands a small part of the picture, but this just strikes me as a logical step forward. I know there has been a reluctance to support Android apps on Ubuntu as it devalues the Ubuntu app ecosystem and people would just use Android apps, but I honestly think some kind of middle-ground is needed to get into the game, otherwise I worry we won't even make it to the subs bench no matter how awesome our technology is.

Just a thought, would love to hear what everyone thinks, including if what I am suggesting is total nonsense. :-)

Again, remember, this is just an idea I am throwing out for the fun of the discussion; I am not suggesting we actually do this.

by jono at June 19, 2015 04:20 AM

June 18, 2015

Eric Hammond

lambdash: AWS Lambda Shell Hack: New And Improved!

easier, simpler, faster, better

Seven months ago I published the lambdash AWS Lambda Shell Hack that lets you run shell commands to explore the environment in which AWS Lambda functions are executed.

I also posted samples of command output that show fascinating properties of the AWS Lambda runtime environment.

In the last seven months, Amazon has released new features and enhancements that have made a completely new version of lambdash possible, with many benefits including:

  • Ability to use AWS CloudFormation to create all needed resources including the AWS Lambda function and the IAM role.

  • Ability to create AWS Lambda functions by referencing a ZIP file in an S3 bucket.

  • Simpler IAM role structure.

  • Increased AWS Lambda function memory limit, with correspondingly faster execution.

  • Ability to invoke an AWS Lambda function synchronously.

This last point means that we no longer need to put the shell command output into an S3 bucket and poll the bucket from the local host. Instead, we can simply return the shell command output directly to the client that invoked the AWS Lambda function.

The above have made the lambdash code much simpler, much easier to install, and much, much faster to execute and get results.

You can browse the source here:

https://github.com/alestic/lambdash

There are three easy steps to get lambdash working:

1. CloudFormation Stack

Option 1: Here are sample steps to create the lambdash AWS Lambda function and to use a local command to invoke the function and output the results of commands run inside of Lambda:

git clone git@github.com:alestic/lambdash.git
cd lambdash
./lambdash-install

The lambdash-install script runs the aws-cli command aws cloudformation create-stack passing in the template file to create the AWS Lambda function in a CloudFormation stack.

The above assumes that you have installed aws-cli and have appropriate credentials configured.

Option 2: You may use the AWS Console to create a lambdash CloudFormation stack by pressing this button:

Launch Stack

Accept all the defaults, confirm the IAM role creation (after reading the CloudFormation template and verifying that I am not doing anything malicious), and perhaps add a Tag to help identify the lambdash CloudFormation stack.

2. Environment Variable

Since the CloudFormation stack creates the AWS Lambda function with a unique name, you need to find out what this name is before you can invoke it with the lambdash command.

If you ran the lambdash-install command, it printed the export statement you should use.

If you used the AWS Console, click on the lambdash CloudFormation stack’s [Outputs] tab and copy the export command listed there.

It will look something like this, with your own unique 12-character suffix:

export LAMBDASH_FUNCTION=lambdash-function-ABC123EXAMPL

Run this in your current shell and, perhaps, add it to your $HOME/.bashrc or equivalent.

3. Local lambdash Program

The previous step installs the AWS Lambda function in the AWS environment. You also need a complementary local command that will invoke the function with your requested command line then receive and print the stdout and stderr content.

This is the lambdash program, which is now a small Python script that uses boto3.
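
The heart of it is a single synchronous Invoke API call. Here's a stripped-down sketch of the idea (my simplification, not the actual script; the payload and result field names here are assumptions, so read the real source for the details):

import json
import os
import sys

import boto3

# Name of the AWS Lambda function created by the CloudFormation stack
function_name = os.environ["LAMBDASH_FUNCTION"]

# Invoke the function synchronously and wait for the result
client = boto3.client("lambda")
response = client.invoke(
    FunctionName=function_name,
    InvocationType="RequestResponse",
    Payload=json.dumps({"command": " ".join(sys.argv[1:])}),
)

# The function's return value comes back in the Payload stream
result = json.loads(response["Payload"].read())
sys.stdout.write(result.get("stdout", ""))
sys.stderr.write(result.get("stderr", ""))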

You can either use the lambdash program in the GitHub repo you cloned above, or download it directly:

sudo curl -so/usr/local/bin/lambdash \
  https://raw.githubusercontent.com/alestic/lambdash/master/lambdash
sudo chmod +x /usr/local/bin/lambdash

This Python program requires boto3, so install it using your favorite method. This worked for me:

sudo -H pip install boto3

Now you’re ready to run shell commands on AWS Lambda.

Usage

You can now execute shell commands in the AWS Lambda environment and see the output. This command shows us that Amazon has upgraded the AWS Lambda environment from Amazon Linux 2014.03 when it was launched, to 2015.03 today:

$ lambdash cat /etc/issue
Amazon Linux AMI release 2015.03
Kernel \r on an \m

Nodejs has been upgraded from v0.10.32 to v0.10.36:

$ lambdash node -v
v0.10.36

Here’s a command I use to occasionally check in on changes in Amazon’s awslambda nodejs framework that runs our Lambda functions:

mkdir awslambda-source
lambdash tar cvzf - -C /var/runtime/node_modules/awslambda . | 
  tar xzf - -C awslambda-source

For example, the most recent change was to “log only 256K of errorMessage into customer’s cloudwatch”. Good to know.

Cleanup

Deleting the lambdash CloudFormation stack removes all resources including the AWS Lambda function and the IAM role. You can do this by running this command in the GitHub repo:

./lambdash-uninstall

Or, you can delete the lambdash CloudFormation stack in the AWS Console.

Original article and comments: https://alestic.com/2015/06/aws-lambda-shell-2/

June 18, 2015 11:00 AM

June 15, 2015

Jono Bacon

New Forbes Column: From Piracy to Prosperity

My new Forbes column is published.

This article covers how technology has impacted how creatives, artists, and journalists create, distribute, and engage around their work.

For it I sat down with Mike Shinoda, co-founder of Grammy Award-winning Linkin Park, as well as Ali Velshi, host on Al Jazeera and former CNN Senior Business Correspondent.

Go and read the article here.

After that you may want to see my previous article where I interviewed Chris Anderson, founder of 3DR and author of The Long Tail, where we discuss building the open drone revolution. Read that article here.

by jono at June 15, 2015 04:49 PM

June 14, 2015

Elizabeth Krumbach

Weekends, street cars and red pandas

I’m home for the entire month of June! Looking back through my travel schedule, the last month I didn’t get on a plane was March of 2013. The travel-loving part of me is a little sad about breaking my streak, but given that it’s June and I’ve already given 8 presentations in 5 countries across 3 continents, I’m due for this break from travel. It’s not a break from work though: I’ve had to really hunker down on some projects I’m working on at work now that I have solid chunks of time to concentrate, and some serious due dates for my book are looming. I’ve also been tired, which prompted an extensive pile of blood work that had some troubling results that I’m now working with a specialist to get to the bottom of. I’m continuing to run and to improve my diet by eating more fresh, green things; both have traditionally helped bump my energy level because I’m treating my body better, but lately they just make me more tired. And ultimately, tired means some evenings I spend more time watching True Blood and The Good Wife than I should with all these book deadlines creeping up. Don’t tell my editor ;)

I’m also getting lots of kitty snuggles as I remain at home, and lots of opportunities to take cute kitty pictures.

I continue to take Saturdays off, which continues to be my primary burnout protection mechanism. I’ve continued to evolve what this day off means. It was originally inspired by the Jewish tradition of Shabbat; we practice Shabbat rituals in our home (candles, challah, etc.) and I continue to avoid work, but the definition of work is in flux for me. Early on, I’d still check “personal” email and social media, until I discovered that there’s no such thing, with my open source volunteer work, open source day job and personal life so intertwined. There have also recently been some considerable stresses related to my volunteer open source work, which I want a break from on my day off. So currently I work hard to avoid checking email and social media, even though it’s still a struggle. It’s caused me to learn how much of a slave I’ve become to my phone. It beeps, I leap for it. Having a day off has caused me to create discipline around my relationship with my phone, so even on days when I’m working, I’m less inclined to prioritize phone beeps over the work I’m currently engaged in, leading to a greater ability to focus. Sorry to people who randomly text or direct message me on Twitter/Facebook expecting an immediate response; it will rarely happen.

So currently, my Saturdays often include either:

  • Attending Synagogue services with MJ and having a lunch out together
  • Going to some museum, movie or cultural event with MJ
  • Staying home and reading, writing, catching up with some online classes or working on hobby projects

I had played around with avoiding computers entirely on Saturdays, but on home days I realized I’d get bored too easily if I was reading all day, and sometimes I’m really not in the mood for my offline activities. When I get bored, I end up napping or watching TV instead, neither of which is rejuvenating or satisfying, and I end up just feeling sad about wasting the day. So my criteria have shifted from simply “not work” to include fun, enriching projects that I likely don’t have time or energy for on my other six “working” days. I have struggled with whether these hobbies should be on my to do list or not, since putting them on my list adds a level of structure that can lead to stress, but my coping habit for task organization makes leaving them off a challenging mental exercise. Writing here in my blog also requires a computer, and these days off give me ample time for ideas to settle and finally some quiet time to get my thoughts in order and write without distraction. Though I do have to admit that buying a vintage mechanical typewriter has crossed my mind more than a few times. Which reminds me, have any recommendations? Aside from divorce lawyers and a bigger home in the event that I drive MJ crazy. I also watch videos associated with various electronics projects and online classes I’m taking for fun (Arduinos! History and anthropology!), so a computer or tablet is regularly involved there.

It’s still not perfect. My stress levels have been high this year and we’ve booked a weekend at a beautiful inn and spa in Sonoma next weekend to unplug away from the random tasks that come from spending our weekends at home. I’m counting down the hours.

Last weekend was a lot of fun though, even if I was still stressed. On Saturday we went on a Blackpool Boat Tram Tour along the F-line. I’ve been looking for an opportunity to ride on this “topless” street car for years, but the charters always conflicted with my travel schedule, until last weekend! MJ and I booked tickets and at 1:30PM on Saturday we were on our way down Market Street.

As the title of the tour suggests, these unusually styled street cars come from Blackpool, England, a town known for its seaside activities, including Blackpool Pleasure Beach, where they now have the first Wallace and Gromit theme park ride, Wallace & Gromit’s Thrill-O-Matic! They also have a tramway, which is where these cars came from, and California now has three of them – two functioning ones operated here in the city by MUNI and maintained by the Market Street Railway non-profit, which I’m a member of and which conducted this charter.

We met at 1:15 to pick up our tickets, browse through the little SF Railway Museum and capture some pre-travel photos of the boat tram.

Upon boarding, we took seats at the back of the street car. The tour was in two parts, half of it guided by a representative from Market Street Railway who gave some history of the transportation lines themselves as we glided up Market Street along the standard F-line until getting to Castro where a slightly different route was taken to turn back on to Market.

At the turnaround near Castro, the guides swapped places and we got a representative from San Francisco City Guides, who typically does walking tours of the city. As a local enthusiast he was able to give us details about the major landmarks along Market and up the Embarcadero as we made our way to Pier 39. I knew most of what both guides told us, but there were a few bits of knowledge I was excited to learn. I was also reminded of the ~12 minute A Trip Down Market Street, 1906, filmed just days before the 1906 earthquake that destroyed many of the buildings seen in the film. Fascinating stuff.

At Pier 39 we had the opportunity to get out of the car and take some pictures around it, including the obligatory pictures of ourselves!

The trip lasted a couple hours, and with the open top of the car I managed to get a bit of sunburn on my face, oops!

More photos from the tram tour can be found here: https://www.flickr.com/photos/pleia2/sets/72157654163687542

Sunday morning I took advantage of the de-stressing qualities of a visit to the zoo.

I finally got to see all three of the red pandas. It had been some time since I’d seen their exhibit, and last time only one of them was there. It was fun to see all three of them together, two of them climbing the trees (pictured below) and the third walking around the ground of the enclosure. I’m kind of jealous of their epic tree houses.

Also got to swing by the sea lions Henry and Silent Knight, with Henry playing king of the rock in the middle of their pool.

More photos here: https://www.flickr.com/photos/pleia2/sets/72157654194707041

In other miscellaneous life things, MJ and I made it out to see Mad Max: Fury Road recently. It’s been several months since I’d been to a theater, and probably over a year since MJ and I had gone to a movie together, so it was a nice change of pace. Plus, it was a fun, mind-numbing movie that took my mind off my ever-growing task list. MJ and I have also been able to spend several nice dinners together, including indulging in a Brazilian Steakhouse one evening and fondue another night. In spite of these things, with running, improved breakfast and lunch and mostly skipping desserts I’ve dropped 5lbs in the past month, which is not rapid weight loss but is being done in a way that’s sustainable without completely eliminating the things I love (including my craft beer hobby). Hooray!

I’ve cut back on events, sadly turning down invitations to local panels and presentations in favor of staying home and working on my book during my off-work hours. I did host an Ubuntu Hour this week though.

Next week I’m planning on popping over to a nearby Ubuntu/Juju Mine and Mingle. I’ll also be heading down to the south end of the east bay for an EBLUG meeting, where they’ve graciously offered to host space, time and expertise for an evening of discussing work on some servers that Partimus is planning on deploying in some of the schools we work with. It will be great to meet up and chat with some of the volunteers who I’ve thus far largely only worked with online, and to block off some of my own time for the raw technical tasks that Partimus needs to focus on but that I’ve had trouble finding time for.

I really am looking forward to that spa weekend, but for now I’m rounding out my relaxing Saturday and preparing for get-things-done Sunday!

by pleia2 at June 14, 2015 01:38 AM

June 08, 2015

Akkana Peck

Adventure Dental

[Adventure Dental] This sign, in Santa Fe, always makes me do a double-take.

Would you go to a dentist or eye doctor named "Adventure Dental"?

Personally, I prefer that my dental and vision visits are as un-adventurous as possible.

June 08, 2015 02:54 PM

June 03, 2015

Eric Hammond

Monitor an SNS Topic with AWS Lambda and Cronitor.io

get alerted when an expected event does NOT happen

Last week I announced the availability of a public SNS Topic that may be used to run AWS Lambda functions on a recurring schedule. To encourage folks to realize the implications of a free community service maintained by an individual, I named it the “Unreliable Town Clock”.

Even with this understanding, some folks in the AWS community have (again) placed their faith in me and are already starting to depend on the Unreliable Town Clock public SNS Topic to drive their own AWS Lambda functions and SQS queues, and I want to make sure this service is as reliable as I can reasonably make it.

Here are some of the steps I have taken to increase the reliability of the Unreliable Town Clock:

  1. Runs in a dedicated AWS account. This helps prevent human error and accidents when working on other projects.

  2. Uses restrictive IAM roles/policies and good security practices. EC2 security groups don’t allow any incoming connections, not even ssh. I destroyed the root AWS account password and there are no IAM users.

  3. An Auto Scaling group is used to trigger automatic instance re-launch if a running instance fails. In my tests, this takes a matter of minutes.

  4. Built reproducibly using a CloudFormation template. This means it’s easy to re-create in the event of a complete disaster, though it would still be bad if the SNS Topic disappeared, as clients would need to resubscribe.

  5. The SNS Topic itself is protected from deletion even if a delete request were somehow submitted for the CloudFormation stack.

  6. The SNS Topic is constantly monitored using AWS Lambda and Cronitor.io. The first delayed or missed chime will trigger alerts to a human and will keep alerting until corrected.

The rest of this article elaborates on this last point of protection.

Delayed/Missing SNS Message Monitoring and Alerting

Most monitoring and alerting services are designed to poll your resources and sound the alarm when the polled resource fails to respond, reports an error, or exceeds some threshold.

This works great for finding out when your web site is down or your server is unpingable. It doesn’t work so well for letting you know when your hourly cron job didn’t run, your ETL aborted mid-stream, or your expected daily email was not received.

And normal monitoring and alerting also can’t tell you when it’s been more than 15 minutes since the last message was published to your SNS Topic, which is exactly what I need to know in order to respond quickly to any failures of the Unreliable Town Clock that aren’t automatically handled by the AWS architecture.

Fortunately, this is exactly the type of monitoring and alerting that Cronitor is designed for. Here’s how I set it up:

  1. Sign up on Cronitor.io and create a new monitor (the first monitor is free with email alerts). In my case, I selected “Notify me if [time since run exceeds] [16] [minutes]”.

  2. Create a simple AWS Lambda function that does an HTTP GET on your monitor’s run URL (e.g., https://cronitor.link/d3x0/run). See the sample code below.

  3. Subscribe the AWS Lambda function to the SNS Topic. See example instructions on the Unreliable Town Clock post.

Now, if the SNS Topic goes longer than 16 minutes between chimes, I get personally alerted so I can go investigate and whip the Unreliable Town Clock back into shape.

Here’s some simplified AWS Lambda code that demonstrates how easy it is to ping a Cronitor.io monitor. The code I am running is slightly more involved with extra logging and parameterization of my monitor URL outside of the code, but this would do the job if you plugged in your own monitor run URL.

var request = require('request');
exports.handler = function(event, context) {
    // Ping the Cronitor monitor's "run" URL. Cronitor alerts a human
    // if too much time passes between pings.
    request('https://cronitor.link/d3x0/run',
            function(error, response, body) {
                // Report the result of the HTTP GET back to AWS Lambda
                context.done(error, body);
            }
    );
};

Disclaimer: I am not a nodejs expert. I just Google what I want to do and try Stack Overflow answers until it seems to work. Ideas for improvement welcomed.

I suspect that I should be able to do some similar monitoring and alerting with CloudWatch Metrics and CloudWatch Alarms, and I may eventually work this out, but I still like to have some monitoring managed by an external party who is taking responsibility to make sure their system is running and who will notify me when mine is not.
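
If you want to experiment with that approach, something along these lines might be a starting point. This is an untested sketch: the alarm and topic names are made up, and note that a missed chime produces missing data rather than a zero-valued data point, so the alarm may end up in INSUFFICIENT_DATA rather than ALARM, which is part of why I haven't pursued this yet:

# Alarm if fewer than 1 message is published to the SNS Topic
# in two consecutive 15-minute periods
aws cloudwatch put-metric-alarm \
  --alarm-name unreliable-town-clock-silent \
  --namespace AWS/SNS \
  --metric-name NumberOfMessagesPublished \
  --dimensions Name=TopicName,Value=YOURTOPICNAME \
  --statistic Sum \
  --period 900 \
  --evaluation-periods 2 \
  --threshold 1 \
  --comparison-operator LessThanThreshold \
  --alarm-actions YOURALERTTOPICARN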

I rarely plug non-AWS services on this blog, but I love the simple design and powerful functionality of Cronitor.io and think the service fills an important need. In my brief time using the service, August and Shane have been incredibly helpful, generous, and responsive to suggestions for improvements.

If you become a paying customer, don’t let me stop you from suggesting that Cronitor support direct SNS Topic monitoring (eliminating the AWS Lambda step above) if you think that would be something you would use ;-)

Oh, and in case it wasn’t completely obvious, you can use the procedure described in this article to directly monitor the Unreliable Town Clock public SNS Topic yourself and get your own alerts if it ever misses a chime. Or, you can use it to monitor the reliability of the AWS Lambda function you subscribe to the SNS Topic, making sure that it completes successfully as often as it is supposed to.

Original article and comments: https://alestic.com/2015/06/aws-lambda-sns-cronitor/

June 03, 2015 10:26 AM

June 02, 2015

Akkana Peck

Piñon cones!

[Baby piñon cones] I've been having fun wandering the yard looking at piñon cones. We went all last summer without seeing cones on any of our trees, which seemed very mysterious ... though the book I found on piñon pines said they follow a three-year cycle. This year, nearly all of our trees have little yellow-green cones developing.

[piñon spikes with no cones] A few of the trees look like most of our piñons last year: long spikes but no cones developing on any of them. I don't know if it's a difference in the weather this year, or that three-year cycle I read about in the book. I also see on the web that there's a 2-7 year interval between good piñon crops, so clearly there are other factors.

It's going to be fun to see them develop, and to monitor them over the next several years. Maybe we'll actually get some piñon nuts eventually (or piñon jays to steal the nuts). I don't know if baby cones now means nuts later this summer, or not until next summer. Time to check that book out of the library again ...

June 02, 2015 09:20 PM

May 30, 2015

Elizabeth Krumbach

Tourist in Vancouver

While in Vancouver for the OpenStack Summit, I made some time to visit some of the sights as well. Unfortunately I wasn’t able to do as much as I’d have liked: when I arrived early on Sunday I was sick and had to take it easy, so I missed the Women of OpenStack boat tour and happy hour. Then, after a stunning week of sunny weather, the Saturday afternoon following the summit brought rain. But I did get out on Saturday to explore some anyway.

First thing on Saturday morning I laced up my running shoes and took advantage of the beautiful path around the waterfront to go for a run. Of all the places away from home I’ve run, there’s been a common theme: water. From Perth to Miami, and even here at home in San Francisco, there’s something about running along the water that defies the exhaustion otherwise brought on by travel and inspires me to get out there. It was a great run, one of my longer ones in recent memory.

While on my run I got to see the sea planes one last time. The next time I visit Vancouver, taking one of them to Victoria will definitely be on my list; I knew I’d regret not taking the time on Saturday to do it, and I totally do! Vancouver isn’t that far away, though, so I’ll have my chance some other time.

I then packed up and checked out of my hotel in time to meet a couple colleagues for lunch, and then I was off to Stanley Park to visit the Vancouver Aquarium. I’ve been to a lot of aquariums, and this one is definitely in my top 5. They had a Sea Monsters Revealed exhibit that I visited first; very similar to the Bodies exhibits that show the insides of people, this one showed the insides of sea animals. Gross and cool.

Fish, frogs, jellyfish, but the big draw for me is always the marine mammals. I continue to have mixed feelings about keeping large animals like belugas in captivity, but they were amazing to see. While I got a glimpse of one of the dolphins from an underwater tank, the above-ground section was closed due to the other recovering from surgery, which I later learned was sadly unsuccessful; she passed away the next day. Then of course there were the sea otters, oh the adorable sea otters! I also got to see the penguins get some food from one of their caretakers, after which they were quite lively, waddling around their habitat and going for swims.

Great visit, highly recommended. The rest of Stanley Park was beautiful too, I should have taken more pictures!

More photos from the aquarium here: https://www.flickr.com/photos/pleia2/sets/72157651049343264

I then headed back down to Gastown, the historic district of Vancouver, for some shopping and browsing. I picked up some lovely First Nations-made goodies as well as some maple coffee, which may be a tourist gimmick, but it is one of the few types of coffee I’ve grown accustomed to drinking black, and it’s tricky to find south of the border. Gastown is also where the really cool steam-powered clock lives. While not historic, it is very steampunk.

And with that, the skies opened up and it began raining. I had planned for this and wore my new raincoat supplied as the gift to OpenStack attendees (nice thinking in Vancouver!). It was good to break it in with some nice Vancouver rain, but I did get a bit soggy where I wasn’t covered by the raincoat while walking back to the hotel. I then enjoyed a drink with a colleague who was also escaping the rain and we enjoyed chatting and I wrote some post cards before heading to the airport.

by pleia2 at May 30, 2015 05:52 PM

May 29, 2015

Akkana Peck

Command-line builds for Android using ant

I recently needed to update an old Android app that I hadn't touched in years. My Eclipse setup is way out of date, and I've been hearing about more and more projects switching to using command-line builds. I wanted to ditch my fiddly, difficult to install Eclipse setup and switch to something easier to use.

Some of the big open-source packages, like OsmAnd, have switched to gradle for their Java builds. So I tried to install gradle -- and on Debian, apt-get install gradle wanted to pull in a total of 153 packages! Maybe gradle wasn't the best option to pursue.

But there's another option for command-line android builds: ant. When I tried apt-get install ant, since I already have Java installed (I think the relevant package is openjdk-7-jdk), it installed without needing a single additional package. For a small program, that's clearly a better way to go!

Then I needed to create a build directory and move my project into it. That turned out to be fairly easy, too -- certainly compared to the hours I spent setting up an Eclipse environment. Here's how to set up your ant Android build:

First install the Android "Stand-alone SDK Tools" from Installing the Android SDK. This requires a fair amount of clicking around, accepting licenses, and waiting for a long download.

Now install an SDK or two. Use android sdk to install new SDK versions, and android list targets to see what versions you have installed.

Create a new directory for your project, cd into it, and then:

android create project --name YourProject --path . --target android-19 --package tld.yourdomain.YourProject --activity YourProject
Adjust the Android target for the version you want to use.

When this is done, type ant with no arguments to make sure the directory structure was created properly. If it doesn't print errors, that's a good sign.

Check that local.properties has sdk.dir set correctly. It should have picked that up from your environment.

There will be a stub source file in src/tld/yourdomain/YourProject.java. Edit it as needed, or, if you're transferring a project from another build system such as eclipse, copy the existing .java files to that directory.

If you have custom icons for your project, or other resources like layout or menu files, put them in the appropriate directories under res. The directory structure is the same as in eclipse, but unlike an eclipse build, you can edit the files at any time without the build mysteriously breaking.

Signing your app

Now you'll need a key to sign your app. Eclipse generates a debugging key automatically, but ant doesn't. It's better to use a real key anyway, since debugging keys expire and need to be regenerated periodically.

If you don't already have a key, generate one with:

keytool -genkey -v -keystore my-key.keystore -alias mykey -keyalg RSA -sigalg SHA1withRSA -keysize 2048 -validity 10000
It will ask you for a password; be sure to use one you won't forget (or record it somewhere). You can use any filename you want instead of my-key.keystore, and any alias you want instead of mykey.

Now create a file called ant.properties containing these two lines:

key.store=/path/to/my-key.keystore
key.alias=mykey
Some tutorials tell you to put this in build.properties, but that's outdated and no longer works.

If you forget your key alias, you can find out with this command and the password:

keytool -list -keystore /path/to/my-key.keystore

Optionally, you can also include your key's password:

key.store.password=xxxx
key.alias.password=xxxx
If you don't, you'll be prompted twice for the password (which echoes on the terminal, so be aware of that if anyone is bored enough to watch over your shoulder as you build packages. I guess build-signing keys aren't considered particularly high security). Of course, you should make sure not to include both the private keystore file and the password in any public code repository.

Building

Finally, you're ready to build!

ant release

If you get an error like:

AndroidManifest.xml:6: error: Error: No resource found that matches the given name (at 'icon' with value '@drawable/ic_launcher').
it's because older eclipse builds wanted icons named icon.png, while ant wants them named ic_launcher.png. You can fix this either by renaming your icons to res/drawable-hdpi/ic_launcher.png (and the same for res/drawable-ldpi and -mdpi), or by removing everything under bin (rm -rf bin/*) and then editing AndroidManifest.xml. If you don't clear bin before rebuilding, bin/AndroidManifest.xml will take precedence over the AndroidManifest.xml in the root, so you might have to edit both files.

After ant release, your binary will be in bin/YourProject-release.apk. If you have an adb connection, you can (re)install it with: adb install -r bin/YourProject-release.apk

Done! So much easier than eclipse, and you can use any editor you want, and check your files into any version control system.

That just leaves the coding part. If only Java development were as easy as Python or C ...

May 29, 2015 02:52 AM

May 27, 2015

Jono Bacon

#ISupportCommunity

So the Ubuntu Community Council has asked Jonathan Riddell to step down as a leader in the Kubuntu community. The reasoning for this can be broadly summarized as “poor conduct”.

Some members of the community have concluded that this is something of a hatchet job from the Community Council, that Jonathan’s insistence to get answers to tough questions (e.g. licensing, donations) has resulted in the Community Council booting him out.

I don’t believe this is true.

Just because the Community Council has not provided an extensive docket of evidence behind their decision does not equate to wrong-doing. It does not equate to corruption or malpractice.

I do sympathize with the critics though. I spent nearly eight years pretty close to the politics of Ubuntu and when I read the Community Council’s decision I understood and agreed with it. For all of Jonathan’s tremendously positive contributions to Kubuntu, I do believe his conduct and approach has sadly had a negative impact on parts of our community too.

This has nothing to do with the questions he raised, it was the way he raised them, and the inference and accusations he made in raising such questions. We can’t have our leaders behaving like that: it sets a bad example.

As such, I understood the Community Council’s decision because I have seen these politics both up front and behind the scenes due to my close affiliation with Ubuntu and Canonical. For those people who haven’t been so close to the coalface though, this decision from the CC feels heavy-handed, without due evidence, and emotive in response.

Thus, in conclusion, I don’t believe the CC has acted inappropriately in making this decision, but I do believe that their decision needs to be illustrated further. The decision needs to feel complete and authoritative; until we see further material, we are not going to improve the situation if everyone assumes the Community Council is some shadowy cabal against Jonathan and Kubuntu.

We are a community. We have more in common than what differs between us. Let’s put the hyperbole to one side and have a conversation about how we resolve this. There is an opportunity for a great outcome here: for better understanding and further improvement, but the first step is everyone understanding the perspectives of the people with opposing viewpoints.

As such #ISupportCommunity; our wider Ubuntu and Kubuntu family. Let’s work together, not against each other.

by jono at May 27, 2015 05:25 PM

May 26, 2015

Eric Hammond

Schedule Recurring AWS Lambda Invocations With The Unreliable Town Clock (UTC)

public SNS Topic with a trigger event every quarter hour

Scheduled execution of AWS Lambda functions on an hourly/daily/etc. basis has been a frequently requested feature ever since the day Amazon introduced the service at AWS re:Invent 2014.

Until Amazon releases a reliable, premium cron feature for AWS Lambda, I’m offering a community-built alternative which may be useful for some non-critical applications.

us-east-1:

arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

us-west-2:

arn:aws:sns:us-west-2:522480313337:unreliable-town-clock-topic-N4N94CWNOMTH

Background

Beyond its event-driven convenience, the primary attraction of AWS Lambda is eliminating the need to maintain infrastructure to run and scale code. The AWS Lambda function code is simply uploaded to AWS and Amazon takes care of providing systems to run on, keeping it available, scaling to meet demand, recovering from infrastructure failures, monitoring, logging, and more.

The available methods to trigger AWS Lambda functions already include some powerful and convenient events like S3 object creation, DynamoDB changes, Kinesis stream processing, and my favorite: the all-purpose SNS Topic subscription.

Even so, there is a glaring need for code that wants to run at regular intervals: time-triggered, recurring, scheduled event support for AWS Lambda. Attempts to do this yourself generally end up requiring you to maintain your own supporting infrastructure, when your original goal was to eliminate the infrastructure worries.

Unreliable Town Clock (UTC)

The Unreliable Town Clock (UTC) is a new, free, public SNS Topic (Amazon Simple Notification Service) that broadcasts a “chime” message every quarter hour to all subscribers. It can send the chimes to AWS Lambda functions, SQS queues, and email addresses.

You can use the chime attributes to run your code every fifteen minutes, or only run your code once an hour (e.g., when minute == "00") or once a day (e.g., when hour == "00" and minute == "00") or any other series of intervals.

You can even subscribe a function you want to run only once at a specific time in the future: have the function ignore all invocations until the target time has passed. When it is time, it can perform its job, then unsubscribe itself from the SNS Topic, as sketched below.
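Here’s a minimal sketch of that run-once pattern, in illustrative Python (adapt it to your function’s runtime). TARGET_UTC and do_the_job() are hypothetical, and the function’s IAM role would need sns:Unsubscribe permission for the final step:

import json
import boto3

TARGET_UTC = "2015-06-01 12:00"  # hypothetical: run at or after this time

def handler(event, context):
    record = event["Records"][0]
    message = json.loads(record["Sns"]["Message"])
    if message.get("type") != "chime":
        return  # ignore non-chime messages
    # Chime timestamps look like "2015-05-26 02:15 UTC"; comparing the
    # "YYYY-MM-DD HH:MM" prefix as a string sorts chronologically.
    if message["timestamp"][:16] < TARGET_UTC:
        return  # not time yet; wait for a later chime
    do_the_job()
    # Done: remove this function's own subscription so it never runs again.
    subscription_arn = record["EventSubscriptionArn"]
    region = subscription_arn.split(":")[3]
    boto3.client("sns", region_name=region).unsubscribe(
        SubscriptionArn=subscription_arn)

def do_the_job():
    print("one-time task goes here")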

Connecting your code to the Unreliable Town Clock is fast and easy. No application process or account creation is required:

Example: AWS Lambda Function

These commands subscribe an AWS Lambda function to the Unreliable Town Clock:

# AWS Lambda function
lambda_function_name=YOURLAMBDAFUNCTION
lambda_function_region=us-east-1
account=YOURACCOUNTID
lambda_function_arn="arn:aws:lambda:$lambda_function_region:$account:function:$lambda_function_name"

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Allow the SNS Topic to invoke the AWS Lambda function
aws lambda add-permission \
  --function-name "$lambda_function_name"  \
  --action lambda:InvokeFunction \
  --principal sns.amazonaws.com \
  --source-arn "$sns_topic_arn" \
  --statement-id $(uuidgen)

# Subscribe the AWS Lambda function to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol lambda \
  --notification-endpoint "$lambda_function_arn"

Example: Email Address

These commands subscribe an email address to the Unreliable Town Clock (useful for getting the feel, testing, and debugging). Note that SNS sends a confirmation message to the address, and delivery starts only after you confirm the subscription:

# Email address
email=YOUREMAIL@YOURDOMAIN

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Subscribe the email address to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol email \
  --notification-endpoint "$email"

Example: SQS Queue

These commands subscribe an SQS queue to the Unreliable Town Clock:

# SQS Queue
sqs_queue_name=YOURQUEUE
account=YOURACCOUNTID
sqs_queue_arn="arn:aws:sqs:us-east-1:$account:$sqs_queue_name"
sqs_queue_url="https://queue.amazonaws.com/$account/$sqs_queue_name"

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Allow the SNS Topic to post to the SQS queue
sqs_policy='{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "*" },
    "Action": "sqs:SendMessage",
    "Resource": "'$sqs_queue_arn'",
    "Condition": {
      "ArnEquals": {
        "aws:SourceArn": "'$sns_topic_arn'"
}}}]}'
sqs_policy_escaped=$(echo "$sqs_policy" | perl -pe 's/"/\\"/g')
aws sqs set-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attributes '{"Policy":"'"$sqs_policy_escaped"'"}'

# Subscribe the SQS queue to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol sqs \
  --notification-endpoint "$sqs_queue_arn"
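
To verify that chimes are arriving in the queue, you can poll it with the CLI (the long-poll wait is optional):

# Optional: check the queue for a delivered chime message
aws sqs receive-message \
  --queue-url "$sqs_queue_url" \
  --wait-time-seconds 20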

Chime message

The chime message includes convenient attributes like the following:

{
  "type" : "chime",
  "timestamp": "2015-05-26 02:15 UTC",
  "year": "2015",
  "month": "05",
  "day": "26",
  "hour": "02",
  "minute": "15",
  "day_of_week": "Tue",
  "unique_id": "2d135bf9-31ba-4751-b46d-1db6a822ac88",
  "region": "us-east-1",
  "sns_topic_arn": "arn:aws:sns:...",
  "reference": "...",
  "support": "...",
  "disclaimer": "UNRELIABLE SERVICE {ACCURACY,CONSISTENCY,UPTIME,LONGEVITY}"
}

You should only run your code’s primary function when the message type == "chime".

Other values are reserved for other message types which may include things like service notifications or alerts. Those message types may have different attributes.

It might make sense to forward non-chime messages to a human (e.g., post to an SNS Topic where you have an email address subscribed).
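
As a rough sketch of both ideas, again in illustrative Python: this handler does its primary work once a day at midnight UTC and republishes anything that isn’t a chime to a hypothetical alert topic of your own. ALERT_TOPIC_ARN and real_work() are placeholders, and the function’s role would need sns:Publish on that topic:

import json
import boto3

ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:YOURACCOUNTID:your-alert-topic"

def handler(event, context):
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    if message.get("type") != "chime":
        # Unknown message type: forward it to a human and stop.
        boto3.client("sns").publish(
            TopicArn=ALERT_TOPIC_ARN,
            Subject="Unreliable Town Clock non-chime message",
            Message=json.dumps(message))
        return
    if message["hour"] == "00" and message["minute"] == "00":
        real_work()  # primary job, once a day at midnight UTC

def real_work():
    print("daily task goes here")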

Regions

The Unreliable Town Clock is currently available in the following AWS Regions:

  • us-east-1
  • us-west-2

You may create AWS Lambda functions in any AWS accounts in any AWS regions and subscribe them to these SNS Topics.

Problems in one region will not affect the Unreliable Town Clock functionality in the other region. You may subscribe to both topics for additional reliability. [There was an AWS SNS us-east-1 outage on 2015-07-31 that caused the Unreliable Town Clock in that region to not broadcast chimes for almost 3 hours.]

Cost

The Unreliable Town Clock is free for unlimited “lambda” and “sqs” subscriptions.

Yes. Unlimited. Amazon takes care of the scaling and does not charge for sending to these endpoints through SNS.

You may currently add “email” subscriptions, especially to test and see the message format, but if there are too many email subscribers, new subscriptions may be disabled, as it costs the sending account $0.70/year for each address at the current chime frequency.

You are naturally responsible for any charges that occur in your own accounts.

Running an AWS Lambda function four times an hour for a year results in about 35,000 invocations (4 x 24 x 365 = 35,040), which is negligible if not free, but you need to take care what your functions do and what resources they consume, as they are running in your AWS account.

Source

The source code for the infrastructure of the Unreliable Town Clock is available on GitHub

https://github.com/alestic/alestic-unreliable-town-clock

You are welcome to run your own copy, but note that the current code marks the SNS Topic as public so that anybody can subscribe.

Support

The following Google Group mailing list can be used for discussion, questions, enhancement requests, and alerts about problems.

http://groups.google.com/d/forum/unreliable-town-clock

If you plan to use the Unreliable Town Clock, you should subscribe to this mailing list so that you receive service notifications (e.g., if the public SNS Topic ARN is going to change).

Disclaimer

The Unreliable Town Clock service is intended but not guaranteed to be useful. As the name explicitly states, you should consider it unreliable and should not use it for anything you consider important.

Here are some, but not all, of the dimensions in which it is unreliable:

  • Accuracy: The times messages are sent may not be the true times they indicate. Messages may be delayed, get sent early, or be duplicated.

  • Uptime: Chime messages may be skipped for short or long periods of time.

  • Consistency: The formats or contents of the messages may change without warning.

  • Longevity: The service may disappear without warning at any time.

There is no big company behind this service, just a human being. I have experience building and supporting public services used by individuals, companies, and other organizations around the world, but I’m still just one fellow, and this is just an experimental service for the time being.

Comments

What are you thinking of using recurring AWS Lambda invocations for?

Any other features you would like to see?

[Update 2015-07-19: Ok to subscribe across AWS regions]

[Update 2015-07-31: Added second public SNS topic in us-west-2 after AWS SNS outage in us-east-1.]

Original article and comments: https://alestic.com/2015/05/aws-lambda-recurring-schedule/

May 26, 2015 09:01 AM

May 25, 2015

Eric Hammond

Debugging AWS Lambda Invocations With An Echo Function

As I create architectures that include AWS Lambda functions, I find there are situations where I just want to know that the AWS Lambda function is getting invoked and to review the exact event data structure that is being passed in to it.

I found that a simple “echo” function can be dropped in to copy the AWS Lambda event to the console log (CloudWatch Logs). It’s easy to review this output to make sure the function is getting invoked at the right times and with the right data.

There are probably dozens of debug/echo AWS Lambda functions floating around out there, but for my own future reference, I have created a GitHub repo with a four line echo function that does the trick for me. I’ve included a couple scripts to install and uninstall the AWS Lambda function in an account, including the required IAM role and policies.

Here’s the repo for the lambda-echo AWS Lambda function:

https://github.com/alestic/lambda-echo

The README.md provides instructions on how to install and test.

Note: Once you install an AWS Lambda function, there is no reason to delete it if you think it might be useful in the future. It costs nothing to let Amazon store it for you and keep it available for when you want to run it again.

Amazon has indicated that they may prune AWS Lambda functions that go unused for long periods of time, but I haven’t seen this happen in practice yet.

Is there a standard file structure yet for a directory with AWS Lambda function source and the related IAM role/policies? Should I convert this to the format expected by Mitch Garnaat’s kappa perhaps?

Original article and comments: https://alestic.com/2015/05/aws-lambda-echo/

May 25, 2015 08:03 AM

May 24, 2015

Elizabeth Krumbach

Liberty OpenStack Summit days 3-5

Summiting continued! The final three days of the conference offered two days of OpenStack Design Summit discussions and working sessions on specific topics, and Friday was spent doing a contributors meetup so we could have face time with people we’re working with on projects.

Wednesday began with a team breakfast, where over 30 of us descended upon a breakfast restaurant and had a lively morning. Unfortunately it ran a bit long and made us a bit late for the beginning of summit stuff, but the next Infrastructure work session was fully attended! The session sought to take some next steps with our activity tracking mechanisms, none of which are currently part of the OpenStack Infrastructure. Currently several different types of stats are being collected: reviewstats, which is hosted by a community member and focuses specifically on reviews; the stats produced by Bitergia (here), which are somewhat generic but help compare OpenStack to other open source projects; and Stackalytics, which is crafted specifically for the OpenStack community. There seems to be value in hosting various metric types, mostly so comparisons can be made across platforms if they differ in any way. The consensus of the session was to move forward first with bringing Stackalytics into our infrastructure, since so many projects find such value in it. Etherpad here: YVR-infra-activity-tracking


With this view from the work session room, it’s amazing we got anything done

Next up was QA: Testing Beyond the Gate. In OpenStack there is a test gate that all changes must pass in order for a change to be merged. In the past cycle periodic and post-merge tests have also been added, but it’s been found that if a code merge isn’t dependent upon these passing, not many people pay attention to these additional tests. The result of the session is a proposed dashboard for tracking these tests, so that there’s an easier view into what they’re doing and whether they’re failing, empowering developers to fix them up. Tracking of third party testing in this, or a similar, tracker was also discussed as a proposal once the infra-run tests are being accounted for. Etherpad here: YVR-QA-testing-beyond-the-gate

The QA: DevStack Roadmap session covered some of the general cleanup that typically needs to be done in DevStack, but then also went into some of the broader action items, including improving the reliability of the CentOS tests run against it that are currently non-voting, pulling some things out of DevStack to support them as plugins as we move into a Big Tent world, and working out how to move forward with Grenade. Etherpad here: YVR-QA-Devstack-Roadmap

I then attended QA: QA in the Big Tent. In the past cycle, OpenStack dropped the long process for being accepted as an official OpenStack project and streamlined it so that competing technologies are now all in the mix; we’re calling it the Big Tent, as we’re now including everyone. This session focused on how to support the QA needs now that OpenStack is not just a slim core of a few projects. The general idea from a QA perspective is that they can continue to support the things-everyone-uses (nova, neutron, glance… an organically evolving list) and improve pluggable support for projects beyond that so they can help themselves to the QA tools at their disposal. Etherpad here: YVR-QA-in-the-big-tent

With sessions behind me, I boarded a bus for the Core Reviewer Party, hosted at the Museum of Anthropology at UBC. As party venues go, this was a great one. The museum was open for us to explore, and they also offered tours. The main event took place outside, where they served design-your-own curry seafood dishes, bison, cheeses and salmon. Of course no OpenStack event would be complete without a few bars around serving various wines and beer. There was an adjacent small building where live music was playing, and there was a lot of space to walk around, catch the sunset and enjoy some gardens. I spent much of my early evening with friends from Time Warner Cable, and rounded things off with several of my buddies from HP. This ended up being a get-back-after-midnight event for me, but it was totally worth it to spend such a great time with everyone.

Thursday morning kicked off with a series of fishbowl sessions where the Infrastructure team was discussing projects we have in the works. First up was Infrastructure: Zuul v3. Zuul is our pipeline-oriented project gating system, which currently works by facilitating the running of tests and automated tasks in response to Gerrit events. Right now it sends jobs off to Gearman for launching via Jenkins to our fleet of waiting nodes, but we’re really using Jenkins as a shim here, not really taking advantage of the built-in features that Jenkins offers. We’re also in need of a system that better supports multi-tenancy and multi-node jobs and which can scale as OpenStack continues to grow, particularly with Big Tent. This session discussed the end game of phasing out Jenkins in favor of a more Zuul-driven workflow and more immediate changes that may be made to Nodepool and smaller projects like Zuul-merger to drive our vision. Etherpad here: YVR-infra-zuulv3

Everyone loves bug reporting and task tracking, right? In the next session, Infrastructure: Task tracking, that was our topic. We did an experiment with the creation of Storyboard as our homebrewed solution to bug and task tracking, but in spite of valiant efforts by the small team working on it, they were unable to gain more contributors and the job was simply too big for the size of the team doing the work. As a result, we’re now back to looking at solutions other than Canonical’s hosted Launchpad (which is currently used). The session went through some basic evaluation of a few tools, and at the end there was some consensus to work toward bringing up a more battle-hardened and Puppetized instance of Maniphest (from Phabricator) so that teams can see if it fits their needs. Etherpad here: YVR-infra-task-tracking

The morning continued with an Infrastructure: Infra-cloud session. The Infrastructure team has about 150 machines in a datacenter that have been donated to us by HP. The session focused on how we can put these to use as Nodepool instances by running OpenStack on our own and adding that “infra-cloud” to the providers in Nodepool. I’m particularly interested in this, given some of my history with getting TripleO into testing (so I have deployed OpenStack many, many times!) and in general eager to learn even more about production OpenStack deployments. So it looks like I’ll be providing Infra-brains to Clint Byrum, who is otherwise taking the lead here. To keep in sync with other things we host, we’ll be using Puppet to deploy OpenStack, so I’m thankful for the expertise of people like Colleen Murphy who just joined our team to help with that. Etherpad here: YVR-infra-cloud

Next up was the Infrastructure: Puppet testing session. It was great to have some of the OpenStack Puppet folks in the room so they could talk some about how they’re using beaker-rspec in our infra for testing the OpenStack modules themselves. Much of the discussion centered around whether we want to follow their lead, or do something else, leveraging our current system of node allocation to do our own module testing. We also have a much-commented-on spec up for proposal here. The result of the discussion was that it’s likely that we’ll just follow the lead of the OpenStack Puppet team. Etherpad here: kilo-infra-puppet-testing

That afternoon we had another Infrastructure: Work session where we focused on the refactor of portions of the system-config OpenStack module puppet scripts, and some folks worked on standing up the testing infrastructure that was talked about earlier. I took the opportunity to do some reviews of the related patches and to help a new contributor do some reviews – she even submitted a patch that was merged the next morning! Etherpad for the work session here: YVR-infra-puppet-openstackci

The last session I attended that day was QA: Liberty Priorities. It wasn’t one I strictly needed to be in, but I hadn’t attended a session in room 306 yet, and it was the famous gosling room! The room had a glass wall that looked out onto a roof where a couple of geese had their babies, and they would routinely walk by and interrupt the session because everyone would stop, coo and take pictures of them. So I finally got to see the babies! The actual session collected the pile of to-do list items generated at the summit, which I got roped into helping with, and prioritized them. Oh, and they gave me a task to help with. I just wanted to see the geese! Etherpad with the priorities is here: YVR-QA-Liberty-Priorities


Photo by Thierry Carrez (source)

Thursday night I ended up having dinner with the moderator of our women of OpenStack panel, Beth Cohen. We went down to Gastown to enjoy a dinner of oysters and seafood and had a wonderful time. It was great to swap tech (and women in tech) stories and chat about our work.

Friday! The OpenStack conference itself ended on Thursday, so it was just ATCs (Active Technical Contributors) attending for the final day of the Design Summit. So things were much quieter and the agenda was full of contributors meetups. I spent the day in the Infrastructure, QA and Release management contributors meetup. We had a long list of things to work on, but I focused on the election tooling, which I followed up with on list and then later had a chat with the author of the proposed tooling. My afternoon was spent working on the translations infrastructure with Steve Kowalik, who works with me on OpenStack infra, and Carlos Munoz of the Zanata team. We were able to work through the outstanding Zanata bugs and make some progress on how we’re going to tackle everything. It was a productive afternoon, and it’s always a pleasure to get together with the folks I work with online every day.

That evening, as we left the closing conference center, I met up with several colleagues for an amazing sushi dinner in downtown Vancouver. A perfect, low-key ending to an amazing event!

by pleia2 at May 24, 2015 02:15 AM

May 21, 2015

Elizabeth Krumbach

Liberty OpenStack Summit day 2

My second day of the OpenStack summit came early with the Women of OpenStack working breakfast at 7AM. It kicked off with a series of lightning talks covering impostor syndrome, growing as a technical leader (get yourself out there, ask questions) and suggestions from a tech start-up founder about being an entrepreneur. From there we broke up into groups to discuss what we’d like to see from the Women of OpenStack group in the next year. The big take-aways were around mentoring women who are new to our community as they start to get involved with all the OpenStack tooling, and more generally giving voice to the women in our community.

Keynotes kicked off at 9AM with Mark Collier announcing the next OpenStack Summit venues: Austin for the spring 2016 summit and Barcelona for the fall 2016 summit. He then went into a series of chats and demos related to using containers, which may be the Next Big Thing in cloud computing. During the session we heard from a few companies who are already using OpenStack with containers (mostly Docker and Kubernetes) in production (video). The keynotes continued with one by Intel, where the speaker took time to talk about how valuable feedback from operators has been in the past year, and appreciation for the new diversity working group (video). The keynote from EBay/Paypal showed off the really amazing progress they’ve made with deploying OpenStack, with it now running on over 300k cores and pretty much powering Paypal at this point (video). Red Hat’s keynote focused on customer engagement as OpenStack matures (video). The keynotes wrapped up with one from NASA JPL, which mostly talked about the awesome Mars projects they’re working on and the massive data requirements therein (video).


OpenStack at EBay/Paypal

Following keynotes, Tuesday really kicked off the core OpenStack Design Summit sessions, where I focused on a series of Cross Project Workshops. First up was Moving our applications to Python 3. This session focused on the migration of Python 3 for functional and integration testing in OpenStack projects now that Oslo libraries are working in Python 3. The session mostly centered around strategy, how to incrementally move projects over and the requirements for the move (2.x dependencies, changes to Ubuntu required to effectively use Python 3.4 for gating, etc). Etherpad here: liberty-cross-project-python3. I then attended Functional Testing Show & Tell which was a great session where projects shared their stories about how they do functional (and some unit) testing in their projects. The Etherpad for this one is super valuable for seeing what everyone reports, it’s available here: liberty-functional-testing-show-tell.

My Design Summit sessions were broken up nicely with a lunch with my fellow panelists, and then the Standing Tall in the Room – Sponsored by the Women of OpenStack panel itself at 2PM (video). It was wonderful to finally meet my fellow panelists in person; the session itself was well-attended and we got a lot of positive feedback from it. I tackled a question about shyness with regard to giving presentations here at the OpenStack Summit, where I pointed at a webinar about submitting a proposal that the Women of OpenStack published in January. I also talked about difficulties related to the first time you write to the development mailing list, participate on IRC and submit code for review. I used an example of having to submit 28 revisions of one of my early patches, and audience member Steve Martinelli helpfully tweeted about a 63-revision change. Diving into all these things helps, as does supporting the ideas of and doing code review for others in your community. Of course my fellow panelists had great things to say too; watch the video!


Thanks to Lisa-Marie Namphy for the photo!

Panel selfie by Rainya Mosher

Following the panel, it was back to the Design Summit. The In-team scaling session was an interesting one with regard to metrics. We’ve learned that regardless of project size, socially within OpenStack it seems difficult for any project to rise above 14 core reviewers while keeping enough common culture, focus and quality. The solutions presented during the session tended to be heavy on technology (changes to ACLs, splitting up the repo among trusted sub-groups). It’ll be interesting to see how the scaling actually pans out, as there seem to be many more social and leadership solutions to the problem of patches piling up and not having enough core folks to review them. There was also some discussion about the specs process, but the problems and solutions seem to vary heavily between teams, so it seemed unlikely that a single solution to unprocessed specs would work for every team, though the process does often seem valuable for certain things. Etherpad here: liberty-cross-project-in-team-scaling.

My last session of the day was OpenStack release model(s). Changing the time-based release cycle would require broader participation, so much of the discussion centered around the ability for projects to independently do intermediary releases outside of the release cycle and how that could be supported, but I think the jury is still out on a solution there. There was also talk about how to generally handle release tracking, as it’s difficult to predict what will land, so much so that people have stopped relying on the predictions, and that bled into a discussion about release content reporting (release changelogs). In all, an interesting session with some good ideas about how to move forward. Etherpad here: liberty-cross-project-release-models.

I spent the evening with friends and colleagues at the HP+Scality hosted party at Rocky Mountaineer Station. BBQ, food trucks and getting to see non-Americans/non-Canadians try s’mores for the first time, all kinds of fun! Fortunately I managed to make it back to my hotel at a reasonable hour.

by pleia2 at May 21, 2015 10:03 PM

May 20, 2015

Elizabeth Krumbach

Liberty OpenStack Summit day 1

This week I’m at the OpenStack Summit. It’s the most wonderful, exhausting and valuable-to-my-job event I go to, and it happens twice a year. This time it’s being held in the beautiful city of Vancouver, BC, and the conference venue is right on the water, so we get to enjoy astonishing views throughout the day.


OpenStack Summit: Clouds inside and outside!

Jonathan Bryce, Executive Director of the OpenStack Foundation, kicked off the event with an introduction to the summit, the success OpenStack has built in the Process, Store and Move digital economy, and some announcements, among which was the success found with federated identity support in Keystone, where Morgan Fainberg, PTL of Keystone, helped show off a demonstration. The first company keynote was presented by Digitalfilm Tree, who did a really fun live demo of shooting video at the summit here in Vancouver, using their OpenStack-powered cloud so it was accessible in Los Angeles for editorial review, and then retrieving and playing the resulting video. They shared that a recent show shot in Vancouver used this very process for the daily editing, and that they had previously used courier services and staff-hopping-on-planes to do the physical moving of digital content because it was too much for their previous systems. Finally, Comcast employees rolled onto the stage on a couch to chat about how they’ve expanded their use of OpenStack since presenting at the summit in Portland, Oregon. Video of all of this is available here.

Next up for keynotes was Walmart, who talked about how they moved to OpenStack, used it for all the load their sites experienced over the 2014 holiday season, and how OpenStack has met their needs (video here). Then came HP’s keynote, which really focused on the community and the choices available in OpenStack, where speaker Mark Interrante said “OpenStack should be simpler, you shouldn’t need a PhD to run it.” Bravo! He also pointed out that HP’s booth had a demonstration of OpenStack running on various hardware, an impressively inclusive step for a company that also sells hardware. Video for HP’s keynote here (I dig the Star Wars reference). Keynotes continued with one from TD Bank, which I became familiar with when they bought up the Commerce branches in the Philadelphia region, but have since learned is a major Canadian bank (oooh, TD stands for Toronto Dominion!). The most fascinating thing about their move to the cloud, for me, is how they’ve imposed a cloud-first policy across their infrastructure, where teams must have a really good reason and approval in order to do more traditional bare-metal, one-off deployments for their applications, so it’s rare (video). Cybera was the next keynote and perhaps the most inspiring from a humanitarian standpoint. One of the earliest OpenStack adopters, Cybera is a non-profit that seeks to improve access to the internet and the valuable resources therein, which presenter Robin Winsor stressed in his keynote is now as critical as the physical infrastructure built in North America in the 19th and 20th centuries (railroads, highways, etc.), video here. The final keynote was from Solidfire, who discussed the importance of solid storage as the basis of a successful deployment, video here.

Following the keynotes, I headed over to the Virtual Networking in OpenStack: Neutron 101 (video) where Kyle Mestery and Mark McClain gave a great overview of how Neutron works with various diagrams showing of the agents and improvements made in Kilo with various new drivers and plugins. The video is well worth the watch.

A chunk of my day was then reserved for translations. My role here is as the Infrastructure team contact for the translations tooling, so it’s also been a crash course in learning about translations workflows since I only speak English. Each session, even those unrelated to the actual infrastructure-focused tooling, has been valuable for learning. In the first translation team working session the focus was translations glossaries, which are used to help give context/meaning to certain English words where the meaning can be unclear or otherwise needs to be defined in terms of the project. There was representation from the Documentation team, which was valuable as they maintain a docs-focused glossary (here) which is more maintained and has a bigger team than the proposed separate translations glossary would have. Interesting discussion, particularly as my knowledge of translations glossaries was limited. Etherpad here: Vancouver-I18n-WG-session.

I hosted the afternoon session on Building Translation Platform. We’re migrating the team to Zanata and have been fortunate to have Carlos Munoz, one of the developers on Zanata, join us at every summit since Atlanta. They’ve been one of the most supportive upstreams I’ve ever worked with, prioritizing our bug reports and really working with us to make sure our adoption is a success. The session itself reviewed the progress of our migration and set some deadlines for having translators begin the testing/feedback cycle. We also talked about hosting a Horizon instance in infra, refreshed daily, so that translators can actually see where translations are most needed via the UI and can prioritize appropriately. Finally, it was a great opportunity to get feedback from translators about what they need from the new workflow and have Carlos there to answer questions and help prioritize bugs. Etherpad here: Vancouver-I18n-Translation-platform-session.

My last translations-related thing of the day was Here be dragons – Translating OpenStack (slides). This was a great talk by Łukasz Jernaś that began with some benefits of translations work and then went into best practices and tips for working with open source translations and OpenStack specifically. It was another valuable session for me as the tooling contact because it gave me insight into some of the pain points and how appropriate it would be to address these with tooling vs. social changes to translations workflows.

From there I went back to general talks, attending Building Clouds with OpenStack Puppet Modules by Emilien Macchi, Mike Dorman and Matt Fischer (video). The OpenStack Infrastructure team is looking at building our own infra-cloud (we have a session on it later this week) and the workflows and tips that this presentation gave would also be helpful to me in other work I’ve been focusing on.

The final session I wandered into was a series of Lightning Talks, put together by HP. They had a great lineup of speakers from various companies and organizations. My evening was then spent at an HP employee gathering, but given my energy level and planned attendance at the Women of OpenStack breakfast at 7AM the following morning I headed back to my hotel around 9PM.

by pleia2 at May 20, 2015 11:26 PM

May 18, 2015

Eric Hammond

Alestic.com Site Redesign

The Alestic.com web site has been redesigned. The old design was going on 8 years old. The new design is:

Ok, so I still have a little improvement remaining in the fast dimension, but at least the site is static now and served through a CDN.

Since fellow geeks care, here are the technologies currently employed:

Simple, efficient, and gets the job done.

The old site is available at http://old.alestic.com for a while.

Questions, comments, and suggestions in the comments below.

Original article and comments: https://alestic.com/2015/05/blog-redesign/

May 18, 2015 07:10 AM

May 16, 2015

Elizabeth Krumbach

Xubuntu sweatshirt, Wily, & Debian Jessie Release

People like shirts, stickers and goodies to show support of their favorite operating system, and though the Xubuntu project has been slower than our friends over at Kubuntu at offering them, we now have a decent line-up offered by companies we’re friendly with. Several months ago the Xubuntu team was contacted by Gabor Kum of HELLOTUX to see if we’d be interested in offering shirts through their site. We were indeed interested! So after he graciously sent our project lead a polo shirt to evaluate, we agreed to start offering his products on our site, alongside the others. See all products here.

Polos aren’t really my thing, so when the Xubuntu shirts went live I ordered the Xubuntu sweater. Now a language difference may be in play here, since I’d call it a sweatshirt with a zipper, or a light jacket, or a hoodie without a hood. But it’s a great shirt, I’ve been wearing it regularly since I got it in my often-chilly city of San Francisco. It fits wonderfully and the embroidery is top notch.

Xubuntu sweatshirt
Close-up of HELLOTUX Xubuntu embroidery

In other Ubuntu things, given my travel schedule Peter Ganthavorn has started hosting some of the San Francisco Ubuntu Hours. He hosted one last month that I wasn’t available for, and then another this week which I did attend. Wearing my trusty new Xubuntu sweatshirt, I also brought along my Wily Werewolf to his first Ubuntu Hour! I picked up this fluffy-yet-fearsome werewolf from Squishable.com, which is also where I found my Natty Narwhal.

When we wrapped up the Ubuntu Hour, we headed down the street to our favorite Chinese place for Linux meetings, where I was hosting a Bay Area Debian Meeting and Jessie Release Party! I was pretty excited about this: since the Toy Story character Jessie is a popular one, I jumped at the opportunity to pick up some party supplies to mark the occasion, and ended up with a collection of party hats and notepads:

There were a total of 5 of us there, long-time BAD member Michael Paoli being particularly generous with his support of my ridiculous hats:

We had a fun time, welcoming a couple of new folks to our meeting as well. A few more photos from the evening here: https://www.flickr.com/photos/pleia2/sets/72157650542082473

Now I just need to actually upgrade my servers to Jessie!

by pleia2 at May 16, 2015 03:09 AM

May 15, 2015

Akkana Peck

Of file modes, umasks and fmasks, and mounting FAT devices

I have a bunch of devices that use VFAT filesystems. MP3 players, camera SD cards, SD cards in my Android tablet. I mount them through /etc/fstab, and the files always look executable, so when I ls -F them, they all have asterisks after their names. I don't generally execute files on these devices; I'd prefer the files to have a mode that doesn't make them look executable.

I'd like the files to be mode 644 (or 0644 in most programming languages, since it's an octal, or base 8, number). 644 in binary is 110 100 100, or as the Unix ls command puts it, rw-r--r--.

There's a directive, fmask, that you can put in fstab entries to control the mode of files when the device is mounted. (Here's Wikipedia's long umask article.) But how do you get from the mode you want the files to be, 644, to the mask?

The mask (which corresponds to the umask command) represents the bits you don't want to have set. So, for instance, if you don't want the world-execute bit (1) set, you'd put 1 in the mask. If you don't want the world-write bit (2) set, as you likely don't, put 2 in the mask. So that's already a clue that I'm going to want the rightmost octal digit to be 3: I don't want files mounted from my MP3 player to be either world writable or executable.

But I also don't want to have to puzzle out the details of all nine bits every time I set an fmask. Isn't there some way I can take the mode I want the files to be -- 644 -- and turn it into the mask I'd need to put in /etc/fstab or set as a umask?

Fortunately, there is. It seemed like it ought to be straightforward, but it took a little fiddling to get it into a one-line command I can type. I made it a shell function in my .zshrc:

# What's the complement of a number, e.g. the fmask in fstab to get
# a given file mode for vfat files? Sample usage: invertmask 755
invertmask() {
    python -c "print '0%o' % (~(0777 & 0$1) & 0777)"
}

This takes whatever argument I give it -- $1 -- and keeps only the three rightmost octal digits (nine bits), (0777 & 0$1). It takes the bitwise NOT of that, ~. But the result of that is a negative number, and we only want the three rightmost octal digits of the result, (result) & 0777, expressed as an octal number -- which we can do in python by printing it as %o. Whew!

Here's a shorter, cleaner-looking function that does the same thing, though it's not as clear about what it's doing:

invertmask1() {
    python -c "print '0%o' % (0777 - 0$1)"
}
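
For example, either version turns the desired file mode 644 into the mask used in the fstab line below:

$ invertmask 644
0133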

So now, for my MP3 player I can put this in /etc/fstab:

UUID=0000-009E /mp3 vfat user,noauto,exec,fmask=133,shortname=lower 0 0
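
After mounting, files on the device should then show up with the intended mode; the filename, owner and size here are made up:

$ mount /mp3
$ ls -l /mp3/song.mp3
-rw-r--r-- 1 akk akk 4194304 May 15 12:34 /mp3/song.mp3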

May 15, 2015 04:27 PM

May 11, 2015

Elizabeth Krumbach

OpenStack events, anniversary & organization, a museum and some computers & cats

I’ve been home for just over 3 weeks. I thought things would be quieter event-wise, but I have attended 2 OpenStack meetups since getting home, the first right after getting off the plane from South Carolina. My colleague and Keystone PTL Morgan Fainberg was giving a presentation on Keystone, and I had the rare opportunity to finally meet a scholarship winner who I’ve been mentoring at work. It was great to meet up and see some of the folks who I only see at conferences, including other colleagues from HP. Plus, Morgan’s presentation on Keystone was great and the audience had a lot of good questions. Video of the presentation is here and slides are available here.


With my Helion mentee!

This past week I went to the second meetup, this time over at Walmart Labs, just a quick walk from the Sunnyvale Caltrain station. For this meetup I was on a mainstage panel where discussions covered improvements to OpenStack in the Kilo release (including the continued rise of third party testing, which I was able to speak to), the new Big Tent approach to OpenStack project adoption and how baremetal is starting to change the OpenStack landscape. I was also able to meet some of the really smart people working at Walmart Labs, and learned that all of walmart.com is running on top of OpenStack (this article from March talks about it and they’ll be doing a session on it at the upcoming OpenStack Summit in Vancouver).


Panel at Walmart Labs

In other professional news, the work I did in Oman earlier this year continues to bear fruit. On April 20th, issue #313 of the Sultan Qaboos University Horizon newsletter was published with my interview (8M PDF here). They were kind enough to send me a few paper copies which I received on Friday. The interview touched upon key points that I spoke on during my presentation back in February, focusing on personal and business reasons for open source contributions.

Personally, MJ and I celebrated our second wedding anniversary with a fantastic meal at Murray Circle Restaurant where we sat on the porch and enjoyed our dinner with a nighttime view of the Golden Gate Bridge. We also recently agreed to start a diet together, largely going back to our pre-wedding diet that we both managed to lose a lot of weight on. Health-wise I continue to go out running, but running isn’t enough to help me lose weight. I’m largely replacing starches with vegetables and reducing the sugar in my diet. Finally, we’ve been hacking our way through a massive joint to-do list that’s been haunting us for several months now. Most of the tasks are home-based, from things like painting we need to get done to storage clean-outs. I don’t love that we have so much to do (don’t other adults get to have fun on weekends?), but finally having it organized and a plan for tackling it has reduced my stress incredibly.


2nd anniversary dinner

We do actually get to have fun on weekends, Saturday at least. We’ve continued to take Saturdays off together to attend services, have a nice lunch together and spend some time relaxing, whether that’s catching up on some shows together or visiting a local museum. Last weekend we had the opportunity of finally going to the Cable Car Museum here in San Francisco. Given my love for all things rail, it’s astonishing that I never made it up there before. The core of the museum is the above-ground, in-building housing for the four cables that run the three cable car lines, and then exhibits are built around it. It’s a fantastic little museum, and entrance is free.

I also picked up some beautiful 3D-printed cable car earrings and a matching necklace produced by Freeform Ind. I loved their stuff so much that I found their shop online and picked up some other local landmark jewelry.

More photos from our trip to the Cable Car Museum are available here: https://www.flickr.com/photos/pleia2/sets/72157652325687332

We’ve had some computer fun lately. MJ has finally ordered a replacement 1U server for the old one that he has co-located in Fremont. Burn-in testing happened this weekend, but there are some more harddrive-related pieces that we’re still waiting on to get it finished up. We’re aiming to get it installed at the datacenter in June. I also replaced the old Pentium 4 that I’ve been using as a monitoring server and backups machine. It was getting quite old and unusable as a second desktop, even when restricted to following social media accounts and watching videos here and there. It’s now been replaced with a refurbished HP DC6200 from 2011, which has an i3 processor, and I bumped it up to 8G of RAM that I had lying around from when I maxed out my primary desktop with 16G. So far so good; I moved over the harddrive from the old machine and it’s been running great.


HP DC6200

In the time between work and other things, I’ve been watching The Good Wife on my own and Star Trek: Voyager with MJ. Also, hanging out with my darling kitties. One evening I got this epic picture of Caligula:

This week I’m hosting an Ubuntu Hour and Debian Dinner where we’ll celebrate the release of Debian 8 “Jessie”. I’ve purchased Jessie (cowgirl from Toy Story 2 and 3) party hats to mark the occasion. At the break of dawn on Sunday I’ll be boarding a plane to go to the OpenStack Summit in Vancouver. I’ve never been to Vancouver, so I’m spending Sunday there and staying until late on the following Saturday night, so I hope to have time to see some of the city. After this trip, I’m staying home until July! Thank goodness, I can definitely use the down time to work on my book.

by pleia2 at May 11, 2015 03:07 AM

May 06, 2015

Akkana Peck

Tips for passing Google's "Mobile Friendly" tests

I saw on Slashdot that Google is going to start down-rating sites that don't meet its criteria of "mobile-friendly": Are you ready for Google's 'Mobilegeddon' on Tuesday?. And from the Slashdot discussion, it was pretty clear that Google's definition included some arbitrary hoops to jump through.

So I headed over to Google's Mobile-friendly test to check out some of my pages.

Now, most of my website seemed to me like it ought to be pretty mobile friendly. It's size agnostic: I don't specify any arbitrary page widths in pixels, so most of my pages can resize down as far as necessary (I was under the impression that was what "responsive design" meant for websites, though I've been doing it for many years and it seems now that "responsive design" includes a whole lot of phone-specific tweaks and elaborate CSS for moving things around based on size.) I also don't set font sizes that might make the page less accessible to someone with vision problems -- or to someone on a small screen with high pixel density. So I was pretty confident.

[Google's mobile-friendly test page] I shouldn't have been. Basically all of my pages failed. And in chasing down some of the problems I've learned a bit about Google's mobile rules, as well as about some weird quirks in how current mobile browsers render websites.

Basically, all of my pages failed with the same three errors:

  • Text too small to read
  • Links too close together
  • Mobile viewport not set

What? I wasn't specifying text size at all -- if the text is too small to read with the default font, surely that's a bug in the mobile browser, not a bug in my website. Same with links too close together, when I'm using the browser's default line spacing.

But it turned out that the first two points were meaningless. They were just a side effect of that third error: the mobile viewport.

The mandatory meta viewport tag

It turns out that any page that doesn't add a new meta tag, called "viewport", will automatically fail Google's mobile friendly test and be downranked accordingly. What's that all about?

Apparently it's originally Apple's fault. iPhones, by default, pretend their screen is 980 pixels wide instead of the actual 320 or 640, and render content accordingly, and so they shrink everything down by a factor of 3 (980/320). They do this assuming that most website designers will set a hard limit of 980 pixels (which I've always considered to be bad design) ... and further assuming that their users care more about seeing the beautiful layout of a website than about reading the website's text.

And Google apparently felt, at some point during the Android development process, that they should copy Apple in this silly behavior. I'm not sure when Android started doing this; my Android 2.3 Samsung doesn't do it, so it must have happened later than that.

Anyway, after implementing this, Apple then introduced a meta tag you can add to an HTML file to tell iPhone browsers not to do this scaling, and to display the text at normal text size. There are various forms for this tag, but the most common is:

<meta name="viewport" content="width=device-width, initial-scale=1">
(A lot of examples I found on the web at first suggested this: <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"> but don't do that -- it prevents people from zooming in to see more detail, and hurts the accessibility of the page, since people who need to zoom in won't be able to. Here's more on that: Stop using the viewport meta tag (until you know how to use it).)

Just to be clear, Google is telling us that in order not to have our pages downgraded, we have to add a new tag to every page on the web to tell mobile browsers not to do something silly that they shouldn't have been doing in the first place, and which Google implemented to copy a crazy thing Apple was doing.

How width and initial-scale relate

Documentation on how width and initial-scale relate to each other, and which takes precedence, is scant. Apple's documentation on the meta viewport tag says that setting initial-scale=1 automatically sets width=device-width. That implies that the two are basically equivalent: that they're only different if you want to do something else, like set a page width in pixels (use width=) or set the width to some ratio of the device width other than 1 (use initial-scale=).

That means that using initial-scale=1 should imply width=device-width -- yet nearly everyone on the web seems to use both. So I'm doing that, too. Apparently there was once a point to it: some older iPhones had a bug involving switching orientation to landscape mode, and specifying both initial-scale=1 and width=device-width helped, but supposedly that's long since been fixed.

initial-scale=2, by the way, sets the viewport to half what it would have been otherwise; so if the width would have been 320, it sets it to 160, so you'll see half as much. Why you'd want to set initial-scale to anything besides 1 in a web page, I don't know.

If the width specified by initial-scale conflicts with that specified by width, supposedly iOS browsers will take the larger of the two, while Android won't accept a width directive less than 320, according to Quirks mode: testing Meta viewport.

It would be lovely to be able to test this stuff; but my only Android device is running Android 2.3, which doesn't do all this silly zooming out. It does what a sensible small-screen device should do: it shows text at normal, readable size by default, and lets you zoom in or out if you need to.

(Only marginally related, but interesting if you're doing elaborate stylesheets that take device resolution into account, is A List Apart's discussion, A Pixel Identity Crisis.)

Control width of images

[Image with max-width 100%] Once I added meta viewport tags, most of my pages passed the test. But I was seeing something else on some of my photo pages, as well as blog pages where I have inline images:

  • Content wider than screen
  • Links too close together

Image pages are all about showing an image. Many of my images are wider than 320 pixels ... and thus get flagged as too wide for the screen. Note the scrollbars, and how you can only see a fraction of the image.

There's a simple way to fix this, and unlike the meta viewport thing, it actually makes sense. The solution is to force images to be no wider than the screen with this little piece of CSS:

<style type="text/css">
  img { max-width: 100%; height: auto; }
</style>

[Image with max-width 100%] I've been using similar CSS in my RSS reader for several months, and I know how much better it made the web, on news sites that insist on using 1600 pixel wide images inline in stories. So I'm happy to add it to my photo pages. If someone on a mobile browser wants to view every hair in a squirrel's tail, they can still zoom in to the page, or long-press on the image to view it at full resolution. Or rotate to landscape mode.

The CSS rule works for those wide page banners too. Or you can use overflow: hidden if the right side of your banner isn't all that important.
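
For instance, with a hypothetical banner class, either approach might look like this:

<style type="text/css">
  .banner img { max-width: 100%; height: auto; }
  /* or, if the right edge of the banner is expendable: */
  .banner { overflow: hidden; }
</style>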

Anyway, that takes care of the "page too wide" problem. As for the "Links too close together" even after I added the meta viewport tag, that was just plain bad HTML and CSS, showing that I don't do enough testing on different window sizes. I fixed it so the buttons lay out better and don't draw on top of each other on super narrow screens, which I should have done long ago. Likewise for some layout problems I found on my blog.

So despite my annoyance with the whole viewport thing, Google's mandate did make me re-examine some pages that really needed fixing, and should have improved my website quite a bit for anyone looking at it on a small screen. I'm glad of that.

It'll be a while before I have all my pages converted, especially that business of adding the meta tag to all of them. But readers, if you see usability problems with my site, whether on mobile devices or otherwise, please tell me about them!

May 06, 2015 09:48 PM

April 30, 2015

iheartubuntu

How To Install BitMessage


If you are ever concerned about private messaging, BitMessage offers an easy solution. Bitmessage is a P2P communications protocol used to send encrypted messages to another person or to many subscribers. It is decentralized and trustless, meaning that you need not inherently trust any entities like root certificate authorities. It uses strong authentication, which means that the sender of a message cannot be spoofed, and it aims to hide "non-content" data, like the sender and receiver of messages, from passive eavesdroppers like those running warrantless wiretapping programs. If Bitmessage is completely new to you, you may wish to start by reading the whitepaper:

https://bitmessage.org/bitmessage.pdf

Windows, Mac and Source Code available here:

https://bitmessage.org/wiki/Main_Page

A community-based forum for questions, feedback, and discussion is also available on the subreddit:

http://www.reddit.com/r/bitmessage/

To install BitMessage on Ubuntu (and other linux distros) go to your terminal and type:

git clone git://github.com/Bitmessage/PyBitmessage.git

Once it's finished, run this...

python2.7 PyBitmessage/src/bitmessagemain.py

BitMessage should now be installed, with a link in your menu or dash, or you can launch it by running that last line in your terminal window again.

* You may need to install git and python for the code to work.
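
On Ubuntu, something like the following should pull those in (exact package names may vary by release; PyBitmessage's interface uses PyQt4):

sudo apt-get install git python2.7 python-qt4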

Give it a try and good luck!

by iheartubuntu (noreply@blogger.com) at April 30, 2015 09:42 PM

Akkana Peck

Stile style

On a hike a few weeks ago, we encountered an unusual, and amusing, stile across the trail.

[Normal stile] It isn't uncommon to see stiles along trails. There are lots of different designs, but their purpose is to allow humans, on foot, an easy way to cross a fence, while making it difficult for vehicles and livestock like cattle to pass through. A common design looks like this, with a break in the fence and "wings" so that anything small enough to make the sharp turn can pass through.

On a recent hike starting near Buckman, on the Rio Grande, we passed a few stiles with the "wings" design; but one of the stiles we came to had a rather less common design:

[Wrongly-built stile]

It was set up so that nothing could pass without climbing over the fence -- and one of the posts which was supposed to hold fence rails was just sitting by itself, with nothing attached to it. [Pathological stile]

I suspect someone gave a diagram to a welder, and the welder, not being an outdoor person and having no idea of the purpose of a stile, welded it up without giving it much thought. Not very functional ... and not very stilish, either!

I'm curious whether the error was in the spec, or in the welder's interpretation of it. But alas, I suspect I'll never learn the story behind the stile.

Giggling, we climbed over the fence and proceeded on our hike up to the very scenic Otowi Peak.

April 30, 2015 05:38 PM

April 21, 2015

Akkana Peck

Finding orphaned files on websites

I recently took over a website that's been neglected for quite a while. As well as some bad links, I noticed a lot of old files, files that didn't seem to be referenced by any of the site's pages. Orphaned files.

So I went searching for a link checker that also finds orphans. I figured that would be easy. It's something every web site maintainer needs, right? I've gotten by without one for my own website, but I know there are some bad links and orphans there and I've often wanted a way to find them.

An intensive search turned up only one possibility: linklint, which has a -orphan flag. Great! But, well, not really: after a few hours of fiddling with options, I couldn't find any way to make it actually find orphans. Either you run it on an http:// URL, and it says it's searching for orphans but doesn't find any (because it ignores any local directory you specify); or you run it on just a local directory, in which case it finds a gazillion "orphans" that aren't actually orphans, because they're referenced by files generated with PHP or other web technology. Plus it flags all the bad links in all those supposed orphans, which gets in the way of finding the real bad links you need to worry about.

I tried asking on a couple of technical mailing lists and IRC channels. I found a few people who had managed to use linklint, but only by spidering an entire website to local files (thus getting rid of any server-side dependencies like PHP, CGI or SSI) and then running linklint on the local directory. I'm sure I could do that one time, for one website. But if it's that much hassle, there's not much chance I'll keep using it to keep websites maintained.

What I needed was a program that could look at a website and local directory at the same time, and compare them, flagging any file that isn't referenced by anything on the website. That sounded like it would be such a simple thing to write.

So, of course, I had to try it. This is a tool that needs to exist -- and if for some bizarre reason it doesn't exist already, I was going to remedy that.

Naturally, I found out that it wasn't quite as easy to write as it sounded. Reconciling a URL like "http://mysite.com/foo/bar.html" or "../asdf.html" with the corresponding path on disk turned out to have a lot of twists and turns.

But in the end I prevailed. I ended up with a script called weborphans (on github). Give it both a local directory for the files making up your website, and the URL of that website, for instance:

$ weborphans /var/www/ http://localhost/
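
If you're curious what it's doing under the hood, a crude approximation is possible with stock tools: spider the live site, list the files on disk, and compare the two. This sketch assumes wget and GNU coreutils, and it glosses over exactly the URL-to-path twists the script has to handle:

# Spider the site and collect every URL wget visits (logged on stderr)
wget --spider --recursive --no-verbose http://localhost/ 2>&1 |
  grep -o 'http://localhost/[^ ]*' | sort -u > referenced.txt

# Express every file on disk as the URL it would be served at
find /var/www -type f | sed 's|^/var/www|http://localhost|' | sort -u > ondisk.txt

# Lines only in the second list are candidate orphans
comm -13 referenced.txt ondisk.txt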

It's still a little raw, certainly not perfect. But it's good enough that I was able to find the 10 bad links and 606 orphaned files on this website I inherited.

April 21, 2015 08:55 PM

April 20, 2015

Elizabeth Krumbach

POSSCON 2015

This past week I had the pleasure of attending POSSCON in Columbia, the beautiful capital city of South Carolina. The great event kicked off with a social at Hickory Tavern, which I arranged to be at by tolerating a tight connection in Charlotte. It all worked out, and in spite of generally being really shy at these kinds of socials, I found some folks I knew and had a good time. Late in the evening several of us even had the opportunity to meet the Mayor of Columbia, who had come down to the event, and talk with him about our work and the importance of open source in the economy today. It's really great to see that kind of support for open source in a city.

The next morning the conference kicked off in earnest. Organizer Todd Lewis opened the event and quickly handed things off to Lonnie Emard, the President of IT-oLogy. IT-oLogy is a non-profit that promotes initial and continued learning in technology through events targeting everyone from children in grade school to professionals seeking to extend their skill set; there's more on their About page. As a partner for POSSCON, they were a huge part of the event, even hosting the second day at their offices.

We then heard from the aforementioned Columbia Mayor, Steve Benjamin. A keynote from the city mayor was a real treat; taking time out of what I'm sure is a busy schedule showed a clear commitment to building technology in Columbia. It was really inspiring to hear him talk about the city; with political support and work from IT-oLogy, it sounds like an interesting place to be for building or growing a career in tech. There was then a welcome from Amy Love, the South Carolina Department of Commerce Innovation Director. Talk about local support! Go South Carolina!

The next keynote was from Andy Hunt, speaking on "A New Look at Openness." He began with a history of how software development has progressed, from paying for licenses and compilers for proprietary development to the free and open source tool sets and licenses we work with today. He talked about how this all progresses into the Internet of Things, where we can now build physical objects and track everything from keys to pets. Today's world for developers, he argued, is not about inventing but innovating, and he implored the audience to seek out this innovation by using the building blocks of open source as a foundation. In the idea space he proposed five steps for innovative thinking (plus an obligatory sixth):

  1. Gather raw material
  2. Work it
  3. Forget the whole thing
  4. Eureka/My that’s peculiar
  5. Refine and develop
  6. profit!

Directly following the keynote I gave my talk on Tools for Open Source Systems Administration in the Operations/Back End track. It had the themes of many of my previous talks on how the OpenStack Infrastructure team does systems administration in an open source way, but I refocused this talk to be directly about the tools we use to accomplish this as a geographically distributed team spread across several different companies. The talk went well and I had a great audience; huge thanks to everyone who came out for it. It was a real pleasure to talk with folks throughout the rest of the conference who had questions about specific parts of how we collaborate. Slides from my presentation are here (pdf).

The next talk in the Operations/Back End track was Converged Infrastructure with Sanoid by Jim Salter. With Sanoid, he was seeking to bring enterprise-level predictability, minimal downtime and rapid recovery to small-to-medium-sized businesses. Using commodity components, from hardware through software, he's built a system that virtualizes all services and runs on ZFS on Linux to take hourly (by default) snapshots of running systems. When something goes wrong, from a bad upgrade to a LAN infected with a virus, he has the ability to quickly roll users back to the latest snapshot. It also has a system for easily creating on- and off-site backups, and uses Nagios for monitoring, which is how I learned about aNag, a Nagios client for Android; I'll have to check it out! I had the opportunity to spend more time with Jim as the conference went on, which included swinging by his booth for a Sanoid demo. Slides from his presentation are here.
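
The snapshot-and-rollback workflow he described maps onto plain ZFS commands; a rough sketch with hypothetical pool and dataset names:

# Take an hourly snapshot of a virtualized file server
zfs snapshot data/vm-fileserver@2015-04-18-1400

# List the available restore points
zfs list -t snapshot -r data/vm-fileserver

# Roll back to the most recent snapshot; going further back requires -r,
# which destroys any snapshots taken after that point
zfs rollback data/vm-fileserver@2015-04-18-1400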

For lunch they served BBQ. I don’t really care for typical red BBQ sauce, so when I saw a yellow sauce option at the buffet I covered my chicken in that instead. I had discovered South Carolina Mustard BBQ sauce. Amazing stuff. Changed my life. I want more.

After lunch I went to see a talk by Isaac Christofferson on Assembling an Open Source Toolchain to Manage Public, Private and Hybrid Cloud Deployments. With a focus on automation, standardization and repeatability, he walked us through his usage of Packer, Vagrant and Ansible to interface with a variety of different clouds and VMs. I’m also apparently the last systems administrator alive who hadn’t heard of devopsbookmarks.com, but he shared the link and it’s a great site.

The rooms for the talks were spread around a very walkable area of downtown Columbia. I wasn't sure how I'd feel about this and worried it would be a problem, but with speakers staying on schedule we were afforded a full 15 minutes between talks to switch tracks. The venue I spoke in was a Hilton, and the next talk I went to was in a bar! It made for quite enjoyable short walks outside between talks, and a diversity of venues that was a lot of fun.

That next talk was Open Source and the Internet of Things, presented by Erica Stanley. I had the pleasure of being on a panel with Erica back in October during All Things Open (see here for a great panel recap), so it was really great running into her at this conference as well. Her talk was a deluge of information about the Internet of Things (IoT) and how we can all be makers for it! She went into detail about the technology and ideas behind all kinds of devices, and on slides 41 and 42 she gave a quick tour of hardware and software tools that can be used to build for the IoT. She also went through some of the philosophy, guidelines and challenges of IoT development. Slides from her talk are online here; the wealth of knowledge packed into that slide deck is definitely worth spending some time with if you're interested in the topic.

The last pre-keynote talk I went to was by Tarus Balog, with a Guide to the Open Source Desktop. A self-confessed former Apple fanboy, he had quite the sense of humor about his past, where "everything was white and had an apple on it," and his move to using only open source software. As someone who has been using Linux and friends for almost a decade and a half, I wasn't at this talk to learn about the tools available, but instead to see how a long-time Mac user could actually make the transition. It's also interesting to me, as a member of the Ubuntu and Xubuntu projects, to see how newcomers view entrance into the world of Linux and how they evaluate and select tools. He walked the audience through the process he used to select a distro and desktop environment and then all the applications: mail, calendar, office suite and more. Of particular interest, he showed a preference for Banshee (it reminded him of old iTunes), as well as digiKam for managing photos. Accounting-wise he is still tied to Quickbooks, but he either runs it under Wine or over VNC from a Mac.

The day wound down with a keynote from Jason Hibbets. He wrote The foundation for an open source city and is a Project Manager for opensource.com. His keynote was all about stories, and why it's important to tell our open source stories. I've really been impressed with the development of opensource.com over the past year (disclaimer: I've written for them too); they've managed to find hundreds of inspirational and beneficial stories of open source adoption from around the world. In this talk he highlighted a few of these, including the work of my friend Charlie Reisinger at Penn Manor and Stu Keroff with students in the Asian Penguins computer club (check out a video from them here). How exciting! The evening wrapped up with an afterparty (I enjoyed a nice Palmetto Amber Ale) and a great speakers and sponsors dinner. Huge thanks to the conference staff for putting on such a great event and making us feel so welcome.

The second day of the conference took place across the street from the South Carolina State House, at the IT-oLogy office. The day consisted of workshops, so the sessions were much longer and more involved. But it also kicked off with a keynote, by Bradley Kuhn, who gave an introductory licensing talk: Software Freedom Licensing: What You Must Know. He did a great job offering a balanced view of the licenses available and the importance of selecting one appropriate to your project and team from the beginning.

After the keynote I headed upstairs to learn about OpenNMS from Tarus Balog. I love monitoring, but as a systems administrator rather than a network administrator, I've mostly been using service-based monitoring tools and hadn't really looked into OpenNMS. The workshop was an excellent tour of the basics of the project, including a short history and their current work. He walked us through the basic installation and setup, some of the configuration changes needed for SNMP, and XML-based changes made to various other parts of the infrastructure. He also talked about static and auto-discovery mechanisms for a network, how events and alarms work, and details about setting up the notification system effectively. He wrapped up by showing off some interesting graphs and other visualizations they're working to bring into the system for individuals in your organization who prefer to see the data presented in a less technical format.

The afternoon workshop I attended was put on by Jim Salter and covered Backing up Android using Open Source technologies. This workshop focused on backing up content, not the Android OS itself, but happily for me, that's exactly what I wanted to back up, since I otherwise run stock Android from Google (easy to install again from a generic source as needed). Now, Google will happily back up all your data, but what if you want to back it up locally and store it on your own system? Using rsync backup for Android, Jim demonstrated how to configure your phone to send backups to Linux, Windows and Mac machines over ssh+rsync. For Linux, at least, this is so far a fully open source solution, which I quite like and have started using at home; a sketch of the core command follows below. The next component makes it automatic, which is where we get into a proprietary bit of software: Llama – Location Profiles. Based on various types of criteria (battery level, location, time, and lots more), Llama lets you define when certain actions run, like automatically kicking off rsync to do backups. In all, it was a great and informative workshop, and I'm happy to finally have a useful solution for periodically pulling photos and things off my phone without plugging it in and using MTP, which I apparently hate and so never do. Slides from Jim's talk, which also include specific instructions and tools for Windows and Mac, are online here.
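
The core of the approach is just an rsync-over-ssh push from the phone to a machine you control; a minimal sketch with hypothetical paths and hostname:

# Push the phone's camera directory to a Linux backup host over ssh
rsync -av /sdcard/DCIM/Camera/ backupuser@backuphost:/backups/phone/camera/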

The conference concluded with Todd Lewis sending more thanks all around. By this time in the day rain was coming down in buckets and there were no taxis to be seen, so I grabbed a ride from Aaron Crosman, who, I had been happy to learn earlier, was a local who had come from Philadelphia, and we had great Philly tech and city-versus-country tech stories to swap.

More of my photos from the event are available here: https://www.flickr.com/photos/pleia2/sets/72157651981993941/

by pleia2 at April 20, 2015 06:07 PM

Jono Bacon

Announcing Chimp Foot.

I am delighted to share my new music project: Chimp Foot.

I am going to be releasing a bunch of songs, which are fairly upbeat rock and roll (no growly metal here). The first tune is called ‘Line In The Sand’ and is available here.

All of these songs are available under a Creative Commons Attribution ShareAlike license, which means you can download, share, remix, and sell them. I am also providing a karaoke version with vocals removed (great for background music) and all of the individual instrument tracks that I used to create each song. This should provide a pretty comprehensive archive of open material.

Please follow me on SoundCloud and/or on Twitter, Facebook, and Google+.

Shares would be much appreciated, and feedback on the music is welcome!

by jono at April 20, 2015 04:22 PM

April 16, 2015

Akkana Peck

I Love Small Town Papers

I've always loved small-town newspapers. Now I have one as a local paper (though more often, I read the online Los Alamos Daily Post). The front page of the Los Alamos Monitor yesterday particularly caught my eye:

[Los Alamos Monitor front page]

I'm not sure how they decide when to include national news along with the local news; often there are no national stories, but yesterday I guess this story was important enough to make the cut. And judging by font sizes, it was considered more important than the high school debate team's bake sale, but of the same importance as the Youth Leadership group's day for kids to meet fire and police reps and do arts and crafts. (Why this is called "Wild Day" is not explained in the article.)

Meanwhile, here are a few images from a hike at Bandelier National Monument: first, a view of the Tyuonyi Pueblo ruins from above (click for a larger version):

[View of Tyuonyi Pueblo ruins from above]

[Petroglyphs on the rim of Alamo Canyon] Some petroglyphs on the wall of Alamo Canyon. We initially called them spirals but they're actually all concentric circles, plus one handprint.

[Unusually artistic cairn in Lummis Canyon] And finally, a cairn guarding the bottom of Lummis Canyon. All the cairns along this trail were fairly elaborate and artistic, but this one was definitely the winner.

April 16, 2015 08:01 PM

April 14, 2015

Jono Bacon

Open Source, Makers, and Innovators

Recently I started writing a column on opensource.com called Six Degrees.

They just published my latest column, on how open source could provide the guardrails for a new generation of makers and innovators.

Go and read the column here.

You can read the two previous columns here:

by jono at April 14, 2015 03:59 PM

April 13, 2015

iheartubuntu

Free Ubuntu Stickers


I have only 3 sheets of Ubuntu stickers to give away! So if you are interested in one of them, I will randomly pick (via random.org) three people. I'll ship each page of stickers anywhere in the world, along with an official Ubuntu 12.04 LTS disc.

To enter into our contest, please "like" our Facebook page for a chance to win. Contest ends Friday, April 17, 2015. I'll announce the three winners the day after. Thanks for the like!

https://www.facebook.com/iheartubuntu

by iheartubuntu (noreply@blogger.com) at April 13, 2015 09:55 PM

Eric Hammond

Subscribing AWS Lambda Function To SNS Topic With aws-cli

The aws-cli documentation and command line help text have not been updated yet to include the syntax for subscribing an AWS Lambda function to an SNS topic, but it does work!

Here’s the format:

aws sns subscribe \
  --topic-arn arn:aws:sns:REGION:ACCOUNT:SNSTOPIC \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:REGION:ACCOUNT:function:LAMBDAFUNCTION

where REGION, ACCOUNT, SNSTOPIC, and LAMBDAFUNCTION are substituted with appropriate values for your account.

For example:

aws sns subscribe --topic-arn arn:aws:sns:us-east-1:012345678901:mytopic \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:us-east-1:012345678901:function:myfunction

This returns an SNS subscription ARN like so:

{
    "SubscriptionArn": "arn:aws:sns:us-east-1:012345678901:mytopic:2ced0134-e247-11e4-9da9-22000b5b84fe"
}

You can unsubscribe with a command like:

aws sns unsubscribe \
  --subscription-arn arn:aws:sns:us-east-1:012345678901:mytopic:2ced0134-e247-11e4-9da9-22000b5b84fe

where the subscription ARN is the one returned from the subscribe command.
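
One more note: for delivery to actually work, the Lambda function's policy has to allow invocation by SNS. Subscribing through the AWS console typically sets this up for you; from the command line, something like the following should do it (my best guess at the flags; check aws lambda add-permission help for your version):

aws lambda add-permission \
  --function-name myfunction \
  --statement-id sns-mytopic \
  --action lambda:InvokeFunction \
  --principal sns.amazonaws.com \
  --source-arn arn:aws:sns:us-east-1:012345678901:mytopic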

I’m using the latest version of aws-cli as of 2015-04-15 on the GitHub “develop” branch, which is version 1.7.22.

Original article and comments: https://alestic.com/2015/04/aws-cli-sns-lambda/

April 13, 2015 05:35 PM

April 12, 2015

Elizabeth Krumbach

Spring Trip to Philadelphia and New Jersey

I didn’t think I’d be getting on a plane at all in March, but plans shifted and we scheduled a trip to Philadelphia and New Jersey that left my beloved San Francisco on Sunday March 29th and returned us home on Monday, April 6th.

Our mission: Deal with our east coast storage. Without getting into the boring and personal details, we had to shut down a storage unit that MJ has had for years and go through some other existing storage to clear out donatable goods and finally catalog what we have so we have a better idea what to bring back to California with us. This required movers, almost an entire day devoted to donations and several days of sorting and repacking. It’s not all done, but we made pretty major progress, and did close out that old unit, so I’m calling the trip a success.

Perhaps what kept me sane through it all was the fact that MJ has piles of really old hardware, which is a delight to share on social media. Geeks from all around got to gush over goodies like the 32-bit SPARC lunchboxes (and commiserate with me as I tried to close them).


Notoriously difficult to close, but it was done!

Now admittedly, I do have some stuff in storage too, including my SPARC Ultra 10 that I wrote about here, back in 2007. I wanted to bring it home on this trip, but I wasn’t willing to put it in checked baggage and the case is a bit too big to put in my carry-on. Perhaps next trip I’ll figure out some way to ship it.


SPARC Ultra 10

More gems were collected in my album from the trip: https://www.flickr.com/photos/pleia2/sets/72157651488307179/

We also got to visit friends and family and enjoy some of our favorite foods we can’t find here in California, including east coast sweet & sour chicken, hoagies and chicken cheese steaks.

Family visits began on Monday afternoon as we picked up the plastic storage totes we were using to replace boxes, many of which were hard to go through in their various states of squishedness and age. MJ had them delivered to his sister in Pennsylvania and they were immensely helpful when we did the move on Tuesday. We also got to visit with MJ's father and mother, and on Saturday met up with his cousins in New Jersey for my first family Seder for Passover! Previously I'd gone to ones at our synagogue, but this was the first time I'd done one in someone's home, and it meant a lot to be invited and to participate. Plus, the Passover diet restrictions did nothing to stem the exceptional dessert spread; there was so much delicious food.

We were fortunate to be in town for the first Wednesday of the month, since that allowed us to attend the Philadelphia area Linux Users Group meeting in downtown Philadelphia. I got to see several of my Philadelphia friends at the meeting, and brought along a box of books from Pearson to give away (including several copies of mine), which went over very well with the crowd gathered to hear from Anthony Martin, Keith Perry, and Joe Rosato about ways to get started with Linux, and also freed up space in my closet here at home. It was a great night.


Presentation at PLUG

Friend visits included a fantastic dinner with our friend Danita and a quick visit to see Mike and Jessica, who had just welcomed little David into the world, awww!


Staying in New Jersey meant we could find Passover-friendly meals!

Sunday wrapped up with a late night at storage, finalizing some of our sorting and packing up the extra suitcases we brought along. We managed to get a couple hours of sleep at the hotel before our flight home at 6AM on Monday morning.

In all, it was a productive trip, but an exhausting one, and I spent this past week making up for sleep debt and recovering from the aches and pains. Still, it felt good to get the work done and visit with friends we've missed.

by pleia2 at April 12, 2015 04:26 PM

iheartubuntu

Edward Snowden on Passwords

Just a friendly reminder on developing stronger passwords...


by iheartubuntu (noreply@blogger.com) at April 12, 2015 01:30 PM

April 11, 2015

iheartubuntu

Elementary Freya Released

FREYA. The next generation of Elementary OS is here. Lightweight and beautiful. All-new apps. A refined look. You can help support the devs and name your price or download it for free.

Based on the countdown on their website, the new Freya version of Elementary OS has now arrived!

Download it here:

They will be having a Special LIVE Elementary OS Hangout here as well for the launch...


I have the beta version of Elementary OS Freya installed on one of my laptops and it works great. It's easy to install and it's beautiful. It is crafted by designers and developers who believe that computers can be easy, fun, and gorgeous. By putting design first, Elementary ensures they won't compromise on quality or usability. It's also based on Ubuntu 14.04, making it easy to install PPAs.

You can get a feel of the new Elementary OS Freya by checking out this video on Youtube...



and this review also on Youtube...



Elementary OS is definitely worth a look!

by iheartubuntu (noreply@blogger.com) at April 11, 2015 03:00 PM

April 10, 2015

iheartubuntu

Please Take Our Survey

We would love your input. Please take our short little survey. We'll take what you say to "heart" and make I Heart Ubuntu awesome!


by iheartubuntu (noreply@blogger.com) at April 10, 2015 06:08 PM

We Are Back!


We are trying to sort out some graphics, artwork and other stuff, so please bear with us. Hope to see everyone again very soon.

by iheartubuntu (noreply@blogger.com) at April 10, 2015 06:07 PM

LMDE 2 “Betsy” MATE & CINNAMON Released


Today, the Linux Mint team announced the release of LMDE 2 “Betsy” with both the MATE and Cinnamon desktop environments.

LMDE (Linux Mint Debian Edition) is a very exciting distribution, targeted at experienced users, which provides the same environment as Linux Mint but uses Debian as its package base, instead of Ubuntu.

LMDE is less mainstream than Linux Mint: it has a much smaller user base, it is not compatible with PPAs, and it lacks a few features. That makes it a bit harder to use and harder to find help for, so it is not recommended for novice users.

Important release info, system requirements, upgrade instructions and more can be found about these releases directly on Linux Mint website.

by iheartubuntu (noreply@blogger.com) at April 10, 2015 06:07 PM

Torchlight 2 Now on Steam


Torchlight II is a dungeon crawler that lets you choose to play as one of a few different classes. The basic concept is the same as in nearly all dungeon crawlers: explore, level up, find gear, beat the boss, rinse and repeat.

A few years ago I really enjoyed playing the original Torchlight. It worked great on Ubuntu. There were some shading issues with the 3D rendering that made your character's face invisible, but those little problems are of no concern in this new version. Torchlight 2 has improved almost every aspect of the original game.

About a month ago Steam launched Torchlight 2 with Linux support. The new version supports cross-platform multiplayer, game saves work across all platforms, and game modding is even supported.

Installation through Steam is simple. The download was about 1GB in size. At work I have a slimline computer with a Pentium G2020 processor, 4 GB of RAM, and a 1GB NVIDIA GeForce 210 video card. Graphics are superb; it doesn't get much better than this. I even maxed out all of the graphics settings. Game play is smooth and enjoyable. I've just been having a fun time going deeper and deeper into the dungeons fighting new bad guys. The scenery alone is worth it. Can't wait to try multiplayer!

You can even zoom in with the mouse wheel and fight your battles up close!


Here are the recommended system specs...

  • Ubuntu 12.04 LTS (or similar, Debian-based distro)
  • x86/x86_64-compatible, 2.0GHz or better processor
  • 2GB System RAM
  • 1.7 GB hard drive space (subject to change)
  • OpenGL 2.0 compatible 3D graphics card* with at least 256 MB of addressable memory (ATI Radeon x1600 or NVIDIA equivalent)
  • A broadband Internet connection (For Steam download and online multiplayer)
  • Online multiplayer requires a free Runic Account.
  • Requires Steam.

The game itself costs $20 on Steam, but you can get it as part of the Humble Bundle 14 set of games if you pay at least $6.15. If you are considering Torchlight 2, you have until April 14, when the Humble Bundle deal expires.

Torchlight II is a great hack-n-slash that every fan of this type of game should own. It will entertain you for hours and hours, and you will hardly ever find yourself repeating boring actions like farming and grinding. A must-have, for such a low price.

I enjoyed this game so much I'm giving it a 5 out of 5 rating :)


by iheartubuntu (noreply@blogger.com) at April 10, 2015 05:58 PM

April 09, 2015

Eric Hammond

AWS Lambda Event-Driven Architecture With Amazon SNS

Today, Amazon announced that AWS Lambda functions can be subscribed to Amazon SNS topics.

This means that any message posted to an SNS topic can trigger the execution of custom code you have written, but you don’t have to maintain any infrastructure to keep that code available to listen for those events and you don’t have to pay for any infrastructure when the code is not being run.

This is, in my opinion, the first time that Amazon can truly say that AWS Lambda is event-driven, as we now have a central, independent, event management system (SNS) where any authorized entity can trigger the event (post a message to a topic) and any authorized AWS Lambda function can listen for the event, and neither has to know about the other.

Making this instantly useful is the fact that there already are a number of AWS services and events that can post messages to Amazon SNS. This means there are a lot of application ideas that are ready to be implemented with nothing but a few commands to set up the SNS topic, and some snippets of nodejs code to upload as an AWS Lambda function.
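
To make that concrete, the SNS side really is just a couple of commands; a sketch using placeholder names and account number:

# Create the topic; this returns the TopicArn to subscribe functions to
aws sns create-topic --name mytopic

# Post a message; every subscribed Lambda function gets invoked with it
aws sns publish \
  --topic-arn arn:aws:sns:us-east-1:012345678901:mytopic \
  --message '{"event": "example"}'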

Unfortunately…

I was unable to find a comprehensive list of all the AWS services and events that can post messages to Amazon SNS (Simple Notification Service).

I’d like to try an experiment and ask the readers of this blog to submit pointers to AWS and other services which can be configured to post events to Amazon SNS. I will collect the list and update this blog post.

Here’s the list so far:

You can either submit your suggestions as comments on this blog post, or tweet the pointer mentioning @esh

Thanks for contributing ideas:

[2015-04-13: Updated with input from comments and Twitter]

Original article and comments: https://alestic.com/2015/04/aws-lambda-sns/

April 09, 2015 12:43 PM

iheartubuntu

Ubuntu Artwork on Flickr


I am always in search of new and interesting wallpapers. For many years Ubuntu has had a great Flickr group that is used to help decide which wallpapers make it into each new Ubuntu release. Most submissions don't make it, but there are definitely some great quality images in this Flickr group. You can easily spend an hour here picking favorites.

Check it out:

https://www.flickr.com/groups/ubuntu-artwork/pool/page1

by iheartubuntu (noreply@blogger.com) at April 09, 2015 07:49 AM