Planet Ubuntu California

September 19, 2014

Akkana Peck

Mirror, mirror

A female hummingbird -- probably a black-chinned -- hanging out at our window feeder on a cool cloudy morning.

[female hummingbird at the window feeder]

September 19, 2014 01:04 AM

September 18, 2014

Elizabeth Krumbach

Offline, CLI-based Gerrit code review with Gertty

This past week I headed to Florida to present at Fossetcon and thought it would be a great opportunity to do a formal review of a new tool recently released by the OpenStack Infrastructure team (well, mostly James E. Blair): Gertty.

The description of this tool is as follows:

As compared to the web interface, the main advantages are:

  • Workflow — the interface is designed to support a workflow similar to reading network news or mail. In particular, it is designed to deal with a large number of review requests across a large number of projects.
  • Offline Use — Gertty syncs information about changes in subscribed projects to a local database and local git repos. All review operations are performed against that database and then synced back to Gerrit.
  • Speed — user actions modify locally cached content and need not wait for server interaction.
  • Convenience — because Gertty downloads all changes to local git repos, a single command instructs it to checkout a change into that repo for detailed examination or testing of larger changes.

For me the two big ones were the CLI-based workflow and offline use: I could review patches while on a plane or on terrible hotel wifi!

I highly recommend reading the announcement email to learn more about the features, but to get going here’s a quick rundown for the currently released version 1.0.2:

First, you’ll need to set a password in Gerrit so you can use the REST API. Do that by logging into Gerrit and going to https://review.openstack.org/#/settings/http-password

From there:

pip install gertty

wget https://git.openstack.org/cgit/stackforge/gertty/plain/examples/openstack-gertty.yaml -O ~/.gertty.yaml

Edit ~/.gertty.yaml and update anything that says “CHANGEME”

A couple things worthy of note:

  • Be aware that by default Gertty uses ~/git/ for the git-root; I had to change this in my ~/.gertty.yaml so it didn’t touch my existing ~/git/ directory (see the config sketch after this list).
  • You can also run it in a venv, as described on the pypi page.
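
For illustration, here’s a minimal sketch of what the edited section of ~/.gertty.yaml might look like. The field names follow the openstack-gertty.yaml example above, but exact keys can differ between Gertty versions, and the username, password and git-root values here are placeholders:

servers:
  - name: openstack
    url: https://review.openstack.org/
    username: your-gerrit-username
    password: your-http-password
    git-root: ~/gertty-git/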

Now run gertty from your terminal!

When you first load it up, you get a welcome screen with some hints on how to use it, including the all important “press F1 for help”:

Note: I use xfce4-terminal and F1 is bound to terminal help, see the Xfce FAQ to learn how to disable this so you can actually read the Gertty help and don’t have to ask on IRC how to do simple things like I did ;)

As instructed, from here you hit “L” to list projects; this is the page where you can subscribe to them:

You subscribe to projects by pressing “s” and they will show up as bright white, then you can navigate into them to list open reviews:

Go to a review you want to look at and hit enter, bringing up the review screen. This should look very familiar, just text only. I’ve expanded my standard 80×24 terminal window here so you can get a good look at what the full screen looks like:

Navigate down to < Diff > to see the diff. This is pretty cool: instead of showing each file on a separate page like the web UI, it shows you a unified page with all of the file diffs, so you just need to scroll through to see them all:

Finally, you review! Select < Review > back on the main review page and it will pop up a screen that allows you to select your +2, +1, -1, etc and add a message:

Your reviews are synced along with everything else when Gertty knows it’s online and can pull down review updates and upload your changes. At any time you can look at the top right of your screen to see how many pending sync requests it has.

When you want to quit, CTRL-q
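
To recap, the handful of keys used in this walkthrough:

  • F1: help
  • L: list projects
  • s: subscribe to the selected project
  • Enter: open a project or review
  • CTRL-q: quit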

I highly recommend giving it a spin. Feel free to ask questions about usage in #openstack-infra and bugs are tracked in Storyboard here: https://storyboard.openstack.org/#!/project/698. The code lives in a stackforge repo at: http://git.openstack.org/cgit/stackforge/gertty

by pleia2 at September 18, 2014 12:46 AM

September 17, 2014

Elizabeth Krumbach

Fossetcon 2014

As I wrote in my last post I attended Fossetcon this past weekend. The core of the event kicked off on Friday with a keynote by Iris Gardner on how Diversity Creates Innovation and the work that the CODE2040 organization is doing to help talented minorities succeed in technology. I first heard about this organization back in 2013 at OSCON, so it was great to hear more about their recent successes with their summer Fellows Program. It was also great to hear that their criteria for talent not only included coding skills, but also sought out a passion for engineering and leadership skills.

After a break, I went to see PJ Hagerty give his talk, Meetup Groups: Act Locally – Think Globally. I’ve been running open source related groups for over a decade, so I’ve been in this space for quite a long time and was hoping to get some new tips; PJ didn’t disappoint! He led off with the need to break out of the small “pizza and a presentation by a regular” grind, which is indeed important to growing a group and making people show up. Some of his suggestions for doing this included:

  • Seek out students to attend and participate in the group, they can be some of your most motivated attendees and will bring friends
  • Seek out experienced programmers (and technologists) not necessarily in your specific field to give more agnostic talks about general programming/tech practices
  • Do cross-technology meetups – a PHP and Ruby night! Maybe Linux and BSD?
  • Bring in guest speakers from out of town (if they’re close enough, many will come for the price of gas and/or train/bus ticket – I would!)
  • Send members to regional conferences… or run your own conference
  • Get kids involved
  • Host an OpenHack event

I’ll have to see what my co-conspirators (er, organizers) at some local groups think of these ideas; it certainly would be fun to spice up some of the groups I regularly attend.

From there I went to MySQL Server Performance Tuning 101 by Ligaya Turmelle. Her talk centered around the fact that MySQL tuning is not simple, but went through a variety of mechanisms to tune it in different ways for specific cases you may run into. Perhaps most useful to me were her tips for gathering usage statistics from MySQL; I was unfamiliar with many of the metrics she pulled out. Very cool stuff.

After lunch and some booth duty, I headed over to Crash Course in Open Source Cloud Computing presented by Mark Hinkle. Now, I work on OpenStack (referred to as the “Boy Band” of cloud infrastructures in the talk – hah!), so my view of the cloud world is certainly influenced by that perspective. It was great to see a whirlwind tour of other and related technologies in the open source ecosystem.

The closing keynote for the day was by Deb Nicholson, Style or substance? Free Software is Totally the 80′s. She gave a bit of a history of free software and speculated as to whether our movement would be characterized by a shallow portrayal of “unconferences and penguin swag” (like 80s neon clothes and extravagance) or how free software communities are changing the world (like groups in the 80s who were really seeking social change or the fall of the Berlin wall). Her hope is that by stepping back and taking a look at our community that perhaps we could shape how our movement is remembered and focus on what is important to our future.

Saturday I had more booth duty with my colleague Yolanda Robla who came in from Spain to do a talk on Continuous integration automation. We were joined by another colleague from HP, Mark Atwood, who dropped by the conference for his talk How to Get One of These Awesome Open Source Jobs – one of my favorites.

The opening keynote on Saturday was Considering the Future of Copyleft by Bradley Kuhn. I always enjoy going to his talks because I’m considerably more optimistic about the health and future of free software, so his strong copyleft stance makes me stop and consider where I truly stand and what that means. He worries that an ecosystem of permissive licenses (like Apache, MIT, BSD) will lead to companies doing the least possible for free software and keeping all their secret sauces secret, diluting the ecosystem and making it less valuable for future consumers of free software since they’ll need the proprietary components. I’m more hopeful than that, particularly as I see real free software folks starting to get jobs in major companies and staying true to their free software roots. Indeed, these days I spend a vast majority of my time working on Apache-licensed software for a large company who pays me to do the work. Slides from his talk are here, I highly recommend having a browse: http://ebb.org/bkuhn/talks/FOSSETCON-2014/copyleft-future.html

After some more boothing, I headed over to Apache Mesos and Aurora, An Operating System For The Datacenter by David Lester. Again, being on the OpenStack bandwagon these past few years I haven’t had a lot of time to explore the ecosystem elsewhere, and I learned that this is some pretty cool stuff! Lester works for Twitter and talked some about how Twitter and other companies in the community are using both the Mesos and Aurora tools to build their efficient, fault-tolerant datacenters and how it’s led to impressive improvements in the reliability of their infrastructures. He also did a really great job explaining the concepts of both, hooray for diagrams. I kind of want to play with them now.

Introduction to The ELK Stack: Elasticsearch, Logstash & Kibana by Aaron Mildenstein was my next stop. We run an ELK stack in the OpenStack Infrastructure, but I’ve not been very involved in the management of it, instead focusing on how we’re using it in elastic-recheck, so I hoped this talk would fill in some of the fundamentals for me. It did just that, so I was happy, but I have to admit I was pretty disappointed to see demos of plugins that required a paid license.

As the day wound down, I finally had my talk: Code Review for Systems Administrators.


Code Review for Sysadmins talk, thanks to Yolanda Robla for taking the photo

I love giving this talk. I’m really proud of the infrastructure that has been built for OpenStack and it’s one that I’m happy and excited to work with every day – in part because we do things through code review. Even better, my excitement during this presentation seemed contagious, with an audience that seemed really engaged with the topic and impressed. Huge thanks to everyone who came and particularly to those who asked questions and took time to chat with me after. Slides from my talk are available here: fossetcon-code-review-for-sysadmins/

And then we were at the end! The conference wrapped up with a closing keynote on Open Source Is More than Code by Jordan Sissel. I really loved this talk. I’ve known for some time that the logstash community was one of the friendlier ones, with their mantra of “If a newbie has a bad time, it’s a bug.” This talk dove further into that ethos in their community and how it’s impacted how members of the project handle unhappy users. He also talked about improvements made to documentation (both inline in code and formal documentation) and how they’ve tried to “break away from text” some and put more human interaction in their community so people don’t feel so isolated and dehumanized by a text only environment (though I do find this is where I’m personally most comfortable, not everyone feels that way). I hope more projects will look to the logstash community as a good example of how we all can do better, I know I have some work to do when it comes to support.

Thanks again to conference staff for making this event such a fun one, particularly as it was their first year!

by pleia2 at September 17, 2014 12:44 AM

September 16, 2014

Elizabeth Krumbach

Ubuntu at Fossetcon 2014

Last week I flew out to the east coast to attend the very first Fossetcon. The conference was on the smaller side, but I had a wonderful time meeting up with some old friends, meeting some new Ubuntu enthusiasts and finally meeting some folks I’ve only communicated with online. The room layout took some getting used to, but the conference staff was quick to put up signs and direct attendees the right way, in general leading to a pretty smooth conference experience.

On Thursday the conference hosted a “day zero” that had training and an Ubucon. I attended the Ubucon all day, which kicked off with Michael Hall doing an introduction to the Ubuntu on Phones ecosystem, including Mir, Unity8 and the Telephony features that needed to be added to support phones (voice calling, SMS/MMS, cell data, SIM card management). He also talked about the improved developer portal with more resources aimed at app developers, including the Ubuntu SDK and simplified packaging with click packages.

He also addressed the concern of many about whether Ubuntu could break into the smartphone market at this point, arguing that it’s a rapidly developing and changing market, with every current market leader only having been there for a handful of years, and that new ideas are needed to play to win. Canonical feels that convergence between phone and desktop/laptop gives Ubuntu a unique selling point, and that users will like it because of its intuitive design with lots of swiping and scrolling actions, which gives apps the most screen space possible. It was interesting to hear that partners/OEMs can offer operator differentiation as a layer without fragmenting the actual operating system (something that Android struggles with), leaving the core operating system independently maintained.

This was followed up by a more hands on session on Creating your first Ubuntu SDK Application. Attendees downloaded the Ubuntu SDK and Michael walked through the creation of a demo app, using the App Dev School Workshop: Write your first app document.

After lunch, Nicholas Skaggs and I gave a presentation on 10 ways to get involved with Ubuntu today. I had given a “5 ways” talk earlier this year at SCaLE in Los Angeles, so it was fun to do a longer one with a co-speaker and have his five items added in, along with some other general tips for getting involved with the community. I really love giving this talk; the feedback from attendees throughout the rest of the conference was overwhelmingly positive, and I hope to get some follow-up emails from some new contributors looking to get started. Slides from our presentation are available as pdf here: contributingtoubuntu-fossetcon-2014.pdf


Ubuntu panel, thanks to Chris Crisafulli for the photo

The day wrapped up with an Ubuntu Q&A Panel, which had Michael Hall and Nicholas Skaggs from the Community team at Canonical, Aaron Honeycutt of Kubuntu and myself. Our quartet fielded questions from moderator Alexis Santos of Binpress and the audience, on everything from the Ubuntu phone to challenges of working with such a large community. I ended up drawing from my experience with the Xubuntu community a lot in the panel, especially as we drilled down into discussing how much success we’ve had coordinating the work of the flavors with the rest of Ubuntu.

The next couple days brought Fossetcon proper, which I’ll write about later. The Ubuntu fun continued though! I was able to give away 4 copies of The Official Ubuntu Book, 8th Edition which I signed, and got José Antonio Rey to sign as well since he had joined us for the conference from Peru.

José ended up doing a talk on Automating your service with Juju during the conference, and Michael Hall had the opportunity to give a talk on Convergence and the Future of App Development on Ubuntu. The Ubuntu booth also looked great and was one of the most popular of the conference.

I really had a blast talking to Ubuntu community members from Florida, they’re a great and passionate crowd.

by pleia2 at September 16, 2014 05:01 PM

September 14, 2014

Akkana Peck

Global key bindings in Emacs

Global key bindings in emacs. What's hard about that, right? Just something simple like

(global-set-key "\C-m" 'newline-and-indent)
and you're all set.

Well, no. global-set-key gives you a nice key binding that works ... until the next time you load a mode that wants to redefine that key binding out from under you.

For many years I've had a huge collection of mode hooks that run when specific modes load. For instance, python-mode defines \C-c\C-r, my binding that normally runs revert-buffer, to do something called run-python. I never need to run python inside emacs -- I do that in a shell window. But I fairly frequently want to revert a python file back to the last version I saved. So I had a hook that ran whenever python-mode loaded to override that key binding and set it back to what I'd already set it to:

(defun reset-revert-buffer ()
  (define-key python-mode-map "\C-c\C-r" 'revert-buffer) )
(setq python-mode-hook 'reset-revert-buffer)

That worked fine -- but you have to do it for every mode that overrides key bindings and every binding that gets overridden. It's a constant chase, where you keep needing to stop editing whatever you wanted to edit and go add yet another mode-hook to .emacs after chasing down which mode is causing the problem. There must be a better solution.

A web search quickly led me to the StackOverflow discussion Globally override key bindings. I tried the techniques there; but they didn't work.

It took a lot of help from the kind folks on #emacs, but after an hour or so they finally found the key: emulation-mode-map-alists. It's only barely documented -- the key there is "The “active” keymaps in each alist are used before minor-mode-map-alist and minor-mode-overriding-map-alist" -- and there seem to be no examples anywhere on the web for how to use it. It's a list of alists mapping names to keymaps. Oh, clears it right up! Right?

Okay, here's what it means. First you define a new keymap and add your bindings to it:

(defvar global-keys-minor-mode-map (make-sparse-keymap)
  "global-keys-minor-mode keymap.")

(define-key global-keys-minor-mode-map "\C-c\C-r" 'revert-buffer)
(define-key global-keys-minor-mode-map (kbd "C-;") 'insert-date)

Now define a minor mode that will use that keymap. You'll use that minor mode for basically everything.

(define-minor-mode global-keys-minor-mode
  "A minor mode so that global key settings override annoying major modes."
  t "global-keys" 'global-keys-minor-mode-map)

(global-keys-minor-mode 1)

Now build an alist consisting of a list containing a single dotted pair: the name of the minor mode and the keymap.

;; A keymap that's supposed to be consulted before the first
;; minor-mode-map-alist.
(defconst global-minor-mode-alist (list (cons 'global-keys-minor-mode
                                              global-keys-minor-mode-map)))

Finally, set emulation-mode-map-alists to a list containing only the global-minor-mode-alist.

(setf emulation-mode-map-alists '(global-minor-mode-alist))

There's one final step. Even though you want these bindings to be global and work everywhere, there is one place where you might not want them: the minibuffer. To be honest, I'm not sure if this part is necessary, but it sounds like a good idea so I've kept it.

(defun my-minibuffer-setup-hook ()
  (global-keys-minor-mode 0))
(add-hook 'minibuffer-setup-hook 'my-minibuffer-setup-hook)

Whew! It's a lot of work, but it'll let me clean up my .emacs file and save me from endlessly adding new mode-hooks.

September 14, 2014 10:46 PM

September 11, 2014

Akkana Peck

Making emailed LinkedIn discussion thread links actually work

I don't use web forums, the kind you have to read online, because they don't scale. If you're only interested in one subject, then they work fine: you can keep a browser tab for your one or two web forums perennially open and hit reload every few hours to see what's new. If you're interested in twelve subjects, each of which has several different web forums devoted to it -- how could you possibly keep up with that? So I don't bother with forums unless they offer an email gateway, so they'll notify me by email when new discussions get started, without my needing to check all those web pages several times per day.

LinkedIn discussions mostly work like a web forum. But for a while, they had a reasonably usable email gateway. You could set a preference to be notified of each new conversation. You still had to click on the web link to read the conversation so far, but if you posted something, you'd get the rest of the discussion emailed to you as each message was posted. Not quite as good as a regular mailing list, but it worked pretty well. I used it for several years to keep up with the very active Toastmasters group discussions.

About a year ago, something broke in their software, and they lost the ability to send email for new conversations. I filed a trouble ticket, and got a note saying they were aware of the problem and working on it. I followed up three months later (by filing another ticket -- there's no way to add to an existing one) and got a response saying be patient, they were still working on it. 11 months later, I'm still being patient, but it's pretty clear they have no intention of ever fixing the problem.

Just recently I fiddled with something in my LinkedIn prefs, and started getting "Popular Discussions" emails every day or so. The featured "popular discussion" is always something stupid that I have no interest in, but it's followed by a section headed "Other Popular Discussions" that at least gives me some idea what's been posted in the last few days. Seemed like it might be worth clicking on the links even though it means I'd always be a few days late responding to any conversations.

Except -- none of the links work. They all go to a generic page with a red header saying "Sorry it seems there was a problem with the link you followed."

I'm reading the plaintext version of the mail they send out. I tried viewing the HTML part of the mail in a browser, and sure enough, those links worked. So I tried comparing the text links with the HTML:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken=AQEqep2nxSZJIg&amp;ek=b2_anet_digest&amp;li=82&amp;m=group_discussions&amp;ts=textdisc-6&amp;itemID=5914453683503906819&amp;itemType=member&amp;anetID=98449
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken=AQEqep2nxSZJIg&ek=b2_anet_digest&li=17&m=group_discussions&ts=grouppost-disc-6&itemID=5914453683503906819&itemType=member&anetID=98449

Well, that's clear as mud, isn't it?

HTML entity substitution

I pasted both links one on top of each other, to make it easier to compare them one at a time. That made it fairly easy to find the first difference:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken= ...
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken= ...

Time to die laughing: they're doing HTML entity substitution on the plaintext part of their email notifications, changing & to &amp; everywhere in the link.

If you take the link from the text email and replace &amp; with &, the link works, and takes you to the specific discussion.

Pagination

Except you can't actually read the discussion. I went to a discussion that had been open for 2 days and had 35 responses, and LinkedIn only showed four of them. I don't even know which four they are -- are they the first four, the last four, or some Facebook-style "four responses we thought you'd like"? There's a button to click on to show the most recent entries, but then I only see a few of the most recent responses, still not the whole thread.

Hooray for the web -- of course, plenty of other people have had this problem too, and a little web searching unveiled a solution. Add a pagination token to the end of the URL that tells LinkedIn to show 1000 messages at once.

&count=1000&paginationToken=
It won't actually show 1000 (or all) responses -- but if you start at the beginning of the page and scroll down reading responses one by one, it will auto-load new batches. Yes, infinite scrolling pages can be annoying, but at least it's a way to read a LinkedIn conversation in order.
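
Putting the two fixes together (the de-entitied link from the HTML version above, plus the pagination token), a hand-edited link ends up looking like this:

http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken=AQEqep2nxSZJIg&ek=b2_anet_digest&li=17&m=group_discussions&ts=grouppost-disc-6&itemID=5914453683503906819&itemType=member&anetID=98449&count=1000&paginationToken=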

Making it automatic

Okay, now I know how to edit one of their URLs to make it work. Do I want to do that by hand any time I want to view a discussion? Noooo!

Time for a script! Since I'll be selecting the URLs from mutt, they'll be in the X PRIMARY clipboard. And unfortunately, mutt adds newlines so I might as well strip those as well as fixing the LinkedIn problems. (Firefox will strip newlines for me when I paste in a multi-line URL, but why rely on that?)

Here's the important part of the script:

import sys
import subprocess
import gtk

# Get the selected URL from the X PRIMARY selection (what mutt puts there).
primary = gtk.clipboard_get(gtk.gdk.SELECTION_PRIMARY)
if not primary.wait_is_text_available():
    sys.exit(0)
link = primary.wait_for_text()

# Strip the newlines mutt adds, undo LinkedIn's &amp; entity substitution,
# and append the pagination token so the whole thread loads.
link = link.replace("\n", "").replace("&amp;", "&") + \
       "&count=1000&paginationToken="
subprocess.call(["firefox", "-new-tab", link])

And here's the full script: linkedinify on GitHub. I also added it to pyclip, the script I call from Openbox to open a URL in Firefox when I middle-click on the desktop.

Now I can finally go back to participating in those discussions.

September 11, 2014 07:10 PM

Jono Bacon

Ubuntu for Smartwatches?

I read an interesting article on OMG! Ubuntu! about whether Canonical will enter the wearables business, now that the smartwatch industry is hotting up.

On one hand (pun intended), it makes sense. Ubuntu is all about convergence; a core platform from top to bottom that adjusts and expands across different form factors, with a core developer platform, and a focus on content.

On the other hand (pun still intended), the wearables market is another complex economy that is heavily tethered, both technically and strategically, to existing markets and devices. If we think success in the phone market is complex, success in the burgeoning wearables market is going to be just as complex too.

Now, to be clear, I have no idea whether Canonical is planning on entering the wearables market or not. It wouldn’t surprise me if this is a market of interest though as the investment in Ubuntu over the last few years has been in building a platform that could ultimately scale. It is logical to think this could map to a smartwatch as “another form factor”.

So, if technically it is doable, Canonical should do it, right?

No.

I want to see my friends and former colleagues at Canonical succeed, and this needs focus.

Great companies focus on doing a small set of things and doing them well. Spiraling off in a hundred different directions means dividing teams, dividing focus, and limiting opportunities. To use a tired saying…being a “jack of all trades and master of none”.

While all companies can be tempted in this direction, I am happy that on the client side of Canonical, the focus has been firmly placed on phone. TV has taken a back seat, tablet has taken a back seat. The focus has been on building a featureful, high-quality platform that is focused on phone, and bringing that product to market.

I would hate to think that Canonical would get distracted internally by chasing the smartwatch market while it is young. I believe it would do little but direct resources away from the major push now, which is phone.

If there is something we can learn from Apple here is that it isn’t important to be first. It is important to be the best. Apple rarely ships the first innovation, but they consistently knock it out of the park by building brilliant products that become best in class.

So, I have no doubt that the exciting new convergent future of Ubuntu could run on a watch, but let’s keep our heads down and get the phone out there and rocking; the wearables and other form factors can come later.

by jono at September 11, 2014 05:11 AM

September 09, 2014

Jono Bacon

One Simple Request

I do a podcast called Bad Voltage with a bunch of my pals. In it we cover Open Source and technology, we do interviews, reviews, and more. It is a lot of fun.

We started a contest recently in which the presenters have to take part in a debate, but argue a viewpoint that is the opposite of what we actually think.

In the first episode of this three part series, Bryan Lunduke and Stuart Langridge duked it out. Lunduke won (seriously).

In the most recent episode, Jeremy Garcia and I went up against each other.

Sadly, my tiny opponent is beating me right now.

Thus, I ask for a favor. Go here and vote for Bacon. Doing so will make you feel great about your life, save a puppy, and potentially get you that promotion you have been wanting.

Also, for my Ubuntu friends…a vote for Bacon…is a vote for Ubuntu.

UPDATE: The stakes have been increased. Want to see me donate $300 to charity, have an awkward avatar, and pour a bucket of ice/ketchup/BBQ sauce/waste vegetables on me? Read more and then vote.

by jono at September 09, 2014 02:28 AM

September 08, 2014

Akkana Peck

Dot Reminders

I read about cool computer tricks all the time. I think "Wow, that would be a real timesaver!" And then a week later, when it actually would save me time, I've long since forgotten all about it.

After yet another session where I wanted to open a frequently opened file in emacs and thought "I think I made a bookmark for that a while back", but then decided it's easier to type the whole long pathname rather than go re-learn how to use emacs bookmarks, I finally decided I needed a reminder system -- something that would poke me and remind me of a few things I want to learn.

I used to keep cheat sheets and quick reference cards on my desk; but that never worked for me. Quick reference cards tend to be 50 things I already know, 40 things I'll never care about and 4 really great things I should try to remember. And eventually they get buried in a pile of other papers on my desk and I never see them again.

My new system is working much better. I created a file in my home directory called .reminders, in which I put a few -- just a few -- things I want to learn and start using regularly. It started out at about 6 lines but now it's grown to 12.
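
For illustration, a hypothetical ~/.reminders built from the two tips mentioned elsewhere in this post (the emacs bookmark keystrokes are the standard ones):

emacs bookmarks: C-x r m (set), C-x r b (jump), C-x r l (list)
diff <(cmd1) <(cmd2) -- diff the output of two commands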

Then I put this in my .zlogin (of course, you can do this for any shell, not just zsh, though the syntax may vary):

if [[ -f ~/.reminders ]]; then
  cat ~/.reminders
fi

Now, in every login shell (which for me is each new terminal window I create on my desktop), I see my reminders. Of course, I don't read them every time; but I look at them often enough that I can't forget the existence of great things like emacs bookmarks, or diff <(cmd1) <(cmd2).

And if I forget the exact keystroke or syntax, I can always cat ~/.reminders to remind myself. And after a few weeks of regular use, I finally have internalized some of these tricks, and can remove them from my .reminders file.

It's not just for tech tips, either; I've used a similar technique for reminding myself of hard-to-remember vocabulary words when I was studying Spanish. It could work for anything you want to teach yourself.

Although the details of my .reminders are specific to Linux/Unix and zsh, of course you could use a similar system on any computer. If you don't open new terminal windows, you can set a reminder to pop up when you first log in, or once a day, or whatever is right for you. The important part is to have a small set of tips that you see regularly.

September 08, 2014 03:10 AM

Elizabeth Krumbach

Simcoe’s August 2014 Checkup

This upcoming December will mark Simcoe living with the CRF diagnosis for 3 years. We’re happy to say that she continues to do well, with this latest batch of blood work showing more good news about her stable levels.

Unfortunately we brought her in a few weeks early this time following a bloody sneeze. As I’ve written earlier this year, they’ve both been a bit sneezy this year with an as yet undiagnosed issue that has been eluding all tests. Every month or so they switch off who is sneezing, but this was the first time there was any blood.

Simcoe at vet
“I still don’t like vet visits.”

Following the exam, the vet said she wasn’t worried. The bleeding was a one time thing and could have just been caused by rawness brought on by the sneezing and sniffles. Since the appointment on August 26th we haven’t seen any more problems (and the cold seems to have migrated back to Caligula).

As for her levels, it was great to see her weight come up a bit, from 9.62 to 9.94 lbs.

Her BUN and CRE levels have both shifted slightly, from 51 to 59 on BUN and 3.9 to 3.8 on CRE.

BUN: 59 (normal range: 14-36)
CRE: 3.8 (normal range: .6-2.4)

by pleia2 at September 08, 2014 12:57 AM

September 02, 2014

Akkana Peck

Using strace to find configuration file locations

I was using strace to figure out how to set up a program, lftp, and a friend commented that he didn't know how to use it and would like to learn. I don't use strace often, but when I do, it's indispensable -- and it's easy to use. So here's a little tutorial.

My problem, in this case, was that I needed to find out what configuration file I needed to modify in order to set up an alias in lftp. The lftp man page tells you how to define an alias, but doesn't tell you how to save it for future sessions; apparently you have to edit the configuration file yourself.

But where? The man page suggested a couple of possible config file locations -- ~/.lftprc and ~/.config/lftp/rc -- but neither of those existed. I wanted to use the one that already existed. I had already set up bookmarks in lftp and it remembered them, so it must have a config file already, somewhere. I wanted to find that file and use it.

So the question was, what files does lftp read when it starts up? strace lets you snoop on a program and see what it's doing.

strace shows you all system calls being used by a program. What's a system call? Well, it's anything in section 2 of the Unix manual. You can get a complete list by typing: man 2 syscalls (you may have to install developer man pages first -- on Debian that's the manpages-dev package). But the important thing is that most file access calls -- open, read, chmod, rename, unlink (that's how you remove a file), and so on -- are system calls.

You can run a program under strace directly:

$ strace lftp sitename
Interrupt it with Ctrl-C when you've seen what you need to see.

Pruning the output

And of course, you'll see tons of crap you're not interested in, like rt_sigaction(SIGTTOU) and fcntl64(0, F_GETFL). So let's get rid of that first. The easiest way is to use grep. Let's say I want to know every file that lftp opens. I can do it like this:

$ strace lftp sitename |& grep open

I have to use |& instead of just | because strace prints its output on stderr instead of stdout.

That's pretty useful, but it's still too much. I really don't care to know about strace opening a bazillion files in /usr/share/locale/en_US/LC_MESSAGES, or libraries like /usr/lib/i386-linux-gnu/libp11-kit.so.0.

In this case, I'm looking for config files, so I really only want to know which files it opens in my home directory. Like this:

$ strace lftp sitename |& grep 'open.*/home/akkana'

In other words, show me just the lines that have the word "open" followed later by the string "/home/akkana".
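
As an aside, strace can do some of this filtering itself: its -e option restricts which system calls get traced, and -e trace=file covers the calls that take a filename. So this would have worked too:

$ strace -e trace=file lftp sitename |& grep /home/akkana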

Digression: grep pipelines

Now, you might think that you could use a simpler pipeline with two greps:

$ strace lftp sitename |& grep open | grep /home/akkana

But that doesn't work -- nothing prints out. Why? Because grep, under certain circumstances that aren't clear to me, buffers its output, so in some cases when you pipe grep | grep, the second grep will wait until it has collected quite a lot of output before it prints anything. (This comes up a lot with tail -f as well.) You can avoid that with

$ strace lftp sitename |& grep --line-buffered open | grep /home/akkana
but that's too much to type, if you ask me.

Back to that strace | grep

Okay, whichever way you grep for open and your home directory, it gives:

open("/home/akkana/.local/share/lftp/bookmarks", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.netrc", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/home/akkana/.local/share/lftp/rl_history", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.inputrc", O_RDONLY|O_LARGEFILE) = 5
Now we're getting somewhere! The file where it's getting its bookmarks is ~/.local/share/lftp/bookmarks -- and I probably can't use that to set my alias.

But wait, why doesn't it show lftp trying to open those other config files?

Using script to save the output

At this point, you might be sick of running those grep pipelines over and over. Most of the time, when I run strace, instead of piping it through grep I run it under script to save the whole output.

script is one of those poorly named, ungoogleable commands, but it's incredibly useful. It runs a subshell and saves everything that appears in that subshell, both what you type and all the output, in a file.

Start script, then run lftp inside it:

$ script /tmp/lftp.strace
Script started on Tue 26 Aug 2014 12:58:30 PM MDT
$ strace lftp sitename

After the flood of output stops, I type Ctrl-D or Ctrl-C to exit lftp, then another Ctrl-D to exit the subshell script is using. Now all the strace output was in /tmp/lftp.strace and I can grep in it, view it in an editor or anything I want.

So, what files is it looking for in my home directory and why don't they show up as open attempts?

$ grep /home/akkana /tmp/lftp.strace

Ah, there it is! A bunch of lines like this:

access("/home/akkana/.lftprc", R_OK)    = -1 ENOENT (No such file or directory)
stat64("/home/akkana/.lftp", 0xbff821a0) = -1 ENOENT (No such file or directory)
mkdir("/home/akkana/.config", 0755)     = -1 EEXIST (File exists)
mkdir("/home/akkana/.config/lftp", 0755) = -1 EEXIST (File exists)
access("/home/akkana/.config/lftp/rc", R_OK) = 0

So I should have looked for access and stat as well as open. Now I have the list of files it's looking for. And, curiously, it creates ~/.config/lftp if it doesn't exist already, even though it's not going to write anything there.
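
In grep terms, catching all four of those calls in the saved output looks something like this (the pattern also matches stat64):

$ egrep '(open|access|stat|mkdir).*/home/akkana' /tmp/lftp.strace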

So I created ~/.config/lftp/rc and put my alias there. Worked fine. And I was able to edit my bookmark in ~/.local/share/lftp/bookmarks later when I had a need for that. All thanks to strace.

September 02, 2014 07:06 PM

September 01, 2014

Elizabeth Krumbach

CI, Validation and more at DebConf14

I’ve been a Debian user since 2002 and got my first package into Debian in 2006. Though I continued to maintain a couple packages through the years, my open source interests (and career) have expanded significantly so that I now spend much more time with Ubuntu and OpenStack than anything else. Still, I host Bay Area Debian events in San Francisco, and when I learned that DebConf14 would be only a quick plane flight away from home I was eager for the opportunity to attend.

Given my other obligations, I decided to come in halfway through the conference, arriving Wednesday evening. Thursday was particularly interesting to me because they were doing most of the Debian Validation & CI discussions then. Given my day job on the OpenStack Infrastructure team, it seemed to be a great place to meet other folks who are interested in CI and see where our team could support Debian’s initiatives.

First up was the Validation and Continuous Integration BoF led by Neil Williams.

It was interesting to learn about the current validation methods already being used in Debian.

From there the talk moved into what kinds of integration tests people wanted, where various ideas were covered, including package sets (collections of related packages) and how to inject “dirty” data into systems to test in more real-world-like situations. Someone also mentioned doing tests on real systems rather than in chrooted environments.

Discussion touched upon having a Gerrit-like workflow that had packages submitted for review and testing prior to landing in the archive. This led to my having some interesting conversations with the drivers of Gerrit efforts in Debian after the session (nice to meet you, mika!). There was also discussion about notification to developers when their packages run afoul of the testing infrastructure, either themselves or as part of a dependency chain (who wants notifications? how to make them useful and not overwhelming?).

I’ve uploaded the gobby notes from the session here: validation-bof and the video of the session is available on the meetings-archive.

Next up on the schedule was debci and the Debian Continuous Integration project presented by Antonio Terceiro. He gave a tour of the Debian Continuous Integration system and talked about how packages can take advantage of the system by having their own test suites. He also discussed some about the current architecture for handling tests and optimizations they want to make in the future. Documentation for debci can be found here: ci.debian.net/doc/. Video of the session is also available on the meetings-archive.

The final CI talk I went to of the day was Automated Validation in Debian using LAVA where Neil Williams gave a tour of the expanded LAVA (Linaro Automated Validation Architecture). I heard about it back when it was a more simple ARM-only testing infrastructure, but it’s grown beyond that to now test distribution kernel images, package combinations and installer images and has been encouraging folks to write tests. He also talked about some of the work they’re doing to bring along LAVA demo stations to conferences, nice! Slides from this talk are available on the debconf annex site, here: http://annex.debconf.org/debconf-share/debconf14/slides/lava/

On Friday I also bumped into a testing-related talk by Paul Wise during a series of Live Demos, he showed off check-all-the-things which runs a pile of tools against your project to check… all the things, detecting what it needs to do automatically. Check out the README for rationale, and for a taste of things it checks and future plans, have a peek at some of the data files, like this one.

It’s really exciting to see more effort being spent on testing in Debian, and open source projects in general. This has long been the space of companies doing private, internal testing of open source products they use and reporting results back to projects in the form of patches and bug reports. Having the projects themselves provide QA is a huge step for the maturity of open source, and I believe will lead to even more success for projects as we move into the future.

The rest of DebConf for me was following my more personal interests in Debian. I also have to admit that my lack of involvement lately made me feel like a bit of an outsider and I’m quite shy anyway, so I was thankful to know a few Debian folks who I could hang out with and join for meals.

On Thursday evening I attended A glimpse into a systemd future by Josh Triplett. I haven’t really been keeping up with systemd news or features, so I learned a lot. I have to say, it would be great to see things like session management, screen brightness and other user settings be controlled by something lower level than the desktop environment. Friday I attended Thomas Goirand’s OpenStack update & packaging experience sharing. I’ve been loosely tracking this, but it was good to learn that Jessie will come with Icehouse and that install docs exist for Wheezy (here).

I also attended Outsourcing your webapp maintenance to Debian with Francois Marier. The rationale for his talk was that one should build their application with the mature versions of web frameworks included with Debian in mind, making it so you don’t have the burden of, say, managing Django along with your Django-based app, since Debian handles that. I continue to have mixed feelings when it comes to webapps in the main Debian repository, while some developers who are interested in reducing maintenance burden are ok with using older versions shipped with Debian, most developers I’ve worked with are very much not in this camp and I’m better off trying to support what they want than fighting with them about versions. Then it was off to Docker + Debian = ♥ with Paul Tagliamonte where he talked about some of his best practices for using Docker on Debian and ideas for leveraging it more in development (having multiple versions of services running on one host, exporting docker images to help with replication of tests and development environments).

Friday night Linus Torvalds joined us for a Q&A session. As someone who has put a lot of work into making friendly environments for new open source contributors, I can’t say I’m thrilled with his abrasive conduct in the Linux kernel project. I do worry that he sets a tone that impressionable kernel hackers then go on to emulate, perpetuating the caustic environment that spills out beyond just the kernel, but he has no interest in changing. That aside, it was interesting to hear him talk about other aspects of his work: his thoughts on systemd, a rant about compiling against specific libraries and versions for every distro (companies won’t do it, they’ll just ship their own statically linked ones) and his continued comments in support of Google Chrome.

DebConf wrapped up on Sunday. I spent the morning in one of the HackLabs catching up on some work, and at 1:30 headed up to the Plenary room for the last few talks of the event, starting with a series of lightning talks. A few of the talks stood out for me, including Geoffrey Thomas’ talk on being a bit of an outsider at DebConf and how difficult it is to be running a non-Debian/Linux system at the event. I’ve long been disappointed when people bring along their proprietary OSes to Linux events, but he made good points about people being able to contribute without fully “buying in” to having free software everywhere, including their laptop. He’s right. Margarita Manterola shared some stats from the Mini-DebConf Barcelona, which focused on having only female speakers; it was great to hear such positive statistics, particularly since DebConf14 itself had a pretty poor ratio, and there were several talks I attended (particularly around CI) where I was the only woman in the room. It was also interesting to learn about safe-rm to save us from ourselves and non-free.org to help make a distinction between what is Debian and what is not.

There was also a great talk by Vagrant Cascadian about his work on packages that he saw needed help but didn’t necessarily know everything about, and he encouraged others to take the same leap to work on things that may be outside their comfort zone. To help, he listed several resources people could use to find work in Debian.

Next up for the afternoon was the Bits from the Release Team, where they fleshed out what the next few months leading up to the freeze would look like and shared the Jessie Freeze Policy.

DebConf wrapped up with a thank you to the volunteers (thank you!) and peek at the next DebConf, to be held in Heidelberg, Germany the 15th-22nd of August 2015.

Then it was off to the airport for me!

The rest of my photos from DebConf14 here: https://www.flickr.com/photos/pleia2/sets/72157646626186269/

by pleia2 at September 01, 2014 06:54 PM

August 28, 2014

Akkana Peck

Debugging a mysterious terminal setting

For the last several months, I repeatedly find myself in a mode where my terminal isn't working quite right. In particular, Ctrl-C doesn't work to interrupt a running program. It's always in a terminal where I've been doing web work. The site I'm working on sadly has only ftp access, so I've been using ncftp to upload files to the site, and git and meld to do local version control on the copy of the site I keep on my local machine. I was pretty sure the problem was coming from either git, meld, or ncftp, but I couldn't reproduce it.

Running reset fixed the problem. But since I didn't know what program was causing the problem, I didn't know when I needed to type reset.

The first step was to find out which of the three programs was at fault. Most of the time when this happened, I wouldn't notice until hours later, the next time I needed to stop a program with Ctrl-C. I speculated that there was probably some way to make zsh run a check after every command ... if I could just figure out what to check.

Terminal modes and stty -a

It seemed like my terminal was getting put into raw mode. In programming lingo, a terminal is in raw mode when characters from it are processed one at a time, and special characters like Ctrl-C, which would normally interrupt whatever program is running, are just passed like any other character.

You can list your terminal modes with stty -a:

$ stty -a
speed 38400 baud; rows 32; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;
eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
ignbrk -brkint ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl -ixon -ixoff
-iuclc -ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig icanon -iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke

But that's a lot of information. Unfortunately there's no single flag for raw mode; it's a collection of a lot of flags. I checked the interrupt character: yep, intr = ^C, just like it should be. So what was the problem?

I saved the output with stty -a >/tmp/stty.bad, then I started up a new xterm and made a copy of what it should look like with stty -a >/tmp/stty.good. Then I looked for differences: meld /tmp/stty.good /tmp/stty.bad. I saw these flags differing in the bad one: ignbrk ignpar -iexten -ixon, while the good one had -ignbrk -ignpar iexten ixon. So I should be able to run:

$ stty -ignbrk -ignpar iexten ixon
and that would fix the problem. But it didn't. Ctrl-C still didn't work.

Setting a trap, with precmd

However, knowing some things that differed did give me something to test for in the shell, so I could test after every command and find out exactly when this happened. In zsh, you do that by defining a precmd function, so here's what I did:

precmd()
{
    stty -a | fgrep -- -ignbrk > /dev/null
    if [ $? -ne 0 ]; then
        echo
        echo "STTY SETTINGS HAVE CHANGED \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!"
        echo
    fi
}
Pardon all the exclams. I wanted to make sure I saw the notice when it happened.
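
For bash users, a rough equivalent (an untested sketch, using bash's PROMPT_COMMAND in place of zsh's precmd) might look like:

check_stty() {
    # Warn if the terminal has lost the -ignbrk setting.
    if ! stty -a | fgrep -- -ignbrk > /dev/null; then
        echo 'STTY SETTINGS HAVE CHANGED !!!!!!!!!!!!!!!!'
    fi
}
PROMPT_COMMAND=check_stty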

And this fairly quickly found the problem: it happened when I suspended ncftp with Ctrl-Z.

stty sane and isig

Okay, now I knew the culprit, and that if I switched to a different ftp client the problem would probably go away. But I still wanted to know why my stty command didn't work, and what the actual terminal difference was.

Somewhere in my web searching I'd stumbled upon some pages suggesting stty sane as an alternative to reset. I tried it, and it worked.

According to man stty, stty sane is equivalent to

$ stty cread -ignbrk brkint -inlcr -igncr icrnl -iutf8 -ixoff -iuclc -ixany  imaxbel opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke

Eek! But actually that's helpful. All I had to do was get a bad terminal (easy now that I knew ncftp was the culprit), then try:

$ stty cread 
$ stty -ignbrk 
$ stty brkint
... and so on, trying Ctrl-C each time to see if things were back to normal. Or I could speed up the process by grouping them:
$ stty cread -ignbrk brkint
$ stty -inlcr -igncr icrnl -iutf8 -ixoff
... and so forth. Which is what I did. And that quickly narrowed it down to isig. I ran reset, then ncftp again to get the terminal in "bad" mode, and tried:
$ stty isig
and sure enough, that was the difference.

I'm still not sure why meld didn't show me the isig difference. But if nothing else, I learned a bit about debugging stty settings, and about stty sane, which is a much nicer way of resetting the terminal than reset since it doesn't clear the screen.

August 28, 2014 09:41 PM

August 27, 2014

Elizabeth Krumbach

OpenStack Infrastructure August 2014 Bug Day

The OpenStack Infrastructure team has a pretty big bug collection.

1855 collection
Well, not literal bugs

We’ve slowly been moving new bugs for some projects over to StoryBoard in order to kick the tires on that new system, but today we focused back on our Launchpad bugs to pare down our list.

Interested in running a bug day? The steps we have for running a bug day can be a bit tedious, but it’s not hard. Here’s the rundown:

  1. I create our etherpad: cibugreview-august2014 (see etherpad from past bug days on the wiki at: InfraTeam#Bugs)
  2. I run my simple infra_bugday.py script and populate the etherpad.
  3. Grab the bug stats from launchpad and copy them into the pad so we (hopefully) have inspiring statistics at the end of the day.
  4. Then comes the real work. I open up the old etherpad and go through all the bugs, copying over comments from the old etherpad where applicable and making my own comments as necessary about obvious updates I see (and updating my own bugs).
  5. Let the rest of the team dive in on the etherpad and bugs!

Throughout the day we chat in #openstack-infra about bug statuses and whether we should continue pursuing certain strategies outlined in bugs, and we reach out to folks who have outstanding bugs in the tracker that we’d like to see movement on but haven’t seen any in a while. Plus, we get to triage a whole pile of New bugs (thanks Clark) and close others we may have lost track of (thanks everyone).

As we wrap up, here are the stats from today:

Starting bug day count: 270

31 New bugs
39 In-progress bugs
6 Critical bugs
15 High importance bugs
8 Incomplete bugs

Ending bug day count: 233

0 New bugs
37 In-progress bugs
3 Critical bugs
10 High importance bugs
14 Incomplete bugs

Full disclosure, 4 of the bugs we “closed” were actually moved to the Zuul project on Launchpad so we can import them into StoryBoard at a later date. The rest were legitimate though!

It was a busy day, thanks to everyone who participated.

by pleia2 at August 27, 2014 12:08 AM

August 25, 2014

Elizabeth Krumbach

Market Street Railway Exploratorium Charter

Last month I learned about an Exploratorium Charter being put on by Market Street Railway. I’m a member of the organization and they do charters throughout the year, but my schedule rarely syncs up with when charters or other events are happening, so I was delighted when I firmed up my DebConf schedule and knew I’d be in town for this one!

It was a 2 hour planned charter, which would pick us up at the railway museum near the Ferry Building and take us down to Muni Metro East, “the current home of the historic streetcar fleet and not usually open to the public.” Sign me up.

The car taking us on our journey was the 1050, which was originally a Philadelphia street car (built in 1948, given No. 2119) which had been painted in Muni livery. MJ’s best friend is in town this weekend, so I had both Matti and MJ to join me on this excursion.

The route began by going down what will become the E line next year, and we stopped at the AT&T ballpark for some photo ops. The conductor (not the driver) of the event posed for photos.

Throughout the ride various volunteers from Market Street Railway passed around photos and historic pieces from street cars to demonstrate how they worked and some of the historic routes where they ran. Of particular interest was learning just how busy the Ferry Building was at its height in the 1930s, not only serving as a busy passenger ferry port, but also with lots of street cars and other transit stopping at the building pretty much non-stop.

From the E-line the street car went down Third Street through Dogpatch and finally arrived at our first destination, the Muni Metro East Light Rail Vehicle Maintenance & Operations Facility. We all had to wear bright vests in order to enter the working facility.


Obligatory “Me with streetcar” photo

The facility is a huge warehouse where repairs are done on both the street cars and the Metro coaches. We had quite a bit of time to look around and peek under the cars and see some of the ones that were under repair or getting phased into usage.

I think my favorite part of the visit was getting to go outside and see the several cars outside. Some of them were just coming in for scheduled maintenance, and others like the cream colored 1056 that are going to be sent off for restoration (hooray!).

The tour concluded by taking us back up the Embarcadero and dropping us off at the Exploratorium science museum, skipping a loop around Pier 39 due to being a bit behind schedule. We spent about an hour at the museum, which was a nice visit as MJ and I had just been several months earlier.

Lots more photos from our day here: https://www.flickr.com/photos/pleia2/sets/72157646412090817/

Huge thanks to Market Street Railway for putting on such a fun and accessible event!

by pleia2 at August 25, 2014 03:43 AM

August 24, 2014

Akkana Peck

One of them Los Alamos liberals

[Adopt-a-Highway: One of them Los Alamos liberals] I love this Adopt-a-Highway sign on Highway 4 on the way back down from the Jemez.

I have no idea who it is (I hope to find out, some day), but it gives me a laugh every time I see it.

August 24, 2014 04:50 PM

Elizabeth Krumbach

August 2014 miscellany

It’s been about a month since my surgery. I feel like I’ve taken it easy, but looking at my schedule (which included a conference on the other side of the continent) I think it’s pretty safe to say I’m not very good at that. I’m happy to say I’m pretty much recovered though, so my activities don’t seem to have caused problems.

Although, going to the 4th birthday party for OpenStack just 6 days after my surgery was probably pushing it. I thoroughly rationalized it due to the proximity of the event to my home (a block), but I only lasted about an hour. At least I got a cool t-shirt and got to see the awesome OpenStack ice sculpture. Also, didn’t die, so all is good right?

Fast forward a week and a half and we were wrapping up our quick trip to Philadelphia for Fosscon. We had some time on Sunday so decided to visit the National Museum of American Jewish History right by Independence Mall. In addition to a fun special exhibit about minorities in baseball, the museum boasts 3 floors of permanent exhibits that trace the history of Jews in America from the first settlement until today. We made it through much of the museum before our flight time crept up, and even managed to swing by the gift shop where we found a beautiful glass menorah to bring home.

Safely back in San Francisco, I met up with a few of my local Ubuntu and Debian friends on the 13th for an Ubuntu Hour and a Debian dinner. The Ubuntu Hour was pretty standard; I was able to bring along my Nexus 7 with Ubuntu on it to show off the latest features in the development channel for the tablet version. I also received several copies of The Official Ubuntu Book, so I was able to bring one along to give away to an attendee, hooray!

From there, we made it over to 21st Amendment Brewery where we’d be celebrating Debian’s 21st birthday (which was coming up on the 16th). It took some time to get a table, but we had lots of time to chat while we were waiting. At the dinner we signed a card to send off along with a donation to SPI on behalf of Debian.

In other excitement, our car needed some work last week and MJ has been putting a lot of work into getting a sound system set up to go along with a new TV. Since I’ve been feeling better this week my energy has finally returned and I’ve been able to get caught up on a lot of projects I had pushed aside during my recovery. I also signed up for a new gym this week; it’s not as beautiful as the club I used to go to, but it has comparable facilities (including a pool!) and costs about half of what I was paying before. I’m thinking as I ease back into a routine I’ll use my time there for swimming and strength exercises. I sure need it; these past few months really did a number on my fitness.

Today I met up with my friend Steve for Chinese lunch and then a visit over to the Asian Art Museum to see the Gorgeous exhibit. I’m really glad we went, it was an unusual collection that I really enjoyed. While we were there we also browsed the rest of the galleries in the museum, making it the first time that I’d actually walked through the whole museum on an excursion there.

I think the Mythical bird-man was my favorite piece of the exhibit:

And I was greatly amused when Steve used his iPhone to take a picture of the first generation iPhone on exhibit, so I captured the moment.

On Wednesday afternoon I’ll be flying up to Portland, OR to attend my first DebConf! It actually started today, but given my current commitment load I decided that 9 days away from home was a bit much and picked days later in the week when some of the discussions most interesting to me were scheduled. I’m really looking forward to seeing some of my long time Debian friends and learning more about the work the teams are doing in the Continuous Integration space for Debian.

by pleia2 at August 24, 2014 04:52 AM

August 20, 2014

Akkana Peck

Mouse Release Movie

[Mouse peeking out of the trap] We caught another mouse! I shot a movie of its release.

Like the previous mouse we'd caught, it was nervous about coming out of the trap: it poked its nose out, but didn't want to come the rest of the way.

[Mouse about to fall out of the trap] Dave finally got impatient, picked up the trap and turned it opening down, so the mouse would slide out.

It turned out to be the world's scruffiest mouse, which immediately darted toward me. I had to step back and stand up to follow it on camera. (Yes, I know my camera technique needs work. Sorry.)

[scruffy mouse, just released from trap] [Mouse bounding away] Then it headed up the hill a ways before finally lapsing into the high-bounding behavior we've seen from other mice and rats we've released. I know it's hard to tell in the last picture -- the photo is so small -- but look at the distance between the mouse and its shadow on the ground.

Very entertaining! I don't understand why anyone uses killing traps -- even if you aren't bothered by killing things unnecessarily, the entertainment we get from watching the releases is worth any slight extra hassle of using the live traps.

Here's the movie: Mouse released from trap. [Mouse released from trap]

August 20, 2014 11:10 PM

August 18, 2014

Jono Bacon

New Facebook Page

As many of you will know, I am really passionate about growing strong and inspirational communities. I want all communities to benefit from well organized, structured, and empowering community leadership. This is why I wrote The Art of Community and Dealing With Disrespect, and founded the Community Leadership Summit and Community Leadership Forum to further the art and science of community leadership.

In my work I am sharing lots of content, blog posts, videos, and other guidance via my new Facebook page. I would be really grateful if you could hop over and Like it to help build some momentum.

Many thanks!

by jono at August 18, 2014 04:35 PM

August 16, 2014

Elizabeth Krumbach

SanDisk Clip Sport

I got my first MP3 player in 2006, a SanDisk Sansa e140. As that one aged, I picked up the SanDisk Sansa Fuze in 2009. Recently my poor Sansa Fuze has been having trouble updating the library (it takes a long time) and randomly freezes up. After it got worse over my past few trips, I finally resigned myself to getting a new player.

As I began looking for players, I was quickly struck by how limited the MP3 player market is these days. I suspect this is due to so many people using their phones for music these days, but that’s not a great option for me for a variety of reasons:

  • Limits to battery life on my phone make a 12 hour flight (or a 3 hour flight, then an 8 hour flight, then navigating a foreign city…) a bit tricky.
  • While I do use my phone for runs (yay for running apps) I don’t like using my phone in the gym, because it’s bulky and I’m afraid of breaking it.
  • Finally, my desire for an FM tuner hasn’t changed, and I’m quite fond of the range of formats my Fuze supported (flac, ogg, etc).

So I found the SanDisk Clip Sport MP3 Player. Since I’ve been happy with my SanDisk players throughout the years and the specs pages seemed to meet my needs, I didn’t hesitate too much about picking it up for $49.99 on Amazon. Obviously I got the one with pink trim.

I gave the player a spin on my recent trip to Philadelphia. Flight time: 5 hours each way. I’m happy to report that the battery life was quite good, I forgot to charge it while in Philadelphia but the charge level was still quite high when I turned it on for my flight home.

Overall, I’m very happy with it, but no review would be complete without the details!

Cons:

  • Feels a bit plasticky – the Fuze had a metal casing
  • I can’t figure out how it sorts music in file view; it doesn’t seem to be alphabetical…

Pros:

  • Meets my requirements: FM Tuner, multiple formats – my oggs play fine out of the box, the Fuze required a firmware upgrade
  • Standard Micro USB connector for charging – the Fuze had a custom connector
  • File directory listing option, not just by tags
  • Mounts via USB mass storage in Linux
  • Micro SD/SDHC expansion slot if I need to go beyond 8G

We’ll see how it holds up through the abuse I put it through while traveling.

by pleia2 at August 16, 2014 12:32 AM

August 15, 2014

Jono Bacon

Community Management Training at LinuxCon

I am a firm believer in building strong and empowered communities. We are in an age of a community management renaissance in which we are defining repeatable best practices that can be applied to many different types of communities, whether internal to companies, external to volunteers, or a mix of both. The opportunity here is to grow large, well-managed, passionate communities, no matter what industry or area you work in.

I have been working to further this growth in community management via my books, The Art of Community and Dealing With Disrespect, the Community Leadership Summit, the Community Leadership Forum, and delivering training to our next generation of community managers and leaders.

LinuxCon North America and Europe

I am delighted to bring my training to the excellent LinuxCon events in both North America and Europe.

Firstly, on Fri 22nd August 2014 (next week) I will be presenting the course at LinuxCon North America in Chicago, Illinois and then on Thurs Oct 16th 2014 I will deliver the training at LinuxCon Europe in Düsseldorf, Germany.

Tickets are $300 for the day’s training. This is a steal; I usually charge $2500+/day when delivering the training as part of a consultancy arrangement. Thanks to the Linux Foundation for making this available at an affordable rate.

Space is limited, so go and register ASAP:

What Is Covered

So what is in the training course?

If you like videos, go and watch this:

If you prefer to read, read on!

My goal with each training day is to discuss how to build and grow a community, including building collaborative workflows, defining a governance structure, planning, marketing, and evaluating effectiveness. The day is packed with Q&A and discussion, and I encourage my students to raise questions, challenge me, and explore ways of optimizing their communities. This is not a sit-down-and-listen-to-a-teacher-drone-on kind of session; it is interactive and designed to spark discussion.

The day is mapped out like this:

  • 9.00am – Welcome and introductions
  • 9.30am – The core mechanics of community
  • 10.00am – Planning your community
  • 10.30am – Building a strategic plan
  • 11.00am – Building collaborative workflow
  • 12.00pm – Governance: Part I
  • 12.30pm – Lunch
  • 1.30pm – Governance: Part II
  • 2.00pm – Marketing, advocacy, promotion, and social
  • 3.00pm – Measuring your community
  • 3.30pm – Tracking, measuring community management
  • 4.30pm – Burnout and conflict resolution
  • 5.00pm – Finish

I will warn you; it is an exhausting day, but ultimately rewarding. It covers a lot of ground in a short period of time, and then you can follow with further discussion of these and other topics on our Community Leadership discussion forum.

I hope to see you there!

by jono at August 15, 2014 07:27 PM

Akkana Peck

Time-lapse photography: stitching movies together on Linux

[Time-lapse clouds movie on youtube] A few weeks ago I wrote about building a simple Arduino-driven camera intervalometer to take repeat photos with my DSLR. I'd been entertained by watching the clouds build and gather and dissipate again while I stepped through all the false positives in my crittercam, and I wanted to try capturing them intentionally so I could make cloud movies.

Of course, you don't have to build an Arduino device. A search for timer remote control or intervalometer will find lots of good options around $20-30. I bought one so I'll have a nice LCD interface rather than having to program an Arduino every time I want to make movies.

Setting the image size

Okay, so you've set up your camera on a tripod with the intervalometer hooked to it. (Depending on how long your movie is, you may also want an external power supply for your camera.)

Now think about what size images you want. If you're targeting YouTube, you probably want to use one of YouTube's preferred settings, bitrates and resolutions, perhaps 1280x720 or 1920x1080. But you may have some other reason to shoot at higher resolution: perhaps you want to use some of the still images as well as making video.

For my first test, I shot at the full resolution of the camera. So I had a directory full of big ten-megapixel photos with filenames ranging from img_6624.jpg to img_6715.jpg. I copied these into a new directory, so I didn't overwrite the originals. You can use ImageMagick's mogrify to scale them all:

mogrify -scale 1280x720 *.jpg

I had an additional issue, though: rain was threatening and I didn't want to leave my camera at risk of getting wet while I went dinner shopping, so I moved the camera back under the patio roof. But with my fisheye lens, that meant I had a lot of extra house showing and I wanted to crop that off. I used GIMP on one image to determine the x, y, width and height for the crop rectangle I wanted. You can even crop to a different aspect ratio from your target, and then fill the extra space with black:

mogrify -crop 2720x1450+135+315 -scale 1280 -gravity center -background black -extent 1280x720 *.jpg

If you decide to rescale your images to an unusual size, make sure both dimensions are even, otherwise avconv will complain that they're not divisible by two.
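If you want to double check what dimensions your images actually ended up with, ImageMagick's identify will tell you (the filename here is just the first frame from the example above):

identify -format "%w %h\n" img_6624.jpg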

Finally: Making your movie

I found lots of pages explaining how to stitch together time-lapse movies using mencoder, and a few using ffmpeg. Unfortunately, in Debian, both are deprecated. Mplayer has been removed entirely. The ffmpeg-vs-avconv issue is apparently a big political war, and I have no position on the matter, except that Debian has come down strongly on the side of avconv and I get tired of getting nagged at every time I run a program. So I needed to figure out how to use avconv.

I found some pages on avconv, but most of them didn't actually work. Here's what worked for me:

avconv -f image2 -r 15 -start_number 6624 -i 'img_%04d.jpg' -vcodec libx264 time-lapse.mp4

Adjust the start_number and filename appropriately for the files you have.
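As a sanity check on length: at 15 frames per second, the 92 stills in the example above (img_6624.jpg through img_6715.jpg) add up to just over six seconds of video, so plan the number of shots in your session accordingly.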

Avconv produces an mp4 file suitable for uploading to YouTube. So here is my little test movie: Time Lapse Clouds.

August 15, 2014 06:05 PM

August 14, 2014

Elizabeth Krumbach

The Ubuntu Weekly Newsletter needs you!

On Monday we released Issue 378 of the Ubuntu Weekly Newsletter. The newsletter has thousands of readers across various formats from wiki to email to forums and discourse.

As we creep toward the 400th issue, we’ve been running a bit low on contributors. Thanks to Tiago Carrondo and David Morfin for pitching in these past few weeks while they could, but the bulk of the work has fallen to José Antonio Rey and myself and we can’t keep this up forever.

So we need more volunteers like you to help us out!

We specifically need folks to let us know about news throughout the week (email them to ubuntu-news-team@lists.ubuntu.com) and to help write summaries over the weekend. All links and summaries are stored in a Google Doc, so you don’t need to learn any special documentation formatting or revision control software to participate. Plus, everyone who participates is welcome to add their name to the credits!

Summary writers. Summary writers receive an email every Friday evening (or early Saturday) with a link to the collaborative news links document for the past week, which lists all the articles that need 2-3 sentence summaries. These people are vitally important to the newsletter. The time commitment is limited, and it is easy to get started from the first weekend you volunteer. No need to be shy about your writing skills; we have style guidelines to help you on your way, and all summaries are reviewed before publishing so it’s easy to improve as you go on.

Interested? Email editor.ubuntu.news@ubuntu.com and we’ll get you added to the list of folks who are emailed each week and you can help as you have time.

by pleia2 at August 14, 2014 04:41 AM

August 12, 2014

Elizabeth Krumbach

Fosscon 2014

Flying off to a conference on the other side of the country 2 weeks after having my gallbladder removed may not have been one of the wisest decisions of my life, but I am very glad I went. Thankfully MJ had planned on coming along to this event anyway, so I had companionship… and someone to carry the luggage :)

This was Fosscon‘s 5th year, 4th in Philadelphia and the 3rd one I’ve been able to attend. I was delighted this year to have my employer, HP, sponsor the conference at a level that gave us a booth and track room. Throughout the day I was attending talks, giving my own and chatting with people at the HP booth about the work we’re doing in OpenStack and opportunities for people who are looking to work with open source technologies.

The day started off with a keynote by Corey Quinn titled “We are not special snowflakes” which stressed the importance of friendliness and good collaboration skills in technical candidates.

I, for one, am delighted to see us as an industry moving away from BOFHs and kudos for antisocial behavior. I may not be a social butterfly, but I value the work of my peers and strive to be someone people enjoy working with.

After the keynote I did a talk about having a career in FOSS. I was able to tell stories about my own work and experiences and those of some of my colleagues. I talked about my current role at HP and spent a fair amount of time giving participation examples related to my work on Xubuntu. I must really enjoy this topic, because I didn’t manage to leave time for questions! Fortunately I think I made up for it in some great chats with other attendees throughout the day.

My slides from the talk are available here: FOSSCON-2014-FOSS_career.pdf

Some other resources related to my talk:

Throughout the conference I was able to visit with my friends at the Ubuntu booth. They had brought along a couple copies of The Official Ubuntu Book, 8th Edition for me to sign (hooray!) and then sell to conference attendees. I brought along my Ubuntu tablet, which they were able to have at the booth, and which MJ grabbed from me during a session when someone asked to see a demo.

After lunch I went to see Charlie Reisinger’s “Lessons From Open Source Schoolhouse” where he talked about the Ubuntu deployments in his school district. I’ve been in contact with Charlie for quite some time now since the work we do with Partimus also puts us in schools, but he’s been able to achieve some pretty exceptional success in his district. It was a great pleasure to finally meet him in person and his talk was very inspiring.

I’ve been worried for quite some time that children growing up today will only have access to tablets and smart phones that I classify as “read only devices.” I think back to when I first started playing with computers and the passion for them grew out of the ability to tinker and discover, if my only exposure had been a tablet I don’t think I’d be where I am today. Charlie’s talk went in a similar direction, particularly as he revealed that he controversially allows students to have administrative (sudo) access on the Ubuntu laptops! The students feel trusted, empowered and in the time the program has been going on, he’s been able to put together a team of student apprentices who are great at working with the software and can help train other students, and teachers too.

It was also interesting to learn that after the district got so much press the students began engaging people in online communities.

Fosscon talks aren’t recorded, but check out Charlie’s TEDx Lancaster talk to get a taste of the key points about student freedom and the apprentice program he covered: Enabling students in a digital age: Charlie Reisinger at TEDxLancaster

GitHub for Penn Manor School District here: https://github.com/pennmanor

The last talk I went to of the day was by Robinson Tryon on “LibreOffice Tools and Tricks For Making Your Work Easier” where I was delighted to see how far they’ve come with the Android/iOS Impress remote and the work being done in the space of editing PDFs, including the development of Hybrid PDFs, which contain full versions of both formats and can be opened either by LibreOffice for editing or by any PDF viewer. I also hadn’t realized that LibreOffice retained any of its command line tools, so it was pretty cool to learn about soffice --headless --convert-to for doing CLI-based conversions of files.
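For instance, converting a document to PDF from a script looks something like this (the output lands in the current directory; the filename is just a placeholder):

soffice --headless --convert-to pdf mydocument.odt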

Huge thanks to the volunteers who make Fosscon happen. The Franklin Institute was a great venue and aside from the one room downstairs, I think the layout worked out well for us. Booths were in common spaces that attendees congregated in, and I was even able to meet some tech folks who were just at the museum and happened upon us, which was a lot of fun.

More photos from the event here: https://www.flickr.com/photos/pleia2/sets/72157646362111741/

by pleia2 at August 12, 2014 04:43 PM

August 10, 2014

Akkana Peck

Sphinx Moths

[White-lined sphinx moth on pale trumpets] We're having a huge bloom of a lovely flower called pale trumpets (Ipomopsis longiflora), and it turns out that sphinx moths just love them.

The white-lined sphinx moth (Hyles lineata) is a moth the size of a hummingbird, and it behaves like a hummingbird, too. It flies during the day, hovering from flower to flower to suck nectar, being far too heavy to land on flowers like butterflies do.

[Sphinx moth eye] I've seen them before, on hikes, but only gotten blurry shots with my pocket camera. But with the pale trumpets blooming, the sphinx moths come right at sunset and feed until near dark. That gives a good excuse to play with the DSLR, telephoto lens and flash ... and I still haven't gotten a really sharp photo, but I'm making progress.

Check out that huge eye! I guess you need good vision in order to make your living poking a long wiggly proboscis into long skinny flowers while laboriously hovering in midair.

Photos here: White-lined sphinx moths on pale trumpets.

August 10, 2014 03:23 AM

August 08, 2014

iheartubuntu

TAILS The Privacy Distro


TAILS, the anonymizing distribution, released version 1.1 about two weeks ago – which means you can download it now. The Tails 1.1 release is largely a catalog of security fixes and bug fixes, limiting itself otherwise to minor improvements such as the ISO upgrade and installer and the Windows 8 camouflage. This is one to grab to keep your online privacy intact.

https://tails.boum.org/

by iheartubuntu (noreply@blogger.com) at August 08, 2014 06:09 PM

August 05, 2014

Akkana Peck

Privacy Policy

I got an envelope from my bank in the mail. The envelope was open and looked like the flap had never been sealed.

Inside was a copy of their privacy policy. Nothing else.

The policy didn't say whether their privacy policy included sealing the envelope when they send me things.

August 05, 2014 07:22 PM

Elizabeth Krumbach

Recovery reading

During the most painful phase of the recovery from my gallbladder removal I wasn’t able to do a whole lot. I took short walks around the condo to relieve stiffness and bloating post-surgery, but mostly I was resting to encourage healing. Sitting up hurt, so I spent a lot of time in bed. But what to do? So bored! I ended up reading a lot.

I don’t often write about what I’ve been reading, but I typically have 6 or so books going of various genres, usually one or two about history and/or science, a self improvement type of book (improving speaking, time/project management), readable tech (not reference), scifi/fantasy, fiction (usually cheesy/easy read, see Ian Fleming below!), social justice. This is largely reflected in what I read this past week, but for some reason I’ve been slanted toward history more than scifi/fantasy lately.

Surviving Justice: America’s Wrongfully Convicted and Exonerated edited by Dave Eggers and Lola Vollen. I think I heard about this book from a podcast since I’ve had a recent increase in interest in capital punishment following the narrowly defeated Prop 34 in 2012 seeking to end capital punishment in California. I’ve long been against capital punishment for a variety of reasons, and the real faces that this book put on wrongfully accused people (some of whom were on death row) really solidified some of my feelings around it. The book is made up of interviews from several exonerated individuals from all walks of life and gives a sad view into how their convictions ruined their lives and the painful process they went through to finally prove their innocence. Highly recommended.

Siddhartha by Hermann Hesse. I read this book in high school, and it interested me then but I always wanted to get back and read it as an adult with my perspectives now. It was a real pleasure, and much shorter than I remembered!

Casino Royale, by Ian Fleming. One of my father’s guilty pleasures was reading Ian Fleming books. Unfortunately his copies have been lost over the years, so when I started looking for my latest paperback indulgence I loaded up my Nook to start diving in. Fleming’s opinion and handling of women in his books is pretty dreadful, but once I put aside that part of my brain and just enjoyed it I found it to be a lot of fun.

The foundation for an open source city by Jason Hibbets. I saw Hibbets speak at Scale12x this year and downloaded the epub version of this book then. He hails from Raleigh, NC where over the past several years he’s been working in the community there to make the city an “Open Source City” – defined by one which not only uses open source tools, but also has an open source philosophy for civic engagement, from ordinary citizen to the highest level of government. The book goes through a series of projects they’ve done in Raleigh, as well as expanding to experiences that he’s had with other cities around the country, giving advice for how other communities can accomplish the same.

Orla’s Code by Fiona Pearse. This book tells of the life and work of Orla, a computer programmer in London. Having a fiction book about a woman in a field so near to my own profession was particularly enjoyable for me!

Book of Ages: The Life and Opinions of Jane Franklin by Jill Lepore. I heard about this book through another podcast, and as a big Ben Franklin fan I was eager to learn more about his sister! I loved how Lepore wove in pieces of Ben Franklin’s life with that of his sister and the historical context in which they were living. She also worked to give the unedited excerpts from Jane’s letters, even if she had to then spend a paragraph explaining the meaning and context due to Jane’s poor writing skills. Having the book presented in this way gave an extra depth of understanding of Jane’s level of education and subsequent hardships, while keeping it a very enjoyable, if often sad, read.

Freedom Rider Diary: Smuggled Notes from Parchman Prison by Carol Ruth Silver. I didn’t intend to read two books related to prisons while I was laid up (as I routinely tell my friends “I don’t like prison shows”), but I was eager to read this one because I’ve had the pleasure of working with Carol Ruth Silver on some OLPC-SF stuff and she’s been a real inspiration to me. The book covers Silver’s time as a Freedom Rider in the south in 1961 and the 40 days she spent in jail and prison with fellow Freedom Riders resisting bail. She was able to take shorthand-style notes on whatever paper she could find and then type them up following her experience, so now 50 years later they are available for this book. The journal style of this book really pulled me in to this foreign world of the Civil Rights movement which I’m otherwise inclined to feel was somehow very distant and backwards. It was also exceptionally inspiring to read how these young men and women traveled for these rides and put their bodies on the line for a cause that many argued “wasn’t their problem” at all. The Afterword by Cherie A. Gaines was also wonderful.

Those were the books I finished, but I also put a pretty large dent in the following:

All of these are great so far!

by pleia2 at August 05, 2014 04:06 PM

iheartubuntu

Diagnosing Internet Problems


Recently I had been having some internet speed problems. There are several basic checks you can do yourself such as double checking your wired connections are all plugged in properly, making sure you are logged onto the correct wi-fi network :) and so on. You can even check to make sure your modem or router is not overheating (I once had one that was smoking).

So here are some tests you can run if you think you might have a problem, and what the heck, check even if there isn’t a problem so you know where you stand.

Pingtest checks your line quality by examining packet loss, ping rate and jitter rate. Low jitter means you have a stable connection.

Give yours a test here: http://www.pingtest.net/
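If you'd rather get a rough number without a browser, you can approximate the same measurements yourself. Here's a minimal Python sketch (unrelated to Pingtest itself; it shells out to the standard Unix ping and uses Google's public DNS server as an arbitrary target):

import re, subprocess

# Ping a well-connected host 20 times. ping exits non-zero when packets
# are lost, so read its output directly rather than using check_output.
proc = subprocess.Popen(["ping", "-c", "20", "8.8.8.8"],
                        stdout=subprocess.PIPE)
output = proc.communicate()[0]
times = [float(t) for t in re.findall(r"time=([\d.]+)", output)]

if times:
    loss = 100.0 * (20 - len(times)) / 20
    # Crude jitter estimate: mean difference between successive round trips.
    jitter = sum(abs(a - b) for a, b in zip(times, times[1:])) \
             / max(len(times) - 1, 1)
    print "%d%% loss, %.1f ms average, %.1f ms jitter" % (
        loss, sum(times) / len(times), jitter)
else:
    print "No replies at all -- time to check the cables!"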

I also like to use M-Labs Network Diagnostic Test (java required). Besides the basic up/down speeds, it also has sophisticated diagnosis of any problems limiting your speed.

Check it out here: http://www.measurementlab.net/tools/ndt

M-Labs also has NPAD test (Network Path & Application Diagnostics; java required) which is designed to diagnose network performance problems in your end-system (the machine your browser is running on) or the network between it and your nearest NPAD server (basically the last mile or so of your broadband). For each diagnosed problem, the server prescribes corrective actions with instructions suitable for non-experts.

Finally, you can also install NEUBOT (Ubuntu Linux, Windows, Mac). Neubot is a research project on network neutrality. Transmission tests probe the Internet using various application level protocols. The results dataset contains samples from various providers and is published on the web, allowing anyone to analyze the data for research purposes.

With this software you can also check to see if your internet service provider is throttling your internet speed for any reason.

Learn more & how to install here:

http://www.neubot.org/neubot-install-guide

Ubuntu users can easily install it with the DEB file...

http://releases.neubot.org/neubot-0.4.16.9-1_all.deb
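Once it's downloaded, installing is the usual routine for a standalone .deb (run from wherever you saved the file):

sudo dpkg -i neubot-0.4.16.9-1_all.deb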

At home I determined my wi-fi card was the problem and replaced it. At work, I found my ISP was throttling internet speeds when using Deluge (a bittorrent-like app). Knowledge is power!

by iheartubuntu (noreply@blogger.com) at August 05, 2014 12:18 PM

August 02, 2014

Elizabeth Krumbach

The gallbladder ordeal

3 months ago I didn’t know where or what a gallbladder was.

Turns out it’s a little thing that helps out the liver by storing some bile (gall). It also turns out not to be strictly required in most people, luckily for me.


“Blausen 0428 Gallbladder-Liver-Pancreas Location” by BruceBlaus – Own work. Licensed under Creative Commons Attribution 3.0 via Wikimedia Commons

Way back in April I came down with what I thought was a stomach bug. It was very painful and lasted 3 days before I went to an urgent care clinic to make sure nothing major was wrong. They took some blood samples and sent me on my way, calling it a stomach bug. When blood results came in I was showing elevated liver enzymes and was told to steer clear of red meat, alcohol and fatty foods.

The active “stomach bug” went away pretty quickly and after a couple weeks of boring diet the pain went away too. Hooray!

2 weeks later the pain and “stomach bug” came back. This time I ended up in the emergency room, dehydrated and in severe pain. They did some blood work and a CT scan to confirm my appendix wasn’t swollen and sent me home after a few hours. At this point we’re in early May and I had to cancel attending the OpenStack Summit in Atlanta because of the pain. That sucked.

May and June saw 3 major diagnostic tests to figure out what was wrong. I continued avoiding alcohol and fatty foods since they did make it worse, but the constant, dull pain persisted. I stopped exercising, switched to small meals which would hurt less and was quite tired and miserable. Finally, in July they came to the conclusion that I had gallbladder “sludge” and that my gallbladder should probably be removed.

Sign me up!

In preparation for my surgery I read a lot, talked with lots of people who had theirs out, and found their experiences landed in two categories:

  1. Best thing I ever did, no residual problems and the $%$# pain is gone!
  2. Wish I had tried managing it first, I now have trouble digesting fatty/fried foods and alcohol

This was a little worrying, but given the constant pain I’d been in for 3 months I was willing to deal with the potential side effects. Fortunately feedback was pretty consistent regarding immediate recovery: the surgery is easy and recovery is quick.

My surgery was on July 24th.

They offered it as either outpatient or a single night in the hospital, and I opted for outpatient. I arrived at 8AM and was sent home by 1PM, without a gallbladder and nibbling on animal crackers and water. Easy!

Actually, the first 3 days were a bit tough. It was a laparoscopic surgery that only required 4 small incisions, so I had pain in my belly and at the incision sites. Recovery time varies by individual, but they loosely estimated a week for basic recovery and 2-3 weeks before you’re fully recovered. They recommend both a lot of rest and walking as you can, so that you can rid your body of stiffness and bloating from the surgery, leading to a quicker recovery. MJ was able to take time off of work Thursday and Friday and spend the weekend taking care of me.

As the weekend progressed sitting up was still a bit painful, so that limited TV watching. I could only sleep on my back which started causing some neck and back soreness. I did a lot of reading! Books, magazines, caught up on RSS feeds that I fed to my phone. Sunday evening I was able to take the bandages off the incision sites, leaving the wound closure strips in place (in lieu of stitches, and they said they should fall off in 10-14 days). I got dizzy and became nauseated while removing the bandages, which was very unusual for me because blood and stuff doesn’t tend to bother me. I think I was just nervous about finding an infection or pulling on one of the closure strips too hard, but it all went well.

By Monday I was doing a bit better and was able to go outside to pick up some breakfast and walk a block down to the pharmacy (both in my pajamas – haha!). The rest of the week went like this: each day I felt a little better, but I was still taking the pain medication. Tuesday I spent some time at my desk on email triage so I could respond to anything urgent and have a clearer idea of my task list when I was feeling better. Sitting up got easier, so I added some binge TV watching into the mix and also finally had the opportunity to watch some videos from the OpenStack Summit I missed – awesome!

On Wednesday afternoon I started easing back into work with a couple of patch fix-ups and starting to more actively follow up with email. I even made it out to an OpenStack 4th birthday party for a little while on Wednesday night, which was fortuitously held at a gallery on my block so I was able to go home quickly as soon as I started feeling tired. I’m also happy to say that I wore an elastic waist cotton skirt to this, not my pajamas! Thursday and Friday I still took a lot of breaks from my desk, but was able to start getting caught up with work.

I’m still taking it easy this weekend and on Tuesday I have a follow-up appointment with the surgeon to confirm that everything is healing well. I am hopeful that I’ll be feeling much better by Monday, and certainly by the time I’m boarding a plane to Philly on Thursday. Fortunately MJ is coming with me and has offered to handle the luggage, which is great because aside from wanting him to join me on this trip anyway, I probably won’t be ready to haul around anything heavy yet.

So far I haven’t had trouble eating anything, even when I took a risk and had pizza (fatty!) and egg rolls (fried!) this week. And while I still have surgical pain lurking around and some more healing to do, the constant pain I was having left with my gallbladder. I am so happy! This has truly been a terrible few months for me, I’m looking forward to having energy again so I can get back to my usual productive self and to getting back on track with my diet and exercise routine.

by pleia2 at August 02, 2014 01:26 PM

August 01, 2014

Akkana Peck

Predicting planetary visibility with PyEphem

Part II: Predicting Conjunctions

After I'd written a basic script to calculate when planets will be visible, the next step was predicting conjunctions, times when two or more planets are close together in the sky.

Finding separation between two objects is easy in PyEphem: it's just one line once you've set up your objects, observer and date.

p1 = ephem.Mars()
p2 = ephem.Jupiter()
observer = ephem.Observer()
observer.lat = '35.8911'       # your observer's latitude, as a string
observer.lon = '-106.2978'     # and longitude
observer.date = ephem.date('2014/8/1')
p1.compute(observer)
p2.compute(observer)

ephem.separation(p1, p2)
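One gotcha: ephem.separation() returns an ephem.Angle, which prints prettily as degrees:minutes:seconds but behaves as a plain float in radians when you compare it, so a threshold like 4 degrees needs converting to radians first (4. * math.pi / 180.).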

So all I have to do is loop over all the visible planets and see when the separation is less than some set minimum, like 4 degrees, right?

Well, not really. That tells me if there's a conjunction between a particular pair of planets, like Mars and Jupiter. But the really interesting events are when you have three or more objects close together in the sky. And events like that often span several days. If there's a conjunction of Mars, Venus, and the moon, I don't want to print something awful like

Friday:
  Conjunction between Mars and Venus, separation 2.7 degrees.
  Conjunction between the moon and Mars, separation 3.8 degrees.
Saturday:
  Conjunction between Mars and Venus, separation 2.2 degrees.
  Conjunction between Venus and the moon, separation 3.9 degrees.
  Conjunction between the moon and Mars, separation 3.2 degrees.
Sunday:
  Conjunction between Venus and the moon, separation 4.0 degrees.
  Conjunction between the moon and Mars, separation 2.5 degrees.

... and so on, for each day. I'd prefer something like:

Conjunction between Mars, Venus and the moon lasts from Friday through Sunday.
  Mars and Venus are closest on Saturday (2.2 degrees).
  The moon and Mars are closest on Sunday (2.5 degrees).

At first I tried just keeping a list of planets involved in the conjunction. So if I see Mars and Jupiter close together, I'd make a list [mars, jupiter], and then if I see Venus and Mars on the same date, I search through all the current conjunction lists and see if either Venus or Mars is already in a list, and if so, add the other one. But that got out of hand quickly. What if my conjunction list looks like [ [mars, venus], [jupiter, saturn] ] and then I see there's also a conjunction between Mars and Jupiter? Oops -- how do you merge those two lists together?

The solution to taking all these pairs and turning them into a list of groups that are all connected actually lies in graph theory: each conjunction pair, like [mars, venus], is an edge, and the trick is to find all the connected edges. But turning my list of conjunction pairs into a graph so I could use a pre-made graph theory algorithm looked like it was going to be more code -- and a lot harder to read and less maintainable -- than making a bunch of custom Python classes.

I eventually ended up with three classes: ConjunctionPair, for a single conjunction observed between two bodies on a single date; Conjunction, a collection of ConjunctionPairs covering as many bodies and dates as needed; and ConjunctionList, the list of all Conjunctions currently active. That let me write methods to handle merging multiple conjunction events together if they turned out to be connected, as well as a method to summarize the event in a nice, readable way.
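To give a flavor of the approach, here's a heavily simplified sketch of the merging logic -- these aren't the actual classes from the script linked below, just the core idea of folding each new pair into whichever existing groups share a body with it:

class Conjunction:
    """One event: a set of bodies close together over a span of dates."""
    def __init__(self):
        self.bodies = set()
        self.pairs = []                # (body1, body2, date, separation)

    def add_pair(self, b1, b2, date, sep):
        self.bodies.update([b1, b2])
        self.pairs.append((b1, b2, date, sep))

class ConjunctionList:
    """All the conjunctions currently active."""
    def __init__(self):
        self.conjunctions = []

    def add_pair(self, b1, b2, date, sep):
        # Find every existing event that shares a body with this pair:
        matches = [c for c in self.conjunctions
                   if b1 in c.bodies or b2 in c.bodies]
        if not matches:
            matches = [Conjunction()]
            self.conjunctions.append(matches[0])
        # If the new pair bridges two events -- the [mars, venus] plus
        # [jupiter, saturn] problem above -- merge them into one:
        event = matches[0]
        for extra in matches[1:]:
            event.bodies |= extra.bodies
            event.pairs += extra.pairs
            self.conjunctions.remove(extra)
        event.add_pair(b1, b2, date, sep)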

So predicting conjunctions ended up being a lot more code than I expected -- but only because of the problem of presenting it neatly to the user. As always, user interface represents the hardest part of coding.

The working script is on github at conjunctions.py.

August 01, 2014 01:57 AM

July 31, 2014

Elizabeth Krumbach

A Career in FOSS at Fosscon in Philadelphia, August 9th

After years fueled by hobbyist passion, I’ve been really excited to see how the work that many of my peers and I have been doing in open source has grown into serious technical careers these past few years. Whether you’re a programmer, community manager, systems administrator like me, or another type of technologist, familiarity with Open Source technology, culture and projects can be a serious boon to your career.

Last year when I attended Fosscon in Philadelphia, I did a talk about my work as an “Open Source Sysadmin” – meaning all my work for the OpenStack Infrastructure team is done in public code repositories. Following my talk I got a lot of questions about how I’m funded to do this, and a lot of interest in the fact that a company like HP is making such an investment.

So this year I’m returning to Fosscon to talk about these things! In addition to my own experiences with volunteer and paid work in Open Source, I’ll be drawing experience from my colleague at HP, Mark Atwood, who recently wrote 7 skills to land your open source dream job, and those of other folks I work with who are also “living the dream” with a job in open source.

I’m delighted to be joined at this conference by keynote speaker and friend Corey Quinn and by Charlie Reisinger of Penn Manor School District, who I’ve chatted with via email and social media many times about the amazing Ubuntu deployment at his district and whom I am looking forward to finally meeting.

In Philadelphia or nearby? The conference is coming up on Saturday, August 9th and is being held at the world-renowned Franklin Institute science museum.

Registration for the conference is free, but you get a t-shirt if you pay the small fee of $25 to support the conference (I did!): http://fosscon.us/Attend

by pleia2 at July 31, 2014 05:05 PM

July 24, 2014

Akkana Peck

Predicting planetary visibility with PyEphem

Part 1: Basic Planetary Visibility

All through the years I was writing the planet observing column for the San Jose Astronomical Association, I was annoyed at the lack of places to go to find out about upcoming events like conjunctions, when two or more planets are close together in the sky. It's easy to find out about conjunctions in the next month, but not so easy to find sites that will tell you several months in advance, like you need if you're writing for a print publication (even a club newsletter).

For some reason I never thought about trying to calculate it myself. I just assumed it would be hard, and wanted a source that could spoon-feed me the predictions.

The best source I know of is the RASC Observer's Handbook, which I faithfully bought every year and checked each month so I could enter that month's events by hand. Except for January and February, when I didn't have the next year's handbook yet by the time my column went to press and I was on my own. I have to confess, I was happy to get away from that aspect of the column when I moved.

In my new town, I've been helping the local nature center with their website. They had some great pages already, like a What's Blooming Now? page that keeps track of which flowers are blooming now and only shows the current ones. I've been helping them extend it by adding features like showing only flowers of a particular color, separating the data into CSV databases so it's easier to add new flowers or butterflies, and so forth. Eventually we hope to build similar databases of birds, reptiles and amphibians.

And recently someone suggested that their astronomy page could use some help. Indeed it could -- it hadn't been updated in about five years. So we got to work looking for a source of upcoming astronomy events we could use as a data source for the page, and we found sources for a few things, like moon phases and eclipses, but not much.

Someone asked about planetary conjunctions, and remembering how I'd always struggled to find that data, especially in months when I didn't have the RASC handbook yet, I got to wondering about calculating it myself. Obviously it's possible to calculate when a planet will be visible, or whether two planets are close to each other in the sky. And I've done some programming with PyEphem before, and found it fairly easy to use. How hard could it be?

Note: this article covers only the basic problem of predicting when a planet will be visible in the evening. A followup article will discuss the harder problem of conjunctions.

Calculating planet visibility with PyEphem

The first step was figuring out when planets were up. That was straightforward. Make a list of the easily visible planets (remember, this is for a nature center, so people using the page aren't expected to have telescopes):

import ephem

planets = [
    ephem.Moon(),
    ephem.Mercury(),
    ephem.Venus(),
    ephem.Mars(),
    ephem.Jupiter(),
    ephem.Saturn()
    ]

Then we need an observer with the right latitude, longitude and elevation. Elevation is apparently in meters, though they never bother to mention that in the PyEphem documentation:

observer = ephem.Observer()
observer.name = "Los Alamos"
observer.lon = '-106.2978'
observer.lat = '35.8911'
observer.elevation = 2286  # meters, though the docs don't actually say

Then we loop over the date range for which we want predictions. For a given date d, we're going to need to know the time of sunset, because we want to know which planets will still be up after nightfall.

sun = ephem.Sun()
observer.date = d
sunset = observer.previous_setting(sun)

Then we need to loop over planets and figure out which ones are visible. It seems like a reasonable first approach to declare that any planet that's visible after sunset and before midnight is worth mentioning.

Now, PyEphem can tell you directly the rising and setting times of a planet on a given day. But I found it simplified the code if I just checked the planet's altitude at sunset and again at midnight. If either one of them is "high enough", then the planet is visible that night. (Fortunately, here in the mid latitudes we don't have to worry that a planet will rise after sunset and then set again before midnight. If we were closer to the arctic or antarctic circles, that would be a concern in some seasons.)

import math

min_alt = 10. * math.pi / 180.    # minimum altitude: 10 degrees, in radians
for planet in planets:
    observer.date = sunset
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "is already up at sunset"

Easy enough for sunset. But how do we set the date to midnight on that same night? That turns out to be a bit tricky with PyEphem's date class. Here's what I came up with:

    midnight = list(observer.date.tuple())
    midnight[3:6] = [7, 0, 0]
    observer.date = ephem.date(tuple(midnight))
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "will rise before midnight"

What's that 7 there? That's Greenwich Mean Time when it's midnight in our time zone. It's hardwired because this is for a web site meant for locals. Obviously, for a more general program, you should get the time zone from the computer and add accordingly, and you should also be smarter about daylight savings time and such. The PyEphem documentation, fortunately, gives you tips on how to deal with time zones. (In practice, though, the rise and set times of planets on a given day doesn't change much with time zone.)
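For what it's worth, here's a rough sketch of getting that offset from the computer instead of hardwiring it, using Python's standard time module. It assumes the machine running the script is in the observer's time zone and, like the hardwired 7, that you're west of Greenwich:

import time

# Seconds west of UTC, respecting whether DST is currently in effect:
if time.localtime().tm_isdst:
    offset = time.altzone
else:
    offset = time.timezone
midnight[3:6] = [offset // 3600, 0, 0]   # e.g. 6 during Mountain Daylight Time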

And now you have your predictions of which planets will be visible on a given date. The rest is just a matter of writing it out into your chosen database format.

In the next article, I'll cover planetary and lunar conjunctions -- which were superficially very simple, but turned out to have some tricks that made the programming harder than I expected.

July 24, 2014 03:32 AM

July 22, 2014

Elizabeth Krumbach

Surgery coming up, Pride, Tiburon and a painting

This year has been super packed with conferences and travel. I’ve done 13 talks across 3 continents and have several more coming up in the next few months. I’ve also been squeezing in the hosting of Ubuntu Hours each month.


Buttercup at his first Utopic Unicorn cycle Ubuntu Hour

Aside from all this, life-wise things have been pretty mellow due to my abdominal pain (sick of hearing about it yet?). I’ve been watching a lot of TV because of how exhausted the pain is making me. Exercise has totally taken a back seat; this compounds the tiredness and means I’ve put on some weight that I’m not at all happy about. Once I’m better I plan on starting Couch to 5K again and may also join a new gym to get back into shape.

The gallbladder removal surgery itself is on Thursday and I’m terribly nervous about it. Jet lag combined with surgery nervousness means I haven’t been sleeping exceptionally well either. I’m not looking forward to the recovery; it should be relatively fast (a couple of weeks), but I’m a terrible patient and get bored easily when I’m not doing things. It will take a lot of effort not to put too much stress on my system too quickly. I’ll be so happy when this is all over.

I did take some time to do a few things though. On June 29th our friend Danita was still in town and we got to check out the Pride parade, which is always a lot of fun, even if I did get a bit too much sun.

Lots more photos from the parade here: https://www.flickr.com/photos/pleia2/sets/72157645439712155/

MJ and I also took a Sunday to drive north a couple weeks ago to visit Tiburon for some brunch. It was a beautiful day for it, and it’s always nice to further explore the beautiful places around where we live. I hope we can make more time for it.


Sunny day in Tiburon!

Finally, I’m happy to report that after a couple months, I’ve gotten a painting back from Chandler Fine Art who was working with a restoration artist to clean it up and to have it framed. Not much can be done about the cracks without a significant amount of work (the nature of oil paintings!) but they were able to fix a dent in the canvas and clean up some stains, I can’t even tell where the defects were now.

It may not strictly match the decor of our home, but it was a favorite of my father’s growing up and it’s nice to have such a nice memory from my childhood hanging here now.

by pleia2 at July 22, 2014 04:05 PM

July 21, 2014

Elizabeth Krumbach

The Official Ubuntu Book, 8th Edition now available!

This past spring I had the great opportunity to work with Matthew Helmke, José Antonio Rey and Debra Williams of Pearson on the 8th edition of The Official Ubuntu Book.

Official Ubuntu Book, 8th Edition

In addition to the obvious task of updating content, one of our most important tasks was working to “future proof” the book more by doing rewrites in a way that would make sure the content of the book was going to be useful until the next Long Term Support release, in 2016. This meant a fair amount of content refactoring, less specifics when it came to members of teams and lots of goodies for folks looking to become power users of Unity.

Quoting the product page from Pearson:

The Official Ubuntu Book, Eighth Edition, has been extensively updated with a single goal: to make running today’s Ubuntu even more pleasant and productive for you. It’s the ideal one-stop knowledge source for Ubuntu novices, those upgrading from older versions or other Linux distributions, and anyone moving toward power-user status.

Its expert authors focus on what you need to know most about installation, applications, media, administration, software applications, and much more. You’ll discover powerful Unity desktop improvements that make Ubuntu even friendlier and more convenient. You’ll also connect with the amazing Ubuntu community and the incredible resources it offers you.

Huge thanks to all my collaborators on this project. It was a lot of fun to work with them and I already have plans to work with all three of them on other projects in the future.

So go pick up a copy! As my first published book, I’d be thrilled to sign it for you if you bring it to an event I’m at; upcoming events include:

And of course, monthly Ubuntu Hours and Debian Dinners in San Francisco.

by pleia2 at July 21, 2014 04:21 PM

July 20, 2014

Elizabeth Krumbach

Tourist in Darmstadt

This past week I was in Germany! I’ve gone through Frankfurt many times over the years, but this was the first time I actually left the airport via ground transportation.


Trip began with a flight on a Lufthansa 380

Upon arrival I found the bus stop for the shuttle to Darmstadt and after a 20 minute ride was at Hauptbahnhof (main transit station) in Darmstadt and a very short walk took me to the Maritim Konferenzhotel Darmstadt where I’d be staying for the week.

The hotel was great, particularly for a European hotel. The rooms were roomy, the shower was amazing, and all the food I had was good.

Our timing on the sprint was pretty exceptional, with most of us arriving on Sunday just in time to spend the evening watching the World Cup final, which Germany was in! Unfortunately for us the beer gardens in the city required reservations and we didn’t have any, so we ended up camping out in the hotel bar and enjoying the game there, along with some beers and good conversations. In spite of my current gallbladder situation, I made an exception to my abstinence from alcohol that night and had a couple of beers to commemorate the World Cup and my first proper time in Germany.


Beer, World Cup

Unfortunately I wasn’t so lucky gallbladder-wise the rest of the week. I’m not sure if I was having some psychosomatic reaction to knowing the removal surgery is so close, but it definitely felt like I was in more pain this week. This kept me pretty close to the hotel and I sadly had to skip most of the evenings out with my co-workers at beer gardens because I was too tired, in pain and couldn’t have beer anyway.

I did make it out on Wednesday night, since I couldn’t resist a visit to Darmstädter Ratskeller, even if I did only have apple juice. This evening brought me into Darmstadt center where I got to take all my tourist photos, and also gave me an opportunity to visit the beer garden and chat with everyone.


Darmstädter Ratskeller

Plus, I managed to avoid pork by ordering Goulash – a dish I hadn’t had the opportunity to enjoy since my childhood.


Goulash! Accompanied by apple juice

I wish I had felt up to more adventuring. Had I felt better I probably would have spent a few extra days in Frankfurt proper giving myself a mini-vacation to explore. Next time.

All photos from my adventure that night in Darmstadt center (and planes and food and things!) here: https://www.flickr.com/photos/pleia2/sets/72157645839688233/

by pleia2 at July 20, 2014 03:58 PM

July 19, 2014

Elizabeth Krumbach

OpenStack QA/Infrastructure Meetup in Darmstadt

I spent this week at the QA/Infrastructure Meetup in Darmstadt, Germany.

Our host was Marc Koderer of Deutsche Telekom, who sorted out all the logistics for having our event at their office in Darmstadt. Aside from the summer heat (the conference room lacked air conditioning), it all worked out very well: we had a lot of space to work, the food was great, and we had plenty of water. It was also nice that the hotel most of us stayed at was an easy walk away.

The first day kicked off with an introduction by Deutsche Telekom that covered what they’re using OpenStack for in their company. Since they’re a network provider, networking support was a huge component, but they use other components as well to build an infrastructure that supports a quicker software development cycle, one less tied to the hardware lifetime. We also got a quick tour of one of their data centers and a demo of some of the running prototypes for quicker provisioning and changing of service levels for their customers.

Monday afternoon was spent on an on-boarding tutorial for newcomers to contributing to OpenStack, and on Tuesday we transitioned into an overview of the OpenStack Infrastructure and QA systems that we’d be working on for the rest of the week. Beyond the overview of the infrastructure presented by James E. Blair, key topics included jeepyb presented by Jeremy Stanley, devstack-gate and Grenade presented by Sean Dague, Tempest presented by Matthew Treinish (including the very useful Tempest Field Guide), and our Elasticsearch, Logstash and Kibana (ELK) stack presented by Clark Boylan.

Wednesday we began the hacking/sprint portion of the event, moving to another conference room and rearranging tables so we could get into our respective working groups. Anita Kuno presented the Infrastructure User Manual, which we’re looking to flesh out, and gave attendees the task of helping to write a section to guide users of our CI system. This ended up being a great way for newcomers to get their feet wet, and I hope to have a similar entry-level task at every infrastructure sprint moving forward. Some folks worked on support for uploading log files to Swift, some on architecting multinode testing, and others worked on Tempest. In the early afternoon we had discussions covering recheck language, the next steps I’d be taking in evaluating translation tools, a “Gerrit wishlist” of items developers are looking for as Khai Do prepares to attend a Gerrit hack event, and more. I also took time on Wednesday to dive into some documentation that I noticed needed updating after the previous day’s tutorials.

Thursday the work continued: I did some reviews, helped out a couple of new contributors, and wrote my own patch for the Infra Manual. It was also great to learn about and collaborate on some aspects of our systems that I’m less familiar with, and to explain the portions I know well to others.


Zuul supervised my work

Friday was a full day of discussions, which were great but a bit overwhelming (it might have been nice to have had more of them on Thursday). Discussions kicked off with strategies for handling the continued publishing of OpenStack Documentation, which is currently just being published to a proprietary web platform donated by one of the project sponsors.

We then had a very long discussion about managing the gate runtime growth. Managing developer and user expectations for our gating system (thorough, accurate testing) while balancing the human and compute resources we have available on the project is a tough thing to do. Some technical solutions to ease the pain of certain failures were floated and may end up being used, but the key takeaway I had from this discussion was that we’d really like the community to be more engaged with us and with each other (particularly when patches impact projects or functionality that you might not feel is central to your patch). We also want to stress that the infrastructure is a living entity that evolves, and we welcome ideas and solutions to the problems we’re encountering, since right now the team is quite small for what we’re doing. Finally, there were some comments about how we run tests in the process of reviewing, how scalable the growth of tests is over time, and how we might lighten that load (start doing some “traditional CI” post-merge jobs? add some periodic jobs? leverage experimental jobs more?).

The discussion I was most keen on was around refactoring our infrastructure to make it more easily consumable by third parties. Our vision early on was that we were an open source project ourselves and that all of our customizations were a kind of example for others to learn from, not something they’d want to use directly, so we hard-coded a lot into our special openstack_projects module. As the project has grown and more organizations have started to use the infrastructure, we’ve discovered that many want to run one largely identical to ours, and that making this easier is important to them. To this end, we’re developing a Specification to outline the key steps to achieve this goal, including splitting out our puppet modules, developing a separate infra system repo (what you need to run an infrastructure) and project stuff repo (the data we load into our infrastructure), and then finally looking toward a way to “productize” the infrastructure to make it as easily consumable by others as possible.

The afternoon finished up with discussions about vetting and signing of release artifacts, ideas for possible adjustment of the job definition language and how teams can effectively manage their current patch queues now that the auto-abandon feature has been turned off.

And with that – our sprint concluded! And given the rise in temperature on Friday and how worn out we all were from discussions and work, it was well-timed.

Huge thanks to Deutsche Telekom for hosting this event, being able to meet like this is really valuable to the work we’re all doing in the infrastructure and QA for OpenStack.

Full (read-only) notes from our time spent throughout the week available here: https://etherpad.openstack.org/p/r.OsxMMUDUOYJFKgkE

by pleia2 at July 19, 2014 11:07 AM

July 17, 2014

Jono Bacon

Community Leadership Summit and OSCON Plans

As many of you will know, I organize an event every year called the Community Leadership Summit. The event brings together community leaders, organizers and managers and the projects and organizations that are interested in growing and empowering a strong community.

The event kicks off this week on Thursday evening (17th July) with a pre-CLS gathering at the Doubletree Hotel at 7.30pm, and then we get started with the main event on Friday (18th July) and Saturday (19th July). For more details, see http://www.communityleadershipsummit.com/.

This year’s event is shaping up to be incredible. We have a fantastic list of registered attendees and I want to thank our sponsors, O’Reilly, Citrix, Oracle, Mozilla, Ubuntu, and LinuxFund.

Also, be sure to join the new Community Leadership Forum for discussing topics that relate to community management, as well as topics for discussion at the Community Leadership Summit event each year. The forum is designed to be a great place for sharing and learning tips and techniques, getting to know other community leaders, and having fun.

The forum is powered by Discourse, so it is a pleasure to use, and I want to thank discoursehosting.com for generously providing free hosting for us.

Speaking Events and Training at OSCON

I also have a busy OSCON schedule. Here is the summary:

Community Management Training

On Monday 21st July from 9am – 6pm in D135 I will be providing a full day of community management training at OSCON, covering topics such as:

  • The Core Mechanics Of Community
  • Planning Your Community
  • Building a Strategic Plan
  • Building Collaborative Workflow
  • Defining Community Governance
  • Marketing, Advocacy, Promotion, and Social Media
  • Measuring Your Community
  • Tracking and Measuring Community Management
  • Conflict Resolution

Office Hours

On Tues 22nd July at 10.40am in Expo Hall A I will be holding an Office Hours session in which you can come and ask me about:

  • Building collaborative workflow and tooling
  • Conflict resolution and managing complex personalities
  • Building buzz and excitement around your community
  • Incentivized prizes and innovation
  • Hiring community managers
  • Anything else!

Dealing With Disrespect

Finally, on Wed 23rd July at 2.30pm in E144 I will be giving a presentation called Dealing With Disrespect, based upon my free book of the same name about managing complex communications.

This is the summary of the talk:

In this new presentation from Jono Bacon, author of The Art of Community, founder of the Community Leadership Summit, and Ubuntu Community Manager, he discusses how to process, interpret, and manage rude, disrespectful, and non-constructive feedback in communities so the constructive criticism gets through but the hate doesn’t.

The presentation covers the three different categories of communications, how we evaluate and assess different attributes in each communication, the factors that influence all of our communications, and how to put in place a set of golden rules for handling feedback and putting it in perspective.

If you personally or your community has suffered rudeness, trolling, and disrespect, this presentation is designed to help.

I will also be available for discussions and meetings. Just drop me an email at jono@jonobacon.org if you want to meet.

I hope to see many of you in Portland this week!

by jono at July 17, 2014 12:44 AM

Akkana Peck

Time-lapse photography: a simple Arduino-driven camera intervalometer

[Arduino intervalometer] While testing my automated critter camera, I was getting lots of false positives caused by clouds gathering and growing and then evaporating away. False positives are annoying, but I discovered that it's fun watching the clouds grow and change in all those photos ... which got me thinking about time-lapse photography.

First, a disclaimer: it's easy and cheap to just buy an intervalometer. Search for "timer remote control" or "intervalometer" and you'll find plenty of options for around $20-30. In fact, I ordered one. But, hey, it's not here yet, and I'm impatient. And I've always wanted to try controlling a camera from an Arduino. This seemed like the perfect excuse.

Why an Arduino rather than a Raspberry Pi or BeagleBone? Just because it's simpler and cheaper, and this project doesn't need much compute power. But everything here should be applicable to any microcontroller.

My Canon Rebel Xsi has a fairly simple wired remote control plug: a standard 2.5mm stereo phone plug. I say "standard" as though you can just walk into Radio Shack and buy one, but in fact it turned out to be surprisingly difficult, even when I was in Silicon Valley, to find them. Fortunately, I had found some, several years ago, and had cables already wired up waiting for an experiment.

The outside connector ("sleeve") of the plug is ground. Connecting ground to the middle ("ring") conductor makes the camera focus, like pressing the shutter button halfway; connecting ground to the center ("tip") conductor makes it take a picture. I have a wired cable release that I use for astronomy and spent a few minutes with an ohmmeter verifying what did what, but if you don't happen to have a cable release and a multimeter there are plenty of Canon remote control pinout diagrams on the web.

Now we need a way for the controller to connect one pin of the remote to another on command. There are ways to simulate that with transistors -- my Arduino-controlled robotic shark project did that. However, the shark was about a $40 toy, while my DSLR cost quite a bit more than that. While I did find several people on the web saying they'd used transistors with a DSLR with no ill effects, I found a lot more who were nervous about trying it. I decided I was one of the nervous ones.

The alternative to transistors is to use something like a relay. In a relay, voltage applied across one pair of contacts -- the signal from the controller -- creates a magnetic field that closes a switch and joins another pair of contacts -- the wires going to the camera's remote.

But there's a problem with relays: that magnetic field, when it collapses, can send a pulse of current back up the wire to the controller, possibly damaging it.

There's another alternative, though. An opto-isolator works like a relay but without the magnetic pulse problem. Instead of a magnetic field, it uses an LED (internally, inside the chip where you can't see it) and a photo sensor. I bought some opto-isolators a while back and had been looking for an excuse to try one. Actually two: I needed one for the focus pin and one for the shutter pin.

How do you choose which opto-isolator to use out of the gazillion options available in a components catalog? I don't know, but when I bought a selection of them a few years ago, it included a 4N25, 4N26 and 4N27, which seem to be popular and well documented, as well as a few other models that are so unpopular I couldn't even find a datasheet for them. So I went with the 4N25.

Wiring an opto-isolator is easy. You do need a resistor across the inputs (presumably because it's an LED). 380Ω is apparently a good value for the 4N25, but it's not critical. I didn't have any 380Ω resistors, but I had a bunch of 330Ω, so that's what I used. The inputs (the signals from the Arduino) go between pins 1 and 2, with a resistor; the outputs (the wires to the camera remote plug) go between pins 4 and 5, as shown in the diagram on this Arduino and Opto-isolators discussion, except that I didn't use any pull-up resistor on the output.

Then you just need a simple Arduino program to drive the inputs. Apparently the camera wants to see a focus half-press before it gets the input to trigger the shutter, so I put in a slight delay there, and another delay while I "hold the shutter button down" before releasing both of them.

Here's some Arduino code to shoot a photo every ten seconds:

// Pins driving the opto-isolators wired to the camera's focus and shutter lines:
int focusPin = 6;
int shutterPin = 7;

int focusDelay = 50;           // ms to hold the half-press "focus" signal
int shutterOpen = 100;         // ms to hold the "shutter" signal
int betweenPictures = 10000;   // ms between shots: one photo every ten seconds

void setup()
{
    pinMode(focusPin, OUTPUT);
    pinMode(shutterPin, OUTPUT);
}

void snapPhoto()
{
    digitalWrite(focusPin, HIGH);    // half-press: give the camera time to focus
    delay(focusDelay);
    digitalWrite(shutterPin, HIGH);  // full press: take the picture
    delay(shutterOpen);
    digitalWrite(shutterPin, LOW);   // release both "buttons"
    digitalWrite(focusPin, LOW);
}

void loop()
{
    delay(betweenPictures);
    snapPhoto();
}

Naturally, since then we haven't had any dramatic clouds, and the lightning storms have all been late at night after I went to bed. (I don't want to leave my nice camera out unattended in a rainstorm.) But my intervalometer seemed to work fine in short tests. Eventually I'll make some actual time-lapse movies ... but that will be a separate article.

July 17, 2014 12:31 AM

July 12, 2014

Akkana Peck

Trapped our first pack rat

[White throated woodrat in a trap] One great thing about living in the country: the wildlife. I love watching animals and trying to photograph them.

One down side of living in the country: the wildlife.

Mice in the house! Pack rats in the shed and the crawlspace! We found out pretty quickly that we needed to learn about traps.

We looked at traps at the local hardware store. Dave assumed we'd get simple snap-traps, but I wanted to try other options first. I'd prefer to avoid killing if I don't have to, especially killing in what sounds like a painful way.

They only had one live mousetrap. It was a flimsy plastic thing, and we were both skeptical that it would work. We made a deal: we'd try two of them for a week or two, and when (not if) they didn't work, then we'd get some snap-traps.

We baited the traps with peanut butter and left them in the areas where we'd seen mice. On the second morning, one of the traps had been sprung, and sure enough, there was a mouse inside! Or at least a bit of fur, bunched up at the far inside end of the trap.

We drove it out to open country across the highway, away from houses. I opened the trap, and ... nothing. I looked in -- yep, there was still a furball in there. Had we somehow killed it, even in this seemingly humane trap?

I pointed the open end down and shook the trap. Nothing came out. I shook harder, looked again, shook some more. And suddenly the mouse burst out of the plastic box and went HOP-HOP-HOPping across the grass away from us, bounding like a tiny kangaroo over tufts of grass, leaving us both giggling madly. The entertainment alone was worth the price of the traps.

Since then we've seen no evidence of mice inside, and neither of the traps has been sprung again. So our upstairs and downstairs mice must have been the same mouse.

But meanwhile, we still had a pack rat problem (actually, probably, white-throated woodrats, the creature that's called a pack rat locally). Finding no traps for sale at the hardware store, we went to Craigslist, where we found a retired wildlife biologist just down the road selling three live Havahart rat traps. (They also had some raccoon-sized traps, but the only raccoon we've seen has stayed out in the yard.)

We bought the traps, adjusted one a bit where its trigger mechanism was bent, baited them with peanut butter and set them in likely locations. About four days later, we had our first captive little brown furball. Much smaller than some of the woodrats we've seen; probably just a youngster.

[White throated woodrat bounding away] We drove quite a bit farther than we had for the mouse. Woodrats can apparently range over a fairly wide area, and we didn't want to let it go near houses. We hiked a little way out on a trail, put the trap down and opened both doors. The woodrat looked up, walked to one open end of the trap, decided that looked too scary; walked to the other open end, decided that looked too scary too; and retreated back to the middle of the trap.

We had to tilt and shake the trap a bit, but eventually the woodrat gathered up its courage, chose a side, darted out and HOP-HOP-HOPped away into the bunchgrass, just like the mouse had.

No reference I've found says anything about woodrats hopping, but the mouse did that too. I guess hopping is just what you do when you're a rodent suddenly set free.

I was only able to snap one picture before it disappeared. It's not in focus, but at least I managed to catch it with both hind legs off the ground.

July 12, 2014 06:05 PM

July 08, 2014

Elizabeth Krumbach

OpenStack Infrastructure July 2014 Bug Day

Today the OpenStack Infrastructure team hosted our first bug day of the cycle.

The Killing Jar; the last moments of a Pararge aegeria

The steps we have for running a bug day can be a bit tedious, but it’s not hard. Here’s the rundown:

  1. I create our etherpad: cibugreview-july2014 (see etherpads from past bug days on the wiki at InfraTeam#Bugs)
  2. I run my simple infra_bugday.py script and populate the etherpad (a rough sketch of the kind of query involved follows this list).
  3. Grab the bug stats from Launchpad and copy them into the pad so we (hopefully) have inspiring statistics at the end of the day.
  4. Then comes the real work. I open up the old etherpad and go through all the bugs, copying over comments from the old etherpad where applicable, making my own comments about obvious updates I see, and updating my own bugs.
  5. Let the rest of the team dive in on the etherpad and bugs!
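
For the curious, here’s a rough sketch of the kind of Launchpad query such a script might make, using the launchpadlib library. The project name and status list here are illustrative assumptions, not the actual contents of infra_bugday.py:

from launchpadlib.launchpad import Launchpad

# Anonymous login is enough for read-only bug queries.
lp = Launchpad.login_anonymously('bugday-sketch', 'production')

project = lp.projects['openstack-ci']   # assumed project name, for illustration
for task in project.searchTasks(status=['New', 'Confirmed', 'Triaged', 'In Progress']):
    print task.status, '-', task.title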

Throughout the day we chat in #openstack-infra about bug statuses and whether we should continue pursuing certain strategies outlined in bugs, and we reach out to folks with outstanding bugs in the tracker that we’d like to see movement on but haven’t seen any in a while. Plus, we get to triage a whole pile of New bugs and close others we may have lost track of.

As we wrap up, here are the stats from today:

Bug day start total open bugs: 281

  • 64 New bugs
  • 41 In-progress bugs
  • 5 Critical bugs
  • 22 High importance bugs
  • 2 Incomplete bugs

Bug day end total open bugs: 231

  • 0 New bugs
  • 33 In-progress bugs
  • 4 Critical bugs
  • 16 High importance bugs
  • 10 Incomplete bugs

Thanks again everyone!

by pleia2 at July 08, 2014 10:43 PM

Akkana Peck

Big and contrasty mouse cursors

[Big mouse cursor from Comix theme] My new home office, with the big picture windows and the light streaming in, comes with one downside: it's harder to see my screen.

A sensible person would, no doubt, keep the shades drawn when working, or move the office to a nice dim interior room without any windows. But I am not sensible and I love my view of the mountains, the gorge and the birds at the feeders. So accommodations must be made.

The biggest problem is finding the mouse cursor. When I first sit down at my machine, I move my mouse wildly around looking for any motion on the screen. But the default cursors, in X and in most windows, are little subtle black things. They don't show up at all. Sometimes it takes half a minute to figure out where the mouse pointer is.

(This wasn't helped by a recent bug in Debian Sid where the USB mouse would disappear entirely, and need to be unplugged from USB and plugged back in before the computer would see it. I never did find a solution to that, and for now I've downgraded from Sid to Debian testing to make my mouse work. I hope they fix the bug in Sid eventually, rather than porting whatever "improvement" caused the bug to more stable versions. Dealing with that bug trained me so that when I can't see the mouse cursor, I always wonder whether I'm just not seeing it, or whether it really isn't there because the kernel or X has lost track of the mouse again.)

What I really wanted was bigger mouse cursor icons in bright colors that are visible against any background. This is possible, but it isn't documented at all. I did manage to get much better cursors, though different windows use different systems.

So I wrote up what I learned. It ended up too long for a blog post, so I put it on a separate page: X Cursor Themes for big and contrasty mouse cursors.

It turned out to be fairly complicated. You can replace the existing cursor font, or install new cursor "themes" that many (but not all) apps will honor. You can change theme name and size (if you choose a scalable theme), and some apps will honor that. You have to specify theme and size separately for GTK apps versus other apps. I don't know what KDE/Qt apps do.
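
To give a flavor of the two halves (the theme name and sizes here are illustrative guesses, not necessarily my exact settings): apps that honor Xcursor resources read them from, e.g., ~/.Xresources:

Xcursor.theme: ComixCursors-Opaque-Blue
Xcursor.size: 48

while GTK3 apps read their own settings, e.g. in ~/.config/gtk-3.0/settings.ini:

[Settings]
gtk-cursor-theme-name = ComixCursors-Opaque-Blue
gtk-cursor-theme-size = 48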

I still have a lot of unanswered questions. In particular, I was unable to specify a themed cursor for xterm windows, and for non text areas in emacs and firefox, and I'd love to know how to do that.

But at least for now, I have a great big contrasty blue mouse cursor that I can easily see, even when I have the shades on the big windows open and the light streaming in.

July 08, 2014 04:25 PM

July 04, 2014

Elizabeth Krumbach

Google I/O 2014

Last week I had the opportunity to attend Google I/O. As someone who has only done hardware-focused development as a hobby, I’d never been to a vendor-specific developer conference. Google’s was a natural choice for me: I’m a fan of Android (yay Linux!) and their events always have cool goodies for attendees, plus it was 2 blocks from home. My friend Danita attended with me, so it was also nice to not go alone.

We registered on Tuesday, before the conference. Wednesday we headed over at what we thought was an early hour, but after picking up breakfast and getting in line for the 9:00 keynote at 8:25 we found ourselves in a line that had wrapped around the whole block of Moscone West + Intercontinental! The keynote began while we were very much still in line, and we didn’t get into the main room until around 9:30. The line was still wrapped around the building when we got in, so I can’t imagine how late the other folks got in; many of them must have ended up in some kind of overflow room, since we got some of the last few seats in the main room.

Once we got in, the keynote itself was fun. It covered Android design; Android Wear, including a couple of watches that we later learned we’d get to take home (woo! one we could pick up the next day, the other later this year when it’s released); and Android Auto, which has partnerships with several vehicle manufacturers whose cars will start coming out later this year (they didn’t give us one of those, though). They also talked about Android TV, which was nice to hear about, since it always seemed a bit strange that they had a separate division for the OS they run on tablets/phones, TVs, and Google Fiber. The keynote wrapped up with Google’s cloud offerings.

By the time the keynote finished at 11:40 the first session was pretty much over, so I grabbed some lunch and then made my way over to Who cares about new domain names? We do. If you want happy users, then you should too. In this session they announced their initiative to sell domain names as a registrar and then, most interestingly, dove into how the new naming scheme will impact web and application development when it comes to URL and email validation. Beyond just parsing more domains, there are now considerations for UTF-8 characters in some new domain names and how those work with DNS. I particularly liked that they showed off some of the problems Google itself was having with applications like GMail when it comes to these new domains, and how they fixed them.
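
As a hypothetical illustration of that kind of breakage (my own toy example, not one from the session): a validator that hard-codes the old assumption that TLDs are two to four letters will reject perfectly valid new domains:

import re

# Naive pattern that assumes TLDs are 2-4 letters (an old, common assumption).
old_style = re.compile(r'^[\w.+-]+@[\w-]+\.[a-z]{2,4}$')

print bool(old_style.match('user@example.com'))          # True
print bool(old_style.match('user@example.photography'))  # False, but it's valid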

The next session I went to was Making sense of online course data. I’m a big fan of MOOCs, so this was of particular interest to me. Peter Norvig and Julia Wilkowski discussed some of Google’s initiatives in developing MOOCs and what they’ve learned from their students after each one. It was refreshing to hear that they were catering to the educational needs of the students, going as far as completely breaking down old course models: offering search tools for classes when students only want to complete a portion of one, making all materials and the schedule (including quiz dates and deadlines) available from the beginning, and largely giving students the ability to create their own lesson plans based on what they want to learn.

We also found time in the day to check out vendor booths and get our pictures taken with giant Androids!

The last session I attended the first day was HTTPS Everywhere. As a technical person, I’m very careful about my online activities and simply avoid authentication for non-HTTPS sites when I’m not on a trusted network. The main argument that kicked off this talk was that most people don’t do that, plus the cumulative effect of having all your HTTP-based traffic sniffed can be a major privacy violation even if there are no authentication details leaked. Fair enough. The rest of the talk covered tools and tips for migrating a site to be HTTPS-only, including how to do things properly so that your search rankings don’t plummet during the switch. Some of the key resources I gleaned from this talk include:

The first-day after party was held by the conference in Yerba Buena park, and I got plenty of rest before Thursday morning, when we got our chance to check out Android Auto in one of the several cars they brought up to the 3rd floor for demos! As someone who has almost always driven older cars, I worry about how quickly the Android Auto technology will become dated, but it does seem better than many of the dead-end, fully built-in technologies shipping with cars today.

We also got to pick up our Android watch! After finally tracking down the developer info and charging it for a bit, I was able to get mine going. It’s still pretty buggy, but it is nice to get alerts on my wrist without having to pull my phone out of my purse.

Session-wise, we started out with the packed Cardboard: VR for Android session. Google Cardboard sure seems like a joke, but it’s actually a pretty cool way to use your phone for a cheap Virtual Reality environment. The session covered some of the history of the project (and of VR in general), the current apps available to try out for Cardboard and some ideas for developers.

From there I went to Transforming democracy and disasters with APIs. After seeing a presentation on Poplus when I was in Croatia, I was interested to see what Google was doing in the space of civic hacking, and was pleasantly surprised! Many of the organizations in this space (Code for America, Poplus, Google’s initiatives) actually make efforts to work together. Some of the things Google has been focusing on include getting voting data to people (including who their representative is and where polling places are), accessible via the Google Civic Information API. They also talked some about the Common Alerting Protocol (CAP), an XML standard that Google is trying to encourage adoption of, so that their and other services can more easily consume alerts worldwide for tools that use these feeds to alert populations. From this, they talked about various other sites, including:

And many more 3rd party APIs documented in this Google doc.

After lunch I went to the very crowded Nest for developers session. Even after watching this I am somewhat skeptical about how much more home automation you can get from a system that started with a thermostat and still focuses on environmental control. On the flip side, I’ve actually seen Nest “in the wild” so perhaps it gets closer to home automation than most other technologies have in this space.

Continuing my interest in sessions about civic good, I then attended Maps for good: Saving trees and saving lives with petapixel-scale computing. Presenter Rebecca Moore started off with a great story about how she stopped a very bad logging plan in her area by leveraging maps and other technology tools to give presentations around her community (see here for more). Out of that work, and further 20% time at Google, came the birth of the initiative she currently works on full time, Google Earth Outreach.

Google Earth Outreach gives nonprofits and public benefit organizations the knowledge and resources they need to visualize their cause and tell their story in Google Earth & Maps to hundreds of millions of people.

Pretty cool stuff. She spoke more in depth about some really map geek stuff, including collection and inclusion of historical and current Landsat data in Google Earth, as well as the tools now available for organizations looking to process map data now and over time for everything from disaster relief to tracking loss of habitat.

The last slot of the day was a contentious one: so many things I wanted to see! Fortunately it’s all recorded, so I can go back and see the ones I missed. I decided to go to Strengthening communities with technology: A discussion with Bay Area Impact Challenge finalists. This session featured three bay-area organizations who have been doing good in the community:

  • One Degree – “The easiest way to find, manage, and share nonprofit services for you and your family.”
  • Hack the Hood – “Hack the Hood provides technical training in high in-demand multimedia and tech skills to youth who will then apply their learning through real-world consulting projects with locally-owned businesses and non-profits.”
  • Beyond 12 – “Ensuring all students have the opportunity to succeed in college and beyond.”

Google brought these organizations together as finalists in their Bay Area Impact Challenge, through which they all received large grants. There were some interesting observations from all of them. On the technical side, I learned that most low-income people in the bay area have a smartphone, whereas only half have a computer and internet at home. There was also higher access to text messaging than to email, which was an important consideration when some organizations launched their online services: it’s better to rely on text for registration than email. They also all work with existing organizations and stay deeply involved with the communities they serve, so they make sure they are meeting those communities’ needs. That may seem obvious, but many technical initiatives for under-served communities fail because they are solving the wrong problem, offering the wrong solution, or aren’t very accessible.

And with that, Google I/O came to a close!

In all, it was a worthwhile experience, but as someone who is not doing application development as my core job function it was more “fun and interesting” than truly valuable (particularly with the $900 price tag). I sure do enjoy my Android watch though! And am looking forward to the round face version coming out in a few months (which we’ll also get one of!).

More photos from the event here: https://www.flickr.com/photos/pleia2/sets/72157645456636793/

by pleia2 at July 04, 2014 03:18 AM

Akkana Peck

Detecting wildlife with a PIR sensor (or not)

[PIR sensor] In my last crittercam installment, the NoIR night-vision crittercam, I was having trouble with false positives, where the camera would trigger repeatedly after dawn as leaves moved in the wind and the morning shadows marched across the camera's field of view. I wondered if a passive infra-red (PIR) sensor would be the answer.

I got one, and the answer is: no. It was very easy to hook up, and didn't cost much, so it was a worthwhile experiment; but it gets nearly as many false positives as camera-based motion detection. It isn't as sensitive to wind, but as the ground and the foliage heat up at dawn, the moving shadows are just as much a problem as they were with image-based motion detection.

Still, I might be able to combine the two, so I figure it's worth writing up.

Reading inputs from the HC-SR501 PIR sensor

[PIR sensor pins]

The PIR sensor I chose was the common HC-SR501 module. It has three pins -- Vcc, ground, and signal -- and two potentiometer adjustments.

It's easy to hook up to a Raspberry Pi because it can take 5 volts in on its Vcc pin, but its signal is 3.3v (a digital signal -- either motion is detected or it isn't), so you don't have to fool with voltage dividers or other means to get a 5v signal down to the 3v the Pi can handle. I used GPIO pin 7 for signal, because it's right on the corner of the Pi's GPIO header and easy to find.

There are two ways to track a digital signal like this. Either you can poll the pin in an infinite loop:

import time
import RPi.GPIO as GPIO

pir_pin = 7      # GPIO pin (BCM numbering) wired to the PIR's signal line
sleeptime = 1    # seconds between polls

GPIO.setmode(GPIO.BCM)
GPIO.setup(pir_pin, GPIO.IN)

# Poll forever: the PIR holds the pin high while it sees motion.
while True:
    if GPIO.input(pir_pin):
        print "Motion detected!"
    time.sleep(sleeptime)

or you can use interrupts: tell the Pi to call a function whenever it sees a low-to-high transition on a pin:

import time
import RPi.GPIO as GPIO

pir_pin = 7      # GPIO pin (BCM numbering) wired to the PIR's signal line
sleeptime = 300  # seconds; the main loop has nothing to do but sleep

def motion_detected(pir_pin):
    # Called from RPi.GPIO's event thread on each low-to-high transition.
    print "Motion Detected!"

GPIO.setmode(GPIO.BCM)
GPIO.setup(pir_pin, GPIO.IN)

GPIO.add_event_detect(pir_pin, GPIO.RISING, callback=motion_detected)

while True:
    print "Sleeping for %d sec" % sleeptime
    time.sleep(sleeptime)

Obviously the second method is more efficient. But I already had a loop set up checking the camera output and comparing it against previous output, so I tried that method first, adding support to my motion_detect.py script. I set up the camera pointing at the wall, and, as root, ran the script telling it to use a PIR sensor on pin 7, and the local and remote directories to store photos:

# python motion_detect.py -p 7 /tmp ~pi/shared/snapshots/

Whenever I walked in front of the camera, it triggered and took a photo. That was easy!

Reliability problems with add_event_detect

So easy that I decided to switch to the more efficient interrupt-driven model. Writing the code was easy, but I found it triggered more often: if I walked in front of the camera (and stayed the requisite 7 seconds or so that it takes raspistill to get around to taking a photo), when I walked back to my desk, I would find two photos, one showing my feet and the other showing nothing. It seemed like it was triggering when I got there, but also when I left the scene.

A bit of web searching indicates this is fairly common: that with RPi.GPIO a lot of people see triggers on both rising and falling edges -- e.g. when the PIR sensor starts seeing motion, and when it stops seeing motion and goes back to its neutral state -- when they've asked for just GPIO.RISING. Reports for this go back to 2011.

On the other hand, it's also possible that instead of seeing a GPIO falling edge, what was happening was that I was getting multiple calls to my function while I was standing there, even though the RPi hadn't finished processing the first image yet. To guard against that, I put a line at the beginning of my callback function that disabled further callbacks, then I re-enabled them at the end of the function after the Pi had finished copying the photo to the remote filesystem. That reduced the false triggers, but didn't eliminate them entirely.
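
That guard looks roughly like this (my reconstruction, not the actual script; take_and_copy_photo is a hypothetical helper):

def motion_detected(pir_pin):
    # Stop listening for events while we deal with this trigger ...
    GPIO.remove_event_detect(pir_pin)
    take_and_copy_photo()   # hypothetical helper: shoot, then copy the file over
    # ... and start listening again once the photo has been copied.
    GPIO.add_event_detect(pir_pin, GPIO.RISING, callback=motion_detected)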

Oh, well. The sun was getting low by this point, so I stopped fiddling with the code and put the camera out in the yard with a pile of birdseed and peanut suet nuggets in front of it. I powered it on, sshed to the Pi and ran the motion_detect script, then came back inside and ran a tail -f on the output file.

I had dinner and worked on other things, occasionally checking the output -- nothing! Finally I sshed to the Pi and ran ps aux and discovered the script was no longer running.

I started it again, this time keeping my connection to the Pi active so I could see when the script died. Then I went outside to check the hardware. Most of the peanut suet nuggets were gone -- animals had definitely been by. I waved my hands in front of the camera a few times to make sure it got some triggers.

Came back inside -- to discover that Python had gotten a segmentation fault. It turns out that nifty GPIO.add_event_detect() code isn't all that reliable, and can cause Python to crash and dump core. I ran it a few more times and sure enough, it crashed pretty quickly every time. Apparently GPIO.add_event_detect needs a bit more debugging, and isn't safe to use in a program that has to run unattended.

Back to polling

Bummer! Fortunately, I had saved the polling version of my program, so I hastily copied that back to the Pi and started things up again. I triggered it a few times with my hand, and everything worked fine. In fact, it ran all night and through the morning, with no problems except the excessive number of false positives, already mentioned.

[piñon mouse] False positives weren't a problem at all during the night. I'm fairly sure the problem happens when the sun starts hitting the ground. Then there's a hot spot that marches along the ground, changing position in a way that's all too obvious to the infra-red sensor.

I may try cross-checking between the PIR sensor and image changes from the camera. But I'm not optimistic about that working: they both get the most false positives at the same times, at dawn and dusk when the shadow angle is changing rapidly. I suspect I'll have to find a smarter solution, doing some image processing on the images as well as cross-checking with the PIR sensor.

I've been uploading photos from my various tests here: Tests of the Raspberry Pi Night Vision Crittercam. And as always, the code is on github: scripts/motioncam with some basic documentation on my site: motion-detect.py: a motion sensitive camera for Raspberry Pi or other Linux machines. (I can't use github for the documentation because I can't seem to find a way to get github to display html as anything other than source code.)

July 04, 2014 02:13 AM

July 01, 2014

Jono Bacon

Getting Started in Community Management

If there is one question I get more than most, it is the proverbial:

How do I get started in community management?

While there are many tactical things to learn about building strong communities (which I cover in depth in The Art of Community), the main guidance I am keen to share is the importance of leadership.

Last night, while working in my hotel room, I bored myself trying to write up my thoughts in a blog post, so I just fired up my webcam instead:

Can’t see it? See it here

If you want to get involved in community management, be sure to join the awesome community that is forming on the Community Leadership Forum.

by jono at July 01, 2014 04:31 PM

June 28, 2014

Elizabeth Krumbach

Symphony, giraffes and pinnipeds

Prior to my trips to Texas and Croatia, MJ and I were able to make it over to Sherith Israel to enjoy the wonderful acoustics at a show by the Musicians of the San Francisco Symphony, a concert to benefit the SF-Marin Food Bank. It was a lovely concert, and a great way to round out a busy weekend before my trips.


During intermission

Last weekend our friend Danita came into town to visit for a week. Saturday began with a leisurely brunch at the Beach Chalet, one of my favorites. From there we went to the San Francisco Zoo to catch up with our new little friend, the baby patas monkey, who has grown even in the couple of weeks since my last visit!

We visited with the giraffes, as is appropriate since it was World Giraffe Day. I also finally got to see the peccary babies, but we were too late to make it into the lion house by 4pm to visit the two-toed sloth, whom I’ve never met. Next time.

On Sunday we went the amusement park route and made our way up to Six Flags Discovery Kingdom. Given my health lately, I wasn’t keen on going on any rides, but I learned a while back that this park has walruses (the only place in the bay area that does), along with lots of other animals, so I was pretty excited.

The walruses didn’t disappoint. One of the larger of the three seemed thrilled to delight the humans who were visiting their tank:

And the rest swam around doing walrus things. It was awesome to see them; I’m a pinniped fan in general, but I don’t get to see walruses all that often.

I also got to visit the seals and sea lions, and got to feed a mama sea lion; the baby was a bit too shy.

Continuing on our giraffe trend, we also got to visit the giraffes there at the park as they celebrated a whole weekend of World Giraffe Day!

More photos from Six Flags here (I even got one of a roller coaster!): https://www.flickr.com/photos/pleia2/sets/72157645359472733/

Then I had a busy week. I attended Google I/O for the first time, which I’ll write about later. I also had an Upper Endoscopic Ultrasound (EUS) done to poke around and see what was going on with my gallbladder. The worst part of the procedure was the sore throat and mild neck bruising that followed, which hasn’t made me feel great when coupled with the cough I’m recovering from. The doctor looking at the initial results mentioned sludge but didn’t think it was cause for concern; upon follow-up with the surgeon I’ve been working with, however, I learned that the amount of sludge, combined with my symptoms and family history, made him think the right course of action would be gallbladder removal. I’m scheduled to have it removed on July 24th. I’ve never had surgery aside from wisdom teeth removal, so I’m pretty apprehensive about the procedure, but thankful that they finally found something, so there is hope that the abdominal pain I’ve been having since April will finally go away.

by pleia2 at June 28, 2014 07:18 PM

June 27, 2014

Jono Bacon

Exponential Community

As some of you will know, recently I moved from Canonical to XPRIZE to work as Sr. Dir. Community. My charter here at XPRIZE is to infuse the organization and the incentive prizes it runs with community engagement.

For those unfamiliar with XPRIZE, it was created by Peter H. Diamandis to solve the grand challenges of our time by creating incentive prize competitions. The first XPRIZE was the $10 million Ansari XPRIZE to create a spacecraft that could go into space and back twice in two weeks while carrying three crew. It was won by Scaled Composites with their SpaceShipOne craft, and the technology ultimately led to the birth of the commercial space-flight industry. Other prizes have focused on ocean health, more efficient vehicles, portable health diagnosis, and more.

The incentivized prize model is powerful: it is accessible to anyone with the drive to compete; it results in hundreds of teams engaging in extensive R&D; only the winner gets paid; and the competitive nature generally carries over into market competition, which then drives even more affordable and accessible technology to be built.

The XPRIZE model is part of Peter Diamandis’s vision of exponential technology. In a nutshell, Peter has identified that technology is doubling every year, across a diverse range of areas (not just computing), and that technology can ultimately solve our grand challenges such as scarcity, clean water, illiteracy, space exploration, clean energy, and more. If you are interested in finding out more, read Abundance; it really is an excellent and inspirational read.

When I was first introduced to XPRIZE the piece that inspired me about the model is that collaboratively we can solve grand challenges that we couldn’t solve alone. Regular readers of my work will know that this is precisely the same attribute in communities that I find so powerful.

As such, connecting the dots between incentivized prizes that solve grand challenges and effective, empowering community management has the potential for a profound impact on the world.


The XPRIZE Lobby.

My introduction to XPRIZE helped me realize that the exponential growth Peter sees in technology is also a key ingredient in how communities work. While not as crisply predictable (a doubling of community size does not necessarily mean a doubling of output), we have seen time and time again that as communities build momentum, their overall output (irrespective of their specific size) can grow exponentially.

An example of this is Wikipedia. From the inception of the site, the tremendous growth of the community resulted in huge growth in not just the site, but the value the site brought to users (as value is often defined by completeness). Another example is Linux. When the Linux kernel was only authored by Linus Torvalds, it had limited growth. The huge community that formed there has resulted in a technology that has literally changed how technology infrastructure in the world runs. We also have political examples such as the Arab Spring in which social media helped empower large swathes of citizens to challenge their leaders. Again, as the community grew, so did the potency of their goals.

XPRIZE plays a valuable role because exponential growth in technology does not necessarily mean that the technology will be built. Traditionally, only governments were solving the grand challenges of our time because companies found it difficult to understand or define a market. XPRIZE competitions put a solid stake in the ground that highlights the problem, legitimizes the development of the technology with a clear goal and prize purse, and empowers fair participation.

The raw ingredients (smart people with drive and passion) for solving these challenges are already out there, and XPRIZE works to mobilize them. In a similar fashion, the raw ingredients for creating globally impactful communities are there; we just need to activate them.

So what will I be doing at XPRIZE to build community engagement?

Well, I have only been here a few weeks, so my priorities right now are some near-term goals and getting to know the team and culture; as such, I don’t have anything concrete I can share right now. I assure you, though: I will be talking more about my work in the coming months.

You can stay connected to this work via this blog, my Twitter account, and my Google+ account. Also, be sure to follow XPRIZE to stay up to date with the general work of the organization.

by jono at June 27, 2014 04:42 PM

June 26, 2014

Akkana Peck

A Raspberry Pi Night Vision Camera

[Mouse caught on IR camera]

When I built my Raspberry Pi motion camera (http://shallowsky.com/blog/hardware/raspberry-pi-motion-camera.html, and part 2), I always had the NoIR camera in the back of my mind. The NoIR is a version of the Pi camera module with the infra-red blocking filter removed, so you can shoot IR photos at night without disturbing nocturnal wildlife (or alerting nocturnal burglars, if that's your target).

After I got the daylight version of the camera working, I ordered a NoIR camera module and plugged it in to my RPi. I snapped some daylight photos with raspistill and verified that it was connected and working; then I waited for nightfall.

In the dark, I set up the camera and put my cup of hot chocolate in front of it. Nothing. I hadn't realized that although CCD cameras are sensitive in the near IR, the wavelengths only slightly longer than visible light, they aren't sensitive anywhere near the IR wavelengths that hot objects emit. For that, you need a special thermal camera. For a near-IR CCD camera like the Pi NoIR, you need an IR light source.

Knowing nothing about IR light sources, I did a search and came up with something called an "Infrared IR 12 Led Illuminator Board Plate for CCTV Security CCD Camera" for about $5. It seemed similar to the light sources used on a few pages I'd found about home-made night vision cameras, so I ordered it. Then I waited, because I stupidly didn't notice until a week and a half later that it was coming from China and wouldn't arrive for three weeks. Always check the shipping time when ordering hardware!

When it finally arrived, it had a tiny 2-pin connector that I couldn't match locally. In the end I bought a package of female-female SchmartBoard jumpers at Radio Shack which were small enough to make decent contact on the light's tiny-gauge power and ground pins. I soldered up a connector that would let me use a universal power supply, taking a guess that it wanted 12 volts (most of the cheap LED rings for CCD cameras seem to be 12V, though this one came with no documentation at all). I was ready to test.

Testing the IR light

[IR light and NoIR Pi camera]

One problem with buying a cheap IR light with no documentation: how do you tell whether your power supply is working, when the light is completely invisible?

The only way to find out was to check on the Pi. I didn't want to have to run back and forth between the dark room where the camera was set up and the desktop where I was viewing raspistill images. So I started a video stream on the RPi:

$ raspivid -o - -t 9999999 -w 800 -h 600 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264

Then, on the desktop, I ran vlc and opened the network stream:

rtsp://pi:8554/

(I have a "pi" entry in /etc/hosts, but using an IP address also works.)

Now I could fiddle with hardware in the dark room while looking through the doorway at the video output on my monitor.

It took some fiddling to get a good connection on that tiny connector ... but eventually I got a black-and-white view of my darkened room, just as I'd expect under IR illumination. I poked some holes in the milk carton and used twist-ties to secure the light source next to the NoIR camera.

Lights, camera, action

Next problem: mute all the blinkenlights, so my camera wouldn't look like a Christmas tree and scare off the nocturnal critters.

The Pi itself has a relatively dim red run light, and it's inside the milk carton so I wasn't too worried about it. But the Pi camera has quite a bright red light that goes on whenever the camera is being used. Even through the thick milk carton bottom, it was glaring and obvious. Fortunately, you can disable the Pi camera light: edit /boot/config.txt and add this line:

disable_camera_led=1

My USB wi-fi dongle has a blue light that flickers as it gets traffic. Not super bright, but attention-grabbing. I addressed that issue with a triple thickness of duct tape.

The IR LEDs -- remember those invisible, impossible-to-test LEDs? Well, it turns out that in darkness, they emit a faint but still easily visible glow. Obviously there's nothing I can do about that -- I can't cover the camera's only light source! But it's quite dim, so with any luck it's not spooking away too many animals.

Results, and problems

For most of my daytime testing I'd used a threshold of 30 -- meaning a pixel was considered to have changed if its value differed by more than 30 from the previous photo. That didn't work at all in IR: changes are much more subtle, since we're seeing essentially a black-and-white image, and I had to divide the threshold by three, to 10 or 11, if I wanted the camera to trigger at all.
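
To make the thresholding concrete, here's a minimal sketch of the idea using PIL. This is my own illustration, not the actual motion_detect.py code, and MIN_CHANGED is a made-up trigger count:

from PIL import Image, ImageChops

THRESHOLD = 30      # the daylight value; more like 10 or 11 for IR
MIN_CHANGED = 1000  # hypothetical: how many changed pixels count as motion

def motion_between(file1, file2):
    # Per-pixel absolute difference between two grayscale frames.
    diff = ImageChops.difference(Image.open(file1).convert('L'),
                                 Image.open(file2).convert('L'))
    # Histogram bins above THRESHOLD count the pixels that changed
    # by more than the threshold; trigger if there are enough of them.
    nchanged = sum(diff.histogram()[THRESHOLD + 1:])
    return nchanged > MIN_CHANGED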

With that change, I did capture some nocturnal visitors, and some early morning ones too. Note the funny colors on the daylight shots: that's why cameras generally have IR-blocking filters if they're not specifically intended for night shots.

[mouse] [rabbit] [rock squirrel] [house finch]

Here are more photos, and larger versions of those: Images from my night-vision camera tests.

But I'm not happy with the setup. For one thing, it has far too many false positives. Maybe one out of ten or fifteen images actually has an animal in it; the rest just triggered because the wind made the leaves blow, or because a shadow moved or the color of the light changed. A simple count of differing pixels is clearly not enough for this task.

Of course, the software could be smarter about things: it could try to identify large blobs that had changed, rather than small changes (blowing leaves) all over the image. I already know SimpleCV runs fine on the Raspberry Pi, and I could try using it to do object detection.
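
A rough, untested sketch of that idea with SimpleCV (the threshold and minimum blob size here are guesses that would need tuning):

from SimpleCV import Image

prev = Image("prev.jpg")
cur = Image("cur.jpg")

# Difference the two frames, then look for large connected regions of
# change; scattered small changes like blowing leaves won't form big blobs.
blobs = (cur - prev).findBlobs(threshval=40, minsize=500)
if blobs:
    print "Found %d large changed region(s) - maybe an animal" % len(blobs)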

But there's another problem with detection purely through camera images: the Pi is incredibly slow to capture an image. It takes around 20 seconds per cycle; some of that is waiting for the network but I think most of it is the Pi talking to the camera. With quick-moving animals, the animal may well be gone by the time the system has noticed a change. I've caught several images of animal tails disappearing out of the frame, including a quail who visited yesterday morning. Adding smarts like SimpleCV will only make that problem worse.

So I'm going to try another solution: hooking up an infra-red motion detector. I'm already working on setting up tests for that, and should have a report soon. Meanwhile, pure image-based motion detection has been an interesting experiment.

June 26, 2014 07:31 PM

June 25, 2014

Jono Bacon

Community Leadership Forum

A little while ago I set up the Community Leadership Forum. The forum is designed to be a place where community leaders and managers can learn and share experience about how to grow fun, productive, and empowered communities.

The forum is open and accessible to all communities – technology, social, environmental, entertainment, or anything else. It is intended to be diverse and pull together a great set of people.

It is also designed to be another tool (in addition to the Community Leadership Summit) to further the profession, art, and science of building great communities.

We are seeing some wonderful growth on the forum, and because the forum is powered by Discourse it is a simple pleasure to use.

I am also encouraging organizations who are looking for community managers to share their job descriptions on the forum. This forum will be a strong place to find the best talent in community management and for the talent to find great job opportunities.

I hope to see you there!

Join the Community Leadership Forum

by jono at June 25, 2014 04:29 PM

June 24, 2014

Jono Bacon

The Return of my Weekly Q&A

As many of you will know, I used to do a weekly Q&A on Ubuntu On Air for the Ubuntu community where anyone could come and ask any question about anything.

I am pleased to announce my weekly Q&A is coming back but in a new time and place. Now it will be every Thursday at 6pm UTC (6pm UK, 7pm Europe, 11am Pacific, 2pm Eastern), starting this week.

You can join each weekly session at http://www.jonobacon.org/live/

You are welcome to ask questions about:

  • Community management, leadership, and how to build fun and productive communities.
  • XPRIZE, our work there, and how we solve the world’s grand challenges.
  • My take on Ubuntu from the perspective of an independent community member.
  • My views on technology, Open Source, news, politics, or anything else.

As ever, all questions are welcome! I hope to see you there!

by jono at June 24, 2014 05:09 AM

June 23, 2014

Eric Hammond

EBS-SSD Boot AMIs For Ubuntu On Amazon EC2

With Amazon’s announcement that SSD is now available for EBS volumes, they have also declared this the recommended EBS volume type.

The good folks at Canonical are now building Ubuntu AMIs with EBS-SSD boot volumes. In my preliminary tests, running EBS-SSD boot AMIs instead of EBS magnetic boot AMIs speeds up the instance boot time by approximately… a lot.

Canonical now publishes a wide variety of Ubuntu AMIs including:

  • 64-bit, 32-bit
  • EBS-SSD, EBS-SSD pIOPS, EBS-magnetic, instance-store
  • PV, HVM
  • in every EC2 region
  • for every active Ubuntu release

Matrix that out for reasonable combinations and you get 492 AMIs actively supported today.

On the Alestic.com blog, I provide a handy reference to the much smaller set of Ubuntu AMIs that match my generally recommended configurations for most popular uses, specifically:

I list AMIs for both PV and HVM, because different virtualization technologies are required for different EC2 instance types.

Where SSD is not available, I list the magnetic EBS boot AMI (e.g., Ubuntu 10.04 Lucid).

To access this list of recommended AMIs, select an EC2 region in the pulldown menu towards the top right of any page on Alestic.com.

If you like using the AWS console to launch instances, click on the orange launch button to the right of the AMI id.

The AMI ids are automatically updated using an API provided by Canonical, so you always get the freshest released images.
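
If you'd rather query for current AMI ids yourself, one approach is to ask EC2 for the images Canonical publishes. Here's a rough sketch with the aws CLI; the owner id is Canonical's publishing account, and the name pattern is an assumption based on their usual naming scheme:

# Hypothetical lookup: current Trusty EBS-SSD HVM images in us-east-1.
aws ec2 describe-images \
    --region us-east-1 \
    --owners 099720109477 \
    --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*" \
    --query 'Images[*].[Name,ImageId]' \
    --output text | sort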

Original article: http://alestic.com/2014/06/ec2-ebs-ssd-ami

by Eric Hammond at June 23, 2014 06:12 AM

June 21, 2014

Akkana Peck

Mirror a website using lftp

I'm helping an organization with some website work. But I'm not the only one working on the website, and there's no version control. I wanted an easy way to make sure all my files were up-to-date before I start to work on one ... a way to mirror the website, or at least specific directories, to my local disk.

Normally I use rsync -av over ssh to mirror directories, but this website is on a server that only offers ftp access. I've been using ncftp to copy files up one by one, but although ncftp's manual says it has a mirror mode and I found a few web references to that, I couldn't find anything telling me how to activate it.
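
(For comparison, the rsync version would be just a one-liner, with hypothetical host and paths:

rsync -av user@example.com:htdocs/thisdir/ ~/web/webmirror/thisdir/

but that's not an option without ssh access.)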

Making matters worse, there are some large files that I don't need to mirror. The first time I tried to use get * in ncftp to get one directory, it spent 15 minutes trying to download a huge powerpoint file, then stalled and lost the connection. There are some big .doc and .docx files, too. And ncftp doesn't seem to have a way to exclude specific files.

Enter lftp. It has a mirror mode (with documentation, even!) which includes a -X option to exclude files matching specified patterns.

lftp includes a -e option to pass commands -- like "mirror" -- to it on the command line. But the documentation doesn't say whether you can use more than one command at a time. So it seemed safer to start up an lftp session and pass a series of commands to it.
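
(For what it's worth, lftp does seem to accept a semicolon-separated list of commands via -e, something like this untested sketch with placeholder credentials:

lftp -u 'user,password' -e "mirror --only-newer htdocs/thisdir $HOME/web/webmirror/thisdir; bye" ftp.example.com

but the heredoc approach below keeps the quoting saner.)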

And that works nicely. Just set up the list of directories you want to mirror, and you can write a nice shell function to put in your .zshrc or .bashrc:

sitemirror() {
  local commands=""
  for dir in thisdir thatdir theotherdir
  do
    # Build one mirror command per directory, skipping the big binaries:
    commands="$commands
mirror --only-newer -vvv -X '*.ppt' -X '*.doc*' -X '*.pdf' htdocs/$dir $HOME/web/webmirror/$dir"
  done

  echo "Commands to be run:"
  echo "$commands"
  echo

  lftp <<EOF
open -u 'user,password' ftp.example.com
$commands
bye
EOF
}

Super easy -- all I do is type sitemirror and wait a little. Now I don't have any excuse for not being up to date.

June 21, 2014 06:39 PM

Elizabeth Krumbach

Tourist in Zagreb, Croatia

In addition to attending and presenting at DORS/CLUC, I had the opportunity to see some sights while I was in Zagreb, Croatia this past week.


View from my room at Panorama Zagreb

My tourist days began in the late afternoon on Monday when my local friend Jasna could pull herself away from conference things. Huge thanks to her for doing this, I know the exhaustion and pressure of working with a conference, and I’m really grateful that she was willing to take the time in the midst of this to walk around the city with me.

We did about 7 miles of walking around the center of the city. Our first stop was to visit the Nikola Tesla statue! I learned on my trip that Tesla was born in what is modern-day Croatia, so visiting the statue quickly became a must.

From the statue, we walked north, picked up a snack at one of the dozens of small bakeries that are all over the city, and sat down next to the beautiful Croatian National Theatre to enjoy it.

I was able to get some shopping done, and when we made it to the main square in Upper Town I noticed that it had been almost completely taken over by World Cup festivities. Most of the United States doesn't get too excited about the World Cup, so being in a country that cares about it during the tournament was a treat. In addition to the giant screens put up, there were little soccer (er, football?) games set up at pubs, roadside stands selling fan goodies, and even cars sporting the iconic red and white checkers of the Croatian team.

As our adventures wound down, I also got to see the outside of the main railway station in the city, which we’d go back to on Tuesday to catch a tram down to the zoo.

Monday night after tourist adventures, the conference organizers had a wonderful dinner for the keynote speakers at Pod grickim topom, or “Under the Cannon.” The food was exceptional, and even though I'm off alcohol at the moment (no honey schnapps for me!), I really enjoyed the family-style dinner that was prepared for us.

Photos from the rest of my touristing adventures here: https://www.flickr.com/photos/pleia2/sets/72157645274192044/

On Tuesday evening we went to the Zagreb Zoo! It's always interesting to visit zoos in other countries, but I'm also a bit apprehensive since they often aren't accredited by organizations like the AZA, which accredits many zoos in the United States, so I wasn't sure what to expect. I was pleasantly surprised by the quality of the Zagreb Zoo – many of the animals had big, very natural enclosures. The new lion enclosure was particularly impressive. As a city zoo in a park it reminded me a lot of the Central Park Zoo, but it's definitely larger, if not as big as some of the other zoos I've been to.

More photos from the zoo here: https://www.flickr.com/photos/pleia2/sets/72157645264459992/

Unfortunately I had to cut my touristing short by Wednesday as I had come down with a cold and decided that my time would be better spent getting some rest before my trip home. Still, I got a lot in during my stay. Next time I’ll have to visit the coast, I hear the beaches on the Adriatic are well worth the visit.

by pleia2 at June 21, 2014 06:21 AM

June 20, 2014

Elizabeth Krumbach

DORS/CLUC 2014 OpenStack CI Keynote and more

Several months ago I was invited to give a keynote at the DORS/CLUC conference in Croatia on the OpenStack Continuous Integration System. I’ve been excited about this opportunity since it came up, so it was a real pleasure to spend this past week at the conference, getting to know my friend Jasna better and meeting the rest of the conference crew and attendees.

I attended the keynotes each day, as they were all in English, and on Monday I also participated in a Women in FLOSS panel. The evenings were spent exploring the beautiful city of Zagreb, which I'll write about in another post once I upload my photos.

The first keynote was by Georg Greve who spoke on Kolab in his talk “Kolab: Do it right, or don’t do it at all” (slides here). I evaluated Kolab for use about 3 years ago and it was a bit rough around the edges, and I believe it was still using Horde as the webmail client. It was interesting to learn about where they’ve gone with development and the progress they’ve made. I was happy to learn that they are still fully open source (no “open core” or other kinds of proprietary modules for paying customers). Today it uses RoundCube for webmail, which I’d also go with if I were in a position to deploy a webmail client again. Finally, he spoke some about the somewhat unexpected success of their hosted Kolab solution, MyKolab, which had me seriously thinking again about my non-free use of Google Apps for email.

Next up was Dave Whiteland with a talk he called “Sharing things that work or ‘hey I just had somebody else’s really good idea’”, where he talked about the work that MySociety and Poplus are doing in the space of civic coding. I'm a big fan of civic coding projects, and it was great to hear that the existing projects are working together to provide platforms for governments and municipalities all over the world. He also talked about public engagement and the success of email alerts in the UK about politics, saying that if people have access to structured data, they will use it. This really resonated with me: as someone who is interested in being better informed, I struggle to find the time to stay informed. We're all busy people, and if we can get access to the facts in a clean, simple interface that draws from officially released information (which is hopefully largely unbiased), it's super helpful. It was really cool to hear about the Poplus components available today, including MapIt and WriteIt, which make civic mapping and contact projects easier.


Dave Whiteland

It was then time for the “Women in FLOSS technology” round table, which I participated in along with Ana Mandić, Jasna Benčić, Lucija Pilić and Marta Milaković. Jasna did a great job of rounding up great, really technical women for this panel, with a variety of experiences and experience levels in the FLOSS sphere. After introductions, we talked about challenges we've encountered in our work, which tended to be those that every new contributor runs into (not so much gender-based), and ways in which we've been helped in our work, from women-focused groups like LinuxChix to more formal programs like the Outreach Program for Women, organized by the GNOME Foundation and serving a vast array of FLOSS projects. Huge thanks to all my fellow round table participants, and for the great, positive comments from the audience about how they can help.


Women in FLOSS round table participants, thanks to Milan Rajačić for this photo

Tuesday kicked off with a keynote by Miklos Vajna on “Libre Office – what we fixed in 4.2/4.3.” During preparation for my recent talks on Ubuntu 14.04, I reviewed the release notes for 4.2, so I was somewhat familiar with changes like the bigger, improved Start Center screen that you get upon launch, but some of the other features I was less familiar with. They've added LibCMIS support (perhaps most notably for GDrive support), preliminary support for importing Keynote slide decks into Impress, per-character border support, a spreadsheet (Calc) storage rewrite for improved functionality and speed, the ability to print notes, and more. Upcoming features include Impress slide control from Android and iPhone and the ability to collaborate on documents using the Jabber protocol.


Miklos Vajna

We then heard from Georg Greve in his second keynote, “Living in the Free Software Society” (slides here). He began his talk by covering some of the fundamentals of FLOSS philosophies and then went into the importance of having people understand their rights when it comes to software they use and depend upon. He had several quotes from Lawrence Lessig’s recent commentary on free software and civic involvement. There was the sad realization that even though FLOSS has had better arguments for being the preferred solution for users (transparency, rights), it often hasn’t “won” as the preferred solution. As a result, he stressed the importance of helping those in power understand the technological fundamentals of bills and laws they are getting through Congress/Parliament and our role there. I also appreciated the observation that companies need to make an investment in implementing FLOSS technologies the “right way” with upstream collaboration (not internal forking) to avoid the massive internal maintenance problem that so many companies have encountered when going down this path and causing their FLOSS deployment to ultimately fail. Finally, I learned about the Terms of Service; Didn’t Read project which seeks “to rate and label website terms & privacy policies, from very good Class A to very bad Class E.” Cool.


Georg Greve

Wednesday morning was my keynote, and I had unfortunately developed a cold by this point in my week! Fortunately, I was able to get a lot of rest prior to my talk, and my familiarity with the material and slide deck made the talk go well in spite of this. Several of my colleagues have given this Overview of the Continuous Integration for OpenStack before, so I was excited about my own opportunity to give a talk on this fully open source system, particularly to an audience that is pretty new to the relatively new CI concept – hopefully they'll think of us when they do get around to setting up their own CI systems.

Slides from my talk are available here. We manage the slide deck collaboratively in git and you can always view the most recent rendered version here: http://docs.openstack.org/infra/publications/overview/


Thanks to Vedran Papeš for this photo, source

I really enjoyed this conference, huge thanks to Jasna Benčić and the whole conference crew for helping organize my trip and providing meals and entertainment for all of us while we were in town. It means so much to be welcomed so warmly into a country I’m not familiar with!

A few more of my photos from the event available here: https://www.flickr.com/photos/pleia2/sets/72157644861010398/

And photos from others in the DORS/CLUC 2014 Group on Flickr: https://www.flickr.com/groups/dc2014/

by pleia2 at June 20, 2014 09:08 PM

iheartubuntu

Retrieve Your Ubuntu One Data

PUBLIC SERVICE ANNOUNCEMENT

Canonical has announced that the Ubuntu One file services have been discontinued. Your data is available for download until the end of July; if you haven't taken action already, you need to do so now to ensure you have a copy of all your data.

In order to make it easy for you to retrieve all of your content, Ubuntu has released a new feature that lets you download all your content at once. The website https://one.ubuntu.com/ has been updated with instructions on how to conveniently download all your files.

In addition, you can still use Mover.io's offer to transfer your data to another cloud provider for free. And the Ubuntu One web interface is available for you to download individual files.

https://mover.io/connectors/ubuntu-one/

The previously announced option of downloading your files as a zip file is producing unreliable results for a small number of users and therefore that option has been removed. If you already retrieved your files as a zip file, Ubuntu encourages you to check for the validity of the zip file contents. If there are problems with that file, please use one of the options above to retrieve a complete copy of your data.

Remember that you will have until 31st July 2014 to collect all of your content. After that date, all remaining content will be deleted.

The Ubuntu One team

NOTE: To remove the annoying "Ubuntu One is closing down soon" pop-ups, you can remove Ubuntu One with the following terminal command (I used this and it worked fine):

sudo apt-get autoremove --purge python-ubuntuone-storageprotocol

Just make sure it doesn't try to remove "ubuntu-desktop" :) Alternatively, if you don't trust the command line, go into the Software Center, search for "python-ubuntuone-storageprotocol" and uninstall that.
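
If you'd like to double-check that first, apt-get can simulate the removal without changing anything:

# Dry run: -s (--simulate) only prints what would be removed.
apt-get -s autoremove --purge python-ubuntuone-storageprotocol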

by iheartubuntu (noreply@blogger.com) at June 20, 2014 04:15 AM

June 18, 2014

Akkana Peck

Fuzzy house finch chicks

[house finch chick] The wind was strong a couple of days ago, but that didn't deter the local house finch family. With three hungry young mouths to feed, and considering how long it takes to crack sunflower seeds, poor dad -- two days after Father's Day -- was working overtime trying to keep them all fed. They emptied my sunflower seed feeder in no time and I had to refill it that evening.

The chicks had amusing fluffy "eyebrow" feathers sticking up over their heads, and one of them had an interesting habit of cocking its tail up like a wren, something I've never seen house finches do before.

More photos: House finch chicks.

June 18, 2014 08:40 PM

June 16, 2014

Elizabeth Krumbach

Texas Linuxfest wrap-up

Last week I finally had the opportunity to attend Texas Linuxfest. I first heard about this conference back when it started from some Ubuntu colleagues who were getting involved with it, so it was exciting when my talk on Code Review for Systems Administrators was accepted.

I arrived late on Thursday night, much later than expected after some serious flight delays due to weather (including 3 hours on the tarmac at a completely different airport due to running out of fuel over DFW). But I got in early enough to get rest before the expo hall opened on Friday afternoon where I helped staff the HP booth.

At the HP booth, we were showing off the latest developments in the high-density Moonshot system, including the ARM-based processors that are coming out later this year (currently it's sold with server-grade Atom processors). It was cool to be able to see one, learn more about it, and chat with some of the developers at HP who are focusing on ARM.


HP Moonshot

That evening I joined others at the Speaker dinner at one of the Austin Java locations in town. Got to meet several cool new people, including another fellow from HP who was giving a talk, an editor from Apress who joined us from England and one of the core developers of BusyBox.

On Saturday the talks portion of the conference began!

The keynote was by Karen Sandler, titled “Identity Crisis: Are we who we say we are?”, which was a fascinating look at how we all present ourselves in the community. As a lawyer, she gave some great insight into the multiple loyalties that many contributors to Open Source have and explored some of them. This was quite topical for me as I continue to do a considerable amount of volunteer work with Ubuntu while working at HP on the OpenStack project as my paid job. But am I always speaking for HP in my role in OpenStack? I am certainly proud to represent HP's considerable efforts in the community, but in my day-to-day work I'm largely passionate about the project and my work on a personal level, and my views tend to be my own. During the Q&A there was also an interesting discussion about the use of email aliases, which got me thinking about my own. I have an Ubuntu address which I pretty strictly use for Ubuntu mailing lists and private Ubuntu-related correspondence, I have an HP address that I pretty much just use for internal HP work, and then everything else in my life goes to my main personal address – including all correspondence on the OpenStack, local Linux and other mailing lists.


Karen Sandler beginning her talk with a “Thank You” to the conference organizers

The next talk I went to was by Corey Quinn on “Selling Yourself: How to handle a technical interview” (slides here). I had a chat with him a couple of weeks back about this talk and was able to give some suggestions, so it was nice to see the full talk laid out. His experience comes from work at Taos, where he does a lot of interviewing of candidates and has been able to make several observations based on how people present themselves. He began by noting that a resume's only job is to get you an interview, so more time should be spent actually practicing interviewing rather than strictly focusing on a resume. As the title indicates, the key takeaway was that an interview is the place where you should be selling yourself; no modesty here. He also stressed that it's a two-way interview: the interviewer is very interested in making sure that the person will like the job and is actually interested, to some degree, in the work and the company.

It was then time for my own talk, “Code Review for Systems Administrators,” where I talked about how we do our work on the OpenStack Infrastructure team (slides here). I left a bit more time for questions than I usually do, since my colleague Khai Do was doing a presentation later that took a deeper dive into our continuous integration system (“Scaling the Openstack Test Environment“). I'm glad I did; there were several questions from the audience about some of our additional systems-administration-focused tooling, how we determine what we use (why Puppet? why Cacti?), and what our review process for those systems looks like.

Unfortunately this was all I could attend of the conference, as I had a flight to catch in order to make it to Croatia in time for DORS/CLUC 2014 this week. I do hope to make it back to Texas Linuxfest at some point; the event had a great venue and was well-organized, with speaker helpers in every room to do introductions, keep things on track (so nice!) and make sure the A/V was working properly. Huge thanks to Nathan Willis and the other organizers for doing such a great job.

by pleia2 at June 16, 2014 05:23 AM

June 15, 2014

Akkana Peck

Vim: Set wrapping and indentation according to file type

Although I use emacs for most of my coding, I use vim quite a lot too, for quick edits, mail messages, and anything I need to edit when logged onto a remote server. In particular, that means editing my procmail spam filter files on the mail server.

The spam rules are mostly lists of regular expression patterns, and they can include long lines, such as:
gift ?card .*(Visa|Walgreen|Applebee|Costco|Starbucks|Whitestrips|free|Wal.?mart|Arby)

My default vim settings for editing text, including line wrap, don't work if I get a flood of messages offering McDonald's gift cards and decide I need to add a "|McDonald" on the end of that long line.

Of course, I can type ":set tw=0" to turn off wrapping, but who wants to have to do that every time? Surely vim has a way to adjust settings based on file type or location, like emacs has.

It didn't take long to find an example of Project specific settings on the vim wiki. Thank goodness for the example -- I definitely wouldn't have figured that syntax out just from reading manuals. From there, it was easy to make a few modifications and set textwidth=0 if I'm opening a file in my procmail directory:

" Set wrapping/textwidth according to file location and type
function! SetupEnvironment()
  let l:path = expand('%:p')
  if l:path =~ '/home/akkana/Procmail'
    " When editing spam filters, disable wrapping:
    setlocal textwidth=0
endfunction
autocmd! BufReadPost,BufNewFile * call SetupEnvironment()

Nice! But then I remembered other cases where I want to turn off wrapping. For instance, editing source code in cases where emacs doesn't work so well -- like remote logins over slow connections, or machines where emacs isn't even installed, or when I need to do a lot of global substitutes or repetitive operations. So I'd like to be able to turn off wrapping for source code.

I couldn't find any way to just say "all source code file types" in vim. But I can list the ones I use most often. While I was at it, I threw in a special wrap setting for mail files:

" Set wrapping/textwidth according to file location and type
function! SetupEnvironment()
  let l:path = expand('%:p')
  if l:path =~ '/home/akkana/Procmail'
    " When editing spam filters, disable wrapping:
    setlocal textwidth=0
  elseif (&ft == 'python' || &ft == 'c' || &ft == 'html' || &ft == 'php')
    setlocal textwidth=0
  elseif (&ft == 'mail')
    " Slightly narrower width for mail (and override mutt's override):
    setlocal textwidth=68
  else
    " default textwidth slightly narrower than the default
    setlocal textwidth=70
  endif
endfunction
autocmd! BufReadPost,BufNewFile * call SetupEnvironment()

As long as we're looking at language-specific settings, what about doing language-specific indentation like emacs does? I've always suspected vim must have a way to do that, but it doesn't enable it automatically like emacs does. You need to set three variables, assuming you prefer to use spaces rather than tabs:

" Indent specifically for the current filetype
filetype indent on
" Set indent level to 4, using spaces, not tabs
set expandtab shiftwidth=4

Then you can also use useful commands like << and >> for in- and out-denting blocks of code, or == for indenting to the right level. It turns out vim's language indenting isn't all that smart, at least for Python, and gets the wrong answer a lot of the time. You can't rely on it as a syntax checker the way you can with emacs. But it's a lot better than no language-specific indentation.

I will be a much happier vimmer now!

June 15, 2014 05:29 PM

June 12, 2014

Jono Bacon

FirefoxOS and Developing Markets

It seems Mozilla is targeting emerging markets and developing nations with $25 cell phones. This is tremendous news, and an admirable focus for Mozilla, but it is not without risk.

Bringing simple, accessible technology to these markets can have a profound impact. As an example, in 2001, 134 million Nigerians shared 500,000 land-lines (as covered by Jack Ewing in Businessweek back in 2007). That year the government started encouraging wireless market competition and by 2007 Nigeria had 30 million cellular subscribers.

This generated market competition and better products, but more importantly, we have seen time and time again that access to technology such as cell phones improves education, provides opportunities for people to start small businesses, and in many cases is a contributing factor for bringing people out of poverty.

So, cell phones are having a profound impact in these nations, but the question is, will it work with FirefoxOS?

I am not sure.

In Mozilla’s defence, they have done an admirable job with FirefoxOS. They have built a powerful platform, based on open web technology, and they lined up a raft of carriers to launch with. They have a strong brand, an active and passionate community, and like so many other success stories, they already have a popular existing product (their browser) to get them into meetings and headlines.

Success though is judged by many different factors, and having a raft of carriers and products on the market is not enough. If they ship in volume but get high return rates, it could kill them, as is common for many new product launches.

What I don’t know is whether this volume/return-rate balance plays such a critical role in developing markets. I would imagine that return rates could be higher (such as someone who has never used a cell phone before taking it back because it is just too alien to them). On the other hand, I wonder if those consumers there are willing to put up with more quirks just to get access to the cell network and potentially the Internet.

What seems clear to me is that success here has little to do with the elegance or design of FirefoxOS (or any other product for that matter). It is instead about delivering incredibly dependable hardware. In developing nations people have less access to energy for charging devices, have to work harder to obtain it, and have less access to support resources for learning how to use new technology. As such, it really needs to just work. This factor, I imagine, is going to be largely outside of Mozilla's hands.

So, in a nutshell, if the $25 phones fail to meet expectations, it may not be Mozilla’s fault. Likewise, if they are successful, it may not be to their credit.

by jono at June 12, 2014 11:40 PM

Akkana Peck

Comcast actually installed a cable! Or so they say.

The doorbell rings at 10:40. It's a Comcast contractor.

They want to dig across the driveway. They say the first installer didn't know anything, he was wrong about not being able to use the box that's already on this side of the road. They say they can run a cable from the other side of the road through an existing conduit to the box by the neighbor's driveway, then dig a trench across the driveway to run the cable to the old location next to the garage.

They don't need to dig across the road since there's an existing conduit; they don't even need to park in the road. So no need for a permit.

We warn them we're planning to have driveway work done, so the driveway is going to be dug up at some point, and they need to put it as deep as possible. We even admit that we've signed a contract with CenturyLink for DSL. No problem, they say, they're being paid by Comcast to run this cable, so they'll go ahead and do it.

We shrug and say fine, go for it. We figure we'll mark the trench across the driveway afterward, and when we finally have the driveway graded, we'll make sure the graders know about the buried cable. They do the job, which takes less than an hour.

If they're right that this setup works, that means, of course, that this could have been done back in February or any time since then. There was no need to wait for a permit, let alone a need to wait for someone to get around to applying for a permit.

So now, almost exactly 4 months after the first installer came out, we may have a working cable installed. No way to know for sure, since we've been happily using DSL for over a month. But perhaps we'll find out some day.

The back story, in case you missed it: Getting cable at the house: a Comcast Odyssey.

June 12, 2014 09:48 PM