Planet Ubuntu California

April 21, 2015

Akkana Peck

Finding orphaned files on websites

I recently took over a website that's been neglected for quite a while. As well as some bad links, I noticed a lot of old files, files that didn't seem to be referenced by any of the site's pages. Orphaned files.

So I went searching for a link checker that also finds orphans. I figured that would be easy. It's something every web site maintainer needs, right? I've gotten by without one for my own website, but I know there are some bad links and orphans there and I've often wanted a way to find them.

An intensive search turned up only one possibility: linklint, which has a -orphan flag. Great! But, well, not really: after a few hours of fiddling with options, I couldn't find any way to make it actually find orphans. Either you run it on an http:// URL, and it says it's searching for orphans but doesn't find any (because it ignores any local directory you specify); or you run it just on a local directory, in which case it finds a gazillion orphans that aren't actually orphans, because they're referenced by files generated with PHP or other web technology. Plus it flags all the bad links in all those supposed orphans, which get in the way of finding the real bad links you need to worry about.

I tried asking on a couple of technical mailing lists and IRC channels. I found a few people who had managed to use linklint, but only by spidering an entire website to local files (thus getting rid of any server-side dependencies like PHP, CGI or SSI) and then running linklint on the local directory. I'm sure I could do that one time, for one website. But if it's that much hassle, there's not much chance I'll keep using it to keep websites maintained.

What I needed was a program that could look at a website and local directory at the same time, and compare them, flagging any file that isn't referenced by anything on the website. That sounded like it would be such a simple thing to write.

So, of course, I had to try it. This is a tool that needs to exist -- and if for some bizarre reason it doesn't exist already, I was going to remedy that.

Naturally, I found out that it wasn't quite as easy to write as it sounded. Reconciling a URL like "http://mysite.com/foo/bar.html" or "../asdf.html" with the corresponding path on disk turned out to have a lot of twists and turns.

But in the end I prevailed. I ended up with a script called weborphans (on github). Give it both a local directory for the files making up your website, and the URL of that website, for instance:

$ weborphans /var/www/ http://localhost/

It's still a little raw, certainly not perfect. But it's good enough that I was able to find the 10 bad links and 606 orphaned files on this website I inherited.

April 21, 2015 08:55 PM

April 20, 2015

Elizabeth Krumbach

POSSCON 2015

This past week I had the pleasure of attending POSSCON in Columbia, the beautiful capital city of South Carolina. The event kicked off with a social at Hickory Tavern, which I arranged to be at by tolerating a tight connection in Charlotte. It all worked out, and in spite of generally being really shy at these kinds of socials, I found some folks I knew and had a good time. Late in the evening several of us even had the opportunity to meet the Mayor of Columbia, who had come down to the event, and talk about our work and the importance of open source in the economy today. It’s really great to see that kind of support for open source in a city.

The next morning the conference proper kicked off. Organizer Todd Lewis opened the event and quickly handed things off to Lonnie Emard, the President of IT-oLogy. IT-oLogy is a non-profit that promotes initial and continued learning in technology through events targeting everyone from children in grade school to professionals seeking to extend their skill set; there’s more on their About page. As a partner for POSSCON, they were a huge part of the event, even hosting the second day at their offices.

We then heard from the aforementioned Columbia Mayor, Steve Benjamin. A keynote from the city mayor was a real treat; taking time out of what I’m sure is a busy schedule showed a clear commitment to building technology in Columbia. It was really inspiring to hear him talk about the city: with political support and work from IT-oLogy, it sounds like an interesting place to build or grow a career in tech. There was then a welcome from Amy Love, the South Carolina Department of Commerce Innovation Director. Talk about local support! Go South Carolina!

The next keynote was from Andy Hunt, speaking on “A New Look at Openness.” It began with a history of how development has progressed, from paying for licenses and compilers for proprietary development to the free and open source tool set, and its respective licenses, that we work with today. He talked about how this all carries forward into the Internet of Things, where we can now build physical objects and track everything from keys to pets. Today’s world for developers, he argued, is not about inventing but innovating, and he implored the audience to seek out this innovation by using the building blocks of open source as a foundation. In the idea space he proposed these steps for innovative thinking:

  1. Gather raw material
  2. Work it
  3. Forget the whole thing
  4. Eureka/My that’s peculiar
  5. Refine and develop
  6. Profit!

Directly following the keynote I gave my talk on Tools for Open Source Systems Administration in the Operations/Back End track. It had the themes of many of my previous talks on how the OpenStack Infrastructure team does systems administration in an open source way, but I refocused this talk to be directly about the tools we use to accomplish this as a geographically distributed team spread across several different companies. The talk went well and I had a great audience; huge thanks to everyone who came out for it. It was a real pleasure to talk with folks throughout the rest of the conference who had questions about specific parts of how we collaborate. Slides from my presentation are here (pdf).

The next talk in the Operations/Back End track was Converged Infrastructure with Sanoid by Jim Salter. With Sanoid, he is seeking to bring enterprise-level predictability, minimal downtime and rapid recovery to small-to-medium-sized businesses. Using commodity components, from hardware through software, he’s built a system that virtualizes all services and runs on ZFS on Linux to take hourly (by default) snapshots of running systems. When something goes wrong, from a bad upgrade to a LAN infected with a virus, he has the ability to quickly roll users back to the latest snapshot. It also has a system for easily creating on- and off-site backups, and uses Nagios for monitoring, which is how I learned about aNag, a Nagios client for Android; I’ll have to check it out! I had the opportunity to spend more time with Jim as the conference went on, which included swinging by his booth for a Sanoid demo. Slides from his presentation are here.
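
Sanoid itself wasn’t shown in code during the talk, but here is a minimal sketch of the underlying ZFS commands it automates; the pool, dataset and snapshot names below are made up:

# Take a named, copy-on-write snapshot of a dataset (effectively instantaneous)
zfs snapshot tank/vm-images@hourly-2015-04-15_1400

# List the snapshots that exist
zfs list -t snapshot

# Roll the dataset back to a snapshot; only the most recent one can be
# targeted unless -r is used to discard newer snapshots first
zfs rollback tank/vm-images@hourly-2015-04-15_1400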

For lunch they served BBQ. I don’t really care for typical red BBQ sauce, so when I saw a yellow sauce option at the buffet I covered my chicken in that instead. I had discovered South Carolina Mustard BBQ sauce. Amazing stuff. Changed my life. I want more.

After lunch I went to see a talk by Isaac Christofferson on Assembling an Open Source Toolchain to Manage Public, Private and Hybrid Cloud Deployments. With a focus on automation, standardization and repeatability, he walked us through his usage of Packer, Vagrant and Ansible to interface with a variety of different clouds and VMs. I’m also apparently the last systems administrator alive who hadn’t heard of devopsbookmarks.com, but he shared the link and it’s a great site.

The rooms for the talks were spread around a very walkable area in downtown Columbia. I wasn’t sure how I’d feel about this and worried it would be a problem, but with speakers staying on schedule we were afforded a full 15 minutes between talks to switch tracks. The venue I spoke in was a Hilton, and the next talk I went to was in a bar! It made for enjoyable short walks outside between talks and a diversity of venues that was a lot of fun.

That next talk I went to was Open Source and the Internet of Things, presented by Erica Stanley. I had the pleasure of being on a panel with Erica back in October during All Things Open (see here for a great panel recap), so it was really great running into her at this conference as well. Her talk was a deluge of information about the Internet of Things (IoT) and how we can all be makers for it! She went into detail about the technology and ideas behind all kinds of devices, and on slides 41 and 42 she gave a quick tour of hardware and software tools that can be used to build for the IoT. She also went through some of the philosophy, guidelines and challenges for IoT development. Slides from her talk are online here; the wealth of knowledge packed into that slide deck is definitely worth spending some time with if you’re interested in the topic.

The last pre-keynote talk I went to was by Tarus Balog with a Guide to the Open Source Desktop. A self-confessed former Apple fanboy, he had quite the sense of humor about his past, where “everything was white and had an apple on it,” and his move to using only open source software. As someone who has been using Linux and friends for almost a decade and a half, I wasn’t at this talk to learn about the tools available, but instead to see how a long-time Mac user could actually make the transition. It’s also interesting to me as a member of the Ubuntu and Xubuntu projects to see how newcomers view entrance into the world of Linux and how they evaluate and select tools. He walked the audience through the process he used to select a distro and desktop environment, and then all the applications: mail, calendar, office suite and more. Of particular interest, he showed a preference for Banshee (it reminded him of old iTunes), as well as digiKam for managing photos. Accounting-wise he is still tied to QuickBooks, but either runs it under Wine or over VNC from a Mac.

The day wound down with a keynote from Jason Hibbets. He wrote The foundation for an open source city and is a Project Manager for opensource.com. His keynote was all about stories, and why it’s important to tell our open source stories. I’ve really been impressed with the development of opensource.com over the past year (disclaimer: I’ve written for them too); they’ve managed to find hundreds of inspirational and beneficial stories of open source adoption from around the world. In this talk he highlighted a few of these, including the work of my friend Charlie Reisinger at Penn Manor and Stu Keroff with students in the Asian Penguins computer club (check out a video from them here). How exciting! The evening wrapped up with an afterparty (I enjoyed a nice Palmetto Amber Ale) and a great speakers and sponsors dinner; huge thanks to the conference staff for putting on such a great event and making us feel so welcome.

The second day of the conference took place across the street from the South Carolina State House at the IT-oLogy office. The day consisted of workshops, so the sessions were much longer and more involved. But the day also kicked off with a keynote, from Bradley Kuhn, who gave an introductory talk on free software licensing: Software Freedom Licensing: What You Must Know. He did a great job offering a balanced view of the licenses available and the importance of selecting one appropriate to your project and team from the beginning.

After the keynote I headed upstairs to learn about OpenNMS from Tarus Balog. I love monitoring, but as a systems administrator and not a network administrator, I’ve mostly been using service-based monitoring tooling and hadn’t really looked into OpenNMS. The workshop was an excellent tour of the basics of the project, including a short history and their current work. He walked us through the basic installation and setup, some of the configuration changes needed for SNMP, and the XML-based changes made to various other parts of the infrastructure. He also talked about static and auto-discovery mechanisms for a network, how events and alarms work, and details about setting up the notification system effectively. He wrapped up by showing off some interesting graphs and other visualizations that they’re working to bring into the system for individuals in your organization who prefer to see the data presented in a less technical format.

The afternoon workshop I attended was put on by Jim Salter and went over Backing up Android using Open Source technologies. This workshop focused on backing up content and not the Android OS itself, but happily for me, that’s what I wanted to back up, as I run stock Android from Google otherwise (easy to install again from a generic source as needed). Now, Google will happily back up all your data, but what if you want to back it up locally and store it on your own system? By using rsync backup for Android, Jim demonstrated how to configure your phone to send backups to Linux, Windows and Mac using ssh+rsync. For Linux at least, so far this is a fully open source solution, which I quite like and have started using at home. The next component makes it automatic, which is where we get into a proprietary bit of software, Llama – Location Profiles. Based on various types of criteria (battery level, location, time, and lots more), Llama lets you define when it runs certain actions, like automatically running rsync to do backups. In all, it was a great and informative workshop and I’m happy to finally have a useful solution for pulling photos and things off my phone periodically without plugging it in and using MTP, which apparently I hate and so never do. Slides from Jim’s talk, which also include specific instructions and tools for Windows and Mac, are online here.
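
To give a rough idea of what the ssh+rsync approach boils down to, here’s a hedged sketch of the sort of command that ends up running on the phone; the paths, user and hostname are placeholders rather than anything taken from Jim’s slides:

# Push the phone's camera folder to a Linux box over ssh;
# -a preserves timestamps and permissions, -v is verbose, -z compresses in transit
rsync -avz /sdcard/DCIM/Camera/ backupuser@mydesktop:/srv/backups/phone/Camera/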

The conference concluded with Todd Lewis sending more thanks all around. By this time in the day rain was coming down in buckets and there were no taxis to be seen, so I grabbed a ride from Aaron Crosman, who I had been happy to learn earlier was a local who had come from Philadelphia, and we had great Philly tech and city vs. country tech stories to swap.

More of my photos from the event are available here: https://www.flickr.com/photos/pleia2/sets/72157651981993941/

by pleia2 at April 20, 2015 06:07 PM

Jono Bacon

Announcing Chimp Foot.

I am delighted to share my new music project: Chimp Foot.

I am going to be releasing a bunch of songs, which are fairly upbeat rock and roll (no growly metal here). The first tune is called ‘Line In The Sand’ and is available here.

All of these songs are available under a Creative Commons Attribution ShareAlike license, which means you can download, share, remix, and sell them. I am also providing a karaoke version with vocals removed (great for background music) and all of the individual instrument tracks that I used to create each song. This should provide a pretty comprehensive archive of open material.

Please follow me on SoundCloud and/or on Twitter, Facebook, and Google+.

Shares would be much appreciated, and feedback on the music is welcome!

by jono at April 20, 2015 04:22 PM

April 16, 2015

Akkana Peck

I Love Small Town Papers

I've always loved small-town newspapers. Now I have one as a local paper (though more often, I read the online Los Alamos Daily Post). The front page of the Los Alamos Monitor yesterday particularly caught my eye:

[Los Alamos Monitor front page]

I'm not sure how they decide when to include national news along with the local news; often there are no national stories, but yesterday I guess this story was important enough to make the cut. And judging by font sizes, it was considered more important than the high school debate team's bake sale, but of the same importance as the Youth Leadership group's day for kids to meet fire and police reps and do arts and crafts. (Why this is called "Wild Day" is not explained in the article.)

Meanwhile, here are a few images from a hike at Bandelier National Monument: first, a view of the Tyuonyi Pueblo ruins from above (click for a larger version):

[View of Tyuonyi Pueblo ruins from above]

[Petroglyphs on the rim of Alamo Canyon] Some petroglyphs on the wall of Alamo Canyon. We initially called them spirals but they're actually all concentric circles, plus one handprint.

[Unusually artistic cairn in Lummis Canyon] And finally, a cairn guarding the bottom of Lummis Canyon. All the cairns along this trail were fairly elaborate and artistic, but this one was definitely the winner.

April 16, 2015 08:01 PM

April 14, 2015

Jono Bacon

Open Source, Makers, and Innovators

Recently I started writing a column on opensource.com called Six Degrees.

They just published my latest column on how Open Source could provide the guardrails for a new generation of makers and innovators.

Go and read the column here.

You can read the two previous columns here:

by jono at April 14, 2015 03:59 PM

Eric Hammond

AWS Lambda Event-Driven Architecture With Amazon SNS

Today, Amazon announced that AWS Lambda functions can be subscribed to Amazon SNS topics.

This means that any message posted to an SNS topic can trigger the execution of custom code you have written, but you don’t have to maintain any infrastructure to keep that code available to listen for those events and you don’t have to pay for any infrastructure when the code is not being run.

This is, in my opinion, the first time that Amazon can truly say that AWS Lambda is event-driven, as we now have a central, independent, event management system (SNS) where any authorized entity can trigger the event (post a message to a topic) and any authorized AWS Lambda function can listen for the event, and neither has to know about the other.

Making this instantly useful is the fact that there already are a number of AWS services and events that can post messages to Amazon SNS. This means there are a lot of application ideas that are ready to be implemented with nothing but a few commands to set up the SNS topic, and some snippets of nodejs code to upload as an AWS Lambda function.
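
To give a sense of how little setup that is, creating a topic is a single aws-cli call (the topic name below is just an example; the syntax for subscribing a Lambda function is covered in the next post):

# Create an SNS topic and print its ARN for use in later subscribe calls
aws sns create-topic --name my-event-topic --output text --query 'TopicArn'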

Unfortunately, I was unable to find a comprehensive list of all the AWS services and events that can post messages to Amazon SNS (Simple Notification Service).

I’d like to try an experiment and ask the readers of this blog to submit pointers to AWS and other services which can be configured to post events to Amazon SNS. I will collect the list and update this blog post.

Here’s the list so far:

You can either submit your suggestions as comments on this blog post, or tweet the pointer mentioning @esh

Thanks for contributing ideas:

[2015-04-13: Updated with input from comments and Twitter]

Original article: http://alestic.com/2015/04/aws-lambda-sns

by Eric Hammond at April 14, 2015 02:40 AM

Subscribing AWS Lambda Function To SNS Topic With aws-cli

The aws-cli documentation and command line help text have not been updated yet to include the syntax for subscribing an AWS Lambda function to an SNS topic, but it does work!

Here’s the format:

aws sns subscribe \
  --topic-arn arn:aws:sns:REGION:ACCOUNT:SNSTOPIC \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:REGION:ACCOUNT:function:LAMBDAFUNCTION

where REGION, ACCOUNT, SNSTOPIC, and LAMBDAFUNCTION are substituted with appropriate values for your account.

For example:

aws sns subscribe --topic-arn arn:aws:sns:us-east-1:012345678901:mytopic \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:us-east-1:012345678901:function:myfunction

This returns an SNS subscription ARN like so:

{
    "SubscriptionArn": "arn:aws:sns:us-east-1:012345678901:mytopic:2ced0134-e247-11e4-9da9-22000b5b84fe"
}

You can unsubscribe with a command like:

aws sns unsubscribe \
  --subscription-arn arn:aws:sns:us-east-1:012345678901:mytopic:2ced0134-e247-11e4-9da9-22000b5b84fe

where the subscription ARN is the one returned from the subscribe command.

I’m using the latest version of aws-cli as of 2015-04-15 on the GitHub “develop” branch, which is version 1.7.22.
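
If you want the same bleeding-edge build, one way (not necessarily the officially recommended one) is to install straight from that branch with pip, ideally inside a virtualenv:

# Install aws-cli from the GitHub develop branch
pip install --upgrade git+https://github.com/aws/aws-cli.git@develop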

Original article: http://alestic.com/2015/04/aws-cli-sns-lambda

by Eric Hammond at April 14, 2015 01:56 AM

April 13, 2015

iheartubuntu

Free Ubuntu Stickers


I have only 3 sheets of Ubuntu stickers to give away! If you are interested in one of them, I will randomly pick (via random.org) three people. I'll ship each page of stickers anywhere in the world along with an official Ubuntu 12.04 LTS disc.

To enter into our contest, please "like" our Facebook page for a chance to win. Contest ends Friday, April 17, 2015. I'll announce the three winners the day after. Thanks for the like!

https://www.facebook.com/iheartubuntu

by iheartubuntu (noreply@blogger.com) at April 13, 2015 09:55 PM

April 12, 2015

Elizabeth Krumbach

Spring Trip to Philadelphia and New Jersey

I didn’t think I’d be getting on a plane at all in March, but plans shifted and we scheduled a trip to Philadelphia and New Jersey that left my beloved San Francisco on Sunday March 29th and returned us home on Monday, April 6th.

Our mission: Deal with our east coast storage. Without getting into the boring and personal details, we had to shut down a storage unit that MJ has had for years and go through some other existing storage to clear out donatable goods and finally catalog what we have so we have a better idea what to bring back to California with us. This required movers, almost an entire day devoted to donations and several days of sorting and repacking. It’s not all done, but we made pretty major progress, and did close out that old unit, so I’m calling the trip a success.

Perhaps what kept me sane through it all was the fact that MJ has piles of really old hardware, which is a delight to share on social media. Geeks from all around got to gush over goodies like the 32-bit SPARC lunchboxes (and commiserate with me as I tried to close them).


Notoriously difficult to close, but it was done!

Now admittedly, I do have some stuff in storage too, including my SPARC Ultra 10 that I wrote about here, back in 2007. I wanted to bring it home on this trip, but I wasn’t willing to put it in checked baggage and the case is a bit too big to put in my carry-on. Perhaps next trip I’ll figure out some way to ship it.


SPARC Ultra 10

More gems were collected in my album from the trip: https://www.flickr.com/photos/pleia2/sets/72157651488307179/

We also got to visit friends and family and enjoy some of our favorite foods we can’t find here in California, including east coast sweet & sour chicken, hoagies and chicken cheese steaks.

Family visits began on Monday afternoon as we picked up the plastic storage totes we were using to replace boxes, many of which were hard to go through in their various states of squishedness and age. MJ had them delivered to his sister in Pennsylvania and they were immensely helpful when we did the move on Tuesday. We also got to visit with MJ’s father and mother, and on Saturday met up with his cousins in New Jersey to have my first family Seder for Passover! Previously I’d gone to ones at our synagogue, but this was the first time I’d done one in someone’s home, and it meant a lot to be invited and to participate. Plus, the Passover diet restrictions did nothing to stem the exceptional dessert spread; there was so much delicious food.

We were fortunate to be in town for the first Wednesday of the month, since that allowed us to attend the Philadelphia area Linux Users Group meeting in downtown Philadelphia. I got to see several of my Philadelphia friends at the meeting, and brought along a box of books from Pearson to give away (including several copies of mine), which went over very well with the crowd gathered to hear from Anthony Martin, Keith Perry, and Joe Rosato about ways to get started with Linux, and freed up space in my closet here at home. It was a great night.


Presentation at PLUG

Friend visits included a fantastic dinner with our friend Danita and a quick visit to see Mike and Jessica, who had just welcomed little David into the world, awww!


Staying in New Jersey meant we could find Passover-friendly meals!

Sunday wrapped up with a late night at storage, finalizing some of our sorting and packing up the extra suitcases we brought along. We managed to get a couple hours of sleep at the hotel before our flight home at 6AM on Monday morning.

In all, it was a productive trip, but exhausting and I spent this past week making up for sleep debt and the aches and pains. Still, it felt good to get the work done and visit with friends we’ve missed.

by pleia2 at April 12, 2015 04:26 PM

iheartubuntu

Edward Snowden on Passwords

Just a friendly reminder on developing stronger passwords...


by iheartubuntu (noreply@blogger.com) at April 12, 2015 01:30 PM

April 11, 2015

iheartubuntu

Elementary Freya Released

FREYA. The next generation of Elementary OS is here. Lightweight and beautiful. All-new apps. A refined look. You can help support the devs and name your price or download it for free.

Based on the countdown on their website, the new Freya version of Elementary OS has now arrived!

Download it here:

They will be having a Special LIVE Elementary OS Hangout here as well for the launch...


I have the beta version of Elementary OS Freya installed on one of my laptops and it works great. It's easy to install and it's beautiful. It is crafted by designers and developers who believe that computers can be easy, fun, and gorgeous. By putting design first, Elementary ensures they won't compromise on quality or usability. It's also based on Ubuntu 14.04, making it easy to install PPAs.

You can get a feel of the new Elementary OS Freya by checking out this video on Youtube...



and this review also on Youtube...



Elementary OS is definitely worth a look!

by iheartubuntu (noreply@blogger.com) at April 11, 2015 03:00 PM

April 10, 2015

iheartubuntu

Please Take Our Survey

We would love your input. Please take our short little survey. We'll take what you say to "heart" and make I Heart Ubuntu awesome!


by iheartubuntu (noreply@blogger.com) at April 10, 2015 06:08 PM

We Are Back!


We are trying to sort out some graphics and artwork and other stuff so please bear with us. Hope to see everyone again very soon.

by iheartubuntu (noreply@blogger.com) at April 10, 2015 06:07 PM

LMDE 2 “Betsy” MATE & CINNAMON Released


Today, the Linux Mint team announced the release of the LMDE 2 “Betsy” MATE desktop as well as the Cinnamon desktop environments.

LMDE (Linux Mint Debian Edition) is a very exciting distribution, targeted at experienced users, which provides the same environment as Linux Mint but uses Debian as its package base, instead of Ubuntu.

LMDE is less mainstream than Linux Mint, it has a much smaller user base, it is not compatible with PPAs, and it lacks a few features. That makes it a bit harder to use and harder to find help for, so it is not recommended for novice users.

Important release info, system requirements, upgrade instructions and more can be found about these releases directly on Linux Mint website.

by iheartubuntu (noreply@blogger.com) at April 10, 2015 06:07 PM

Torchlight 2 Now on Steam


Torchlight II is a dungeon crawler game that lets you choose to play as a few different classes. The basic concept is the same as nearly all dungeon crawlers: Explore, level up, find gear, beat the boss, rinse and repeat.

A few years ago I really enjoyed playing the original Torchlight. It worked great on Ubuntu. There were some shading issues with the 3D rendering making your characters' faces invisible, but those little problems are of no worry in this new version. Torchlight 2 has improved almost every aspect of the original game.

About a month ago Steam launched Torchlight 2 with Linux support. The new version supports cross-platform multiplayer, game saves work across all platforms, and game modding is even supported.

Installation through Steam is simple. The download was about 1GB in size. At work I have a slimline computer with a Pentium G2020 processor, 4 GB RAM, and a 1GB nVidia GeForce 210 video card. Graphics are superb. It doesn't get much better than this. I even maxed out all of the graphics settings. Game play is smooth and enjoyable. I've just been having a fun time going deeper and deeper into the dungeons fighting new bad guys. The scenery alone is worth it. Can't wait to try multiplayer!

You can even zoom in with the mouse wheel and fight your battles up close!


Here are the recommended system specs...

  • Ubuntu 12.04 LTS (or similar, Debian-based distro)
  • x86/x86_64-compatible, 2.0GHz or better processor
  • 2GB System RAM
  • 1.7 GB hard drive space (subject to change)
  • OpenGL 2.0 compatible 3D graphics card* with at least 256 MB of addressable memory (ATI Radeon x1600 or NVIDIA equivalent)
  • A broadband Internet connection (For Steam download and online multiplayer)
  • Online multiplayer requires a free Runic Account.
  • Requires Steam.

The game itself costs $20 on Steam, but you can get it as part of the Humble Bundle 14 set of games if you pay at least $6.15. If you are considering Torchlight 2, you have until April 14, when the Humble Bundle deal expires.

Torchlight II is a great hack-n-slash that every fan of this type of game should own. It will entertain you for hours and hours and you will hardly repeat boring actions like farming and grinding. A must have, for such a low price.

I enjoyed this game so much I'm giving it a 5 out of 5 rating :)


by iheartubuntu (noreply@blogger.com) at April 10, 2015 05:58 PM

April 09, 2015

iheartubuntu

Ubuntu Artwork on Flickr


I am always in search of new and interesting wallpapers. For many years Ubuntu has had a great Flickr group that is used to help decide which wallpapers make it into each new Ubuntu release. Most of them don't make it, but there are definitely some great quality images in this Flickr group. You can easily spend an hour here and pick favorites.

Check it out:

https://www.flickr.com/groups/ubuntu-artwork/pool/page1

by iheartubuntu (noreply@blogger.com) at April 09, 2015 07:49 AM

April 08, 2015

iheartubuntu

MAT - Metadata Anonymisation Toolkit


This is a great program used to help protect your privacy.

Metadata consists of information that characterizes data. Metadata is used to provide documentation for data products. In essence, metadata answers who, what, when, where, why, and how about every facet of the data that is being documented.

Metadata within a file can tell a lot about you. Cameras record data about when a picture was taken and what camera was used. Office suites and PDF tools automatically add author and company information to documents and spreadsheets.

Maybe you don't want to disclose that information on the web.

MAT can only remove metadata from your files; it does not anonymise their content, nor can it handle watermarking, steganography, or overly custom metadata fields or systems.

If you really want to be anonymous, use a format that does not contain any metadata, or better yet, use plain-text.

These are the formats supported to some extent:

Portable Network Graphics (PNG)
JPEG (.jpeg, .jpg, ...)
Open Document (.odt, .odx, .ods, ...)
Office Openxml (.docx, .pptx, .xlsx, ...)
Portable Document Fileformat (.pdf)
Tape ARchive (.tar, .tar.bz2, .tar.gz)
ZIP (.zip)
MPEG Audio (.mp3, .mp2, .mp1, .mpa)
Ogg Vorbis (.ogg)
Free Lossless Audio Codec (.flac)
Torrent (.torrent)

The President of the United States and his birth certificate would have greatly benefited from software such as MAT.

You can install MAT with this terminal command:

sudo apt-get install mat
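
Basic usage is from the terminal as well; the invocations below are from memory and may differ slightly between MAT versions, so check mat --help:

# Show the metadata MAT detects in a file (flag may vary by version)
mat -d vacation-photo.jpg

# Strip the supported metadata from the file
mat vacation-photo.jpg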

Look for more articles about privacy soon, or find existing ones by searching our site for "privacy".

by iheartubuntu (noreply@blogger.com) at April 08, 2015 08:00 PM

Tasque TODO List App


We're getting back to some of the old basic apps that a lot of people used to use in Ubuntu. Many of them still work great, and without needing any internet connection.

Tasque (pronounced like “task”) is a simple task management app (TODO list) for the Linux Desktop and Windows. It supports syncing with the online service Remember the Milk or simply storing your tasks locally.

The main window has the ability to complete a task, change the priority, change the name, and change the due date without additional property dialogs.

When a user clicks on a task priority, a list of possible priorities is presented and when selected, the task is re-prioritized in the order you wish.

When you click on the due date, a list of the next seven days is presented along with an option to remove the date or select a date from a calendar.

A user completes a task by clicking the check box on a task. The task is crossed out indicating it is complete and a timer begins counting down to the right of the task. When the timer is done, the task is removed from view.

As mentioned, Tasque can save tasks locally or use Remember the Milk, a free online to-do list, as its backend. On one of my computers, saving my tasks using RTM works great; on my computer at work, it won't sync my tasks. I haven't figured out why, but I will post any updates here once I get it working or find a workaround.

You can install Tasque from the Ubuntu Software Center or with this terminal command:

sudo apt-get install tasque

All in all, Tasque is a great little task app. Really simple to use!

by iheartubuntu (noreply@blogger.com) at April 08, 2015 08:00 PM

Tomboy The Original Note App


When I first started using Ubuntu back in early 2007 (Ubuntu 6.10) I fell in love with a pre-installed app called Tomboy. I had used Tomboy for several years, until Ubuntu One notified users it would stop syncing Tomboy a couple of years ago, and then came the finality of Ubuntu One shutting down earlier this year. I rushed to find alternatives like Evernote, Gnotes, etc., but none of them were as simple and easily integrated.

The Tomboy description is as follows... "Tomboy is a simple & easy to use desktop note-taking application for Linux, Unix, Windows, and Mac OS X. Tomboy can help you organize the ideas and information you deal with every day."

Some of Tomboy's notable features are highlighting text, inline spell checking, auto-linking web & email addresses, undo/redo, font styling & sizing and bulleted lists.

I am creating new notes as well as manually importing a few of my old notes from a couple years ago. Tomboy used to sync easily with Ubuntu One. Since that is no longer an option, you can do it with your Dropbox folder or your Google Drive folder (I'm using Insync).

Tomboy hasn't been updated in a while, but it installs and works great on Ubuntu 14.04 using:

sudo apt-get install tomboy

When you start Tomboy there will be a little square note-and-pen icon in your top bar. Clicking the icon will show you the Tomboy menu options. To sync your notes across your computers, go to the Tomboy preferences, click the Synchronization tab, and pick a local folder in your Dropbox or Google Drive. That's pretty much it! Start writing those notes! On the other computers where you want to sync your notes, select the same sync folder you chose on your first computer.

A few quick points. When you sync your notes, it will create a folder titled "0" in whatever folder you have chosen to sync your notes in.

If you want to launch Tomboy with your system startup (in Ubuntu 14.04) in Unity search for "Startup Applications" and run it. Add a new app titled "Tomboy" with the command "tomboy", save and close. Next time you log on, your Tomboy notes will be ready to use.
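
If you prefer the terminal, the same thing can be done by dropping a desktop entry into the per-user XDG autostart directory; this is a sketch assuming the default location:

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/tomboy.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Tomboy
Exec=tomboy
EOF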

Tomboy also works with Windows and Mac OS X and installation instructions can be found here:

Windows ... https://wiki.gnome.org/Apps/Tomboy/Installing/Windows
Mac ... https://wiki.gnome.org/Apps/Tomboy/Installing/Mac

- - - - -

If you are still looking for syncing options, this comes in from Christian....

You can self-host your note sync server with either Rainy or Grauphel...

Learn more here...

http://dynalon.github.io/Rainy/

http://apps.owncloud.com/content/show.php?action=content&content=166654

by iheartubuntu (noreply@blogger.com) at April 08, 2015 08:00 PM

Ubuntu - 10 Years Strong


Ubuntu, the Debian-based Linux operating system, is approaching its 21st release in just a couple of weeks (October 23rd), moving forward 10 years strong now!

Mark Shuttleworth invests in Ubuntu's parent company Canonical, which continues to lose money year after year. It's clear that profit isn't his main concern. There is still a clear plan for Ubuntu and Canonical. That plan appears to be very much 'cloud' and business based.

Shuttleworth is proud that the vast majority of cloud deployments are based on Ubuntu. The recent launch of Canonical's 'Cloud in a box' deployable Ubuntu system is another indication of where it sees things going.

Ubuntu Touch will soon appear on phones and tablets, which is really the glue for this cloud/mobile/desktop ecosystem. Ubuntu has evolved impressively over the last ten years and it will continue to develop in this new age.

Ubuntu provides a seamless ecosystem for devices deployed to businesses and users alike. Being able to run the identical software on multiple devices and in the cloud, all sharing the same data is very appealing.

Ubuntu will be at the heart of this with or without the survival of Canonical.

"I love technology, and I love economics and I love what’s going on in society. For me, Ubuntu brings those three things together in a unique way." - Mark Shuttleworth on the next 5 years of Ubuntu

by iheartubuntu (noreply@blogger.com) at April 08, 2015 07:59 PM

BirdFont Font Editor


If you have ever been interested in making your own fonts for fun or profit, BirdFont is an easy to use free font editor that lets you create vector graphics and export TTF, EOT & SVG fonts.

To install Birdfont, simply use the PPA below to ensure you always have the most updated version. Open the terminal and run the following commands:

sudo add-apt-repository ppa:ubuntuhandbook1/birdfont

sudo apt-get update

sudo apt-get install birdfont

If you don't like using a PPA repository, you can download the appropriate DEB package for your particular system....

http://ppa.launchpad.net/ubuntuhandbook1/birdfont/ubuntu/pool/main/b/birdfont/

If you need help developing a font, there is also an official tutorial here!

http://birdfont.org/doku/doku.php/tutorials

by iheartubuntu (noreply@blogger.com) at April 08, 2015 07:59 PM

Check Ubuntu Linux for Shellshock


Shellshock is a new vulnerability that allows attackers to run code on your machine, which could put your Ubuntu Linux system at serious risk of malicious attacks.

Shellshock exploits a flaw in how Bash handles specially crafted environment variables, allowing attackers to run commands on your computer. From there, hackers can launch programs, enable features, and even access your files. The vulnerability only affects UNIX-based systems (Linux and Mac).

You can test your system by running this test command from Terminal:

env x='() { :;}; echo vulnerable' bash -c 'echo hello'

If you're not vulnerable, you'll get this result:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
hello

If you are vulnerable, you'll get:

vulnerable
hello

You can also check the version of bash you're running by entering:

bash --version

If you get an old, unpatched version such as 3.2.51(1)-release as a result, you will need to update. Most Linux distributions already have patches available.

-----------

If your system is vulnerable, make sure your computer has all critical updates and it should be patched already. If you are using a version of Ubuntu that has already reached end-of-life status (12.10, 13.04, 13.10, etc.), you may be screwed and may need to start using a newer version of Ubuntu.

This should update Bash for you so your system is not vulnerable...

sudo apt-get update && sudo apt-get install --only-upgrade bash

by iheartubuntu (noreply@blogger.com) at April 08, 2015 07:59 PM

Ubuntu Kylin Wallpapers


Looking for some new wallpapers these days? The Chinese version of Ubuntu, Ubuntu Kylin, has some beautiful new wallpapers for the 14.10 release. Download and install the DEB to put them on your computer (a total of 24 wallpapers)...

http://security.ubuntu.com/ubuntu/pool/universe/u/ubuntukylin-wallpapers/ubuntukylin-wallpapers-utopic_14.10.0_all.deb
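
For example, from a terminal (using the URL above; dpkg should place the images in the system wallpapers directory):

wget http://security.ubuntu.com/ubuntu/pool/universe/u/ubuntukylin-wallpapers/ubuntukylin-wallpapers-utopic_14.10.0_all.deb
sudo dpkg -i ubuntukylin-wallpapers-utopic_14.10.0_all.deb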





by iheartubuntu (noreply@blogger.com) at April 08, 2015 07:58 PM

Welcome to the New I Heart Ubuntu


And here we go! The NEW I Heart Ubuntu website takes a more modern, magazine-style look. Some of the new features include: an easy categories section at the top, an easy search feature at the top right, the 6 top featured articles right on the main part of your screen, and our popular posts, easy to find on the right side too. We now have a "Video of the Week" on the right-hand side and our easy-to-see social icons at the bottom of the page. We have our RSS link easily available at the top and bottom of our site now. Commenting on articles is available via Disqus and Facebook. Our website also (finally) looks great and is easy to use on mobile devices.

We will also begin covering other Linux distros, but will focus primarily on Ubuntu, Linux Mint and Elementary OS.

We have a bit more work to do yet such as tagging & labeling 460+ articles correctly, building up our Facebook page (you can "like us" on the right side), as well as focusing on the key points you have all recommended based on our survey.

Thanks again for everyone's input! Our three websites, I Heart Ubuntu, Daily Ubuntu and Crypto Reporter, combined have well over 4 million views. Stay tuned & let's make I Heart Ubuntu a destination once again!

by iheartubuntu (noreply@blogger.com) at April 08, 2015 07:55 PM

TAILS The Privacy Distro


TAILS, the anonymizing distribution, released version 1.1 about two weeks ago, which means you can download it now. The Tails 1.1 release is largely a catalog of security and bug fixes, limiting itself otherwise to minor improvements, such as to the ISO upgrade and installer, and the Windows 8 camouflage. This is one to grab to keep your online privacy intact.

https://tails.boum.org/

by iheartubuntu (noreply@blogger.com) at April 08, 2015 06:10 PM

April 07, 2015

Elizabeth Krumbach

Puppet Camp San Francisco 2015

On Tuesday, March 24th I woke up early and walked down the street to a regional Puppet Camp, this edition held not only in my home city of San Francisco, but just a few blocks from home. The schedule for the event can be found up on the Eventbrite page.

The event kicked off with a keynote by Ryan Coleman of Puppet Labs, who gave an overview of how configuration management tools like Puppet have shaped our current systems administration landscape. Our work will only continue to grow in scale as we move forward, and based on results of the 2014 DevOps Report more companies will continue to move their infrastructures to the cloud, where automation is key to a well-functioning system. He went on to talk about the work that has been going into Puppet 4 RC and some tips for attendees on how they can learn more about Puppet beyond the event, including free resources like Learn Puppet (which also links to paid training resources) and the Puppet Labs Documentation site, for which they have a dedicated documentation team working to make great docs.

Next up was a great talk by Jason O’Rourke of Salesforce, who talked about his infrastructure of tens of thousands of servers and how automation using Puppet has allowed his team to spend less time on boring, repetitive tasks and more on interesting things. His talk then focused in on “Puppet Adoption in a Mature Environment,” where he quickly reviewed different types of deployments, from fresh new ones where it’s somewhat easy to deploy a new management framework, to old ones where you may have a lot of technical debt, compliance and regulatory considerations, and an inability to take risks in a production environment. He walked through the strategies they used to make changes in the most mature environments, including the creation of a DevOps team responsible for focusing on the “infrastructure is code” mindset, the use of tools like Vagrant so identical test environments can be deployed by developers without input from IT, and the development of best practices for managing the system (including code review, testing, and more). One of the interesting things they also did was give production access to their DevOps team so they could run limited read/test-only commands against Puppet. This new system was then slowly rolled out, typically when hardware or datacenters were rolled out, or when audits or upgrades were being conducted. They also rolled out specific “roles” in their infrastructure separately, from the less risky internal-only services to partner- and customer-facing ones. The rest of the talk was mostly about how they actually deploy into production on a set schedule and do a massive amount of testing for everything they roll out; nice to see!


Jason O’Rourke of Salesforce

Tray Torrance of NCC Group rounded out the morning talks with one on MCollective (Marionette Collective). He began by covering some history of the orchestration space that MCollective seeks to cover, and how many of the competing solutions are ssh-based, including Ansible, which we’ve been using in the OpenStack infrastructure. It was certainly interesting to learn how it integrates with Puppet and is extendable with Ruby code.

After lunch I presented a talk on “Puppet in the Open Source OpenStack Infrastructure” where I walked through how and why we have an open source infrastructure, and steps for how other organizations and projects can adopt similar methods for managing their infrastructure code. This is similar to some other “open sourcing all our Puppet” talks I have given, but with this audience I definitely honed in on the DevOps-y value of making the code for infrastructure more broadly accessible, even if it’s just within an organization. Slides here.

The next couple of talks were by Nathan Valentine and David Lutterkort of Puppet Labs. Nathan did several live demos of Puppet Enterprise, mostly working through the dashboard to demonstrate how services can be associated with servers and each other for easy deployment. David’s presentation went into a bit of systems administration history, in the world before ever-present configuration management and virtualization, to discuss how containerization software like Docker has really changed the landscape for testing and deployments. He walked through usage of the Puppet module for Docker written by Gareth Rushgrove and his corresponding proof of concept for a service deployment in Docker for ASP.NET, available here.

The final talk of the day was by Aaron Stone (nickname “sodabrew”) of BrightRoll on “Dashboard 2 and External Node Classification,” where he walked through the improvements to the Puppet Dashboard with the release of version 2. I myself had been exposed to Puppet Dashboard when I first joined the OpenStack Infrastructure team a couple of years ago, when we were using it to share read-only data with our community so we’d have insight into when Puppet changes merged and whether they were successful. Unfortunately, a period of poor support for the dashboard caused us to go through several ideas for an alternative dashboard (documented in this bug) until we finally settled on using a simple front end for PuppetDB, PuppetBoard. We’re really happy with its capabilities for our team, since read-only access is what we were looking for, but it was great to hear from Aaron about the work he’s resumed on the Dashboard, should I have a need in the future. Some of the improvements he covered included maintenance fixes, such as broader support for newer versions of Ruby and updates to the libraries (gems) it uses, an improved REST API, and some UI tweaks. He said that upgrading should be easy, but in an effort to focus on development he wouldn’t be packaging it for all the distros, though the files (i.e. debian/ for .deb packages) to make this a task for someone else are available if someone is able to do the work.

In all, this was a great little event, and at the low ticket price of $50 it was quite a cost-effective way to learn about a few new technologies in the Puppet ecosystem and meet fellow local systems administrators and engineers.

A few more photos from the event are here: https://www.flickr.com/photos/pleia2/sets/72157649225111213

by pleia2 at April 07, 2015 01:47 AM

Eric Hammond

S3 Bucket Notification to SQS/SNS on Object Creation

A fantastic new and oft-requested AWS feature was released during AWS re:Invent, but has gotten lost in all the hype about AWS Lambda functions being triggered when objects are added to S3 buckets. AWS Lambda is currently in limited Preview mode and you have to request access, but this related feature is already available and ready to use.

I’m talking about automatic S3 bucket notifications to SNS topics and SQS queues when new S3 objects are added.

Unlike AWS Lambda, with S3 bucket notifications you do need to maintain the infrastructure to run your code, but you’re already running EC2 instances for application servers and job processing, so this will fit right in.

To detect and respond to S3 object creation in the past, you needed to either have every process that uploaded to S3 subsequently trigger your back end code in some way, or you needed to poll the S3 bucket to see if new objects had been added. The former adds code complexity and tight coupling dependencies. The latter can be costly in performance and latency, especially as the number of objects in the bucket grows.

With the new S3 bucket notification configuration options, the addition of an object to a bucket can send a message to an SNS topic or to an SQS queue, triggering your code quickly and effortlessly.

Here’s a working example of how to set up and use S3 bucket notification configurations to send messages to SNS on object creation and update.

Setup

Replace parameter values with your preferred names.

region=us-east-1
s3_bucket_name=BUCKETNAMEHERE
email_address=YOURADDRESS@EXAMPLE.COM
sns_topic_name=s3-object-created-$(echo $s3_bucket_name | tr '.' '-')
sqs_queue_name=$sns_topic_name

Create the test bucket.

aws s3 mb \
  --region "$region" \
  s3://$s3_bucket_name

Create an SNS topic.

sns_topic_arn=$(aws sns create-topic \
  --region "$region" \
  --name "$sns_topic_name" \
  --output text \
  --query 'TopicArn')
echo sns_topic_arn=$sns_topic_arn

Allow S3 to publish to the SNS topic for activity in the specific S3 bucket.

aws sns set-topic-attributes \
  --topic-arn "$sns_topic_arn" \
  --attribute-name Policy \
  --attribute-value '{
      "Version": "2008-10-17",
      "Id": "s3-publish-to-sns",
      "Statement": [{
              "Effect": "Allow",
              "Principal": { "AWS" : "*" },
              "Action": [ "SNS:Publish" ],
              "Resource": "'$sns_topic_arn'",
              "Condition": {
                  "ArnLike": {
                      "aws:SourceArn": "arn:aws:s3:*:*:'$s3_bucket_name'"
                  }
              }
      }]
  }'

Add a notification to the S3 bucket so that it sends messages to the SNS topic when objects are created (or updated).

aws s3api put-bucket-notification \
  --region "$region" \
  --bucket "$s3_bucket_name" \
  --notification-configuration '{
    "TopicConfiguration": {
      "Events": [ "s3:ObjectCreated:*" ],
      "Topic": "'$sns_topic_arn'"
    }
  }'

Test

You now have an S3 bucket that is going to post a message to an SNS topic when objects are added. Let’s give it a try by connecting an email address listener to the SNS topic.

Subscribe an email address to the SNS topic.

aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol email \
  --notification-endpoint "$email_address"

IMPORTANT! Go to your email inbox now and click the link to confirm that you want to subscribe that email address to the SNS topic.

Upload one or more files to the S3 bucket to trigger the SNS topic messages.

aws s3 cp [SOMEFILE] s3://$s3_bucket_name/testfile-01

Check your email for the notification emails in JSON format, containing attributes like:

{ "Records":[  
    { "eventTime":"2014-11-27T00:57:44.387Z",
      "eventName":"ObjectCreated:Put", ...
      "s3":{
        "bucket":{ "name":"BUCKETNAMEHERE", ... },
        "object":{ "key":"testfile-01", "size":5195, ... }
}}]}

Notification to SQS

The above example connects an SNS topic to the S3 bucket notification configuration. Amazon also supports having the bucket notifications go directly to an SQS queue, but I do not recommend it.

Instead, send the S3 bucket notification to SNS and have SNS forward it to SQS. This way, you can easily add other listeners to the SNS topic as desired. You can even have multiple SQS queues subscribed, which is not possible when using a direct notification configuration.

Here are some sample commands that create an SQS queue and connect it to the SNS topic.

Create the SQS queue and get the ARN (Amazon Resource Name). Some APIs need the SQS URL and some need the SQS ARN. I don’t know why.

sqs_queue_url=$(aws sqs create-queue \
  --queue-name $sqs_queue_name \
  --attributes 'ReceiveMessageWaitTimeSeconds=20,VisibilityTimeout=300'  \
  --output text \
  --query 'QueueUrl')
echo sqs_queue_url=$sqs_queue_url

sqs_queue_arn=$(aws sqs get-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attribute-names QueueArn \
  --output text \
  --query 'Attributes.QueueArn')
echo sqs_queue_arn=$sqs_queue_arn

Give the SNS topic permission to post to the SQS queue.

sqs_policy='{
    "Version":"2012-10-17",
    "Statement":[
      {
        "Effect":"Allow",
        "Principal": { "AWS": "*" },
        "Action":"sqs:SendMessage",
        "Resource":"'$sqs_queue_arn'",
        "Condition":{
          "ArnEquals":{
            "aws:SourceArn":"'$sns_topic_arn'"
          }
        }
      }
    ]
  }'
sqs_policy_escaped=$(echo $sqs_policy | perl -pe 's/"/\\"/g')
sqs_attributes='{"Policy":"'$sqs_policy_escaped'"}'
aws sqs set-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attributes "$sqs_attributes"

Subscribe the SQS queue to the SNS topic.

aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol sqs \
  --notification-endpoint "$sqs_queue_arn"

You can upload another test file to the S3 bucket, which will now generate both the email and a message to the SQS queue.

aws s3 cp [SOMEFILE] s3://$s3_bucket_name/testfile-02

Read the S3 bucket notification message from the SQS queue:

aws sqs receive-message \
  --queue-url $sqs_queue_url

The output of that command is not quite human readable as it has quoted JSON inside quoted JSON inside JSON, but your queue processing software should be able to decode it and take appropriate actions.

You can tell the SQS queue that you have “processed” the message by grabbing the “ReceiptHandle” value from the above output and deleting the message.

sqs_receipt_handle=...
aws sqs delete-message \
  --queue-url "$sqs_queue_url" \
  --receipt-handle "$sqs_receipt_handle"

You only have a limited amount of time to process the message and delete it before SQS tosses it back in the queue for somebody else to process. This test queue gives you 5 minutes (VisibilityTimeout=300). If you go past this timeout, simply read the message from the queue and try again.

Cleanup

Delete the SQS queue:

aws sqs delete-queue \
  --queue-url "$sqs_queue_url"

Delete the SNS topic (and all subscriptions).

aws sns delete-topic \
  --region "$region" \
  --topic-arn "$sns_topic_arn"

Delete test objects in the bucket:

aws s3 rm s3://$s3_bucket_name/testfile-01
aws s3 rm s3://$s3_bucket_name/testfile-02

Remove the S3 bucket notification configuration:

aws s3api put-bucket-notification \
  --region "$region" \
  --bucket "$s3_bucket \
  --notification-configuration '{}'

Delete the bucket, but only if it was created for this test!

aws s3 rb s3://$s3_bucket_name

History / Future

If the concept of an S3 bucket notification sounds a bit familiar, it’s because AWS S3 has had it for years, but the only supported event type was “s3:ReducedRedundancyLostObject”, triggered when S3 lost an RRS object. Given the way that this feature was designed, we all assumed that Amazon would eventually add more useful events like “S3 object created”, which indeed they released a couple weeks ago.

I would continue to assume/hope that Amazon will eventually support an “S3 object deleted” event because it just makes too much sense for applications that need to keep track of the keys in a bucket.

[Update 2015-04-06: Add code to remove S3 bucket notification, which Amazon just added to aws-cli in release 18]

Original article: http://alestic.com/2014/12/s3-bucket-notification-to-sqssns-on-object-creation

by Eric Hammond at April 07, 2015 12:07 AM

April 06, 2015

Akkana Peck

Quickly seeing bird sightings maps on eBird

The local bird community has gotten me using eBird. It's sort of social networking for birders -- you can report sightings, keep track of what birds you've seen where, and see what other people are seeing in your area.

The only problem is the user interface for that last part. The data is all there, but asking a question like "Where in this county have people seen broad-tailed hummingbirds so far this spring?" is a lengthy process, involving clicking through many screens and typing the county name (not even a zip code -- you have to type the name). If you want some region smaller than the county, good luck.

I found myself wanting that so often that I wrote an entry page for it.

My Bird Maps page is meant to be used as a smart bookmark (also known as a bookmarklet or keyword bookmark), so you can type birdmap hummingbird or birdmap golden eagle in your location bar as a quick way of searching for a species. It reads the bird you've typed in and looks through a list of species; if there's only one bird that matches, it takes you straight to the eBird map to show you where people have reported the bird so far this year.

If there's more than one match -- for instance, for birdmap hummingbird or birdmap sparrow -- it will show you a list of possible matches, and you can click on one to go to the map.

Like every Javascript project, it was both fun and annoying to write. Though the hardest part wasn't programming; it was getting a list of the nonstandard 4-letter bird codes eBird uses. I had to scrape one of their HTML pages for that. But it was worth it: I'm finding the page quite useful.

How to make a smart bookmark

I think all the major browsers offer smart bookmarks now, but I can only give details for Firefox. Here's a page about using them in Chrome, though.

Firefox has made it increasingly difficult with every release to make smart bookmarks. There are a few extensions, such as "Add Bookmark Here", which make it a little easier. But without any extensions installed, here's how you do it in Firefox 36:

[Firefox bookmarks dialog] First, go to the birdmap page (or whatever page you want to smart-bookmark) and click on the * button that makes a bookmark. Then click on the = next to the *, and in the menu, choose Show all bookmarks. In the dialog that comes up, find the bookmark you just made (maybe in Unsorted bookmarks?) and click on it.

Click the More button at the bottom of the dialog.
(Click on the image at right for a full-sized screenshot.)
[Firefox bookmarks dialog showing keyword]

Now you should see a Keyword entry under the Tags entry in the lower right of that dialog.

Change the Location to http://shallowsky.com/birdmap.html?bird=%s.

Then give it a Keyword of birdmap (or anything else you want to call it).

Close the dialog.

Now, you should be able to go to your location bar and type:
birdmap common raven or birdmap sparrow and it will take you to my birdmap page. If the bird name specifies just one bird, like common raven, you'll go straight from there to the eBird map. If there are lots of possible matches, as with sparrow, you'll stay on the birdmap page so you can choose which sparrow you want.

How to change the default location

If you're not in Los Alamos, you probably want a way to set your own coordinates. Fortunately, you can; but first you have to get those coordinates.

Here's the fastest way I've found to get coordinates for a region on eBird:

  • Click "Explore a Region"
  • Type in your region and hit Enter
  • Click on the map in the upper right

Then look at the URL: a part of it should look something like this: env.minX=-122.202087&env.minY=36.89291&env.maxX=-121.208778&env.maxY=37.484802 If the map isn't right where you want it, try editing the URL, hitting Enter for each change, and watch the map reload until it points where you want it to. Then copy the four parameters and add them to your smart bookmark, like this: http://shallowsky.com/birdmap.html?bird=%s&minX=-122.202087&minY=36.89291&maxX=-121.208778&maxY=37.484802

Note that all of the "env." prefixes have been removed.

The only catch is that I got my list of 4-letter eBird codes from an eBird page for New Mexico. I haven't found any way of getting the list for the entire US. So if you want a bird that doesn't occur in New Mexico, my page might not find it. If you like birdmap but want to use it in a different state, contact me and tell me which state you need, and I'll add those birds.

April 06, 2015 08:30 PM

April 02, 2015

Akkana Peck

One-antlered stags

[mule deer stag with one antler] This fellow stopped by one evening a few weeks ago. He'd lost one of his antlers (I'd love to find it in the yard, but no luck so far). He wasn't hungry; just wandering, maybe looking for a place to bed down. He didn't seem to mind posing for the camera.

Eventually he wandered down the hill a bit, and a friend joined him. I guess losing one antler at a time isn't all that uncommon for mule deer, though it was the first time I'd seen it. I wonder if their heads feel unbalanced.
[two mule deer stags with one antler each]

Meanwhile, spring has really sprung -- I put a hummingbird feeder out yesterday, and today we got our first customer, a male broad-tailed hummer who seemed quite happy with the fare here. I hope he stays around!

April 02, 2015 01:25 AM

March 28, 2015

Elizabeth Krumbach

Simcoe’s March 2015 Checkup

Our little Siamese, Simcoe, has Chronic Renal Failure (CRF). She has been doing well for over 3 years now with subcutaneous fluid injections every other day to keep her hydrated and quarterly check-ins with the vet to make sure her key blood levels and weight are staying within safe parameters.

On March 14th she went in for her latest visit and round of blood work. As usual, she wasn’t thrilled about the visit and worked hard to stay in her carrier the whole time.

She came out long enough for the exam, and the doctor was happy with her physical, though her weight had dropped a little again, going from 9.74lbs to 9.54lbs.

Both her BUN and CRE levels remained steady.

Unfortunately her Calcium levels continue to come back a bit high, so the vet wants her in for an ionized Calcium test. She has explained that it’s only the ionized Calcium that is a concern because it can build up in the kidneys and lead to more rapid deterioration, so we’d want to get her on something to reduce the risk if that was the case. We’ll probably be making an appointment once I return from my travels in mid-April to get this test done.

In the meantime, she gets to stay at home and enjoy a good book.

…my good book.

by pleia2 at March 28, 2015 02:07 AM

The spaces between

It’s been over 2 months since I’ve done a “miscellaneous life stuff” blog post. Anyone reading this blog recently might think I only write about travel and events! Since that last post I have had other things pop up here and there, but I am definitely doing too many events. That should calm down a bit in the 2nd quarter of the year and almost disappear in the third, with the notable exception of a trip to Peru, part work and part pleasure.

Unfortunately it looks like the stress I mentioned in that last post flipped the switch on my already increasingly frequent migraines. I’ve seen my neurologist twice this year and we’ve worked through several medications, finally finding one that seems to work. And at least a visit to my neurologist affords me some nice views.

So I have been working on stress reduction, part of which is making sure I keep running. It doesn’t reduce stress immediately, but a routine of exercise does help even me out in the long term. To help clear my head, I’ve also been refining my todo lists to make them more comprehensive. I’m also continuing to let projects go when I find they’re causing my stress levels to spike for little gain. This is probably the hardest thing to do; I care about everything I work on and I know some things will just drop on the ground if I don’t do them, but I really need to be more realistic about what I can actually get done and focus my energy accordingly.

And to clear the way in this post for happier things, I did struggle with the loss of Eric in January. My Ubuntu work here in San Francisco simply won’t be the same without him, and every time I start thinking about planning an event I am reminded that he won’t be around to help or attend. Shortly after learning of his passing, several of us met up at BerkeleyLUG to share memories. Then on March 18th a more organized event was put together to gather friends from his various spheres of influence to celebrate his life at one of his favorite local pizzerias. It was a great event, I met some really good people and saw several old friends. It also brought some closure for me that I’d been lacking in dealing with this on my own.

On to happier things! I actually spent 30 days in a row off a plane in March. Home time means I got to do lots of enjoyable home things, like actually spending time with my husband over some fantastic meals, as well as finally finishing watching Breaking Bad together. I also think I’ve managed to somewhat de-traumatize my cats, who haven’t been thrilled about all my travel. We’ve been able to take some time to do some “home things” – like get some painting estimates so we can get some repairs done around the condo. I also spent a day down in Mountain View so I could meet up with a local colleague who I hadn’t yet met to kick off a new project, and then have dinner with a friend who was in the area visiting. Plus, I got to see cool things like a rare storm colliding with a sunset one evening:

I’ve been writing some, in January my article 10 entry points to tech (for girls, women, and everyone) went live on opensource.com. In early March I was invited to publish an article on Tech Talk Live Blog on Five Ways to Get Involved with Ubuntu as an Educator based on experience working with teachers over the past several years. I’ve also continued work toward a new book in progress, which has been time-consuming but I’m hoping will be ready for more public discussion in the coming months. Mark G. Sobell’s A Practical Guide to Ubuntu Linux, 4th Edition also came out earlier this year, and while I didn’t write that, I did spend a nice chunk of time last summer doing review for it. I came away with a quote on the cover endorsing the great work Mark did with the book!

Work-wise, aside from travel and conferences I’ve talked about in previous posts, I was recently promoted to root and core for OpenStack Infrastructure. This has meant a tremendous amount to me, both the trust the team has placed in me and the increased ability for me to contribute to the infrastructure I’ve spent so much time with over these past couple of years. It also means I’ve been learning a lot and sorting through the tribal knowledge that should be formally documented. I was also able to participate as a Track Chair for selecting talks for the Related OSS Projects track at the OpenStack Summit in Vancouver in May; I did this for Atlanta last year but ended up not being able to attend due to being too sick (stupid gallbladder). And while on the topic of Vancouver, a panel proposed by the Women of OpenStack that I’m participating in has been accepted, Standing Tall in the Room, where we hope to give other women in our community some tips for success. My next work trip is coming up before Vancouver: I’m heading off to South Carolina for POSSCON, where I’ll be presenting on Tools for Open Source Systems Administration, a tour of tools we use in order to make collaborating online with a distributed team of systems administrators from various companies possible (and even fun!).

In the tech goodies department, I recently purchased a Nexus 6. I was compelled to after I dropped my Galaxy S3 while sitting up on the roof deck. I was pretty disappointed by the demise of my S3; it was a solid phone, and the stress of replacement wasn’t something I was thrilled to deal with immediately upon my return from Oman. I did a bunch of research before I settled on the Nexus 6 and spent my hard-earned cash on retail price for a phone for the first time in my life. It’s now been almost a month and I’m still not quite used to how BIG the Nexus 6 is, but it is quite a pleasure to use. I still haven’t quite worked out how to carry it on my runs; it’s too big for my pockets and the arm band solution isn’t working (too bulky, and other reasons), so I might switch to a small backpack that can carry water too. It’s a great phone though, so much faster than my old one, which honestly did deserve to be replaced, even if not in the way I face-planted it on the concrete, sorry S3.


Size difference: Old S3 in new Nexus 6 case

I also found my old Chumby while searching through the scary cave that is our storage unit for the paint that was used for previous condo painting. They’ve resurrected the service for a small monthly fee, now I just need to find a place to plug it in near my desk…

I actually made it out of the house to be social a little too. My cousin Steven McCorry is the lead singer in a band called Exotype, which signed a record deal last year and has since been on several tours. This one brought him to San Francisco, so I finally made my way out to the famous DNA Lounge to see the show. It was a lot of fun, but as much as I can appreciate metal, I’m pleased with their recent trend toward rock, which I prefer. It was also great to visit with my cousin and his band mates.

This week it was MJ’s turn to be out of the country for work. While I had Puppet Camp to keep me busy on Tuesday, I did a poor job of scheduling social engagements and it’s been a pretty lonely time. It gave me space to do some organization and get work done, but I wasn’t as productive as I really wanted to be and I may have binge watched the latest slew of Mad Men episodes that landed on Netflix one night. Was nice to have snuggle time with the kitties though.

MJ comes home Sunday afternoon, at which time we have to swap out the contents of his suitcase and head back to the airport to catch a red eye flight to Philadelphia. We’re spending next week moving a storage unit, organizing our new storage situation and making as many social calls as possible. I’m really looking forward to visiting PLUG on Wednesday to meet up with a bunch of my old Philadelphia Linux friends. And while I’m not actively looking forward to the move, it’s something we’ve needed to do for some time now, so it’ll be nice for that to be behind us.

by pleia2 at March 28, 2015 01:58 AM

March 27, 2015

Akkana Peck

Hide Google's begging (or any other web content) via a Firefox userContent trick

Lately, Google is wasting space at the top of every search with a begging plea to be my default search engine.

[Google begging: Switch your default search engine to Google] Google already is my default search engine -- that's how I got to that page. But if you don't have persistent Google cookies set, you have to see this begging every time you do a search. (Why they think pestering users is the way to get people to switch to them is beyond me.)

Fortunately, in Firefox you can hide the begging with a userContent trick. Find the chrome directory inside your Firefox profile, and edit userContent.css in that directory. (Create a new file with that name if you don't already have one.) Then add this:

#taw { display: none !important; }

Restart Firefox, do a Google search and the begs should be gone.

In case you have any similar pages where there's pointless content getting in the way and you want to hide it: what I did was to right-click inside the begging box and choose Inspect element. That brings up Firefox's DOM inspector. Mouse over various lines in the inspector and watch what gets highlighted in the browser window. Find the element that highlights everything you want to remove -- in this case, it's a div with id="taw". Then you can write CSS to address that: hide it, change its style or whatever you're trying to do.

You can even use Inspect element to remove elements immediately. That won't help you prevent them from showing up later, but it can be wonderful if you need to use a page that has an annoying blinking ad on it, or a mis-designed page that has images covering the content you're trying to read.

March 27, 2015 02:17 PM

March 20, 2015

kdub

A few years of Mir TDD


We started the Mir project a few years ago guided by the principles in the book Growing Object-Oriented Software, Guided by Tests. I recommend a read, especially if you’ve never been exposed to “test-driven development”.

Compared to other projects that I’ve worked on, I find that as a greenfield TDD project Mir has really benefitted from the TDD process in terms of ease of development and reliability. Just a few quick thoughts:

  • I’ve found the mir code to be ready to ship as soon as code lands. There’s very little going back and figuring out how the new feature has caused regressions in other parts of the code.
  • There’s much less debugging in the initial rounds of development, as you’ve already planned and written out tests for what you want the code to do.
  • It takes a bit more faith when you’re starting a new line of work that you’ll be able to get the code completed. Test-driven development forces more exploratory spikes (which tend to have exploratory interfaces), and then to revisit and methodically introduce refactorings and new interfaces that are clearer than the ropey interfaces seen in the ‘spike’ branches. That is, the interfaces that land tend to be the second-attempt interfaces that have been selected from a fuller understanding of the problem, and tend to be more coherent.
  • You end up with more modular, object-oriented code, because generally you’re writing a minimum of two implementations of any interface you’re working on (the production code, and the mock/stub)
  • The reviews tend to be less about whether things work, and more about the sensibility of the interfaces.

by Kevin at March 20, 2015 11:31 PM

test post, ignore

test post, ignore the man behind the curtain

by Kevin at March 20, 2015 01:25 PM

March 19, 2015

Akkana Peck

Hints on migrating Google Code to GitHub

Google Code is shutting down. They've sent out notices to all project owners suggesting they migrate projects to other hosting services.

I moved all my personal projects to GitHub years ago, back when Google Code still didn't support git. But I'm co-owner on another project that was still hosted there, and I volunteered to migrate it. I remembered that being very easy back when I moved my personal projects: GitHub had a one-click option to import from Google Code. I assumed (I'm sure you know what that stands for) that it would be just as easy now.

Nope. Turns out GitHub no longer has any way to import from Google Code: it tells you it can't find a repository there when you give it the address to Google's SVN repository.

Google's announcement said they were providing an exporter to GitHub. So I tried that next. I had the new repository ready on GitHub -- under the owner's account, not mine -- and I expected Google's exporter to ask me for the repository.

Not so. As soon as I gave it my OAuth credentials, it immediately created a new repository on GitHub under my name, using the name we had used on Google Code (not the right name, since Google Code project names have to be globally unique while GitHub projects don't).

So I had to wait for the export to finish; then, on GitHub, I went to our real repository, and did an import there from the new repository Google had created under my name. I have no idea how long that took: GitHub's importer said it would email me when the import was finished, but it didn't, so I waited several hours and decided it was probably finished. Then I deleted the intermediate repository.

That worked fine, despite being a bit circuitous, and we're up and running on GitHub now.

If you want to move your Google Code repository to GitHub without the intermediate step of making a temporary repository, or if you don't want to give Google OAuth access to your GitHub account, here are some instructions (which I haven't tested) on how to do the import via a local copy of the repo on your own machine, rather than going directly from Google to GitHub: krishnanand's steps for migrating Google code to GitHub
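
For what it's worth, for a project that was already using git on Google Code, that local-copy route boils down to a clone followed by a mirror push. A rough sketch with placeholder names (untested by me; SVN-hosted projects would need git-svn or another converter first):

# clone the existing repository from Google Code
git clone https://code.google.com/p/yourproject/ yourproject
cd yourproject

# add the new, empty GitHub repository as a remote and push everything:
# branches, tags and all
git remote add github https://github.com/yourorg/yourproject.git
git push --mirror github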

March 19, 2015 07:11 PM

March 18, 2015

Elizabeth Krumbach

Elastic{ON} 2015

I’m finally home for a month, so I’ve taken advantage of some of this time to attend and present at some local events. The first of which was Elastic{ON}, the first user conference for Elasticsearch and related projects now under the Elastic project umbrella. The conference venue was Pier 27, a cruise terminal on the bay. It was a beautiful venue with views of the bay, and clever use for a terminal while there aren’t ships coming in.

The conference kicked off with a keynote where they welcomed attendees (of which there were over 1300 from 35 countries!) and dove into project history from the first release in 2010. A tour of old logos and websites built up to the big announcement, the “Elastic” rebranding, as the scope of their work now goes beyond search in the former Elasticsearch name. The opening keynotes continued with several leads from projects within the Elastic family, including updates from Logstash and Kibana.

At lunch I ended up sitting with 3 other women who were attending the conference on behalf of their companies (when gender ratios are skewed, this type of congregation tends to happen naturally). We all got to share details about how we were using Elasticsearch, so that was a lot of fun. One woman was doing data analysis against it for her networking-related work, another was using it to store metadata for videos and the third was actually speaking that afternoon on how they’re using it to supplement the traditional earthquake data with social media data about earthquakes at the USGS.

Track sessions began after lunch, and I spent my afternoon camped out in the Demo Theater. The first talk was by the Elastic Director of Developer Relations, Leslie Hawthorn. She talked about the international trio of developer evangelists that she works with, focusing on their work to support and encourage meetup groups worldwide, noting that 75 cities now have meetups with a total of over 17,000 individual members. She shared some tips from successful meetup groups, including offering a 2nd track during meetups for beginners, using an unconference format rather than a set schedule and mixing things up sometimes with hack nights on Elastic projects. It was interesting to learn how they track community metrics (code/development stats, plus IRC and mailing list activity) and she wrapped up by noting the new site at https://www.elastic.co/community where they’re working to add more how-tos and on-ramping content, an effort helped along by their recent acquisition of Found, which has maintained a lot of that kind of material.


Leslie Hawthorn on “State of the Community”

The next session was “Elasticsearch Data Journey: Life of a Document in Elasticsearch” by Alexander Reelsen & Boaz Leskes. When documents enter Elasticsearch as json output from a service like Logstash, it can seem like a bit of a black box as far as what exactly happens to them in order for them to be added to Elasticsearch. This talk went through what happens. A document is first stored in Elasticsearch; which node it’s stored on is based on several bits of criteria analyzed as it comes in, and the data is normalized and sorted. While the data is coming in, it’s stored in a buffer and also written to a transaction log until it’s actually committed to disk, and it stays in the transaction log until it can be replicated across the Elasticsearch cluster. From there, they went into discussing data retrieval, cluster scaling and, while stressing that replication is NOT backups, how to actually do backups of each node and how to restore from them. Finally, they talked about the data deletion process, how it queues data for deletion on each node in segments, and noted that this is not a reversible option.

Continuing the “Life of” theme, I also attended “Life of an Event in Logstash” by Colin Surprenant. It was perhaps my favorite talk of the day: Colin did an excellent job of explaining and defining all the terms he used in his talk. Contrary to popular belief, this isn’t just useful to folks new to the project; as a systems administrator who maintains dozens of different types of applications over hundreds of servers, I am not necessarily familiar with what Logstash in particular calls everything terminology-wise, so having it made clear during the talk was great. His talk walked us through the 3 stages that events coming into Logstash go through: Input, Filter and Output, and the sized queues between each of them. The Input stage takes whatever data you’re feeding into Logstash and uses plugins to transform it into a Logstash event. The Filter stage actually modifies the data from the event so that the data is made uniform. The Output stage translates the uniform data into whatever output you’re sending it to, whether it’s STDOUT or sending it off to Elasticsearch as json. Knowing the bits of this system is really valuable for debugging loss of documents; I look forward to having the video online to share with my colleagues. EDIT 3/20/2015: Detailed slides online here.


Colin Surprenant on “Life of an Event in Logstash”

I tended to avoid many of the talks by Elasticsearch users talking about how they use it. While I’m sure there are valuable insights to be gained by learning how others use it, we’re pretty much convinced about our use and things are going well. So use cases were fresh to me when the day 2 keynotes kicked off with a discussion with Don Duet, Co-head of Technology at Goldman Sachs. It was interesting to learn that nearly 1/3 of the employees at Goldman Sachs are in engineering or working directly with engineering in some kind of technical analysis capacity. They were also framed as a very tech-conscious company and a long-time open source advocate. In exploring some of their work with Elasticsearch he used legal documents as an example: previously they were difficult to search and find, but using Elasticsearch an engineer was empowered to work with the legal department to make the details about contracts and more searchable and easier to find.

The next keynote was a surprising one, from Microsoft! As a traditionally proprietary, closed-source company, they haven’t historically been known for their support of open source software, at least in public. This has changed in recent years as the world around them has changed and they’ve found themselves needing to not only support open source software in their stacks but also contribute to things like the Linux kernel. Speaker Pablo Castro had a good sense of humor about this all as he walked attendees through three major examples of Elasticsearch use at Microsoft. It was fascinating to learn that it’s used for content on MSN.com, which gets 18 billion hits per month. They’re using Elasticsearch on the Microsoft Dynamics CRM for social media data, and in this case they’re actually using Ubuntu as well. Finally, they’re using it for the search tool in their cloud offering, Azure. They’ve come a long way!


Pablo Castro of Microsoft

The final keynote was from NASA JPL. The room was clearly full of space fans, so this was a popular presentation. They talked about how they use Elasticsearch to hold data about user behavior from browsers on internal sites so they can improve them for employees. They also noted the terribly common practice of putting data (in this case, for the Mars rover) into Excel or Powerpoint and emailing it around as a mechanism for data sharing, and how they’ve managed to get this data into Elasticsearch instead, clearly improving the experience for everyone.

After the keynotes, it was time to do my presentation! The title of my talk was “elastic-Recheck Yourself Before You Wreck Yourself: How Elasticsearch Helps OpenStack QA” and I can’t take credit for the title; my boring title was replaced by a suggestion from the talk selection staff. The talk was fun: I walked through our use of Elasticsearch to power our elastic-recheck (status page, docs) tooling in OpenStack. It’s been valuable not only for developer feedback (“your patch failed tests because of $problem, not your code”), but by giving the QA and Infrastructure teams a much better view into what the fleet of test VMs are up to in the aggregate so we can fix problems more efficiently. Slides from my talk are here (pdf).


All set up for elastic-Recheck Yourself Before You Wreck Yourself

Following my talk, I ended up having lunch with the excellent Carol Willing. We got to geek out on all kinds of topics from Python to clouds as we enjoyed an outdoor lunch by the bay. Until it started drizzling.

The most valuable talk in the afternoon for me was “Resiliency in Elasticsearch and Lucene” with Boaz Leskes & Igor Motov. They began by talking about how with scale came the realization that more attention needed to be paid to recovering from various types of failures, and that they show up more often when you have more workers. The talk walked through various failure scenarios and how they’ve worked (and are working) on making improvements in these areas, including “pulling the plug” for a full shutdown, various hard disk failures, data corruption, and several types of cluster and HA failures (splitbrain and otherwise), out of memory resiliency and external pressures. This is another one I’m really looking forward to the video from.

The event wrapped up with a panel from GuideStar, kCura and E*Trade on how they’re using Elasticsearch and several “war stories” from their experiences working with the software itself, open source in general and Elastic the company.

In all, the conference was a great experience for me, and it was an impressive inaugural conference, though perhaps I should have expected that given the expertise and experience of the community team they have working there! They plan on doing a second one, and I recommend attendance to folks working with Elasticsearch.

More of my photos from the conference here: https://www.flickr.com/photos/pleia2/sets/72157650940379129/

by pleia2 at March 18, 2015 10:58 PM

March 14, 2015

Akkana Peck

Making a customized Firefox search plug-in

It's getting so that I dread Firefox's roughly weekly "There's a new version -- do you want to upgrade?" With every new upgrade, another new crucial feature I use every day disappears and I have to spend hours looking for a workaround.

Last week, upgrading to Firefox 36.0.1, it was keyword search: the feature where, if I type something in the location bar that isn't a URL, Firefox would instead search using the search URL specified in the "keyword.URL" preference.

In my case, I use Google but I try to turn off the autocomplete feature, which I find distracting and unhelpful when typing new search terms. (I say "try to" because complete=0 only works sporadically.) I also add the prefix allintext: to tell Google that I only want to see pages that contain my search term. (Why that isn't the default is anybody's guess.) So I set keyword.URL to: http://www.google.com/search?complete=0&q=allintext%3A+ (%3A is URL code for the colon character).

But after "up"grading to 36.0.1, search terms I typed in the location bar took me to Yahoo search. I guess Yahoo is paying Mozilla more than Google is now.

Now, Firefox has a Search tab under Edit->Preferences -- but that just gives you a list of standard search engines' default searches. It would let me use Google, but not with my preferred options.

If you follow the long discussions in bugzilla, there are a lot of people patting each other on the back about how much easier the preferences window is, with no discussion of how to specify custom searches except vague references to "search plugins". So how do these search plugins work, and how do you make one?

Fortunately a friend had a plugin installed, acquired from who knows where. It turns out that what you need is an XML file inside a directory called searchplugins in your profile directory. (If you're not sure where your profile lives, see Profiles - Where Firefox stores your bookmarks, passwords and other user data, or do a systemwide search for "prefs.js" or "search.json" or "cookies.sqlite" and it should lead you to your profile.)

Once you have one plugin installed, it's easy to edit it and modify it to do anything you want. The XML file looks roughly like this:

<SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/" xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
<os:ShortName>MySearchPlugin</os:ShortName>
<os:Description>The search engine I prefer to use</os:Description>
<os:InputEncoding>UTF-8</os:InputEncoding>
<os:Image width="16" height="16">data:image/x-icon;base64,ICON GOES HERE</os:Image>
<SearchForm>http://www.google.com/</SearchForm>
<os:Url type="text/html" method="GET" template="https://www.google.com/search">
  <os:Param name="complete" value="0"/>
  <os:Param name="q" value="allintext: {searchTerms}"/>
  <!--os:Param name="hl" value="en"/-->
</os:Url>
</SearchPlugin>

There are four things you'll want to modify. First, and most important, os:Url and os:Param control the base URL of the search engine and the list of parameters it takes. {searchTerms} in one of those Param arguments will be replaced by whatever terms you're searching for. So <os:Param name="q" value="allintext: {searchTerms}"/> gives me that allintext: parameter I wanted.

(The other parameter I'm specifying, <os:Param name="complete" value="0"/>, used to make Google stop the irritating autocomplete every time you try to modify your search terms. Unfortunately, this has somehow stopped working at exactly the same time that I upgraded Firefox. I don't see how Firefox could be causing it, but the timing is suspicious. I haven't been able to figure out another way of getting rid of the autocomplete.)

Next, you'll want to give your plugin a ShortName and Description so you'll be able to recognize it and choose it in the preferences window.

Finally, you may want to modify the icon: I'll tell you how to do that in a moment.

Using your new search plugin

[Firefox search prefs]

You've made all your modifications and saved the file to something inside the searchplugins folder in your Firefox profile. How do you make it your default?

I restarted firefox to make sure it saw the new plugin, though that may not have been necessary. Then Edit->Preferences and click on the Search icon at the top. The menu near the top under Default search engine is what you want: your new plugin should show up there.

Modifying the icon

Finally, what about that icon?

In the plugin XML file I was copying, the icon line looked like:

<os:Image width="16"
height="16">data:image/x-icon;base64,AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAAAAAAA
... many more lines like this then ... ==</os:Image>
So how do I take that and make an image I can customize in GIMP?

I tried copying everything after "base64," and pasting it into a file, then opening it in GIMP. No luck. I tried base64 decoding it (you do this with base64 -d filename >outfilename) and reading it in with GIMP. Still no luck: "Unknown file type".

The method I found is roundabout, but works:

  1. Copy everything inside the tag: data:image/x-icon;base64,AA ... ==
  2. Paste that into Firefox's location bar and hit return. You'll see the icon from the search plugin you're modifying.
  3. Right-click on the image and choose Save image as...
  4. Save it to a file with the extension .ico -- GIMP won't open it without that extension.
  5. Open it in GIMP -- a 16x16 image -- and edit to your heart's content.
  6. File->Export as...
  7. Use the type "Microsoft Windows icon (*.ico)"
  8. Base64 encode the file you just saved, like this: base64 yourfile.ico >newfile
  9. Copy the contents of newfile and paste that into your os:Image line, replacing everything after data:image/x-icon;base64, and before </os:Image>

Whew! Lots of steps, but none of them are difficult. (Though if you're not on Linux and don't have the base64 command, you'll have to find some other way of encoding and decoding base64.)
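
If you'd rather script the encoding step than copy and paste, here's a small sketch of my own (it assumes GNU coreutils' base64 with the -w flag, and a placeholder icon filename):

# encode the icon on a single line and print a ready-to-paste os:Image element
icon_data=$(base64 -w0 yourfile.ico)
echo "<os:Image width=\"16\" height=\"16\">data:image/x-icon;base64,${icon_data}</os:Image>"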

But if you don't want to go through all the steps, you can download mine, with its lame yellow smiley icon, as a starting point: Google-clean plug-in.

Happy searching! See you when Firefox 36.0.2 comes out and they break some other important feature.

March 14, 2015 06:35 PM

March 10, 2015

kdub

Mir Android-platform Multimonitor

My latest work on the mir android platform includes multimonitor support! It should work with slimport/mhl; Mir happily sits at an abstraction level above the details of mhl/slimport. This should be available in the next release (probably mir 0.13), or you can grab lp:mir now to start tinkering.

by Kevin at March 10, 2015 01:33 PM

March 08, 2015

Akkana Peck

GIMP: Turn black to another color with Screen mode

[20x20 icon, magnified 8 times] I needed to turn some small black-on-white icons to blue-on-white. Simple task, right? Except, not really. If there are intermediate colors that are not pure white or pure black -- which you can see if you magnify the image a lot, like this 800% view of a 20x20 icon -- it gets trickier.

[Bucket fill doesn't work for this] You can't use anything like Color to Alpha or Bucket Fill, because all those grey antialiased pixels will stay grey, as you see in the image at left.

And the Hue-Saturation dialog, so handy for changing the hue of a sky, a car or a dress, does nothing at all -- because changing hue has no effect when saturation is zero, as for black, grey or white. So what can you do?

I fiddled with several options, but the best way I've found is the Screen layer mode. It works like this:

[Make a new layer] In the Layers dialog, click the New Layer button and accept the defaults. You'll get a new, empty layer.

[Set the foreground color] Set the foreground color to your chosen color.

[Set the foreground color] Drag the foreground color into the image, or do Edit->Fill with FG Color.

Now it looks like your whole image is the new color. But don't panic!

[Use screen mode] Use the menu at the top of the Layers dialog to change the top layer's mode to Screen.

Layer modes specify how to combine two layers. (For a lot more information, see my book, Beginning GIMP). Multiply mode, for example, multiplies each pixel in the two layers, which makes light colors a lot more intense while not changing dark colors very much. Screen mode is sort of the opposite of Multiply mode: GIMP inverts each of the layers, multiplies them together, then inverts them again. All those white pixels in the image, when inverted, are black (a value of zero), so multiplying them doesn't change anything. They'll still be white when they're inverted back. But black pixels, in Screen mode, take on the color of the other layer -- exactly what I needed here.
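
In equation form (a quick sketch, with channel values normalized to the 0–1 range): screen(top, bottom) = 1 − (1 − top) × (1 − bottom). So a black pixel (0) in the icon comes out as the fill color, a white pixel (1) stays white, and the grey antialiased pixels land proportionally in between.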

Intensify the effect with contrast

[Mars sketch, colorized orange] One place I use this Screen mode trick is with pencil sketches. For example, I've made a lot of sketches of Mars over the years, like this sketch of Lacus Solis, the "Eye of Mars". But it's always a little frustrating: Mars is all shades of reddish orange and brown, not grey like a graphite pencil.

Adding an orange layer in Screen mode helps, but it has another problem: it washes out the image. What I need is to intensify the image underneath: increase the contrast, make the lights lighter and the darks darker.

[Colorized Mars sketch, enhanced  with brightness/contrast] Fortunately, all you need to do is bump up the contrast of the sketch layer -- and you can do that while keeping the orange Screen layer in place.

Just click on the sketch layer in the Layers dialog, then run Colors->Brightness/Contrast...

This sketch needed the brightness reduced a lot, plus a little more contrast, but every image will be different. Experiment!

March 08, 2015 01:22 AM

March 02, 2015

Elizabeth Krumbach

Tourist in Muscat, Oman

I had the honor of participating in FOSSC Oman this February, which I wrote about here. Our gracious hosts were very accommodating to all of our needs, starting with arranging assistance at the airport and lodging at a nearby Holiday Inn.

The Holiday Inn was near the airport without much else around, so it was my first experience with a familiar property in a foreign land. It was familiar enough for me to be completely comfortable, but different enough to never let me forget that I was in a new, interesting place. In keeping with standards of the country, the hotel didn’t serve alcohol or pork, which was fine by me.

During my stay we had one afternoon and evening to visit the sights with some guides from the conference. Speakers and other guests convened at the hotel and boarded a bus which first took us to the Sultan Qaboos Grand Mosque. Visiting hours for non-Muslims were in the morning, so we couldn’t go inside, but we did get to visit the outside gardens and take some pictures in front of the beautiful building.

From there we went to a downtown area of Muscat and were able to browse through some shops that seemed aimed at tourists and enjoy the harbor for a bit. Browsing the shops allowed me to identify some of the standard pieces I may want to purchase later, like the style of traditional incense burner. The harbor was quite enjoyable, a nice breeze coming in to take the edge off the hot days, which topped out around 90F while we were there (and it was their winter!).

We were next taken to Al Alam Palace, where the Sultan entertains guests. This was another outside only tour, but the walk through the plaza up to the palace and around was well worth the trip. There were also lit up mountainside structures visible from the palace which looked really stunning in the evening light.

That evening we headed up to the Shangri-La resort area on what seemed like the outskirts of Muscat. It was a whole resort complex, where we got to visit a beach before meeting up with other conference folks for a buffet dinner and musical entertainment for the evening.

I really enjoyed my time in Oman. It was safe and beautiful, and in spite of the heat, the air conditioning in all the buildings made up for the time we spent outdoors, and the mornings and evenings were nice and cool. There was some apprehension as it was my first trip to the Middle East and as a woman traveling alone, but I had no problems and everyone I worked with throughout the conference and our stay was professional, welcoming and treated me well. I’d love the opportunity to go back some day.

More photos from my trip here: https://www.flickr.com/photos/pleia2/sets/72157650553216248/

by pleia2 at March 02, 2015 02:47 AM

February 24, 2015

Akkana Peck

Tips for developing on a web host that offers only FTP

Generally, when I work on a website, I maintain a local copy of all the files. Ideally, I use version control (git, svn or whatever), but failing that, I use rsync over ssh to keep my files in sync with the web server's files.

But I'm helping with a local nonprofit's website, and the cheap web hosting plan they chose doesn't offer ssh, just ftp.

While I have to question the wisdom of an ISP that insists that its customers use insecure ftp rather than a secure encrypted protocol, that's their problem. My problem is how to keep my files in sync with theirs. And the other folks working on the website aren't developers and are very resistant to the idea of using any version control system, so I have to be careful to check for changed files before modifying anything.

In web searches, I haven't found much written about reasonable workflows on an ftp-only web host. I struggled a lot with scripts calling ncftp or lftp. But then I discovered curlftpfs, which makes things much easier.

I put a line in /etc/fstab like this:

curlftpfs#user:password@example.com/ /servername fuse rw,allow_other,noauto,user 0 0

Then all I have to do is type mount /servername and the ftp connection is made automagically. From then on, I can treat it like a (very slow and somewhat limited) filesystem.

For instance, if I want to rsync, I can

rsync -avn --size-only /servername/subdir/ ~/servername/subdir/
for any particular subdirectory I want to check. A few things to know about this:
  1. I have to use --size-only because timestamps aren't reliable. I'm not sure whether this is a problem with the ftp protocol, or whether this particular ISP's server has problems with its dates. I suspect it's a problem inherent in ftp, because if I ls -l, I see things like this:
    -rw-rw---- 1 root root 7651 Feb 23  2015 guide-geo.php
    -rw-rw---- 1 root root 1801 Feb 14 17:16 guide-header.php
    -rw-rw---- 1 root root 8738 Feb 23  2015 guide-table.php
    
    Note that a file modified a week ago shows a modification time, but files modified today show only a day and year, not a time. I'm not sure what to make of this.
  2. Note the -n flag. I don't automatically rsync from the server to my local directory, because if I have any local changes newer than what's on the server they'd be overwritten. So I check the diffs by hand with tkdiff or meld before copying.
  3. It's important to rsync only the specific directories you're working on. You really don't want to see how long it takes to get the full file tree of a web server recursively over ftp.

How do you change and update files? It is possible to edit the files on the curlftpfs filesystem directly. But at least with emacs, it's incredibly slow: emacs likes to check file modification dates whenever you change anything, and that requires an ftp round-trip so it could be ten or twenty seconds before anything you type actually makes it into the file, with even longer delays any time you save.

So instead, I edit my local copy, and when I'm ready to push to the server, I cp filename /servername/path/to/filename.

Of course, I have aliases and shell functions to make all of this easier to type, especially the long pathnames: I can't rely on autocompletion like I usually would, because autocompleting a file or directory name on /servername requires an ftp round-trip to ls the remote directory.
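
Those helpers aren't in this post, but as a sketch of the kind of thing I mean (paths are made up; adjust to your own layout):

# local working copy and the curlftpfs mount point
sitelocal=~/servername
siteremote=/servername

# show a diff against the server copy before overwriting anything
sitediff() {
    diff -u "$siteremote/$1" "$sitelocal/$1"
}

# push a locally edited file to the same relative path on the server
sitepush() {
    cp -v "$sitelocal/$1" "$siteremote/$1"
}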

Oh, and version control? I use a local git repository. Just because the other people working on the website don't want version control is no reason I can't have a record of my own changes.

None of this is as satisfactory as a nice git or svn repository and a good ssh connection. But it's a lot better than struggling with ftp clients every time you need to test a file.

February 24, 2015 02:46 AM

February 23, 2015

Elizabeth Krumbach

FOSSC Oman 2015

This past week I had the honor of speaking at FOSSC Oman 2015 in Muscat, following an invitation last fall from Professor Hadj Bourdoucen and the organizing team. Prior to my trip I was able to meet up with 2013 speaker Cat Allman who gave me invaluable tips about visiting the country, but above all made me really excited to visit the middle east for the first time and meet the extraordinary people putting on the conference.


Some of the speakers and organizers meet on Tuesday, from left: Wolfgang F. Finke, Matthias Stürmer, Khalil Al Maawali, me and Hadj Bourdoucen

My first observation was that the conference staff really went out of their way to be welcoming to all the speakers, welcoming us at the hotel the day before the conference and making sure all our needs were met. My second was that the conference was really well planned and funded. They did a wonderful job finding a diverse speaker list (both topic and gender-wise) from around the world. I was really happy to learn that the conference was also quite open and free to attend, so there were participants from other nearby companies, universities and colleges. I’ll also note that there were more women at this conference than I’ve ever seen at an open source conference, at least half the audience, perhaps slightly more.

The conference itself began on Wednesday morning with several introductions and welcome speeches from officials of Sultan Qaboos University (SQU), the Information Technology Authority (ITA) and Professor Hadj Bourdoucen who gave the opening FOSSC 2015 speech. These introductions were all in Arabic and we were all given headsets for live translations into English.

The first formal talk of the conference was Patrick Sinz on “FOSS as a motor for entrepreneurship and job creation.” In this talk he really spoke to the heart of why the trend has been leaning toward open source, with companies tired of being beholden to vendors for features, being surprised by changes in contracts, and the general freedom of not needing “permission” to alter the software that’s running your business, or your country. After a break, his talk was followed by one by Jan Wildeboer titled “Open is default.” He covered a lot in his talk, first talking about how 80% of most software stacks can easily be shared between companies without harming any competitive advantage, since everyone needs all the basics of hardware interaction, basic user interaction and more, thus making use of open source for this 80% an obvious choice. He also talked about open standards and how important it is to innovation that they exist. While on the topic of innovation he noted that instead of trying to make copies of proprietary offerings, open source is now leading innovation in many areas of technology, and has been for the past 5 years.

My talk came up right after Jan’s, and with a topic of “Building a Career in FOSS” it nicely worked into things that Patrick and Jan had just said before me. In this world of companies who need developers for features and where they’re paying good money for deployment of open source, there are a lot of jobs cropping up in the open source space. My talk gave a tour of some of the types of reasons one may contribute (aside from money, there’s passion for openness, recognition, and opportunity to work with contributors from around the world), types of ways to get involved (aside from programming, people are paid for deployments, documentation, support and more) and companies to aim for when looking to find a job working on open source (fully open source, open source core, open source division of a larger company). Slides from my talk are available here (pdf).

Directly following my talk, I participated in a panel with Patrick, Jan and Matthias (who I’d met the previous day) where we talked about some more general issues in the open source career space, including how language barriers can impact contributions, how the high profile open source security issues of 2014 have impacted the industry and some of the biggest mistakes developers make regarding software licenses.

The afternoon began with a talk by Hassan Al-Lawati on the “FOSS Initiative in Oman, Facts and Challenges” where he outlined the work they’ve been doing in their multi-year plan to promote the use and adoption of FOSS inside of Oman. Initiatives began with awareness campaigns to familiarize people with the idea of open source software, development of training material and programs, in addition to existing certificate programs in the industry, and the deployment of Open Source Labs where classes on and development of open source can be promoted. He talked about some of the further plans, including more advanced training. He wrapped up his talk by discussing some of the challenges, including continued fears about open source by established technologists and IT managers working with proprietary software and in general less historical demand for using open source solutions. Flavia Marzano spoke next on “The role and opportunities of FOSS in Public Administrations” where she drew upon her 15 years of experience working in the public sector in Italy to promote open source solutions. Her core points centered around the importance of releasing data by governments in open formats and the value of laws that make government organizations consider FOSS solutions, if not compel them. She also stressed that business leaders need to understand the value of using open source software; even if they themselves aren’t the ones who will get to read the source code, it’s important that someone in your organization can. Afternoon sessions wrapped up with a panel on open source in government, which talked about how cost is often not a motivator and that much of the work with governments is not a technical issue, but a political one.


FOSS in Government panel: David Hurley, Hassan Al-Lawati, Ali Al Shidhani and Flavia Marzano

The conference wrapped up with lunch around 2:30PM and then we all headed back to our hotels before an evening out, which I’ll talk more about in an upcoming post about my tourist fun in Muscat.

Thursday began a bit earlier than Wednesday, with the bus picking us up at the hotel at 7:45AM and first talks beginning at 8:30AM.

Matthias Stürmer kicked off the day with a talk on “Digital sustainability of open source communities” where he outlined characteristics of healthy open source communities. He first talked about the characteristics that defined digital sustainability, including transparency and lack of legal or policy restrictions. The characteristics of healthy open source communities included:

  • Good governance
  • Heterogeneous community (various motivations, organizations involved)
  • Nonprofit foundation (doing marketing)
  • Ecosystem of commercial service providers
  • Opportunity for users to get things done

It was a really valuable presentation, and his observations were similar to mine when it comes to healthy communities, particularly as they grow. His slides are pretty thorough with main points clearly defined and are up on slideshare here.

After his presentation, several of us speakers were whisked off to have a meeting with the Vice-chancellor of SQU to talk about some of the work that’s been done locally to promote open source education, adoption and training. Can’t say I was particularly useful at this session, lacking experience with formal public sector migration plans, but it was certainly interesting for me to participate in.

I then met up with Khalil for another adventure, over to Middle East College to give a short open source presentation to students in an introductory Linux class. The class met in one of the beautiful Open Source Labs that Hassan had mentioned in his talk, it was a real delight to go to one. It was also fascinating to see that the vast majority of the class was made up of women, with only a handful of men – quite the opposite from what I’m used to! My presentation quickly covered the basics of open source, the work I’ve done both as a paid and volunteer contributor, examples of some types of open source projects (different size, structure and volunteer to paid ratios) and common motivations for companies and individuals to get involved. The session concluded with a great Q&A session, followed by a bunch of pictures and chats with students. Slides from my talk are here (pdf).


Khalil and me at the OSL at MEC

My day wound down back at SQU by attending the paper sessions that concluded the conference and then lunch with my fellow speakers.

Now for some goodies!

There is a YouTube video of each day up, so you can skim through it along with the schedule to find specific talks:

There was also press at the conference, so you can see one release published on Zawya: FOSSC-Oman Kicks Off; Forum Focuses on FOSS Opportunities and Communities and an article by the Oman Tribune: Conference on open source software begins at SQU.

And more of my photos from the conference are here: https://www.flickr.com/photos/pleia2/sets/72157650553205488/

by pleia2 at February 23, 2015 02:15 AM

February 19, 2015

Akkana Peck

Finding core dump files

Someone on the SVLUG list posted about a shell script he'd written to find core dumps.

It sounded like a simple task -- just locate core | grep -w core, right? I mean, any sensible packager avoids naming files or directories "core" for just that reason, don't they?

But not so: turns out in the modern world, insane numbers of software projects include directories called "core", including projects that are developed primarily on Linux so you'd think they would avoid it ... even the kernel. On my system, locate core | grep -w core | wc -l returned 13641 filenames.

Okay, so clearly that isn't working. I had to agree with the SVLUG poster that using "file" to find out which files were actual core dumps is now the only reliable way to do it. The output looks like this:

$ file core
core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (375)

The poster was using a shell script, but I was fairly sure it could be done in a single shell pipeline. Let's see: you need to run locate to find any files with "core" in the name.

Then you pipe it through grep to make sure the filename is actually core: since locate gives you a full pathname, like /lib/modules/3.14-2-686-pae/kernel/drivers/edac/edac_core.ko or /lib/modules/3.14-2-686-pae/kernel/drivers/memstick/core, you want lines where only the final component is core -- so core has a slash before it and an end-of-line (in grep that's denoted by a dollar sign, $) after it. So grep '/core$' should do it.

Then take the output of that locate | grep and run file on it, and pipe the output of that file command through grep to find the lines that include the phrase 'core file'.

That gives you lines like

/home/akkana/geology/NorCal/pinnaclesGIS/core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (523)

But those lines are long and all you really need are the filenames; so pass it through sed to get rid of anything to the right of "core" followed by a colon.

Here's the final command:

file `locate core | grep '/core$'` | grep 'core file' | sed 's/core:.*//'

On my system that gave me 11 files, and they were all really core dumps. I deleted them all.
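
If any of those paths had contained spaces, the backtick expansion would have split them into pieces. A find-based variant avoids that -- a rough sketch, and slower, since it walks the filesystem instead of using the locate database:

# Walk the filesystem for regular files literally named "core" and keep only real core dumps.
# -print0 and xargs -0 keep paths with spaces intact; the grep and sed are the same as above.
find / -xdev -type f -name core -print0 2>/dev/null \
    | xargs -0 file | grep 'core file' | sed 's/core:.*//'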

February 19, 2015 07:54 PM

February 18, 2015

Jono Bacon

Bobbing for Influence

Companies, communities, families, clubs, and other clumps of humans all have some inherent social dynamics. At a simple level there are leaders and followers, but in reality the lines are rarely as clear as that.

Many leaders, with a common example being some founders, have tremendous vision and imagination, but lack the skills to translate that vision into actionable work. Many followers need structure to their efforts, but are dynamic and creative in the execution. Thus, the social dynamic in organizations needs a little more nuance.

This is where traditional management hierarchies break down in companies. You may have your SVPs, then your VPs, then your Senior Directors, then your Directors, and so on, but in reality most successful companies don’t observe those hierarchies stringently. In many organizations a junior-level employee who has been there for a while can have as much influence and value, if not more, than a brand new SVP.

As such, the dream is that we build organizations with crisp reporting lines but in which all employees feel they have the ability to bring their creativity and ideas to logically influence the scope, work, and culture of the organization.

Houston, we have a problem

Sadly, this is where many organizations run into trouble. It seems to be the same ‘ol story time after time: as the organization grows, the divide between the senior leadership and the folks on the ground widens. Water cooler conversations and bar-side grumblings fuel the fire, and resentment, frustration, and resume-editing often set in.

So much of this is avoidable though. Of course, there will always be frustration in any organization: this is part and parcel of people working together. Nothing will be perfect, and it shouldn’t be…frustration and conflict can often lead to organizations re-pivoting and taking a new approach. I believe though, that there are a lot of relatively simple things we can do to make organizations feel more engaging.

Influence

A big chunk of the problems many organizations face is around influence. More specifically, the problems set in when employees and contributors feel that they no longer have the ability to have a level of influence or impact in an organization, and thus, their work feels more mechanical, is not appreciated, and there is little validation.

Now, influence here is subtle. It is not always about being involved in the decision-making or being in the cool meetings. Some people won’t, and frankly shouldn’t, be involved in certain decisions: when we have too many cooks in the kitchen, you get a mess. Or Arby’s. Choose your preferred mess.

The influence I am referring to here is the ability to feed into the overall culture and to help shape and craft the organization. If we want to build truly successful organizations, we need to create a culture in which the very best ideas and perspectives bubble to the surface. These ideas may come from SVPs or it may come from the dude who empties out the bins.

The point being, if we can figure out a formula in which people can feel they can feed into the culture and help shape it, you will build a stronger sense of belonging and people will stick around longer. A sense of empowerment like this keeps people around for the long haul. When people feel unengaged or pushed to the side, they will take the next shiny opportunity that bubbles up on LinkedIn.

Some Practical Things To Do

So, we understand the challenge ahead of us. How do we beat it? Well, while there are many books written on the subject, I believe there are ten simple approaches we can get started with.

You don’t have to execute them in this order (in fact, these are not in any specific order), and you may place different levels of importance in some of them. I do believe though, they are all important. Let’s take a spin through them.

1. Regularly inform

A lack of information is a killer in an organization. If an organization has problems and is working to resolve them, it is critically important to share the knowledge and assurance that those challenges are being solved.

In the Seven Habits, Covey talks about the importance of working on Important, and not just Urgent things. In the rush to solve problems we often forget to inform where changes, improvements, and engagement are happening. No one ever cried about getting too much clarity, but the inverse has resulted in a few gin and tonics in an evening.

There are two key types of updates here: informational and engagement. For the former, this is the communication to the wider organization. It is the memo, or if you are more adventurous, the podcast, video presentation, all-hands meeting or otherwise. These updates are useful, but everyone expects them to be very formal, lack specifics, and speak in generalities.

The latter, engagement updates, are within specific teams or with individuals. These should be more specific, and where appropriate, share some of the back-story. This gives a sense of feeling “in” on the story. Careful use of both approaches can do wondrous things to build a sense of engagement to leadership.

2. Be collaborative around the mission and values

Remember that mission statement you wrote and stuck on a web page or plaque somewhere? Yeah, so do we. Looked at it recently? Probably not.

Mission statements are often broad and ambiguous, written once and mostly forgotten. They are typically drafted by a select group of people, and everyone on the ground in service of that very mission typically feels rather disconnected from it.

Let’s change that. Dig out the mission statement and engage with your organization to bring it up to date. Have an interactive conversation about what people feel the broader goals and opportunities are, and take practical input from people and merge it into the mission. You will end up with a mission that is more specific, more representative, and that people really feel a part of.

Do the same for your organizational values, code of conduct, and other key documents.

3. Provide opportunities for success

The very best organizations are ones where everyone has the opportunities to bring their creativity to the fold and further our overall mission and goals. The very worst organizations shut their people down because their business card doesn’t have the right thing written on it, or because of a clique of personalities.

We want an environment where everyone has the opportunity to step to the plate. An example of this was when I hired a translations coordinator for my team at Canonical. He did great work so I offered him opportunities to challenge himself and his skills. That same guy filled my shoes when I left Canonical a few years later.

Now, let’s be honest. This is tough. It relies on leaders really knowing their teams. It relies on seeing potential, not just ticked-off work items. If you create a culture though where you can read potential, tap it, and bring it into new projects, it will create an environment in which everyone feels opportunity is around the corner if they work hard.

4. If you Make Plans, Action Them

This is going to sound like a doozy, but it blows me away how much this happens. This is one for the leaders of organizations. Yes, you reading this: this includes you.

If you create a culture in which people can be more engaged, this will invariably result in new plans, ideas, and platforms. When these plans are shared, those people will feel engaged and excited about contributing to the wider team.

If that then goes into a black hole never to be assessed, actioned, or approved, discontentment will set in.

So, if you want to have a culture of engagement, take the time to actually follow up and make sure people can actually do something. Accepting great ideas, agreeing to them, and not following up will merely spark frustration for those who take the initiative to think holistically about the organization.

5. Regularly survey

It never ceases to amaze me how valuable surveys can be. You often think you know what people’s perspectives are; then you survey them, and the results are in many cases enlightening.

Well structured surveys are an incredibly useful tool. You don’t need to do any crazy data analysis on these things: you often just need to see the general trends and feedback. It is important in these surveys to always have a general open-ended question that can gather all feedback that didn’t fit neatly into your question matrix.

Of course, there is a whole science around running great surveys, and some great books to read, but my primary point here is to do them, do them often, and learn-from and action the results.

One final point: surveys will often freak managers out as they will worry about accountability. Don’t treat these worries with a sledgehammer: help them to understand the value of learning from feedback and to embrace a culture in which we constantly improve. This is not about yelling about mistakes, it is about exploring how we improve.

6. Create a thoughtful management culture

OK, that title might sound a little fluffy, but this is a key recommendation.

I learned from an old manager a style of management that I have applied subsequently and that I feel works well.

The idea is simple: when someone joins my team, I tell them that I want to help them in two key ways. Firstly, I want them to be successful in their role, to have all the support they need, to get the answers they need, and to be able to do a great job and enjoy doing it. Most managers focus their efforts here.

What is important is the second area of focus as a manager. I tell my team members that I want to help them be the very best they can be in their career; to support, mentor, and motivate them to not just do a great job here at the organization, but to feel that their time working here was a wider investment in their career.

I believe both of these pledges from a manager are critical. Think about the best managers and teachers you have had: they paid attention to your immediate as well as long-term success.

If you are on the executive team of a company, you should demand that your managers provide both of these pledges to their teams. This should be real and authentic, not just words.

7. Surprise your staff

This is another one for leaders in an organization.

We are all people and in business we often forget we are people. We all have hobbies, interests, ideas, jokes, stories, experiences to share. When we infuse our organizations with this humanity they feel more real and more engaging.

In any melting pot of an organization, some people will freely share their human side…their past experiences, stories, families, hobbies, favorite movies and bands…but in many cases, the more senior up the chain you go, the more these human elements become isolated and shared only with people of a similar rank in the organization. This creates leadership cliques.

In many cases, seeing leaders surprise their staff and be relaxed, open, and engaging can send remarkably positive messages. It shows the human side of someone who may be primarily experienced by staff as merely giving directives and reviewing performance. Remember, folks, we are all animals.

8. Set expectations

Setting expectations is a key thing in many successful projects. Invariably though, we often think about the expectations of the consumers of our work: stakeholders, customers, partners, etc.

It is equally important to set expectations with our teams that we welcome input, ideas, and perspectives for how the team and the wider organization works.

I like to make this bluntly clear to anyone I work with: I want all feedback, even if that feedback is deeply critical of me or the work I am doing. I would rather have an uncomfortable conversation and be able to tend to those concerns, than never to hear them in the first place and keep screwing up.

Thus, even if you think it is well understood that feedback and engagement is welcome, make it bluntly clear, from the top level and throughout the ranks that this is not only welcome, but critical for success.

9. Focus on creativity and collaboration

I hated writing that title. It sounds so buzzwordy, but it is an important point. The most successful organizations are ones that feel creative and collaborative, and where people have the ability to explore new ideas.

Covey talks about the importance of synergy and that working with others not only brings the best out of us, but helps us to challenge broken or misaligned assumptions. As such, getting people together to creatively solve problems is not just important for the mission, but also for the wellbeing of the people involved.

As discussed earlier though, we want to infuse specific teams with this, but also create a general culture of collaboration. To do this on a wider level you could have organization-wide discussions, online/offline planning events, incentive competitions and more.

10. Should I stay or should I go?

This is going to be a tough pill to swallow for some founders and leaders, but sometimes you just need to get out of the way and let your people do their jobs.

Organizations that are too directed and constrained by leadership, either senior or middle-management, feel restrictive and limiting. Invariably this will quash the creativity and enthusiasm in some staff.

We want to strike a balance where teams are provided the parameters of what success looks like, and then leadership trusts them to succeed within those parameters. Regular gate reviews make perfect sense, but daily whittering over specifics does not.

This means that for some leaders, you just need to get out of the way. I learned this bluntly when a member of my team at Canonical told me over a few beers one night that I needed to stop meddling and leave the team alone to get on with a project. They were right: I was worried about my team’s delivery and projecting that down by micro-managing them. I gave them the air they needed, and they succeeded.

On the flip side, we also need to ensure leadership is there for support and guidance when needed. Regular check-ins, 1-on-1s, and water-cooler time are all comfortable ways to do this.

I hope this was useful and if nothing else, provided some ideas for further thinking about how we build organizations where we can tap into the rich chemistry of ideas, creativity, and experience in our wider teams. As usual, feedback is always welcome. Thanks for reading!

by jono at February 18, 2015 05:45 PM

February 17, 2015

Jono Bacon

Video Phone Review and Wider Thoughts

I recorded and posted a video with a detailed review of the bq Aquaris E4.5 Ubuntu phone, complete with wider commentary on the scopes and convergence strategy and the likelihood of success.

See it below:

Can’t see it? See it here.

by jono at February 17, 2015 05:26 PM

February 14, 2015

Akkana Peck

The Sangre de Cristos wish you a Happy Valentine's Day

[Snow hearts on the Sangre de Cristo mountains]

The snow is melting fast in the lovely sunny weather we've been having; but there's still enough snow on the Sangre de Cristos to see the dual snow hearts on the slopes of Thompson Peak above Santa Fe, wishing everyone for miles around a happy Valentine's Day.

Dave and I are celebrating for a different reason: yesterday was our 1-year anniversary of moving to New Mexico. No regrets yet! Even after a tough dirty work session clearing dead sage from the yard.

So Happy Valentine's Day, everyone! Even if you don't put much stock in commercial Hallmark holidays. As I heard someone say yesterday, "Valentine's day is coming up, and you know what that means. That's right: absolutely nothing!"

But never mind what you may think about the holiday -- you just go ahead and have a happy day anyway, y'hear? Look at whatever pretty scenery you have near you; and be sure to enjoy some good chocolate.

February 14, 2015 10:01 PM

February 11, 2015

Elizabeth Krumbach

Wrap up of the San Francisco Ubuntu Global Jam at Gandi

This past Sunday I hosted an Ubuntu Global Jam at the Gandi office here in downtown San Francisco. Given the temporal proximity to a lot of travel, I had to juggle a lot to make this happen; a fair amount of work goes into an event like this, from the logistics of securing a venue, food and drinks, and giveaways, to the actual prep for the event and telling people about it. In this case we were working on Quality Assurance for Xubuntu (and a little Lubuntu on a PPC Mac).

It’s totally worth it though, so I present to you the full list of prep, should you wish to do a QA event in your region:

  • Secure venue: Completed in December (thanks AJ at Gandi!).
  • Secure refreshments funding: Completed in January via the Ubuntu donations funding.
  • Create LoCo Team Portal event and start sharing it everywhere (social media, friendly mailing lists for locals who may be interested). Do this for weeks!
  • Prepare goodies. I had leftover pens and stickers from a previous event. I then met up with Mark Sobell earlier in the week to have him sign copies of A Practical Guide to Ubuntu Linux, 4th Edition we received from the publisher (thank you Mark and Prentice Hall!).
  • Collect and stage all the stuff you’re bringing.
  • Print out test cases, since it can be tricky for attendees to juggle reading the test case while also navigating the actual test on their laptop.
  • Also print out signs for the doors at the venue.
  • Tour venue and have final chat with your host about what you need (plates, cups and utensils? power? wifi? projector?).
  • Send out last minute email to attendees as a reminder and in case of any last minute info.
  • Make sure dietary requirements of attendees are met. I did go with pizza for this event, but I made sure to go with a pizzeria that offered gluten free options and I prepared a gluten free salad (which people ate!).
  • Download and burn/copy the daily ISOs as soon as they come out on the day of the event, and put them on USB sticks or discs as needed (a rough sketch of this step follows the list): Xubuntu went on USB sticks, Lubuntu for PPC went on a CD-R (alternate) and DVD-R (desktop, currently oversized).
  • Bring along any extra laptops you have so folks who don’t bring one or have trouble doing testing on theirs can participate
  • Make penguin-shaped cookies (this one may be optional).
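
For the ISO step, here is roughly what I run on the morning of the event. The URL and device name are only examples -- adjust for your flavor and release, and double-check the device with lsblk before writing, because dd will cheerfully overwrite whatever you point it at:

# Fetch (or efficiently update) the daily image; zsync only downloads the blocks that changed.
zsync http://cdimage.ubuntu.com/xubuntu/daily-live/current/vivid-desktop-amd64.iso.zsync

# Write it to a USB stick; /dev/sdX is a placeholder -- confirm the right device with lsblk first!
sudo dd if=vivid-desktop-amd64.iso of=/dev/sdX bs=4M
sync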

With all of this completed, I think the event went pretty smoothly. My Ubuntu California team mates James Ouyang and Christian Einfeldt met me at my condo nearby to help me carry over everything. AJ met us upon arrival and we were able to get quickly set up.

I had planned on doing a short presentation to give folks a tour of the ISO Tracker but the flow of attendees made it such that I could get the experienced attendees off and running pretty quick (some had used the tracker before) and by the time they were starting we had some newcomers joining us who I was able to guide one-on-one.

I did a lot of running around, but attendees were able to help each other out too, and it was a huge help to bring along some extra laptops. I was also surprised to see that another PPC Mac showed up at the event! I thought the one I brought would be the only one that would be used for Lubuntu. Later in the event we were joined by some folks who came over after the nearby BerkeleyLUG meeting wrapped up at 3PM, which caused us to push the event a full hour later than expected (thanks to AJ for putting up with us for another hour!).

Prior to the event, I had worried some about attendance, but throughout the event we had about 12 people total come and go, which was the perfect amount for me and a couple of other Ubuntu Members to manage so that attendees didn’t feel ignored as they worked through their tests. Post event, I’ve been able to provide some feedback to the Ubuntu Quality team about some snafus we encountered while doing testing. Hopefully these can be fixed next time around so other teams don’t run into the same issues we did.

Aside from some of the hiccups with the trackers, I received really positive feedback from attendees. Looking forward to doing this again in the future!

More photos from the event available here: https://www.flickr.com/photos/pleia2/sets/72157650663176996/

by pleia2 at February 11, 2015 04:26 AM

February 10, 2015

Akkana Peck

Making flashblock work again; and why HTML5 video doesn't work in Firefox

Back in December, I wrote about Problems with Firefox 35's new deprecation of flash, and a partial solution for Debian. That worked to install a newer version of the flash plug-in on my Debian Linux machine; but it didn't fix the problem that the flashblock program no longer works properly on Firefox 35, so that clicking on the flashblock button does nothing at all.

A friend suggested that I try Firefox's built-in flash blocking. Go to Tools->Add-ons and click on Plug-ins if that isn't the default tab. Under Shockwave Flash, choose Ask to Activate.

Unfortunately, the result of that is a link to click, which pops up a dialog that requires clicking a button to dismiss it -- a pointless and annoying extra step. And there's no way to enable flash for just the current page; once you've enabled it for a domain (like youtube), any flash from that domain will auto-play for the remainder of the Firefox session. Not what I wanted.

So I looked into whether there was a way to re-enable flashblock. It turns out I'm not the only one to have noticed the problem with it: the FlashBlock reviews page is full of recent entries from people saying it no longer works. Alas, flashblock seems to be orphaned; there's no comment about any of this on the main flashblock page, and the links on that page for discussions or bug reports go to a nonexistent mailing list.

But fortunately there's a comment partway down the reviews page from user "c627627" giving a fix.

Edit your chrome/userContent.css in your Firefox profile. If you're not sure where your profile lives, Mozilla has a poorly written page on it here, Profiles - Where Firefox stores your bookmarks, passwords and other user data, or do a systemwide search for "prefs.js" or "search.json" or "cookies.sqlite" and it will probably lead you to your profile.
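
On a typical Linux install, profiles live under ~/.mozilla/firefox, so something like this usually narrows the search down without scanning the whole system (a sketch; adjust the path if your distribution puts profiles somewhere else):

# Print the directory of each candidate profile by looking for prefs.js in the usual place.
find ~/.mozilla/firefox -maxdepth 2 -name prefs.js -printf '%h\n' 2>/dev/null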

Inside yourprofile/chrome/userContent.css (create it if it doesn't already exist), add these lines:

@namespace url(http://www.w3.org/1999/xhtml);
@-moz-document domain("youtube.com"){
#theater-background { display:none !important;}}

Now restart Firefox, and flashblock should work again, at least on YouTube. Hurray!

Wait, flash? What about HTML5 on YouTube?

Yes, I read that too. All the tech press sites were reporting week before last that YouTube was now streaming HTML5 by default.

Alas, not with Firefox. It works with most other browsers, but Firefox's HTML5 video support is too broken. And I guess it's a measure of Firefox's increasing irrelevance that almost none of the reportage two weeks ago even bothered to try it on Firefox before reporting that it worked everywhere.

It turns out that using HTML5 video on YouTube depends on something called Media Source Extensions (MSE). You can check your MSE support by going to YouTube's HTML5 info page. In Firefox 35, it's off by default.

You can enable MSE in Firefox by flipping the media.mediasource preference, but that's not enough; YouTube also wants "MSE & H.264". Apparently if you care enough, you can set a new preference to enable MSE & H.264 support on YouTube even though it's not supported by Firefox and is considered too buggy to enable.
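
For reference, the preference in question is most likely the one that appears as media.mediasource.enabled in about:config -- verify the exact name in your version before relying on it. A sketch of flipping it from the shell by appending to user.js in your profile (quit Firefox first, and substitute your real profile directory for yourprofile):

# Append the MSE preference to user.js; Firefox copies user.js settings into prefs.js at startup.
echo 'user_pref("media.mediasource.enabled", true);' >> ~/.mozilla/firefox/yourprofile/user.js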

If you search the web, you'll find lots of people talking about how HTML5 with MSE is enabled by default for Firefox 32 on youtube. But here we are at Firefox 35 and it requires jumping through hoops. What gives?

Well, it looks like they enabled it briefly, discovered it was too buggy and turned it back off again. I found bug 1129039: Disable MSE for Firefox 36, which seems an odd title considering that it's off in Firefox 35, but there you go.

Here is the dependency tree for the MSE tracking bug, 778617. Its dependency graph is even scarier. After taking a look at that, I switched my media.mediasource preference back off again. With a dependency tree like that, and nothing anywhere summarizing the current state of affairs ... I think I can live with flash. Especially now that I know how to get flashblock working.

February 10, 2015 12:08 AM

February 04, 2015

Akkana Peck

Studying Glaciers on our Roof

[Roof glacier as it slides off the roof] A few days ago, I wrote about the snowpack we get on the roof during snowstorms:

It doesn't just sit there until it gets warm enough to melt and run off as water. Instead, the whole mass of snow moves together, gradually, down the metal roof, like a glacier.

When it gets to the edge, it still doesn't fall; it somehow stays intact, curling over and inward, until the mass is too great and it loses cohesion and a clump falls with a Clunk!

The day after I posted that, I had a chance to see what happens as the snow sheet slides off a roof if it doesn't have a long distance to fall. It folds gracefully and gradually, like a sheet.

[Underside of a roof glacier] [Underside of a roof glacier] The underside as they slide off the roof is pretty interesting, too, with varied shapes and patterns in addition to the imprinted pattern of the roof.

But does it really move like a glacier? I decided to set up a camera and film it on the move. I set the Rebel on a tripod with an AC power adaptor, pointed it out the window at a section of roof with a good snow load, plugged in the intervalometer I bought last summer, located the manual to re-learn how to program it, and set it for a 30-second interval. I ran that way for a bit over an hour -- long enough that one section of ice had detached and fallen and a new section was starting to slide down. Then I moved to another window and shot a series of the same section of snow from underneath, with a 40-second interval.

I uploaded the photos to my workstation and verified that they'd captured what I wanted. But when I stitched them into a movie, the way I'd used for my time-lapse clouds last summer, it went way too fast -- the movie was over in just a few seconds and you couldn't see what it was doing. Evidently a 30-second interval is far too slow for the motion of a roof glacier on a day in the mid-thirties.

But surely that's solvable in software? There must be a way to get avconv to make duplicates of each frame, if I don't mind that the movie comes out slightly jumpy. I read through the avconv manual, but it wasn't very clear about this. After a lot of fiddling and googling and help from a more expert friend, I ended up with this:

avconv -r 3 -start_number 8252 -i 'img_%04d.jpg' -vcodec libx264 -r 30 timelapse.mp4

In avconv, -r specifies a frame rate for the next file, input or output, that will be specified. So -r 3 specifies the frame rate for the set of input images, -i 'img_%04d.jpg'; and then the later -r 30 overrides that 3 and sets a new frame rate for the output file, timelapse.mp4. The start number is because the first file in my sequence is named img_8252.jpg. 30, I'm told, is a reasonable frame rate for movies intended to be watched on typical 60FPS monitors; 3 is a number I adjusted until the glacier in the movie moved at what seemed like a good speed.
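
The frame-duplicating approach I mentioned above can also be done by hand with symlinks, if juggling two -r flags ever stops working -- a rough sketch with the same hypothetical filenames:

# Make 10 numbered symlinks per source frame, so a 30fps encode shows each original frame
# for a third of a second (the same effect as -r 3 on the input).
n=0
for f in img_*.jpg; do
    for i in $(seq 10); do
        ln -s "$f" "$(printf 'dup_%06d.jpg' "$n")"
        n=$((n+1))
    done
done
avconv -r 30 -i 'dup_%06d.jpg' -vcodec libx264 timelapse.mp4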

The movies came out quite interesting! The main movie, from the top, is the most interesting; the one from the underside is shorter.

Roof Glacier
Roof Glacier from underneath.

I wish I had a time-lapse of that folded sheet I showed above ... but that happened overnight on the night after I made the movies. By the next morning there wasn't enough left to be worth setting up another time-lapse. But maybe one of these years I'll have a chance to catch a sheet-folding roof glacier.

February 04, 2015 02:46 AM

Elizabeth Krumbach

Afternoon in Brussels

My trip to Brussels for FOSDEM was a short one, I have a lot of work to do at home so it was impossible for me to make the case for staying more than three days. But since I got in early Friday morning, I did have Friday afternoon to do a bit of exploring.

First stop: get some mussels and frites!

For the rest of the afternoon I had planned on taking one of the tourist buses around town, but by the time I was ready to go it was 2PM and the last loop started at 2:30 that day, not giving me enough time to snag the last bus, and even if I had, where’s the fun in never getting off it? So I made my way toward Grand Place, where there were loads of shops, drinks and museums.

I decided to spend my afternoon at the Museum of the City of Brussels, which is dedicated to the history of the city and housed at Grand Place in the former King’s Mansion (Maison du Roi).

I’m glad I went, the museum had some beautiful pieces and I enjoyed learning about some of the history of the city. They were also running a special exhibit about the German occupation around World War I, which offered some interesting and sad insight into how the Belgians handled the occupation and the suffering endured by citizens during that time. Finally, I thoroughly enjoyed the browse through the amusing array of costumes made for the famous Manneken Pis.

The museum closed at 5PM and I made my way to visit the actual Manneken Pis fountain, located a few blocks south of the Grand Place. It was starting to get quite chilly out and I was glad I had packed mittens. I snapped my photo of the fountain and then meandered my way back north until I found a little cafe where I got myself a nice cup of hot chocolate and warmed up while I waited for the Software Freedom Conservancy dinner at Drug Opera.

I also spent time scouring shop fronts for a Delirium Tremens stuffed toy elephant (as seen here). I saw one through a shop window the last time I was in Brussels in 2010, but it was late at night and the shop was closed. Alas, I never did find the elephant… until after dinner when I was walking back to my hotel once again late at night and the shop was closed! Argh! May we meet again some day, dear pink elephant.

In general the short length of the trip meant that I also didn’t get to enjoy many Belgian beers on my trip, quite the tragedy, but I did have to be alert for the actual conference I came to speak at and attend.

More photos from my tourist adventure here: https://www.flickr.com/photos/pleia2/sets/72157650562831526/

by pleia2 at February 04, 2015 02:25 AM

February 02, 2015

Jono Bacon

Bad Voltage: Live @ SCALE13x

As regular readers of my blog will know, I rather like the SoCal Linux Expo, more commonly known as SCALE. I have been going for over eight years and every year it delivers an incredible balance of content and community spirit. I absolutely love going every year.

Other readers may also know that I do a podcast with three other idiots every two weeks called Bad Voltage. The show is a soup of Linux, Open Source, technology, digital rights, politics, and more, all mixed together with reviews, interviews, and plenty more. I am really proud of the show: I think it is fun but also informative, and has developed an awesome community around it.

Given my love of SCALE and Bad Voltage, I am therefore tickled pink that we are going to be taping Bad Voltage: Live at SCALE. This will be our very first show in front of a live audience, and in-fact, the first time the whole team has been in the same building before.

The show takes place on the evening of Fri 20th Feb 2015 in the main La Jolla room.

The show will be packed with discussions, contests, give-aways, challenges, and more. It will be a very audience participatory show and we will be filming it as well as recording the podcast, ready for release post-SCALE.

So, be sure to get along and join the show on the evening of Fri 20th Feb 2015, currently slated to start at 9pm, but the time may adjust, so keep your eye on the schedule!

by jono at February 02, 2015 06:55 AM

Elizabeth Krumbach

FOSDEM 2015

This weekend I spent in Brussels for my first FOSDEM. As someone who has been actively involved with open source since 2003, stories of FOSDEM have floated around in communities I’ve participated in for a long time, so I was happy to finally have the opportunity to attend and present.

Events kicked off Friday night with a gathering at a dinner with the Software Freedom Conservancy. It was great to start things off with such a friendly crowd, most of whom I’ve known for years. I sat with several of my OpenStack colleagues as we enjoyed dinner and conversation about StoryBoard and bringing the OpenStack activity board formally into our infrastructure with Puppet modules. It was a fun and productive dinner, I really appreciated that so many at this event took the initiative to gather in team tables so we could have our own little mini-meetups during the SFC event. After dinner I followed some colleagues over to Delirium Cafe for the broader pre-FOSDEM beer event, but the crowd was pretty overwhelming and I was tired, so I ended up just heading back to my hotel to get some rest.

On Saturday I met up with my colleague Devananda van der Veen and we headed over to the conference venue. The conference began with a couple keynotes. Karen Sandler was the first, giving her talk on Identity Crisis: Are we who we say we are? where she addressed the different “hats” we wear as volunteers, paid contributors, board members, and more in open source projects. She stressed how important it is that we’re clear about who and what we’re representing when we contribute to discussions and take actions in our communities. I was excited to see that she also took the opportunity to announce Outreachy, the successor to the Outreach Program for Women, which not only continues the work of bringing women into open source beyond GNOME, but also “from groups underrepresented in free and open source software.” This was pretty exciting news, congratulations to everyone involved!

The next keynote was by Antti Kantee who spoke on What is wrong with Operating Systems (and how do we make things better). Antti works on the NetBSD Rump Kernels and is a passionate advocate for requiring as little as possible from an underlying Operating System in today’s world. He argues that a complicated OS only serves to introduce instability and unnecessary complexity into most ways we do computing these days, with their aggressive support of multi-user environments on devices that are single user and more. He demonstrated how you can strip away massive amounts of the kernel and still have a viable, basic user environment with a TCP/IP stack that applications can then interface with.

The next talk I went to was Upstream Downstream: The relationship between developer and package maintainer by Norvald H. Ryeng of the MySQL project. Over the years I’ve been a contributor on both sides of this, but it’s been a few years since I was directly involved in the developer-packager relationship so it was great to hear about the current best practices of communities working in this space. He walked through what a release of MySQL looks like, including all the types of artifacts created and distribution mechanisms utilized (source, packages, FTP, developer site direct downloads) and how they work with distribution package maintainers. He had a lot of great tips for both upstream developers and downstream packagers about how to have an effective collaboration, much of it centering around communication. Using MySQL as an example, he went through several things they’ve done, including:

  • Being part of Ubuntu’s Micro Release Exception program so packagers don’t cherry-pick security vulnerabilities, instead they can take the full micro-release from the trusted, well-tested upstream.
  • Participating in downstream bug trackers, sometimes even bumping the priority of packaged software bugs because they know a huge number of users are using the distro packages.
  • Running their own package repos, which gives users more options version-wise but has also taught their upstream team about some of the challenges in packaging so they can be more effective collaborators with the in-distro packagers and even catch pain points and issues earlier. Plus, then packaging is integrated into their QA processes!

He also talked some about how cross-distro collaboration doesn’t really happen on the distro level, so it’s important for upstream to stay on top of that so they can track things like whether the installation is interactive (setting passwords, other config options during install), whether the application is started upon install and more. Their goal being to make the experience of using their application as consistent as possible across platforms, both by similar configuration and reduction of local patches carried by distributions.

At lunch I met up with Louise Corrigan of Apress, who I met last year at the Texas Linux Fest. We also grabbed some much needed coffee, as my jet lag was already starting to show. From there I headed over to the OpenStack booth for my 2PM shift, where I met Adrien Cunin (who I also knew from the Ubuntu community) and later Marton Kiss who I work with on the OpenStack Infrastructure team. It was one of my more fun booth experiences, with lots of folks I knew dropping by, like Jim Campbell who I’d worked with on Documentation in Ubuntu in the past and a couple of the people I met at DORS/CLUC in Croatia last year. I also got to meet Charles Butler of Canonical whose Juju talk I attended later in the afternoon.

At 5PM things got exciting for my team, with Spencer Krum presenting Consuming Open Source Configuration: Infrastructure and configuration is now code, and some of it is open source. What is it like to be downstream of one of these projects? In addition to working with us upstream in the OpenStack Infrastructure team, Spencer works on a team within HP that is consuming our infrastructure for projects within HP that need a Continuous Integration workflow. The OpenStack Infrastructure team has always first been about providing for the needs of the OpenStack community, and with Spencer’s help as an active downstream contributor we’ve slowly shifted our infrastructure to be more consumable by the team he’s on and others. In this talk he covered the value of consuming our architecture, including not having to do all the work, and benefiting from a viable architecture that’s been used in production for several years. He noted that any divergence from upstream incurred technical debt for the downstream team, so he’s worked upstream to help decouple components and not make assumptions about things like users and networks, reducing the need for these patches downstream. The biggest takeaway from this, was how much Spencer has been involved with the OpenStack Infrastructure team. His incremental work over time to make our infrastructure more consumable, coupled with his desire to also further the goals on our team (I can always depend upon him for a review of one of my Puppet changes) makes his work as a downstream much easier. Slides from his presentation are online (html-based) here.

My day of talks wrapped up with one of my own! In The open source OpenStack project infrastructure: Fully public Puppet I gave a talk that was complementary to Spencer’s, where I spoke from the upstream side about the lessons we’ve learned about crafting an effective upstream infrastructure project using Puppet in the past year to make our infrastructure more consumable by downstreams like the team at HP. I outlined the reasons we had for going with a fully open source Puppet configuration (rather than just releasing modules) and why you might want to (others can contribute! sharing is nice!). Then I outlined the work we did in a couple specs we’ve finished to break out some of our components from the previously monolithic configuration. I think the talk went well; it was great to talk to some folks about their own infrastructure challenges afterwards and how our thorough specifications about splitting modules may help them too. Slides from the talk as pdf available here.

I spent the evening with some of my colleagues at HP who are working on OpenStack Designate. I had intended to call it a somewhat early night, but dinner didn’t manage to wrap up until 11PM, cutting severely into beer time!

Sunday morning I headed over to the conference venue at 9AM, noticing that it had snowed overnight. I spent the morning at the OpenStack booth, my booth volunteer slot sadly overlapping with Thierry Carrez’s talk on our OpenStack infrastructure tools. Wrapping up booth duty, I met up with a friend and made our way through the campus as the snow came down to check out another building with project booths.

I then made my way over to Testing and automation dev room to see Aleksandra Fedorova speak on CI as an Infrastructure. The talk diverged from the typical “process” talks about Continuous Integration (CI), which often pretty abstractly talk about the theory and workflows. She instead talked about the technical infrastructure that is actually required for running such a system, and how it ends up being much more complicated in practice. Beyond the general workflow, you need artifact (logs and other things that result from builds) management, service communication coordination (CIs are chatty! Particularly when there are failures) and then hooks into all the pieces of your infrastructure, from the bug tool to your revision control system and perhaps a code review system. Even when running a very simple test like flake8 you need a place to run it, proper isolation to set up, a pinning process for flake8 versions (need to test it when new versions come out – else it could break your whole process!) and preferably do all of this using QA and language-specific tools created for the purpose. Perhaps my favorite part of her talk was the stress she placed upon putting infrastructure configuration into revision control. I’ve been a fan of doing this for quite some time, particularly in our world of configuration management where it’s now easy to do, but perhaps her most compelling point was keeping track of your Jenkins jobs over time. By putting your Jenkins configurations into revision control, you have a proper history of how you ran your tests months ago, which can be a valuable resource as your project matures.
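
One concrete way to do that last bit -- and the way I’m used to from the OpenStack infrastructure -- is keeping Jenkins job definitions as YAML in git and rendering them with jenkins-job-builder, so changes are reviewed and validated before they ever touch the Jenkins master. A sketch (the repository URL and layout here are made up):

# Check the job definitions out of revision control and render them locally,
# so the generated XML can be inspected and diffed before updating Jenkins.
git clone https://example.org/ci-config.git
cd ci-config
jenkins-jobs test jobs/ -o /tmp/rendered-jobs/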

I attended one more talk, but spent much of the rest of the event meeting up with open source friends who I hadn’t seen in a while. Astonishingly, even though I got to catch up with a number of people, the conference was so big and spread out around the campus that there were people who I knew were there but I never managed to see! One of my colleagues at HP I never saw until after the conference when a group met up for dinner on Sunday night.

The closing keynote was by Ryan MacDonald who spoke on Living on Mars: A Beginner’s Guide: Can we Open Source a society? He spoke about the Mars One program which seemed well on its way. I’m looking forward to the video being published, as I know more than a few people who’d be interested in seeing it from the perspective he presented.

Finally, the wrap-up. Looking back to the introduction to the conference, one of the organizers told the audience that unlike other conferences recently, they didn’t feel the need to adopt a Code of Conduct. They cited that we’re “all adults here” and pretty much know how to act toward each other. I was pretty disappointed by this, particularly at a conference that served alcohol throughout the day and had a pretty bad gender ratio (it’s one of the worst I’ve ever seen). Apparently I wasn’t the only one. Prior to the keynote, a tweet from FOSDEM said “message received” regarding the importance of a Code of Conduct. I’m really proud of them for acknowledging the importance and promising to improve, it makes me feel much better about coming back in the future.

Huge thanks to all the volunteers who make this conference happen every year, I hope I can make it back next year! A few more photos from the event here: https://www.flickr.com/photos/pleia2/sets/72157650191787498/

by pleia2 at February 02, 2015 05:30 AM

January 31, 2015

Akkana Peck

Snow day!

We're having a series of snow days here. On Friday, they closed the lab and all the schools; the ski hill people are rejoicing at getting some real snow at last.

[Snow-fog coming up from the Rio Grande] It's so beautiful out there. Dave and I had been worried about this business of living in snow, being wimpy Californians. But how cool (literally!) is it to wake up, look out your window and see a wintry landscape with snow-fog curling up from the Rio Grande in White Rock Canyon?

The first time we saw it, we wondered how fog can exist when the temperature is below freezing. (Though just barely below -- as I write this the nearest LANL weather station is reporting 30.9°F. But we've seen this in temperatures as low as 12°F.) I tweeted the question, and Mike Alexander found a reference that explains that freezing fog consists of supercooled droplets -- they haven't encountered a surface to freeze upon yet. Another phenomenon, ice fog, consists of floating ice crystals and only occurs below 14°F.

['Glacier' moving down the roof] It's also fun to watch the snow off the roof.

It doesn't just sit there until it gets warm enough to melt and run off as water. Instead, the whole mass of snow moves together, gradually, down the metal roof, like a glacier.

When it gets to the edge, it still doesn't fall; it somehow stays intact, curling over and inward, until the mass is too great and it loses cohesion and a clump falls with a Clunk!

[Mysterious tracks in the snow] When we do go outside, the snow has wonderful collections of tracks to try to identify. This might be a coyote who trotted past our house on the way over to the neighbors.

We see lots of rabbit tracks and a fair amount of raccoon, coyote and deer, but some are hard to identify: a tiny carnivore-type pad that might be a weasel; some straight lines that might be some kind of bird; a tail-dragging swish that could be anything. It's all new to us, and it'll be great fun learning about all these tracks as we live here longer.

January 31, 2015 05:17 PM

January 27, 2015

Jono Bacon

Designers Needed to Help Build Software to Teach Kids Literacy

Designers! Imagine you could design a piece of Open Source tablet software that teaches a child how to read, write, and perform arithmetic, without the aid of a teacher. This is not designed to replace teachers, but to bring education where little or none exists.

Just think of the impact. UNESCO tells us that 54 million children have zero access to education. 250 million kids have rudimentary access to education but don’t have any literacy skills. If we can build software that teaches kids literacy, think of the opportunities this opens up in their lives, and the ability to help bring nations out of poverty.

The Global Learning XPRIZE is working to solve that problem and help build this technology.

A Foundation of Awesome Design

This is where designers come in.

We want to encourage designers to use their talent and imagination to explore and share ideas of how this software could look and work. Designers create and craft unique and innovative experiences, and these ideas can be the formation of great discussions with other members of the community.

We are asking designers to explore and create those experiences and then share those wireframes/mockups in the XPRIZE community. This will inspire discussion and ideas for how we create this important software. This is such an important way in which you can participate.

Find out more about how to participate by clicking right here and please share this call for designers widely – the more designs we can see, the more designers involved, the more ideas we can explore. Every one of you can play such a key role in building this technology. Thanks!

by jono at January 27, 2015 06:00 PM

kdub

SVG Hardware Drawer Labels

I recently made a set of SVG labels for my hardware small parts bin in Inkscape for the common Akro-Mills 10164 small parts organizer. It's sized to print the labels the correct size on an 11″x8.5″ sheet of paper (results may vary, so make sure to resize for whatever drawer and printer you have).
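
If your printer insists on rescaling, one trick that may help is exporting the page to PDF at its native size from the command line and then printing that at 100% / "actual size" -- a sketch, assuming you saved the file as labels.svg (check inkscape --help if your version uses different flags):

# Export the full SVG page to PDF without rescaling.
inkscape --export-area-page --export-pdf=labels.pdf labels.svg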

AkroMillsLabels

The Labels in action

I thought I’d share them here in SVG format, which should make it pretty easy for you to download and customize. (Eg, you could change the resistor color codes to your set of resistors, change the values, etc). If you do sink a lot of effort into adapting the file, please share-back (open source!) via the comments, and I’ll update the file so others can use it.

Drawer Labels

SVG file (copyright (c) 2015 Kevin DuBois, Licensed under CC BY-NC-SA)

by Kevin at January 27, 2015 03:28 AM

January 26, 2015

Jono Bacon

Global Learning XPRIZE: Call For Teams!

As many of my regular readers will know, I joined the XPRIZE Foundation last year. At XPRIZE we run large competitions that incentivize the solution of some of the grandest challenges that face humanity.

My role at XPRIZE is to create a global community that can practically change the world via XPRIZE, inside and outside of these competitions. You will be reading more about this in the coming months.

Back in September we launched the Global Learning XPRIZE. This is a $15 million competition that has the ability to impact over 250 million kids. From the website:

The Global Learning XPRIZE challenges teams from around the world to develop open source and scalable software that will enable children in developing countries to teach themselves basic reading, writing and arithmetic within the 18 month period of competition field testing. Our goal is an empowered generation that will positively impact their communities, countries and the world.

Many of my readers here are Open Source folks, and this prize is an enormous Open Source opportunity. Here we can not only change the world, but we can create Open Source technology that is at the core of this revolution in education.

Not only that, but a key goal we have with the competition is to encourage teams and other contributors to collaborate around common areas of interest. Think about collaboration around storytelling platforms, power management, design, voice recognition, and more. We will be encouraging this collaboration openly on our forum and in GitHub.

You will be hearing more and more about this in the coming months, but be sure to join our forum to keep up to date.

Call For Teams

Since we launched the prize, we have seen an awesome number of teams registering to participate. Our view though is that the more teams the better…it creates a stronger environment of collaboration and competition. We want more!

Can’t see the video? See it here!

As such, I want to encourage you all to consider joining up as a team. We recommend people form diverse teams of developers, designers, artists, scientists, and more to feed into and explore how we build software that can automate the teaching of literacy. Just think about the impact that this software could have on the world, and also how interesting a technical and interaction challenge this is.

To find out more, and to sign up, head over to learning.xprize.org and be sure to join our community forum to be a part of our community as it grows!

by jono at January 26, 2015 05:46 PM

January 24, 2015

Elizabeth Krumbach

Remembering Eric P. Scott (eps)

Last night I learned the worst kind of news, my friend and valuable member of the Linux community here in San Francisco, Eric P. Scott (eps) recently passed away.

In an excerpt from a post by Chaz Boston Baden, he cites the news from Ron Hipschman:

I hate to be the bearer of bad news, but It is my sad duty to inform you that Eric passed away sometime in the last week or so. After a period of not hearing from Eric by phone or by email, Karil Daniels (another friend) and I became concerned that something might be more serious than a lost phone or a trip to a convention, so I called his property manager and we met at Eric’s place Friday night. Unfortunately, the worst possible reason for his lack of communication was what we found. According to the medical examiner, he apparently died in his sleep peacefully (he was in bed). Eric had been battling a heart condition. We may learn more next week when they do an examination.

He was a good friend, the kind who was hugely supportive of any local events I had concocted for the Ubuntu California community, but he was also the kind of man who would spontaneously give me thoughtful gifts. Sometimes they were related to an idea he had for promoting Ubuntu, like a new kind of candy we could use for our candy dishes at the Southern California Linux Expo, a toy penguin we could use at booths or a foldable origami-like street car he thought we could use as inspiration for something similar as a giveaway to promote the latest animal associated with an Ubuntu LTS release.

He also went beyond having ideas: we spent several outings scouring local shops for giveaway booth candy, and once met at Costco to buy cookies and chips in bulk for an Ubuntu release party last spring, which he then helped me cart home on a bus! Sometimes after the monthly Ubuntu Hours, which he almost always attended, we’d go out to explore candy options for booth events, following another amusing idea of his: candy dishes that came together to form the Ubuntu logo.

In 2012 we filled the dishes with M&Ms:

The next year we became more germ conscious and he suggested we go with individually wrapped candies, so we searched the city for ones that tasted good and weren’t too expensive. Plus, he found a California-shaped bowl which fit our Ubuntu California theme astonishingly well!

He also helped with Partimus, often coming out to hardware triage and installfests we’d have at the schools.


At a Partimus-supported school, back row, middle

As a friend, he was also always willing to share his knowledge with others. Upon learning that I don’t cook, he gave me advice on some quick and easy things I could make at home, which culminated in the gift of a plastic container built for cooking pasta in the microwave. Though I’m skeptical of all things microwave, it’s something I now use routinely when I’m eating alone; I even happened to use it last night, before learning of his passing.

He was a rail fan and advocate for public transportation, so I could always count on him for the latest transit news, or just a pure geek out about trains in general, which often happened with other rail fans at our regular Bay Area Debian dinners. He had also racked up the miles on his favorite airline alliance, so there were plenty of air geek conversations around ticket prices, destinations and loyalty programs. And though I haven’t really connected with the local science fiction community here in San Francisco (so many hobbies, so little time!), we definitely shared a passion for scifi too.

This is a hard and shocking loss for me. I will deeply miss his friendship and support.

by pleia2 at January 24, 2015 08:10 PM

January 20, 2015

Elizabeth Krumbach

Stress, flu, Walt’s Trains and a scrap book

I’ve spent this month at home. Unfortunately, I’ve been pretty stressed out. Now that I’m finally home I have a ton to catch up on here: I’m getting back into the swing of things with the purely technical (not event, travel, talk) part of my day job, and I have my book to work on. I know I haven’t backed off enough from projects I’m part of, even though I’ve made serious efforts to move away from a few leadership roles in 2014, so keeping up with everything remains challenging. Event-wise, I’ve managed to arrange my schedule so I only have 4 trips during this half of the year (down from 5, thanks to retracting a submission to one domestic conference), and 1-3 major local events that I’m either speaking at or hosting. It still feels like too much.

Perhaps adding to my stress was the complete loss of 5 days last week to the flu. I had some sniffles and cough on Friday morning, which quickly turned into a fever that sent me to bed as soon as I wrapped up work in the early evening. Saturday through most of Tuesday are a bit of a blur. I attempted to get some things done, but honestly should have just stayed in bed and not tried to work on anything, because nothing I did was useful and it actually made it more difficult to pick up where I left off come late Tuesday and into Wednesday. I always forget how truly miserable having the flu is: sleep is the only escape, and even something as mind-numbing as TV isn’t easy when everything hurts. However, kitty snuggles are always wonderful.

Sickness aside, strict adherence to taking Saturdays off has helped with my stress. I really look forward to my Saturdays when I can relax for a bit, read, watch TV, play video games, visit an exhibit at a museum or make progress in learning how to draw. I’m finally at the point where I no longer feel guilty for taking this time, and it’s pretty refreshing to simply ignore all email and social media for a day, even if I do have the impulse to check both. It turns out it’s not so bad to disconnect for a weekend day, and I come back somewhat refreshed on Sunday. It ultimately does make me more productive during the rest of the week too, and less likely to just check out in the middle of the week in a guilt-ridden and poorly-timed evening of pizza, beer and television.

This Saturday MJ and I enjoyed the All Aboard: A Celebration of Walt’s Trains exhibit at the Walt Disney Family Museum. It was a fantastic exhibit. I’m a total sucker for the entrepreneurial American story of Walt Disney and I love trains, so the mix of the two was really inspiring. This is particularly true as I find my own hobbies becoming as work-like and passion-driven as my actual work. Walt’s love of trains, and the train he built at his family home in order to have a hobby outside work, led to trains at Disney parks around the world. So cool.

No photos are allowed in the exhibit, but I did take some time around the buildings to capture some signs and the beautiful day in the Presidio: https://www.flickr.com/photos/pleia2/sets/72157650347931082/

One evening over these past few weeks I took time to put together a scrap book, which I’d been joking about for years (“ticket stub? I’ll keep it for my scrap book!”). Several months ago I dug through drawers and things to find all my “scrap book things” and put them into a bag, collecting everything from said ticket stubs to conference badges from the past 5 years. I finally swung by a craft store and picked up some rubber cement, good clear tape and an empty book made for the purpose. Armed with these tools, I spent about 3 hours gluing and taping things into the book one evening after work. The result is a mess, not at all beautiful, but one that I appreciate now that it exists.

I mentioned in my last “life” blog post that I was finishing a services migration from one of my old servers. That’s now done; I shut off my old VPS yesterday. It was pretty sad when I realized I’d been using that VPS for 7 years, since the plan level I started on offered a mere 360M of RAM (it’s up to 2G now), and I had gotten kind of attached! But that faded today when I did an upgrade on my new server and realized how much faster it is. On to bigger and better things! In other computer news, I’m really pushing hard on promoting the upcoming Ubuntu Global Jam here in the city, and spent Wednesday evening of this week hosting a small Ubuntu Hour, thankful that it was the only event of the evening as I continued to need rest post-flu.

Today is a Monday, but a holiday in the US. I spent it catching up with work for Partimus in the morning, Ubuntu in the afternoon and this evening I’m currently avoiding doing more work around the house by writing this blog post. I’m happy to say that we did get some tricky light bulbs replaced and whipped out the wood glue in an attempt to give some repair love to the bathroom cabinet. Now off to do some laundry and cat-themed chores before spending a bit more time on my book.

by pleia2 at January 20, 2015 02:07 AM

January 19, 2015

Elizabeth Krumbach

San Francisco Ubuntu Global Jam at Gandi.net on Sunday February 8th

For years Gandi.net has been a strong supporter of Open Source communities and non-profits. From their early support of Debian to their current support of Ubuntu via discounts for Ubuntu Members, they’ve been directly supportive of projects I’m passionate about. I was delighted when I heard they had opened an office in my own city of San Francisco, and they’ve generously offered to host the next Ubuntu Global Jam for the Ubuntu California team right here at their office in the city.

Gandi.net + Ubuntu = Jam!

What’s an Ubuntu Global Jam? From the FAQ on the wiki:

A world-wide online and face-to-face event to get people together to work on Ubuntu projects – we want to get as many people online working on things, having a great time doing so, and putting their brick in the wall for free software as possible. This is not only a great opportunity to really help Ubuntu, but to also get together with other Ubuntu fans to make a difference together, either via your LoCo team, your LUG, other free software group, or just getting people together in your house/apartment to work on Ubuntu projects and have a great time.

The event will take place on Sunday, February 8th from noon – 5PM at the Gandi offices on 2nd street, just south of Mission.

Community members will gather to do some Quality Assurance testing on Xubuntu ISOs and packages for the upcoming release, Vivid Vervet, using the trackers built for this purpose. We’re focusing on Xubuntu because that’s the project I volunteer with and I can help put us into contact with the developers as we test the ISOs and submit bugs. The ISO tracker and package tracker used for Xubuntu are used for all recognized flavors of Ubuntu, so what you learn from this event will transfer into testing for Ubuntu, Kubuntu, Ubuntu GNOME and all the rest.

No experience with Testing or Quality Assurance is required, and Quality Assurance is not as boring as it sounds, honest :) Plus, one of the best things about testing on your own hardware is that your bugs are found and submitted before release, significantly increasing the chances that any issues specific to your hardware are fixed before the release ships!

The event will begin with a presentation that gives a tour of how manual testing is done on Ubuntu releases. From there we’ll be able to do Live Testing, Package Testing and Installation Testing as we please, working together to confirm bugs and to help each other out when we get stuck. Installation Testing is the only one that requires making changes to the laptop you bring along, so feel free to bring one you can do Live and Package testing on if you’re not able to do installations on your hardware.

I’ll also have two laptops on hand for folks to do testing on if they aren’t able to bring along a laptop.

I’ll also be bringing along DVDs and USB sticks with the latest daily builds for tests to be done and some notes about how to go about submitting bugs.
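
If you’d like to grab a daily build yourself ahead of the event, it’s worth checking the image against the published checksums before writing it to a DVD or USB stick; a corrupted download can look an awful lot like a real bug. Below is a minimal Python sketch of that check. It’s just my illustration, not part of the official QA workflow, and it assumes the Xubuntu daily-live directory on cdimage.ubuntu.com with its SHA256SUMS file; adjust the URL and file name for whatever image you’re actually testing.

#!/usr/bin/env python3
# Minimal sketch (not official QA tooling): verify a downloaded daily ISO
# against the published SHA256SUMS so a corrupted download isn't mistaken
# for a release bug. The daily-live URL below is an assumption; adjust it
# for the image you are really testing.

import hashlib
import sys
import urllib.request

SUMS_URL = "http://cdimage.ubuntu.com/xubuntu/daily-live/current/SHA256SUMS"

def sha256_of(path, chunk_size=1024 * 1024):
    # Hash the local ISO in chunks so large images don't exhaust memory.
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def published_sums(url=SUMS_URL):
    # Return {filename: sha256} parsed from the published SHA256SUMS file.
    sums = {}
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            parts = line.split()
            if len(parts) == 2:
                checksum, name = parts
                sums[name.lstrip("*")] = checksum
    return sums

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: verify_iso.py <path-to-daily.iso>")
    iso_path = sys.argv[1]
    expected = published_sums().get(iso_path.rsplit("/", 1)[-1])
    if expected is None:
        sys.exit("ISO name not found in SHA256SUMS; check the file name.")
    print("OK" if sha256_of(iso_path) == expected else "MISMATCH: re-download it.")

Run it with the path to your downloaded ISO; it prints OK when the local checksum matches the published one, and MISMATCH otherwise.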

Please RSVP here (full address also available at this link):

http://loco.ubuntu.com/events/ubuntu-california/2984-ubuntu-california-san-francisco-qa-jam/

Or email me at lyz@ubuntu.com if you’re interested in attending and have trouble with or don’t wish to RSVP through the site. Also please feel free to contact me if you’re interested in helping out (it’s ok if you don’t know about QA, I need logistical and promotional help too!).

Food and drinks will be provided; the current menu is a platter of sandwiches and some pizzas, so please let me know if you have dietary restrictions and we can place orders accordingly. I’d hate to exclude folks because of our menu, so I’m happy to accommodate vegan, gluten free, whatever you need; I just need to know :)

Finally, there will be giveaways: Ubuntu stickers and pens for everyone, and a couple of Ubuntu books (hopefully signed by the authors!) for a few select attendees.

Somewhere other than San Francisco and interested in hosting or attending an event? The Ubuntu Global Jam is an international event with teams focusing on a variety of topics, details at: https://wiki.ubuntu.com/UbuntuGlobalJam. Events currently planned for this Jam can be found via this link: http://loco.ubuntu.com/events/global/2967/

by pleia2 at January 19, 2015 11:00 PM