Planet Ubuntu California

October 31, 2014

Akkana Peck

Simulating a web page timeout

Today dinner was a bit delayed because I got caught up dealing with an RSS feed that wasn't feeding. The website was down, and Python's urllib2, which I use in my "feedme" RSS fetcher, has an inordinately long timeout.

That certainly isn't the first time that's happened, but I'd like it to be the last. So I started to write code to set a shorter timeout, and realized: how does one test that? Of course, the offending site was working again by the time I finished eating dinner, went for a little walk, then sat down to code.

I did a lot of web searching, hoping maybe someone had already set up a web service somewhere that times out for testing timeout code. No such luck. And discussions of how to set up such a site always seemed to center around installing elaborate heavyweight Java server-side packages. Surely there must be an easier way!

How about PHP? A web search for that wasn't helpful either. But I decided to try the simplest possible approach ... and it worked!

Just put something like this at the beginning of your HTML page (assuming, of course, your server has PHP enabled):

<?php sleep(500); ?>

Of course, you can adjust that 500 to any delay (in seconds) you like.

Or you can even make the timeout adjustable, with a few more lines of code:

<?php
 // Sleep for a caller-specified number of seconds, or 500 by default.
 if (isset($_GET['timeout']))
     sleep((int)$_GET['timeout']);
 else
     sleep(500);
?>

Then surf to yourpage.php?timeout=6 and watch the page load after six seconds.
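To check that a client-side timeout actually works, you can point any HTTP client with a short timeout at the page. For instance, with curl (the server name below is just a placeholder):

curl --max-time 5 "http://yourserver.example.com/yourpage.php?timeout=30"

curl should give up after five seconds with a timeout error instead of waiting out the full thirty.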

Simple once I thought of it, but it's still surprising no one had written it up as a cookbook formula. It certainly is handy. Now I just need to get some Python timeout-handling code working.
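For what it's worth, that should just be a matter of passing a timeout to urlopen and catching the errors. A minimal sketch (Python 2's urllib2, with a made-up feed URL):

import socket
import urllib2

try:
    # timeout is in seconds; without it, urlopen can hang for a very long time
    response = urllib2.urlopen("http://example.com/feed.rss", timeout=10)
    content = response.read()
except urllib2.URLError as e:
    print("Fetch failed or timed out: %s" % e)
except socket.timeout as e:
    print("Read timed out: %s" % e)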

October 31, 2014 01:38 AM

October 24, 2014

Akkana Peck

Partial solar eclipse, with amazing sunspots

[Partial solar eclipse, with sunspots] We had perfect weather for the partial solar eclipse yesterday. I invited some friends over for an eclipse party -- we set up a couple of scopes with solar filters, put out food and drink and had an enjoyable afternoon.

And what views! The sunspot group right on the center of the sun's disk was the largest and most complex I'd ever seen, and there were some much smaller, more subtle spots in the path of the eclipse. Meanwhile, the moon's limb gave us a nice show of mountains and crater rims silhouetted against the sun.

I didn't do much photography, but I did hold the point-and-shoot up to the eyepiece for a few shots about twenty minutes before maximum eclipse, and was quite pleased with the result.

An excellent afternoon. And I made too much blueberry bread and far too many oatmeal cookies ... so I'll have sweet eclipse memories for quite some time.

October 24, 2014 03:15 PM

Jono Bacon

Bad Voltage Turns 1

Today Bad Voltage celebrates our first birthday. We plan on celebrating it by having someone else blow out our birthday candles while we smash a cake and quietly defecate on ourselves.

For those of you unaware of the show, Bad Voltage is an Open Source, technology, and “other things we find interesting” podcast featuring Stuart Langridge (LugRadio, Shot Of Jaq), Bryan Lunduke (Linux Action Show), Jeremy Garcia (Linux Questions), and myself (LugRadio, Shot Of Jaq). The show takes a fun but informed look at various topics, and includes interviews, reviews, competitions, and challenges.

Over the last year we have covered quite the plethora of topics. This has included VR, backups, atheism, ElementaryOS, guns, bitcoin, biohacking, PS4 vs. XBOX, kids and coding, crowdfunding, genetics, Open Source health, 3D printed weapons, the GPL, work/life balance, Open Source political parties, the right to be forgotten, smart-watches, equality, Mozilla, tech conferences, tech on TV, and more.

We have interviewed some awesome guests including Chris Anderson (Wired), Tim O’Reilly (O’Reilly Media), Greg Kroah-Hartman (Linux), Miguel de Icaza (Xamarin/GNOME), Stormy Peters (Mozilla), Simon Phipps (OSI), Jeff Atwood (Discourse), Emma Marshall (System76), Graham Morrison (Linux Voice), Matthew Miller (Fedora), Ilan Rabinovitch (Southern California Linux Expo), Daniel Foré (Elementary), Christian Schaller (Red Hat), Matthew Garrett (Linux), Zohar Babin (Kaltura), Steven J. Vaughan-Nichols (ZDNet), and others.

…and then there are the competitions and challenges. We had a debate where we had to take the opposite viewpoints of what we think, we had a rocking poetry contest, challenged our listeners to mash up the shows to humiliate us, ran a selfie competition, and more. In many cases we punished each other when we lost and even tried to take on a sausage company.

It is all a lot of fun, and if you haven’t checked the show out, be sure to head over to www.badvoltage.org and load up on some shows.

One of the most awesome aspects of Bad Voltage is our community. Our pad is at community.badvoltage.org and we have a fantastically diverse community of different ideas, perspectives and viewpoints. In many cases we have discussed a topic on the show and there has been a long, interesting, and always respectful debate on the forum. It is so much fun to be around.

I just want to say a huge thank-you to everyone who has supported the show and stuck with us through our first year. We have a lot of fun doing it, but the Bad Voltage community makes every ounce of effort worthwhile. I also want to thank my fellow presenters, Bryan, Stuart, and Jeremy; it is a pleasure getting to shoot the proverbial with you guys every few weeks.

Live Voltage!

Before I wrap up, I need to share an important piece of information. The Bad Voltage team will be performing our very first live show at the Southern California Linux Expo on the evening of Friday 20th Feb 2015 in Los Angeles.

We can’t think of a better place to do our first live show than SCALE, and we hope to see you there!

by jono at October 24, 2014 04:39 AM

October 22, 2014

Elizabeth Krumbach

3 weeks at home

I am sitting in a hotel room in Raleigh where I’m staying for a conference, but prior to this I had a full 3 weeks at home! It was the longest stretch I’ve had in months; even my gallbladder removal surgery didn’t afford me a full 3 weeks. Unfortunately during this blessed 3 weeks home MJ was out of town for a full 2 weeks of it. It also decided to be summer time in San Francisco (typical of early October) with temperatures rising to 90F for several days and our condo not cooling off. Some days it made work a challenge, and I sometimes fled to coffee shops. The cats didn’t seem amused by this either.

The time at home alone did give me a chance to chill out at home and listen to the Giants playoff games on the little AM radio I had set up in our living room. As any good pseudo-fan does I only loosely keep up with the team during the actual season, going to actual games only here and there as I have the opportunity, which I didn’t this year (too much travel + gallbladder). It felt nice to sit and listen to the games as I got some work done in the evenings. I did learn how much modern technology gets in the way of AM reception though, as the quality would tank when I turned on the track lighting in my living room, or at random times when my highrise neighbors must have been doing something.

Fleet week also came to San Francisco while I was home. I think I’ve only actually been in town for it twice, so it was a nice treat. To add to the fun I was meeting up with a friend to work on some OpenStack stuff on Sunday when they were doing their final show and her office offers amazing floor to ceiling windows with a stunning view of the bay. Perfect for watching the show!

I also did manage to get out for some non-work social time with a couple friends, and finally made it out to Off the Grid in the Marina for some street food adventuring. I hadn’t been before because I’m not the biggest fan of food trucks: the food is fine, but you end up standing while eating, making a mess, and not getting a meal that’s all that much cheaper than one at a proper restaurant with tables. Maybe I’m just a giant snob, but it was an interesting experience, and I got to take the cable car home, so that’s always fun.

And now Raleigh. I’m here for All Things Open which I’ll be blogging about soon. This kicked off about 3 weeks away from home, so I had to pack accordingly:

After Raleigh I’ll be flying to Miami for a cousin’s wedding, then staying several extra days in a beach hotel where I’ll be working (and taking breaks to visit the ocean!). At the end of the week I’m flying to Paris for the OpenStack Summit for a week. I’ve never been to Paris before so I’m really looking forward to that. When the conference wraps up I’m flying back stateside for another wedding for a family member, this time in Philadelphia. So during this time I’ll get to see MJ twice, as we meet in cities for weddings. Thankfully I head home after that, but then we’re off for a proper vacation a few days later – to Jamaica! Then maybe I’ll spend all of December in a stay-at-home coma, but I’ll probably end up going somewhere because apparently I really like airplanes. Plus December would be the only month I didn’t fly, and I can’t have that.

by pleia2 at October 22, 2014 11:17 PM

Akkana Peck

A surprise in the mousetrap

I went out this morning to check the traps, and found the mousetrap full ... of something large and not at all mouse-like.

[young bullsnake] It was a young bullsnake. Now slender and maybe a bit over two feet long, it will eventually grow into a larger relative of the gopher snakes that I used to see back in California. (I had a gopher snake as a pet when I was in high school -- they're harmless, non-poisonous and quite docile.)

The snake watched me alertly as I peered in, but it didn't seem especially perturbed to be trapped. In fact, it was so non-perturbed that when I opened the trap, the snake stayed right where it was. It had found a nice comfortable resting place, and it wasn't very interested in moving on a cold morning.

I had to poke it gently through the bars, hold the trap vertically and shake for a while before the snake grudgingly let go and slithered out onto the ground.

I wondered if it had found its way into the trap by chasing a mouse, but I didn't see any swellings that looked like it had eaten recently. I'm fairly sure it wasn't interested in the peanut butter bait.

I released the snake in a spot near the shed where the mousetrap is set up. There are certainly plenty of mice there for it to eat, and gophers when it gets a little larger, and there are lots of nice black basalt boulders to use for warming up in the morning, and gopher holes to hide in. I hope it sticks around -- gopher/bullsnakes are good neighbors.

[young bullsnake caught in mousetrap]

October 22, 2014 01:37 AM

October 20, 2014

Jono Bacon

Happy Birthday Ubuntu!

Today is Ubuntu’s ten-year anniversary. Scott did a wonderful job summarizing many of those early years and his own experience, and while I won’t be as articulate as him, I wanted to share a few thoughts on my experience too.

I heard of this super secret Debian startup from Scott James Remnant. When I worked at OpenAdvantage we would often grab lunch in Birmingham, and he filled me in on what he was working on, but left a bunch of the blanks out due to confidentiality.

I was excited about this new mystery distribution. For many years I had been advocating at conferences about a consumer-facing desktop, and felt that Debian and GNOME, complete with the exciting Project Utopia work from Robert Love and David Zeuthen, made sense. This was precisely what this new distro would be shipping.

When Warty was released I installed it and immediately became an Ubuntu user. Sure, it was simple, but the level of integration was a great step forward. More importantly though, what really struck me was how community-focused Ubuntu was. There was open governance, a Code Of Conduct, fully transparent mailing lists and IRC channels, and they had the Ocean’s Eleven of rock-star developers involved from Debian, GNOME, and elsewhere.

I knew I wanted to be part of this.

While at GUADEC in Stuttgart I met Mark Shuttleworth and had a short meeting with him. He seemed a pretty cool guy, and I invited him to speak at our very first LugRadio Live in Wolverhampton.

Mark at LugRadio Live.

I am not sure how many multi-millionaires would consider speaking to 250 sweaty geeks in a football stadium sports bar in Wolverhampton, but Mark did it, not once, but twice. In fact, one time he took a helicopter to Wolverhampton and landed at the dog racing stadium. We had to have a debate in the LugRadio team about who had the nicest car to pick him up in. It was not me.

This second LugRadio Live appearance was memorable because two weeks earlier I had emailed Mark to see if he had a spot for me at Canonical. OpenAdvantage was a three-year funded project and was wrapping up, and I was looking at other options.

Mark’s response was:

“Well, we are opening up an Ubuntu Community Manager position, but I am not sure it is for you.”

I asked him if he could send over the job description. When I read it I knew I wanted to do it.

Fast forward four interviews, the last of which was in his kitchen (which didn’t feel awkward, at all), and I got the job.

The day I got that job was one of the greatest days of my life. I felt like I had won the lottery: working on a project with a mission and meaning, and something that could grow my career and skill-set.

Canonical team in 2007

The day I got the job was not without worry though.

I was going to be working with people like Colin Watson, Scott James Remnant, Martin Pitt, Matt Zimmerman, Robert Collins, and Ben Collins. How on earth was I going to measure up?

A few months later I flew out to my first Ubuntu Developer Summit in Mountain View, California. Knowing little about California in November, I packed nothing but shorts and t-shirts. Idiot.

I will always remember the day I arrived, going to a bar with Scott and some others, meeting the team, and knowing absolutely nothing about what they were saying. It sounded like gibberish, and I felt like I was a fairly technical guy at this point. Obviously not.

What struck me though was how kind, patient, and friendly everyone was. The delta in technical knowledge was narrowed with kindness and mentoring. I met some of my heroes, and they were just normal people wanting to make an awesome Linux distro, and wanting to help others get in on the ride too.

What followed was an incredible seven and a half years. I travelled to Ubuntu Developer Summits, sprints, and conferences in more than 30 countries, helped create a global community enthused by a passion for openness and collaboration, experimented with different methods of getting people to work together, and met some of the smartest and kindest people walking on this planet.

The awesome Ubuntu community

Ubuntu helped to define my career, but more importantly, it helped to define my perspective and outlook on life. My experience in Ubuntu helped me learn how to think, to manage, and to process and execute ideas. It helped me to be a better version of me, and to fill my world with good people doing great things, all of which inspired my own efforts.

This is the reason why Ubuntu has always been much more than just software to me. It is a philosophy, an ethos, and most importantly, a family. While some of us have moved on from Canonical, and some others have moved on from Ubuntu, one thing we will always share is this remarkable experience and a special connection that makes us Ubuntu people.

by jono at October 20, 2014 05:52 PM

October 17, 2014

Eric Hammond

Installing aws-cli, the New AWS Command Line Tool

consistent control over more AWS services with aws-cli, a single, powerful command line tool from Amazon

Readers of this tech blog know that I am a fan of the power of the command line. I enjoy presenting functional command line examples that can be copied and pasted to experience services and features.

The Old World

Users of the various AWS legacy command line tools know that, though they get the job done, they are often inconsistent in where you get them, how you install them, how you pass options, how you provide credentials, and more. Plus, there are only tool sets for a limited number of AWS services.

I wrote an article that demonstrated the simplest approach I use to install and configure the legacy AWS command line tools, and it ended up being extraordinarily long.

I’ve been using the term “legacy” when referring to the various old AWS command line tools, which must mean that there is something to replace them, right?

The New World

The future of the AWS command line tools is aws-cli, a single, unified, consistent command line tool that works with almost all of the AWS services.

Here is a quick list of the services that aws-cli currently supports: Auto Scaling, CloudFormation, CloudSearch, CloudWatch, Data Pipeline, Direct Connect, DynamoDB, EC2, ElastiCache, Elastic Beanstalk, Elastic Transcoder, ELB, EMR, Identity and Access Management, Import/Export, OpsWorks, RDS, Redshift, Route 53, S3, SES, SNS, SQS, Storage Gateway, Security Token Service, Support API, SWF, VPC.

Support for the following appears to be planned: CloudFront, Glacier, SimpleDB.

The aws-cli software is being actively developed as an open source project on GitHub, with a lot of support from Amazon. You’ll note that the biggest contributors to aws-cli are Amazon employees, with Mitch Garnaat leading. Mitch is also the author of boto, the amazing Python library for AWS.

Installing aws-cli

I recommend reading the aws-cli documentation as it has complete instructions for various ways to install and configure the tool, but for convenience, here are the steps I use on Ubuntu:

sudo apt-get install -y python-pip
sudo pip install awscli

Add your Access Key ID and Secret Access Key to $HOME/.aws/config using this format:

[default]
aws_access_key_id = <access key id>
aws_secret_access_key = <secret access key>
region = us-east-1

Protect the config file:

chmod 600 $HOME/.aws/config

Optionally, set an environment variable pointing to the config file, especially if you put it in a non-standard location. For future convenience, also add this line to your $HOME/.bashrc:

export AWS_CONFIG_FILE=$HOME/.aws/config

Now, wasn’t that a lot easier than installing and configuring all of the old tools?

Testing

Test your installation and configuration:

aws ec2 describe-regions

The default output is in JSON. You can try out other output formats:

 aws ec2 describe-regions --output text
 aws ec2 describe-regions --output table
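You can also narrow the output with the global --query option, which takes a JMESPath expression. For example, something like this should print just the region names:

 aws ec2 describe-regions --query 'Regions[*].RegionName' --output text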

I posted this brief mention of aws-cli because I expect some of my future articles are going to make use of it instead of the legacy command line tools.

So go ahead and install aws-cli, read the docs, and start to get familiar with this valuable tool.

Notes

Some folks might already have a command line tool installed with the name “aws”. This is likely Tim Kay’s “aws” tool. I would recommend renaming that to another name so that you don’t run into conflicts and confusion with the “aws” command from the aws-cli software.

[Update 2013-10-09: Rename awscli to aws-cli as that seems to be the direction it’s heading.]

[Update 2014-10-16: Use new .aws/config filename standard.]

Original article: http://alestic.com/2013/08/awscli

by Eric Hammond at October 17, 2014 01:54 AM

October 16, 2014

Akkana Peck

Aspens are turning the mountains gold

Last week both of the local mountain ranges turned gold simultaneously as the aspens turned. Here are the Sangre de Cristos on a stormy day:

[Sangre de Cristos gold with aspens]

And then over the weekend, a windstorm blew a lot of those leaves away, and a lot of the gold is gone now. But the aspen groves are still beautiful up close ... here's one from Pajarito Mountain yesterday.

[Aspen grove on Pajarito Mountain]

October 16, 2014 07:37 PM

October 14, 2014

iheartubuntu

Tomboy The Original Note App


When I first started using Ubuntu back in early 2007 (Ubuntu 6.10) I fell in love with a pre-installed app called Tomboy. I had used Tomboy for several years until Ubuntu One notified users it would stop syncing Tomboy a couple years ago, and then the finality of Ubuntu One shutting down earlier this year. I rushed to find alternatives like Evernote, Gnotes, etc., but none of them were simple and easily integrated.

The Tomboy description is as follows... "Tomboy is a simple & easy to use desktop note-taking application for Linux, Unix, Windows, and Mac OS X. Tomboy can help you organize the ideas and information you deal with every day."

Some of Tomboy's notable features are text highlighting, inline spell checking, auto-linking of web & email addresses, undo/redo, font styling & sizing, and bulleted lists.

I am creating new notes as well as manually importing a few of my old notes from a couple years ago. Tomboy used to sync easily with Ubuntu One. Since that is no longer an option, you can do it with your Dropbox folder or your Google Drive folder (I'm using Insync).

Tomboy hasn't been updated in a while, but it installs and works great on Ubuntu 14.04 using:

sudo apt-get install tomboy

When you start Tomboy there will be a little square note-and-pen icon up on your top bar. Clicking the icon will show you the Tomboy menu options. To sync your notes across your computers, go to the Tomboy preferences, click the Synchronization tab, and pick a local folder in your Dropbox or Google Drive. That's pretty much it! Start writing those notes! On any other computer where you want your notes synced, select the same sync folder you chose on your first computer.

A few quick points. When you sync your notes, it will create a folder titled "0" in whatever folder you have chosen to sync your notes in.

If you want to launch Tomboy at system startup (in Ubuntu 14.04), search in Unity for "Startup Applications" and run it. Add a new app titled "Tomboy" with the command "tomboy", then save and close. Next time you log on, your Tomboy notes will be ready to use.

Tomboy also works with Windows and Mac OS X and installation instructions can be found here:

Windows ... https://wiki.gnome.org/Apps/Tomboy/Installing/Windows
Mac ... https://wiki.gnome.org/Apps/Tomboy/Installing/Mac

- - - - -

If you are still looking for syncing options, this comes in from Christian....

You can self-host your note sync server with either Rainy or Grauphel...

Learn more here...

http://dynalon.github.io/Rainy/

http://apps.owncloud.com/content/show.php?action=content&content=166654

by iheartubuntu (noreply@blogger.com) at October 14, 2014 11:42 AM

October 13, 2014

iheartubuntu

MAT - Metadata Anonymisation Toolkit


This is a great program used to help protect your privacy.

Metadata consists of information that characterizes data. Metadata is used to provide documentation for data products. In essence, metadata answers who, what, when, where, why, and how about every facet of the data that is being documented.

Metadata within a file can tell a lot about you. Cameras record data about when a picture was taken and what camera was used. Office suites and PDF tools automatically add author and company information to documents and spreadsheets.

Maybe you don't want to disclose that information on the web.

MAT can only remove metadata from your files; it does not anonymise their content, nor can it handle watermarking, steganography, or overly custom metadata fields/systems.

If you really want to be anonymous, use a format that does not contain any metadata, or better yet, use plain-text.

These are the formats supported to some extent:

Portable Network Graphics (PNG)
JPEG (.jpeg, .jpg, ...)
Open Document (.odt, .odx, .ods, ...)
Office Openxml (.docx, .pptx, .xlsx, ...)
Portable Document Fileformat (.pdf)
Tape ARchive (.tar, .tar.bz2, .tar.gz)
ZIP (.zip)
MPEG Audio (.mp3, .mp2, .mp1, .mpa)
Ogg Vorbis (.ogg)
Free Lossless Audio Codec (.flac)
Torrent (.torrent)

The President of the United States and his birth certificate would have greatly benefited from software such as MAT.

You can install MAT with this terminal command:

sudo apt-get install mat
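Typical command-line usage looks something like the sketch below (a rough guide only -- the exact flags can vary between versions, so check mat --help; the filename is just an example):

# List the metadata MAT considers harmful in a file (flag assumed; see mat --help)
mat -d vacation-photo.jpg

# Clean the file in place
mat vacation-photo.jpg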

Look for more articles about privacy soon, or find existing ones by searching our site for "privacy".

by iheartubuntu (noreply@blogger.com) at October 13, 2014 12:05 PM

October 12, 2014

iheartubuntu

Tasque TODO List App


We're getting back to some of the old basic apps that a lot of people used to use in Ubuntu. Many of them still work great, with no internet connection needed.

Tasque (pronounced like “task”) is a simple task management app (TODO list) for the Linux Desktop and Windows. It supports syncing with the online service Remember the Milk or simply storing your tasks locally.

The main window has the ability to complete a task, change the priority, change the name, and change the due date without additional property dialogs.

When a user clicks on a task priority, a list of possible priorities is presented and when selected, the task is re-prioritized in the order you wish.

When you click on the due date, a list of the next seven days is presented along with an option to remove the date or select a date from a calendar.

A user completes a task by clicking the check box on a task. The task is crossed out indicating it is complete and a timer begins counting down to the right of the task. When the timer is done, the task is removed from view.

As mentioned, Tasque can save tasks locally or use Remember the Milk, a free online to-do list, as a backend. On one of my computers, saving my tasks with RTM works great; on my computer at work, it won't sync my tasks. I haven't figured out why, but I will post any updates here once I get it working or find a workaround.

You can install Tasque from the Ubuntu Software Center or with this terminal command:

sudo apt-get install tasque

All in all, Tasque is a great little task app. Really simple to use!

by iheartubuntu (noreply@blogger.com) at October 12, 2014 05:02 AM

October 11, 2014

Akkana Peck

Railroading exponentially

or: Smart communities can still be stupid

I attended my first Los Alamos County Council meeting yesterday. What a railroad job!

The controversial issue of the day was the town's "branding". Currently, as you drive into Los Alamos on highway 502, you pass a tasteful rock sign proclaiming "LOS ALAMOS: WHERE DISCOVERIES ARE MADE". But back in May, the county council announced the unanimous approval of a new slogan, for which they'd paid an ad agency some $55,000: "LIVE EXPONENTIALLY".

As you might expect in a town full of scientists, the announcement was greeted with much dismay. What is it supposed to mean, anyway? Is it a reference to exponential population growth? Malignant tumor growth? Gaining lots of weight as we age?

The local online daily, tired of printing the flood of letters protesting the stupid new slogan, ran a survey about the "Live Exponentially" slogan. The results were that 8.24% liked it, 72.61% didn't, and 19.16% didn't like it and offered alternatives or comments. My favorites were Dave's suggestion of "It's Da Bomb!", and a suggestion from another reader, "Discover Our Secrets"; but many of the alternate suggestions were excellent, or hilarious, or both -- follow the link to read them all.

For further giggles, try a web search on the term. If you search without quotes, Ebola tops the list. With quotes, you get mostly religious tracts and motivational speakers.

The Council Meeting

(The rest of this is probably only of interest to Los Alamos folk.)

Dave read somewhere -- it wasn't widely announced -- that Friday's council meeting included an agenda item to approve spending $225,000 -- yes, nearly a quarter of a million dollars -- on "brand implementation". Of course, we had to go.

In the council discussion leading up to the call for public comment, everyone spoke vaguely of "branding" without mentioning the slogan. Maybe they hoped no one would realize what they were really voting for. But in the call for public comment, Dave raised the issue and urged them to reconsider the slogan.

Kristin Henderson seemed to have quite a speech prepared. She acknowledged that "people who work with math" universally thought the slogan was stupid, but she said that people from a liberal arts background, like herself, use the term to mean hiking, living close to nature, listening to great music, having smart friends and all the other things that make this such a great place to live. (I confess to being skeptical -- I can't say I've ever heard "exponential" used in that way.)

Henderson also stressed the research and effort that had already gone into choosing the current slogan, and dismissed the idea that spending another $50,000 on top of the $55k already spent would be "throwing money after bad." She added that showing the community some images to go with the slogan might change people's minds.

David Izraelevitz admitted that being an engineer, he initially didn't like "Live Exponentially". But he compared it to Apple's "Think Different": though some might think it ungrammatical, it turned out to be a highly successful brand because it was coupled with pictures of Gandhi and Einstein. (Hmm, maybe that slogan should be "Live Exponential".)

Izraelevitz described how he convinced a local business owner by showing him the ad agency's full presentation, with pictures as well as the slogan, and said that we wouldn't know how effective the slogan was until we'd spent the $50k for logo design and an implementation plan. If the council didn't like the results they could choose not to go forward with the remaining $175,000 for "brand implementation". (Councilor Fran Berting had previously gotten clarification that those two parts of the proposal were separate.)

Rick Reiss said that what really mattered was getting business owners to approve the new branding -- "the people who would have to use it." It wasn't so important what people in the community thought, since they didn't have logos or ads that might incorporate the new branding.

Pete Sheehey spoke up as the sole dissenter. He pointed out that most of the community input on the slogan has been negative, and that should be taken into account. The proposed slogan might have a positive impact on some people but it would have a negative impact on others, and he couldn't support the proposal.

Fran Berting said she was "not all that taken" with the slogan, but agreed with Izraelevitz that we wouldn't know if it was any good without spending the $50k. She echoed the "so much work has already gone into it" argument. Reiss also echoed "so much work", and that he liked the slogan because he saw it in print with a picture.

But further discussion was cut off. It was 1:30, the fixed end time for the meeting, and chairman Geoff Rodgers (who had pretty much stayed out of the discussion to this point) called for a vote. When the roll call got to Sheehey, he objected to the forced vote while they were still in the middle of a discussion. But after a brief consultation on Robert's Rules of Order, chairman Rodgers declared the discussion over and said the vote would continue. The motion was approved 5-1.

The Exponential Railroad

Quite a railroading. One could almost think it had been planned that way.

First, the item was listed as one of two in the "Consent Agenda" -- items which were expected to be approved all together in one vote with no discussion or public comment. It was moved at the last minute into "Business"; but that put it last on the agenda.

Normally that wouldn't have mattered. But although the council more often meets in the evenings and goes as long as it needs to, Friday's meeting had a fixed time of noon to 1:30. Even I could see that wasn't much time for all the items on the agenda.

And that mid-day timing meant that working folk weren't likely to be able to listen or comment. Further, the branding issue didn't come up until 1 pm, after some of the audience had already left to go back to work. As a result, there were only two public comments.

Logic deficit

I heard three main arguments repeated by every council member who spoke in favor:

  1. the slogan makes much more sense when viewed with pictures -- they all voted for it because they'd seen it presented with visuals;
  2. a lot of time, effort and money has already gone into this slogan, so it didn't make sense to drop it now; and
  3. if they didn't like the logo after spending the first $50k, they didn't have to approve the other $175k.

The first argument doesn't make any sense. If the pictures the council saw were so convincing, why weren't they showing those images to the public? Why spend an additional $50,000 for different pictures? I guess $50k is just pocket change, and anyone who thinks it's a lot of money is just being silly.

As for the second and third, they contradict each other. If most of the board thinks now that the initial $50k contract was so much work that we have to go forward with the next $50k, what are the chances that they'll decide not to continue after they've already invested $100k?

Exponentially low, I'd say.

I was glad of one thing, though. As a newcomer to the area faced with a ballot next month, it was good to see the council members in action, seeing their attitudes toward spending and how much they care about community input. That will be helpful come ballot time.

If you're in the same boat but couldn't make the meeting, catch the October 10, 2014 County Council Meeting video.

October 11, 2014 06:54 PM

October 09, 2014

Akkana Peck

What's nesting in our truck's engine?

We park the Rav4 outside, under an overhang. A few weeks ago, we raised the hood to check the oil before heading out on an adventure, and discovered a nest of sticks and grass wedged in above the valve cover. (Sorry, no photos -- we were in a hurry to be off and I didn't think to grab the camera.)

Pack rats were the obvious culprits, of course. There are lots of them around, and we've caught quite a few pack rats in our live traps. Knowing that rodents can be a problem since they like to chew through hoses and wiring, we decided we'd better keep an eye on the Rav and maybe investigate some sort of rodent-repelling technology.

Sunday, we got back from another adventure, parked the Rav in its usual place, went inside to unload before heading out for an evening walk, and when we came back out, there was a small flock of birds hanging around under the Rav. Towhees! Not only hanging around under the still-warm engine, but several times we actually saw one fly between the tires and disappear.

Could towhees really be our engine nest builders? And why would they be nesting in fall, with the days getting shorter and colder?

I'm keeping an eye on that engine compartment now, checking every few days. There are still a few sticks and juniper sprigs in there, but no real nest has reappeared so far. If it does, I'll post a photo.

October 09, 2014 12:10 AM

October 08, 2014

iheartubuntu

Check Ubuntu Linux for Shellshock


Shellshock is a new vulnerability that allows attackers to put code onto your machine, which could put your Ubuntu Linux system at a serious risk for malicious attacks.

Shellshock exploits a flaw in how Bash parses environment variables, allowing attackers to run commands on your computer. From there, hackers can launch programs, enable features, and even access your files. The bug only affects systems that use Bash, which includes most UNIX-based systems (Linux and Mac).

You can test your system by running this test command from Terminal:

env x='() { :;}; echo vulnerable' bash -c 'echo hello'

If you're not vulnerable, you'll get this result:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
hello

If you are vulnerable, you'll get:

vulnerable
hello

You can also check the version of bash you're running by entering:

bash --version

If you get an old, unpatched version such as 3.2.51(1)-release as a result, you will need to update. (The version string alone isn't conclusive, though, so rely on the test above.) Most Linux distributions already have patches available.

-----------

If your system is vulnerable, make sure your computer has all critical updates applied and it should be patched already. If you are using a version of Ubuntu that has already reached end-of-life status (12.10, 13.04, 13.10, etc.), you may be screwed and may need to start using a newer version of Ubuntu.

This should update Bash for you so your system is not vulnerable...

sudo apt-get update && sudo apt-get install --only-upgrade bash

by iheartubuntu (noreply@blogger.com) at October 08, 2014 11:48 PM

October 03, 2014

iheartubuntu

Ubuntu - 10 Years Strong


Ubuntu, the Debian-based Linux operating system, is approaching its 21st release in just a couple of weeks (October 23rd), now going 10 years strong!

Mark Shuttleworth invests in Ubuntu's parent company Canonical, which continues to lose money year after year. It's clear that profit isn't his main concern. There is still a clear plan for Ubuntu and Canonical. That plan appears to be very much 'cloud' and business based.

Shuttleworth is proud that the vast majority of cloud deployments are based on Ubuntu. The recent launch of Canonical's 'Cloud in a box' deployable Ubuntu system is another indication of where it sees things going.

Ubuntu Touch will soon appear on phones and tablets, which is really the glue for this cloud/mobile/desktop ecosystem. Ubuntu has evolved impressively over the last ten years and it will continue to develop in this new age.

Ubuntu provides a seamless ecosystem for devices deployed to businesses and users alike. Being able to run the identical software on multiple devices and in the cloud, all sharing the same data is very appealing.

Ubuntu will be at the heart of this with or without the survival of Canonical.

"I love technology, and I love economics and I love what’s going on in society. For me, Ubuntu brings those three things together in a unique way." - Mark Shuttleworth on the next 5 years of Ubuntu

by iheartubuntu (noreply@blogger.com) at October 03, 2014 07:48 PM

BirdFont Font Editor


If you have ever been interested in making your own fonts for fun or profit, BirdFont is an easy to use free font editor that lets you create vector graphics and export TTF, EOT & SVG fonts.

To install BirdFont, simply use the PPA below to ensure you always have the most updated version. Open the terminal and run the following commands:

sudo add-apt-repository ppa:ubuntuhandbook1/birdfont

sudo apt-get update

sudo apt-get install birdfont

If you don't like using a PPA repository, you can download the appropriate DEB package for your particular system....

http://ppa.launchpad.net/ubuntuhandbook1/birdfont/ubuntu/pool/main/b/birdfont/

If you need help developing a font, there is also an official tutorial here!

http://birdfont.org/doku/doku.php/tutorials

by iheartubuntu (noreply@blogger.com) at October 03, 2014 05:27 PM

Elizabeth Krumbach

33rd Birthday Weekend

I’m a big fan of trying new things and places, so it came as a surprise that when I decided upon a birthday getaway this past weekend we chose to go back to the Resort at Squaw Creek, where we had been last year. It wasn’t just travel exhaustion that made us pick this one; we knew we wanted to get some work done during the weekend, and the suite-style rooms were great for that. Honestly we love everything about this place – beautiful views, amazing pools, good food. The price was also right for a quick getaway.

The drive up was a long one, Friday evening traffic combined with a thunderstorm. We stopped for dinner at Cottonwood Restaurant in Truckee. By the time we arrived at the driveway to the resort the rain had transformed… what is that, slush? By the time we got to the front door it was properly snowing!

Saturday morning we had breakfast brought to our room (heaven!) and enjoyed the stunning view outside our window.

The rain kept us inside for most of the day, which was wonderful. I was able to get some work done on my book (as planned!) and MJ did a bunch of research into our first proper vacation of the year coming up in November. Fireplace, hot chocolate, the man I love, perfect!

As 4PM rolled around the rain tapered off and we went down to the pool. It was 45°F out, so not exactly swimming weather, but the pools were heated and the trio of hot tubs were a popular spot for other folks visiting for the weekend. It turned out wonderful, particularly given the standard warm fall we’re having in San Francisco. That evening we had a wonderful dinner (and dessert!) at the on-site restaurant.

Sunday was even more rainy. We took advantage of their option to pay an extra $85 to get an 8pm checkout, giving us the whole day to enjoy before we had to go home. The rain did end up keeping us from the pool, but I did take a 2 mile walk through the woods with an umbrella after lunch. In spite of the rain, it was a beautiful walk up the sometimes steep and rocky terrain through the woods.

Alas, it had to end. On our way out we stopped at FiftyFifty Brewing Company for a casual dinner. They had the most amazing mussels appetizer, I kind of want to go back to have that again. Dinner wrapped up with some cake!

Fortunately the drive home was quicker (and drier!) than our drive to the mountains had been and we got in shortly before 1AM.

My actual 33rd birthday was on Monday. I ended up making plans with a friend who was in town to celebrate her own birthday the following day. We met up at the San Francisco Zoo around 11AM and I finally got to meet the wolverines! Even better, we caught them as a keeper was putting out some treats, so we got to see them uncharacteristically bounding about their enclosure as they attacked the treat bags that were put out for them. Alas, in spite of staying until the opening of the Lion House, I still managed to miss the sneaky two-toed sloth who decided to hide from me.

We wrapped up the afternoon with lunch over at the Beach Chalet.

It was a great birthday weekend+birthday, aside from the whole turning 33 part. Getting older hasn’t tended to bother me, but time is passing too quickly, much still to do.

by pleia2 at October 03, 2014 05:26 AM

October 02, 2014

Akkana Peck

Photographing a double rainbow

[double rainbow]

The wonderful summer thunderstorm season here seems to have died down. But while it lasted, we had some spectacular double rainbows. And I kept feeling frustrated when I took the SLR outside only to find that my 18-55mm kit lens was nowhere near wide enough to capture it. I could try stitching it together as a panorama, but panoramas of rainbows turn out to be quite difficult -- there are no clean edges in the photo to tell you where to join one image to the next, and automated programs like Hugin won't even try.

There are plenty of other beautiful vistas here too -- cloudscapes, mesas, stars. Clearly, it was time to invest in a wide-angle lens. But how wide would it need to be to capture a double rainbow?

All over the web you can find out that a rainbow has a radius of 42 degrees, so you need a lens that covers 84 degrees to get the whole thing.

But what about a double rainbow? My web searches came to naught. Lots of pages talk about double rainbows, but Google wasn't finding anything that would tell me the angle.

I eventually gave up on the web and went to my physical bookshelf, where Color and Light in Nature gave me a nice table of primary and secondary rainbow angles for various wavelengths of light. It turns out the 42 degrees everybody quotes is for light of 600 nm wavelength, an orange color. At that wavelength, the primary angle is 42.0° and the secondary angle is 51.0°.

Armed with that information, I went back to Google and searched for double rainbow 51 OR 102 angle and found a nice Slate article on a Double rainbow and lightning photo. The photo in the article, while lovely (lightning and a double rainbow in the South Dakota badlands), only shows a tiny piece of the rainbow, not the whole one I'm hoping to capture; but the article does mention the 51-degree angle.

Okay, so 51°×2 captures both bows at 600 nm. But what about other wavelengths? A typical eye can see from about 400 nm (deep purple) to about 760 nm (deep red). From the table in the book:

Wavelength (nm)   Primary   Secondary
400               40.5°     53.7°
600               42.0°     51.0°
700               42.4°     50.3°

Notice that while the primary angles get smaller with shorter wavelengths, the secondary angles go the other way. That makes sense if you remember that the outer rainbow has its colors reversed from the inner one: red is on the outside of the primary bow, but the inside of the secondary one.

So if I want to photograph a complete double rainbow in one shot, I need a lens that can cover at least 108 degrees.

What focal length lens does that translate to? Howard's Astronomical Adventures has a nice focal length calculator. If I look up my Rebel XSi on Wikipedia to find out that other countries call it a 450D, and plug that into the calculator, then try various focal lengths (the calculator offers a chart but it didn't work for me), it turns out that I need an 8mm lens, which will give me a 108° 26' 46" field of view -- just about right.
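If you'd rather skip the calculator, the rectilinear field-of-view formula is easy enough to work out yourself. A quick sketch (assuming the 450D's sensor is roughly 22.2 mm wide, and ignoring the extra coverage a fisheye's distortion buys you):

import math

sensor_width = 22.2   # mm, approximate width of the 450D's APS-C sensor
focal_length = 8.0    # mm

# Horizontal field of view for an ideal rectilinear lens
fov = 2 * math.degrees(math.atan(sensor_width / (2 * focal_length)))
print("%.1f degrees" % fov)   # about 108.4 -- just over the 108 needed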

[Double rainbow with the Rokinon 8mm fisheye] So that's what I ordered -- a Rokinon 8mm fisheye. And it turns out to be far wider than I need -- apparently the actual field of view in fisheyes varies widely from lens to lens, and this one claims to have a 180° field. So the focal length calculator isn't all that useful. At any rate, this lens is plenty wide enough to capture those double rainbows, as you can see.

About those books

By the way, that book I linked to earlier is apparently out of print and has become ridiculously expensive. Another excellent book on atmospheric phenomena is Light and Color in the Outdoors by Marcel Minnaert (I actually have his earlier version, titled The Nature of Light and Color in the Open Air). Minnaert doesn't give the useful table of frequencies and angles, but he has lots of other fun and useful information on rainbows and related phenomena, including detailed instructions for making rainbows indoors if you want to measure angles or other quantities yourself.

October 02, 2014 07:37 PM

September 30, 2014

Elizabeth Krumbach

PuppetConf 2014

Wow, so many conferences lately! Fortunately for me, PuppetConf was local so I didn’t need to catch any flights or deal with hotel hassle, it was just a 2 block walk from home each day.

My focus for this conference was learning more about how people are using code-driven infrastructures similar to ours in the OpenStack Infrastructure project and meeting up with some colleagues, several of whom I’ve only communicated with online. I succeeded on both counts and it ended up being a great conference for me.

There was a keynote by Gene Kim, one of the authors of the “devops novel” The Phoenix Project, which I first learned about from my colleague Robert Collins. His talk, The Phoenix Project: Lessons Learned, focused on the book. In spite of having read the book, it was great to hear from Kim on the topic more directly as he talked about technical debt and outlined his 4 top lessons learned:

  • The business value of DevOps is higher than we thought.
  • DevOps Is As Good For Dev… …As It Is For Ops
  • The Need For High-Trust Management (can’t bog people down)
  • DevOps is not just for the unicorns… DevOps is for horses, too. (ie – not just for tech stars like Facebook)

Talk slides here.

The next keynote was by Kate Matsudaira of Popforms, who gave a talk titled Trust Me. I wasn’t sure what to expect with this one, but I was pleasantly surprised. She covered some of what one may call “soft skills” in the tech industry, including helping others, supporting your colleagues and in general being a resourceful person who people enjoy working with. Over the years I’ve seen far too much of people assuming these skills aren’t valuable, even as people look around and identify folks with these skills as the colleagues they like working with the most. Huge thanks to Kate for bringing attention to these skills. She also talked a lot about building trust within your organization, since it can often be hard for managers to evaluate employees who have the freedom to work unobstructed (as we want!), and about mechanisms to build that trust, including reporting what you do to your boss and team mates. Slides from her talk available here: Keynote: Trust Me slides

After the keynote I headed over to Evan Scheessele’s talk on Infrastructure-as-Code with Puppet Enterprise in the Cloud. He works in HP’s Printing & Personal Systems division and shared the evolution and use of a code-driven infrastructure on HP Cloud along with Puppet Enterprise. The driving vision in his organization was boiled down to a series of points:

  • Infrastructure as “Cattle” not “Pets”
  • Modern configuration-management means: Executable Documentation
  • “Infrastructure as Code”
  • Focus on the production-pattern, and automate it end-to-end
  • Everything is consistently reproducible

He also went application-specific, discussing their use of Jenkins, and hiera and puppetdb in PE. It was a great talk and a pleasure to catch up with him afterwards. Slides available here.


Thanks to Evan Scheessele for the photo

My talk was on How to Open Source Your Puppet Configuration and I brought along Monty Taylor and James E. Blair stick puppets I made to demonstrate the rationale of running our infrastructure as an open source project. I walked the audience through some of the benefits of making Puppet configurations public (or at least public within an organization), the importance of licensing and documentation and some steps for splitting up your configuration so it’s understandable and consumable by others. My slides are here.

On Wednesday I attended Gareth Rushgrove’s talk on Continuous Integration for Infrastructure as Code. He skipped over a lot of the more common individual testing mechanisms (puppet-lint, puppet-syntax, rspec-puppet, beaker) and dove into higher-level things like testing of images and containers and test-driven infrastructure (analogous to test-driven development). Through his talk he gave several examples of how this is accomplished, from use of Serverspec and the need to write tests before infrastructure, to writing tests to enforce policy and pulling data from PuppetDB to run tests. Slides here.

After lunch I headed over to Chris Hoge’s talk about Understanding OpenStack Deployments with the Puppet modules available. In spite of all my work with OpenStack, I haven’t had a very close look at these modules, so it was nice meeting up with him and Colleen Murphy from the puppet-openstack team. In his talk he walked the audience through some of the basic design decisions of OpenStack and then pulled in examples of how the Puppet modules for OpenStack are used to bring this all together. Slides here.

There are two talks I’ll have to catch once the videos are online: Continuous Infrastructure: Modern Puppet for the Jenkins Project – R.Tyler Croy, Jenkins (slides) and Infrastructure as Software – Dustin J. Mitchell, Mozilla, Inc. (slides). Both of these are open source infrastructures that I mentioned during my own talk! I wish I had taken the opportunity while we were all in one spot to meet together; fortunately I was able to chat with R.Tyler Croy prior to my talk, but his talk conflicted with mine and Dustin’s with the OpenStack talk.

In all, this was a very valuable event. I learned some interesting new things about how others are using code-driven infrastructures and I made some great connections.

More photos from PuppetConf here: https://www.flickr.com/photos/pleia2/sets/72157648049231891/

by pleia2 at September 30, 2014 08:50 PM

September 28, 2014

Akkana Peck

Petroglyphs, ancient and modern

In the canyons below White Rock there are many wonderful petroglyphs, some dating back many centuries, like this jaguar: [jaguar petroglyph in White Rock Canyon]

as well as collections like these:
[pictographs] [petroglyph collection]

Of course, to see them you have to negotiate a trail down the basalt cliff face. [Red Dot trail]

Up the hill in Los Alamos there are petroglyphs too, on trails that are a bit more accessible ... but I suspect they're not nearly so old. [petroglyph face]

September 28, 2014 03:47 AM

September 27, 2014

iheartubuntu

Ubuntu Kylin Wallpapers


Looking for some new wallpapers these days? The Chinese version of Ubuntu, Ubuntu Kylin, has some beautiful new wallpapers for the 14.10 release. Download and install the DEB to put them on your computer (a total of 24 wallpapers)...

http://security.ubuntu.com/ubuntu/pool/universe/u/ubuntukylin-wallpapers/ubuntukylin-wallpapers-utopic_14.10.0_all.deb
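For example, something like this should fetch and install the package from a terminal (the filename is taken from the URL above):

wget http://security.ubuntu.com/ubuntu/pool/universe/u/ubuntukylin-wallpapers/ubuntukylin-wallpapers-utopic_14.10.0_all.deb
sudo dpkg -i ubuntukylin-wallpapers-utopic_14.10.0_all.deb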

by iheartubuntu (noreply@blogger.com) at September 27, 2014 12:30 PM

September 25, 2014

Eric Hammond

Throw Away The Password To Your AWS Account

reduce the risk of losing control of your AWS account by not knowing the root account password

As Amazon states, one of the best practices for using AWS is

Don’t use your AWS root account credentials to access AWS […] Create an IAM user for yourself […], give that IAM user administrative privileges, and use that IAM user for all your work.

The root account credentials are the email address and password that you used when you first registered for AWS. These credentials have the ultimate authority to create and delete IAM users, change billing, close the account, and perform all other actions on your AWS account.

You can create a separate IAM user with near-full permissions for use when you need to perform admin tasks, instead of using the AWS root account. If the credentials for the admin IAM user are compromised, you can use the AWS root account to disable those credentials to prevent further harm, and create new credentials for ongoing use.

However, if the credentials for your AWS root account are compromised, the person who stole them can take over complete control of your account, change the associated email address, and lock you out.

I have consulted for companies that lost control of the AWS root account containing their assets. You want to avoid this.

Proposal

Given:

  • The AWS root account is not required for regular use as long as you have created an IAM user with admin privileges

  • Amazon recommends not using your AWS root account

  • You can’t accidentally expose your AWS root account password if you don’t know it and haven’t saved it anywhere

  • You can always reset your AWS root account password as long as you have access to the email address associated with the account

Consider this approach to improving security:

  1. Create an IAM user with full admin privileges. Use this when you need to do administrative tasks. Activate IAM user access to account billing information for the IAM user to have access to read and modify billing, payment, and account information. (A rough aws-cli sketch of this step follows this list.)

  2. Change the AWS root account password to a long, randomly generated string. Do not save the password. Do not try to remember the password. On Ubuntu, you can use a command like the following to generate a random password for copy/paste into the change password form:

    pwgen -s 24 1
    
  3. If you need access to the AWS root account at some point in the future, use the “Forgot Password” function on the signin form.
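As promised above, here is a rough aws-cli sketch of step 1. The user name, policy name, and password are just placeholders, and billing access still has to be activated in the console:

    # Create the IAM user (name is an example)
    aws iam create-user --user-name admin

    # Attach an inline policy granting full access
    aws iam put-user-policy --user-name admin --policy-name AdminAccess \
      --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"*","Resource":"*"}]}'

    # Give the user a console password so you can sign in with it
    aws iam create-login-profile --user-name admin --password 'choose-a-strong-password'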

It should be clear from this that protecting access to your email account is critical to your overall AWS security, as that is all that is needed to change your password, but that has been true for many online services for many years.

Caveats

You currently need to use the AWS root account in the following situations:

  • to change the email address and password associated with the AWS root account

  • to deactivate IAM user access to account billing information

  • to cancel AWS services (e.g., support)

  • to close the AWS account

  • to buy stuff on Amazon.com, Audible.com, etc. if you are using the same account (not recommended)

  • anything else? Let folks know in the comments.

MFA

For completeness, I should also reiterate Amazon’s constant and strong recommendation to use MFA (multi-factor authentication) on your root AWS account. Consider buying the hardware MFA device, associating it with your root account, then storing it in a lock box with your other important things.

You should also add MFA to your IAM accounts that have AWS console access. For this, I like to use Google Authenticator software running on a locked down mobile phone.

MFA adds a second layer of protection beyond just knowing the password or having access to your email account.

Original article: http://alestic.com/2014/09/aws-root-password

by Eric Hammond at September 25, 2014 10:04 PM

AWS Community Heroes Program

Amazon Web Services recently announced an AWS Community Heroes Program where they are starting to recognize publicly some of the many individuals around the world who contribute in so many ways to the community that has grown up around the services and products provided by AWS.

It is fun to be part of this community and to share the excitement that so many have experienced as they discover and promote new ways of working and more efficient ways of building projects and companies.

Here are some technologies I have gotten the most excited about over the decades. Each of these changed my life in a significant way as I invested serious time and effort learning and using the technology. The year represents when I started sharing the “good news” of the technology with people around me, who at the time usually couldn’t have cared less.

  • 1980: Computers and Programming - “You can write instructions and the computer does what you tell it to! This is going to be huge!”

  • 1987: The Internet - “You can talk to people around the world, access information that others make available, and publish information for others to access! This is going to be huge!”

  • 1993: The World Wide Web - “You can view remote documents by clicking on hyperlinks, making it super-easy to access information, and publishing is simple! This is going to be huge!”

  • 2007: Amazon Web Services - “You can provision on-demand disposable compute infrastructure from the command line and only pay for what you use! This is going to be huge!”

I feel privileged to have witnessed amazing growth in each of these and look forward to more productive use on all fronts.

There are a ton of local AWS meetups and AWS user groups where you can make contact with other AWS users. AWS often sends employees to speak and share with these groups.

A great way to meet thousands of people in the AWS community (and to spend a few days in intense learning about AWS no matter your current expertise level) is to attend the AWS re:Invent conference in Las Vegas this November. Perhaps I’ll see you there!

Original article: http://alestic.com/2014/09/aws-community-heroes

by Eric Hammond at September 25, 2014 10:03 PM

iheartubuntu

Inside Bitcoins Las Vegas Is Two Weeks Away - 10% OFF!


I Heart Ubuntu is excited to be partnering with Inside Bitcoins Conference and Expo once again, which will be returning to Las Vegas at the Flamingo Hotel on October 5-7, 2014!

The event will explore the way that cryptocurrency has been affecting the payments industry, and will cover a wide range of topics including mainstream adoption, compliance, bitcoin startups, investing, mining, altcoins, equipment, and more. The first 300 paid attendees will receive US$50 in bitcoin.

New to Inside Bitcoins Las Vegas will be a half day of small classroom-style workshops taught by cryptocurrency veterans, which will provide attendees with an interactive, informative setting to learn about various facets of the bitcoin ecosystem.

Recently announced is a keynote by Patrick Byrne, CEO of Overstock.com, who will be leading a session titled, “Cryptosecurities: the Next Decentralized Frontier” on October 6 at 3:30pm. Byrne will also be making an exciting announcement at the event regarding Overstock’s latest development on the Bitcoin front.

Featured speakers include:

  • Patrick Byrne, CEO, Overstock.com 
  • Bobby Lee, CEO, BTC China & Board Member, Bitcoin Foundation
  • Daniel Larimer, Founder, Bitshares.org
  • Perianne Boring, Founder & President, Chamber of Digital Commerce

And many more! See the full roster of speakers here.

I Heart Ubuntu is once again partnering with Inside Bitcoins to offer all readers 10% OFF Gold and Silver Passports. Enter code HEART at checkout to redeem your discount. Register now!

by iheartubuntu (noreply@blogger.com) at September 25, 2014 01:34 AM

September 23, 2014

iheartubuntu

ONIONSHARE - Send Big Files Securely and Anonymously


OnionShare lets you securely and anonymously share files of any size. It works by starting a web server, making it accessible as a Tor hidden service, and generating an unguessable URL to access and download the files. It doesn't require setting up a server on the internet somewhere or using a third party filesharing service. You host the file on your own computer and use a Tor hidden service to make it temporarily accessible over the internet. The other user just needs to use Tor Browser to download the file from you.

Features include:

  • A user-friendly drag-and-drop graphical user interface that works in Windows, Mac OS X, and Linux
  • Ability to share multiple files and folders at once
  • Support for multiple people downloading files at once
  • Automatically copies the unguessable URL to your clipboard
  • Shows you the progress of file transfers
  • Automatically closes OnionShare when the file is done transferring, to reduce the attack surface
  • Localized into several languages, and supports international unicode filenames
  • Designed to work in Tails, for high risk users

You can learn more about OnionShare here: https://onionshare.org/

To install OnionShare on Ubuntu, open a terminal and type:

sudo add-apt-repository ppa:micahflee/ppa

sudo apt-get update && sudo apt-get install onionshare

Before you can share files, you need to open Tor Browser in the background. This will provide the Tor service that OnionShare uses to start the hidden service.

Open OnionShare and drag and drop files and folders you wish to share, and start the server. It will show you a long, random-looking URL such as http://cfxipsrhcujgebmu.onion/7aoo4nnzj3qurkafvzn7kket7u and copy it to your clipboard. This is the secret URL that can be used to download the file you're sharing. If you'd like multiple people to be able to download this file, uncheck the "close automatically" checkbox.

Send this URL to the person you're trying to send the files to. If the files you're sending aren't secret, you can use normal means of sending the URL: emailing it, posting it to Facebook or Twitter, etc. If you're trying to send secret files then it's important to send this URL securely.

The person who is receiving the files doesn't need OnionShare. All they need is to open the URL you send them in Tor Browser to be able to download the file.



by iheartubuntu (noreply@blogger.com) at September 23, 2014 12:00 PM

Jono Bacon

Bringing Literacy to Millions of Kids With Open Source

Today we are launching the Global Learning XPRIZE, complete with an Indiegogo crowdfunding campaign.

This is a $15 million competition in which teams are challenged to create Open Source software that will teach a child to read, write, and perform arithmetic in 18 months without the aid of a teacher. This is not designed to replace teachers but to instead provide an educational solution where little or none exists.

There are 57 million children aged 5 – 12 in the world today who have no access to education. There are 250 million children below basic literacy levels, even after several years of school. You may think the solution to this is to build more schools, but we would need an extra 1.6 million teachers by next year to provide universal primary education.

This is a tragedy.

This new XPRIZE is designed to help fix this.

Every child should have a right to the core ingredient that is literacy. It unlocks their potential and opens up opportunity. Just think of all the resources we depend on today for growth and education…the Internet, books, wikipedia, collaborative tools…without literacy all of these are inaccessible. It is time to change this. Too many suffer from a lack of literacy, and sadly girls bear much of the brunt of this too.

This prize is open to anyone to participate in. Professional developers, hobbyists, students, scientists, teachers…everyone is welcome to join in and compete. While the $15 million purse is attractive in itself, just think of the impact of potentially changing the lives of hundreds of millions of kids.

Coopetition For a Great Cause

What really excites me about this new XPRIZE is that it is the first Open Source XPRIZE. The winning team and the four runner-up teams will be expected to Open Source their entire code-base, assets, and material. This will create a solid foundation of education technology that can live on…long past the conclusion of this XPRIZE.

That isn’t the only reason why this excites me though. The Open Source nature of this prize provides an incredible opportunity for coopetition; where teams can collaborate around common areas of interest and problem-solving, while keeping their secret sauce to themselves. The impact of this could be profound.

I will be working hard to build an environment in which we encourage this kind of collaboration. It makes no sense for 100 teams to all solve the same problems privately in their own silo. Let’s get everyone up and running in GitHub, collaborating around these common challenges, so all the teams benefit from that pooling of resources.

Let’s also open this up so everyone can help us be successful. Let’s invite designers, translators, testers, teachers, scientists, musicians, artists and more…everyone has something they can bring to solve one of our grandest challenges, and help create a more literate and peaceful world.

Everyone Can Play a Part

As part of this new XPRIZE we are also launching a crowdfunding campaign that is designed to raise additional resources so we can do even more as part of the prize. We have already funded the $15 million prize purse and some field-testing, but this crowdfunding campaign will provide the resources for us to do so much more.

This will help us broaden the field-testing in more countries, with more kids, to grow a global community around solving these grand challenges, build a collaborative environment for teams to work together on common problems, and optimize this new XPRIZE to be as successful as possible. Every penny contributed helps us to do more and ultimately squeeze the most value out of this important XPRIZE.

There are ten things you can do to help:

  1. Contribute! – a great place to start is to buy one of our awesome perks from the crowdfunding campaign. Find out more here.
  2. Join the Community – come and meet the new XPRIZE community at http://forum.xprize.org and share ideas, brainstorm, and collaborate around new projects.
  3. Refer Friends and Win Prizes – want to win an expenses-paid trip to our Visioneering event where we create new XPRIZEs while also helping spread the word? To find out more click here.
  4. Download the Street Team Kit – head over to our Get Involved page and download a free kit with avatars, banners, posters, presentations, FAQs and more. The page also includes videos for how to get started!
  5. Create and Share Digital Content – we are encouraging authors, developers, content-creators and more to create content that will spread the word about literacy, the Global Learning XPRIZE, and more!
  6. Share and Tag Your Fave Children’s Book – which children’s books have been the most memorable for you? Share your favorite (and preferably post a picture of the cover), complete with a link to http://igg.me/at/learningxprize and tag 5 friends to share theirs too! When using social media, be sure to use the #learningprize hashtag.

  7. Show Your Pride –  go and download the Street Team Kit and use the images and avatars in there to change your profile picture and banner on your favorite social media networks (e.g. Twitter, Facebook, Google+).
  8. Create Your ‘Learning Moment’ Video – record a video about how learning has really impacted your life and share it on a video website (such as YouTube). Give the video the title “Global Learning XPRIZE: My Learning Moment“. Be sure to share your video on social media too with the #learningprize hashtag!
  9. Put Posters up in Your Community – go and download the Street Team Kit, print the posters out and put them up in your local coffee shops, universities, colleges, schools, and elsewhere!
  10. Organize a Local Event – create a local event to share the Global Learning XPRIZE. Fortunately we have a video on our Get Involved page that explains how you can do this, and we have a presentation deck with notes ready for you to use!

I know a lot of people who read my blog are Open Source folks, and I believe this prize offers an incredible opportunity for us to come together to have a very real profound impact on the world. Come and join the community, support the crowdfunding campaign, and help us to all be successful in bringing literacy to millions.

by jono at September 23, 2014 05:00 AM

September 22, 2014

Akkana Peck

Pi Crittercam vs. Bushnell Trophycam

I had the opportunity to borrow a commercial crittercam for a week from the local wildlife center. [Bushnell Trophycam vs. Raspberry Pi Crittercam] Having grown frustrated with the high number of false positives on my Raspberry Pi based crittercam, I was looking forward to seeing how a commercial camera compared.

The Bushnell Trophycam I borrowed is a nicely compact, waterproof unit, meant to strap to a tree or similar object. It has an 8-megapixel camera that records photos to the SD card -- no wi-fi. (I believe there are more expensive models that offer wi-fi.) The camera captures IR as well as visible light, like the PiCam NoIR, and there's an IR LED illuminator (quite a bit stronger than the cheap one I bought for my crittercam) as well as what looks like a passive IR sensor.

I know the TrophyCam isn't immune to false positives; I've heard complaints along those lines from a student who's using them to do wildlife monitoring for LANL. But how would it compare with my homebuilt crittercam?

I put out the TrophyCam the first night, with bait (sunflower seeds) in front of the camera. In the morning I had ... nothing. No false positives, but no critters either. I did have some shots of myself, walking away from it after setting it up, walking up to it to adjust it after it got dark, and some sideways shots while I fiddled with the latches trying to turn it off in the morning, so I know it was working. But no woodrats -- and I always catch a woodrat or two in PiCritterCam runs. Besides, the seeds I'd put out were gone, so somebody had definitely been by during the night. Obviously I needed a more sensitive setting.

I fiddled with the options, changed the sensitivity from automatic to the most sensitive setting, and set it out for a second night, side by side with my Pi Crittercam. This time it did a little better, though not by much: one nighttime shot with something in it, plus one shot of someone's furry back and two shots of a mourning dove after sunrise.

[blown-out image from Bushnell Trophycam] What few nighttime shots there were were mostly so blown out you couldn't see enough detail to be sure what was there. Doesn't this camera know how to adjust its exposure? The shot here has a creature in it. See it? I didn't either, at first. It's just to the right of the bush. You can just see the curve of its back and the beginning of a tail.

Meanwhile, the Pi cam sitting next to it caught eight reasonably exposed nocturnal woodrat shots and two dove shots after dawn. And 369 false positives where a leaf had moved in the wind or a dawn shadow was marching across the ground. The TrophyCam only shot 47 photos total: 24 were of me, fiddling with the camera setup to get them both pointing in the right direction, leaving 20 false positives.

So the Bushnell, clearly, gives you fewer false positives to hunt through -- but you're also a lot less likely to catch an actual critter. It also doesn't deal well with exposures in small areas and close distances: its IR light source seems to be too bright for the camera to cope with. I'm guessing, based on the name, that it's designed for shooting deer walking by fifty feet away, not woodrats at a two-foot distance.

Okay, so let's see what the camera can do in a larger space. The next two nights I set it up in large open areas to see what walked by. The first night it caught four rabbit shots, with only five false positives. The quality wasn't great, though: all long exposures of blurred bunnies. The second night it caught nothing at all overnight, but three rabbit shots the next morning. No false positives.

[coyote caught on the TrophyCam] The final night, I strapped it to a piñon tree facing a little clearing in the woods. Only two morning rabbits, but during the night it caught a coyote. And only 5 false positives. I've never caught a coyote (or anything else larger than a rabbit) with the PiCam.

So I'm not sure what to think. It's certainly a lot more relaxing to go through the minimal output of the TrophyCam to see what I caught. And it's certainly a lot easier to set up, and more waterproof, than my jury-rigged milk carton setup with its two AC cords, one for the Pi and one for the IR sensor. Being self-contained and battery operated makes it easy to set up anywhere, not just near a power plug.

But it's made me rethink my pessimistic notion that I should give up on this homemade PiCam setup and buy a commercial camera. Even on its most sensitive setting, I can't make the TrophyCam sensitive enough to catch small animals. And the PiCam gets better picture quality than the Bushnell, not to mention the option of hooking up a separate camera with flash.

So I guess I can't give up on the Pi setup yet. I just have to come up with a sensible way of taming the false positives. I've been doing a lot of experimenting with SimpleCV image processing, but alas, it's no better at detecting actual critters than my simple pixel-counting script was. But maybe I'll find the answer, one of these days. Meanwhile, I may look into battery power.
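
For the curious, the pixel-counting idea is roughly this (a sketch using ImageMagick's compare rather than my actual script; the filenames and threshold are just placeholders):

# Count how many pixels differ between two consecutive frames; -fuzz ignores
# tiny per-pixel changes. compare prints the count on stderr.
changed=$(compare -metric AE -fuzz 10% prev.jpg cur.jpg null: 2>&1)
echo "$changed pixels changed"
# Flag a possible critter if that count is above some threshold (a few thousand, say).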

September 22, 2014 08:29 PM

September 19, 2014

Akkana Peck

Mirror, mirror

A female hummingbird -- probably a black-chinned -- hanging out at our window feeder on a cool cloudy morning.

[female hummingbird at the window feeder]

September 19, 2014 01:04 AM

September 18, 2014

Elizabeth Krumbach

Offline, CLI-based Gerrit code review with Gertty

This past week I headed to Florida to present at Fossetcon and thought it would be a great opportunity to do a formal review of a new tool recently released by the OpenStack Infrastructure team (well, mostly James E. Blair): Gertty.

The description of this tool is as follows:

As compared to the web interface, the main advantages are:

  • Workflow — the interface is designed to support a workflow similar to reading network news or mail. In particular, it is designed to deal with a large number of review requests across a large number of projects.
  • Offline Use — Gertty syncs information about changes in subscribed projects to a local database and local git repos. All review operations are performed against that database and then synced back to Gerrit.
  • Speed — user actions modify locally cached content and need not wait for server interaction.
  • Convenience — because Gertty downloads all changes to local git repos, a single command instructs it to checkout a change into that repo for detailed examination or testing of larger changes.

For me the two big ones were the CLI-based workflow and offline use: I could review patches while on a plane or on terrible hotel wifi!

I highly recommend reading the announcement email to learn more about the features, but to get going here’s a quick rundown for the currently released version 1.0.2:

First, you’ll need to set a password in Gerrit so you can use the REST API. Do that by logging into Gerrit and going to https://review.openstack.org/#/settings/http-password
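
If you want to sanity-check that password before configuring Gertty, Gerrit's REST API answers on authenticated /a/ URLs; something like the following should print your account details as JSON (Gerrit prefixes the response with a )]}' guard line). The username and password are placeholders, and --anyauth just lets curl negotiate whatever auth scheme the server expects:

curl --anyauth -u USERNAME:HTTP_PASSWORD https://review.openstack.org/a/accounts/self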

From there:

pip install gertty

wget https://git.openstack.org/cgit/stackforge/gertty/plain/examples/openstack-gertty.yaml -O ~/.gertty.yaml

Edit ~/.gertty.yaml and update anything that says “CHANGEME”

A couple things worthy of note:

  • Be aware that, by default, Gertty uses ~/git/ for the git-root; I had to change this in my ~/.gertty.yaml so it didn't touch my existing ~/git/ directory.
  • You can also run it in a venv, as described on the pypi page (a minimal sketch follows this list).
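
A minimal venv setup might look like this (paths are just an example):

virtualenv ~/gertty-venv
~/gertty-venv/bin/pip install gertty
~/gertty-venv/bin/gertty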

Now run gertty from your terminal!

When you first load it up, you get a welcome screen with some hints on how to use it, including the all important “press F1 for help”:

Note: I use xfce4-terminal, where F1 is bound to terminal help; see the Xfce FAQ to learn how to disable this so you can actually read the Gertty help and don't have to ask on IRC how to do simple things like I did ;)

As instructed, from here you hit “L” to list projects; this is the page where you can subscribe to them:

You subscribe to projects by pressing “s” and they will show up as bright white, then you can navigate into them to list open reviews:

Go to a review you want to look at and hit enter, bringing up the review screen. This should look very familiar, just text only. I’ve expanded my standard 80×24 terminal window here so you can get a good look at what the full screen looks like:

Navigate down to < Diff > to see the diff. This is pretty cool: instead of showing each file on a separate page like the web UI, it shows you a unified page with all of the file diffs, so you just need to scroll through to see them all:

Finally, you review! Select < Review > back on the main review page and it will pop up a screen that allows you to select your +2, +1, -1, etc and add a message:

Your reviews are synced along with everything else when Gertty knows it’s online and can pull down review updates and upload your changes. At any time you can look at the top right of your screen to see how many pending sync requests it has.

When you want to quit, CTRL-q

I highly recommend giving it a spin. Feel free to ask questions about usage in #openstack-infra and bugs are tracked in Storyboard here: https://storyboard.openstack.org/#!/project/698. The code lives in a stackforge repo at: http://git.openstack.org/cgit/stackforge/gertty

by pleia2 at September 18, 2014 12:46 AM

September 17, 2014

Elizabeth Krumbach

Fossetcon 2014

As I wrote in my last post I attended Fossetcon this past weekend. The core of the event kicked off on Friday with a keynote by Iris Gardner on how Diversity Creates Innovation and the work that the CODE2040 organization is doing to help talented minorities succeed in technology. I first heard about this organization back in 2013 at OSCON, so it was great to hear more about their recent successes with their summer Fellows Program. It was also great to hear that their criteria for talent not only included coding skills, but also sought out a passion for engineering and leadership skills.

After a break, I went to see PJ Hagerty give his talk, Meetup Groups: Act Locally – Think Globally. I’ve been running open source related groups for over a decade, so I’ve been in this space for quite a long time and was hoping to get some new tips, and PJ didn’t disappoint! He led off with the need to break out of the small “pizza and a presentation by a regular” grind, which is indeed important to growing a group and making people show up. Some of his suggestions for doing this included:

  • Seek out students to attend and participate in the group, they can be some of your most motivated attendees and will bring friends
  • Seek out experienced programmers (and technologists) not necessarily in your specific field to give more agnostic talks about general programming/tech practices
  • Do cross-technology meetups – a PHP and Ruby night! Maybe Linux and BSD?
  • Bring in guest speakers from out of town (if they’re close enough, many will come for the price of gas and/or train/bus ticket – I would!)
  • Send members to regional conferences… or run your own conference
  • Get kids involved
  • Host an OpenHack event

I’ll have to see what my co-conspirators (er, organizers) at some local groups think of these ideas; it certainly would be fun to spice up some of the groups I regularly attend.

From there I went to MySQL Server Performance Tuning 101 by Ligaya Turmelle. Her talk centered around the fact that MySQL tuning is not simple, but went through a variety of mechanisms to tune it in different ways for specific cases you may run into. Perhaps most useful to me were her tips for gathering usage statistics from MySQL, I was unfamiliar with many of the metrics she pulled out. Very cool stuff.

After lunch and some booth duty, I headed over to Crash Course in Open Source Cloud Computing presented by Mark Hinkle. Now, I work on OpenStack (referred to as the “Boy Band” of cloud infrastructures in the talk – hah!), so my view of the cloud world is certainly influenced by that perspective. It was great to see a whirlwind tour of other and related technologies in the open source ecosystem.

The closing keynote for the day was by Deb Nicholson, Style or substance? Free Software is Totally the 80’s. She gave a bit of a history of free software and speculated as to whether our movement would be characterized by a shallow portrayal of “unconferences and penguin swag” (like 80s neon clothes and extravagance) or how free software communities are changing the world (like groups in the 80s who were really seeking social change or the fall of the Berlin wall). Her hope is that by stepping back and taking a look at our community that perhaps we could shape how our movement is remembered and focus on what is important to our future.

Saturday I had more booth duty with my colleague Yolanda Robla who came in from Spain to do a talk on Continuous integration automation. We were joined by another colleague from HP, Mark Atwood, who dropped by the conference for his talk How to Get One of These Awesome Open Source Jobs – one of my favorites.

The opening keynote on Saturday was Considering the Future of Copyleft by Bradley Kuhn. I always enjoy going to his talks because I’m considerably more optimistic about the health and future of free software, so his strong copyleft stance makes me stop and consider where I truly stand and what that means. He worries that an ecosystem of permissive licenses (like Apache, MIT, BSD) will lead to companies doing the least possible for free software and keeping all their secret sauces secret, diluting the ecosystem and making it less valuable for future consumers of free software since they’ll need the proprietary components. I’m more hopeful than that, particularly as I see real free software folks starting to get jobs in major companies and staying true to their free software roots. Indeed, these days I spend a vast majority of my time working on Apache-licensed software for a large company who pays me to do the work. Slides from his talk are here, I highly recommend having a browse: http://ebb.org/bkuhn/talks/FOSSETCON-2014/copyleft-future.html

After some more boothing, I headed over to Apache Mesos and Aurora, An Operating System For The Datacenter by David Lester. Again, being on the OpenStack bandwagon these past few years I haven’t had a lot of time to explore the ecosystem elsewhere, and I learned that this is some pretty cool stuff! Lester works for Twitter and talked some about how Twitter and other companies in the community are using both the Mesos and Aurora tools to build their efficient, fault tolerant datacenters and how it’s led to impressive improvements in the reliability of their infrastructures. He also did a really great job explaining the concepts of both, hooray for diagrams. I kind of want to play with them now.

Introduction to The ELK Stack: Elasticsearch, Logstash & Kibana by Aaron Mildenstein was my next stop. We run an ELK stack in the OpenStack Infrastructure, but I’ve not been very involved in the management of that, instead focusing on how we’re using it in elastic-recheck, so I hoped this talk would fill in some of the fundamentals for me. It did, so I was happy, but I have to admit that I was pretty disappointed to see demos of plugins that required a paid license.

As the day wound down, I finally had my talk: Code Review for Systems Administrators.


Code Review for Sysadmins talk, thanks to Yolanda Robla for taking the photo

I love giving this talk. I’m really proud of the infrastructure that has been built for OpenStack and it’s one that I’m happy and excited to work with every day – in part because we do things through code review. Even better, my excitement during this presentation seemed contagious, with an audience that seemed really engaged with the topic and impressed. Huge thanks to everyone who came and particularly to those who asked questions and took time to chat with me after. Slides from my talk are available here: fossetcon-code-review-for-sysadmins/

And then we were at the end! The conference wrapped up with a closing keynote on Open Source Is More than Code by Jordan Sissel. I really loved this talk. I’ve known for some time that the logstash community was one of the friendlier ones, with their mantra of “If a newbie has a bad time, it’s a bug.” This talk dove further into that ethos in their community and how it’s impacted how members of the project handle unhappy users. He also talked about improvements made to documentation (both inline in code and formal documentation) and how they’ve tried to “break away from text” some and put more human interaction in their community so people don’t feel so isolated and dehumanized by a text only environment (though I do find this is where I’m personally most comfortable, not everyone feels that way). I hope more projects will look to the logstash community as a good example of how we all can do better, I know I have some work to do when it comes to support.

Thanks again to conference staff for making this event such a fun one, particularly as it was their first year!

by pleia2 at September 17, 2014 12:44 AM

September 16, 2014

Elizabeth Krumbach

Ubuntu at Fossetcon 2014

Last week I flew out to the east coast to attend the very first Fossetcon. The conference was on the smaller side, but I had a wonderful time meeting up with some old friends, meeting some new Ubuntu enthusiasts and finally meeting some folks I’ve only communicated with online. The room layout took some getting used to, but the conference staff was quick to put up signs and direct attendees the right way, in general making for a pretty smooth conference experience.

On Thursday the conference hosted a “day zero” that had training and an Ubucon. I attended the Ubucon all day, which kicked off with Michael Hall doing an introduction to the Ubuntu on Phones ecosystem, including Mir, Unity8 and the Telephony features that needed to be added to support phones (voice calling, SMS/MMS, cell data, SIM card management). He also talked about the improved developer portal with more resources aimed at app developers, including the Ubuntu SDK and simplified packaging with click packages.

He also addressed the concern of many about whether Ubuntu could break into the smartphone market at this point, arguing that it’s a rapidly developing and changing market, with every current market leader only having been there for a handful of years, and that there is still room for new ideas to play and win. Canonical feels that convergence between phone and desktop/laptop gives Ubuntu a unique selling point, and that users will like it because of its intuitive design, with lots of swiping and scrolling actions that give apps the most screen space possible. It was interesting to hear that partners/OEMs can offer operator differentiation as a layer without fragmenting the actual operating system (something that Android struggles with), leaving the core operating system independently maintained.

This was followed up by a more hands on session on Creating your first Ubuntu SDK Application. Attendees downloaded the Ubuntu SDK and Michael walked through the creation of a demo app, using the App Dev School Workshop: Write your first app document.

After lunch, Nicholas Skaggs and I gave a presentation on 10 ways to get involved with Ubuntu today. I had given a “5 ways” talk earlier this year at SCaLE in Los Angeles, so it was fun to do a longer one with a co-speaker and have his five items added in, along with some other general tips for getting involved with the community. I really love giving this talk; the feedback from attendees throughout the rest of the conference was overwhelmingly positive, and I hope to get some follow-up emails from some new contributors looking to get started. Slides from our presentation are available as pdf here: contributingtoubuntu-fossetcon-2014.pdf


Ubuntu panel, thanks to Chris Crisafulli for the photo

The day wrapped up with an Ubuntu Q&A Panel, which had Michael Hall and Nicholas Skaggs from the Community team at Canonical, Aaron Honeycutt of Kubuntu and myself. Our quartet fielded questions from moderator Alexis Santos of Binpress and the audience, on everything from the Ubuntu phone to challenges of working with such a large community. I ended up drawing from my experience with the Xubuntu community a lot in the panel, especially as we drilled down into discussing how much success we’ve had coordinating the work of the flavors with the rest of Ubuntu.

The next couple days brought Fossetcon proper, which I’ll write about later. The Ubuntu fun continued though! I was able to give away 4 copies of The Official Ubuntu Book, 8th Edition, which I signed, and got José Antonio Rey to sign as well since he had joined us for the conference from Peru.

José ended up doing a talk on Automating your service with Juju during the conference, and Michael Hall had the opportunity to give a talk on Convergence and the Future of App Development on Ubuntu. The Ubuntu booth also looked great and was one of the most popular of the conference.

I really had a blast talking to Ubuntu community members from Florida, they’re a great and passionate crowd.

by pleia2 at September 16, 2014 05:01 PM

September 14, 2014

Akkana Peck

Global key bindings in Emacs

Global key bindings in emacs. What's hard about that, right? Just something simple like

(global-set-key "\C-m" 'newline-and-indent)
and you're all set.

Well, no. global-set-key gives you a nice key binding that works ... until the next time you load a mode that wants to redefine that key binding out from under you.

For many years I've had a huge collection of mode hooks that run when specific modes load. For instance, python-mode defines \C-c\C-r, my binding that normally runs revert-buffer, to do something called run-python. I never need to run python inside emacs -- I do that in a shell window. But I fairly frequently want to revert a python file back to the last version I saved. So I had a hook that ran whenever python-mode loaded to override that key binding and set it back to what I'd already set it to:

(defun reset-revert-buffer ()
  (define-key python-mode-map "\C-c\C-r" 'revert-buffer) )
(setq python-mode-hook 'reset-revert-buffer)

That worked fine -- but you have to do it for every mode that overrides key bindings and every binding that gets overridden. It's a constant chase, where you keep needing to stop editing whatever you wanted to edit and go add yet another mode-hook to .emacs after chasing down which mode is causing the problem. There must be a better solution.

A web search quickly led me to the StackOverflow discussion Globally override key bindings. I tried the techniques there; but they didn't work.

It took a lot of help from the kind folks on #emacs, but after an hour or so they finally found the key: emulation-mode-map-alists. It's only barely documented -- the key there is "The “active” keymaps in each alist are used before minor-mode-map-alist and minor-mode-overriding-map-alist" -- and there seem to be no examples anywhere on the web for how to use it. It's a list of alists mapping names to keymaps. Oh, clears it right up! Right?

Okay, here's what it means. First you define a new keymap and add your bindings to it:

(defvar global-keys-minor-mode-map (make-sparse-keymap)
  "global-keys-minor-mode keymap.")

(define-key global-keys-minor-mode-map "\C-c\C-r" 'revert-buffer)
(define-key global-keys-minor-mode-map (kbd "C-;") 'insert-date)

Now define a minor mode that will use that keymap. You'll use that minor mode for basically everything.

(define-minor-mode global-keys-minor-mode
  "A minor mode so that global key settings override annoying major modes."
  t "global-keys" 'global-keys-minor-mode-map)

(global-keys-minor-mode 1)

Now build an alist consisting of a list containing a single dotted pair: the name of the minor mode and the keymap.

;; A keymap that's supposed to be consulted before the first
;; minor-mode-map-alist.
(defconst global-minor-mode-alist (list (cons 'global-keys-minor-mode
                                              global-keys-minor-mode-map)))

Finally, set emulation-mode-map-alists to a list containing only the global-minor-mode-alist.

(setf emulation-mode-map-alists '(global-minor-mode-alist))

There's one final step. Even though you want these bindings to be global and work everywhere, there is one place where you might not want them: the minibuffer. To be honest, I'm not sure if this part is necessary, but it sounds like a good idea so I've kept it.

(defun my-minibuffer-setup-hook ()
  (global-keys-minor-mode 0))
(add-hook 'minibuffer-setup-hook 'my-minibuffer-setup-hook)

Whew! It's a lot of work, but it'll let me clean up my .emacs file and save me from endlessly adding new mode-hooks.

September 14, 2014 10:46 PM

September 11, 2014

Akkana Peck

Making emailed LinkedIn discussion thread links actually work

I don't use web forums, the kind you have to read online, because they don't scale. If you're only interested in one subject, then they work fine: you can keep a browser tab for your one or two web forums perennially open and hit reload every few hours to see what's new. If you're interested in twelve subjects, each of which has several different web forums devoted to it -- how could you possibly keep up with that? So I don't bother with forums unless they offer an email gateway, so they'll notify me by email when new discussions get started, without my needing to check all those web pages several times per day.

LinkedIn discussions mostly work like a web forum. But for a while, they had a reasonably usable email gateway. You could set a preference to be notified of each new conversation. You still had to click on the web link to read the conversation so far, but if you posted something, you'd get the rest of the discussion emailed to you as each message was posted. Not quite as good as a regular mailing list, but it worked pretty well. I used it for several years to keep up with the very active Toastmasters group discussions.

About a year ago, something broke in their software, and they lost the ability to send email for new conversations. I filed a trouble ticket, and got a note saying they were aware of the problem and working on it. I followed up three months later (by filing another ticket -- there's no way to add to an existing one) and got a response saying be patient, they were still working on it. 11 months later, I'm still being patient, but it's pretty clear they have no intention of ever fixing the problem.

Just recently I fiddled with something in my LinkedIn prefs, and started getting "Popular Discussions" emails every day or so. The featured "popular discussion" is always something stupid that I have no interest in, but it's followed by a section headed "Other Popular Discussions" that at least gives me some idea what's been posted in the last few days. Seemed like it might be worth clicking on the links even though it means I'd always be a few days late responding to any conversations.

Except -- none of the links work. They all go to a generic page with a red header saying "Sorry it seems there was a problem with the link you followed."

I'm reading the plaintext version of the mail they send out. I tried viewing the HTML part of the mail in a browser, and sure enough, those links worked. So I tried comparing the text links with the HTML:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken=AQEqep2nxSZJIg&amp;ek=b2_anet_digest&amp;li=82&amp;m=group_discussions&amp;ts=textdisc-6&amp;itemID=5914453683503906819&amp;itemType=member&amp;anetID=98449
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken=AQEqep2nxSZJIg&ek=b2_anet_digest&li=17&m=group_discussions&ts=grouppost-disc-6&itemID=5914453683503906819&itemType=member&anetID=98449

Well, that's clear as mud, isn't it?

HTML entity substitution

I pasted both links one on top of each other, to make it easier to compare them one at a time. That made it fairly easy to find the first difference:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken= ...
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken= ...

Time to die laughing: they're doing HTML entity substitution on the plaintext part of their email notifications, changing & to &amp; everywhere in the link.

If you take the link from the text email and replace &amp; with &, the link works, and takes you to the specific discussion.

Pagination

Except you can't actually read the discussion. I went to a discussion that had been open for 2 days and had 35 responses, and LinkedIn only showed four of them. I don't even know which four they are -- are they the first four, the last four, or some Facebook-style "four responses we thought you'd like"? There's a button to click on to show the most recent entries, but then I only see a few of the most recent responses, still not the whole thread.

Hooray for the web -- of course, plenty of other people have had this problem too, and a little web searching unveiled a solution. Add a pagination token to the end of the URL that tells LinkedIn to show 1000 messages at once.

&count=1000&paginationToken=
It won't actually show 1000 (or all) responses -- but if you start at the beginning of the page and scroll down reading responses one by one, it will auto-load new batches. Yes, infinite scrolling pages can be annoying, but at least it's a way to read a LinkedIn conversation in order.

Making it automatic

Okay, now I know how to edit one of their URLs to make it work. Do I want to do that by hand any time I want to view a discussion? Noooo!

Time for a script! Since I'll be selecting the URLs from mutt, they'll be in the X PRIMARY clipboard. And unfortunately, mutt adds newlines so I might as well strip those as well as fixing the LinkedIn problems. (Firefox will strip newlines for me when I paste in a multi-line URL, but why rely on that?)

Here's the important part of the script:

import sys, subprocess, gtk

# Grab whatever is in the X PRIMARY selection (the text last highlighted in mutt).
primary = gtk.clipboard_get(gtk.gdk.SELECTION_PRIMARY)
if not primary.wait_is_text_available():
    sys.exit(0)
link = primary.wait_for_text()

# Strip newlines mutt may have added, undo LinkedIn's &amp; substitution,
# and append the pagination token so the whole thread loads.
link = link.replace("\n", "").replace("&amp;", "&") + \
       "&count=1000&paginationToken="
subprocess.call(["firefox", "-new-tab", link])

And here's the full script: linkedinify on GitHub. I also added it to pyclip, the script I call from Openbox to open a URL in Firefox when I middle-click on the desktop.

Now I can finally go back to participating in those discussions.

September 11, 2014 07:10 PM

Jono Bacon

Ubuntu for Smartwatches?

I read an interesting article on OMG! Ubuntu! about whether Canonical will enter the wearables business, now the smartwatch industry is hotting up.

On one hand (pun intended), it makes sense. Ubuntu is all about convergence; a core platform from top to bottom that adjusts and expands across different form factors, with a core developer platform, and a focus on content.

On the other hand (pun still intended), the wearables market is another complex economy, that is heavily tethered, both technically and strategically, to existing markets and devices. If we think success in the phone market is complex, success in the burgeoning wearables market is going to be just as complex too.

Now, to be clear, I have no idea whether Canonical is planning on entering the wearables market or not. It wouldn’t surprise me if this is a market of interest though as the investment in Ubuntu over the last few years has been in building a platform that could ultimately scale. It is logical to think this could map to a smartwatch as “another form factor”.

So, if technically it is doable, Canonical should do it, right?

No.

I want to see my friends and former colleagues at Canonical succeed, and this needs focus.

Great companies focus on doing a small set of things and doing them well. Spiraling off in a hundred different directions means dividing teams, dividing focus, and limiting opportunities. To use a tired saying…being a “jack of all trades and master of none”.

While all companies can be tempted in this direction, I am happy that on the client side of Canonical, the focus has been firmly placed on phone. TV has taken a back seat, tablet has taken a back seat. The focus has been on building a featureful, high-quality platform that is focused on phone, and bringing that product to market.

I would hate to think that Canonical would get distracted internally by chasing the smartwatch market while it is young. I believe it would do little but direct resources away from the major push now, which is phone.

If there is something we can learn from Apple here, it is that it isn’t important to be first. It is important to be the best. Apple rarely ships the first innovation, but they consistently knock it out of the park by building brilliant products that become best in class.

So, I have no doubt that the exciting new convergent future of Ubuntu could run on a watch, but let’s keep our heads down and get the phone out there and rocking, and the wearables and other form factors can come later.

by jono at September 11, 2014 05:11 AM

September 09, 2014

Jono Bacon

One Simple Request

I do a podcast called Bad Voltage with a bunch of my pals. In it we cover Open Source and technology, we do interviews, reviews, and more. It is a lot of fun.

We started a contest recently in which the presenters have to take part in a debate, but with a viewpoint that is the opposite of what we actually think.

In the first episode of this three part series, Bryan Lunduke and Stuart Langridge duked it out. Lunduke won (seriously).

In the most recent episode, Jeremy Garcia and I went up against each other.

Sadly, my tiny opponent is beating me right now.

Thus, I ask for a favor. Go here and vote for Bacon. Doing so will make you feel great about your life, save a puppy, and potentially get you that promotion you have been wanting.

Also, for my Ubuntu friends…a vote for Bacon…is a vote for Ubuntu.

UPDATE: The stakes have been increased. Want to see me donate $300 to charity, have an awkward avatar, and pour a bucket of ice/ketchup/BBQ sauce/waste vegetables on me? Read more and then vote.

by jono at September 09, 2014 02:28 AM

September 08, 2014

Akkana Peck

Dot Reminders

I read about cool computer tricks all the time. I think "Wow, that would be a real timesaver!" And then a week later, when it actually would save me time, I've long since forgotten all about it.

After yet another session where I wanted to open a frequently opened file in emacs and thought "I think I made a bookmark for that a while back", but then decided it's easier to type the whole long pathname rather than go re-learn how to use emacs bookmarks, I finally decided I needed a reminder system -- something that would poke me and remind me of a few things I want to learn.

I used to keep cheat sheets and quick reference cards on my desk, but that never worked for me. Quick reference cards tend to be 50 things I already know, 40 things I'll never care about and 4 really great things I should try to remember. And eventually they get buried in a pile of other papers on my desk and I never see them again.

My new system is working much better. I created a file in my home directory called .reminders, in which I put a few -- just a few -- things I want to learn and start using regularly. It started out at about 6 lines but now it's grown to 12.
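
A couple of sample entries, just to give the flavor (not my actual file):

emacs bookmarks: C-x r m to set, C-x r b to jump, C-x r l to list
diff <(cmd1) <(cmd2)    compares the output of two commands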

Then I put this in my .zlogin (of course, you can do this for any shell, not just zsh, though the syntax may vary):

if [[ -f ~/.reminders ]]; then
  cat ~/.reminders
fi

Now, in every login shell (which for me is each new terminal window I create on my desktop), I see my reminders. Of course, I don't read them every time; but I look at them often enough that I can't forget the existence of great things like emacs bookmarks, or diff <(cmd1) <(cmd2).

And if I forget the exact keystroke or syntax, I can always cat ~/.reminders to remind myself. And after a few weeks of regular use, I finally have internalized some of these tricks, and can remove them from my .reminders file.

It's not just for tech tips, either; I've used a similar technique for reminding myself of hard-to-remember vocabulary words when I was studying Spanish. It could work for anything you want to teach yourself.

Although the details of my .reminders are specific to Linux/Unix and zsh, of course you could use a similar system on any computer. If you don't open new terminal windows, you can set a reminder to pop up when you first log in, or once a day, or whatever is right for you. The important part is to have a small set of tips that you see regularly.

September 08, 2014 03:10 AM

Elizabeth Krumbach

Simcoe’s August 2014 Checkup

This upcoming December will mark Simcoe living with the CRF diagnosis for 3 years. We’re happy to say that she continues to do well, with this latest batch of blood work showing more good news about her stable levels.

Unfortunately we brought her in a few weeks early this time following a bloody sneeze. As I’ve written earlier this year, they’ve both been a bit sneezy this year with an as yet undiagnosed issue that has been eluding all tests. Every month or so they switch off who is sneezing, but this was the first time there was any blood.

Simcoe at vet
“I still don’t like vet visits.”

Following the exam, the vet said she wasn’t worried. The bleeding was a one time thing and could have just been caused by rawness brought on by the sneezing and sniffles. Since the appointment on August 26th we haven’t seen any more problems (and the cold seems to have migrated back to Caligula).

As for her levels, it was great to see her weight come up a bit, from 9.62 to 9.94lbs.

Her BUN and CRE levels have both shifted slightly, from 51 to 59 on BUN and 3.9 to 3.8 on CRE.

BUN: 59 (normal range: 14-36)
CRE: 3.8 (normal range: .6-2.4)

by pleia2 at September 08, 2014 12:57 AM

September 02, 2014

Akkana Peck

Using strace to find configuration file locations

I was using strace to figure out how to set up a program, lftp, and a friend commented that he didn't know how to use it and would like to learn. I don't use strace often, but when I do, it's indispensable -- and it's easy to use. So here's a little tutorial.

My problem, in this case, was that I needed to find out what configuration file I needed to modify in order to set up an alias in lftp. The lftp man page tells you how to define an alias, but doesn't tell you how to save it for future sessions; apparently you have to edit the configuration file yourself.

But where? The man page suggested a couple of possible config file locations -- ~/.lftprc and ~/.config/lftp/rc -- but neither of those existed. I wanted to use the one that already existed. I had already set up bookmarks in lftp and it remembered them, so it must have a config file already, somewhere. I wanted to find that file and use it.

So the question was, what files does lftp read when it starts up? strace lets you snoop on a program and see what it's doing.

strace shows you all system calls being used by a program. What's a system call? Well, it's anything in section 2 of the Unix manual. You can get a complete list by typing: man 2 syscalls (you may have to install developer man pages first -- on Debian that's the manpages-dev package). But the important thing is that most file access calls -- open, read, chmod, rename, unlink (that's how you remove a file), and so on -- are system calls.

You can run a program under strace directly:

$ strace lftp sitename
Interrupt it with Ctrl-C when you've seen what you need to see.

Pruning the output

And of course, you'll see tons of crap you're not interested in, like rt_sigaction(SIGTTOU) and fcntl64(0, F_GETFL). So let's get rid of that first. The easiest way is to use grep. Let's say I want to know every file that lftp opens. I can do it like this:

$ strace lftp sitename |& grep open

I have to use |& instead of just | because strace prints its output on stderr instead of stdout.

That's pretty useful, but it's still too much. I really don't care to know about strace opening a bazillion files in /usr/share/locale/en_US/LC_MESSAGES, or libraries like /usr/lib/i386-linux-gnu/libp11-kit.so.0.

In this case, I'm looking for config files, so I really only want to know which files it opens in my home directory. Like this:

$ strace lftp sitename |& grep 'open.*/home/akkana'

In other words, show me just the lines that have the word "open" followed later by the string "/home/akkana".

Digression: grep pipelines

Now, you might think that you could use a simpler pipeline with two greps:

$ strace lftp sitename |& grep open | grep /home/akkana

But that doesn't work -- nothing prints out. Why? Because grep, under certain circumstances that aren't clear to me, buffers its output, so in some cases when you pipe grep | grep, the second grep will wait until it has collected quite a lot of output before it prints anything. (This comes up a lot with tail -f as well.) You can avoid that with

$ strace lftp sitename |& grep --line-buffered open | grep /home/akkana
but that's too much to type, if you ask me.

Back to that strace | grep

Okay, whichever way you grep for open and your home directory, it gives:

open("/home/akkana/.local/share/lftp/bookmarks", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.netrc", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/home/akkana/.local/share/lftp/rl_history", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.inputrc", O_RDONLY|O_LARGEFILE) = 5
Now we're getting somewhere! The file where it's getting its bookmarks is ~/.local/share/lftp/bookmarks -- and I probably can't use that to set my alias.

But wait, why doesn't it show lftp trying to open those other config files?

Using script to save the output

At this point, you might be sick of running those grep pipelines over and over. Most of the time, when I run strace, instead of piping it through grep I run it under script to save the whole output.

script is one of those poorly named, ungoogleable commands, but it's incredibly useful. It runs a subshell and saves everything that appears in that subshell, both what you type and all the output, in a file.

Start script, then run lftp inside it:

$ script /tmp/lftp.strace
Script started on Tue 26 Aug 2014 12:58:30 PM MDT
$ strace lftp sitename

After the flood of output stops, I type Ctrl-D or Ctrl-C to exit lftp, then another Ctrl-D to exit the subshell script is using. Now all the strace output was in /tmp/lftp.strace and I can grep in it, view it in an editor or anything I want.

So, what files is it looking for in my home directory and why don't they show up as open attempts?

$ grep /home/akkana /tmp/lftp.strace

Ah, there it is! A bunch of lines like this:

access("/home/akkana/.lftprc", R_OK)    = -1 ENOENT (No such file or directory)
stat64("/home/akkana/.lftp", 0xbff821a0) = -1 ENOENT (No such file or directory)
mkdir("/home/akkana/.config", 0755)     = -1 EEXIST (File exists)
mkdir("/home/akkana/.config/lftp", 0755) = -1 EEXIST (File exists)
access("/home/akkana/.config/lftp/rc", R_OK) = 0

So I should have looked for access and stat as well as open. Now I have the list of files it's looking for. And, curiously, it creates ~/.config/lftp if it doesn't exist already, even though it's not going to write anything there.
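
(In retrospect, strace could have done that filtering for me: -e trace=file limits the output to system calls that take a filename, and -o writes it to a file instead of stderr, so something like this catches the open, access and stat calls in one go:)

$ strace -e trace=file -o /tmp/lftp.trace lftp sitename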

So I created ~/.config/lftp/rc and put my alias there. Worked fine. And I was able to edit my bookmark in ~/.local/share/lftp/bookmarks later when I had a need for that. All thanks to strace.

September 02, 2014 07:06 PM

September 01, 2014

Elizabeth Krumbach

CI, Validation and more at DebConf14

I’ve been a Debian user since 2002 and got my first package into Debian in 2006. Though I continued to maintain a couple packages through the years, my open source interests (and career) have expanded significantly so that I now spend much more time with Ubuntu and OpenStack than anything else. Still, I do still host Bay Area Debian events in San Francisco and when I learned that DebConf14 would only be quick plane flight away from home I was eager for the opportunity to attend.

Given my other obligations, I decided to come in halfway through the conference, arriving Wednesday evening. Thursday was particularly interesting to me because they were doing most of the Debian Validation & CI discussions then. Given my day job on the OpenStack Infrastructure team, it seemed to be a great place to meet other folks who are interested in CI and see where our team could support Debian’s initiatives.

First up was the Validation and Continuous Integration BoF led by Neil Williams.

It was interesting to learn the current validation methods being used in Debian, including:

From there the talk moved into what kinds of integration tests people wanted, where various ideas were covered, including package sets (collections of related packages) and how to inject “dirty” data into systems to test in more realistic, real-world situations. Someone also mentioned doing tests on real systems rather than in chrooted environments.

Discussion touched upon having a Gerrit-like workflow that had packages submitted for review and testing prior to landing in the archive. This led to my having some interesting conversations with the drivers of Gerrit efforts in Debian after the session (nice to meet you, mika!). There was also discussion about notification to developers when their packages run afoul of the testing infrastructure, either themselves or as part of a dependency chain (who wants notifications? how to make them useful and not overwhelming?).

I’ve uploaded the gobby notes from the session here: validation-bof and the video of the session is available on the meetings-archive.

Next up on the schedule was debci and the Debian Continuous Integration project presented by Antonio Terceiro. He gave a tour of the Debian Continuous Integration system and talked about how packages can take advantage of the system by having their own test suites. He also discussed some about the current architecture for handling tests and optimizations they want to make in the future. Documentation for debci can be found here: ci.debian.net/doc/. Video of the session is also available on the meetings-archive.

The final CI talk I went to of the day was Automated Validation in Debian using LAVA where Neil Williams gave a tour of the expanded LAVA (Linaro Automated Validation Architecture). I heard about it back when it was a more simple ARM-only testing infrastructure, but it’s grown beyond that to now test distribution kernel images, package combinations and installer images and has been encouraging folks to write tests. He also talked about some of the work they’re doing to bring along LAVA demo stations to conferences, nice! Slides from this talk are available on the debconf annex site, here: http://annex.debconf.org/debconf-share/debconf14/slides/lava/

On Friday I also bumped into a testing-related talk by Paul Wise during a series of Live Demos, he showed off check-all-the-things which runs a pile of tools against your project to check… all the things, detecting what it needs to do automatically. Check out the README for rationale, and for a taste of things it checks and future plans, have a peek at some of the data files, like this one.

It’s really exciting to see more effort being spent on testing in Debian, and open source projects in general. This has long been the space of companies doing private, internal testing of open source products they use and reporting results back to projects in the form of patches and bug reports. Having the projects themselves provide QA is a huge step for the maturity of open source, and I believe will lead to even more success for projects as we move into the future.

The rest of DebConf for me was following my more personal interests in Debian. I also have to admit that my lack of involvement lately made me feel like a bit of an outsider and I’m quite shy anyway, so I was thankful to know a few Debian folks who I could hang out with and join for meals.

On Thursday evening I attended A glimpse into a systemd future by Josh Triplett. I haven’t really been keeping up with systemd news or features, so I learned a lot. I have to say, it would be great to see things like session management, screen brightness and other user settings be controlled by something lower level than the desktop environment. Friday I attended Thomas Goirand’s OpenStack update & packaging experience sharing. I’ve been loosely tracking this, but it was good to learn that Jessie will come with Icehouse and that install docs exist for Wheezy (here).

I also attended Outsourcing your webapp maintenance to Debian with Francois Marier. The rationale for his talk was that one should build their application with the mature versions of web frameworks included with Debian in mind, making it so you don’t have the burden of, say, managing Django along with your Django-based app, since Debian handles that. I continue to have mixed feelings when it comes to webapps in the main Debian repository: while some developers who are interested in reducing maintenance burden are ok with using older versions shipped with Debian, most developers I’ve worked with are very much not in this camp, and I’m better off trying to support what they want than fighting with them about versions. Then it was off to Docker + Debian = ♥ with Paul Tagliamonte where he talked about some of his best practices for using Docker on Debian and ideas for leveraging it more in development (having multiple versions of services running on one host, exporting docker images to help with replication of tests and development environments).

Friday night Linus Torvalds joined us for a Q&A session. As someone who has put a lot of work into making friendly environments for new open source contributors, I can’t say I’m thrilled with his abrasive conduct in the Linux kernel project. I do worry that he sets a tone that impressionable kernel hackers then go on to emulate, perpetuating the caustic environment that spills out beyond just the kernel, but he has no interest in changing. That aside, it was interesting to hear him talk about other aspects of his work, his thoughts on systemd, a rant about compiling against specific libraries for every distro and versions (companies won’t do it, they’ll just ship their own statically linked ones) and his continued comments in support of Google Chrome.

DebConf wrapped up on Sunday. I spent the morning in one of the HackLabs catching up on some work, and at 1:30 headed up to the Plenary room for the last few talks of the event, starting with a series of lightning talks. A few of the talks stood out for me, including Geoffrey Thomas’ talk on being a bit of an outsider at DebConf and how difficult it is to be running a non-Debian/Linux system at the event. I’ve long been disappointed when people bring along their proprietary OSes to Linux events, but he made good points about people being able to contribute without fully “buying in” to having free software everywhere, including their laptop. He’s right. Margarita Manterola shared some stats from the Mini-DebConf Barcelona, which focused on having only female speakers. It was great to hear such positive statistics, particularly since DebConf14 itself had a pretty poor ratio; there were several talks I attended (particularly around CI) where I was the only woman in the room. It was also interesting to learn about safe-rm to save us from ourselves and non-free.org to help make a distinction between what is Debian and what is not.

There was also a great talk by Vagrant Cascadian about his work on packages that he saw needed help but he didn’t necessarily know everything about, and encouraged others to take the same leap to work on things that may be outside their comfort zone. To help he listed several resources people could use to find work in Debian:

Next up for the afternoon was the Bits from the Release Team where they fleshed out what the next few months leading up to the freeze would look like and sharing the Jessie Freeze Policy.

DebConf wrapped up with a thank you to the volunteers (thank you!) and peek at the next DebConf, to be held in Heidelberg, Germany the 15th-22nd of August 2015.

Then it was off to the airport for me!

The rest of my photos from DebConf14 here: https://www.flickr.com/photos/pleia2/sets/72157646626186269/

by pleia2 at September 01, 2014 06:54 PM

August 28, 2014

Akkana Peck

Debugging a mysterious terminal setting

For the last several months, I repeatedly find myself in a mode where my terminal isn't working quite right. In particular, Ctrl-C doesn't work to interrupt a running program. It's always in a terminal where I've been doing web work. The site I'm working on sadly has only ftp access, so I've been using ncftp to upload files to the site, and git and meld to do local version control on the copy of the site I keep on my local machine. I was pretty sure the problem was coming from either git, meld, or ncftp, but I couldn't reproduce it.

Running reset fixed the problem. But since I didn't know what program was causing the problem, I didn't know when I needed to type reset.

The first step was to find out which of the three programs was at fault. Most of the time when this happened, I wouldn't notice until hours later, the next time I needed to stop a program with Ctrl-C. I speculated that there was probably some way to make zsh run a check after every command ... if I could just figure out what to check.

Terminal modes and stty -a

It seemed like my terminal was getting put into raw mode. In programming lingo, a terminal is in raw mode when characters from it are processed one at a time, and special characters like Ctrl-C, which would normally interrupt whatever program is running, are just passed like any other character.

You can list your terminal modes with stty -a:

$ stty -a
speed 38400 baud; rows 32; columns 80; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;
eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
ignbrk -brkint ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl -ixon -ixoff
-iuclc -ixany -imaxbel iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig icanon -iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
echoctl echoke

But that's a lot of information. Unfortunately there's no single flag for raw mode; it's a collection of a lot of flags. I checked the interrupt character: yep, intr = ^C, just like it should be. So what was the problem?

I saved the output with stty -a >/tmp/stty.bad, then I started up a new xterm and made a copy of what it should look like with stty -a >/tmp/stty.good. Then I looked for differences: meld /tmp/stty.good /tmp/stty.bad. I saw these flags differing in the bad one: ignbrk ignpar -iexten -ixon, while the good one had -ignbrk -ignpar iexten ixon. So I should be able to run:

$ stty -ignbrk -ignpar iexten ixon
and that would fix the problem. But it didn't. Ctrl-C still didn't work.

Setting a trap, with precmd

However, knowing some things that differed did give me something to test for in the shell, so I could test after every command and find out exactly when this happened. In zsh, you do that by defining a precmd function, so here's what I did:

precmd()
{
    stty -a | fgrep -- -ignbrk > /dev/null
    if [ $? -ne 0 ]; then
        echo
        echo "STTY SETTINGS HAVE CHANGED \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!"
        echo
    fi
}
Pardon all the exclams. I wanted to make sure I saw the notice when it happened.
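
If you use bash rather than zsh, the same check can hang off PROMPT_COMMAND instead of precmd -- here's a quick, untested sketch:

check_stty() {
    # Warn if -ignbrk has vanished from the terminal settings
    stty -a | grep -q -- -ignbrk || echo "STTY SETTINGS HAVE CHANGED !!!!"
}
PROMPT_COMMAND=check_stty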

And this fairly quickly found the problem: it happened when I suspended ncftp with Ctrl-Z.

stty sane and isig

Okay, now I knew the culprit, and that if I switched to a different ftp client the problem would probably go away. But I still wanted to know why my stty command didn't work, and what the actual terminal difference was.

Somewhere in my web searching I'd stumbled upon some pages suggesting stty sane as an alternative to reset. I tried it, and it worked.

According to man stty, stty sane is equivalent to

$ stty cread -ignbrk brkint -inlcr -igncr icrnl -iutf8 -ixoff -iuclc -ixany  imaxbel opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke

Eek! But actually that's helpful. All I had to do was get a bad terminal (easy now that I knew ncftp was the culprit), then try:

$ stty cread 
$ stty -ignbrk 
$ stty brkint
... and so on, trying Ctrl-C each time to see if things were back to normal. Or I could speed up the process by grouping them:
$ stty cread -ignbrk brkint
$ stty -inlcr -igncr icrnl -iutf8 -ixoff
... and so forth. Which is what I did. And that quickly narrowed it down to isig. I ran reset, then ncftp again to get the terminal in "bad" mode, and tried:
$ stty isig
and sure enough, that was the difference.

I'm still not sure why meld didn't show me the isig difference. But if nothing else, I learned a bit about debugging stty settings, and about stty sane, which is a much nicer way of resetting the terminal than reset since it doesn't clear the screen.

August 28, 2014 09:41 PM

August 27, 2014

Elizabeth Krumbach

OpenStack Infrastructure August 2014 Bug Day

The OpenStack Infrastructure team has a pretty big bug collection.

1855 collection
Well, not literal bugs

We’ve slowly been moving new bugs for some projects over to StoryBoard in order to kick the tires on that new system, but today we focused back on our Launchpad Bugs to pare down our list.

Interested in running a bug day? The steps we have for running a bug day can be a bit tedious, but it’s not hard. Here’s the rundown:

  1. I create our etherpad: cibugreview-august2014 (see etherpad from past bug days on the wiki at: InfraTeam#Bugs)
  2. I run my simple infra_bugday.py script and populate the etherpad.
  3. Grab the bug stats from launchpad and copy them into the pad so we (hopefully) have inspiring statistics at the end of the day.
  4. Then comes the real work. I open up the old etherpad and go through all the bugs, copying over comments from the old etherpad where applicable and making my own comments as necessary about obvious updates I see (and updating my own bugs).
  5. Let the rest of the team dive in on the etherpad and bugs!

Throughout the day we chat in #openstack-infra about bug statuses, whether we should continue pursuing certain strategies outlined in bugs, reaching out to folks who have outstanding bugs in the tracker that we’d like to see movement on but haven’t in a while. Plus, we get to triage a whole pile of New bugs (thanks Clark) and close others we may have lost track of (thanks everyone).

As we wrap up, here are the stats from today:

Starting bug day count: 270

31 New bugs
39 In-progress bugs
6 Critical bugs
15 High importance bugs
8 Incomplete bugs

Ending bug day count: 233

0 New bugs
37 In-progress bugs
3 Critical bugs
10 High importance bugs
14 Incomplete bugs

Full disclosure, 4 of the bugs we “closed” were actually moved to the Zuul project on Launchpad so we can import them into StoryBoard at a later date. The rest were legitimate though!

It was a busy day, thanks to everyone who participated.

by pleia2 at August 27, 2014 12:08 AM

August 25, 2014

Elizabeth Krumbach

Market Street Railway Exploratorium Charter

Last month I learned about an Exploratorium Charter being put on by Market Street Railway. I’m a member of the organization and they do charters throughout the year, but my schedule rarely syncs up with when charters or other events are happening, so I was delighted when I firmed up my DebConf schedule and knew I’d be in town for this one!

It was a 2 hour planned charter, which would pick us up at the railway museum near the Ferry Building and take us down to Muni Metro East, “the current home of the historic streetcar fleet and not usually open to the public.” Sign me up.

The car taking us on our journey was the 1050, which was originally a Philadelphia street car (built in 1948, given No. 2119) that had been painted in Muni livery. MJ’s best friend is in town this weekend, so I had both Matti and MJ join me on this excursion.

The route began by going down what will become the E line next year, and we stopped at the AT&T ballpark for some photo ops. The conductor (not the driver) of the event posed for photos.

Throughout the ride various volunteers from Market Street Railway passed around photos and historic pieces from street cars to demonstrate how they worked and some of the historic routes where they ran. Of particular interest was learning just how busy the Ferry Building was at its height in the 1930s, not only serving as a busy passenger ferry port, but also with lots of street cars and other transit stopping at the building pretty much non-stop.

From the E line, the street car went down Third Street through Dogpatch and finally arrived at our first destination, the Muni Metro East Light Rail Vehicle Maintenance & Operations Facility. We all had to wear bright vests in order to enter the working facility.


Obligatory “Me with streetcar” photo

The facility is a huge warehouse where repairs are done on both the street cars and the Metro coaches. We had quite a bit of time to look around and peek under the cars and see some of the ones that were under repair or getting phased into usage.

I think my favorite part of the visit was getting to go outside and see the several cars parked out there. Some of them were just coming in for scheduled maintenance, and others, like the cream colored 1056, are going to be sent off for restoration (hooray!).

The tour concluded by taking us back up the Embarcadero and dropping us off at the Exploratorium science museum, skipping a loop around Pier 39 due to being a bit behind schedule. We spent about an hour at the museum, which was a nice visit as MJ and I had just been several months earlier.

Lots more photos from our day here: https://www.flickr.com/photos/pleia2/sets/72157646412090817/

Huge thanks to Market Street Railway for putting on such a fun and accessible event!

by pleia2 at August 25, 2014 03:43 AM

August 24, 2014

Akkana Peck

One of them Los Alamos liberals

[Adopt-a-Highway: One of them Los Alamos liberals] I love this Adopt-a-Highway sign on Highway 4 on the way back down from the Jemez.

I have no idea who it is (I hope to find out, some day), but it gives me a laugh every time I see it.

August 24, 2014 04:50 PM

Elizabeth Krumbach

August 2014 miscellany

It’s been about a month since my surgery. I feel like I’ve taken it easy, but looking at my schedule (which included a conference on the other side of the continent) I think it’s pretty safe to say I’m not very good at that. I’m happy to say I’m pretty much recovered though, so my activities don’t seem to have caused problems.

Although, going to the 4th birthday party for OpenStack just 6 days after my surgery was probably pushing it. I thoroughly rationalized it due to the proximity of the event to my home (a block), but I only lasted about an hour. At least I got a cool t-shirt and got to see the awesome OpenStack ice sculpture. Also, didn’t die, so all is good right?

Fast forward a week and a half and we were wrapping up our quick trip to Philadelphia for Fosscon. We had some time on Sunday so decided to visit the National Museum of American Jewish History right by Independence Mall. In addition to a fun special exhibit about minorities in baseball, the museum boasts 3 floors of permanent exhibits that trace the history of Jews in America from the first settlement until today. We made it through much of the museum before our flight time crept up, and even managed to swing by the gift shop where we found a beautiful glass menorah to bring home.

Safely back in San Francisco, I met up with a few of my local Ubuntu and Debian friends on the 13th for an Ubuntu Hour and a Debian dinner. The Ubuntu Hour was pretty standard, I was able to bring along my Nexus 7 with Ubuntu on it to show off the latest features in the development channel for the tablet version. I also received several copies of The Official Ubuntu Book so I was able to bring one along to give away to an attendee, hooray!

From there, we made it over to 21st Amendment Brewery where we’d be celebrating Debian’s 21st birthday (which was coming up on the 16th). It took some time to get a table, but we had lots of time to chat while we were waiting. At the dinner we signed a card to send off along with a donation to SPI on behalf of Debian.

In other excitement, our car needed some work last week and MJ has been putting a lot of work into getting a sound system set up to go along with a new TV. Since I’ve been feeling better this week my energy has finally returned and I’ve been able to get caught up on a lot of projects I had pushed aside during my recovery. I also signed up for a new gym this week; it’s not as beautiful as the club I used to go to, but it has comparable facilities (including pool!) and is about half of what I was paying before. I’m thinking as I ease back into a routine I’ll use my time there for swimming and strength exercises. I sure need it; these past few months really did a number on my fitness.

Today I met up with my friend Steve for Chinese lunch and then a visit over to the Asian Art Museum to see the Gorgeous exhibit. I’m really glad we went, it was an unusual collection that I really enjoyed. While we were there we also browsed the rest of the galleries in the museum, making it the first time that I’d actually walked through the whole museum on an excursion there.

I think the Mythical bird-man was my favorite piece of the exhibit:

And I was greatly amused when Steve used his iPhone to take a picture of the first generation iPhone on exhibit, so I captured the moment.

On Wednesday afternoon I’ll be flying up to Portland, OR to attend my first DebConf! It actually started today, but given my current commitment load I decided that 9 days away from home was a bit much and picked days later in the week where some of the discussions were most interesting to me. I’m really looking forward to seeing some of my long time Debian friends and learning more about work the teams are doing in the Continuous Integration space for Debian.

by pleia2 at August 24, 2014 04:52 AM

August 20, 2014

Akkana Peck

Mouse Release Movie

[Mouse peeking out of the trap] We caught another mouse! I shot a movie of its release.

Like the previous mouse we'd caught, it was nervous about coming out of the trap: it poked its nose out, but didn't want to come the rest of the way.

[Mouse about to fall out of the trap] Dave finally got impatient, picked up the trap and turned it opening down, so the mouse would slide out.

It turned out to be the world's scruffiest mouse, which immediately darted toward me. I had to step back and stand up to follow it on camera. (Yes, I know my camera technique needs work. Sorry.)

[scruffy mouse, just released from trap] [Mouse bounding away] Then it headed up the hill a ways before finally lapsing into the high-bounding behavior we've seen from other mice and rats we've released. I know it's hard to tell in the last picture -- the photo is so small -- but look at the distance between the mouse and its shadow on the ground.

Very entertaining! I don't understand why anyone uses killing traps -- even if you aren't bothered by killing things unnecessarily, the entertainment we get from watching the releases is worth any slight extra hassle of using the live traps.

Here's the movie: Mouse released from trap. [Mouse released from trap]

August 20, 2014 11:10 PM

August 18, 2014

Jono Bacon

New Facebook Page

As many of you will know, I am really passionate about growing strong and inspirational communities. I want all communities to benefit from well organized, structured, and empowering community leadership. This is why I wrote The Art of Community and Dealing With Disrespect, and founded the Community Leadership Summit and Community Leadership Forum to further the art and science of community leadership.

In my work I am sharing lots of content, blog posts, videos, and other guidance via my new Facebook page. I would be really grateful if you could hop over and Like it to help build some momentum.

Many thanks!

by jono at August 18, 2014 04:35 PM

August 16, 2014

Elizabeth Krumbach

SanDisk Clip Sport

I got my first MP3 player in 2006, a SanDisk Sansa e140. As that one aged, I picked up the SanDisk Sansa Fuze in 2009. Recently my poor Sansa Fuze has been having trouble updating the library (takes a long time) and would randomly freeze up. After getting worse over my past few trips, I finally resigned to getting a new player.

As I began looking for players, I was quickly struck by how limited the MP3 player market is these days. I suspect this is due to so many people using their phones for music these days, but that’s not a great option for me for a variety of reasons:

  • Limits to battery life on my phone make a 12 hour flight (or a 3 hour flight, then an 8 hour flight, then navigating a foreign city…) a bit tricky.
  • While I do use my phone for runs (yay for running apps) I don’t like using my phone in the gym, because it’s bulky and I’m afraid of breaking it.
  • Finally, my desire for an FM tuner hasn’t changed, and I’m quite fond of the range of formats my Fuze supported (flac, ogg, etc).

So I found the SanDisk Clip Sport MP3 Player. Since I’ve been happy with my SanDisk players throughout the years and the specs pages seemed to meet my needs, I didn’t hesitate too much about picking it up for $49.99 on Amazon. Obviously I got the one with pink trim.

I gave the player a spin on my recent trip to Philadelphia. Flight time: 5 hours each way. I’m happy to report that the battery life was quite good: I forgot to charge it while in Philadelphia but the charge level was still quite high when I turned it on for my flight home.

Overall, I’m very happy with it, but no review would be complete without the details!

Cons:

  • Feels a bit plasticy – the Fuze had a metal casing
  • I can’t figure out how it sorts music in file view, doesn’t seem alphabetical…

Pros:

  • Meets my requirements: FM Tuner, multiple formats – my oggs play fine out of the box, the Fuze required a firmware upgrade
  • Standard Micro USB connector for charging – the Fuze had a custom connector
  • File directory listing option, not just by tags
  • Mounts via USB mass storage in Linux
  • Micro SD/SDHC expansion slot if I need to go beyond 8G

We’ll see how it holds up through the abuse I put it through while traveling.

by pleia2 at August 16, 2014 12:32 AM

August 15, 2014

Jono Bacon

Community Management Training at LinuxCon

I am a firm believer in building strong and empowered communities. We are in an age of a community management renaissance in which we are defining repeatable best practice that can be applied to many different types of communities, whether internal to companies, external to volunteers, or a mix of both. The opportunity here is to grow large, well-managed, passionate communities, no matter what industry or area you work in.

I have been working to further this growth in community management via my books, The Art of Community and Dealing With Disrespect, the Community Leadership Summit, the Community Leadership Forum, and delivering training to our next generation of community managers and leaders.

LinuxCon North America and Europe

I am delighted to bring my training to the excellent LinuxCon events in both North America and Europe.

Firstly, on Fri 22nd August 2014 (next week) I will be presenting the course at LinuxCon North America in Chicago, Illinois and then on Thurs Oct 16th 2014 I will deliver the training at LinuxCon Europe in Düsseldorf, Germany.

Tickets are $300 for the day’s training. This is a steal; I usually charge $2500+/day when delivering the training as part of a consultancy arrangement. Thanks to the Linux Foundation for making this available at an affordable rate.

Space is limited, so go and register ASAP:

What Is Covered

So what is in the training course?

If you like videos, go and watch this:

If you prefer to read, read on!

My goal with each training day is to discuss how to build and grow a community, including building collaborative workflows, defining a governance structure, planning, marketing, and evaluating effectiveness. The day is packed with Q&A, discussion, and I encourage my students to raise questions, challenge me, and explore ways of optimizing their communities. This is not a sit-down-and-listen-to-a-teacher-drone on kind of session; it is interactive and designed to spark discussion.

The day is mapped out like this:

  • 9.00am – Welcome and introductions
  • 9.30am – The core mechanics of community
  • 10.00am – Planning your community
  • 10.30am – Building a strategic plan
  • 11.00am – Building collaborative workflow
  • 12.00pm – Governance: Part I
  • 12.30pm – Lunch
  • 1.30pm – Governance: Part II
  • 2.00pm – Marketing, advocacy, promotion, and social
  • 3.00pm – Measuring your community
  • 3.30pm – Tracking, measuring community management
  • 4.30pm – Burnout and conflict resolution
  • 5.00pm – Finish

I will warn you; it is an exhausting day, but ultimately rewarding. It covers a lot of ground in a short period of time, and then you can follow with further discussion of these and other topics on our Community Leadership discussion forum.

I hope to see you there!

by jono at August 15, 2014 07:27 PM

Akkana Peck

Time-lapse photography: stitching movies together on Linux

[Time-lapse clouds movie on youtube] A few weeks ago I wrote about building a simple Arduino-driven camera intervalometer to take repeat photos with my DSLR. I'd been entertained by watching the clouds build and gather and dissipate again while I stepped through all the false positives in my crittercam, and I wanted to try capturing them intentionally so I could make cloud movies.

Of course, you don't have to build an Arduino device. A search for timer remote control or intervalometer will find lots of good options around $20-30. I bought one so I'll have a nice LCD interface rather than having to program an Arduino every time I want to make movies.

Setting the image size

Okay, so you've set up your camera on a tripod with the intervalometer hooked to it. (Depending on how long your movie is, you may also want an external power supply for your camera.)

Now think about what size images you want. If you're targeting YouTube, you probably want to use one of YouTube's preferred settings, bitrates and resolutions, perhaps 1280x720 or 1920x1080. But you may have some other reason to shoot at higher resolution: perhaps you want to use some of the still images as well as making video.

For my first test, I shot at the full resolution of the camera. So I had a directory full of big ten-megapixel photos with filenames ranging from img_6624.jpg to img_6715.jpg. I copied these into a new directory, so I didn't overwrite the originals. You can use ImageMagick's mogrify to scale them all:

mogrify -scale 1280x720 *.jpg
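
If you'd rather skip the copying step, convert (unlike mogrify) writes its result to a new file and leaves the original alone, so something like this works too:

$ mkdir scaled
$ for f in img_*.jpg; do convert "$f" -scale 1280x720 scaled/"$f"; done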

I had an additional issue, though: rain was threatening and I didn't want to leave my camera at risk of getting wet while I went dinner shopping, so I moved the camera back under the patio roof. But with my fisheye lens, that meant I had a lot of extra house showing and I wanted to crop that off. I used GIMP on one image to determine the x, y, width and height for the crop rectangle I wanted. You can even crop to a different aspect ratio from your target, and then fill the extra space with black:

mogrify -crop 2720x1450+135+315 -scale 1280 -gravity center -background black -extent 1280x720 *.jpg

If you decide to rescale your images to an unusual size, make sure both dimensions are even, otherwise avconv will complain that they're not divisible by two.

Finally: Making your movie

I found lots of pages explaining how to stitch together time-lapse movies using mencoder, and a few using ffmpeg. Unfortunately, in Debian, both are deprecated. Mplayer has been removed entirely. The ffmpeg-vs-avconv issue is apparently a big political war, and I have no position on the matter, except that Debian has come down strongly on the side of avconv and I get tired of getting nagged at every time I run a program. So I needed to figure out how to use avconv.

I found some pages on avconv, but most of them didn't actually work. Here's what worked for me:

avconv -f image2 -r 15 -start_number 6624 -i 'img_%04d.jpg' -vcodec libx264 time-lapse.mp4

Adjust the start_number and filename appropriately for the files you have.
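
If your system has ffmpeg rather than avconv, the same options should work essentially unchanged (I haven't tested this):

ffmpeg -f image2 -r 15 -start_number 6624 -i 'img_%04d.jpg' -vcodec libx264 time-lapse.mp4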

Avconv produces an mp4 file suitable for uploading to youtube. So here is my little test movie: Time Lapse Clouds.

August 15, 2014 06:05 PM

August 14, 2014

Elizabeth Krumbach

The Ubuntu Weekly Newsletter needs you!

On Monday we released Issue 378 of the Ubuntu Weekly Newsletter. The newsletter has thousands of readers across various formats from wiki to email to forums and discourse.

As we creep toward the 400th issue, we’ve been running a bit low on contributors. Thanks to Tiago Carrondo and David Morfin for pitching in these past few weeks while they could, but the bulk of the work has fallen to José Antonio Rey and myself and we can’t keep this up forever.

So we need more volunteers like you to help us out!

We specifically need folks to let us know about news throughout the week (email them to ubuntu-news-team@lists.ubuntu.com) and to help write summaries over the weekend. All links and summaries are stored in a Google Doc, so you don’t need to learn any special documentation formatting or revision control software to participate. Plus, everyone who participates is welcome to add their name to the credits!

Summary writers. Summary writers receive an email every Friday evening (or early Saturday) with a link to the collaborative news links document for the past week which lists all the articles that need 2-3 sentence summaries. These people are vitally important to the newsletter. The time commitment is limited and it is easy to get started with from the first weekend you volunteer. No need to be shy about your writing skills, we have style guidelines to help you on your way and all summaries are reviewed before publishing so it’s easy to improve as you go on.

Interested? Email editor.ubuntu.news@ubuntu.com and we’ll get you added to the list of folks who are emailed each week and you can help as you have time.

by pleia2 at August 14, 2014 04:41 AM

August 12, 2014

Elizabeth Krumbach

Fosscon 2014

Flying off to a conference on the other side of the country 2 weeks after having my gallbladder removed may not have been one of the wisest decisions of my life, but I am very glad I went. Thankfully MJ had planned on coming along to this event anyway, so I had companionship… and someone to carry the luggage :)

This was Fosscon‘s 5th year, 4th in Philadelphia and the 3rd one I’ve been able to attend. I was delighted this year to have my employer, HP, sponsor the conference at a level that gave us a booth and track room. Throughout the day I was attending talks, giving my own and chatting with people at the HP booth about the work we’re doing in OpenStack and opportunities for people who are looking to work with open source technologies.

The day started off with a keynote by Corey Quinn titled “We are not special snowflakes” which stressed the importance of friendliness and good collaboration skills in technical candidates.

I, for one, am delighted to see us as an industry moving away from BOFHs and kudos for antisocial behavior. I may not be a social butterfly, but I value the work of my peers and strive to be someone people enjoy working with.

After the keynote I did a talk about having a career in FOSS. I was able to tell stories about my own work and experiences and those of some of my colleagues. I talked about my current role at HP and spent a fair amount of time giving participation examples related to my work on Xubuntu. I must really enjoy this topic, because I didn’t manage to leave time for questions! Fortunately I think I made up for it in some great chats with other attendees throughout the day.

My slides from the talk are available here: FOSSCON-2014-FOSS_career.pdf

Some other resources related to my talk:

During the conference I always was able to visit with my friends at the Ubuntu booth. They had brought along a couple copies of The Official Ubuntu Book, 8th Edition for me to sign (hooray!) and then sell to conference attendees. I brought along my Ubuntu tablet which they were able to have at the booth, and which MJ grabbed from me during a session when someone asked to see a demo.

After lunch I went to see Charlie Reisinger’s “Lessons From Open Source Schoolhouse” where he talked about the Ubuntu deployments in his school district. I’ve been in contact with Charlie for quite some time now since the work we do with Partimus also puts us in schools, but he’s been able to achieve some pretty exceptional success in his district. It was a great pleasure to finally meet him in person and his talk was very inspiring.

I’ve been worried for quite some time that children growing up today will only have access to tablets and smart phones that I classify as “read only devices.” I think back to when I first started playing with computers and the passion for them grew out of the ability to tinker and discover, if my only exposure had been a tablet I don’t think I’d be where I am today. Charlie’s talk went in a similar direction, particularly as he revealed that he controversially allows students to have administrative (sudo) access on the Ubuntu laptops! The students feel trusted, empowered and in the time the program has been going on, he’s been able to put together a team of student apprentices who are great at working with the software and can help train other students, and teachers too.

It was also interesting to learn that after the district got so much press the students began engaging people in online communities.

Fosscon talks aren’t recorded, but check out Charlie’s TEDx Lancaster talk to get a taste of the key points about student freedom and the apprentice program he covered: Enabling students in a digital age: Charlie Reisinger at TEDxLancaster

GitHub for Penn Manor School District here: https://github.com/pennmanor

The last talk I went to of the day was by Robinson Tryon on “LibreOffice Tools and Tricks For Making Your Work Easier” where I was delighted to see how far they’ve come with the Android/iOS Impress remote and work being done in the space of editing PDFs, including the development of Hybrid PDFs, which can be opened either by LibreOffice for editing or by a PDF viewer and contain full versions of both documents. I also didn’t realize that LibreOffice retained any of the command line tools, so it was pretty cool to learn about soffice --headless --convert-to for doing CLI-based conversions of files.
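
For instance, converting a document to PDF from the command line looks something like this (the filename here is just an example):

soffice --headless --convert-to pdf mydocument.odt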

Huge thanks to the volunteers who make Fosscon happen. The Franklin Institute was a great venue and aside from the one room downstairs, I think the layout worked out well for us. Booths were in common spaces that attendees congregated in, and I was even able to meet some tech folks who were just at the museum and happened upon us, which was a lot of fun.

More photos from the event here: https://www.flickr.com/photos/pleia2/sets/72157646362111741/

by pleia2 at August 12, 2014 04:43 PM

August 10, 2014

Akkana Peck

Sphinx Moths

[White-lined sphinx moth on pale trumpets] We're having a huge bloom of a lovely flower called pale trumpets (Ipomopsis longiflora), and it turns out that sphinx moths just love them.

The white-lined sphinx moth (Hyles lineata) is a moth the size of a hummingbird, and it behaves like a hummingbird, too. It flies during the day, hovering from flower to flower to suck nectar, being far too heavy to land on flowers like butterflies do.

[Sphinx moth eye] I've seen them before, on hikes, but only gotten blurry shots with my pocket camera. But with the pale trumpets blooming, the sphinx moths come right at sunset and feed until near dark. That gives a good excuse to play with the DSLR, telephoto lens and flash ... and I still haven't gotten a really sharp photo, but I'm making progress.

Check out that huge eye! I guess you need good vision in order to make your living poking a long wiggly proboscis into long skinny flowers while laboriously hovering in midair.

Photos here: White-lined sphinx moths on pale trumpets.

August 10, 2014 03:23 AM

August 08, 2014

iheartubuntu

TAILS The Privacy Distro


TAILS, the anonymizing distribution, released version 1.1 about two weeks ago – which means you can download it now. The Tails 1.1 release is largely a catalog of security fixes and bug fixes, limiting itself otherwise to minor improvements such as to the ISO upgrade and installer, and the Windows 8 camouflage. This is one to grab to keep your online privacy intact.

https://tails.boum.org/

by iheartubuntu (noreply@blogger.com) at August 08, 2014 06:09 PM

August 05, 2014

Akkana Peck

Privacy Policy

I got an envelope from my bank in the mail. The envelope was open and looked like the flap had never been sealed.

Inside was a copy of their privacy policy. Nothing else.

The policy didn't say whether their privacy policy included sealing the envelope when they send me things.

August 05, 2014 07:22 PM

Elizabeth Krumbach

Recovery reading

During the most painful phase of the recovery from my gallbladder removal I wasn’t able to do a whole lot: short walks around the condo to relieve stiffness and bloating post-surgery, but mostly I was resting to encourage healing. Sitting up hurt, so I spent a lot of time in bed. But what to do? So bored! I ended up reading a lot.

I don’t often write about what I’ve been reading, but I typically have 6 or so books going of various genres, usually one or two about history and/or science, a self improvement type of book (improving speaking, time/project management), readable tech (not reference), scifi/fantasy, fiction (usually cheesy/easy read, see Ian Fleming below!), social justice. This is largely reflected in what I read this past week, but for some reason I’ve been slanted toward history more than scifi/fantasy lately.

Surviving Justice: America’s Wrongfully Convicted and Exonerated edited by Dave Eggers and Lola Vollen. I think I heard about this book from a podcast since I’ve had a recent increase in interest in capital punishment following the narrowly defeated Prop 34 in 2012 seeking to end capital punishment in California. I’ve long been against capital punishment for a variety of reasons, and the real faces that this book put on wrongfully accused people (some of whom were on death row) really solidified some of my feelings around it. The book is made up of interviews from several exonerated individuals from all walks of life and gives a sad view into how their convictions ruined their lives and the painful process they went through to finally prove their innocence. Highly recommended.

Siddhartha by Hermann Hesse. I read this book in high school, and it interested me then but I always wanted to get back and read it as an adult with my perspectives now. It was a real pleasure, and much shorter than I remembered!

Casino Royale, by Ian Fleming. One of my father’s guilty pleasures was reading Ian Fleming books. Unfortunately his copies have been lost over the years, so when I started looking for my latest paperback indulgence I loaded up my Nook to start diving in. Fleming’s opinion and handling of women in his books is pretty dreadful, but once I put aside that part of my brain and just enjoyed it I found it to be a lot of fun.

The foundation for an open source city by Jason Hibbets. I saw Hibbets speak at Scale12x this year and downloaded the epub version of this book then. He hails from Raleigh, NC where over the past several years he’s been working in the community there to make the city an “Open Source City” – defined by one which not only uses open source tools, but also has an open source philosophy for civic engagement, from ordinary citizen to the highest level of government. The book goes through a series of projects they’ve done in Raleigh, as well as expanding to experiences that he’s had with other cities around the country, giving advice for how other communities can accomplish the same.

Orla’s Code by Fiona Pearse. This book tells of the life and work of Orla, a computer programmer in London. Having subject matter in a fiction book about a women and which is near to my own profession was particularly enjoyable to me!

Book of Ages: The Life and Opinions of Jane Franklin by Jill Lepore. I heard about this book through another podcast, and as a big Ben Franklin fan I was eager to learn more about his sister! I loved how Lepore wove in pieces of Ben Franklin’s life with that of his sister and the historical context in which they were living. She also worked to give the unedited excerpts from Jane’s letters, even if she had to then spend a paragraph explaining the meaning and context due to Jane’s poor writing skills. Having the book presented in this way gave an extra depth of understanding of Jane’s level of education and subsequent hardships, while keeping it a very enjoyable, if often sad, read.

Freedom Rider Diary: Smuggled Notes from Parchman Prison by Carol Ruth Silver. I didn’t intend to read two books related to prisons while I was laid up (as I routinely tell my friends “I don’t like prison shows”), but I was eager to read this one because I’ve had the pleasure of working with Carol Ruth Silver on some OLPC-SF stuff and she’s been a real inspiration to me. The book covers Silver’s time as a Freedom Rider in the south in 1961 and the 40 days she spent in jail and prison with fellow Freedom Riders resisting bail. She was able to take shorthand-style notes on whatever paper she could find and then type them up following her experience, so now 50 years later they are available for this book. The journal style of this book really pulled me in to this foreign world of the Civil Rights movement which I’m otherwise inclined to feel was somehow very distant and backwards. It was also exceptionally inspiring to read how these young men and women traveled for these rides and put their bodies on the line for a cause that many argued “wasn’t their problem” at all. The Afterward by Cherie A. Gaines was also wonderful.

Those were the books I finished, but I also put a pretty large dent in the following:

All of these are great so far!

by pleia2 at August 05, 2014 04:06 PM

iheartubuntu

Diagnosing Internet Problems


Recently I had been having some internet speed problems. There are several basic checks you can do yourself such as double checking your wired connections are all plugged in properly, making sure you are logged onto the correct wi-fi network :) and so on. You can even check to make sure your modem or router is not overheating (I once had one that was smoking).

So here are some tests you can run if you think you might have a problem, and what the heck, check it even if there isn't a problem so you know where you stand.

Pingtest checks your line quality by examining packet loss, ping rate and jitter rate. Low jitter means you have a stable connection.

Give yours a test here: http://www.pingtest.net/

I also like to use M-Labs Network Diagnostic Test (java required). Besides the basic up/down speeds, it also has sophisticated diagnosis of any problems limiting your speed.

Check it out here: http://www.measurementlab.net/tools/ndt

M-Labs also has NPAD test (Network Path & Application Diagnostics; java required) which is designed to diagnose network performance problems in your end-system (the machine your browser is running on) or the network between it and your nearest NPAD server (basically the last mile or so of your broadband). For each diagnosed problem, the server prescribes corrective actions with instructions suitable for non-experts.

Finally, you can also install NEUBOT (Ubuntu Linux, Windows, Mac). Neubot is a research project on network neutrality. Transmission tests probe the Internet using various application level protocols. The results dataset contains samples from various providers and is published on the web, allowing anyone to analyze the data for research purposes.

With this software you can also check to see if your internet service provider is throttling your internet speed for any reason.

Learn more & how to install here:

http://www.neubot.org/neubot-install-guide

Ubuntu users can easily install it with the DEB file...

http://releases.neubot.org/neubot-0.4.16.9-1_all.deb

At home I determined my wi-fi card was the problem and replaced it. At work, I found my ISP was throttling internet speeds when using Deluge (a bittorrent-like app). Knowledge is power!

by iheartubuntu (noreply@blogger.com) at August 05, 2014 12:18 PM

August 02, 2014

Elizabeth Krumbach

The gallbladder ordeal

3 months ago I didn’t know where or what a gallbladder was.

Turns out it’s a little thing that helps out the liver by storing some bile (gall). It also turns out to be not strictly required in most people, luckily for me.


“Blausen 0428 Gallbladder-Liver-Pancreas Location” by BruceBlaus – Own work. Licensed under Creative Commons Attribution 3.0 via Wikimedia Commons

Way back in April I came down with what I thought was a stomach bug. It was very painful and lasted 3 days before I went to an urgent care clinic to make sure nothing major was wrong. They took some blood samples and sent me on my way, calling it a stomach bug. When blood results came in I was showing elevated liver enzymes and was told to steer clear of red meat, alcohol and fatty foods.

The active “stomach bug” went away pretty quickly and after a couple weeks of boring diet the pain went away too. Hooray!

2 weeks later the pain and “stomach bug” came back. This time I ended up in the emergency room, dehydrated and in severe pain. They did some blood work and a CT scan to confirm my appendix wasn’t swollen and sent me home after a few hours. At this point we’re in early May and I had to cancel attending the OpenStack Summit in Atlanta because of the pain. That sucked.

May and June saw 3 major diagnostic tests to figure out what was wrong. I continued avoiding alcohol and fatty foods since they did make it worse, but the constant, dull pain persisted. I stopped exercising, switched to small meals which would hurt less and was quite tired and miserable. Finally, in July they came to the conclusion that I had gallbladder “sludge” and that my gallbladder should probably be removed.

Sign me up!

In preparation for my surgery I read a lot, talked with lots of people who had theirs out and found experiences landed into two categories:

  1. Best thing I ever did, no residual problems and the $%$# pain is gone!
  2. Wish I had tried managing it first, I now have trouble digesting fatty/fried foods and alcohol

This was a little worrying, but given the constant pain I’d been in for 3 months I was willing to deal with the potential side effects. Fortunately feedback was pretty consistent regarding immediate recovery: the surgery is easy and recovery is quick.

My surgery was on July 24th.

They offered it as either outpatient or a single night in the hospital, and I opted for outpatient. I arrived at 8AM and was sent home, without a gallbladder and nibbling on animal crackers and water, by 1PM. Easy!

Actually, the first 3 days were a bit tough. It was a laparoscopic surgery that only required 4 small incisions, so I had pain in my belly and at the incision sites. How much activity you can handle depends on the individual, but they loosely estimated a week for basic recovery, and 2-3 weeks before you’re fully recovered. They recommend both a lot of rest and walking as you can, so that you can rid your body of stiffness and bloating from the surgery, leading to a quicker recovery. MJ was able to take time off of work Thursday and Friday and spend the weekend taking care of me.

As the weekend progressed sitting up was still a bit painful, so that limited TV watching. I could only sleep on my back which started causing some neck and back soreness. I did a lot of reading! Books, magazines, caught up on RSS feeds that I fed to my phone. Sunday evening I was able to take the bandages off the incision sites, leaving the wound closure strips in place (in lieu of stitches, and they said they should fall off in 10-14 days). I got dizzy and became nauseated while removing the bandages, which was very unusual for me because blood and stuff doesn’t tend to bother me. I think I was just nervous about finding an infection or pulling on one of the closure strips too hard, but it all went well.

By Monday I was doing a bit better, was able to go outside to pick up some breakfast, walk a block down to the pharmacy (both in my pajamas – haha!). The rest of the week went like this, each day I felt a little better, but still taking the pain medication. Tuesday I spent some time at my desk on email triage so I could respond to anything urgent and have a clearer idea of my task list when I was feeling better. Sitting up got easier, so I added some binge TV watching into the mix and also finally had the opportunity to watch some videos from the OpenStack Summit I missed – awesome!

On Wednesday afternoon I started easing back into work with a couple of patch fix-ups and starting to more actively follow up with email. I even made it out to an OpenStack 4th birthday party for a little while on Wednesday night, which was fortuitously held at a gallery on my block so I was able to go home quickly as soon as I started feeling tired. I’m also happy to say that I wore an elastic waist cotton skirt to this, not my pajamas! Thursday and Friday I still took a lot of breaks from my desk, but was able to start getting caught up with work.

I’m still taking it easy this weekend and on Tuesday I have a follow-up appointment with the surgeon to confirm that everything is healing well. I am hopeful that I’ll be feeling much better by Monday, and certainly by the time I’m boarding a plane to Philly on Thursday. Fortunately MJ is coming with me and has offered to handle the luggage, which is great because aside from wanting him to join me on this trip anyway, I probably won’t be ready to haul around anything heavy yet.

So far I haven’t had trouble eating anything, even when I took a risk and had pizza (fatty!) and egg rolls (fried!) this week. And while I still have surgical pain lurking around and some more healing to do, the constant pain I was having left with my gallbladder. I am so happy! This has truly been a terrible few months for me, I’m looking forward to having energy again so I can get back to my usual productive self and to getting back on track with my diet and exercise routine.

by pleia2 at August 02, 2014 01:26 PM

August 01, 2014

Akkana Peck

Predicting planetary visibility with PyEphem

Part II: Predicting Conjunctions

After I'd written a basic script to calculate when planets will be visible, the next step was predicting conjunctions, times when two or more planets are close together in the sky.

Finding separation between two objects is easy in PyEphem: it's just one line once you've set up your objects, observer and date.

import ephem

p1 = ephem.Mars()
p2 = ephem.Jupiter()
observer = ephem.Observer()  # and then set it to your city, etc.
observer.date = ephem.date('2014/8/1')
p1.compute(observer)
p2.compute(observer)

ephem.separation(p1, p2)
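
Extending that check to a whole set of planets is nearly as short. Here's a simplified sketch (the planet list and the 4-degree threshold are just examples):

import itertools
import math
import ephem

observer = ephem.Observer()     # set location and date as above
observer.date = ephem.date('2014/8/1')

min_sep = math.radians(4)       # ephem separations are angles, in radians

planets = [ephem.Mercury(), ephem.Venus(), ephem.Mars(),
           ephem.Jupiter(), ephem.Saturn()]
for p in planets:
    p.compute(observer)

for p1, p2 in itertools.combinations(planets, 2):
    sep = ephem.separation(p1, p2)
    if sep < min_sep:
        print("Conjunction between %s and %s, separation %s"
              % (p1.name, p2.name, sep))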

So all I have to do is loop over all the visible planets and see when the separation is less than some set minimum, like 4 degrees, right?

Well, not really. That tells me if there's a conjunction between a particular pair of planets, like Mars and Jupiter. But the really interesting events are when you have three or more objects close together in the sky. And events like that often span several days. If there's a conjunction of Mars, Venus, and the moon, I don't want to print something awful like

Friday:
  Conjunction between Mars and Venus, separation 2.7 degrees.
  Conjunction between the moon and Mars, separation 3.8 degrees.
Saturday:
  Conjunction between Mars and Venus, separation 2.2 degrees.
  Conjunction between Venus and the moon, separation 3.9 degrees.
  Conjunction between the moon and Mars, separation 3.2 degrees.
Sunday:
  Conjunction between Venus and the moon, separation 4.0 degrees.
  Conjunction between the moon and Mars, separation 2.5 degrees.

... and so on, for each day. I'd prefer something like:

Conjunction between Mars, Venus and the moon lasts from Friday through Sunday.
  Mars and Venus are closest on Saturday (2.2 degrees).
  The moon and Mars are closest on Sunday (2.5 degrees).

At first I tried just keeping a list of planets involved in the conjunction. So if I see Mars and Jupiter close together, I'd make a list [mars, jupiter], and then if I see Venus and Mars on the same date, I search through all the current conjunction lists and see if either Venus or Mars is already in a list, and if so, add the other one. But that got out of hand quickly. What if my conjunction list looks like [ [mars, venus], [jupiter, saturn] ] and then I see there's also a conjunction between Mars and Jupiter? Oops -- how do you merge those two lists together?

The solution to taking all these pairs and turning them into a list of groups that are all connected actually lies in graph theory: each conjunction pair, like [mars, venus], is an edge, and the trick is to find all the connected edges. But turning my list of conjunction pairs into a graph so I could use a pre-made graph theory algorithm looked like it was going to be more code -- and a lot harder to read and less maintainable -- than making a bunch of custom Python classes.
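
Just to illustrate the grouping idea in plain Python (this is only a sketch of the merging logic, not the classes the script actually uses):

# Sketch: merge conjunction pairs into connected groups of bodies,
# using plain sets. Illustrative only -- not the script's real classes.
def group_conjunctions(pairs):
    """pairs: a list of 2-tuples of body names, e.g. [('Mars', 'Venus')].
       Returns a list of sets, one per connected group."""
    groups = []
    for a, b in pairs:
        touching = [g for g in groups if a in g or b in g]
        merged = set([a, b])
        for g in touching:
            merged |= g
            groups.remove(g)
        groups.append(merged)
    return groups

print group_conjunctions([('Mars', 'Venus'), ('Jupiter', 'Saturn'),
                          ('Mars', 'Jupiter')])
# All four bodies end up merged into a single group.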

I eventually ended up with three classes: ConjunctionPair, for a single conjunction observed between two bodies on a single date; Conjunction, a collection of ConjunctionPairs covering as many bodies and dates as needed; and ConjunctionList, the list of all Conjunctions currently active. That let me write methods to handle merging multiple conjunction events together if they turned out to be connected, as well as a method to summarize the event in a nice, readable way.

So predicting conjunctions ended up being a lot more code than I expected -- but only because of the problem of presenting it neatly to the user. As always, user interface represents the hardest part of coding.

The working script is on github at conjunctions.py.

August 01, 2014 01:57 AM

July 31, 2014

Elizabeth Krumbach

A Career in FOSS at Fosscon in Philadelphia, August 9th

After years fueled by hobbyist passion, I’ve been really excited to see the work that many of my peers and I have been doing in open source grow into serious technical careers these past few years. Whether you’re a programmer, community manager, systems administrator like me or another type of technologist, familiarity with Open Source technology, culture and projects can be a serious boon to your career.

Last year when I attended Fosscon in Philadelphia, I did a talk about my work as an “Open Source Sysadmin” – meaning all my work for the OpenStack Infrastructure team is done in public code repositories. Following my talk I got a lot of questions about how I’m funded to do this, and a lot of interest in the fact that a company like HP is making such an investment.

So this year I’m returning to Fosscon to talk about these things! In addition to my own experiences with volunteer and paid work in Open Source, I’ll be drawing on the experience of my colleague at HP, Mark Atwood, who recently wrote 7 skills to land your open source dream job, and that of other folks I work with who are also “living the dream” with a job in open source.

I’m delighted to be joined at this conference by keynote speaker and friend Corey Quinn and by Charlie Reisinger of Penn Manor School District, with whom I’ve chatted via email and social media many times about the amazing Ubuntu deployment in his district and whom I’m looking forward to finally meeting.

In Philadelphia or nearby? The conference is coming up on Saturday, August 9th and is being held at the world-renowned Franklin Institute science museum.

Registration for the conference is free, but you get a t-shirt if you pay a small $25 fee to support the conference (I did!): http://fosscon.us/Attend

by pleia2 at July 31, 2014 05:05 PM

July 24, 2014

Akkana Peck

Predicting planetary visibility with PyEphem

Part 1: Basic Planetary Visibility

All through the years I was writing the planet observing column for the San Jose Astronomical Association, I was annoyed at the lack of places to go to find out about upcoming events like conjunctions, when two or more planets are close together in the sky. It's easy to find out about conjunctions in the next month, but not so easy to find sites that will tell you several months in advance, like you need if you're writing for a print publication (even a club newsletter).

For some reason I never thought about trying to calculate it myself. I just assumed it would be hard, and wanted a source that could spoon-feed me the predictions.

The best source I know of is the RASC Observer's Handbook, which I faithfully bought every year and checked each month so I could enter that month's events by hand. Except for January and February, when I didn't have the next year's handbook yet by the time my column went to press and I was on my own. I have to confess, I was happy to get away from that aspect of the column when I moved.

In my new town, I've been helping the local nature center with their website. They had some great pages already, like a What's Blooming Now? page that keeps track of which flowers are blooming now and only shows the current ones. I've been helping them extend it by adding features like showing only flowers of a particular color, separating the data into CSV databases so it's easier to add new flowers or butterflies, and so forth. Eventually we hope to build similar databases of birds, reptiles and amphibians.

And recently someone suggested that their astronomy page could use some help. Indeed it could -- it hadn't been updated in about five years. So we got to work looking for a source of upcoming astronomy events we could use as a data source for the page, and we found sources for a few things, like moon phases and eclipses, but not much.

Someone asked about planetary conjunctions, and remembering how I'd always struggled to find that data, especially in months when I didn't have the RASC handbook yet, I got to wondering about calculating it myself. Obviously it's possible to calculate when a planet will be visible, or whether two planets are close to each other in the sky. And I've done some programming with PyEphem before, and found it fairly easy to use. How hard could it be?

Note: this article covers only the basic problem of predicting when a planet will be visible in the evening. A followup article will discuss the harder problem of conjunctions.

Calculating planet visibility with PyEphem

The first step was figuring out when planets were up. That was straightforward. Make a list of the easily visible planets (remember, this is for a nature center, so people using the page aren't expected to have telescopes):

import ephem

planets = [
    ephem.Moon(),
    ephem.Mercury(),
    ephem.Venus(),
    ephem.Mars(),
    ephem.Jupiter(),
    ephem.Saturn()
    ]

Then we need an observer with the right latitude, longitude and elevation. Elevation is apparently in meters, though they never bother to mention that in the PyEphem documentation:

observer = ephem.Observer()
observer.name = "Los Alamos"
observer.lon = '-106.2978'
observer.lat = '35.8911'
observer.elevation = 2286  # meters, though the docs don't actually say

Then we loop over the date range for which we want predictions. For a given date d, we're going to need to know the time of sunset, because we want to know which planets will still be up after nightfall.

observer.date = d
sun = ephem.Sun()
sunset = observer.previous_setting(sun)

Then we need to loop over planets and figure out which ones are visible. It seems like a reasonable first approach to declare that any planet that's visible after sunset and before midnight is worth mentioning.

Now, PyEphem can tell you directly the rising and setting times of a planet on a given day. But I found it simplified the code if I just checked the planet's altitude at sunset and again at midnight. If either one of them is "high enough", then the planet is visible that night. (Fortunately, here in the mid latitudes we don't have to worry that a planet will rise after sunset and then set again before midnight. If we were closer to the arctic or antarctic circles, that would be a concern in some seasons.)

import math

min_alt = 10. * math.pi / 180.    # 10 degrees, converted to radians
for planet in planets:
    observer.date = sunset
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "is already up at sunset"

Easy enough for sunset. But how do we set the date to midnight on that same night? That turns out to be a bit tricky with PyEphem's date class. Here's what I came up with:

    midnight = list(observer.date.tuple())
    midnight[3:6] = [7, 0, 0]
    observer.date = ephem.date(tuple(midnight))
    planet.compute(observer)
    if planet.alt > min_alt:
        print planet.name, "will rise before midnight"

What's that 7 there? That's Greenwich Mean Time when it's midnight in our time zone. It's hardwired because this is for a web site meant for locals. Obviously, for a more general program, you should get the time zone from the computer and add accordingly, and you should also be smarter about daylight savings time and such. The PyEphem documentation, fortunately, gives you tips on how to deal with time zones. (In practice, though, the rise and set times of planets on a given day doesn't change much with time zone.)
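
For what it's worth, here's one rough sketch of how a more general version might find the upcoming local midnight in UTC using the computer's own time zone (it leans on Python's time module and PyEphem's localtime(), and it doesn't try to handle a DST changeover during the night):

import time
import datetime
import ephem

# Sketch: compute the next local midnight as a UTC ephem.Date, using
# the computer's time zone instead of a hardwired offset.
def local_midnight_utc(d):
    local = ephem.localtime(ephem.date(d))          # local civil time
    next_mid = datetime.datetime(local.year, local.month, local.day) \
               + datetime.timedelta(days=1)         # the next local midnight
    if time.daylight and time.localtime().tm_isdst:
        offset_sec = -time.altzone                  # seconds east of UTC (DST)
    else:
        offset_sec = -time.timezone                 # seconds east of UTC
    # Local time = UTC + offset, so UTC = local time - offset.
    return ephem.Date(next_mid - datetime.timedelta(seconds=offset_sec))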

And now you have your predictions of which planets will be visible on a given date. The rest is just a matter of writing it out into your chosen database format.
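
If the target were, say, the same sort of CSV files the nature center's flower pages use, the output step might look roughly like this hypothetical sketch (the filename and column layout are just assumptions):

import csv

# Hypothetical sketch: write one row per prediction to a CSV file.
# The filename and column layout are assumptions, not the site's format.
def write_predictions(filename, predictions):
    """predictions: a list of (date_string, planet_name, note) tuples."""
    with open(filename, 'wb') as outfile:    # 'wb' for Python 2's csv module
        writer = csv.writer(outfile)
        writer.writerow(['date', 'planet', 'note'])
        for row in predictions:
            writer.writerow(row)

write_predictions('visibility.csv',
                  [('2014-08-01', 'Mars', 'up at sunset'),
                   ('2014-08-01', 'Saturn', 'rises before midnight')])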

In the next article, I'll cover planetary and lunar conjunctions -- which were superficially very simple, but turned out to have some tricks that made the programming harder than I expected.

July 24, 2014 03:32 AM