Planet Ubuntu California

December 18, 2014

Akkana Peck

Firefox deprecates flash -- temporarily

Recently Firefox started refusing to run flash, including youtube videos (about the only flash I run). A bar would appear at the top of the page saying "This plug-in is vulnerable and should be upgraded". Apparently Adobe had another security bug. There's an "Update now" button in the Firefox bar, but it's a chimera: Firefox has never known how to install plug-ins for Linux (there are longstanding bugs filed on why it claims to be able to but can't), and it certainly doesn't know how to update a Debian package.

I use a Firefox downloaded from Mozilla.org, but flash from Debian's flashplugin-nonfree package. So I figured updating Debian -- apt-get update; apt-get dist-upgrade -- would fix it. Nope. I still got the same message.

A little googling found several pages recommending update-flashplugin-nonfree --install; I tried that but it didn't help either. It seemed to download a tarball, but as far as I could tell it never unpacked or installed the tarball it downloaded.

What finally did the trick was

apt-get install --reinstall flashplugin-nonfree
That downloaded a new tarball, AND unpacked and installed it. After restarting Firefox, I was able to view the video I'd been trying to watch.
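
Condensed, for anyone hitting the same "vulnerable plug-in" warning with Debian's flashplugin-nonfree package, the sequence looks roughly like this (run as root; the first line alone wasn't enough for me):

# Refresh package lists and upgrade -- didn't fix it on its own
apt-get update && apt-get dist-upgrade
# Force a fresh download, unpack and install of the Adobe tarball -- this did the trick
apt-get install --reinstall flashplugin-nonfree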

December 18, 2014 10:14 PM

December 17, 2014

Jono Bacon

The Impact of One Person

I am 35 years old and people never cease to surprise me. My trip home from Los Angeles today was a good example of this.

It was a tortuous affair that should have been a quick hop from LA to Oakland, popping on BART, and then getting home for a cup of tea and an episode of The Daily Show.

It didn’t work out like that.

My flight was delayed. Then we sat on the tarmac for an hour. Then the new AirBART train was delayed. Then I was delayed at the BART station in Oakland for 30 minutes. Throughout this I was tired, it was raining, and my patience was wearing thin.

Through the duration of this chain of minor annoyances, I was reading about the horrifying school attack in Pakistan. As I read more, related articles were linked with other stories of violence, aggression, and rape, perpetrated by the dregs of our species.

As anyone who knows me will likely testify, I am a generally pretty positive guy who sees the good in people. I have based my entire philosophy in life and the focus of my career on the core belief that people are good, and that the solutions to our problems and the doors to opportunity are created by good people.

On some days though, even the strongest sense of belief in people can be tested when reading about events such as this dreadful act of violence in Pakistan. My seemingly normal trip home from the office in LA just left me disappointed in people.

While stood at the BART station I decided I had had enough and called an Uber. I just wanted to get home and see my family. This is when my mood changed entirely.

Gerald

A few minutes later, my Uber arrived, and I was picked up by an older gentleman called Gerald. He put my suitcase in the trunk of his car and off we went.

We started talking about the Pakistan shooting. We both shared a desperate sense of disbelief at all those innocent children slaughtered. We questioned how anyone with any sense of humanity and emotion could even think about doing that, let alone going through with it. With a somber air filling the car, Gerald switched gears and started talking about his family.

He told me about his two kids, both of whom are in their mid-thirties. He doted on their accomplishments in their careers, their sense of balance and integrity as people, and his three beautiful grandchildren.

He proudly shared that he had shipped his grandkids’ Christmas presents off to them today (they are on the East Coast) so he didn’t miss the big day. He was excited about the joy he hoped the gifts would bring to them. His tone and sentiment were those of happiness and pride.

We exchanged stories about our families, our plans for Christmas, and how lucky we both felt to love and be loved.

While we were generations apart…our age, our experiences, and our differences didn’t matter. We were just proud husbands and fathers who were cherishing the moments in life that were so important to both of us.

We arrived at my home and I told Gerald that until I stepped in his car I was having a pretty shitty trip home and he completely changed that. We shook hands, shared Christmas best wishes, and parted ways.

Good People

What I was expecting to be a typical Uber ride home with me exchanging a few pleasantries and then doing email on my phone, instead really illuminated what is important in life.

We live in a complex world. We live on a planet with a rich tapestry of people and perspectives.

Evil people do exist. I am not referring to a specific religious or spiritual definition of evil, but instead the extreme inverse of the good we see in others.

There are people who can hurt others, who can so violently shatter innocence and bring pain to hundreds, so brutally, and so unnecessarily. I can’t even imagine what the parents of those kids are going through right now.

It can be easy to focus on these tragedies and to think that our world is getting worse; to look at the full gamut of negative humanity, from the inconsequential, such as the miserable lady yelling at the staff at the airport, to the hateful, such as the violence directed at innocent children. It is easy to assume that our species is rotting from the inside out, to see poison in the well, and that the rot is spreading.

While it is easy to lose faith in people, I believe our wider humanity keeps us on the right path.

While there is evil in the world, there is an abundance of good. For every evil person screaming there is a choir of good people who drown them out. These good people create good things, they create beautiful things that help others to also create good things and be good people too.

Like many of you, I am fortunate to see many of these things every day. I see people helping the elderly in their local communities, many donating toys to orphaned kids over the holidays, others creating technology and educational resources that help people to create new content, art, music, businesses, and more. Every day millions devote hours to helping and inspiring others to create a brighter future.

What is most important about all of this is that every individual, every person, every one of you reading this, has the opportunity to have this impact. These opportunities may be small and localized, or they may be large and international, but we can all leave this planet a little better than when we arrived on it.

The simplest way of doing this is to share our humanity with others and to cherish the good in the face of evil. The louder our choir, the weaker theirs.

Gerald did exactly that tonight. He shared happiness and opportunity with a random guy he picked up in his car and I felt I should pass that spirit on to you folks too. Now it is your turn.

Thanks for reading.

by jono at December 17, 2014 07:35 AM

December 16, 2014

Elizabeth Krumbach

Recent time between travel

This year has pretty much been consumed by travel and events. I’ll dive into that more in a wrap-up post in a couple weeks, but for now I’ll just note that it’s been tiring and I’ve worked to value my time at home as much as possible.

It’s been uncharacteristically wet here in San Francisco since coming home from Jamaica. We’re fortunate to have the rain since we’re currently undergoing a pretty massive drought here in California, but I would have been happier if it didn’t come all at once! There was some flooding in our basement garage at the beginning (fortunately a leak was found and fixed) and we had possibly the first power outage since I moved here almost five years ago. Internet has had outages too, which could be a bit tedious work-wise even with a backup connection. All because of a few inches of rain that we’d not think anything of back in Pennsylvania, let alone during the kinds of winter storms I grew up with in Maine.

On Thanksgiving I got ambitious about my time at home and decided to actually make a full dinner. We’d typically either gone out or picked up prepared food somewhere, so this was quite a change from the norm. I skipped the full turkey and went with cutlets I prepared in a pan; the rest of the menu included the usual suspects: gravy, stuffing, mashed potatoes, cranberry sauce, green beans and rolls. I had leftovers for days. I also made MJ suffer with me through a Mystery Science Theater 3000 Turkey Day marathon, hah!

I’ve spent a lot of time catching up with project work in the past few weeks, following up on a number of my Xubuntu tasks and working through my Partimus backlog. Xubuntu-wise we’re working on a few contributor incentives, so I’m receiving a box of Xubuntu stickers in the mail soon, courtesy of UnixStickers.com, which I’ll be sending out to select QA contributors in the coming months. We’re also working on a couple of polls that can give us a better idea of who our user base is and how to serve them better. I also spent an afternoon in Alameda recently to meet with an organization that Partimus may partner with, and met up with the Executive Director this past weekend for a board meeting where we identified some organizational work for the next quarter.

At home I’ve been organizing the condo and I’m happy to report that the boxes are gone, though working from home means I still have too much stuff around all the time. MJ took some time to set up our shiny new PlayStation 4 and several antennas, so our TV now has channels and we can get AM and FM radio. I’ll finally be able to watch baseball at home! I also got holiday cards sent out and some Hanukkah lights put up, so it’s feeling quite comfortable here.

Having time at home has also meant I’ve been able to make time for friends who’ve come into town to visit lately. Laura Czajkowski, who I’ve worked with for years in the Ubuntu community, was recently in town and we met up for dinner. I also recently had dinner with my friend BJ, who I know from the Linux scene back in Philadelphia, though we’ve both moved since. Now I just need to make more time for my local friends.

The holiday season has afforded us some time to dress up and go out, like to a recent holiday party by MJ’s employer.

Plus I’ve had the typical things to keep me busy outside of work: an Ubuntu Hour and Debian Dinner last week, and the Ubuntu Weekly Newsletter, which will hit issue 400 early next year. There’s also work on my book, which I wish were going faster, but it is coming along.

I have one more trip coming this year, off to St. Louis late next week. I’ll be spending a few days visiting with friends and traveling around a city I’ve never been to! This trip will put me over 100k miles for the calendar year, which is a pretty big milestone for me, and one I’m not sure I’ll reach again. Plans are still firming up for how my travel schedule will look next year, but I do have a couple big international trips on the horizon that I’m excited about.

by pleia2 at December 16, 2014 05:24 AM

December 12, 2014

Eric Hammond

Exploring The AWS Lambda Runtime Environment

In the AWS Lambda Shell Hack article, I present a crude hack that lets me run shell commands in the AWS Lambda environment to explore what might be available to Lambda functions running there.

I’ve added a wrapper that lets me type commands on my laptop and see the output of the command run in the Lambda function. This is not production quality software, but you can take a look at it in the alestic/lambdash GitHub repo.

For the curious, here are some results. Please note that this is running on a preview and is in no way a guaranteed part of the environment of a Lambda function. Amazon could change any of it at any time, so don’t build production code using this information.

The version of Amazon Linux:

$ lambdash cat /etc/issue
Amazon Linux AMI release 2014.03
Kernel \r on an \m

The kernel version:

$ lambdash uname -a
Linux ip-10-0-168-157 3.14.19-17.43.amzn1.x86_64 #1 SMP Wed Sep 17 22:14:52 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

The working directory of the Lambda function:

$ lambdash pwd
/var/task

which contains the unzipped contents of the Lambda function I uploaded:

$ lambdash ls -l
total 12
-rw-rw-r-- 1 slicer 497 5195 Nov 18 05:52 lambdash.js
drwxrwxr-x 5 slicer 497 4096 Nov 18 05:52 node_modules

The user running the Lambda function:

$ lambdash id
uid=495(sbx_user1052) gid=494 groups=494

which is one of one hundred sbx_userNNNN users in /etc/passwd. “sbx_user” presumably stands for “sandbox user”.
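
A quick way to double-check that count with the same hack (just a grep; if nothing has changed it should print 100):

$ lambdash 'grep -c sbx_user /etc/passwd'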

The environment variables (in a shell subprocess). This appears to be how AWS Lambda is passing the AWS credentials to the Lambda function.

$ lambdash env
AWS_SESSION_TOKEN=[ELIDED]
LAMBDA_TASK_ROOT=/var/task
LAMBDA_CONSOLE_SOCKET=14
PATH=/usr/local/bin:/usr/bin:/bin
PWD=/var/task
AWS_SECRET_ACCESS_KEY=[ELIDED]
NODE_PATH=/var/runtime:/var/task:/var/runtime/node_modules
AWS_ACCESS_KEY_ID=[ELIDED]
SHLVL=1
LAMBDA_CONTROL_SOCKET=11
_=/usr/bin/env

The versions of various pre-installed software:

$ lambdash perl -v
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
[...]

$ lambdash python --version
Python 2.6.9

$ lambdash node -v
v0.10.32

Running processes:

$ lambdash ps axu
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
493          1  0.2  0.7 1035300 27080 ?       Ssl  14:26   0:00 node --max-old-space-size=0 --max-new-space-size=0 --max-executable-size=0 /var/runtime/node_modules/.bin/awslambda
493         13  0.0  0.0  13444  1084 ?        R    14:29   0:00 ps axu

The entire file system: 2.5 MB download

$ lambdash ls -laiR /
[output omitted; available as a download linked in the original article]

Kernel ring buffer: 34K download

$ lambdash dmesg
[output omitted; available as a download linked in the original article]

CPU info:

$ lambdash cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 62
model name  : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
stepping    : 4
microcode   : 0x416
cpu MHz     : 2800.110
cache size  : 25600 KB
physical id : 0
siblings    : 2
core id     : 0
cpu cores   : 1
apicid      : 0
initial apicid  : 0
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
bogomips    : 5600.22
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
[...]

Installed nodejs modules:

$ dirs=$(lambdash 'echo $NODE_PATH' | tr ':' '\n' | sort)
$ echo $dirs
/var/runtime /var/runtime/node_modules /var/task

$ lambdash 'for dir in '$dirs'; do echo $dir; ls -1 $dir; echo; done'
/var/runtime
node_modules

/var/runtime/node_modules
aws-sdk
awslambda
dynamodb-doc
imagemagick

/var/task # Uploaded in Lambda function ZIP file
lambdash.js
node_modules

[Update 2014-12-03]

We’re probably not on a bare EC2 instance. The standard EC2 instance metadata service is not accessible through HTTP:

$ lambdash curl -sS http://169.254.169.254:8000/latest/meta-data/instance-type
curl: (7) Failed to connect to 169.254.169.254 port 8000: Connection refused

Browsing the AWS Lambda environment source code turns up some nice hints about where the product might be heading. I won’t paste the copyrighted code here, but you can download into an “awslambda” subdirectory with:

$ lambdash 'cd /var/runtime/node_modules;tar c awslambda' | tar xv

[Update 2014-12-11]

There’s a half gig of writable disk space available under /tmp (when run with 256 MB of RAM; does this scale up with memory?):

$ lambdash 'df -h 2>/dev/null'
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       30G  1.9G   28G   7% /
devtmpfs         30G  1.9G   28G   7% /dev
/dev/xvda1       30G  1.9G   28G   7% /
/dev/loop0      526M  832K  514M   1% /tmp

Anything else you’d like to see? Suggest commands in the comments on this article.

Original article: http://alestic.com/2014/11/aws-lambda-environment

by Eric Hammond at December 12, 2014 05:07 AM

December 10, 2014

Akkana Peck

Not exponential after all

We're saved! From the embarrassing slogan "Live exponentially", that is.

Last night the Los Alamos city council voted to bow to public opinion and reconsider the contract to spend $50,000 on a logo and brand strategy based around the slogan "Live Exponentially." Though nearly all the councilors (besides Pete Sheehey) said they still liked the slogan, and made it clear that the slogan isn't for residents but for people in distant states who might consider visiting as tourists, they now felt that basing a campaign around a theme nearly all of the residents revile was not the best idea.

There were quite a few public comments (mine included); everyone was civil and sensible and stuck well under the recommended 3-minute time limit.

Instead, the plan is to go ahead with the contract, but ask the ad agency (Atlas Services) to choose two of the alternate straplines from the initial list of eight that North Star Research had originally provided.

Wait -- eight options? How come none of the previous press or the previous meeting mentioned that there were options? Even in the 364-page Agenda Packets PDF provided for this meeting, there was no hint of that report or of any alternate straplines.

But when they displayed the list of eight on the board, it became a little clearer why they didn't want to make the report public: they were embarrassed to have paid for work of this quality. Check out the list:

  • Where Everything is Elevated
  • High Intelligence in the High Desert
  • Think Bigger. Live Brighter.
  • Great. Beyond.
  • Live Exponentially
  • Absolutely Brilliant
  • Get to a Higher Plane
  • Never Stop Questioning What's Possible

I mean, really. Great Beyond? Are we all dead? High Intelligence in the High Desert? That'll certainly help with people who think this might be a bunch of snobbish intellectuals.

It was also revealed that at no point during the plan was there ever any sort of focus group study or other tests to see how anyone reacted to any of these slogans.

Anyway, after a complex series of motions and amendments and counter-motions and amendments and amendments to the amendments, they finally decided to ask Atlas to take the above list, minus "Live Exponentially"; add the slogan currently displayed on the rocks as you drive into town, "Where Discoveries are Made" (which came out of a community contest years ago and is very popular among residents); and ask Atlas to choose two from the list to make logos, plus one logo that has no slogan at all attached to it.

If we're lucky, Atlas will pick Discoveries as one of the slogans, or maybe even come up with something decent of their own.

The chicken ordinance discussion went well, too. They amended the ordinance to allow ten chickens (instead of six) and to try to allow people in duplexes and quads to keep chickens if there's enough space between the chickens and their neighbors. One commenter asked for the "non-commercial" clause to be struck because his kids sell eggs from a stand, like lemonade, which sounded like a very reasonable request (nobody's going to run a large commercial egg ranch with ten chickens); but it turned out there's a state law requiring permits and inspections to sell eggs.

So, folks can have chickens, and we won't have to live exponentially. I'm sure everyone's breathing a little more easily now.

December 10, 2014 11:27 PM

December 09, 2014

Eric Hammond

Persistence Of The AWS Lambda Environment Between Function Invocations

AWS Lambda functions are run inside of an Amazon Linux environment (presumably a container of some sort). Sequential calls to the same Lambda function could hit the same or different instantiations of the environment.

If you hit the same copy (I don’t want to say “instance”) of the Lambda function, then stuff you left in the environment from a previous run might still be available.

This could be useful (think caching) or hurtful (if your code incorrectly expects a fresh start every run).

Here’s an example using lambdash, a hack I wrote that sends shell commands to a Lambda function to be run in the AWS Lambda environment, with stdout/stderr being sent back through S3 and displayed locally.

$ lambdash 'echo a $(date) >> /tmp/run.log; cat /tmp/run.log'
a Tue Dec 9 13:54:50 PST 2014

$ lambdash 'echo b $(date) >> /tmp/run.log; cat /tmp/run.log'
a Tue Dec 9 13:54:50 PST 2014
b Tue Dec 9 13:55:00 PST 2014

$ lambdash 'echo c $(date) >> /tmp/run.log; cat /tmp/run.log'
a Tue Dec 9 13:54:50 PST 2014
b Tue Dec 9 13:55:00 PST 2014
c Tue Dec 9 13:55:20 PST 2014

As you can see in this example, the file in /tmp contains content from previous runs.
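
One way to make the reuse visible directly is a marker-file check through the same lambdash hack (just a sketch; the exact wording of the messages is my own):

$ lambdash 'if [ -f /tmp/marker ]; then echo environment reused; else echo fresh environment; touch /tmp/marker; fi'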

These tests are being run in AWS Lambda Preview, and should not be depended on for long term or production plans. Amazon could change how AWS Lambda works at any time for any reason, especially when the behaviors are not documented as part of the interface. For example, Amazon could decide to clear out writable file system areas like /tmp after each run.

If you want to have a dependable storage that can be shared among multiple copies of an AWS Lambda function, consider using standard AWS services like DynamoDB, RDS, ElastiCache, S3, etc.

Original article: http://alestic.com/2014/12/aws-lambda-persistence

by Eric Hammond at December 09, 2014 10:07 PM

December 08, 2014

Eric Hammond

AWS Lambda: Pay The Same Price For Faster Execution

multiply the speed of compute-intensive Lambda functions without (much) increase in cost

Given:

  • AWS Lambda duration charges are proportional to the requested memory.

  • The CPU power, network, and disk are proportional to the requested memory.

One could conclude that the charges are proportional to the CPU power available to the Lambda function. If the function completion time is inversely proportional to the CPU power allocated (not entirely true), then the cost remains roughly fixed as you dial up power to make it faster.

If your Lambda function is primarily CPU bound and takes at least several hundred ms to execute, then you may find that you can simply allocate more CPU by allocating more memory, and get the same functionality completed in a shorter time period for about the same cost.

For example, if you allocate 128 MB of memory and your Lambda function takes 10 seconds to run, then you might be able to allocate 640 MB and see it complete in about 2 seconds.

At current AWS Lambda pricing, both of these would cost about $0.02 per thousand invocations, but the second one completes five times faster.
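
The arithmetic behind that estimate, as a quick sketch (the per GB-second rate is assumed to be roughly $0.00001667, the published rate when this was written, and the small per-request charge is ignored):

# cost per 1000 invocations = memory (GB) x duration (s) x 1000 x rate
awk 'BEGIN {
  rate = 0.00001667
  printf "128 MB x 10 s: $%.4f per 1000 invocations\n", 0.125 * 10 * 1000 * rate
  printf "640 MB x  2 s: $%.4f per 1000 invocations\n", 0.625 *  2 * 1000 * rate
}'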

Things that would cause the higher memory/CPU option to cost more in total include:

  • Time chunks are rounded up to the nearest 100 ms. If your Lambda function already runs near or under 100 ms at the lower memory setting, then increasing the CPU allocated will make it return faster, but the rounding up will make the resulting cost more expensive.

  • Doubling the CPU allocated to a Lambda function does not necessarily cut the run time in half. The code might be accessing external resources (e.g., calling S3 APIs) or interacting with disk. If you double the requested CPU, then those fixed time actions will end up costing twice as much.

If you have a slow Lambda function, and it seems that most of its time is probably spent in CPU activities, then it might be worth testing an increase in requested memory to see if you can get it to complete much faster without increasing the cost by much.

I’d love to hear what practical test results people find when comparing different memory/CPU allocation values for the same Lambda function.

Original article: http://alestic.com/2014/11/aws-lambda-speed

by Eric Hammond at December 08, 2014 05:54 PM

Elizabeth Krumbach

My father passed away 10 years ago

It’s December 7th, which marks 10 years since my father passed away. In the past decade I’ve had much to reflect on about his life.

When he passed away I was 23 and had bought a house in the suburbs of Philadelphia. I had just transitioned from doing web development contract work to working various temp jobs to pay the bills. It was one of those temp jobs that I went to the morning after I learned my father had passed, because I didn’t know what else to do; I quickly learned why people tend to take a few days off when they have such a loss. The distance from home made it challenging to work through the loss, as can be seen in my blog post from the week it happened, and I felt pretty rudderless.

My father had been an inspiration for me. He was always making things, and had a wood workshop where he’d build dollhouses, model planes, and even a stable for my My Little Ponies. He was also a devout Tolkien fan, making The Hobbit a more familiar story for me growing up than Noah’s Ark. I first saw and fell in love with Star Wars because he was a big scifi fan. My passion for technology was sparked when his brother at IBM shipped us our first computer, and he told me stories about talking to people from around the world on his HAM radios. He was also an artist, with his drawings of horses being among my favorites growing up. Quite the Renaissance man. Just this year, when my grandmother passed, I was honored to receive several of his favorite things that she had kept, including a painting that hung in our house growing up, a video of his time at college and photos that highlighted his love of travel.

He was also very hard on me. Every time I excelled, he pushed harder. Unfortunately it felt like I could never do well enough, when in fact I now believe he pushed me for my own good; I could usually take it, and I’m ultimately better for it. I know he was also supremely disappointed that I never went to college, something that was very important to him. This all took me some time to reconcile, but deep down I know my father loved my sisters and me very much, and regardless of what we accomplished I’m sure he’d be proud of all of us.

And he struggled with alcoholism. It’s something I’ve tended to gloss over in most public discussions about him because it’s so painful. It’s had a major impact on my life; I’m pretty much as textbook an example of “eldest child of an alcoholic” as you can get. It also tore apart my family and inevitably led to my father’s death from cirrhosis of the liver. For a long time I was angry with him. Why couldn’t he give it up for his family? Not even to save his own life? I’ve since come to understand that alcoholism is a terrible, destructive thing and for many people it’s a lifelong battle that requires a tremendous amount of support from family and community. While I may have gotten the genetic fun bag of dyslexia, migraines and seizures from my father, I’m routinely thankful I didn’t inherit the predisposition toward alcoholism.

And so, on this sad anniversary, I won’t be having a drink to his life. Instead I think I’ll honor his memory by spending the evening working on one of the many projects that his legacy inspired and that bring me so much joy. I love you, Daddy.

by pleia2 at December 08, 2014 01:49 AM

Akkana Peck

My Letter to the Editor: Make Your Voice Heard On 'Live Exponentially'

More on the Los Alamos "Live Exponentially" slogan saga: There's been a flurry of letters, all opposed to the proposed slogan, in the Los Alamos Daily Post these last few weeks.

And now the issue is back on the council agenda; apparently they're willing to reconsider the October vote to spend another $50,000 on the slogan.

But considering that only two people showed up to that October meeting, I wrote a letter to the Post urging people to speak before the council: Letter to the Editor: Attend Tuesday's Council Meeting To Make Your Voice Heard On 'Live Exponentially'.

I'll be there. I've never actually spoken at a council meeting before, but hey, confidence in public speaking situations is what Toastmasters is all about, right?

(Even though it means I'll have to miss an interesting sounding talk on bats that conflicts with the council meeting. Darn it!)

A few followup details that I had no easy way to put into the Post letter:

The page with the links to Council meeting agendas and packets is here: Los Alamos County Calendar.

There, you can get the short Agenda for Tuesday's meeting, or the full 364-page Agenda Packets PDF.

The branding section covers pages 93 - 287. But the graphics the council apparently found so compelling, which swayed several of them from initially not liking the slogan to deciding to spend a quarter million dollars on it, are in the final presentation from the marketing company, starting on p. 221 of the PDF.

In particular, a series of images like this one, with the snappy slogan:

Breathtaking raised to the power of you
LIVE EXPONENTIALLY

That's right: the advertising graphics that were so compelling they swayed most of the council are even dumber than the slogan by itself. Love the superscript on the you that makes it into an exponent. Get it ... exponentially? Oh, now it all makes sense!

There's also a sadly funny "Written Concept" section just before the graphics (pages 242- in the PDF) where they bend over backward to work in scientific-sounding words, in bold each time.

But there you go. Hopefully some of those Post letter writers will come to the meeting and let the council know what they think.

The council will also be discussing the much debated proposed chicken ordinance; that discussion runs from page 57 to 92 of the PDF. It's a non-issue for Dave and me since we're in a rural zone that already allows chickens, but I hope they vote to allow them everywhere.

December 08, 2014 01:05 AM

December 03, 2014

Elizabeth Krumbach

December 2014 OpenStack Infrastructure User Manual Sprint

Back in April, the OpenStack Infrastructure project created the Infrastructure User Manual. This manual sought to consolidate our existing documentation for Developers, Core Reviewers and Project Drivers, which was spread across wiki pages, project-specific documentation files and general institutional knowledge that was mostly just in our brains.

Books

In July, at our mid-cycle sprint, Anita Kuno drove a push to start getting this document populated. There was some success here; we had a couple of new contributors. Unfortunately, after the mid-cycle, reviews only trickled in and vast segments of the manual remained empty.

At the summit, we had a session to plan out how to change this and announced an online sprint in the new #openstack-sprint channel (see here for scheduling: https://wiki.openstack.org/wiki/VirtualSprints). We hosted the sprint on Monday and Tuesday of this week.

Over these 2 days we collaborated on an etherpad so no one was duplicating work and we all did a lot of reviewing. Contributors worked to flesh out missing pieces of the guide and added a Project Creator’s section to the manual.

We’re now happy to report that, with the exception of the Third Party section of the manual (to be worked on collaboratively with the broader Third Party community at a later date), our manual is looking great!

The following are some stats about our sprint gleaned from Gerrit and Stackalytics:

Sprint start:

  • Patches open for review: 10
  • Patches merged in total repo history: 13

Sprint end:

  • Patches open for review: 3, plus 2 WIP (source)
  • Patches merged during sprint: 30 (source)
  • Reviews: Over 200 (source)

We also have 16 patches for documentation in flight that were initiated or reviewed elsewhere in the openstack-infra project during this sprint, including the important reorganization of the git-review documentation (source).

Finally, thanks to the participants who joined me for this sprint, sorted chronologically by reviews: Andreas Jaeger, James E. Blair, Anita Kuno, Clark Boylan, Spencer Krum, Jeremy Stanley, Doug Hellmann, Khai Do, Antoine Musso, Stefano Maffulli, Thierry Carrez and Yolanda Robla.

by pleia2 at December 03, 2014 04:30 PM

Jono Bacon

Feedback Requested: Great Examples of Community

Folks, I need to ask for some help.

Like many, I have some go-to examples of great communities. This includes Wikipedia, OpenStreetMap, Ubuntu, Debian, Linux, and others. Many of these are software-related; many of them are Open Source.

I would like to ask for your feedback on other examples of great communities. These don’t have to be software-related…in fact I would love to see examples of great communities in other areas and disciplines.

They could be collaborative communities, communities that share a common interest, communities that process big chunks of data, communities that inspire and educate certain groups (e.g. kids, the under-privileged), or anything else.

I am looking for inspiring examples that get to the heart of what makes communities beautiful. These don’t have to be huge and elaborate communities, they just need to demonstrate the power of people sharing a mission or ethos and doing interesting things.

Please share your examples in the comments, and in doing so, please share the following:

  • The name of the community
  • A web address / contact person
  • Overview of the community, what it does, and why you feel it is special

Thanks!

by jono at December 03, 2014 06:13 AM

Eric Hammond

Before You Buy Amazon EC2 (New) Reserved Instances

understand the commitment you are making to pay for the entire 1-3 years

Amazon just announced a change in the way that Reserved Instances are sold. Instead of selling the old Reserved Instance types:

  • Light Utilization
  • Medium Utilization
  • Heavy Utilization

EC2 is now selling these new Reserved Instance types:

  • No Upfront
  • Partial Upfront
  • All Upfront

Despite the fact that they are still called “Reserved Instances” and that there are three plans which sound like increasing commitment, they are not equivalent and do not map 1-1 from old to new. In fact, the new Reserved Instance types do not even represent increasing levels of commitment.

You should forget what you knew about Reserved Instances and read all the fine print before making any further Reserved Instance purchases.

One of the big differences between the old and the new is that you are always committing to spend the entire 1-3 years of cost even if you are not running a matching instance during part of that time. This text is buried in the fine print in a “**” footnote towards the bottom of the pricing page:

When you purchase a Reserved Instance, you are billed for every hour during the entire Reserved Instance term that you select, regardless of whether the instance is running or not.

As I pointed out in the 2012 article titled Save Money by Giving Away Unused Heavy Utilization Reserved Instances, this was also true of Heavy Utilization Reserved Instances, but with the old Light and Medium Utilization Reserved Instances you stopped spending money by stopping or terminating your instance.

Let’s walk through an example with the new EC2 Reserved Instance prices. Say you expect to run a c3.2xlarge for a year. Here are some options at the prices when this article was published:

Pricing Option       Cost Structure                Yearly Cost       Savings over On Demand
On Demand            $0.420/hour                   $3,679.20/year    --
No Upfront RI        $213.16/month                 $2,557.92/year    30%
Partial Upfront RI   $1,304 once + $75.92/month    $2,215.04/year    40%
All Upfront RI       $2,170 once                   $2,170.00/year    41%

There’s a big jump in yearly savings from On Demand to the Reserved Instances, and then there is an increasing (but sometimes small) savings the more of the total cost you pay up front. The percentage savings varies by instance type, so read up on the pricing page.
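
To reproduce the savings column from the yearly costs in the table above (a quick sanity check, nothing more):

on_demand=3679.20
for option in "No Upfront:2557.92" "Partial Upfront:2215.04" "All Upfront:2170.00"; do
  name=${option%%:*}; cost=${option##*:}
  awk -v od=$on_demand -v c=$cost -v n="$name" \
    'BEGIN { printf "%-16s %.0f%% savings over On Demand\n", n, (1 - c/od) * 100 }'
done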

The big difference is that you can stop paying the On Demand price if you decide you don’t need that instance running, or you figure out that the application can work better on a larger (or smaller) instance type.

With all new Reserved Instance pricing options, you commit to paying the entire year’s cost. The only difference is how much of it you pay up front and how much you pay over the next 12 months.

If you purchase a Reserved Instance and decide you don’t need it after a while, you may be able to sell it (perhaps at some loss) on the Reserved Instance Marketplace, but your odds of completing a sale and the money you get back from that are not guaranteed.

Original article: http://alestic.com/2014/12/ec2-reserved-instances

by Eric Hammond at December 03, 2014 12:23 AM

December 02, 2014

Akkana Peck

Ripping a whole CD on Linux

I recently discovered that my ancient stereo turntable didn't survive our move. So all those LPs I brought along, intending to rip to mp3 when I had more time, will never see bits.

So I need to buy new versions of some of that old music. In particular, I'd lately been wanting to listen to my old Flanders and Swann albums. Flanders and Swann were a terrific comedy music duo (think Tom Lehrer only less scientifically oriented) from the 1960s.

So I ordered a CD of The Complete Flanders & Swann, which contains all three of the albums I inherited from my parents. Woohoo! I ran a little script I have that rips a whole CD to a directory of separate MP3 songs, and I was all set.

Until I listened to it. It turns out that when the LP album was turned into a CD, they put the track breaks in the wrong place. These albums are recordings of live performances. Each song has a spoken intro, giving a little context for the song that follows. On the CD, each track starts with a song, and ends with the spoken intro for the next song. That's no problem if you always listen to whole albums in order. But I like to play individual tracks, or listen to music on random play. So this wasn't going to work at all.

I tried using audacity to copy the intro from the end of one track and paste it onto the beginning of another. That worked, but it was tedious and fiddly. A little research showed me a much better way.

First: Rip the whole CD

First I needed to rip the whole CD as one gigantic track. My script had been running cdparanoia tracknumber filename.wav. But it took some study of the cdparanoia manual before I finally found the way to rip a whole CD to one track: you can specify a range of tracks, starting at 0 and omitting the end track.

cdparanoia 0- outfile.wav

Use Audacity to split and save the tracks

Now what's the best way to split a recording into separate tracks? Fortunately the Audacity manual has a nice page on that very subject: Splitting a recording into separate tracks.

Mostly, the issue is setting labels -- with Tracks->Add Label at Selection or Tracks->Add Label at Playback Position. Use Ctrl-1 to zoom as much as you need to see where the short pauses are. Then listen to the audio, pausing or clicking and setting labels appropriately.

It's a bit fiddly. For instance, if you pause your listening to set a label, you might want to save the audacity project so you don't lose the label positions you've set so far. But you can't save unless you Stop the playback; and that loses the current playback position which you may not yet have set a label for. Even if you have set a label for it, you'll need to click to set the selection to the label you just made if you want to continue playing from where you left off. It all seems a little silly and unintuitive ... but after a few tries you'll find a routine that works for you.

When all your labels are set, then File->Export Multiple.... You will have to go through a bunch of dialogs involving metadata for each track; just hit return, since audacity ignores any metadata you type in and won't actually write it to the MP3 file. I have no idea why it always prompts for metadata then doesn't use it, but you can use a program like id3tool later to add proper metadata to the tracks.
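
For example, tagging one exported track with id3tool might look something like this (the long option names are from memory and the filename is just an example, so check id3tool --help or the man page before relying on it):

# Add title/artist/album tags that audacity didn't write
id3tool --set-title "Misalliance" \
        --set-artist "Flanders and Swann" \
        --set-album "The Complete Flanders & Swann" \
        01-misalliance.mp3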

So, no, the tools aren't perfect. On the other hand, I now have a nice set of Flanders and Swann tracks, and can listen to Misalliance, Ill Wind and The GNU Song complete with their proper introductions.

December 02, 2014 08:35 PM

December 01, 2014

Eric Hammond

S3 Bucket Notification to SQS/SNS on Object Creation

A fantastic new and oft-requested AWS feature was released during AWS re:Invent, but has gotten lost in all the hype about AWS Lambda functions being triggered when objects are added to S3 buckets. AWS Lambda is currently in limited Preview mode and you have to request access, but this related feature is already available and ready to use.

I’m talking about automatic S3 bucket notifications to SNS topics and SQS queues when new S3 objects are added.

Unlike AWS Lambda, with S3 bucket notifications you do need to maintain the infrastructure to run your code, but you’re already running EC2 instances for application servers and job processing, so this will fit right in.

To detect and respond to S3 object creation in the past, you needed to either have every process that uploaded to S3 subsequently trigger your back end code in some way, or you needed to poll the S3 bucket to see if new objects had been added. The former adds code complexity and tight coupling dependencies. The latter can be costly in performance and latency, especially as the number of objects in the bucket grows.

With the new S3 bucket notification configuration options, the addition of an object to a bucket can send a message to an SNS topic or to an SQS queue, triggering your code quickly and effortlessly.

Here’s a working example of how to set up and use S3 bucket notification configurations to send messages to SNS on object creation and update.

Setup

Replace parameter values with your preferred names.

region=us-east-1
s3_bucket_name=BUCKETNAMEHERE
email_address=YOURADDRESS@EXAMPLE.COM
sns_topic_name=s3-object-created-$(echo $s3_bucket_name | tr '.' '-')
sqs_queue_name=$sns_topic_name

Create the test bucket.

aws s3 mb \
  --region "$region" \
  s3://$s3_bucket_name

Create an SNS topic.

sns_topic_arn=$(aws sns create-topic \
  --region "$region" \
  --name "$sns_topic_name" \
  --output text \
  --query 'TopicArn')
echo sns_topic_arn=$sns_topic_arn

Allow S3 to publish to the SNS topic for activity in the specific S3 bucket.

aws sns set-topic-attributes \
  --topic-arn "$sns_topic_arn" \
  --attribute-name Policy \
  --attribute-value '{
      "Version": "2008-10-17",
      "Id": "s3-publish-to-sns",
      "Statement": [{
              "Effect": "Allow",
              "Principal": { "AWS" : "*" },
              "Action": [ "SNS:Publish" ],
              "Resource": "'$sns_topic_arn'",
              "Condition": {
                  "ArnLike": {
                      "aws:SourceArn": "arn:aws:s3:*:*:'$s3_bucket_name'"
                  }
              }
      }]
  }'

Add a notification to the S3 bucket so that it sends messages to the SNS topic when objects are created (or updated).

aws s3api put-bucket-notification \
  --region "$region" \
  --bucket "$s3_bucket_name" \
  --notification-configuration '{
    "TopicConfiguration": {
      "Events": [ "s3:ObjectCreated:*" ],
      "Topic": "'$sns_topic_arn'"
    }
  }'
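
To confirm that the notification configuration took effect, you can read it back (a quick check that is not part of the original walkthrough steps):

aws s3api get-bucket-notification \
  --region "$region" \
  --bucket "$s3_bucket_name"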

Test

You now have an S3 bucket that is going to post a message to an SNS topic when objects are added. Let’s give it a try by connecting an email address listener to the SNS topic.

Subscribe an email address to the SNS topic.

aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol email \
  --notification-endpoint "$email_address"

IMPORTANT! Go to your email inbox now and click the link to confirm that you want to subscribe that email address to the SNS topic.

Upload one or more files to the S3 bucket to trigger the SNS topic messages.

aws s3 cp [SOMEFILE] s3://$s3_bucket_name/testfile-01

Check your email for the notification emails in JSON format, containing attributes like:

{ "Records":[  
    { "eventTime":"2014-11-27T00:57:44.387Z",
      "eventName":"ObjectCreated:Put", ...
      "s3":{
        "bucket":{ "name":"BUCKETNAMEHERE", ... },
        "object":{ "key":"testfile-01", "size":5195, ... }
}}]}

Notification to SQS

The above example connects an SNS topic to the S3 bucket notification configuration. Amazon also supports having the bucket notifications go directly to an SQS queue, but I do not recommend it.

Instead, send the S3 bucket notification to SNS and have SNS forward it to SQS. This way, you can easily add other listeners to the SNS topic as desired. You can even have multiple SQS queues subscribed, which is not possible when using a direct notification configuration.

Here are some sample commands that create an SQS queue and connect it to the SNS topic.

Create the SQS queue and get the ARN (Amazon Resource Name). Some APIs need the SQS URL and some need the SQS ARN. I don’t know why.

sqs_queue_url=$(aws sqs create-queue \
  --queue-name $sqs_queue_name \
  --attributes 'ReceiveMessageWaitTimeSeconds=20,VisibilityTimeout=300'  \
  --output text \
  --query 'QueueUrl')
echo sqs_queue_url=$sqs_queue_url

sqs_queue_arn=$(aws sqs get-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attribute-names QueueArn \
  --output text \
  --query 'Attributes.QueueArn')
echo sqs_queue_arn=$sqs_queue_arn

Give the SNS topic permission to post to the SQS queue.

sqs_policy='{
    "Version":"2012-10-17",
    "Statement":[
      {
        "Effect":"Allow",
        "Principal": { "AWS": "*" },
        "Action":"sqs:SendMessage",
        "Resource":"'$sqs_queue_arn'",
        "Condition":{
          "ArnEquals":{
            "aws:SourceArn":"'$sns_topic_arn'"
          }
        }
      }
    ]
  }'
sqs_policy_escaped=$(echo $sqs_policy | perl -pe 's/"/\\"/g')
sqs_attributes='{"Policy":"'$sqs_policy_escaped'"}'
aws sqs set-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attributes "$sqs_attributes"

Subscribe the SQS queue to the SNS topic.

aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol sqs \
  --notification-endpoint "$sqs_queue_arn"

You can upload another test file to the S3 bucket, which will now generate both the email and a message to the SQS queue.

aws s3 cp [SOMEFILE] s3://$s3_bucket_name/testfile-02

Read the S3 bucket notification message from the SQS queue:

aws sqs receive-message \
  --queue-url $sqs_queue_url

The output of that command is not quite human readable as it has quoted JSON inside quoted JSON inside JSON, but your queue processing software should be able to decode it and take appropriate actions.
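
If you just want to peek at the uploaded object’s key from the command line, something like this works (a sketch that assumes jq is installed; real processing belongs in your queue consumer):

aws sqs receive-message \
  --queue-url "$sqs_queue_url" \
  --output json |
  jq -r '.Messages[0].Body | fromjson | .Message | fromjson | .Records[0].s3.object.key'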

You can tell the SQS queue that you have “processed” the message by grabbing the “ReceiptHandle” value from the above output and deleting the message.

sqs_receipt_handle=...
aws sqs delete-message \
  --queue-url "$sqs_queue_url" \
  --receipt-handle "$sqs_receipt_handle"

You only have a limited amount of time to process the message and delete it before SQS tosses it back in the queue for somebody else to process. This test queue gives you 5 minutes (VisibilityTimeout=300). If you go past this timeout, simply read the message from the queue and try again.

Cleanup

Delete the SQS queue:

aws sqs delete-queue \
  --queue-url "$sqs_queue_url"

Delete the SNS topic (and all subscriptions).

aws sns delete-topic \
  --region "$region" \
  --topic-arn "$sns_topic_arn"

Delete test objects in the bucket:

aws s3 rm s3://$s3_bucket_name/testfile-01
aws s3 rm s3://$s3_bucket_name/testfile-02

Delete the bucket, but only if it was created for this test!

aws s3 rb s3://$s3_bucket_name

Note: There is currently no way that I’ve found to use the aws-cli to remove an S3 bucket notification configuration if you want to keep the bucket. This must be done through the S3 API or AWS console.

History / Future

If the concept of an S3 bucket notification sounds a bit familiar, it’s because AWS S3 has had it for years, but the only supported event type was “s3:ReducedRedundancyLostObject”, triggered when S3 lost an RRS object. Given the way that this feature was designed, we all assumed that Amazon would eventually add more useful events like “S3 object created”, which indeed they released a couple weeks ago.

I would continue to assume/hope that Amazon will eventually support an “S3 object deleted” event because it just makes too much sense for applications that need to keep track of the keys in a bucket.

Original article: http://alestic.com/2014/12/s3-bucket-notification-to-sqssns-on-object-creation

by Eric Hammond at December 01, 2014 06:16 PM

November 29, 2014

Elizabeth Krumbach

My Smart Watch

I wear a watch.

Like many people, I went through a period where I thought my phone was enough. However, when my travel schedule picked up, I often found myself on planes with my phone off in an effort to save the battery for whatever exotic land I found myself in next. I also found it was nice to have a clock I could adjust so I knew what time it was in that foreign land before I got there. Enter the mechanical watch.

When I learned I’d be receiving an Android Wear device at Google I/O I was skeptical that I’d have a real use for it, but amused and happy to give it a chance. I didn’t have high hopes though, another device to charge? Will interaction with my phone through a tiny device actually be that useful?

I’m happy to report that my skepticism was unnecessary. I have the Samsung Gear Live and I couldn’t be happier.

The battery life will last me a couple days, which is plenty of time to get me to my next destination, and I turn it off at night if I’m really concerned about not getting to an outlet (or just being too lazy to do so).

And usefulness? It sends alerts to my watch, so at a glance I can see Twitter mentions and replies, and quickly favorite or retweet them from my watch. Perhaps my favorite feature is the ability to control Google Play Music via the watch; walking around town I no longer need to dig my phone out of my purse to change the song (or now, adjust volume!). As an added bonus, the watch also has an icon for when it’s disconnected from my phone, so if I walk out the door and don’t remember if I grabbed my phone? Check my watch.

In addition to all this, it’s also much less distracting; I can feel in touch with people trying to contact me without having my face rudely buried in my phone all the time. I only need to pull out my phone when I actually have something to act on, which is pretty rare.

It seems I’m not alone. I was delighted to read this piece in Smithsonian Magazine several months ago: The Pocket Watch Was the World’s First Wearable Tech Game Changer. Unless some other, more convenient and socially acceptable wearable tech comes out, I’m hoping smart watches will catch on.

Perhaps the only caveat is how it looks. When I’m attending a wedding or nice dinner, I’m not going to strap on my giant black Gear Live, I switch back to my pretty mechanical watch. So I’m looking forward to the market opening up and giving us more options device-wise. In addition to something more feminine, a hybrid of mechanical and digital like the upcoming Kairos watches would be a lot of fun.

by pleia2 at November 29, 2014 12:34 AM

November 27, 2014

Eric Hammond

lambdash: AWS Lambda Shell Hack

I spent the weekend learning just enough JavaScript and nodejs to hack together a Lambda function that runs arbitrary shell commands in the AWS Lambda environment.

This hack allows you to explore the current file system, learn what versions of Perl and Python are available, and discover what packages might be installed.

If you’re interested in seeing the results, then read the following article, which uses this AWS Lambda shell hack to examine the inside of the AWS Lambda runtime environment.

Exploring The AWS Lambda Runtime Environment

Now on to the hack…

Setup

Define the basic parameters.

# Replace with your bucket name
bucket_name=lambdash.alestic.com

function=lambdash
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
log_group_name=/aws/lambda/$function

Create the IAM role that will be used by the Lambda function when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
      }]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do and access: log to CloudWatch and upload files to a specific S3 bucket/location.

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": [ "logs:*" ],
          "Resource": "arn:aws:logs:*:*:*"
      }, {
          "Effect": "Allow",
          "Action": [ "s3:PutObject" ],
          "Resource": "arn:aws:s3:::'$bucket_name'/'$function'/*"
      }]
  }'

Grab the current Lambda function JavaScript from the Alestic lambdash GitHub repository, create the ZIP file, and upload the new Lambda function.

wget -q -O$function.js \
  https://raw.githubusercontent.com/alestic/lambdash/master/lambdash.js
npm install async fs tmp
zip -r $function.zip $function.js node_modules
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --runtime nodejs \
  --mode event \
  --handler "$function.handler" \
  --role "$lambda_execution_role_arn" \
  --timeout 60 \
  --memory-size 256

Invoke the Lambda function with the desired command and S3 output locations. Adjust the command and repeat as desired.

cat > $function-args.json <<EOM
{
    "command": "ls -laiR /",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"

Look at the Lambda function log output in CloudWatch.

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Get the command output.

aws s3 cp s3://$bucket_name/$function/stdout.txt .
aws s3 cp s3://$bucket_name/$function/stderr.txt .
less stdout.txt stderr.txt
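
For repeated use, the invoke and download steps can be wrapped in a small shell function (a rough sketch only; the lambdash wrapper in the GitHub repo does this properly, including waiting for the output to actually appear):

lambdash_run() {
  local command="$1"
  cat > "$function-args.json" <<EOM
{
    "command": "$command",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM
  aws lambda invoke-async \
    --function-name "$function" \
    --invoke-args "$function-args.json"
  sleep 15    # crude wait for the asynchronous invocation to finish and upload
  aws s3 cp s3://$bucket_name/$function/stdout.txt .
  aws s3 cp s3://$bucket_name/$function/stderr.txt .
  cat stdout.txt stderr.txt
}

# Example: lambdash_run 'uname -a'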

Clean up

If you are done with this example, you can delete the created resources. Or, you can leave the Lambda function in place ready for future use. After all, you aren’t charged unless you use it.

aws s3 rm s3://$bucket_name/$function/stdout.txt
aws s3 rm s3://$bucket_name/$function/stderr.txt
aws lambda delete-function \
  --function-name "$function"
aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_execution_role_name"
aws logs delete-log-group \
  --log-group-name "$log_group_name"

Requests

What command output would you like to see in the Lambda environment?

Original article: http://alestic.com/2014/11/aws-lambda-shell

by Eric Hammond at November 27, 2014 02:33 AM

November 26, 2014

Akkana Peck

Yam-Apple Casserole

Yams. I love 'em. (Actually, technically I mean sweet potatoes, since what we call "yams" here in the US aren't actual yams, but the root from a South American plant, Ipomoea batatas, related to the morning glory. I'm not sure I've ever had an actual yam, a tuber from an African plant of the genus Dioscorea).

But what's up with the way people cook them? You take something that's inherently sweet and yummy -- and then you cover them with brown sugar and marshmallows and maple syrup and who knows what else. Do you sprinkle sugar on apples before you eat them?

Normally, I bake a yam for about an hour in the oven, or, if time is short (which it usually is), microwave it for about four and a half minutes, then finish up with 20-40 minutes in a toaster oven at 350°. The oven part seems to be necessary: it brings out the sweetness and the nice crumbly texture in a way that the microwave doesn't. You can read about some of the science behind this at this Serious Eats discussion of cooking sweet potatoes: it's because sweet potatoes have an odd enzyme, beta amylase, that breaks down carbohydrates into sugars, thus bringing out the vegetable's sweetness, but that enzyme only works in a limited temperature range, so if you heat up a sweet potato too fast the enzyme doesn't have time to work.

But Thanksgiving is coming up, and for a friend's dinner party, I wanted to make something a little more festive (and more easily parceled out) than whole baked yams.

A web search wasn't much help: nearly everything I found involved either brown sugar or syrup. The most interesting casserole recipes I saw fell into two categories: sweet and spicy yams with chile powder and cayenne pepper (and brown sugar), and yam-apple casseroles (with brown sugar and lemon juice). As far as I can tell it has never occurred to anyone, before me, to try either of these without added sugar. So I bravely volunteered myself as test subject.

I was very pleased with the results. The combination of the tart apples, the sweet yams and the various spices made a lovely combination. And it's a lot healthier than the casseroles with all the sugary stuff piled on top.

Yam-Apple Casserole without added sugar

Ingredients:

  • Yams, as many as needed.
  • Apples: 1-2 apples per yam. Use a tart variety, like granny smith.
  • chile powder
  • sage
  • rosemary or thyme
  • cumin
  • nutmeg
  • ginger powder
  • salt
(Your choice whether to use all of these spices, just some, or different ones.)

Peel and dice yams and apples into bite-sized pieces, inch or half-inch cubes. (Peeling the yams is optional.)

Drizzle a little olive oil over the yam and apple pieces, then sprinkle spices. Your call as to which spices and how much. Toss it all together until the pieces are all evenly coated with oil and the spices look evenly distributed.

Lay out in a casserole dish or cake pan and bake at 350°F until the yam pieces are soft. This takes at least an hour, two if you made big pieces or layered the pieces thickly in the pan. The apples will mostly disintegrate into little mushy bits between the pieces of yam, but that's fine -- they're there for flavor, not consistency.

Note: After reading about beta-amylase and its temperature range, I had the bright idea that it would be even better to do this in a crockpot. Long cooking at low temps, right? Wrong! The result was terrible, almost completely tasteless. Stick to using the oven.

I'm going to try adding some parsnips, too, though parsnips seem to need to cook longer than sweet potatoes, so it might help to pre-cook the parsnips a few minutes in the microwave before tossing them in with the yams and apples.

November 26, 2014 02:07 AM

November 25, 2014

Eric Hammond

AWS Lambda Walkthrough Command Line Companion

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.
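
If you haven’t set one up yet, a minimal sketch might look something like the following (the profile name admin here is just an example; AWS_DEFAULT_PROFILE is the aws-cli environment variable that selects the profile for subsequent commands):

# Prompts for the access key id, secret access key, default region, and output format
aws configure --profile admin
export AWS_DEFAULT_PROFILE=admin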

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:

# Change to your own unique S3 bucket name:
source_bucket=alestic-lambda-example

# Do not change this. Walkthrough code assumes this name
target_bucket=${source_bucket}resized

function=CreateThumbnail
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
lambda_invocation_role_name=lambda-$function-invocation
lambda_invocation_access_policy_name=lambda-$function-invocation-access
log_group_name=/aws/lambda/$function

Install some required software:

sudo apt-get install nodejs nodejs-legacy npm

Step 1.1: Create Buckets and Upload a Sample Object (walkthrough)

Create the buckets:

aws s3 mb s3://$source_bucket
aws s3 mb s3://$target_bucket

Upload a sample photo:

# by Hatalmas: https://www.flickr.com/photos/hatalmas/6094281702
wget -q -OHappyFace.jpg \
  https://c3.staticflickr.com/7/6209/6094281702_d4ac7290d3_b.jpg

aws s3 cp HappyFace.jpg s3://$source_bucket/

Step 2.1: Create a Lambda Function Deployment Package (walkthrough)

Create the Lambda function nodejs code:

# JavaScript code as listed in walkthrough
wget -q -O $function.js \
  http://run.alestic.com/lambda/aws-examples/CreateThumbnail.js

Install packages needed by the Lambda function code. Note that this is done under the local directory:

npm install async gm # aws-sdk is not needed

Put all of the required code into a ZIP file, ready for uploading:

zip -r $function.zip $function.js node_modules

Step 2.2: Create an IAM Role for AWS Lambda (walkthrough)

Create the IAM role that will be used by the Lambda function when it runs:

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do and access. This is slightly tighter than the generic role policy created with the IAM console:

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "logs:*"
        ],
        "Resource": "arn:aws:logs:*:*:*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::'$source_bucket'/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:PutObject"
        ],
        "Resource": "arn:aws:s3:::'$target_bucket'/*"
      }
    ]
  }'

Step 2.3: Upload the Deployment Package and Invoke it Manually (walkthrough)

Upload the Lambda function, specifying the IAM role it should use and other attributes:

# Timeout increased from walkthrough based on experience
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --role "$lambda_execution_role_arn" \
  --mode event \
  --handler "$function.handler" \
  --timeout 30 \
  --runtime nodejs

Create fake S3 event data to pass to the Lambda function. The key here is the source S3 bucket and key:

cat > $function-data.json <<EOM
{  
   "Records":[  
      {  
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":"1970-01-01T00:00:00.000Z",
         "eventName":"ObjectCreated:Put",
         "userIdentity":{  
            "principalId":"AIDAJDPLRKLG7UEXAMPLE"
         },
         "requestParameters":{  
            "sourceIPAddress":"127.0.0.1"
         },
         "responseElements":{  
            "x-amz-request-id":"C3D13FE58DE4C810",
            "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
         },
         "s3":{  
            "s3SchemaVersion":"1.0",
            "configurationId":"testConfigRule",
            "bucket":{  
               "name":"$source_bucket",
               "ownerIdentity":{  
                  "principalId":"A3NL1KOZZKExample"
               },
               "arn":"arn:aws:s3:::$source_bucket"
            },
            "object":{  
               "key":"HappyFace.jpg",
               "size":1024,
               "eTag":"d41d8cd98f00b204e9800998ecf8427e",
               "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko"
            }
         }
      }
   ]
}
EOM

Invoke the Lambda function, passing in the fake S3 event data:

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-data.json"

Look in the target bucket for the converted image. It could take a while to show up since the Lambda function is running asynchronously:

aws s3 ls s3://$target_bucket
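
If you would rather poll than re-run that command by hand, a small loop like this is one option (just a sketch; the resized-HappyFace.jpg key matches the object removed in the clean up section below):

# Wait for the thumbnail object to appear in the target bucket
while [ -z "$(aws s3 ls s3://$target_bucket/resized-HappyFace.jpg)" ]; do
  echo "Waiting for resized-HappyFace.jpg..."
  sleep 5
done
aws s3 ls s3://$target_bucket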

Look at the Lambda function log output in CloudWatch:

aws logs describe-log-groups \
  --output text \
  --query 'logGroups[*].[logGroupName]'

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName')
echo log_stream_names="'$log_stream_names'"
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Step 3.1: Create an IAM Role for Amazon S3 (walkthrough)

Create the IAM role that may be assumed by S3:

lambda_invocation_role_arn=$(aws iam create-role \
  --role-name "$lambda_invocation_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringLike": {
              "sts:ExternalId": "arn:aws:s3:::*"
            }
          }
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_invocation_role_arn=$lambda_invocation_role_arn

Grant the role permission so that S3 may invoke the Lambda function:

aws iam put-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "lambda:InvokeFunction"
         ],
         "Resource": [
           "*"
         ]
       }
     ]
   }'

Step 3.2: Configure a Notification on the Bucket (walkthrough)

Get the Lambda function ARN:

lambda_function_arn=$(aws lambda get-function-configuration \
  --function-name "$function" \
  --output text \
  --query 'FunctionARN'
)
echo lambda_function_arn=$lambda_function_arn

Tell the S3 bucket to invoke the Lambda function when new objects are created (or overwritten):

aws s3api put-bucket-notification \
  --bucket "$source_bucket" \
  --notification-configuration '{
    "CloudFunctionConfiguration": {
      "CloudFunction": "'$lambda_function_arn'",
      "InvocationRole": "'$lambda_invocation_role_arn'",
      "Event": "s3:ObjectCreated:*"
    }
  }'

Step 3.3: Test the Setup (walkthrough)

Copy your own jpg and png files into the source bucket:

myimages=...
aws s3 cp $myimages s3://$source_bucket/

Look for the resized images in the target bucket:

aws s3 ls s3://$target_bucket

Check out the environment

These handy commands let you review the related resources in your account:

aws lambda list-functions \
  --output text \
  --query 'Functions[*].[FunctionName]'

aws lambda get-function \
  --function-name "$function"

aws iam list-roles \
  --output text \
  --query 'Roles[*].[RoleName]'

aws iam get-role \
  --role-name "$lambda_execution_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies  \
  --role-name "$lambda_execution_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws iam get-role \
  --role-name "$lambda_invocation_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies  \
  --role-name "$lambda_invocation_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws s3api get-bucket-notification \
  --bucket "$source_bucket"

Clean up

If you are done with the walkthrough, you can delete the created resources:

aws s3 rm s3://$target_bucket/resized-HappyFace.jpg
aws s3 rm s3://$source_bucket/HappyFace.jpg
aws s3 rb s3://$target_bucket/
aws s3 rb s3://$source_bucket/

aws lambda delete-function \
  --function-name "$function"

aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_execution_role_name"

aws iam delete-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_invocation_role_name"

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  echo "deleting log-stream $log_stream_name"
  aws logs delete-log-stream \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name"
done

aws logs delete-log-group \
  --log-group-name "$log_group_name"

If you try these instructions, please let me know in the comments where you had trouble or experienced errors.

Original article: http://alestic.com/2014/11/aws-lambda-cli

by Eric Hammond at November 25, 2014 09:36 PM

November 24, 2014

Jono Bacon

Ubuntu Governance Reboot: Five Proposals

Sorry, this is long, but hang in there.

A little while back I wrote a blog post that seemed to inspire some people and ruffle the feathers of some others. It was designed as a conversation-starter for how we can re-energize leadership in Ubuntu.

When I kicked off the blog post, Elizabeth quite rightly gave me a bit of a kick in the spuds about not providing a place to have a discussion, so I amended the blog post with a link to this thread, where I encourage your feedback and participation.

Rather unsurprisingly, there was some good feedback, before much of it started wandering off the point a little bit.

I was delighted to see that Laura posted that a Community Council meeting on the 4th Dec at 5pm UTC has been set up to further discuss the topic. Thanks, CC, for taking the time to evaluate and discuss the topic in hand.

I plan on joining the meeting, but I wanted to post five proposed recommendations that we can think about. Again, please feel free to share feedback about these ideas on the mailing list.

1. Create our Governance Mission/Charter

I spent a bit of time trying to find the charter or mission statements for the Community Council and Technical Board and I couldn’t find anything. I suspect they are not formally documented as they were put together back in the early days, but other sub-councils have crisp charters (mostly based off the first sub-council, the Forum Council).

I think it could be interesting to define a crisp mission statement for Ubuntu governance. What is our governance here to do? What are the primary areas of opportunity? What are the priorities? What are the risks we want to avoid? Do we need both a CC and TB?

We already have the answers to some of these questions, but are the answers we have the right ones? Is there an opportunity to adjust our goals with our leadership and governance in the project?

Like many of the best mission statements, this should be a collaborative process. Not a mission defined by a single person or group, but an opportunity for multiple people to feed into so it feels like a shared mission. I would recommend that this be a process that all Ubuntu members can play a role in. Ubuntu members have earned their seat at the table via their contributions, and would be a wonderfully diverse group to pull ideas from.

This would give us a mission that feels shared, and feels representative of our community and culture. It would feel current and relevant, and help guide our governance and wider project forward.

2. Create an ‘Impact Constitution’

OK, I just made that term up, and yes, it sounds a bit buzzwordy, but let me explain.

The guiding principles in Ubuntu are the Ubuntu Promise. It puts in place a set of commitments that ensure Ubuntu always remains a collaborative Open Source project.

What we are missing though is a document that outlines the impact that Ubuntu gives you, others, and the wider world…the ways in which Ubuntu empowers us all to succeed, to create opportunity in our own lives and the lives of others.

As an example:

Ubuntu is a Free Software platform and community. Our project is designed to create open technology that empowers individuals, groups, businesses, charities, and others. Ubuntu breaks down the digital divide, and brings together our collective energy into a system that is useful, practical, simple, and accessible.

Ubuntu empowers you to:

  1. Deploy an entirely free Operating System and archive of software to one or multiple computers in homes, offices, classrooms, government institutions, charities, and elsewhere.
  2. Learn a variety of programming and development languages and have the tools to design, create, test, and deploy software across desktops, phones, tablets, the cloud, the web, embedded devices and more.
  3. Have the tools for artistic creativity and expression in music, video, graphics, writing, and more.
  4. . . .

Imagine if we had a document with 20 or so of these impact statements that crisply show the power of our collective work. I think this will regularly remind us of the value of Ubuntu and provide a set of benefits that we as a wider community will seek to protect and improve.

I would then suggest that part of the governance charter of Ubuntu is that our leadership are there to inspire, empower, and protect the ‘impact constitution’; this then directly connects our governance and leadership to what we consider to be the primary practical impact of Ubuntu in making the world a better place.

3. Cross-Governance Strategic Meetings

Today we have CC meetings, TB meetings, FC meetings etc. I think it would be useful to have a monthly, or even quarterly meeting that brings together key representatives from each of the governance boards with a single specific goal – how do the different boards help further each other’s mission. As an example, how does the CC empower the TB for success? How does the TB empower the FC for success?

We don’t want governance that is either independent or dependent at the individual board level. We want governance that is inter-dependent with each other. This then creates a more connected network of leadership.

4. Annual In-Person Governance Summit

We have a community donations fund. I believe we should utilize it to get together key representatives across Ubuntu governance into the same room for two or three days to discuss (a) how to refine and optimize process, but also (b) how to further the impact of our ‘impact constitution’ and inspire wider opportunity in Ubuntu.

If Canonical could chip in and there were a few sponsors, we could potentially bring all governance representatives together.

Now, it could be tempting to suggest we do this online. I think this would be a mistake. We want to get our leaders together to work together, socialize together, and bond together. The benefits of doing this in person significantly outweigh doing it online.

5. Optimize our community brand around “innovation”

Ubuntu has a good reputation for innovation. Desktop, Mobile, Tablet, Cloud…it is all systems go. Much of this innovation though is seen in the community as something that Canonical fosters and drives. There was a sentiment in the discussion after my last blog post that some folks feel that Canonical is in the driving seat of Ubuntu these days and there isn’t much the community can do to inspire and innovate. There was at times a jaded feeling that Canonical is standing in the way of our community doing great things.

I think this is a bit of an excuse. Yes, Canonical are primarily driving some key pieces…Unity, Mir, Juju for example…but there is nothing stopping anyone innovating in Ubuntu. Our archives are open, we have a multitude of toolsets people can use, we have extensive collaborative infrastructure, and an awesome community. Our flavors are a wonderful example of much of this innovation that is going on. There is significantly more in Ubuntu that is open than restricted.

As such, I think it could be useful to focus on this in our outgoing Ubuntu messaging and advocacy. As our ‘impact constitution’ could show, Ubuntu is a hotbed of innovation, and we could create some materials, messaging, taglines, imagery, videos, and more that inspires people to join a community that is doing cool new stuff.

This could be a great opportunity for designers and artists to participate, and I am sure the Canonical design team would be happy to provide some input too.

Imagine a world in which we see a constant stream of social media, blog posts, videos and more all thematically orientated around how Ubuntu is where the innovators innovate.

Bonus: Network of Ubucons

OK, this is a small extra one I would like to throw in for good measure. :-)

The in-person Ubuntu Developer Summits were a phenomenal experience for so many people, myself included. While the Ubuntu Online Summit is an excellent, well-organized online event, there is something to be said about in-person events.

I think there is a great opportunity for us to define two UbuCons that become the primary in-person events where people meet other Ubuntu folks. One would be focused on the US, and one on Europe, and if we could get more (such as an Asian event), that would be awesome.

These would be driven by the community for the community. Again, I am sure the donations fund could help with the running costs.

In fact, before I left Canonical, this is something I started working on with the always-excellent Richard Gaskin who puts on the UbuCon before SCALE in LA each year.

This would be more than a LoCo Team meeting. It would be a formal Ubuntu event before another conference that brings together speakers, panel sessions, and more. It would be where Ubuntu people come to meet, share, learn, and socialize.

I think these events could be a tremendous boon for the community.


Well, that’s it. I hope this provided some food for thought for further discussion. I am keen to hear your thoughts on the mailing list!

by jono at November 24, 2014 10:35 PM

November 22, 2014

Elizabeth Krumbach

My Vivid Vervet has crazy hair

Keeping with my Ubuntu toy tradition, I placed an order for a vervet stuffed toy, available in the US via: Miguel the Vervet Monkey.

He arrived today!

He’ll be coming along to his first Ubuntu event on December 10th, a San Francisco Ubuntu Hour.

by pleia2 at November 22, 2014 02:57 AM

Vacation in Jamaica

This year I’ve traveled more than ever, but almost all of my trips have been for work. This past week, MJ and I finally snuck off for a romantic vacation together in Jamaica, where neither of us had been before.

Unfortunately we showed up a day late after I forgot my passport at home. I had removed it from my bag earlier in the day to get a copy of it for a visa application and left it on the scanner. I realized it an hour before our flight, and check-in closed 45 minutes prior to departure, which wasn’t enough time for me to get home and back to the airport before the cutoff (but I did try!). I felt horrible. Fortunately the extra day at home together before the trip did give us a little bit of breathing room after the mad dash from work to the airport.

Friday evening we got a flight! We sprung for First Class on our flights and thankfully all travel was uneventful. We got to Couples Negril around 3PM the following day after 2 flights, a 6 hour layover and a 90 minute van ride from Montego Bay to Negril.

It was beautiful. The rooms had recently been renovated and looked great. It was also nice that the room air conditioning was very good, so on those days when the humidity got to be a bit much I had a wonderful refuge. The resort was all-inclusive and we had confirmed ahead of time that the food was good, so there were no disappointments there. They had some low-key activities and little events and entertainment at lunch and later into the evening (including some ice carving and a great show by Dance Xpressionz). As a self-proclaimed not cool person I found it all to be the perfect atmosphere to relax and feel comfortable going to some of the events.

The view from our room (2nd floor Beachfront suite) was great too:

I had planned on going into deep Ian Fleming mode and getting a lot of writing done on my book, but I only ended up spending about 4 hours on it throughout the week. Upon arrival I realized how much I really needed the time off and took full advantage of it, which was totally the right decision. By Tuesday I was clear-headed and finally excited again about some of my work plans for the upcoming weeks, rather than feeling tired and overwhelmed by them.

Also, there were bottomless Strawberry Daiquiris.

Alas, it had to come to an end. We packed our things and were on our way on Thursday. Prior to the trip, MJ had looked into AirLink in order to take a 12 minute flight from Negril to Montego Bay rather than the 90 minute van ride. At $250 for the pair of us, I was happy to give it a go for the opportunity to ride in a Cessna and take some nice aerial shots. After getting our photo with the pilot, at 11AM the pair of us got into the Cessna with the pilot and co-pilot.

The views were everything I expected, and I was happy to get some nice pictures.

Jamaica is definitely now on my list for going back to. I really enjoyed our time there and it seemed to be a good season for it.

More photos from the week here (admittedly, mostly of the Cessna flight): https://www.flickr.com/photos/pleia2/sets/72157649408324165/

by pleia2 at November 22, 2014 02:32 AM

November 18, 2014

Akkana Peck

Unix "remind" file for US holidays

Am I the only one who's always confused about when holidays happen?

Partly it's software, I guess. In these days of everybody keeping their schedules on Google's or Apple's servers, maybe most people keep up on these things.

But being the dinosaur I am, I'm still resistant to keeping my schedule in the cloud on a public server. What if I need to check for upcoming events while I'm on a trip out in the remote desert somewhere? (Not to mention the obvious privacy considerations.) For years I used PalmOS PDAs, but when I switched to Android and discovered how poor the offline calendar options are, I decided that I should learn how to use the old Unix standby.

It's been pretty handy. I run remind ~/[remind-file-name] when I log in in the morning, and it gives me a nice summary of upcoming events:

DPU Solar surcharge meeting, 5:30-8:30 tomorrow
NMGLUG meeting in 2 days' time

Of course, I can also have it email me with reminders, or pop up a window, but so far I haven't felt the need.
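
If I ever did want the email, a crontab entry along these lines would probably do it (just a sketch, assuming a working local mail command and the same remind file used in the functions below):

# Mail myself the day's reminders at 7am
0 7 * * * remind ~/Docs/Lists/remind | mail -s "Today's reminders" $LOGNAME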

I can also display a nice calendar showing upcoming events for this month or the next several months. I made a couple of aliases:

mycal () {
        months=$1 
        if [[ x$months = x ]]
        then
                months=1 
        fi
        remind -c$months ~/Docs/Lists/remind
}

mycalp () {
        months=$1 
        if [[ x$months = x ]]
        then
                months=2 
        fi
        remind -p$months ~/Docs/Lists/remind | rem2ps -e -l > /tmp/mycal.ps
        gv /tmp/mycal.ps &
}

The first prints an ascii calendar; the second displays a nice postscript calendar complete with little icons for phases of the moon.
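
So typical usage is just:

mycal       # ascii calendar for the current month
mycal 3     # ... or the next three months
mycalp      # two-month postscript calendar, displayed in gv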

But what about those holidays?

Okay, that gives me a good way of storing reminders about appointments. But I still don't know when holidays are. (I had that problem with the PalmOS scheduling program, too -- it never knew about holidays either.)

Web searching didn't help much. Unfortunately, "remind" is a terrible name in this age of search engines. If someone has already solved this problem, I sure wasn't able to find any evidence of it. So instead, I went to Wikipedia's list of US holidays, with the remind man page in another tab, and wrote remind stanzas for each one -- except Easter, which is much more complicated.

But wait -- it turns out that remind already has code to calculate Easter! It just needs a slightly more complicated stanza: instead of the standard form of

REM  1 Apr +1 MSG April Fool's Day %b

I need to use this form:

REM  [trigger(easterdate(today()))] +1 MSG Easter %b

The %b in each case is what gives you the notice of when the event is in your reminders, e.g. "Easter tomorrow" or "Easter in two days' time". The +1 is how far beforehand you want to be reminded of each event.

So here's my remind file for US holidays. I make no guarantees that every one is right, though I did check them for the next 12 months and they all seem to be working.

#
# US Holidays
#
REM      1 Jan    +3 MSG New Year's Day %b
REM Mon 15 Jan    +2 MSG MLK Day %b
REM      2 Feb       MSG Groundhog Day %b
REM     14 Feb    +2 MSG Valentine's Day %b
REM Mon 15 Feb    +2 MSG President's Day %b
REM     17 Mar    +2 MSG St Patrick's Day %b
REM      1 Apr    +9 MSG April Fool's Day %b
REM  [trigger(easterdate(today()))] +1 MSG Easter %b
REM     22 Apr    +2 MSG Earth Day %b
REM Fri  1 May -7 +2 MSG Arbor Day %b
REM Sun  8 May    +2 MSG Mother's Day %b
REM Mon  1 Jun -7 +2 MSG Memorial Day %b
REM Sun 15 Jun       MSG Father's Day
REM      4 Jul    +2 MSG 4th of July %b
REM Mon  1 Sep    +2 MSG Labor Day %b
REM Mon  8 Oct    +2 MSG Columbus Day %b
REM     31 Oct    +2 MSG Halloween %b
REM Tue  2 Nov    +4 MSG Election Day %b
REM     11 Nov    +2 MSG Veteran's Day %b
REM Thu 22 Nov    +3 MSG Thanksgiving %b
REM     25 Dec    +3 MSG Christmas %b

November 18, 2014 09:07 PM

November 14, 2014

Jono Bacon

Ubuntu Governance: Reboot?

For many years Ubuntu has had a comprehensive governance structure. At the top of the tree are the Community Council (community policy) and the Technical Board (technical policy).

Below those boards are sub-councils such as the IRC, Forum, and LoCo councils, and developer assessment boards.

The vast majority of these boards are populated by predominantly non-Canonical folks. I think this is a true testament to the openness and accessibility of governance in Ubuntu. There is no “Canonical needs to have people on half the board” shenanigans…if you are a good leader in the Ubuntu community, you could be on these boards if you work hard.

So, no-one is denying the openness of these boards, and I don’t question the intentions or focus of the people who join and operate them. They are good people who act in the best interests of Ubuntu.

What I do question is the purpose and effectiveness of these boards.

Let me explain.

From my experience, the charter and role of these boards has remained largely unchanged. The Community Council, for example, is largely doing much of the same work it did back in 2006, albeit with some responsibility delegated elsewhere.

Over the years though Ubuntu has changed, not just in terms of the product, but also the community. Ubuntu is no longer just platform contributors, but there are app and charm developers, a delicate balance between Canonical and community strategic direction, and a different market and world in which we operate.

Ubuntu governance has, as a general rule, been fairly reactive. In other words, items are added to a governance meeting by members of the community and the boards sit, review the topic, discuss it, and in some cases vote. In this regard I consider this method of governance not really leadership, but instead idea, policy, and conflict arbitration.

What saddens me is that when I see some of these meetings, much of the discussion seems to focus on paperwork and administrivia, and many of the same topics pop up over and over again. With no offense meant to the members of these boards, these meetings are rarely inspirational and rarely challenge the status quo of the community. In fact, from my experience, challenging the status quo with some of these boards has invariably been met with reluctance to explore, experiment, and try new ideas, and to instead continue to enforce and protect existing procedures. Sadly, the result of this is more bureaucracy than I feel comfortable with.

Ubuntu is at a critical point in its history. Just look at the opportunity: we have a convergent platform that will run across phones, tablets, desktops and elsewhere, with a powerful SDK, secure application isolation, and an incredible developer community forming. We have a stunning cloud orchestration platform that spans all the major clouds, making the ability to spin up large or small scale services a cinch. In every part of this the code is open and accessible, with a strong focus on quality.

This is fucking awesome.

The opportunity is stunning, not just for Ubuntu but also for technology freedom.

Just think of how many millions of people can be empowered with this work. Kids can educate themselves, businesses can prosper, communities can form, all on a strong, accessible base of open technology.

Ubuntu is innovating on multiple fronts, and we have one of the greatest communities in the world at the core. The passion and motivation in the community is there, but it is untapped.

Our inspirational leader has typically been Mark Shuttleworth, but he is busy flying around the world working hard to move the needle forward. He doesn’t always have the time to inspire our community on a regular basis, and it is sorely missing.

As such, we need to look to our leadership…the Community Council, the Technical Board, and the sub-councils for inspiration and leadership.

I believe we need to transform and empower these governance boards to be inspirational vessels that our wider community look to for guidance and leadership, not for paper-shuffling and administrivia.

We need these boards to not be reactive but to be proactive…to constantly observe the landscape of the Ubuntu community…the opportunities and the challenges, and to proactively capitalize on protecting the community from risk while opening up opportunity to everyone. This will make our community stronger, more empowered, and have that important dose of inspiration that is so critical to focus our family on the most important reasons why we do this: to build a world of technology freedom across the client and the cloud, underlined by a passionate community.

To achieve this will require awkward and uncomfortable change. It will require a discussion to happen to modify the charter and purpose of these boards. It will mean that some people on the current boards will not be the right people for the new charter.

I do though think this is important and responsible work for the Ubuntu community to be successful: if we don’t do this, I worry that the community will slowly degrade from lack of inspiration and empowerment, and our wider mission and opportunity will be harmed.

I am sure this post may offend some members of these boards, but it is not meant to. This is not a reflection of the current staffing; it is a reflection of the charter and purpose of these boards. Our current board members do excellent work with good and strong intentions, but within that current charter.

We need to change that charter though, staff appropriately, and build an inspirational network of leaders that sets everyone in this great community up for success.

This, I believe, will transform Ubuntu into a new world of potential, a level of potential I have always passionately believed in.

I have kicked off a discussion on ubuntu-community-team where we can discuss this. Please share your thoughts and solutions there!

by jono at November 14, 2014 06:16 PM

Elizabeth Krumbach

Holiday cards 2014!

Every year I send out a big batch of wintertime holiday cards to friends and acquaintances online.

Reading this? That means you! Even if you’re outside the United States!

Just drop me an email at lyz@princessleia.com with your postal address, please put “Holiday Card” in the subject so I can filter it appropriately. Please do this even if I’ve sent you a card in the past, I won’t be reusing the list from last year.

Typical disclaimer: My husband is Jewish and I’m not religious, so the cards will say “Happy Holidays”.

by pleia2 at November 14, 2014 04:38 PM

November 13, 2014

Elizabeth Krumbach

Wedding in Philadelphia

This past weekend MJ and I met in Philadelphia to attend his step-sister’s wedding on Sunday. My flight came in from Paris on Saturday, and unfortunately MJ was battling a cold so we had a pretty low key evening.

Sunday morning we were up ready to dress and pick up a truck to drive his sister to the church. The wedding itself didn’t begin until 2PM, but since we were coordinating transportation for the wedding party, we had to meet everyone pretty early to make sure everyone got into their respective bus/car to make it to St. Stephen’s Orthodox Cathedral on time.

I’d never been to an eastern Orthodox wedding, so it was an interesting ceremony to watch. It took about an hour, and we were all standing for the entire ceremony. There was a ring exchange in the back of the chapel, and then the bride and groom come up the center aisle together for the rest of their ceremony. I chose to keep my camera stashed away during the ceremony, but as soon as the priest had finished and was making some closing comments about the newlyweds I got one in real quick.

The weather in November can go either way in Philadelphia, but they got lucky with bright, clear skies and the quite comfortable temperature in the 60s.

The reception began at 4PM with a cocktail hour.

And we did manage to get a few minutes in with the beautiful bride, Irina :)

Big congratulations to Irina and Sam!

More photos here: https://www.flickr.com/photos/pleia2/sets/72157648832387979/

The trip was a short one, with us packing up on Monday to fly home that evening. I did manage to get in a quick lunch with my friend Crissi who made it down to the city for the occasion, so it was great to catch up with her. Our flights home were uneventful and I finally got to sleep in my own bed after 3 weeks on the road!

Tomorrow night we fly off to Jamaica for a proper vacation together, I’m very much looking forward to it.

by pleia2 at November 13, 2014 02:46 AM

Party in France

On Saturday November 1st I landed in Paris on a redeye flight from Miami. I didn’t manage to sleep much at all on the flight, but thankfully I was able to check into my hotel room around 8:30AM to drop off my bags and freshen up before going on a day of jetlag-battling tourism.

It was the right decision. Of all the days I spent in Paris, that Saturday was the most beautiful weather-wise. The sky was clear and blue, the temperature quite comfortable to be wandering around the city in a t-shirt. Since Saturday was one of my only 2 days to play the tourist in Paris, mixed in with some meetings with colleagues, I took the advice of my cousin Melissa and bought a ticket on one of the red hop-on, hop-off circuit buses that stopped at the various landmarks throughout the city.

The hotel I was staying at was not far from the Arc de Triomphe, so I was able to have a look at that and pick up a bus at that stop. I rode the bus until it reached the Eiffel Tower.

The line to take a lift up to the top of the tower was quite long and I wasn’t keen on waiting while battling jet lag, so I took a nice long walk around the tower and the grounds, snapping pictures along the way. I also found myself hungry so I picked up a surprisingly delicious chicken sandwich at a booth under the tower and enjoyed it there.

I hopped on the bus again and drove through the grounds of the Louvre museum, which was an astonishingly large complex. Due to the crowds and other things on my list for the day, I skipped actually going to the Louvre and contented myself with simply seeing the glass pyramid and making a mental note to return the next time I’m in Paris.

Soon after my phone lit up with a notification from my friend and OpenStack colleague Chris Hoge saying that he was at Notre Dame and folks were welcome to join him. It was the next stop I was planning on making, so I made plans to meet up.

I adore old cathedrals, and Notre Dame is a special one for me. As funny as it sounds, Disney's The Hunchback of Notre Dame is one of my favorite movies. It was released in 1996, so I must have just been finishing up my freshman year in high school, where one of my history classes had started diving into world religions. I was also growing my skeptic brain. I had also developed a habit at that time of seeing all Disney full-length animated features in theaters the day they were released because I was such a hopeless fan. The confluence of all these things made the movie hit me at the right time. It was a surprising tale of serious issues around compassion, religion and ethics for an animated film, and I was totally into it. Plus, they didn't disappoint with the venue for the film: I fell in love with Notre Dame that summer and started developing a passion for cathedrals and stained glass, particularly rose windows.

I met up with Chris and we took the bell tower tour, which all told took us up 387 steps to the roof of the 226 foot cathedral. We stopped halfway up to walk between the towers and hear the bells ring, which is where I took this video (YouTube). If you’re still with me with the Disney film, it’s where the final battle between Frollo and Quasimodo takes place ;)

387 steps is a lot, and I have to admit getting a bit winded as we climbed the narrow spiral staircases, but it was totally worth it. I really enjoyed being so close to all the gargoyles and the view from the top of the cathedral was beautiful, not to mention a fantastic way to see the architecture of the cathedral from above.

After the tour, I was able to go inside the cathedral to take a good look at all those stunning stained glass windows!

After Notre Dame, I did a little shopping and made my way back to the bus and eventually the hotel for a meeting and dinner with my colleagues.

Sunday morning I managed to sleep in a bit and made my way out of the hotel shortly before 10AM so I could make it over to the Catacombs of Paris. The line for the catacombs is very long, the website warning that you could wait 3-4 hours. I had hoped that getting there early would mitigate some of that wait, but it did end up taking 3 hours! I brought along my Nook so at least I got some reading done, but it probably was the longest I’ve ever waited in line.

I’d say that it was worth it though. I’d never been inside catacombs before, so it was a pretty exceptional experience. After walking through a fair number of tunnels going down and then you finally get to where they keep all the bones. So. Many. Bones. As you walk through the catacombs the walls are made of stacked bones, seeing skulls and leg bones piled up to make the walls, with all kinds of other bones stacked on the tops of the piles.

I also decided to bring along a bit of modernity into the catacombs with a selfie. I’ll leave it to the reader to judge whether or not I have respect for the dead.

By the time I left the catacombs it was after 2PM and I made my way over to the Avenue des Champs-Élysées to do some shopping. Most worthy of note was my stop at Louis Vuitton flagship store where I bought a lovely wallet.

And with that, my tourism wound down. Sunday night I began getting into the swing of things with the OpenStack Summit as we had a team dinner (for certain values of “team” – we’re so many now that any meal now is just a subset of us). I am looking forward to going again some day on a proper vacation with MJ, there are so many more things to see!

A couple hundred more photos from my travels around Paris here: https://www.flickr.com/photos/pleia2/sets/72157648830423229/

by pleia2 at November 13, 2014 02:31 AM

Akkana Peck

Crockpot Green Chile Posole Stew

Posole is a traditional New Mexican dish made with pork, hominy and chile. Most often it's made with red chile, but Dave and I are both green chile fans so that's how I make it. I make no claims as to the resemblance between my posole and anything traditional; but it sure is good after a cold, windy day like we had today.

Dave is leery of anything called "posole" -- I think the hominy reminds him visually of garbanzo beans, which he dislikes -- but he admits that they taste fine in this stew. I call it "green chile stew" rather than "posole" when talking to him, and then he gets enthusiastic.

Ingredients (all quantities very approximate):

  • pork, about a pound; tenderloin works well but cheaper cuts are okay too
  • about 10 medium-sized roasted green chiles, whatever heat you prefer (or 1 large or 2 medium cans diced green chile)
  • 1 can hominy
  • 1 large or two medium russet potatoes (or equivalent amount of other type)
  • 1 can chicken broth
  • 1 tsp salt
  • 1 tsp red chile powder
  • 1/2 tsp cumin
  • fresh garlic to taste
  • black pepper and hot sauce (I use Tapatio) to taste

Start the crockpot heating: I start it on high then turn it down later. Add broth.

Dice potato. At least half the potato should be in small pieces, say 1/4" cubes, or even shredded; the other half can be larger chunks. I leave the skin on.

Pre-cook diced potato in the microwave for 7 minutes or until nearly soft enough to eat, in a loosely covered bowl with maybe 1" of water in the bottom. (This will get messy and the water gets all over and you have to clean the microwave afterward. I haven't found a solution to that yet.) Dump cooked potato into crockpot.

Dice pork into stew-sized pieces, trimming fat as desired. Add to crockpot.

De-skin and de-seed the green chiles and cut into short strips. (Or use canned or frozen.) Add to crockpot.

Add spices: salt, chile powder, cumin, and hot sauce (if your chiles aren't hot enough -- we have a bulk order of mild chiles this year so I sprinkled liberally with Tapatio).

Cover, reduce heat to low.

Cook 6-7 hours, occasionally stirring, tasting and correcting the seasoning. (I always add more of everything after I taste it, but that's me.)

Serve with bread, tortillas, sopaipillas or similar. French bread baked from the refrigerated dough in the supermarket works well if you aren't brave enough to make sopaipillas (I'm not, yet).

November 13, 2014 12:49 AM

November 07, 2014

Elizabeth Krumbach

Final day of the OpenStack Kilo Summit

Today was the last day of the OpenStack Design Summit. It wrapped up with a change of pace this time around, each project had their own contributor meetup which was used to continue hashing out ideas and getting some work done. I think this was a really brilliant move. I was pretty tired by the time Friday rolled around (one of the reasons the later Ubuntu Developer Summits were shrunk to 4 days), so I’m not sure how useful I would have been in more discussion-driven sessions. The contributor meetup allowed us to chat about things we didn’t have time to run sessions on, or do in-person follow-ups to sessions we did have. We also had nice in-person time to collaborate on some things so that some of our projects got to a semi-working state before we all go home and take a vacation (my vacation starts next Thursday).

I spent my day meeting up with people to talk about our new translations tools and did the first couple drafts of the infrastructure specification to get that project started. Given the timeline, I anticipate that my real work on that won’t really begin until after I return from Jamaica on November 21st, but that seemed to sync up with the timeline of others on the team who are either taking some time off post-summit or have some dependencies blocking their action items.

There was also time spent on talking about the Infrastructure User Manual as a follow up to the session earlier in the week. We decided to host a 48 hour virtual sprint on the first couple days of December in order to collaborate on fleshing out the rest of the document (announcement here). As we all know, I love documentation, so I’m glad to see this coming together. I was also able to have a chat with a contributor later in the day who is also looking forward to seeing it finished so he can build upon it as the foundation for more project-specific developer documentation.

Also, the topic of third party testing came up during one of my chats and was overheard by someone nearby – which is how we learned there were at least three teams talking about creating a more automatic mechanism for determining the health of the third party testing systems. That’s approximately two teams too many. Kurt Taylor was able to get us all on an email thread together so I’m happy to say that a specification for that project should be coming together too.

Late in the afternoon James E. Blair did a demo for developers of gertty. I wrote about the tool back in September (here) and I’m a big fan of CLI-based code review, so it was fun to see others excited and asking questions about it.

As things wound down, I realized that this was probably the best OpenStack summit I’ve attended. The occasional snafu aside (like the over-crowded lunch on Thursday – I ate elsewhere), for a conference with over 4,600 attendees it felt well-managed. The Design Summit itself had a format I was really pleased with, as in addition to having the Friday work day, Tuesday was devoted to much-needed cross-project summit sessions. As OpenStack grows and matures, I’m really happy to see everyone working to fine tune the summits like this to keep pace.

Tonight I joined several of my OpenStack colleagues for an early dinner, retiring early to my room so I could re-pack my suitcase (and hope it’s not over 50lbs) and get some work done before my flight tomorrow morning. As exhausting as this trip was, it sure flew by fast and I am quite sad to be leaving Paris! Alas, my sister in law’s wedding in Philadelphia on Sunday awaits and I’m looking forward to it (and finally seeing my husband again after almost 2 weeks).

by pleia2 at November 07, 2014 09:36 PM

November 06, 2014

Elizabeth Krumbach

Kilo OpenStack Summit Days 3-4

As the OpenStack Summit continued for those of us on the development side, Wednesday and Thursday were full of design sessions.

First up for me on Wednesday was a great session about the Infrastructure User Manual led by Anita Kuno. A pile of work went into this while we were at our mid-cycle Infrastructure sprint in July, but many of the patches have since been sitting around. This session worked to make sure we had a shared vision for the manual and to get more core contributors both reviewing patches and submitting content for some of the more complicated, institutional knowledge type sections of the manual. The etherpad for the session is available here.

The session on AFS (Andrew File System) for the Infrastructure team was also on Wednesday. In spite of having a lot of storage space at our disposal and tools like Swift (which we’re slowly moving logs to), there are still some problems we’re seeking to solve that a distributed filesystem would be useful for; enter the AFS cell set up for the OpenStack project. The session went through some of the benefits of using AFS in our environment (such as read-only replicas of volumes, heavy client-side caching support and more comprehensive ACLs than standard Unix filesystem permissions). From there the discussion moved on to how it may be used; some of the popular proposals were our pypi mirror, the git repos and documentation. Detailed Etherpad here.

There were also a couple QA/Infra sessions, including one on Gating Relationships. At the QA/Infra mid-cycle meetup back in July we touched upon some of the possible “over-testing” that may be done when a change in one project really has no potential to impact another project, but we run the tests anyway, using up testing resources. However, there aren’t really any criteria to follow for determining which changes and project combinations should trigger tests, and it was noted that much of what seems like unnecessary testing was actually put in place at one point to address a particular pain point. The main result of this session was to try to develop some of these criteria, even if they’re manual and human-based for now. Detailed Etherpad here.

We also had a QA and CI After Merge session. Currently all of our tests are pre-merge, which makes sure all code that lands in the development repository has undergone all official tests that the OpenStack CI system has to offer. This session discussed whether heavier, less “central” tests for the projects should be run post-merge or as periodic tests, and I believe there was some consensus: we do want to split out some of the current gated jobs. Several todo items to move this forward were defined at the bottom of the etherpad.

I also attended the “Stable branches” session (lively etherpad here). Icehouse’s support is 15 months and the goal seems to be to support Juno for a similar time frame. Several representatives from distributions were attending and giving feedback about their own support needs, and there seems to be hope that folks from the distros will commit to doing some of the maintenance work.

There were also a couple sessions about Tempest, the integration test suite. First there was “Tempest scope in the brave new world”, which focused on the questions around what should remain in Tempest and what the project should consider removing as it moves forward. Etherpad for the session here. There was also a “Tempest-lib moving forward” session, which discussed this library that was created last cycle and various ways to improve it in the coming cycle; details in the Etherpad here.

Wednesday evening I made my way over to the Core Reviewer party put on by HP at the near rooftop event space of Cité de l’Architecture et du Patrimoine. We were driven there by what was described as “iconic, old French cars” which turned out to be the terrifying Citroën 2CV. And our drivers were all INSANE in Paris traffic. Fortunately no one died and it was actually pretty fun (though I was happy to see buses would be taking us back to the conference venue!).

The night itself kicked off with a lecture on the architecture of the Sagrada Família Basílica in Barcelona by one of the people currently working on it, which drew some loose parallels with our own development work (including the observation that Sagrada Família is not complete – a 140+ year release cycle!). They also brought in entertainment in the form of several opera singers who came in throughout the night. Some food was served, but I spent much of the night outside chatting with various of my OpenStack colleagues and drinking so much Champagne that the outdoor bartender learned to pull out the bottle as soon as he saw me coming. Hah!

My favorite part of the night was the stunning view of the Eiffel Tower. It’s a beautiful thing on its own at night, but at the top of the hour it also sparkles for 5 minutes in a pretty impressive show. I was so caught up in discussions that I didn’t manage to go on the museum tour that was offered, but I heard good things about it today.

Then it was on to today, Thursday! I had a great chat with Steve Weston about the third party dashboard we’re working on before Anita came to find me so I wouldn’t be late for my own session (oops).

My (along with Andreas Jaeger’s, who I saved a seat for up front) session was an infrastructure session on Translations Tools. We’re currently using Transifex but we need to move off of it now that they’ve transitioned to a closed source product. As I mentioned in my last post, we decided to go with Zanata so the session was primarily to firm up this decision with the rest of the infrastructure team and answer any questions from everyone involved. I have a lot of work to do during the Kilo cycle to finally get this going, but I’m really excited that all the work I did last cycle in getting demos set up and corralling the right talent for each component has finally culminated in a solid decision and action items for making the move. Next week I’ll start working on the spec for the transition. Etherpad here.

I attended a few other sessions, but the other big infrastructure one today was about Storyboard, the new task and bug tracker being written for the project to replace Launchpad. Michael Krotscheck has been doing an exceptional job on this project and the first decision of the session was whether it was ready for the OpenStack Infrastructure team to move to – yes! The rest of the session was spent outlining the key features that were needed to have really good support for infrastructure and to start supporting StackForge and OpenStack projects. The beautiful Etherpad that Michael created is here.

Tonight I went out with several of my OpenStack colleagues to dinner at La maison de Charly for delicious and stunningly arranged Moroccan food. I managed to get back to my room by 9PM so I could get an early night before the last day of the summit… but of course I got caught up in writing this, checking email and goofing off in IRC.

Tomorrow the summit wraps up with a working day with an open agenda for all the teams, so I’ll be spending my day in the Infra/QA/Release Management room.

by pleia2 at November 06, 2014 10:46 PM

Akkana Peck

New GIMP Save/Export plug-in: Saver

The split between Save and Export that GIMP introduced in version 2.8 has been a matter of much controversy. It's been over two years now, and people are still complaining on the gimp-users list.

Early on, I wrote a simple Python plug-in called Save-Export Clean, which saved over an image's current save or export filename regardless of whether the filename was XCF (save) or a different format (export). The idea was that you could bind Ctrl-S to the plug-in and not be pestered by needing to remember whether it was XCF, JPG or what.

Save-Export Clean has been widely cited, and I hope it's helped some people who were bothered by the Save/Export split. But personally I didn't like it very much. It wasn't very flexible -- there was no way to change the filename, for one thing, and it was awfully easy to overwrite an original image without knowing that you'd done it. I went back to using GIMP's separate Save and Export, but in the back of my mind I was turning over ideas, trying to understand my workflow and what I really wanted out of a GIMP Save plug-in.

[Screenshot: GIMP Saver-as... plug-in] The result of that was a new Python plug-in called Saver. I first wrote it a year ago, but I've been tweaking it and using it since then, with Ctrl-S bound to Saver and Ctrl-Shift-S bound to Saver as.... I wanted to make sure that it was useful and working reliably ... and somehow I never got around to writing it up and announcing it formally ... until now.

Saver, like Save/Export Clean, will overwrite your chosen filename, whether XCF or another format, and will mark the image as saved so GIMP won't pester you when you exit.

What's different? Mainly, three things:

  1. A Saver as... option so you can change the filename or file type.
  2. Automatic merging of multiple layers so they'll show up properly in your JPG or PNG image.
  3. An option to save as .xcf or .xcf.gz and, at the same time, export a copy in another format, possibly scaled down. So you can maintain your multi-layer XCF image but also update the JPG copy that you're going to put on the web.

I've been using Saver for nearly all my saving for the past year. If I'm just making a quick edit of a JPEG camera image, Ctrl-S overwrites it without questioning me. If I'm editing an elaborate multi-layer GIMP project, Ctrl-S overwrites the .xcf.gz. If I'm planning to export that image for the web, I Ctrl-Shift-S to bring up the Saver As... dialog, make sure the main filename is .xcf.gz, set a name (ending in .jpg) for the exported copy; and from then on, Ctrl-S will save both the XCF and the JPG copy.
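
For anyone curious what that flatten-and-export workflow looks like in GIMP 2.8's Python-Fu, here's a bare-bones sketch of the idea. This is not the actual Saver code: the procedure name, menu entry and default extension are invented for the example, and it assumes the image already has an XCF filename set.

#!/usr/bin/env python
# Illustration only -- not the real Saver plug-in.
from gimpfu import *

def save_and_export_sketch(image, drawable, export_ext):
    # Save the master copy under its existing filename (e.g. foo.xcf.gz).
    xcf_name = image.filename
    pdb.gimp_file_save(image, drawable, xcf_name, xcf_name)

    # Flatten a duplicate so every layer shows up in the exported copy,
    # leaving the original layers untouched.
    copy = pdb.gimp_image_duplicate(image)
    flat = pdb.gimp_image_flatten(copy)

    # Export alongside the XCF, swapping the extension.
    base = xcf_name
    for ext in (".xcf.gz", ".xcf.bz2", ".xcf"):
        if base.endswith(ext):
            base = base[:-len(ext)]
            break
    pdb.gimp_file_save(copy, flat, base + export_ext, base + export_ext)
    pdb.gimp_image_delete(copy)

    # Mark the original image clean so GIMP won't nag on exit.
    pdb.gimp_image_clean_all(image)

register("python_fu_save_and_export_sketch",
         "Save the XCF and export a flattened copy (sketch)",
         "Illustration of the Saver idea, not the real plug-in",
         "example", "example", "2014",
         "<Image>/File/Save and Export (sketch)", "*",
         [(PF_STRING, "export_ext", "Export extension", ".jpg")],
         [], save_and_export_sketch)

main()

The real plug-in adds the Saver as... dialog, optional scaling of the exported copy, and the bookkeeping that lets Ctrl-S keep saving both files afterward.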

Saver is available on my github page, with installation instructions here: GIMP Saver and Save/Export Clean Plug-ins. I hope you find it useful.

November 06, 2014 07:57 PM

Eric Hammond

When Are Your SSL Certificates Expiring on AWS?

If you uploaded SSL certificates to Amazon Web Services for ELB (Elastic Load Balancing) or CloudFront (CDN), then you will want to keep an eye on the expiration dates and renew the certificates well before they expire to ensure uninterrupted service.

If you uploaded the SSL certificates yourself, then of course at that time you set an official reminder to make sure that you remembered to renew the certificate. Right?

However, if you inherited an AWS account and want to review your company or client’s configuration, then here’s an easy command to get a list of all SSL certificates in IAM, sorted by expiration date.

aws iam list-server-certificates \
  --output text \
  --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]' \
  | sort

To get more information on an individual certificate, you might use something like:

certificate_name=...
aws iam get-server-certificate \
  --server-certificate-name $certificate_name \
  --output text \
  --query 'ServerCertificate.CertificateBody' \
| openssl x509 -text \
| less

That can let you review information like the DNS name(s) the SSL certificate is good for.

Exercise for the reader: Schedule an automated job that reviews SSL certificate expiration and generates messages to an SNS topic when certificates are near expiration. Subscribe email addresses and other alerting services to the SNS topic.
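
Here’s one possible starting point for that exercise, sketched in Python with boto3 (a cron-driven shell script around the aws-cli command above would work just as well). The SNS topic ARN and the 30-day threshold are placeholders, and pagination of the certificate list is ignored for brevity:

#!/usr/bin/env python
# Sketch: warn via SNS about IAM server certificates that expire soon.
import datetime
import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:cert-expiration"  # placeholder
WARN_DAYS = 30  # placeholder threshold

iam = boto3.client("iam")
sns = boto3.client("sns")

for cert in iam.list_server_certificates()["ServerCertificateMetadataList"]:
    expiration = cert["Expiration"]  # timezone-aware datetime
    days_left = (expiration - datetime.datetime.now(expiration.tzinfo)).days
    if days_left <= WARN_DAYS:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="SSL certificate expiring soon",
            Message="%s expires in %d days (%s)"
                    % (cert["ServerCertificateName"], days_left, expiration))

Schedule it to run daily and subscribe your email addresses or alerting services to the topic.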

Read more from Amazon on Managing Server Certificates.

Note: SSL certificates embedded in web server applications running on EC2 instances would have to be checked and updated separately from those stored in AWS.

Original article: http://alestic.com/2014/11/aws-iam-ssl-certificate-expiration

by Eric Hammond at November 06, 2014 12:35 AM

November 04, 2014

Elizabeth Krumbach

Kilo OpenStack Summit Days 1-2

Saturday morning I arrived in Paris. The weather was gorgeous and I had a wonderful tourist day visiting some of the key sights of the city. I will write about that once I’m home and can upload all my photos; for now I am going to talk about the first couple of days of the OpenStack Summit, which began on Monday.

Both days kicked off with keynotes. While my work focuses on the infrastructure for the OpenStack project itself and I’m not strictly building components of OpenStack that people are deploying, the keynotes are still an inspiration. Companies from around the world get up on the stage and share how they’re using OpenStack to enable their developers to be more innovative by getting them development environments more quickly or how they’re putting serious production load on them in the processing of big data. This year they had BBVA, BMW (along with a stunning i8 driven onto the stage), Time Warner Cable, CERN, Expedia and Tapjoy get up on stage to share their stories.

CERN’s story was probably my favorite (even if the BMW on stage was shiny and I want one). Like many in my field, I hold a hobbyist level interest in science and could geek out about the work being done at CERN for days. Plus, they’re solving some really exceptional problems around massive amounts of big data produced by the LHC using OpenStack and a pile of other open source software.


Tim Bell of CERN

It was exciting to learn that they’re currently running 4 clusters using the latest release of OpenStack, the largest of which has over 70,000 cores across over 3,000 servers. Pretty serious stuff! He also shared a number of great links during his talk.

I was also delighted to see Jim Zemlin, Executive Director of the Linux Foundation, get on stage on the first day to share his excitement about the success of OpenStack and to tell us all what we wanted to hear: we’re doing great work for open source and are on the right side of history.

In short, the keynotes spoke to both my professional pride in what we’re all working on and the humanitarian and democratization side of technology that so seriously drew me into the possibilities of open source in the first place.

All the keynotes for both days are already online; you can check them out in this YouTube playlist: OpenStack Summit Paris 2014 Keynote Presentations

Back to Monday, I headed over to the other venue to attend a session in the Ops Summit, “Top 10 Pain points from the user survey – how to fix them?” The session began by looking at results from the survey released that day: OpenStack User Survey Insights: November 2014. From that survey, they picked the top-cited issues that operators are having with OpenStack and worked to come up with some concrete items that the operators could pass along to developers. Much of the discussion ended up focusing on problems with Neutron (including problems with the default configuration) and gaps in documentation that made it difficult for operators to know that features existed or how to use them. The etherpad for the session goes further into depth about these and other issues raised during the session; see it here.

Monday afternoon I met up with Carlos Munoz of Red Hat and Andreas Jaeger of SUSE, who I’ve been working with over these past couple of months to do an in-depth exploration of our options for a new translations system. We have been evaluating both Pootle and Zanata, and though my preference had been Pootle because it’s written in Python and apparently popular with other open source projects, the Translations team overwhelmingly preferred Zanata. As Andreas and I went through the Translations Infrastructure we currently have, it was also clear that Zanata was our best option. It was a great meeting, and I’m looking forward to the Translations Tools session on Thursday at 11AM where we discuss these results with the rest of the Infrastructure team and work out some next steps.


Me, Carlos and Andreas!

From there I went down to the HP-sponsored track where lightning talks were being run during the last two sessions of the day. The room was packed! There were a lot of great presentations which I hope were recorded, since I missed the first few. My talk was one of the last, and with a glowing introduction from my boss I gave a 5-minute whirlwind description of elastic-recheck. I fear the jetlag made my talk a bit weaker than I intended, but I was delighted to have 3 separate conversations about elastic-recheck and general failure tracking on CI systems that evening with people from different companies trying to do something similar. My slides are available here: Automated failure aggregation & detection with elastic-recheck slides (pdf).

On Tuesday morning I was up bright and early for the Women of OpenStack breakfast. Waking up with a headache made me tempted to skip it, but I’m glad I didn’t. The event kicked off with some stats from a recent poll of members of the Women of OpenStack LinkedIn group. It was nice to see that 50% of those who responded were OpenStack ATCs (Active Technical Contributor) and many of those who weren’t identified themselves as having other technical roles (not that I don’t value non-technical women in our midst, but the technical ones are My Tribe!).

Following the results summaries, we split into 4 groups to talk about some of the challenges facing us as a minority in the OpenStack community and came up with 4 problems and solutions: Coaching for building confidence, increasing profile and communication for and around the Women of OpenStack group, working to get more women in our community doing public speaking and helping women rejoin the community after a gap in involvement (bonus: this can directly help men too, but more women go through it when taking time off for children). The group decided on focusing on getting the word out about the community for now, seeking to improve our communication mechanisms and see about profiling some women in our community, as well as creating some space where we can put our basic information about what we’re working on and how to contact us. I was really happy with how this session went, kudos to all the amazing women who I got to interact with there, and sorry for being so shy!

After keynotes, I headed back over to the Design Summit venue to attend a couple cross-project testing-focused sessions: “DefCore, RefStack, Interoperability, and Tempest” (etherpad here) and Moving Functional Tests to Projects (etherpad here). One of the most valuable things I got out of these sessions was that projects really need to do a better job of communicating directly with each other. Currently so much is funneled through the Quality Assurance team (and Infrastructure team) because they run the test harness where things fail. Instead, it would be great to see some more direct communication between these projects, and splitting out some of the functional tests may be one way to help socially engineer this.

Following lunch and a quick meeting, I was off to “Changes to our Requirements Management Policy” (etherpad here) and then “Log Rationalization” (etherpad here). More seemed to get accomplished in the latter, which was nice to see since there’s a stalled specification that it would be great to move along so the project can come up with some guidelines for log levels. Operators have been reporting both that they often run logging at DEBUG level all the time so they can see even some of the more basic problems that crop up, and that they are frustrated by some “non-issues” being promoted to WARNING and filling their logs with unnecessary stack traces.

Next up was the Gerrit third-party CI discussion session. I wasn’t sure what to expect from this session, but the self-selected group (many were more involved with OpenStack than was assumed, but they did come all the way to the summit…) was much more engaged than I had feared. Talk in the session centered around how to get more third party operators involved with the growing third party community, one suggestion being moving the meeting time to a more European friendly time every other week. There was also discussion around the need for improved documentation and I raised my hand about helping with a more dynamic dashboard for automatically determining the status of third party systems without manual notifications from operators. Etherpad here.

The last session of my very long day was “Translators / I18N team meetup” where the group sought to promote translations to grow the community and recognize translators, etherpad here. As I mentioned earlier, I’m working on some of the new tooling that the team will use, so in spite of only speaking English, I was able to chime in a bit on the technical side of making some of the recognitions and other statistics available once we switch back to an open source platform for translations.

Then it was off to the HP party at Musée des Arts Forains. Open for private events only, the venue hosts a collection of antique/vintage (dating from 1850-1950) games, rides and other fair-related objects. I played a couple of the games and enjoyed snacks and wine throughout the evening. It was certainly busy and some areas were quite loud and crowded, but it was easy to find large areas where the volume was quite conducive to conversations – of which I had many.

Social events and parties are not really my thing, but this one I really enjoyed. Transportation to the venue included an optional guided bus tour past many of the stunning sights of Paris at night. And they began running shuttles back to the conference center at 9PM – which I figured I’d catch then, but it was after 10PM before I made my way back to the bus. I think what I really don’t like are club-like parties with loud music and nothing interesting to occupy myself with when I find myself frequently wandering around solo (apparently I’m a lousy pack animal). The ability to stop and play games, explore the interesting food offerings and run into lots of people I know made the evening fly by.

Huge thanks to my friends and colleagues at HP for putting on such a comfortable and exciting event, this one will be hard to top in my awesome-events-at-conferences ledger.

Tomorrow we begin the hardcore part of the conference for me, kicking off with an Infrastructure session at 9AM and moving through various QA and Infrastructure sessions going on through the rest of the week. Since it’s nearing 1AM, I should get some sleep!

by pleia2 at November 04, 2014 11:47 PM

November 03, 2014

Jono Bacon

Dealing With Disrespect: The Video

A while back I wrote and released a free e-book called Dealing With Disrespect.

It is a book that provides a short, simple-to-read, free guide for handling personalized, mean-spirited, disrespectful, and in some cases, malicious feedback and criticism of your work. I wrote it because this kind of feedback is increasingly common in online communities and places such as YouTube, reddit, and elsewhere, and I am tired of seeing good people give up sharing their creative efforts because of this.

My goal with the book is that when someone reads mean-spirited feedback and criticism and feels demotivated, others can point them to the book as a means of helping to put things in perspective.

Well, to make this content even easier to consume, I recorded a presentation version of it and put it up on my YouTube channel:

Can’t see it? Watch it here!

by jono at November 03, 2014 10:13 PM

Elizabeth Krumbach

Wedding and week in Florida

All this travel is leaving me in the unfortunate position of having a growing pile of blog posts queuing up, which will only get worse as the OpenStack Summit continues this week, so I better get these out! I’m now in Paris for the summit, but last week I was in Florida for MJ’s cousin Stephanie’s wedding.

I arrived on Friday afternoon from Raleigh and MJ picked me up at the airport, getting us to the hotel just in time to get changed for a family and friends gathering the evening before the wedding.

Saturday we were able to enjoy the beach and pools at the hotel with some of MJ’s cousins. The weather was great, even the humidity was quite low, relative to what I tend to expect from Florida.

As the day wound down, we got ready for the wedding!

The ceremony and reception took place at a beautiful country club not far from the hotel. As an attendee, it seemed like everything went very well. The reception was fun, lots of great food, a fun, sparkly signature drink and some stunning centerpieces decorating the dinner tables. I even danced a little.

Unfortunately I picked up a cold somewhere along the way, and spent all of Sunday in bed while MJ spent more time with family and pools. By Monday I was feeling a bit better and was able to see MJ off and get moved over to the beach motel where I spent the rest of the week.

My beach motel wasn’t the greatest place, but it was inexpensive, clean and ultimately quite tolerable. The plan to stay in Florida, in spite of my general “I don’t like Florida” attitude, was to avoid going all the way back to California prior to my Paris trip. And I have to say, with nice October weather and the views at sunset, I think it was the right choice.

My days were spent catching up with work post-conference and preparing for the summit this week. Thankfully it wasn’t very hot out, so I was able to open the windows during the day and let fresh air into my rooms. I also made plans to visit with family in the area, managing to meet up with my cousin Shannon and her family, my Aunt Pam, and my Aunt Meg and cousin Melissa throughout the week.


At dinner with Shannon, Rich & Frankie

I also was able to take some long lunch breaks to enjoy a few quick dips in the ocean.

The San Francisco Giants won the World Series while I was in Florida too! I was able to watch the games in my room each night. I was disappointed not to be in town for the win, as the whole city explodes in celebration when there’s a win like this. My week wrapped up on Friday when I checked out of the motel and headed toward the airport for my redeye flight to Paris. And since I was also disappointed to be missing Halloween in San Francisco again, I dressed up for my flight, as Carmen Sandiego.

by pleia2 at November 03, 2014 09:57 PM

November 01, 2014

Elizabeth Krumbach

All Things Open 2014

From Oct 22-23rd I had the pleasure of speaking at and attending All Things Open in Raleigh, North Carolina. Of all the conferences I’ve attended this year, this one was among the most amazing when it came to how well they treated their speakers. When I submitted my talk I received an email from the conference organizer thanking me for the submission. Frequent emails were sent keeping us informed about all the speaker-focused conference details. Leading up to the event I woke up one morning to this flattering profile on their news feed. A series of interviews featuring speakers was also published by the OpenSource.com folks. Once there, I was thanked about 100 times throughout the 2-day event. In short, they really did a remarkable job making me feel valued.

Thankfulness aside, the conference was top notch. Several months back I read The foundation for an open source city by Jason Hibbets, so I was excited to go to Raleigh (where much of the work Hibbets wrote about is centered) and doubly amused when Jason said hello to me and I got to say “hey, I read your book!” During the conference introduction they said the attendance last year (their first year) was around 700 and that they were looking at 1,100 this year. The conference was opened by Raleigh Mayor Nancy McFarlane, which was pretty exciting; I’d seen cities send CTOs or supervisors, but having the mayor herself show up was quite the showing of support.

After her keynote came Jeffrey Hammond, VP & Principal Analyst at Forrester Research. I really enjoyed the statistics his company put together regarding the amount of open source software being used today. For instance, of the developers surveyed, 4 out of 5 are using open source software, and 73% are programming outside of their paid job, 27% of them on open source.

Right after the keynotes I headed downstairs to give my talk, Open Source Systems Administration. A blending of my passion for open source and love of systems administration, this is one of my favorite talks to give; I really enjoy being able to present on how the OpenStack infrastructure itself is an open source project. It was a lot of fun chatting with people throughout the rest of the conference who had attended (or missed) my talk. While there is less surprise these days that a project would open source an infrastructure, there’s a lot of interest in learning that there are projects which actually have and how we’ve done it. Slides from my talk here: ATO-opensource_sysadmin.pdf (2.3M).


Giving my talk! Thanks to Charlie Reisinger for this photo.

The schedule made it hard to select talks, but I next decided to head over to the Design track to learn from Garth Braithwaite why Open Source Needs Design. I’ll start off by saying that it’s wonderful that there are some designers participating in open source these days, but as Garth points out in his talk they are generally: paid by a company as a designer to focus on the product (the open sourceyness of it doesn’t matter, it’s a job), a designer friend of someone in the project who is helping out, or a developer on the project who happens to have some design expertise (or is willing to get some in order to help the project). He explored some of the history of how developers made their way to open source and the tools we used, and explained that the “story” doesn’t exist for designers, so why would they get involved? They’re not fixing a printer or solving some tricky problem. The tools for open collaboration for designers also don’t really exist; popular sites for design sharing like Dribbble don’t have source upload options, and portfolio sites like Behance lack any ability for collaboration. The new DesignOpen.org seeks to help change that, so it was interesting to learn about. From there he detailed different types of design work (UX, IxD and UI) and the tools and deliverables for each type of work. As someone who has never really worked with design, it was an interesting tour of that space. His slides from the talk are available here: speakerdeck.com/garthdb/open-source-needs-design (the first few slides are image-heavy, but stick with it, some great slides with bullet points come later!).

Then it was off to see Lessons Learned with Distributed Systems at bit.ly presented by Sean O’Connor (it was a pleasure to meet him and his colleague Peter Herndon during the keynote earlier in the day). The talk centered around some of the concerns when architecting systems at scale, from time synchronization to having codebases that are debuggable. At bit.ly they adopted a codebase that is broken out into many small pieces, allowing ops to dig into and learn about specific components when something goes wrong, not necessarily having to learn everything all at once in order to do their job effectively. He also went into how they’ve broken their workload up into what has to be done synchronously and what can be shifted into an asynchronous job, which is preferred because it’s easier to do well. Finally, he talked some about how they deal with failure, starting off with actually having a plan for failure, and doing things like backoffs, where the retries end up spaced out over time rather than hammering the service constantly until it has returned.
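
The backoff idea is simple enough to sketch in a few lines of Python (a rough illustration, not bit.ly’s actual code): each retry waits roughly twice as long as the previous one, with a little jitter so clients don’t all retry in lockstep.

import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.5):
    # Retry operation(), spacing retries out instead of hammering the service.
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # 0.5s, 1s, 2s, 4s ... plus jitter.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))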

After lunch I decided to check out the Messaging Standards and Systems – AMQP & RabbitMQ talk by Gavin M. Roy. I’ve used RabbitMQ a fair amount, but that doesn’t mean I’ve ever paid attention to AMQP (Advanced Message Queuing Protocol). I was pretty surprised to learn that releases 0-8 and 0-9-1 are very different from the 1.0 release and are effectively overseen by different people, with many users still intentionally on 0-9-1. Good to know; I imagine that causes a ridiculous amount of confusion. He went through some of the architecture of how RabbitMQ can be used and things it does to “fix” issues encountered with the default AMQP 0-9-1. Slides from his talk here: speakerdeck.com/gmr/messaging-standards-and-systems-amqp-and-rabbitmq (the exchange slides about halfway through are quite helpful).
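
For anyone who has used RabbitMQ without thinking much about the protocol underneath, here is about the smallest possible AMQP 0-9-1 example using the pika client (a minimal sketch, not from Gavin’s slides; the broker host and queue name are placeholders):

import pika

# Connect to a local RabbitMQ broker (assumes the default credentials).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Publish through the default (nameless) exchange, where the routing key
# is simply the queue name -- the most basic AMQP 0-9-1 pattern.
channel.queue_declare(queue="demo")
channel.basic_publish(exchange="", routing_key="demo", body="hello, AMQP 0-9-1")

connection.close()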

I was then off to Saving the World with Open Source and Science presented by Dr. Marcus Hanwell. Given my job working on OpenStack, I perhaps have the distinct benefit of being exposed to scientists who understand how to store, process and present big data, and who understand open source. I assumed this was ubiquitous, so this talk was quite the wake-up call. Not only are publicly-funded papers not available for free (perhaps a whole different rant), the papers often don’t have enough data for the results to be reproducible. Sources from which data was processed aren’t released (whether it be raw data, source code used to make computations or, seriously, an Excel spreadsheet with some data+formulas), and images are shrunk and stripped of all metadata, so it can be impossible to determine whether you’re actually seeing the same thing. Worse, most institutions have no way to index this source material at all, so something as simple as a hard drive failure on a laptop can mean loss of this precious data. Wow, how depressing. But the talk was actually a call for action in this space. As technologists there are things we can do to provide solutions to scientists, and scientists working in research can make social changes so that releasing full sources, code and more becomes something valued and validation of results is something that once again becomes central to all scientific research.

Day one completed with a keynote by Doug Cutting, titled “Pax Data”, which was a fascinating look into the world we’re building where the collection of data is What We Do. He began by talking about how in most science fiction the collectors of data end up being the Bad Guys in a future dystopia, but the fact is that sectors from education to healthcare to climate can benefit from the collection and analysis of big data. He posed the question to the audience: how do we do this without becoming those Bad Guys? He admitted not having a full answer, but provided some guidance on key things that would be required, including transparency, best practices around data handling, definition of data usage abuse so we can protect against it, and either government or industry oversight and/or regulation. Fascinating talk for me, particularly as I was in the middle of reading both a SciFi dystopia book where big data becomes really scary (The Circle by Dave Eggers) and a non-fiction book about our overuse of technology (Program or be Programmed).

Day 2! Keynotes began with a talk by James Pearce of Facebook. I know Facebook is pretty much built on open source (just like everyone else), but this talk was about the open source program he and his team have built within Facebook starting about a year ago. As is standard for many companies starting with open source, they’d just “throw things over the wall” and expect the code to be useful to the community. It wasn’t. So they then began seriously working to develop the code they were open sourcing, assigning people internally to be the caretakers of projects, and judging the health of projects based on metrics like forks and commits from community members outside of Facebook. They also run internally much of the same code they release to the community. The Github profile for Facebook is here: https://github.com/facebook. Very nice work!

The next keynote was by DeLisa Alexander of Red Hat on Women in Open Source. She started out with a history lesson about how the first real programmers were women and stressed why diversity is important in our industry. Stories about how the most successful women in open source have had encouragement of some form from their peers, and how important it is that everyone in the audience seek to do that with newcomers to their community, particularly women. It was also interesting to hear her talk about how children now often think of computers as opaque black boxes that they can’t influence, so it’s important to engage children (including girls) at a young age to teach them that they can make changes to the software and platforms they use.

Alexander also hosted a panel at lunch which I participated in on this topic. I was really honored to be a part of the panel, it was packed with very successful women in tech and open source. Jen Wike Huger wrote up some of her notes in a great article here: Keys to diversity in tech are more simple than you think. My own biggest takeaway from the panel was the realization that everyone on the panel has spent a significant amount of time being a mentor in some formal capacity. We’ve all supported students and other women in technology via organizations that we either work or volunteer for, or run ourselves.

Getting back to sessions, I went to Steven Vaughan-Nichols’ talk on Open Source, Marketing, and Using the Press. Now, technically I’m the Marketing Lead for Xubuntu, but I somewhat joke to people that it’s “only because I know how to use Twitter.” Amusingly, during his talk he covered people just like myself, project contributors who end up with the Marketing role. I gained a number of great insights from this talk, including defining your marketing audience properly – there’s your community and then there’s the rest of the world. Tips to knowing your customer, maybe we should do a more formal survey in Xubuntu about some of the decisions we make rather than relying upon sporadic social media feedback and expecting users to participate in development discussions? He also drove home the importance of branding, which thanks to our logo designer Pasi Lallinaho I believe we have done a good job of. There was also a crash course in communicating with the press: know who you’re contacting and what their focus is, be clear and concise in emails and explain the context in which your news is exciting. Oh, and be friendly and reply promptly when reporters contact you. I also realized I should add our press contact to our website, that’s a good idea! I have some updates to make to the Xubuntu Marketing blueprint this cycle.

Perhaps one of my favorite talks of the event was presented by Dr. Megan Squire: Case Study: We’re Watching You: How and Why Researchers Study Open Source And What We’ve Found So Far. I think what I found most interesting is that while I see polls from time to time put out by people claiming to do research on open source, I never see the results of that research. Using what I now know from Dr. Marcus Hanwell (many academic papers are locked behind journal pay walls), this suddenly makes sense. But Dr. Squire’s talk dove into the other side of research that doesn’t include polls: research done on data, or “artifacts”, that open source projects create. Artifacts are pretty much anything that is public as a result of a project existing, from obvious things like code to the public communication methods we use like IRC and mailing lists. This is what is at the heart of a duo of websites she runs, the first being FLOSSmole, which connects well-formatted data about projects with researchers interested in doing datamining against it, and FLOSShub, which is a collection of papers she’s collected about open source so it’s all in one place and we can see what kind of research is being done. Aside from her great presentation style, I think what made this one of my favorites was the fact that I didn’t know this was happening. I make FOSS artifacts all day long, both in my day job and with my open source hobbies, and sure, I know it’s out there for anyone to find, piles of IRC logs, code reviews, emails, but learning that academics are actively processing them is another thing entirely. For instance, to take an example from a project I work on, I had no idea this existed: Estimating Development Effort in Free/Open Source Software Projects by Mining Software Repositories: A Case Study of OpenStack. It made me a bit tin-foil-hat for about 5 minutes until I once again realized that I’m not just fine, but happy to be putting my work out there. Huge thanks to her for doing this presentation and maintaining these really valuable websites.

Slides from her presentation are up on Google docs here and are well worth the browse for examples she uses to illustrate how our artifacts are being used in research.

After lunch I attended my last three talks for the conference, the first one being Software Development as a Civic Service presented by Ben Balter. I’ve attended a number of civic hacking focused talks at events over the past couple years, but this one wasn’t strictly talking about a specific project or organization in this space. Instead he focused on the challenges that confront governments and us as technologists as we attempt to enter the government space, which led to one of my favorite (sad!) slides of the event, noting that doing anything remotely modern (use of public package repositories, configuration management or source control) doesn’t factor in.

He talked about how some government organizations are simply blinded by proprietary sales talk and FUD around open source, while others are bound by specific governmental requirements in their software that industry vendors have figured out but open source projects don’t think to include (i.e., an open source CMS may get you 99% of the way there, but this company is offering something that satisfies everything because it’s their job to do so). He also talked some about the “Command and Control” structure inside of government and how transparency can often be seen as a liability rather than the strength that we’ve come to trust in within the open source community. He wrapped up with some success stories from the government, like petitions.whitehouse.gov and GOV.UK, and shared some stats about the increase of known government employees collaborating on Github.

The next talk was by Phil Shapiro on Open Sourcing the Public Library. He began his talk by discussing how open source has a major opportunity as libraries move from the analog to the digital space. He then moved on to a fact he wanted to stress: libraries are owned by all of us. There is an effort to transform them from the community “reading room” into the community “living room”, where people share ideas and collaborate on projects, bringing in more educational resources in the form of classes and the building of maker spaces. I love this idea; I find Hackerspaces to be unintentionally hostile places for many young women, so providing a different option to accomplish similar goals is appealing to me. I think what struck me most about this was how “open sourcey” it felt, people coming together to build something new in the open in their community; it’s why I work on any of this at all. He shared a link to some collected writings about the future of libraries here: https://sites.google.com/site/librarywritings/

The final talk of the day I attended was Your Company Culture is Awesome (But is Company Culture a Lie?) by Pamela Vickers. In her talk she identified the trend in technology of offering “perks” in lieu of an actual healthy work environment for workers. These perks often end up masking real underlying unhappiness for employees, and ultimately lead to loss of talent. She suggested that companies take a step back from their pile of perks and look to make sure they’re actually meeting the core needs of their employees. Are your developers happy? How do you know? Are you asking them? You should, and your employees should be able to trust you enough to be honest with you, knowing you will at least professionally acknowledge their feedback. She also highlighted some of the key places where companies fall down on making their developers happy, including forcing them to use the wrong tools, upsetting a healthy work-life balance, giving them too much work or projects that don’t feel achievable, and giving them boring or unimportant projects.

To wrap this up, huge thanks to everyone who worked on and participated in this conference. As a conference sponsor, my employer (HP) had a booth, but unfortunately I was the only one who was able to attend. I spent breaks and lunches at the booth (leaving a friendly note when I was away) and had some great chats with folks looking for Python jobs and who were more generally interested in the work we’re doing in the open source space. It still can strike people as unusual that HP is so committed to open source, so it’s nice to be available to not only give numbers, but be a living, breathing example of someone HP pays to contribute to open source.

by pleia2 at November 01, 2014 10:31 PM

October 31, 2014

Akkana Peck

Simulating a web page timeout

Today dinner was a bit delayed because I got caught up dealing with an RSS feed that wasn't feeding. The website was down, and Python's urllib2, which I use in my "feedme" RSS fetcher, has an inordinately long timeout.

That certainly isn't the first time that's happened, but I'd like it to be the last. So I started to write code to set a shorter timeout, and realized: how does one test that? Of course, the offending site was working again by the time I finished eating dinner, went for a little walk then sat down to code.

I did a lot of web searching, hoping maybe someone had already set up a web service somewhere that times out for testing timeout code. No such luck. And discussions of how to set up such a site always seemed to center around installing elaborate heavyweight Java server-side packages. Surely there must be an easier way!

How about PHP? A web search for that wasn't helpful either. But I decided to try the simplest possible approach ... and it worked!

Just put something like this at the beginning of your HTML page (assuming, of course, your server has PHP enabled):

<?php sleep(500); ?>

Of course, you can adjust that 500 to be any delay you like.

Or you can even make the timeout adjustable, with a few more lines of code:

<?php
 if (isset($_GET['timeout']))
     sleep($_GET['timeout']);
 else
     sleep(500);
?>

Then surf to yourpage.php?timeout=6 and watch the page load after six seconds.

Simple once I thought of it, but it's still surprising no one had written it up as a cookbook formula. It certainly is handy. Now I just need to get some Python timeout-handling code working.
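
Something like this should do the trick -- just a sketch, with an arbitrary ten-second default:

import socket
import urllib2

def fetch_with_timeout(url, timeout=10):
    # Give up after `timeout` seconds instead of waiting out
    # urllib2's very long default.
    try:
        return urllib2.urlopen(url, timeout=timeout).read()
    except (urllib2.URLError, socket.timeout) as e:
        print("Gave up on %s: %s" % (url, e))
        return None

Pointed at the PHP page above with a sleep longer than the timeout, it should be easy to verify that the short timeout really fires.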

October 31, 2014 01:38 AM

October 24, 2014

Akkana Peck

Partial solar eclipse, with amazing sunspots

[Partial solar eclipse, with sunspots] We had perfect weather for the partial solar eclipse yesterday. I invited some friends over for an eclipse party -- we set up a couple of scopes with solar filters, put out food and drink and had an enjoyable afternoon.

And what views! The sunspot group right at the center of the sun's disk was the largest and most complex I'd ever seen, and there were some much smaller, more subtle spots in the path of the eclipse. Meanwhile, the moon's limb gave us a nice show of mountains and crater rims silhouetted against the sun.

I didn't do much photography, but I did hold the point-and-shoot up to the eyepiece for a few shots about twenty minutes before maximum eclipse, and was quite pleased with the result.

An excellent afternoon. And I made too much blueberry bread and far too many oatmeal cookies ... so I'll have sweet eclipse memories for quite some time.

October 24, 2014 03:15 PM

Jono Bacon

Bad Voltage Turns 1

Today Bad Voltage celebrates our first birthday. We plan on celebrating it by having someone else blow out our birthday candles while we smash a cake and quietly defecate on ourselves.

For those of you unaware of the show, Bad Voltage is an Open Source, technology, and “other things we find interesting” podcast featuring Stuart Langridge (LugRadio, Shot Of Jaq), Bryan Lunduke (Linux Action Show), Jeremy Garcia (Linux Questions), and myself (LugRadio, Shot Of Jaq). The show takes a fun but informed look at various topics, and includes interviews, reviews, competitions, and challenges.

Over the last year we have covered quite the plethora of topics. This has included VR, backups, atheism, ElementaryOS, guns, bitcoin, biohacking, PS4 vs. XBOX, kids and coding, crowdfunding, genetics, Open Source health, 3D printed weapons, the GPL, work/life balance, Open Source political parties, the right to be forgotten, smart-watches, equality, Mozilla, tech conferences, tech on TV, and more.

We have interviewed some awesome guests including Chris Anderson (Wired), Tim O’Reilly (O’Reilly Media), Greg Kroah-Hartman (Linux), Miguel de Icaza (Xamarin/GNOME), Stormy Peters (Mozilla), Simon Phipps (OSI), Jeff Atwood (Discourse), Emma Marshall (System76), Graham Morrison (Linux Voice), Matthew Miller (Fedora), Ilan Rabinovitch (Southern California Linux Expo), Daniel Foré (Elementary), Christian Schaller (Redhat), Matthew Garrett (Linux), Zohar Babin (Kaltura), Steven J. Vaughan-Nicols (ZDNet), and others.

…and then there are the competitions and challenges. We had a debate where we had to take the opposite viewpoints of what we think, we had a rocking poetry contest, challenged our listeners to mash up the shows to humiliate us, ran a selfie competition, and more. In many cases we punished each other when we lost and even tried to take on a sausage company.

It is all a lot of fun, and if you haven’t checked the show out, be sure to head over to www.badvoltage.org and load up on some shows.

One of the most awesome aspects of Bad Voltage is our community. Our pad is at community.badvoltage.org and we have a fantastically diverse community of different ideas, perspectives and viewpoints. In many cases we have discussed a topic on the show and there has been a long, interesting (and always respectful) debate on the forum. It is so much fun to be around.

I just want to say a huge thank-you to everyone who has supported the show and stuck with us through our first year. We have a lot of fun doing it, but the Bad Voltage community make every ounce of effort worthwhile. I also want to thank my fellow presenters, Bryan, Stuart, and Jeremy; it is a pleasure getting to shoot the proverbial with you guys every few weeks.

Live Voltage!

Before I wrap up, I need to share an important piece of information. The Bad Voltage team will be performing our very first live show at the Southern California Linux Expo on the evening of Friday 20th Feb 2015 in Los Angeles.

We can’t think of a better place to do our first live show than SCALE, and we hope to see you there!

by jono at October 24, 2014 04:39 AM

October 22, 2014

Elizabeth Krumbach

3 weeks at home

I am sitting in a hotel room in Raleigh where I’m staying for a conference, but prior to this I had a full 3 weeks at home! It was the longest stretch I’ve had in months; even my gallbladder removal surgery didn’t afford me a full 3 weeks. Unfortunately, during this blessed 3 weeks home MJ was out of town for a full 2 weeks of it. It also decided to be summer time in San Francisco (typical of early October) with temperatures rising to 90F for several days and our condo not cooling off. Some days it made work a challenge as I sometimes fled to coffee shops. The cats didn’t seem amused by this either.

The time at home alone did give me a chance to chill out at home and listen to the Giants playoff games on the little AM radio I had set up in our living room. As any good pseudo-fan does I only loosely keep up with the team during the actual season, going to actual games only here and there as I have the opportunity, which I didn’t this year (too much travel + gallbladder). It felt nice to sit and listen to the games as I got some work done in the evenings. I did learn how much modern technology gets in the way of AM reception though, as I listened to the quality tank when I turned on the track lighting in my living room or random times when my highrise neighbors must have been doing something.

Fleet week also came to San Francisco while I was home. I think I’ve only actually been in town for it twice, so it was a nice treat. To add to the fun I was meeting up with a friend to work on some OpenStack stuff on Sunday when they were doing their final show and her office offers amazing floor to ceiling windows with a stunning view of the bay. Perfect for watching the show!

I also did manage to get out for some non-work social time with a couple friends, and finally made it out to Off the Grid in the Marina for some street food adventuring. I hadn’t been before because I’m not the biggest fan of food trucks: the food is fine, but you end up standing while eating, making a mess, and not getting a meal that’s all that much cheaper than you would if you just went to a proper restaurant with tables. Maybe I’m just a giant snob, but it was an interesting experience, and I got to take the cable car home, so that’s always fun.

And now Raleigh. I’m here for All Things Open which I’ll be blogging about soon. This kicked off about 3 weeks away from home, so I had to pack accordingly:

After Raleigh I’ll be flying to Miami for a cousin’s wedding, then staying several extra days in a beach hotel where I’ll be working (and taking breaks to visit the ocean!). At the end of the week I’m flying to Paris for the OpenStack Summit for a week. I’ve never been to Paris before so I’m really looking forward to that. When the conference wraps up I’m flying back stateside for another wedding for a family member, this time in Philadelphia. So during this time I’ll get to see MJ twice, as we meet in cities for weddings. Thankfully I head home after that, but then we’re off for a proper vacation a few days later – to Jamaica! Then maybe I’ll spend all of December in a stay-at-home coma, but I’ll probably end up going somewhere because apparently I really like airplanes. Plus December would be the only month I didn’t fly, and I can’t have that.

by pleia2 at October 22, 2014 11:17 PM

Akkana Peck

A surprise in the mousetrap

I went out this morning to check the traps, and found the mousetrap full ... of something large and not at all mouse-like.

[young bullsnake] It was a young bullsnake. Now slender and maybe a bit over two feet long, it will eventually grow into a larger relative of the gopher snakes that I used to see back in California. (I had a gopher snake as a pet when I was in high school -- they're harmless, non-poisonous and quite docile.)

The snake watched me alertly as I peered in, but it didn't seem especially perturbed to be trapped. In fact, it was so non-perturbed that when I opened the trap, the snake stayed right where it was. It had found a nice comfortable resting place, and it wasn't very interested in moving on a cold morning.

I had to poke it gently through the bars, hold the trap vertically and shake for a while before the snake grudgingly let go and slithered out onto the ground.

I wondered if it had found its way into the trap by chasing a mouse, but I didn't see any swellings that looked like it had eaten recently. I'm fairly sure it wasn't interested in the peanut butter bait.

I released the snake in a spot near the shed where the mousetrap is set up. There are certainly plenty of mice there for it to eat, and gophers when it gets a little larger, and there are lots of nice black basalt boulders to use for warming up in the morning, and gopher holes to hide in. I hope it sticks around -- gopher/bullsnakes are good neighbors.

[young bullsnake caught in mousetrap]

October 22, 2014 01:37 AM

October 20, 2014

Jono Bacon

Happy Birthday Ubuntu!

Today is Ubuntu’s ten year anniversary. Scott did a wonderful job summarizing many of those early years and his own experience, and while I won’t be as articulate as him, I wanted to share a few thoughts on my experience too.

I heard of this super secret Debian startup from Scott James Remnant. When I worked at OpenAdvantage we would often grab lunch in Birmingham, and he filled me in on what he was working on, but leaving a bunch of the blanks out due to confidentiality.

I was excited about this new mystery distribution. For many years I had been advocating at conferences for a consumer-facing desktop, and felt that Debian and GNOME, complete with the exciting Project Utopia work from Robert Love and David Zeuthen, made sense. This was precisely what this new distro would be shipping.

When Warty was released I installed it and immediately became an Ubuntu user. Sure, it was simple, but the level of integration was a great step forward. More importantly though, what really struck me was how community-focused Ubuntu was. There was open governance, a Code Of Conduct, fully transparent mailing lists and IRC channels, and they had the Ocean’s Eleven of rock-star developers involved from Debian, GNOME, and elsewhere.

I knew I wanted to be part of this.

While at GUADEC in Stuttgart I met Mark Shuttleworth and had a short meeting with him. He seemed a pretty cool guy, and I invited him to speak at our very first LugRadio Live in Wolverhampton.

Mark at LugRadio Live.

I am not sure how many multi-millionaires would consider speaking to 250 sweaty geeks in a football stadium sports bar in Wolverhampton, but Mark did it, not once, but twice. In fact, one time he took a helicopter to Wolverhampton and landed at the dog racing stadium. We had to have a debate in the LugRadio team for who had the nicest car to pick him up in. It was not me.

This second LugRadio Live appearance was memorable because two weeks previous I had emailed Mark to see if he had a spot for me at Canonical. OpenAdvantage was a three-year funded project and was wrapping up, and I was looking at other options.

Mark’s response was:

“Well, we are opening up an Ubuntu Community Manager position, but I am not sure it is for you.”

I asked him if he could send over the job description. When I read it I knew I wanted to do it.

Fast forward four interviews, the last of which being in his kitchen (which didn’t feel awkward, at all), and I got the job.

The day I got that job was one of the greatest days of my life. I felt like I had won the lottery; working on a project with mission, meaning, and something that could grow my career and skill-set.

Canonical team in 2007

The day I got the job was not without worry though.

I was going to be working with people like Colin Watson, Scott James Remnant, Martin Pitt, Matt Zimmerman, Robert Collins, and Ben Collins. How on earth was I going to measure up?

A few months later I flew out to my first Ubuntu Developer Summit in Mountain View, California. Knowing little about California in November, I packed nothing but shorts and t-shirts. Idiot.

I will always remember the day I arrived, going to a bar with Scott and some others, meeting the team, and knowing absolutely nothing about what they were saying. It sounded like gibberish, and I felt like I was a fairly technical guy at this point. Obviously not.

What struck me though was how kind, patient, and friendly everyone was. The delta in technical knowledge was narrowed with kindness and mentoring. I met some of my heroes, and they were just normal people wanting to make an awesome Linux distro, and wanting to help others get in on the ride too.

What followed was an incredible seven and a half years. I travelled to Ubuntu Developer Summits, sprints, and conferences in more than 30 countries, helped create a global community enthused by a passion for openness and collaboration, experimented with different methods of getting people to work together, and met some of the smartest and kindest people walking on this planet.

The awesome Ubuntu community

Ubuntu helped to define my career, but more importantly, it helped to define my perspective and outlook on life. My experience in Ubuntu helped me learn how to think, to manage, and to process and execute ideas. It helped me to be a better version of me, and to fill my world with good people doing great things, all of which inspired my own efforts.

This is the reason why Ubuntu has always been much more than just software to me. It is a philosophy, an ethos, and most importantly, a family. While some of us have moved on from Canonical, and some others have moved on from Ubuntu, one thing we will always share is this remarkable experience and a special connection that makes us Ubuntu people.

by jono at October 20, 2014 05:52 PM

October 17, 2014

Eric Hammond

Installing aws-cli, the New AWS Command Line Tool

consistent control over more AWS services with aws-cli, a single, powerful command line tool from Amazon

Readers of this tech blog know that I am a fan of the power of the command line. I enjoy presenting functional command line examples that can be copied and pasted to experience services and features.

The Old World

Users of the various AWS legacy command line tools know that, though they get the job done, they are often inconsistent in where you get them, how you install them, how you pass options, how you provide credentials, and more. Plus, there are only tool sets for a limited number of AWS services.

I wrote an article that demonstrated the simplest approach I use to install and configure the legacy AWS command line tools, and it ended up being extraordinarily long.

I’ve been using the term “legacy” when referring to the various old AWS command line tools, which must mean that there is something to replace them, right?

The New World

The future of the AWS command line tools is aws-cli, a single, unified, consistent command line tool that works with almost all of the AWS services.

Here is a quick list of the services that aws-cli currently supports: Auto Scaling, CloudFormation, CloudSearch, CloudWatch, Data Pipeline, Direct Connect, DynamoDB, EC2, ElastiCache, Elastic Beanstalk, Elastic Transcoder, ELB, EMR, Identity and Access Management, Import/Export, OpsWorks, RDS, Redshift, Route 53, S3, SES, SNS, SQS, Storage Gateway, Security Token Service, Support API, SWF, VPC.

Support for the following appears to be planned: CloudFront, Glacier, SimpleDB.

The aws-cli software is being actively developed as an open source project on Github, with a lot of support from Amazon. You’ll note that the biggest contributors to aws-cli are Amazon employees with Mitch Garnaat leading. Mitch is also the author of boto, the amazing Python library for AWS.

Installing aws-cli

I recommend reading the aws-cli documentation as it has complete instructions for various ways to install and configure the tool, but for convenience, here are the steps I use on Ubuntu:

sudo apt-get install -y python-pip
sudo pip install awscli

Add your Access Key ID and Secret Access Key to $HOME/.aws/config using this format:

[default]
aws_access_key_id = <access key id>
aws_secret_access_key = <secret access key>
region = us-east-1

Protect the config file:

chmod 600 $HOME/.aws/config

Optionally set an environment variable pointing to the config file, especially if you put it in a non-standard location. For future convenience, also add this line to your $HOME/.bashrc:

export AWS_CONFIG_FILE=$HOME/.aws/config

Now, wasn’t that a lot easier than installing and configuring all of the old tools?

Testing

Test your installation and configuration:

aws ec2 describe-regions

The default output is in JSON. You can try out other output formats:

 aws ec2 describe-regions --output text
 aws ec2 describe-regions --output table
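
aws-cli also supports a --query option that takes a JMESPath expression to pull out specific fields. A quick sketch that goes beyond the original article, listing just the region names:

 aws ec2 describe-regions --query 'Regions[*].RegionName' --output text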

I posted this brief mention of aws-cli because I expect some of my future articles are going to make use of it instead of the legacy command line tools.

So go ahead and install aws-cli, read the docs, and start to get familiar with this valuable tool.

Notes

Some folks might already have a command line tool installed with the name “aws”. This is likely Tim Kay’s “aws” tool. I would recommend renaming that to another name so that you don’t run into conflicts and confusion with the “aws” command from the aws-cli software.

[Update 2013-10-09: Rename awscli to aws-cli as that seems to be the direction it’s heading.]

[Update 2014-10-16: Use new .aws/config filename standard.]

Original article: http://alestic.com/2013/08/awscli

by Eric Hammond at October 17, 2014 01:54 AM

October 16, 2014

Akkana Peck

Aspens are turning the mountains gold

Last week both of the local mountain ranges turned gold simultaneously as the aspens turned. Here are the Sangre de Cristos on a stormy day:

[Sangre de Cristos gold with aspens]

And then over the weekend, a windstorm blew a lot of those leaves away, and a lot of the gold is gone now. But the aspen groves are still beautiful up close ... here's one from Pajarito Mountain yesterday.

[Aspen grove on Pajarito Mountain]

October 16, 2014 07:37 PM

October 14, 2014

iheartubuntu

Tomboy The Original Note App


When I first started using Ubuntu back in early 2007 (Ubuntu 6.10) I fell in love with a pre-installed app called Tomboy. I used Tomboy for several years, until Ubuntu One notified users a couple of years ago that it would stop syncing Tomboy, and then Ubuntu One itself shut down for good earlier this year. I rushed to find alternatives like Evernote, Gnotes, etc., but none of them were as simple or as easily integrated.

The Tomboy description is as follows... "Tomboy is a simple & easy to use desktop note-taking application for Linux, Unix, Windows, and Mac OS X. Tomboy can help you organize the ideas and information you deal with every day."

Some of Tomboy's notable features are text highlighting, inline spell checking, auto-linking of web & email addresses, undo/redo, font styling & sizing, and bulleted lists.

I am creating new notes as well as manually importing a few of my old notes from a couple years ago. Tomboy used to sync easily with Ubuntu One. Since that is no longer an option, you can do it with your Dropbox folder or your Google Drive folder (I'm using Insync).

Tomboy hasn't been updated in a while, but it installs and works great on Ubuntu 14.04 using:

sudo apt-get install tomboy

When you start Tomboy, a little square note-and-pen icon will appear on your top bar. Clicking the icon shows the Tomboy menu options. To sync your notes across computers, go to the Tomboy preferences, click the Synchronization tab, and pick a local folder in your Dropbox or Google Drive. That's pretty much it! Start writing those notes! On the other computers where you want to sync your notes, select the same sync folder you chose on your first computer.

One quick point: when you sync your notes, Tomboy will create a folder titled "0" inside whatever folder you have chosen to sync your notes in.

If you want to launch Tomboy at system startup (in Ubuntu 14.04), search for "Startup Applications" in Unity and run it. Add a new entry titled "Tomboy" with the command "tomboy", then save and close. The next time you log on, your Tomboy notes will be ready to use.
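
Startup Applications simply writes a small .desktop file under ~/.config/autostart, so if you prefer the terminal you can create the entry yourself. A minimal sketch (the real file Startup Applications writes may contain a few extra keys):

# create an autostart entry that launches tomboy at login
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/tomboy.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Tomboy
Exec=tomboy
EOF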

Tomboy also works with Windows and Mac OS X and installation instructions can be found here:

Windows ... https://wiki.gnome.org/Apps/Tomboy/Installing/Windows
Mac ... https://wiki.gnome.org/Apps/Tomboy/Installing/Mac

- - - - -

If you are still looking for syncing options, this comes in from Christian....

You can self-host your note sync server with either Rainy or Grauphel...

Learn more here...

http://dynalon.github.io/Rainy/

http://apps.owncloud.com/content/show.php?action=content&content=166654

by iheartubuntu (noreply@blogger.com) at October 14, 2014 11:42 AM

October 13, 2014

iheartubuntu

MAT - Metadata Anonymisation Toolkit


This is a great program used to help protect your privacy.

Metadata consists of information that characterizes data. Metadata is used to provide documentation for data products. In essence, metadata answers who, what, when, where, why, and how about every facet of the data that is being documented.

Metadata within a file can tell a lot about you. Cameras record when a picture was taken and what camera was used. Office documents and PDFs automatically add author and company information to documents and spreadsheets.

Maybe you don't want to disclose that information on the web.

MAT can only remove metadata from your files; it does not anonymise their content, nor can it handle watermarking, steganography, or especially custom metadata fields or systems.

If you really want to be anonymous, use a format that does not contain any metadata, or better yet, use plain-text.

These are the formats supported to some extent:

Portable Network Graphics (PNG)
JPEG (.jpeg, .jpg, ...)
Open Document (.odt, .odx, .ods, ...)
Office Openxml (.docx, .pptx, .xlsx, ...)
Portable Document Fileformat (.pdf)
Tape ARchive (.tar, .tar.bz2, .tar.gz)
ZIP (.zip)
MPEG Audio (.mp3, .mp2, .mp1, .mpa)
Ogg Vorbis (.ogg)
Free Lossless Audio Codec (.flac)
Torrent (.torrent)

The President of the United States and his birth certificate would have greatly benefited from software such as MAT.

You can install MAT with this terminal command:

sudo apt-get install mat
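
MAT can also be run from the terminal. A minimal sketch, assuming the package installs a command-line front-end named "mat" (the filename below is just a placeholder, and options vary between MAT versions):

# strip the metadata MAT knows about from the file
mat vacation-photo.jpg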

Look for more articles about privacy soon, or find the existing ones by searching our site for "privacy".

by iheartubuntu (noreply@blogger.com) at October 13, 2014 12:05 PM

October 12, 2014

iheartubuntu

Tasque TODO List App


We're getting back to some of the old basic apps that a lot of people used to use in Ubuntu. Many of them still work great, and without needing any internet connection.

Tasque (pronounced like “task”) is a simple task management app (TODO list) for the Linux Desktop and Windows. It supports syncing with the online service Remember the Milk or simply storing your tasks locally.

The main window has the ability to complete a task, change the priority, change the name, and change the due date without additional property dialogs.

When a user clicks on a task priority, a list of possible priorities is presented and when selected, the task is re-prioritized in the order you wish.

When you click on the due date, a list of the next seven days is presented along with an option to remove the date or select a date from a calendar.

A user completes a task by clicking the check box on a task. The task is crossed out indicating it is complete and a timer begins counting down to the right of the task. When the timer is done, the task is removed from view.

As mentioned, Tasque can save tasks locally or use Remember the Milk, a free online to-do list, as a backend. On one of my computers saving my tasks using RTM works great; on my computer at work, it won't sync my tasks. I haven't figured out why, but I will post any updates here once I get it working or find a workaround.

You can install Tasque from the Ubuntu Software Center or with this terminal command:

sudo apt-get install tasque

All in all, Tasque is a great little task app. Really simple to use!

by iheartubuntu (noreply@blogger.com) at October 12, 2014 05:02 AM

October 11, 2014

Akkana Peck

Railroading exponentially

or: Smart communities can still be stupid

I attended my first Los Alamos County Council meeting yesterday. What a railroad job!

The controversial issue of the day was the town's "branding". Currently, as you drive into Los Alamos on highway 502, you pass a tasteful rock sign proclaiming "LOS ALAMOS: WHERE DISCOVERIES ARE MADE". But back in May, the county council announced the unanimous approval of a new slogan, for which they'd paid an ad agency some $55,000: "LIVE EXPONENTIALLY".

As you might expect in a town full of scientists, the announcement was greeted with much dismay. What is it supposed to mean, anyway? Is it a reference to exponential population growth? Malignant tumor growth? Gaining lots of weight as we age?

The local online daily, tired of printing the flood of letters protesting the stupid new slogan, ran a survey about the "Live Exponentially" slogan. The results were that 8.24% liked it, 72.61% didn't, and 19.16% didn't like it and offered alternatives or comments. My favorites were Dave's suggestion of "It's Da Bomb!", and a suggestion from another reader, "Discover Our Secrets"; but many of the alternate suggestions were excellent, or hilarious, or both -- follow the link to read them all.

For further giggles, try a web search on the term. If you search without quotes, Ebola tops the list. With quotes, you get mostly religious tracts and motivational speakers.

The Council Meeting

(The rest of this is probably only of interest to Los Alamos folk.)

Dave read somewhere -- it wasn't widely announced -- that Friday's council meeting included an agenda item to approve spending $225,000 -- yes, nearly a quarter of a million dollars -- on "brand implementation". Of course, we had to go.

In the council discussion leading up to the call for public comment, everyone spoke vaguely of "branding" without mentioning the slogan. Maybe they hoped no one would realize what they were really voting for. But in the call for public comment, Dave raised the issue and urged them to reconsider the slogan.

Kristin Henderson seemed to have quite a speech prepared. She acknowledged that "people who work with math" universally thought the slogan was stupid, but she said that people from a liberal arts background, like herself, use the term to mean hiking, living close to nature, listening to great music, having smart friends and all the other things that make this such a great place to live. (I confess to being skeptical -- I can't say I've ever heard "exponential" used in that way.)

Henderson also stressed the research and effort that had already gone into choosing the current slogan, and dismissed the idea that spending another $50,000 on top of the $55k already spent would be "throwing money after bad." She added that showing the community some images to go with the slogan might change people's minds.

David Izraelevitz admitted that being an engineer, he initially didn't like "Live Exponentially". But he compared it to Apple's "Think Different": though some might think it ungrammatical, it turned out to be a highly successful brand because it was coupled with pictures of Gandhi and Einstein. (Hmm, maybe that slogan should be "Live Exponential".)

Izraelevitz described how he convinced a local business owner by showing him the ad agency's full presentation, with pictures as well as the slogan, and said that we wouldn't know how effective the slogan was until we'd spent the $50k for logo design and an implementation plan. If the council didn't like the results they could choose not to go forward with the remaining $175,000 for "brand implementation". (Councilor Fran Berting had previously gotten clarification that those two parts of the proposal were separate.)

Rick Reiss said that what really mattered was getting business owners to approve the new branding -- "the people who would have to use it." It wasn't so important what people in the community thought, since they didn't have logos or ads that might incorporate the new branding.

Pete Sheehey spoke up as the sole dissenter. He pointed out that most of the community input on the slogan has been negative, and that should be taken into account. The proposed slogan might have a positive impact on some people but it would have a negative impact on others, and he couldn't support the proposal.

Fran Berting said she was "not all that taken" with the slogan, but agreed with Izraelevitz that we wouldn't know if it was any good without spending the $50k. She echoed the "so much work has already gone into it" argument. Reiss also echoed "so much work", and that he liked the slogan because he saw it in print with a picture.

But further discussion was cut off. It was 1:30, the fixed end time for the meeting, and chairman Geoff Rodgers (who had pretty much stayed out of the discussion to this point) called for a vote. When the roll call got to Sheehey, he objected to the forced vote while they were still in the middle of a discussion. But after a brief consultation on Robert's Rules of Order, chairman Rodgers declared the discussion over and said the vote would continue. The motion was approved 5-1.

The Exponential Railroad

Quite a railroading. One could almost think it had been planned that way.

First, the item was listed as one of two in the "Consent Agenda" -- items which were expected to be approved all together in one vote with no discussion or public comment. It was moved at the last minute into "Business"; but that put it last on the agenda.

Normally that wouldn't have mattered. But although the council more often meets in the evenings and goes as long as it needs to, Friday's meeting had a fixed time of noon to 1:30. Even I could see that wasn't much time for all the items on the agenda.

And that mid-day timing meant that working folk weren't likely to be able to listen or comment. Further, the branding issue didn't come up until 1 pm, after some of the audience had already left to go back to work. As a result, there were only two public comments.

Logic deficit

I heard three main arguments repeated by every council member who spoke in favor:

  1. the slogan makes much more sense when viewed with pictures -- they all voted for it because they'd seen it presented with visuals;
  2. a lot of time, effort and money has already gone into this slogan, so it didn't make sense to drop it now; and
  3. if they didn't like the logo after spending the first $50k, they didn't have to approve the other $175k.

The first argument doesn't make any sense. If the pictures the council saw were so convincing, why weren't they showing those images to the public? Why spend an additional $50,000 for different pictures? I guess $50k is just pocket change, and anyone who thinks it's a lot of money is just being silly.

As for the second and third, they contradict each other. If most of the board thinks now that the initial $50k contract was so much work that we have to go forward with the next $50k, what are the chances that they'll decide not to continue after they've already invested $100k?

Exponentially low, I'd say.

I was glad of one thing, though. As a newcomer to the area faced with a ballot next month, it was good to see the council members in action, seeing their attitudes toward spending and how much they care about community input. That will be helpful come ballot time.

If you're in the same boat but couldn't make the meeting, catch the October 10, 2014 County Council Meeting video.

October 11, 2014 06:54 PM

October 09, 2014

Akkana Peck

What's nesting in our truck's engine?

We park the Rav4 outside, under an overhang. A few weeks ago, we raised the hood to check the oil before heading out on an adventure, and discovered a nest of sticks and grass wedged in above the valve cover. (Sorry, no photos -- we were in a hurry to be off and I didn't think to grab the camera.)

Pack rats were the obvious culprits, of course. There are lots of them around, and we've caught quite a few pack rats in our live traps. Knowing that rodents can be a problem since they like to chew through hoses and wiring, we decided we'd better keep an eye on the Rav and maybe investigate some sort of rodent-repelling technology.

Sunday, we got back from another adventure, parked the Rav in its usual place, went inside to unload before heading out for an evening walk, and when we came back out, there was a small flock of birds hanging around under the Rav. Towhees! Not only hanging around under the still-warm engine, but several times we actually saw one fly between the tires and disappear.

Could towhees really be our engine nest builders? And why would they be nesting in fall, with the days getting shorter and colder?

I'm keeping an eye on that engine compartment now, checking every few days. There are still a few sticks and juniper sprigs in there, but no real nest has reappeared so far. If it does, I'll post a photo.

October 09, 2014 12:10 AM

October 08, 2014

iheartubuntu

Check Ubuntu Linux for Shellshock


Shellshock is a new vulnerability that allows attackers to run code on your machine, which could put your Ubuntu Linux system at serious risk of malicious attacks.

Shellshock exploits a flaw in how Bash handles environment variables, allowing attackers to run commands on your computer. From there, hackers can launch programs, enable features, and even access your files. The vulnerability only affects UNIX-based systems that run Bash (Linux and Mac).

You can test your system by running this test command from Terminal:

env x='() { :;}; echo vulnerable' bash -c 'echo hello'

If you're not vulnerable, you'll get this result:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
hello

If you are vulnerable, you'll get:

vulnerable
hello

You can also check the version of bash you're running by entering:

bash --version

If you get an unpatched version such as 3.2.51(1)-release as a result, you will need to update. Most Linux distributions already have patches available.

-----------

If your system is vulnerable, make sure your computer has all critical updates and it should be patched already. If you are using a version of Ubuntu that has already reached end-of-life status (12.10, 13.04, 13.10, etc.), you may be screwed and may need to start using a newer version of Ubuntu.

This should update Bash for you so your system is not vulnerable...

sudo apt-get update && sudo apt-get install --only-upgrade bash
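
Once the upgrade finishes, re-run the test from above and check which bash package you ended up with; something along these lines:

# a patched bash should print the warning lines and "hello", but not "vulnerable"
env x='() { :;}; echo vulnerable' bash -c 'echo hello'

# show the installed bash package version on Ubuntu/Debian
dpkg -s bash | grep '^Version'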

by iheartubuntu (noreply@blogger.com) at October 08, 2014 11:48 PM

October 03, 2014

iheartubuntu

Ubuntu - 10 Years Strong


Ubuntu, the Debian-based Linux operating system, is approaching its 21st release in just a couple of weeks (October 23rd), moving forward 10 years strong now!

Mark Shuttleworth keeps investing in Canonical, the company behind Ubuntu, which continues to lose money year after year. It's clear that profit isn't his main concern, but there is still a clear plan for Ubuntu and Canonical. That plan appears to be very much 'cloud' and business based.

Shuttleworth is proud that the vast majority of cloud deployments are based on Ubuntu. The recent launch of Canonical's 'cloud in a box' deployable Ubuntu system is another indication of where it sees things going.

Ubuntu Touch will soon appear on phones and tablets, which is really the glue for this cloud/mobile/desktop ecosystem. Ubuntu has evolved impressively over the last ten years and it will continue to develop in this new age.

Ubuntu provides a seamless ecosystem for devices deployed to businesses and users alike. Being able to run the identical software on multiple devices and in the cloud, all sharing the same data is very appealing.

Ubuntu will be at the heart of this with or without the survival of Canonical.

"I love technology, and I love economics and I love what’s going on in society. For me, Ubuntu brings those three things together in a unique way." - Mark Shuttleworth on the next 5 years of Ubuntu

by iheartubuntu (noreply@blogger.com) at October 03, 2014 07:48 PM

BirdFont Font Editor


If you have ever been interested in making your own fonts for fun or profit, BirdFont is an easy to use free font editor that lets you create vector graphics and export TTF, EOT & SVG fonts.

To install BirdFont, simply use the PPA below to ensure you always have the most updated version. Open the terminal and run the following commands:

sudo add-apt-repository ppa:ubuntuhandbook1/birdfont

sudo apt-get update

sudo apt-get install birdfont

If you don't like using a PPA repository, you can download the appropriate DEB package for your particular system....

http://ppa.launchpad.net/ubuntuhandbook1/birdfont/ubuntu/pool/main/b/birdfont/

If you need help developing a font, there is also an official tutorial here!

http://birdfont.org/doku/doku.php/tutorials

by iheartubuntu (noreply@blogger.com) at October 03, 2014 05:27 PM

Elizabeth Krumbach

33rd Birthday Weekend

I’m a big fan of trying new things and places, so it came as a surprise that when I decided upon a birthday getaway this past weekend we decided to go back to the Resort at Squaw Creek, where we had been last year. It wasn’t just travel exhaustion that made us choose this one; we knew we wanted to get some work done during the weekend, and the suite-style rooms were great for that. Honestly, we love everything about this place – beautiful views, amazing pools, good food. The price was also right for a quick getaway.

The drive up was a long one, Friday evening traffic combined with a thunderstorm. We stopped for dinner at Cottonwood Restaurant in Truckee. By the time we arrived at the driveway to the resort the rain had transformed… what is that, slush? By the time we got to the front door it was properly snowing!

Saturday morning we had breakfast brought to our room (heaven!) and enjoyed the stunning view outside our window.

The rain kept us inside for most of the day, which was wonderful. I was able to get some work done on my book (as planned!) and MJ did a bunch of research into our first proper vacation of the year coming up in November. Fireplace, hot chocolate, the man I love, perfect!

As 4PM rolled around the rain tapered off and we went down to the pool. It was 45°F out, so not exactly swimming weather, but the pools were heated and the trio of hot tubs were a popular spot for other folks visiting for the weekend. It turned out wonderful, particularly compared with the standard warm fall we’re having in San Francisco. That evening we had a wonderful dinner (and dessert!) at the on-site restaurant.

Sunday was even more rainy. We took advantage of their option to pay an extra $85 to get an 8pm checkout, giving us the whole day to enjoy before we had to go home. The rain did end up keeping us from the pool, but I did take a 2 mile walk through the woods with an umbrella after lunch. In spite of the rain, it was a beautiful walk up the sometimes steep and rocky terrain through the woods.

Alas, it had to end. On our way out we stopped at FiftyFifty Brewing Company for a casual dinner. They had the most amazing mussels appetizer; I kind of want to go back just to have that again. Dinner wrapped up with some cake!

Fortunately the drive home was quicker (and drier!) than our drive to the mountains had been and we got in shortly before 1AM.

My actual 33rd birthday was on Monday. I ended up making plans with a friend who was in town to celebrate her own birthday the following day. We met up at the San Francisco Zoo around 11AM and I finally got to meet the wolverines! Even better, we caught them as a keeper was putting out some treats, so we got to see them uncharacteristically bounding about their enclosure as they attacked the treat bags that were put out for them. Alas, in spite of staying until the opening of the Lion House, I still managed to miss the sneaky two-toed sloth who decided to hide from me.

We wrapped up the afternoon with lunch over at the Beach Chalet.

It was a great birthday weekend+birthday, aside from the whole turning 33 part. Getting older hasn’t tended to bother me, but time is passing too quickly, much still to do.

by pleia2 at October 03, 2014 05:26 AM

October 02, 2014

Akkana Peck

Photographing a double rainbow

[double rainbow]

The wonderful summer thunderstorm season here seems to have died down. But while it lasted, we had some spectacular double rainbows. And I kept feeling frustrated when I took the SLR outside only to find that my 18-55mm kit lens was nowhere near wide enough to capture it. I could try stitching it together as a panorama, but panoramas of rainbows turn out to be quite difficult -- there are no clean edges in the photo to tell you where to join one image to the next, and automated programs like Hugin won't even try.

There are plenty of other beautiful vistas here too -- cloudscapes, mesas, stars. Clearly, it was time to invest in a wide-angle lens. But how wide would it need to be to capture a double rainbow?

All over the web you can find out that a rainbow has a radius of 42 degrees, so you need a lens that covers 84 degrees to get the whole thing.

But what about a double rainbow? My web searches came to naught. Lots of pages talk about double rainbows, but Google wasn't finding anything that would tell me the angle.

I eventually gave up on the web and went to my physical bookshelf, where Color and Light in Nature gave me a nice table of primary and secondary rainbow angles for various wavelengths of light. It turns out that the 42 degrees everybody quotes is for light of 600 nm wavelength, an orange color. At that wavelength, the primary angle is 42.0° and the secondary angle is 51.0°.

Armed with that information, I went back to Google and searched for double rainbow 51 OR 102 angle and found a nice Slate article on a Double rainbow and lightning photo. The photo in the article, while lovely (lightning and a double rainbow in the South Dakota badlands), only shows a tiny piece of the rainbow, not the whole one I'm hoping to capture; but the article does mention the 51-degree angle.

Okay, so 51°×2 captures both bows at 600 nm. But what about other wavelengths? A typical eye can see from about 400 nm (deep purple) to about 760 nm (deep red). From the table in the book:

Wavelength (nm)   Primary   Secondary
400               40.5°     53.7°
600               42.0°     51.0°
700               42.4°     50.3°

Notice that while the primary angles get smaller with shorter wavelengths, the secondary angles go the other way. That makes sense if you remember that the outer rainbow has its colors reversed from the inner one: red is on the outside of the primary bow, but the inside of the secondary one.

So if I want to photograph a complete double rainbow in one shot, I need a lens that can cover at least 108 degrees.

What focal length lens does that translate to? Howard's Astronomical Adventures has a nice focal length calculator. If I look up my Rebel XSi on Wikipedia to find out that other countries call it a 450D, and plug that into the calculator, then try various focal lengths (the calculator offers a chart but it didn't work for me), it turns out that I need an 8mm lens, which will give me a 108° 26′ 46″ field of view -- just about right.
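
If you'd rather skip the calculator, the rectilinear field of view is just 2 × arctan(sensor width / (2 × focal length)). Here's a quick check with bc, assuming the 450D's sensor is about 22.2 mm wide (my number, not one from the calculator):

# horizontal field of view in degrees: 2 * atan(22.2 / (2*8)), converted from radians
echo 'scale=6; 2 * a(22.2 / (2*8)) * 180 / (4*a(1))' | bc -l

That prints roughly 108.4 degrees, which matches the calculator's answer.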

[Double rainbow with the Rokinon 8mm fisheye] So that's what I ordered -- a Rokinon 8mm fisheye. And it turns out to be far wider than I need -- apparently the actual field of view in fisheyes varies widely from lens to lens, and this one claims to have a 180° field. So the focal length calculator isn't all that useful. At any rate, this lens is plenty wide enough to capture those double rainbows, as you can see.

About those books

By the way, that book I linked to earlier is apparently out of print and has become ridiculously expensive. Another excellent book on atmospheric phenomena is Light and Color in the Outdoors by Marcel Minnaert (I actually have his earlier version, titled The Nature of Light and Color in the Open Air). Minnaert doesn't give the useful table of frequencies and angles, but he has lots of other fun and useful information on rainbows and related phenomena, including detailed instructions for making rainbows indoors if you want to measure angles or other quantities yourself.

October 02, 2014 07:37 PM

September 30, 2014

Elizabeth Krumbach

PuppetConf 2014

Wow, so many conferences lately! Fortunately for me, PuppetConf was local so I didn’t need to catch any flights or deal with hotel hassle, it was just a 2 block walk from home each day.

My focus for this conference was learning more about how people are using code-driven infrastructures similar to ours in the OpenStack Infrastructure project and meeting up with some colleagues, several of whom I’ve only communicated with online. I succeeded on both counts and it ended up being a great conference for me.

There was a keynote by Gene Kim, one of the authors of the “devops novel” The Phoenix Project, which I first learned about from my colleague Robert Collins. His talk, The Phoenix Project: Lessons Learned, focused on the book. In spite of having read the book, it was great to hear from Kim on the topic more directly as he talked about technical debt and outlined his 4 top lessons learned:

  • The business value of DevOps is higher than we thought.
  • DevOps Is As Good For Dev… …As It Is For Ops
  • The Need For High-Trust Management (can’t bog people down)
  • DevOps is not just for the unicorns… DevOps is for horses, too. (ie – not just for tech stars like Facebook)

Talk slides here.

The next keynote was by Kate Matsudaira of Popforms, who gave a talk titled Trust Me. I wasn’t sure what to expect with this one, but I was pleasantly surprised. She covered some of what one may call “soft skills” in the tech industry, including helping others, supporting your colleagues and in general being a resourceful person who people enjoy working with. Over the years I’ve seen far too many people assume these skills aren’t valuable, even as they look around and identify folks with these skills as the colleagues they like working with the most. Huge thanks to Kate for bringing attention to these skills. She also talked a lot about building trust within your organization, since it can often be hard for managers to evaluate employees who have the freedom to work unobstructed (as we want!), and about mechanisms to build that trust, including reporting what you do to your boss and teammates. Slides from her talk are available here: Keynote: Trust Me slides

After the keynote I headed over to Evan Scheessele’s talk on Infrastructure-as-Code with Puppet Enterprise in the Cloud. He works in HP’s Printing & Personal Systems division and shared the evolution and use of a code-driven infrastructure on HP Cloud along with Puppet Enterprise. The driving vision in his organization was boiled down to a series of points:

  • Infrastructure as “Cattle” not “Pets”
  • Modern configuration-management means: Executable Documentation
  • “Infrastructure as Code”
  • Focus on the production-pattern, and automate it end-to-end
  • Everything is consistently reproducible

He also went application-specific, discussing their use of Jenkins, and hiera and puppetdb in PE. It was a great talk and a pleasure to catch up with him afterwards. Slides available here.


Thanks to Evan Scheessele for the photo

My talk was on How to Open Source Your Puppet Configuration and I brought along Monty Taylor and James E. Blair stick puppets I made to demonstrate the rationale of running our infrastructure as an open source project. I walked the audience through some of the benefits of making Puppet configurations public (or at least public within an organization), the importance of licensing and documentation and some steps for splitting up your configuration so it’s understandable and consumable by others. My slides are here.

On Wednesday I attended Gareth Rushgrove’s talk on Continuous Integration for Infrastructure as Code. He skipped over a lot of the more common individual testing mechanisms (puppet-lint, puppet-syntax, rspec-puppet, beaker) and dove into higher-level topics like testing of images and containers and test-driven infrastructure (analogous to test-driven development). Through his talk he gave several examples of how this is accomplished, from the use of Serverspec and the need to write tests before infrastructure, to writing tests to enforce policy and pulling data from PuppetDB to run tests. Slides here.

After lunch I headed over to Chris Hoge’s talk about Understanding OpenStack Deployments with the Puppet modules available. In spite of all my work with OpenStack, I haven’t had a very close look at these modules, so it was nice meeting up with him and Colleen Murphy from the puppet-openstack team. In his talk he walked the audience through some of the basic design decisions of OpenStack and then pulled in examples of how the Puppet modules for OpenStack are used to bring this all together. Slides here.

Two talks I’ll have to catch once the videos are online: Continuous Infrastructure: Modern Puppet for the Jenkins Project – R.Tyler Croy, Jenkins (slides) and Infrastructure as Software – Dustin J. Mitchell, Mozilla, Inc. (slides). Both of these are open source infrastructures that I mentioned during my own talk! I wish I had taken the opportunity while we were all in one spot to meet together; fortunately I was able to chat with R.Tyler Croy prior to my talk, but his talk conflicted with mine and Dustin’s with the OpenStack talk.

In all, this was a very valuable event. I learned some interesting new things about how others are using code-driven infrastructures and I made some great connections.

More photos from PuppetConf here: https://www.flickr.com/photos/pleia2/sets/72157648049231891/

by pleia2 at September 30, 2014 08:50 PM

September 28, 2014

Akkana Peck

Petroglyphs, ancient and modern

In the canyons below White Rock there are many wonderful petroglyphs, some dating back many centuries, like this jaguar: [jaguar petroglyph in White Rock Canyon]

as well as collections like these:
[pictographs] [petroglyph collection]

Of course, to see them you have to negotiate a trail down the basalt cliff face. [Red Dot trail]

Up the hill in Los Alamos there are petroglyphs too, on trails that are a bit more accessible ... but I suspect they're not nearly so old. [petroglyph face]

September 28, 2014 03:47 AM

September 27, 2014

iheartubuntu

Ubuntu Kylin Wallpapers


Looking for some new wallpapers these days? The Chinese version of Ubuntu, Ubuntu Kylin, has some beautiful new wallpapers for the 14.10 release. Download and install the DEB to put them on your computer (a total of 24 wallpapers)...

http://security.ubuntu.com/ubuntu/pool/universe/u/ubuntukylin-wallpapers/ubuntukylin-wallpapers-utopic_14.10.0_all.deb
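
If you prefer the terminal, downloading and installing the package looks something like this (the filename simply comes from the URL above):

# grab the wallpaper package and install it
wget http://security.ubuntu.com/ubuntu/pool/universe/u/ubuntukylin-wallpapers/ubuntukylin-wallpapers-utopic_14.10.0_all.deb
sudo dpkg -i ubuntukylin-wallpapers-utopic_14.10.0_all.deb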


by iheartubuntu (noreply@blogger.com) at September 27, 2014 12:30 PM

September 25, 2014

Eric Hammond

Throw Away The Password To Your AWS Account

reduce the risk of losing control of your AWS account by not knowing the root account password

As Amazon states, one of the best practices for using AWS is

Don’t use your AWS root account credentials to access AWS […] Create an IAM user for yourself […], give that IAM user administrative privileges, and use that IAM user for all your work.

The root account credentials are the email address and password that you used when you first registered for AWS. These credentials have the ultimate authority to create and delete IAM users, change billing, close the account, and perform all other actions on your AWS account.

You can create a separate IAM user with near-full permissions for use when you need to perform admin tasks, instead of using the AWS root account. If the credentials for the admin IAM user are compromised, you can use the AWS root account to disable those credentials to prevent further harm, and create new credentials for ongoing use.

However, if the credentials for your AWS root account are compromised, the person who stole them can take over complete control of your account, change the associated email address, and lock you out.

I have consulted with companies who lost control of the AWS root account which contained their assets. You want to avoid this.

Proposal

Given:

  • The AWS root account is not required for regular use as long as you have created an IAM user with admin privileges

  • Amazon recommends not using your AWS root account

  • You can’t accidentally expose your AWS root account password if you don’t know it and haven’t saved it anywhere

  • You can always reset your AWS root account password as long as you have access to the email address associated with the account

Consider this approach to improving security:

  1. Create an IAM user with full admin privileges (see the command-line sketch after this list). Use this when you need to do administrative tasks. Activate IAM user access to account billing information for the IAM user to have access to read and modify billing, payment, and account information.

  2. Change the AWS root account password to a long, randomly generated string. Do not save the password. Do not try to remember the password. On Ubuntu, you can use a command like the following to generate a random password for copy/paste into the change password form:

    pwgen -s 24 1
    
  3. If you need access to the AWS root account at some point in the future, use the “Forgot Password” function on the signin form.
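
If you already have aws-cli configured with credentials that are allowed to manage IAM, step 1 can be scripted. This is a rough sketch using an inline allow-everything policy, not a definitive recipe; the user name, policy name, and password below are just placeholders:

# create the admin IAM user
aws iam create-user --user-name admin

# attach an inline policy allowing all actions on all resources
aws iam put-user-policy --user-name admin --policy-name admin-all \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"*","Resource":"*"}]}'

# give the user a console password so you can sign in without the root account
aws iam create-login-profile --user-name admin --password 'choose-a-long-unique-password'

Activating IAM user access to billing information still has to be done in the web console while signed in as the root account.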

It should be clear from this that protecting access to your email account is critical to your overall AWS security, as that is all that is needed to change your password, but that has been true for many online services for many years.

Caveats

You currently need to use the AWS root account in the following situations:

  • to change the email address and password associated with the AWS root account

  • to deactivate IAM user access to account billing information

  • to cancel AWS services (e.g., support)

  • to close the AWS account

  • to buy stuff on Amazon.com, Audible.com, etc. if you are using the same account (not recommended)

  • anything else? Let folks know in the comments.

MFA

For completeness, I should also reiterate Amazon’s constant and strong recommendation to use MFA (multi-factor authentication) on your root AWS account. Consider buying the hardware MFA device, associating it with your root account, then storing it in a lock box with your other important things.

You should also add MFA to your IAM accounts that have AWS console access. For this, I like to use Google Authenticator software running on a locked down mobile phone.

MFA adds a second layer of protection beyond just knowing the password or having access to your email account.

Original article: http://alestic.com/2014/09/aws-root-password

by Eric Hammond at September 25, 2014 11:04 PM

AWS Community Heroes Program

Amazon Web Services recently announced an AWS Community Heroes Program where they are starting to recognize publicly some of the many individuals around the world who contribute in so many ways to the community that has grown up around the services and products provided by AWS.

It is fun to be part of this community and to share the excitement that so many have experienced as they discover and promote new ways of working and more efficient ways of building projects and companies.

Here are some technologies I have gotten the most excited about over the decades. Each of these changed my life in a significant way as I invested serious time and effort learning and using the technology. The year represents when I started sharing the “good news” of the technology with people around me, who at the time usually couldn’t have cared less.

  • 1980: Computers and Programming - “You can write instructions and the computer does what you tell it to! This is going to be huge!”

  • 1987: The Internet - “You can talk to people around the world, access information that others make available, and publish information for others to access! This is going to be huge!”

  • 1993: The World Wide Web - “You can view remote documents by clicking on hyperlinks, making it super-easy to access information, and publishing is simple! This is going to be huge!”

  • 2007: Amazon Web Services - “You can provision on-demand disposable compute infrastructure from the command line and only pay for what you use! This is going to be huge!”

I feel privileged to have witnessed amazing growth in each of these and look forward to more productive use on all fronts.

There are a ton of local AWS meetups and AWS user groups where you can make contact with other AWS users. AWS often sends employees to speak and share with these groups.

A great way to meet thousands of people in the AWS community (and to spend a few days in intense learning about AWS no matter your current expertise level) is to attend the AWS re:Invent conference in Las Vegas this November. Perhaps I’ll see you there!

Original article: http://alestic.com/2014/09/aws-community-heroes

by Eric Hammond at September 25, 2014 11:03 PM

iheartubuntu

Inside Bitcoins Las Vegas Is Two Weeks Away - 10% OFF!


I Heart Ubuntu is excited to be partnering with Inside Bitcoins Conference and Expo once again, which will be returning to Las Vegas at the Flamingo Hotel on October 5-7, 2014!

The event will explore the way that cryptocurrency has been affecting the payments industry, and will cover a wide range of topics including mainstream adoption, compliance, bitcoin startups, investing, mining, altcoins, equipment, and more. The first 300 paid attendees will receive US$50 in bitcoin.

New to Inside Bitcoins Las Vegas will be a half day of small classroom-style workshops taught by cryptocurrency veterans, which will provide attendees with an interactive, informative setting to learn about various facets of the bitcoin ecosystem.

Recently announced is a keynote by Patrick Byrne, CEO of Overstock.com, who will be leading a session titled, “Cryptosecurities: the Next Decentralized Frontier” on October 6 at 3:30pm. Byrne will also be making an exciting announcement at the event regarding Overstock’s latest development on the Bitcoin front.

Featured speakers include:

  • Patrick Byrne, CEO, Overstock.com 
  • Bobby Lee, CEO, BTC China & Board Member, Bitcoin Foundation
  • Daniel Larimer, Founder, Bitshares.org
  • Perianne Boring, Founder & President, Chamber of Digital Commerce

And many more! See the full roster of speakers here.

I Heart Ubuntu is once again partnering with Inside Bitcoins to offer all readers 10% OFF Gold and Silver Passports. Enter code HEART at checkout to redeem your discount. Register now!

by iheartubuntu (noreply@blogger.com) at September 25, 2014 01:34 AM

September 23, 2014

iheartubuntu

ONIONSHARE - Send Big Files Securely and Anonymously


OnionShare lets you securely and anonymously share files of any size. It works by starting a web server, making it accessible as a Tor hidden service, and generating an unguessable URL to access and download the files. It doesn't require setting up a server on the internet somewhere or using a third party filesharing service. You host the file on your own computer and use a Tor hidden service to make it temporarily accessible over the internet. The other user just needs to use Tor Browser to download the file from you.

Features include:

  • A user-friendly drag-and-drop graphical user interface that works in Windows, Mac OS X, and Linux
  • Ability to share multiple files and folders at once
  • Support for multiple people downloading files at once
  • Automatically copies the unguessable URL to your clipboard
  • Shows you the progress of file transfers
  • When file is done transferring, automatically closes OnionShare to reduce the attack surface
  • Localized into several languages, and supports international unicode filenames
  • Designed to work in Tails, for high risk users

You can learn more about OnionShare here: https://onionshare.org/

To install OnionShare on Ubuntu, open a terminal and type:

sudo add-apt-repository ppa:micahflee/ppa

sudo apt-get update && sudo apt-get install onionshare

Before you can share files, you need to open Tor Browser in the background. This will provide the Tor service that OnionShare uses to start the hidden service.

Open OnionShare and drag and drop files and folders you wish to share, and start the server. It will show you a long, random-looking URL such as http://cfxipsrhcujgebmu.onion/7aoo4nnzj3qurkafvzn7kket7u and copy it to your clipboard. This is the secret URL that can be used to download the file you're sharing. If you'd like multiple people to be able to download this file, uncheck the "close automatically" checkbox.

Send this URL to the person you're trying to send the files to. If the files you're sending aren't secret, you can use normal means of sending the URL: emailing it, posting it to Facebook or Twitter, etc. If you're trying to send secret files then it's important to send this URL securely.

The person who is receiving the files doesn't need OnionShare. All they need is to open the URL you send them in Tor Browser to be able to download the file.



by iheartubuntu (noreply@blogger.com) at September 23, 2014 12:00 PM