Planet Ubuntu California

November 26, 2014

Akkana Peck

Yam-Apple Casserole

Yams. I love 'em. (Actually, technically I mean sweet potatoes, since what we call "yams" here in the US aren't actual yams, but the root of a South American plant, Ipomoea batatas, related to the morning glory. I'm not sure I've ever had an actual yam, a tuber from an African plant of the genus Dioscorea.)

But what's up with the way people cook them? You take something that's inherently sweet and yummy -- and then you cover it with brown sugar and marshmallows and maple syrup and who knows what else. Do you sprinkle sugar on apples before you eat them?

Normally, I bake a yam for about an hour in the oven, or, if time is short (which it usually is), microwave it for about four and a half minutes, then finish up with 20-40 minutes in a toaster oven at 350°. The oven part seems to be necessary: it brings out the sweetness and the nice crumbly texture in a way that the microwave doesn't. You can read about some of the science behind this at this Serious Eats discussion of cooking sweet potatoes: it's because sweet potatoes have an odd enzyme, beta amylase, that breaks down carbohydrates into sugars, thus bringing out the vegetable's sweetness, but that enzyme only works in a limited temperature range, so if you heat up a sweet potato too fast the enzyme doesn't have time to work.

But Thanksgiving is coming up, and for a friend's dinner party, I wanted to make something a little more festive (and more easily parceled out) than whole baked yams.

A web search wasn't much help: nearly everything I found involved either brown sugar or syrup. The most interesting casserole recipes I saw fell into two categories: sweet and spicy yams with chile powder and cayenne pepper (and brown sugar), and yam-apple casseroles (with brown sugar and lemon juice). As far as I can tell it has never occurred to anyone, before me, to try either of these without added sugar. So I bravely volunteered myself as test subject.

I was very pleased with the results. The tart apples, the sweet yams and the various spices made a lovely combination. And it's a lot healthier than the casseroles with all the sugary stuff piled on top.

Yam-Apple Casserole without added sugar

Ingredients:

  • Yams, as many as needed.
  • Apples: 1-2 apples per yam. Use a tart variety, like granny smith.
  • chile powder
  • sage
  • rosemary or thyme
  • cumin
  • nutmeg
  • ginger powder
  • salt
(Your choice whether to use all of these spices, just some, or different ones.)

Peel and dice yams and apples into bite-sized pieces, roughly half-inch to inch cubes. (Peeling the yams is optional.)

Drizzle a little olive oil over the yam and apple pieces, then sprinkle spices. Your call as to which spices and how much. Toss it all together until the pieces are all evenly coated with oil and the spices look evenly distributed.

Lay out in a casserole dish or cake pan and bake at 350°F until the yam pieces are soft. This takes at least an hour, two if you made big pieces or layered the pieces thickly in the pan. The apples will mostly disintegrate into little mushy bits between the pieces of yam, but that's fine -- they're there for flavor, not consistency.

Note: After reading about beta-amylase and its temperature range, I had the bright idea that it would be even better to do this in a crockpot. Long cooking at low temps, right? Wrong! The result was terrible, almost completely tasteless. Stick to using the oven.

I'm going to try adding some parsnips, too, though parsnips seem to need to cook longer than sweet potatoes, so it might help to pre-cook the parsnips for a few minutes in the microwave before tossing them in with the yams and apples.

November 26, 2014 02:07 AM

November 25, 2014

Eric Hammond

AWS Lambda Walkthrough Command Line Companion

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:

# Change to your own unique S3 bucket name:
source_bucket=alestic-lambda-example

# Do not change this. Walkthrough code assumes this name
target_bucket=${source_bucket}resized

function=CreateThumbnail
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
lambda_invocation_role_name=lambda-$function-invocation
lambda_invocation_access_policy_name=lambda-$function-invocation-access
log_group_name=/aws/lambda/$function

Install some required software:

sudo apt-get install nodejs nodejs-legacy npm

Step 1.1: Create Buckets and Upload a Sample Object (walkthrough)

Create the buckets:

aws s3 mb s3://$source_bucket
aws s3 mb s3://$target_bucket

Upload a sample photo:

# by Hatalmas: https://www.flickr.com/photos/hatalmas/6094281702
wget -q -OHappyFace.jpg \
  https://c3.staticflickr.com/7/6209/6094281702_d4ac7290d3_b.jpg

aws s3 cp HappyFace.jpg s3://$source_bucket/

Step 2.1: Create a Lambda Function Deployment Package (walkthrough)

Create the Lambda function nodejs code:

# JavaScript code as listed in walkthrough
wget -q -O $function.js \
  http://run.alestic.com/lambda/aws-examples/CreateThumbnail.js

Install packages needed by the Lambda function code. Note that this is done under the local directory:

npm install async gm # aws-sdk is not needed

Put all of the required code into a ZIP file, ready for uploading:

zip -r $function.zip $function.js node_modules

Step 2.2: Create an IAM Role for AWS Lambda (walkthrough)

Create the IAM role that the Lambda function will use when it runs:

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do and access. This policy is slightly tighter than the generic role policy created with the IAM console:

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "logs:*"
        ],
        "Resource": "arn:aws:logs:*:*:*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::'$source_bucket'/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:PutObject"
        ],
        "Resource": "arn:aws:s3:::'$target_bucket'/*"
      }
    ]
  }'

Step 2.3: Upload the Deployment Package and Invoke it Manually (walkthrough)

Upload the Lambda function, specifying the IAM role it should use and other attributes:

# Timeout increased from walkthrough based on experience
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --role "$lambda_execution_role_arn" \
  --mode event \
  --handler "$function.handler" \
  --timeout 30 \
  --runtime nodejs

Create fake S3 event data to pass to the Lambda function. The key here is the source S3 bucket and key:

cat > $function-data.json <<EOM
{  
   "Records":[  
      {  
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":"1970-01-01T00:00:00.000Z",
         "eventName":"ObjectCreated:Put",
         "userIdentity":{  
            "principalId":"AIDAJDPLRKLG7UEXAMPLE"
         },
         "requestParameters":{  
            "sourceIPAddress":"127.0.0.1"
         },
         "responseElements":{  
            "x-amz-request-id":"C3D13FE58DE4C810",
            "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
         },
         "s3":{  
            "s3SchemaVersion":"1.0",
            "configurationId":"testConfigRule",
            "bucket":{  
               "name":"$source_bucket",
               "ownerIdentity":{  
                  "principalId":"A3NL1KOZZKExample"
               },
               "arn":"arn:aws:s3:::$source_bucket"
            },
            "object":{  
               "key":"HappyFace.jpg",
               "size":1024,
               "eTag":"d41d8cd98f00b204e9800998ecf8427e",
               "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko"
            }
         }
      }
   ]
}
EOM

Invoke the Lambda function, passing in the fake S3 event data:

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-data.json"

Look in the target bucket for the converted image. It could take a while to show up since the Lambda function is running asynchronously:

aws s3 ls s3://$target_bucket
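
Since the function runs asynchronously, the thumbnail may take a little while to appear. If you would rather wait for it than re-run the listing by hand, a small polling loop works; the resized-HappyFace.jpg key name is taken from the clean-up step later in this article, so adjust it if your function names thumbnails differently:

# Poll for the thumbnail, giving up after about two minutes
for i in $(seq 1 24); do
  found=$(aws s3 ls s3://$target_bucket/resized-HappyFace.jpg)
  if [ -n "$found" ]; then
    echo "thumbnail is ready: $found"
    break
  fi
  sleep 5
done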

Look at the Lambda function log output in CloudWatch:

aws logs describe-log-groups \
  --output text \
  --query 'logGroups[*].[logGroupName]'

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName')
echo log_stream_names="'$log_stream_names'"
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Step 3.1: Create an IAM Role for Amazon S3 (walkthrough)

Create the IAM role that S3 will assume when it invokes the Lambda function:

lambda_invocation_role_arn=$(aws iam create-role \
  --role-name "$lambda_invocation_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringLike": {
              "sts:ExternalId": "arn:aws:s3:::*"
            }
          }
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_invocation_role_arn=$lambda_invocation_role_arn

Grant the invocation role permission to invoke the Lambda function:

aws iam put-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "lambda:InvokeFunction"
         ],
         "Resource": [
           "*"
         ]
       }
     ]
   }'

Step 3.2: Configure a Notification on the Bucket (walkthrough)

Get the Lambda function ARN:

lambda_function_arn=$(aws lambda get-function-configuration \
  --function-name "$function" \
  --output text \
  --query 'FunctionARN'
)
echo lambda_function_arn=$lambda_function_arn

Tell the S3 bucket to invoke the Lambda function when new objects are created (or overwritten):

aws s3api put-bucket-notification \
  --bucket "$source_bucket" \
  --notification-configuration '{
    "CloudFunctionConfiguration": {
      "CloudFunction": "'$lambda_function_arn'",
      "InvocationRole": "'$lambda_invocation_role_arn'",
      "Event": "s3:ObjectCreated:*"
    }
  }'

Step 3.3: Test the Setup (walkthrough)

Copy your own jpg and png files into the source bucket:

myimages=...
aws s3 cp $myimages s3://$source_bucket/

Look for the resized images in the target bucket:

aws s3 ls s3://$target_bucket

Check out the environment

These handy commands let you review the related resources in your account:

aws lambda list-functions \
  --output text \
  --query 'Functions[*].[FunctionName]'

aws lambda get-function \
  --function-name "$function"

aws iam list-roles \
  --output text \
  --query 'Roles[*].[RoleName]'

aws iam get-role \
  --role-name "$lambda_execution_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies  \
  --role-name "$lambda_execution_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws iam get-role \
  --role-name "$lambda_invocation_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies  \
  --role-name "$lambda_invocation_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws s3api get-bucket-notification \
  --bucket "$source_bucket"

Clean up

If you are done with the walkthrough, you can delete the created resources:

aws s3 rm s3://$target_bucket/resized-HappyFace.jpg
aws s3 rm s3://$source_bucket/HappyFace.jpg
aws s3 rb s3://$target_bucket/
aws s3 rb s3://$source_bucket/

aws lambda delete-function \
  --function-name "$function"

aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_execution_role_name"

aws iam delete-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_invocation_role_name"

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  echo "deleting log-stream $log_stream_name"
  aws logs delete-log-stream \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name"
done

aws logs delete-log-group \
  --log-group-name "$log_group_name"

If you try these instructions, please let me know in the comments where you had trouble or experienced errors.

Original article: http://alestic.com/2014/11/aws-lambda-cli

by Eric Hammond at November 25, 2014 09:36 PM

November 24, 2014

Jono Bacon

Ubuntu Governance Reboot: Five Proposals

Sorry, this is long, but hang in there.

A little while back I wrote a blog post that seemed to inspire some people and ruffle the feathers of some others. It was designed as a conversation-starter for how we can re-energize leadership in Ubuntu.

When I kicked off the blog post, Elizabeth quite rightly gave me a bit of a kick in the spuds about not providing a place to have a discussion, so I amended the blog post with a link to this thread where I encourage your feedback and participation.

Rather unsurprisingly, there was some good feedback, before much of it started wandering off the point a little bit.

I was delighted to see that Laura posted that a Community Council meeting has been set up for the 4th Dec at 5pm UTC to further discuss the topic. Thanks, CC, for taking the time to evaluate and discuss the topic in hand.

I plan on joining the meeting, but I wanted to post five proposed recommendations that we can think about. Again, please feel free to share feedback about these ideas on the mailing list.

1. Create our Governance Mission/Charter

I spent a bit of time trying to find the charter or mission statements for the Community Council and Technical Board and I couldn’t find anything. I suspect they are not formally documented as they were put together back in the early days, but other sub-councils have crisp charters (mostly based off the first sub-council, the Forum Council).

I think it could be interesting to define a crisp mission statement for Ubuntu governance. What is our governance here to do? What are the primary areas of opportunity? What are the priorities? What are the risks we want to avoid? Do we need both a CC and TB?

We already have the answers to some of these questions, but are the answers we have the right ones? Is there an opportunity to adjust our goals with our leadership and governance in the project?

Like many of the best mission statements, this should be a collaborative process: not a mission defined by a single person or group, but an opportunity for multiple people to feed into it so that it feels like a shared mission. I would recommend that this be a process that all Ubuntu members can play a role in. Ubuntu members have earned their seat at the table via their contributions, and would be a wonderfully diverse group to pull ideas from.

This would give us a mission that feels shared, and feels representative of our community and culture. It would feel current and relevant, and help guide our governance and wider project forward.

2. Create an ‘Impact Constitution’

OK, I just made that term up, and yes, it sounds a bit buzzwordy, but let me explain.

The guiding principles in Ubuntu are the Ubuntu Promise. It puts in place a set of commitments that ensure Ubuntu always remains a collaborative Open Source project.

What we are missing though is a document that outlines the impact that Ubuntu gives you, others, and the wider world…the ways in which Ubuntu empowers us all to succeed, to create opportunity in our own lives and the life of others.

As an example:

Ubuntu is a Free Software platform and community. Our project is designed to create open technology that empowers individuals, groups, businesses, charities, and others. Ubuntu breaks down the digital divide, and brings together our collective energy into a system that is useful, practical, simple, and accessible.

Ubuntu empowers you to:

  1. Deploy an entirely free Operating System and archive of software to one or multiple computers in homes, offices, classrooms, government institutions, charities, and elsewhere.
  2. Learn a variety of programming and development languages and have the tools to design, create, test, and deploy software across desktops, phones, tablets, the cloud, the web, embedded devices and more.
  3. Have the tools for artistic creativity and expression in music, video, graphics, writing, and more.
  4. . . .

Imagine if we had a document with 20 or so of these impact statements that crisply show the power of our collective work. I think this will regularly remind us of the value of Ubuntu and provide a set of benefits that we as a wider community will seek to protect and improve.

I would then suggest that part of the governance charter of Ubuntu is that our leadership are there to inspire, empower, and protect the ‘impact constitution’; this then directly connects our governance and leadership to what we consider to be the primary practical impact of Ubuntu in making the world a better place.

3. Cross-Governance Strategic Meetings

Today we have CC meetings, TB meetings, FC meetings etc. I think it would be useful to have a monthly, or even quarterly meeting that brings together key representatives from each of the governance boards with a single specific goal – how do the different boards help further each other’s mission. As an example, how does the CC empower the TB for success? How does the TB empower the FC for success?

We don’t want governance that is either independent or dependent at the individual board level. We want governance that is inter-dependent with each other. This then creates a more connected network of leadership.

4. Annual In-Person Governance Summit

We have a community donations fund. I believe we should utilize it to get together key representatives across Ubuntu governance into the same room for two or three days to discuss (a) how to refine and optimize process, but also (b) how to further the impact of our ‘impact constitution’ and inspire wider opportunity in Ubuntu.

If Canonical could chip in, and perhaps a few sponsors too, we could bring all governance representatives together.

Now, it could be tempting to suggest we do this online. I think this would be a mistake. We want to get our leaders together to work together, socialize together, and bond together. The benefits of doing this in person significantly outweigh doing it online.

5. Optimize our community brand around “innovation”

Ubuntu has a good reputation for innovation. Desktop, Mobile, Tablet, Cloud…it is all systems go. Much of this innovation though is seen in the community as something that Canonical fosters and drives. There was a sentiment in the discussion after my last blog post that some folks feel that Canonical is in the driving seat of Ubuntu these days and there isn’t much the community can do to inspire and innovate. There was at times a jaded feeling that Canonical is standing in the way of our community doing great things.

I think this is a bit of an excuse. Yes, Canonical are primarily driving some key pieces…Unity, Mir, Juju for example…but there is nothing stopping anyone innovating in Ubuntu. Our archives are open, we have a multitude of toolsets people can use, we have extensive collaborative infrastructure, and an awesome community. Our flavors are a wonderful example of much of this innovation that is going on. There is significantly more in Ubuntu that is open than restricted.

As such, I think it could be useful to focus on this in our outgoing Ubuntu messaging and advocacy. As our ‘impact constitution’ could show, Ubuntu is a hotbed of innovation, and we could create some materials, messaging, taglines, imagery, videos, and more that inspires people to join a community that is doing cool new stuff.

This could be a great opportunity for designers and artists to participate, and I am sure the Canonical design team would be happy to provide some input too.

Imagine a world in which we see a constant stream of social media, blog posts, videos and more all thematically orientated around how Ubuntu is where the innovators innovate.

Bonus: Network of Ubucons

OK, this is a small extra one I would like to throw in for good measure. :-)

The in-person Ubuntu Developer Summits were a phenomenal experience for so many people, myself included. While the Ubuntu Online Summit is an excellent, well-organized online event, there is something to be said for in-person events.

I think there is a great opportunity for us to define two UbuCons that become the primary in-person events where people meet other Ubuntu folks. One would be focused on the US, and one on Europe, and if we could get more (such as an Asian event), that would be awesome.

These would be driven by the community for the community. Again, I am sure the donations fund could help with the running costs.

In fact, before I left Canonical, this is something I started working on with the always-excellent Richard Gaskin who puts on the UbuCon before SCALE in LA each year.

This would be more than a LoCo Team meeting. It would be a formal Ubuntu event before another conference that brings together speakers, panel sessions, and more. It would be where Ubuntu people come to meet, share, learn, and socialize.

I think these events could be a tremendous boon for the community.


Well, that’s it. I hope this provided some food for thought for further discussion. I am keen to hear your thoughts on the mailing list!

by jono at November 24, 2014 10:35 PM

November 22, 2014

Elizabeth Krumbach

My Vivid Vervet has crazy hair

Keeping with my Ubuntu toy tradition, I placed an order for a vervet stuffed toy, available in the US via: Miguel the Vervet Monkey.

He arrived today!

He’ll be coming along to his first Ubuntu event on December 10th, a San Francisco Ubuntu Hour.

by pleia2 at November 22, 2014 02:57 AM

Vacation in Jamaica

This year I’ve traveled more than ever, but almost all of my trips have been for work. This past week, MJ and I finally snuck off for a romantic vacation together in Jamaica, where neither of us had been before.

Unfortunately we showed up a day late after I forgot my passport at home. I had removed it from my bag earlier in the day to get a copy of it for a visa application and left it on the scanner. I realized it an hour before our flight, but check-in closed 45 minutes prior to departure, not enough time for me to get home and back to the airport before the cutoff (but I did try!). I felt horrible. Fortunately the day home together before the trip did give us a little bit of breathing room after the mad dash from work to the airport.

Friday evening we got a flight! We sprung for First Class on our flights and thankfully all travel was uneventful. We got to Couples Negril around 3PM the following day after 2 flights, a 6 hour layover and a 90 minute van ride from Montego Bay to Negril.

It was beautiful. The rooms had recently been renovated and looked great. It was also nice that the room air conditioning was very good, so on those days when the humidity got to be a bit much I had a wonderful refuge. The resort was all-inclusive and we had confirmed ahead of time that the food was good, so there were no disappointments there. They had some low-key activities and little events and entertainment at lunch and later into the evening (including some ice carving and a great show by Dance Xpressionz). As a self-proclaimed not cool person I found it all to be the perfect atmosphere to relax and feel comfortable going to some of the events.

The view from our room (2nd floor Beachfront suite) was great too:

I had planned on going into deep Ian Fleming mode and getting a lot of writing done on my book, but I only ended up spending about 4 hours on it throughout the week. Upon arrival I realized how much I really needed the time off and took full advantage of it, which was totally the right decision. By Tuesday I was clear-headed and finally excited again about some of my work plans for the upcoming weeks, rather than feeling tired and overwhelmed by them.

Also, there were bottomless Strawberry Daiquiris.

Alas, it had to come to an end. We packed our things and were on our way on Thursday. Prior to the trip, MJ had looked into AirLink in order to take a 12 minute flight from Negril to Montego Bay rather than the 90 minute van ride. At $250 for the pair of us, I was happy to give it a go for the opportunity to ride in a Cessna and take some nice aerial shots. After getting our photo with the pilot, at 11AM the pair of us got into the Cessna with the pilot and co-pilot.

The views were everything I expected, and I was happy to get some nice pictures.

Jamaica is definitely now on my list for going back to. I really enjoyed our time there and it seemed to be a good season for it.

More photos from the week here (admittedly, mostly of the Cessna flight): https://www.flickr.com/photos/pleia2/sets/72157649408324165/

by pleia2 at November 22, 2014 02:32 AM

November 19, 2014

Eric Hammond

Exploring The AWS Lambda Runtime Environment

In the AWS Lambda Shell Hack article, I present a crude hack that lets me run shell commands in the AWS Lambda environment to explore what might be available to Lambda functions running there.

I’ve added a wrapper that lets me type commands on my laptop and see the output of the command run in the Lambda function. This is not production quality software, but you can take a look at it in the alestic/lambdash GitHub repo.

For the curious, here are some results. Please note that this is running on a preview and is in no way a guaranteed part of the environment of a Lambda function. Amazon could change any of it at any time, so don’t build production code using this information.

The version of Amazon Linux:

$ lambdash cat /etc/issue
Amazon Linux AMI release 2014.03
Kernel \r on an \m

The kernel version:

$ lambdash uname -a
Linux ip-10-0-168-157 3.14.19-17.43.amzn1.x86_64 #1 SMP Wed Sep 17 22:14:52 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

The working directory of the Lambda function:

$ lambdash pwd
/var/task

which contains the unzipped contents of the Lambda function I uploaded:

$ lambdash ls -l
total 12
-rw-rw-r-- 1 slicer 497 5195 Nov 18 05:52 lambdash.js
drwxrwxr-x 5 slicer 497 4096 Nov 18 05:52 node_modules

The user running the Lambda function:

$ lambdash id
uid=495(sbx_user1052) gid=494 groups=494

which is one of one hundred sbx_userNNNN users in /etc/passwd. “sbx_user” presumably stands for “sandbox user”.
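
A quick way to double-check that count with the same hack (per the passwd contents just described, it should print 100):

$ lambdash 'grep -c "^sbx_user" /etc/passwd'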

The environment variables (in a shell subprocess). This appears to be how AWS Lambda is passing the AWS credentials to the Lambda function.

$ lambdash env
AWS_SESSION_TOKEN=[ELIDED]
LAMBDA_TASK_ROOT=/var/task
LAMBDA_CONSOLE_SOCKET=14
PATH=/usr/local/bin:/usr/bin:/bin
PWD=/var/task
AWS_SECRET_ACCESS_KEY=[ELIDED]
NODE_PATH=/var/runtime:/var/task:/var/runtime/node_modules
AWS_ACCESS_KEY_ID=[ELIDED]
SHLVL=1
LAMBDA_CONTROL_SOCKET=11
_=/usr/bin/env

The versions of various pre-installed software:

$ lambdash perl -v
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi
[...]

$ lambdash python --version
Python 2.6.9

$ lambdash node -v
v0.10.32

Running processes:

$ lambdash ps axu
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
493          1  0.2  0.7 1035300 27080 ?       Ssl  14:26   0:00 node --max-old-space-size=0 --max-new-space-size=0 --max-executable-size=0 /var/runtime/node_modules/.bin/awslambda
493         13  0.0  0.0  13444  1084 ?        R    14:29   0:00 ps axu

The entire file system: 2.5 MB download

$ lambdash ls -laiR /
[click link above to download]

Kernel ring buffer: 34K download

$ lambdash dmesg
[click link above to download]

CPU info:

$ lambdash cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 62
model name  : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
stepping    : 4
microcode   : 0x416
cpu MHz     : 2800.110
cache size  : 25600 KB
physical id : 0
siblings    : 2
core id     : 0
cpu cores   : 1
apicid      : 0
initial apicid  : 0
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
bogomips    : 5600.22
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
[...]

Installed nodejs modules:

$ dirs=$(lambdash 'echo $NODE_PATH' | tr ':' '\n' | sort)
$ echo $dirs
/var/runtime /var/runtime/node_modules /var/task

$ lambdash 'for dir in '$dirs'; do echo $dir; ls -1 $dir; echo; done'
/var/runtime
node_modules

/var/runtime/node_modules
aws-sdk
awslambda
dynamodb-doc
imagemagick

/var/task # Uploaded in Lambda function ZIP file
lambdash.js
node_modules
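
Since the imagemagick node module ships in /var/runtime, a natural follow-up is to check whether the underlying ImageMagick binaries are also on the PATH. This is just another exploration command, not a claim that they are present:

$ lambdash 'which convert && convert -version | head -1'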

Anything else you’d like to see? Suggest commands in the comments on this article.

Original article: http://alestic.com/2014/11/aws-lambda-environment

by Eric Hammond at November 19, 2014 01:56 AM

AWS Lambda: Pay The Same Price For Faster Execution

multiply the speed of compute-intensive Lambda functions without (much) increase in cost

Given:

  • AWS Lambda duration charges are proportional to the requested memory.

  • The CPU power, network, and disk are proportional to the requested memory.

One could conclude that the charges are proportional to the CPU power available to the Lambda function. If the function completion time is inversely proportional to the CPU power allocated (not entirely true), then the cost remains roughly fixed as you dial up power to make it faster.

If your Lambda function is primarily CPU bound and takes at least several hundred ms to execute, then you may find that you can simply allocate more CPU by allocating more memory, and get the same functionality completed in a shorter time period for about the same cost.

For example, if you allocate 128 MB of memory and your Lambda function takes 10 seconds to run, then you might be able to allocate 640 MB and see it complete in about 2 seconds.

At current AWS Lambda pricing, both of these would cost about $0.02 per thousand invocations, but the second one completes five times faster.
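
For the curious, here is the arithmetic behind that figure. The $0.00001667 per GB-second duration rate is an assumption based on the pricing announced for the preview, so substitute the current rate if it differs; per-request charges are ignored:

awk 'BEGIN {
  rate = 0.00001667                  # assumed USD per GB-second
  a = 0.125 * 10 * rate * 1000       # 128 MB for 10 s, per 1000 invocations
  b = 0.625 *  2 * rate * 1000       # 640 MB for  2 s, per 1000 invocations
  printf "128 MB x 10 s: $%.4f per 1000 invocations\n", a
  printf "640 MB x  2 s: $%.4f per 1000 invocations\n", b
}'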

Things that would cause the higher memory/CPU option to cost more in total include:

  • Time chunks are rounded up to the nearest 100 ms. If your Lambda function already completes in around 100 ms or less at the lower memory setting, then increasing the CPU allocated will make it return faster, but the rounding up means you pay the higher per-100 ms rate for the same number of chunks, so the result is more expensive.

  • Doubling the CPU allocated to a Lambda function does not necessarily cut the run time in half. The code might be accessing external resources (e.g., calling S3 APIs) or interacting with disk. If you double the requested CPU, then those fixed time actions will end up costing twice as much.

If you have a slow Lambda function, and it seems that most of its time is probably spent in CPU activities, then it might be worth testing an increase in requested memory to see if you can get it to complete much faster without increasing the cost by much.
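
One way to run that test using only the preview CLI commands shown in the walkthrough companion article is to re-upload the same deployment package at several memory sizes and compare the durations reported in the function's CloudWatch logs. The memory values below are illustrative, and this assumes upload-function can be re-run to update an existing function:

for memory_mb in 128 256 512 1024; do
  echo "testing with $memory_mb MB"
  aws lambda upload-function \
    --function-name "$function" \
    --function-zip "$function.zip" \
    --role "$lambda_execution_role_arn" \
    --mode event \
    --handler "$function.handler" \
    --timeout 60 \
    --runtime nodejs \
    --memory-size $memory_mb
  aws lambda invoke-async \
    --function-name "$function" \
    --invoke-args "$function-data.json"
  sleep 30    # crude wait for the asynchronous run to finish and log
done
# Then compare the durations reported in the CloudWatch log output,
# using the log-fetching loop from the walkthrough companion article.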

I’d love to hear what practical test results people find when comparing different memory/CPU allocation values for the same Lambda function.

Original article: http://alestic.com/2014/11/aws-lambda-speed

by Eric Hammond at November 19, 2014 01:01 AM

November 18, 2014

Eric Hammond

lambdash: AWS Lambda Shell Hack

I spent the weekend learning just enough JavaScript and nodejs to hack together a Lambda function that runs arbitrary shell commands in the AWS Lambda environment.

This hack allows you to explore the current file system, learn what versions of Perl and Python are available, and discover what packages might be installed.

Setup

Define the basic parameters.

# Replace with your bucket name
bucket_name=lambdash.alestic.com

function=lambdash
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
log_group_name=/aws/lambda/$function

Create the IAM role that the Lambda function will use when it runs:

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
      }]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do and access: log to CloudWatch and upload files to a specific S3 bucket/location.

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": [ "logs:*" ],
          "Resource": "arn:aws:logs:*:*:*"
      }, {
          "Effect": "Allow",
          "Action": [ "s3:PutObject" ],
          "Resource": "arn:aws:s3:::'$bucket_name'/'$function'/*"
      }]
  }'

Grab the current Lambda function JavaScript from the Alestic lambdash GitHub repository, create the ZIP file, and upload the new Lambda function.

wget -q -O$function.js \
  https://raw.githubusercontent.com/alestic/lambdash/master/lambdash.js
npm install async fs tmp
zip -r $function.zip $function.js node_modules
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --runtime nodejs \
  --mode event \
  --handler "$function.handler" \
  --role "$lambda_execution_role_arn" \
  --timeout 60 \
  --memory-size 256

Invoke the Lambda function with the desired command and S3 output locations. Adjust the command and repeat as desired.

cat > $function-args.json <<EOM
{
    "command": "ls -laiR /",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"

Look at the Lambda function log output in CloudWatch.

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Get the command output.

aws s3 cp s3://$bucket_name/$function/stdout.txt .
aws s3 cp s3://$bucket_name/$function/stderr.txt .
less stdout.txt stderr.txt
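
If you find yourself repeating the invoke/wait/fetch cycle, a small shell wrapper can tie the steps above together. This is only a sketch: the fixed sleep is a crude stand-in for waiting on the asynchronous invocation, and the simple JSON template assumes the command contains no double quotes:

lambdash_run() {
  local command="$1"    # must not contain double quotes for this simple template
  cat > $function-args.json <<EOM
{
    "command": "$command",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM
  aws lambda invoke-async \
    --function-name "$function" \
    --invoke-args "$function-args.json"
  sleep 20              # crude wait; lengthen for slower commands
  aws s3 cp s3://$bucket_name/$function/stdout.txt ./stdout.txt
  aws s3 cp s3://$bucket_name/$function/stderr.txt ./stderr.txt
  cat ./stdout.txt
}

# Example: lambdash_run 'uname -a'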

Clean up

If you are done with this example, you can delete the created resources. Or, you can leave the Lambda function in place ready for future use. After all, you aren’t charged unless you use it.

aws s3 rm s3://$bucket_name/$function/stdout.txt
aws s3 rm s3://$bucket_name/$function/stderr.txt
aws lambda delete-function \
  --function-name "$function"
aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_execution_role_name"
aws logs delete-log-group \
  --log-group-name "$log_group_name"

Requests

What command output would you like to see in the Lambda environment?

Original article: http://alestic.com/2014/11/aws-lambda-shell

by Eric Hammond at November 18, 2014 10:21 PM

Akkana Peck

Unix "remind" file for US holidays

Am I the only one who's always confused about when holidays happen?

Partly it's software, I guess. In these days of everybody keeping their schedules on Google's or Apple's servers, maybe most people keep up on these things.

But being the dinosaur I am, I'm still resistant to keeping my schedule in the cloud on a public server. What if I need to check for upcoming events while I'm on a trip out in the remote desert somewhere? (Not to mention the obvious privacy considerations.) For years I used PalmOS PDAs, but when I switched to Android and discovered how poor the offline calendar options are, I decided that I should learn how to use the old Unix standby.

It's been pretty handy. I run remind ~/[remind-file-name] when I log in in the morning, and it gives me a nice summary of upcoming events:

DPU Solar surcharge meeting, 5:30-8:30 tomorrow
NMGLUG meeting in 2 days' time

Of course, I can also have it email me with reminders, or pop up a window, but so far I haven't felt the need.

I can also display a nice calendar showing upcoming events for this month or the next several months. I made a couple of aliases:

mycal () {
        months=$1 
        if [[ x$months = x ]]
        then
                months=1 
        fi
        remind -c$months ~/Docs/Lists/remind
}

mycalp () {
        months=$1 
        if [[ x$months = x ]]
        then
                months=2 
        fi
        remind -p$months ~/Docs/Lists/remind | rem2ps -e -l > /tmp/mycal.ps
        gv /tmp/mycal.ps &
}

The first prints an ascii calendar; the second displays a nice postscript calendar complete with little icons for phases of the moon.
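
For example (the month counts are arbitrary):

mycal 3     # three months of ascii calendar
mycalp 6    # six months of postscript calendar, displayed in gv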

But what about those holidays?

Okay, that gives me a good way of storing reminders about appointments. But I still don't know when holidays are. (I had that problem with the PalmOS scheduling program, too -- it never knew about holidays either.)

Web searching didn't help much. Unfortunately, "remind" is a terrible name in this age of search engines. If someone has already solved this problem, I sure wasn't able to find any evidence of it. So instead, I went to Wikipedia's list of US holidays, with the remind man page in another tab, and wrote remind stanzas for each one -- except Easter, which is much more complicated.

But wait -- it turns out that remind already has code to calculate Easter! It just needs a slightly more complicated stanza: instead of the standard form of

REM  1 Apr +1 MSG April Fool's Day %b

I need to use this form:

REM  [trigger(easterdate(today()))] +1 MSG Easter %b

The %b in each case is what gives you the notice of when the event is in your reminders, e.g. "Easter tomorrow" or "Easter in two days' time". The +1 is how far beforehand you want to be reminded of each event.

So here's my remind file for US holidays. I make no guarantees that every one is right, though I did check them for the next 12 months and they all seem to be working.

#
# US Holidays
#
REM      1 Jan    +3 MSG New Year's Day %b
REM Mon 15 Jan    +2 MSG MLK Day %b
REM      2 Feb       MSG Groundhog Day %b
REM     14 Feb    +2 MSG Valentine's Day %b
REM Mon 15 Feb    +2 MSG President's Day %b
REM     17 Mar    +2 MSG St Patrick's Day %b
REM      1 Apr    +9 MSG April Fool's Day %b
REM  [trigger(easterdate(today()))] +1 MSG Easter %b
REM     22 Apr    +2 MSG Earth Day %b
REM Fri  1 May -7 +2 MSG Arbor Day %b
REM Sun  8 May    +2 MSG Mother's Day %b
REM Mon  1 Jun -7 +2 MSG Memorial Day %b
REM Sun 15 Jun       MSG Father's Day
REM      4 Jul    +2 MSG 4th of July %b
REM Mon  1 Sep    +2 MSG Labor Day %b
REM Mon  8 Oct    +2 MSG Columbus Day %b
REM     31 Oct    +2 MSG Halloween %b
REM Tue  2 Nov    +4 MSG Election Day %b
REM     11 Nov    +2 MSG Veteran's Day %b
REM Thu 22 Nov    +3 MSG Thanksgiving %b
REM     25 Dec    +3 MSG Christmas %b
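
If you keep these holidays in a separate file, remind's INCLUDE directive can pull them into the main reminder file that the mycal aliases above read. The us-holidays.rem name and the home-directory path below are just illustrative:

# In ~/Docs/Lists/remind (the file the mycal aliases read):
INCLUDE /home/yourname/Docs/Lists/us-holidays.rem

# Quick sanity check from the shell: print the next trigger date of each reminder
remind -n ~/Docs/Lists/remind | sort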

November 18, 2014 09:07 PM

November 14, 2014

Jono Bacon

Ubuntu Governance: Reboot?

For many years Ubuntu has had a comprehensive governance structure. At the top of the tree are the Community Council (community policy) and the Technical Board (technical policy).

Below those boards are sub-councils such as the IRC, Forum, and LoCo councils, and developer assessment boards.

The vast majority of these boards are populated by predominantly non-Canonical folks. I think this is a true testament to the openness and accessibility of governance in Ubuntu. There is no “Canonical needs to have people on half the board” shenanigans…if you are a good leader in the Ubuntu community, you could be on these boards if you work hard.

So, no-one is denying the openness of these boards, and I don’t question the intentions or focus of the people who join and operate them. They are good people who act in the best interests of Ubuntu.

What I do question is the purpose and effectiveness of these boards.

Let me explain.

From my experience, the charter and role of these boards has remained largely unchanged. The Community Council, for example, is largely doing much of the same work it did back in 2006, albeit with some responsibility delegated elsewhere.

Over the years, though, Ubuntu has changed, not just in terms of the product, but also the community. The community is no longer just platform contributors: there are app and charm developers, a delicate balance between Canonical and community strategic direction, and a different market and world in which we operate.

Ubuntu governance has, as a general rule, been fairly reactive. In other words, items are added to a governance meeting by members of the community and the boards sit, review the topic, discuss it, and in some cases vote. In this regard I consider this method of governance not really leadership, but instead idea, policy, and conflict arbitration.

What saddens me is that when I see some of these meetings, much of the discussion seems to focus on paperwork and administrivia, and many of the same topics pop up over and over again. With no offense meant to the members of these boards, these meetings are rarely inspirational and rarely challenge the status quo of the community. In fact, from my experience, challenging the status quo with some of these boards has invariably been met with a reluctance to explore, experiment, and try new ideas, and a preference to instead continue enforcing and protecting existing procedures. Sadly, the result of this is more bureaucracy than I feel comfortable with.

Ubuntu is at a critical point in its history. Just look at the opportunity: we have a convergent platform that will run across phones, tablets, desktops and elsewhere, with a powerful SDK, secure application isolation, and an incredible developer community forming. We have a stunning cloud orchestration platform that spans all the major clouds, making the ability to spin up large or small scale services a cinch. In every part of this the code is open and accessible, with a strong focus on quality.

This is fucking awesome.

The opportunity is stunning, not just for Ubuntu but also for technology freedom.

Just think of how many millions of people can be empowered with this work. Kids can educate themselves, businesses can prosper, communities can form, all on a strong, accessible base of open technology.

Ubuntu is innovating on multiple fronts, and we have one of the greatest communities in the world at the core. The passion and motivation in the community is there, but it is untapped.

Our inspirational leader has typically been Mark Shuttleworth, but he is busy flying around the world working hard to move the needle forward. He doesn’t always have the time to inspire our community on a regular basis, and it is sorely missing.

As such, we need to look to our leadership…the Community Council, the Technical Board, and the sub-councils for inspiration and leadership.

I believe we need to transform and empower these governance boards to be inspirational vessels that our wider community look to for guidance and leadership, not for paper-shuffling and administrivia.

We need these boards to not be reactive but to be proactive…to constantly observe the landscape of the Ubuntu community…the opportunities and the challenges, and to proactively capitalize on protecting the community from risk while opening up opportunity to everyone. This will make our community stronger, more empowered, and have that important dose of inspiration that is so critical to focus our family on the most important reasons why we do this: to build a world of technology freedom across the client and the cloud, underlined by a passionate community.

To achieve this will require awkward and uncomfortable change. It will require a discussion to happen to modify the charter and purpose of these boards. It will mean that some people on the current boards will not be the right people for the new charter.

I do though think this is important and responsible work for the Ubuntu community to be successful: if we don’t do this, I worry that the community will slowly degrade from lack of inspiration and empowerment, and our wider mission and opportunity will be harmed.

I am sure this post may offend some members of these boards, but it is not meant to. This is not a reflection of the current staffing, this is a reflection of the charter and purpose of these boards. Our current board members do excellent work with good and strong intentions, but within that current charter.

We need to change that charter though, staff appropriately, and build an inspirational network of leaders that sets everyone in this great community up for success.

This, I believe will transform Ubuntu into a new world of potential, a level of potential I have always passionately believed in.

I have kicked off a discussion on ubuntu-community-team where we can discuss this. Please share your thoughts and solutions there!

by jono at November 14, 2014 06:16 PM

Elizabeth Krumbach

Holiday cards 2014!

Every year I send out a big batch of wintertime holiday cards to friends and acquaintances online.

Reading this? That means you! Even if you’re outside the United States!

Just drop me an email at lyz@princessleia.com with your postal address, please put “Holiday Card” in the subject so I can filter it appropriately. Please do this even if I’ve sent you a card in the past, I won’t be reusing the list from last year.

Typical disclaimer: My husband is Jewish and I’m not religious, the cards will say “Happy Holidays”

by pleia2 at November 14, 2014 04:38 PM

November 13, 2014

Elizabeth Krumbach

Wedding in Philadelphia

This past weekend MJ and I met in Philadelphia to attend his step-sister’s wedding on Sunday. My flight came in from Paris on Saturday, and unfortunately MJ was battling a cold so we had a pretty low key evening.

Sunday morning we were up ready to dress and pick up a truck to drive his sister to the church. The wedding itself didn’t begin until 2PM, but since we were coordinating transportation for the wedding party, we had to meet everyone pretty early to make sure everyone got into their respective bus/car to make it to St. Stephen’s Orthodox Cathedral on time.

I’d never been to an eastern Orthodox wedding, so it was an interesting ceremony to watch. It took about an hour, and we were all standing for the entire ceremony. There was a ring exchange in the back of the chapel, and then the bride and groom came up the center aisle together for the rest of their ceremony. I chose to keep my camera stashed away during the ceremony, but as soon as the priest had finished and was making some closing comments about the newlyweds I got one in real quick.

The weather in November can go either way in Philadelphia, but they got lucky with bright, clear skies and the quite comfortable temperature in the 60s.

The reception began at 4PM with a cocktail hour.

And we did manage to get a few minutes in with the beautiful bride, Irina :)

Big congratulations to Irina and Sam!

More photos here: https://www.flickr.com/photos/pleia2/sets/72157648832387979/

The trip was a short one, with us packing up on Monday to fly home that evening. I did manage to get in a quick lunch with my friend Crissi who made it down to the city for the occasion, so it was great to catch up with her. Our flights home were uneventful and I finally got to sleep in my own bed after 3 weeks on the road!

Tomorrow night we fly off to Jamaica for a proper vacation together, I’m very much looking forward to it.

by pleia2 at November 13, 2014 02:46 AM

Party in France

On Saturday November 1st I landed in Paris on a redeye flight from Miami. I didn’t manage to sleep much at all on the flight, but thankfully I was able to check into my hotel room around 8:30AM to drop off my bags and freshen up before going on a day of jetlag-battling tourism.

It was the right decision. Of all the days I spent in Paris, that Saturday was the most beautiful weather-wise. The sky was clear and blue, the temperature quite comfortable to be wandering around the city in a t-shirt. Since Saturday was one of my only 2 days to play the tourist in Paris, mixed in with some meetings with colleagues, I took the advice of my cousin Melissa and bought a ticket on one of the red hop-on, hop-off circuit buses that stopped at the various landmarks throughout the city.

The hotel I was staying at was not far from the Arc de Triomphe, so I was able to have a look at that and pick up a bus at that stop. I rode the bus until it reached the Eiffel Tower.

The line to take a lift up to the top of the tower was quite long and I wasn’t keen on waiting while battling jet lag, so I took a nice long walk around the tower and the grounds, snapping pictures along the way. I also found myself hungry so I picked up a surprisingly delicious chicken sandwich at a booth under the tower and enjoyed it there.

I hopped on the bus again and drove through the grounds of the Louvre museum, which was an astonishingly large complex. Due to the crowds and other things on my list for the day, I skipped actually going to the Louvre and contented myself with simply seeing the glass pyramid and making a mental note to return the next time I’m in Paris.

Soon after my phone lit up with a notification from my friend and OpenStack colleague Chris Hoge saying that he was at Notre Dame and folks were welcome to join him. It was the next stop I was planning on making, so I made plans to meet up.

I adore old cathedrals, and Notre Dame is a special one for me. As funny as it sounds, Disney’s The Hunchback of Notre Dame is one of my favorite movies. Being released in 1996, I must have just been finishing up my freshman year in high school where one of my history classes had started diving into world religions. I was also growing my skeptic brain. I had also developed a habit at that time of seeing all Disney full-length animated features in theaters the day they were released because I was such a hopeless fan. The confluence of all these things made the movie hit me at the right time. It was a surprising tale of serious issues around compassion, religion and ethics for an animated film, I was totally into it. Plus, they didn’t disappoint with the venue for the film, I fell in love with Notre Dame that summer and started developing a passion for cathedrals and stained glass, particularly rose windows.

I met up with Chris and we took the bell tower tour, which all told took us up 387 steps to the roof of the 226 foot cathedral. We stopped halfway up to walk between the towers and hear the bells ring, which is where I took this video (YouTube). If you’re still with me with the Disney film, it’s where the final battle between Frollo and Quasimodo takes place ;)

387 steps is a lot, and I have to admit getting a bit winded as we climbed the narrow spiral staircases, but it was totally worth it. I really enjoyed being so close to all the gargoyles and the view from the top of the cathedral was beautiful, not to mention a fantastic way to see the architecture of the cathedral from above.

After the tour, I was able to go inside the cathedral to take a good look at all those stunning stained glass windows!

After Notre Dame, I did a little shopping and made my way back to the bus and eventually the hotel for a meeting and dinner with my colleagues.

Sunday morning I managed to sleep in a bit and made my way out of the hotel shortly before 10AM so I could make it over to the Catacombs of Paris. The line for the catacombs is very long, with the website warning that you could wait 3-4 hours. I had hoped that getting there early would mitigate some of that wait, but it still ended up taking 3 hours! I brought along my Nook so at least I got some reading done, but it probably was the longest I’ve ever waited in line.

I’d say that it was worth it though. I’d never been inside catacombs before, so it was a pretty exceptional experience. After walking down through a fair number of tunnels, you finally get to where they keep all the bones. So. Many. Bones. As you walk through the catacombs the walls are made of stacked bones: skulls and leg bones piled up to form the walls, with all kinds of other bones stacked on the tops of the piles.

I also decided to bring along a bit of modernity into the catacombs with a selfie. I’ll leave it to the reader to judge whether or not I have respect for the dead.

By the time I left the catacombs it was after 2PM and I made my way over to the Avenue des Champs-Élysées to do some shopping. Most worthy of note was my stop at Louis Vuitton flagship store where I bought a lovely wallet.

And with that, my tourism wound down. Sunday night I began getting into the swing of things with the OpenStack Summit as we had a team dinner (for certain values of “team” – we’re so many now that any meal now is just a subset of us). I am looking forward to going again some day on a proper vacation with MJ, there are so many more things to see!

A couple hundred more photos from my travels around Paris here: https://www.flickr.com/photos/pleia2/sets/72157648830423229/

by pleia2 at November 13, 2014 02:31 AM

Akkana Peck

Crockpot Green Chile Posole Stew

Posole is a traditional New Mexican dish made with pork, hominy and chile. Most often it's made with red chile, but Dave and I are both green chile fans so that's how I make it. I make no claims as to the resemblance between my posole and anything traditional; but it sure is good after a cold, windy day like we had today.

Dave is leery of anything called "posole" -- I think the hominy reminds him visually of garbanzo beans, which he dislikes -- but he admits that they taste fine in this stew. I call it "green chile stew" rather than "posole" when talking to him, and then he gets enthusiastic.

Ingredients (all quantities very approximate):

  • pork, about a pound; tenderloin works well but cheaper cuts are okay too
  • about 10 medium-sized roasted green chiles, whatever heat you prefer (or 1 large or 2 medium cans diced green chile)
  • 1 can hominy
  • 1 large or two medium russet potatoes (or equivalent amount of other type)
  • 1 can chicken broth
  • 1 tsp salt
  • 1 tsp red chile powder
  • 1/2 tsp cumin
  • fresh garlic to taste
  • black pepper and hot sauce (I use Tapatio) to taste

Start the crockpot heating: I start it on high then turn it down later. Add broth.

Dice potato. At least half the potato should be in small pieces, say 1/4" cubes, or even shredded; the other half can be larger chunks. I leave the skin on.

Pre-cook diced potato in the microwave for 7 minutes or until nearly soft enough to eat, in a loosely covered bowl with maybe 1" of water in the bottom. (This will get messy and the water gets all over and you have to clean the microwave afterward. I haven't found a solution to that yet.) Dump cooked potato into crockpot.

Dice pork into stew-sized pieces, trimming fat as desired. Add to crockpot.

De-skin and de-seed the green chiles and cut into short strips. (Or use canned or frozen.) Add to crockpot.

Add spices: salt, chile powder, cumin, and hot sauce (if your chiles aren't hot enough -- we have a bulk order of mild chiles this year so I sprinkled liberally with Tapatio).

Cover, reduce heat to low.

Cook 6-7 hours, occasionally stirring, tasting and correcting the seasoning. (I always add more of everything after I taste it, but that's me.)

Serve with bread, tortillas, sopaipillas or similar. French bread baked from the refrigerated dough in the supermarket works well if you aren't brave enough to make sopaipillas (I'm not, yet).

November 13, 2014 12:49 AM

November 07, 2014

Elizabeth Krumbach

Final day of the OpenStack Kilo Summit

Today was the last day of the OpenStack Design Summit. It wrapped up with a change of pace this time around, each project had their own contributor meetup which was used to continue hashing out ideas and getting some work done. I think this was a really brilliant move. I was pretty tired by the time Friday rolled around (one of the reasons the later Ubuntu Developer Summits were shrunk to 4 days), so I’m not sure how useful I would have been in more discussion-driven sessions. The contributor meetup allowed us to chat about things we didn’t have time to run sessions on, or do in-person follow-ups to sessions we did have. We also had nice in-person time to collaborate on some things so that some of our projects got to a semi-working state before we all go home and take a vacation (my vacation starts next Thursday).

I spent my day meeting up with people to talk about our new translations tools and did the first couple drafts of the infrastructure specification to get that project started. Given the timeline, I anticipate that my real work on that won’t begin until after I return from Jamaica on November 21st, but that seemed to sync up with the timeline of others on the team who are either taking some time off post-summit or have some dependencies blocking their action items.

There was also time spent on talking about the Infrastructure User Manual as a follow up to the session earlier in the week. We decided to host a 48 hour virtual sprint on the first couple days of December in order to collaborate on fleshing out the rest of the document (announcement here). As we all know, I love documentation, so I’m glad to see this coming together. I was also able to have a chat with a contributor later in the day who is also looking forward to seeing it finished so he can build upon it as the foundation for more project-specific developer documentation.

Also, the topic of third party testing came up during one of my chats and was overheard by someone nearby – which is how we learned there were at least three teams talking about creating a more automatic mechanism for determining the health of the third party testing systems. That’s approximately two teams too many. Kurt Taylor was able to get us all on an email thread together so I’m happy to say that a specification for that project should be coming together too.

Late in the afternoon James E. Blair did a demo for developers of gertty. I wrote about the tool back in September (here) and I’m a big fan of CLI-based code review, so it was fun to see others excited and asking questions about it.

As things wound down, I realized that this was probably the best OpenStack summit I’ve attended. The occasional snafu aside (like the over-crowded lunch on Thursday – I ate elsewhere), for a conference with over 4,600 attendees it felt well-managed. The Design Summit itself had a format I was really pleased with, as in addition to having the Friday work day, Tuesday was devoted to much-needed cross-project summit sessions. As OpenStack grows and matures, I’m really happy to see everyone working to fine tune the summits like this to keep pace.

Tonight I joined several of my OpenStack colleagues for an early dinner, retiring early to my room so I could re-pack my suitcase (and hope it’s not over 50lbs) and get some work done before my flight tomorrow morning. As exhausting as this trip was, it sure flew by fast and I am quite sad to be leaving Paris! Alas, my sister in law’s wedding in Philadelphia on Sunday awaits and I’m looking forward to it (and finally seeing my husband again after almost 2 weeks).

by pleia2 at November 07, 2014 09:36 PM

November 06, 2014

Elizabeth Krumbach

Kilo OpenStack Summit Days 3-4

As the OpenStack Summit continued for those of us on the development side, Wednesday and Thursday were full of design sessions.

First up for me on Wednesday was a great session about the Infrastructure User Manual led by Anita Kuno. A pile of work went into this while we were at our mid-cycle Infrastructure sprint in July, but many of the patches have since been sitting around. This session worked to make sure we had a shared vision for the manual and to get more core contributors both reviewing patches and submitting content for some of the more complicated, institutional knowledge type sections of the manual. The etherpad for the session is available here.

The session on AFS (Andrew File System) for the Infrastructure team was also on Wednesday. In spite of having a lot of storage space at our disposal and tools like Swift (which we’re slowly moving logs to), there are still some problems we’re seeking to solve that a distributed filesystem would be useful for; enter the AFS cell set up for the OpenStack project. The session went through some of the benefits of using AFS in our environment (such as read-only replicas of volumes, heavy client-side caching support and more comprehensive ACLs than standard Unix filesystem permissions). From there the discussion moved on to how it may be used; some of the popular proposals were our pypi mirror, the git repos and documentation. Detailed Etherpad here.

There were also a couple QA/Infra sessions, including one on Gating Relationships. At the QA/Infra mid-cycle meetup back in July we touched upon some of the possible “over-testing” that may be done when a change in one project really has no potential to impact another project, but we run the tests anyway, using up testing resources. However, there aren’t really any criteria to follow for determining which changes and project combinations should trigger tests, and it was noted that much of what seems like unnecessary testing was actually put in place at one point to address a particular pain point. The main result of this session was to try to develop some of these criteria, even if they’re manual and human-based for now. Detailed Etherpad here.

We also had a QA and CI After Merge session. Currently all of our tests are pre-merge, which makes sure all code that lands in the development repository has undergone all official tests that the OpenStack CI system has to offer. This session discussed whether heavier tests that are less “central” to the projects should be run post-merge or as periodic tests, and there was what I believe was some consensus: we do want to split out some of the current gated jobs. Several todo items to move this forward were defined at the bottom of the etherpad.

I also attended the “Stable branches” session (lively etherpad here). Icehouse’s support period is 15 months and the goal seems to be to support Juno for a similar time frame. Several representatives from distributions were attending and giving feedback about their own support needs, and there seems to be hope that folks from the distros will commit to doing some of the maintenance work.

There were also a couple sessions about Tempest, the integration test suite. First there was “Tempest scope in the brave new world”, which focused on questions around what should stay in Tempest and what the project should consider removing as it moves forward. Etherpad for the session here. There was also a “Tempest-lib moving forward” session, which discussed this library that was created last cycle and various ways to improve it in the coming cycle, details in the Etherpad here.

Wednesday evening I made my way over to the Core Reviewer party put on by HP at the near rooftop event space of Cité de l’Architecture et du Patrimoine. We were driven there by what was described as “iconic, old French cars” which turned out to be the terrifying Citroën 2CV. And our drivers were all INSANE in Paris traffic. Fortunately no one died and it was actually pretty fun (though I was happy to see buses would be taking us back to the conference venue!).

The night itself kicked off with a lecture on the architecture of the Sagrada Família Basílica in Barcelona by one of the people currently working on it, which drew some loose parallels with our own development work (including the observation that the Sagrada Família is not complete – a 140+ year release cycle!). They also brought in entertainment in the form of several opera singers who came in throughout the night. Some food was served, but I spent much of the night outside chatting with various of my OpenStack colleagues and drinking so much Champagne that the outdoor bartender learned to pull out the bottle as soon as he saw me coming. Hah!

My favorite part of the night was the stunning view of the Eiffel Tower. It’s a beautiful thing on its own at night, but at the top of the hour it also sparkles for 5 minutes in a pretty impressive show. I was so caught up in discussions that I didn’t manage to go on the museum tour that was offered, but I heard good things about it today.

Then it was on to today, Thursday! I had a great chat with Steve Weston about the third party dashboard we’re working on before Anita came to find me so I wouldn’t be late for my own session (oops).

My session (along with Andreas Jaeger, for whom I saved a seat up front) was an infrastructure session on Translations Tools. We’re currently using Transifex but we need to move off of it now that they’ve transitioned to a closed source product. As I mentioned in my last post, we decided to go with Zanata, so the session was primarily to firm up this decision with the rest of the infrastructure team and answer any questions from everyone involved. I have a lot of work to do during the Kilo cycle to finally get this going, but I’m really excited that all the work I did last cycle in getting demos set up and corralling the right talent for each component has finally culminated in a solid decision and action items for making the move. Next week I’ll start working on the spec for the transition. Etherpad here.

I attended a few other sessions, but the other big infrastructure one today was about Storyboard, the new task and bug tracker being written for the project to replace Launchpad. Michael Krotscheck has been doing an exceptional job on this project and the first decision of the session was whether it was ready for the OpenStack Infrastructure team to move to – yes! The rest of the session was spent outlining the key features that were needed to have really good support for infrastructure and to start supporting StackForge and OpenStack projects. The beautiful Etherpad that Michael created is here.

Tonight I went out with several of my OpenStack colleagues to dinner at La maison de Charly for delicious and stunningly arranged Moroccan food. I managed to get back to my room by 9PM so I could get an early night before the last day of the summit… but of course I got caught up in writing this, checking email and goofing off in IRC.

Tomorrow the summit wraps up with a working day with an open agenda for all the teams, so I’ll be spending my day in the Infra/QA/Release Management room.

by pleia2 at November 06, 2014 10:46 PM

Akkana Peck

New GIMP Save/Export plug-in: Saver

The split between Save and Export that GIMP introduced in version 2.8 has been a matter of much controversy. It's been over two years now, and people are still complaining on the gimp-users list.

Early on, I wrote a simple Python plug-in called Save-Export Clean, which saved over an image's current save or export filename regardless of whether the filename was XCF (save) or a different format (export). The idea was that you could bind Ctrl-S to the plug-in and not be pestered by needing to remember whether it was XCF, JPG or what.

Save-Export Clean has been widely cited, and I hope it's helped some people who were bothered by the Save/Export split. But personally I didn't like it very much. It wasn't very flexible -- there was no way to change the filename, for one thing, and it was awfully easy to overwrite an original image without knowing that you'd done it. I went back to using GIMP's separate Save and Export, but in the back of my mind I was turning over ideas, trying to understand my workflow and what I really wanted out of a GIMP Save plug-in.

[Screenshot: GIMP Saver-as... plug-in] The result of that was a new Python plug-in called Saver. I first wrote it a year ago, but I've been tweaking it and using it since then, with Ctrl-S bound to Saver and Ctrl-Shift-S bound to Saver as.... I wanted to make sure that it was useful and working reliably ... and somehow I never got around to writing it up and announcing it formally ... until now.

Saver, like Save/Export Clean, will overwrite your chosen filename, whether XCF or another format, and will mark the image as saved so GIMP won't pester you when you exit.

What's different? Mainly, three things:

  1. A Saver as... option so you can change the filename or file type.
  2. Merges multiple layers so they'll show up properly in your JPG or PNG image.
  3. An option to save as .xcf or .xcf.gz and, at the same time, export a copy in another format, possibly scaled down. So you can maintain your multi-layer XCF image but also update the JPG copy that you're going to put on the web.

I've been using Saver for nearly all my saving for the past year. If I'm just making a quick edit of a JPEG camera image, Ctrl-S overwrites it without questioning me. If I'm editing an elaborate multi-layer GIMP project, Ctrl-S overwrites the .xcf.gz. If I'm planning to export that image for the web, I Ctrl-Shift-S to bring up the Saver As... dialog, make sure the main filename is .xcf.gz, set a name (ending in .jpg) for the exported copy; and from then on, Ctrl-S will save both the XCF and the JPG copy.
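For the curious, the core of that save-over-current-filename behavior looks roughly like the following in GIMP 2.8's Python-Fu. This is only a minimal sketch of the idea, not the actual Saver code; the procedure name and menu location are placeholders, and the real plug-in does quite a bit more (the Saver as... dialog, the separate exported copy, scaling):

#!/usr/bin/env python
# Minimal sketch: save over the image's current filename, XCF or not,
# flattening a duplicate first for non-XCF formats, then mark the
# image clean so GIMP won't pester you on exit.
from gimpfu import *

def simple_saver(image, drawable):
    filename = pdb.gimp_image_get_filename(image)
    if not filename:
        gimp.message("No filename set yet; use File->Save As first.")
        return
    copy = pdb.gimp_image_duplicate(image)
    if not (filename.endswith(".xcf") or filename.endswith(".xcf.gz")):
        # Flatten so multi-layer images export properly to JPG, PNG, etc.
        pdb.gimp_image_flatten(copy)
    pdb.gimp_file_save(copy, pdb.gimp_image_get_active_drawable(copy),
                       filename, filename)
    pdb.gimp_image_delete(copy)
    # Mark the original as saved so GIMP won't complain when you exit.
    pdb.gimp_image_clean_all(image)

register(
    "python_fu_simple_saver",          # placeholder procedure name
    "Save over the current filename, XCF or not",
    "Minimal save-over-current-filename example",
    "Example", "Example", "2014",
    "<Image>/File/Simple Saver",       # placeholder menu location
    "*",
    [],
    [],
    simple_saver)

main()

Drop something like that into your GIMP plug-ins directory, make it executable, and it shows up in the File menu ready to be bound to a key.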

Saver is available on my github page, with installation instructions here: GIMP Saver and Save/Export Clean Plug-ins. I hope you find it useful.

November 06, 2014 07:57 PM

Eric Hammond

When Are Your SSL Certificates Expiring on AWS?

If you uploaded SSL certificates to Amazon Web Services for ELB (Elastic Load Balancing) or CloudFront (CDN), then you will want to keep an eye on the expiration dates and renew the certificates well before they expire to ensure uninterrupted service.

If you uploaded the SSL certificates yourself, then of course at that time you set an official reminder to make sure that you remembered to renew the certificate. Right?

However, if you inherited an AWS account and want to review your company or client’s configuration, then here’s an easy command to get a list of all SSL certificates in IAM, sorted by expiration date.

aws iam list-server-certificates \
  --output text \
  --query 'ServerCertificateMetadataList[*].[Expiration,ServerCertificateName]' \
  | sort

To get more information on an individual certificate, you might use something like:

certificate_name=...
aws iam get-server-certificate \
  --server-certificate-name $certificate_name \
  --output text \
  --query 'ServerCertificate.CertificateBody' \
| openssl x509 -text \
| less

That can let you review information like the DNS name(s) the SSL certificate is good for.

Exercise for the reader: Schedule an automated job that reviews SSL certificate expiration and generates messages to an SNS topic when certificates are near expiration. Subscribe email addresses and other alerting services to the SNS topic.
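Here’s a rough sketch of that idea in Python with boto3; the topic ARN and the 30-day threshold below are placeholders, and a real job would also want to handle pagination and be run on a schedule (cron, for instance):

#!/usr/bin/env python
# Minimal sketch: publish an SNS alert for IAM server certificates
# that expire within the next 30 days.
import datetime

import boto3

SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:cert-expiration"  # placeholder
WARN_DAYS = 30  # arbitrary threshold

iam = boto3.client("iam")
sns = boto3.client("sns")

now = datetime.datetime.now(datetime.timezone.utc)
for cert in iam.list_server_certificates()["ServerCertificateMetadataList"]:
    days_left = (cert["Expiration"] - now).days
    if days_left <= WARN_DAYS:
        sns.publish(
            TopicArn=SNS_TOPIC_ARN,
            Subject="SSL certificate expiring soon",
            Message="%s expires in %d days (%s)" % (
                cert["ServerCertificateName"], days_left, cert["Expiration"]),
        )

Subscribe your email address (or paging service) to the SNS topic and the reminders take care of themselves.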

Read more from Amazon on Managing Server Certificates.

Note: SSL certificates embedded in web server applications running on EC2 instances would have to be checked and updated separately from those stored in AWS.

Original article: http://alestic.com/2014/11/aws-iam-ssl-certificate-expiration

by Eric Hammond at November 06, 2014 12:35 AM

November 04, 2014

Elizabeth Krumbach

Kilo OpenStack Summit Days 1-2

Saturday morning I arrived in Paris. The weather was gorgeous and I had a wonderful tourist day visiting some of the key sights of the city. I will write about that once I’m home and can upload all my photos, for now I am going to talk about the first couple of days of the OpenStack Summit, which began on Monday.

Both days kicked off with keynotes. While my work focuses on the infrastructure for the OpenStack project itself and I’m not strictly building components of OpenStack that people are deploying, the keynotes are still an inspiration. Companies from around the world get up on the stage and share how they’re using OpenStack to enable their developers to be more innovative by getting them development environments more quickly or how they’re putting serious production load on them in the processing of big data. This year they had BBVA, BMW (along with a stunning i8 driven onto the stage), Time Warner Cable, CERN, Expedia and Tapjoy get up on stage to share their stories.

CERN’s story was probably my favorite (even if the BMW on stage was shiny and I want one). Like many in my field, I hold a hobbyist level interest in science and could geek out about the work being done at CERN for days. Plus, they’re solving some really exceptional problems around massive amounts of big data produced by the LHC using OpenStack and a pile of other open source software.


Tim Bell of CERN

It was exciting to learn that they’re currently running 4 clusters using the latest release of OpenStack, the largest of which has over 70,000 cores across over 3,000 servers. Pretty serious stuff! He also shared some great links during his talk.

I was also delighted to see Jim Zemlin, Executive Director of the Linux Foundation, get on stage on the first day to share his excitement about the success of OpenStack and to tell us all what we wanted to hear: we’re doing great work for open source and are on the right side of history.

In short, the keynotes spoke to both my professional pride in what we’re all working on and the humanitarian and democratization side of technology that so seriously drew me into the possibilities of open source in the first place.

All the keynotes for both days are already online, you can check them out in this youtube playlist: OpenStack Summit Paris 2014 Keynote Presentations

Back to Monday, I headed over to the other venue to attend a session in the Ops Summit, “Top 10 Pain points from the user survey – how to fix them?” The session began by looking at results from the survey released that day: OpenStack User Survey Insights: November 2014. From that survey, they picked the top-cited issues that operators are having with OpenStack and worked to come up with some concrete issues that the operators could pass along to developers. Much of the discussion ended up focusing on problems with Neutron (including problems with the default configuration) and gaps in Documentation that made it difficult for operators to know that features existed or how to use them. The etherpad for the session goes further into depth about these and other issues raised and added during the session, see it here.

Monday afternoon I met up with Carlos Munoz of Red Hat and Andreas Jaeger of SUSE, who I’ve been working with over these past couple of months to do an in-depth exploration of our options for a new translations system. We have been evaluating both Pootle and Zanata, and though my preference had been Pootle because it’s written in Python and appears popular with other open source projects, the Translations team overwhelmingly preferred Zanata. As Andreas and I went through the Translations Infrastructure we currently have, it was also clear that Zanata was our best option. It was a great meeting, and I’m looking forward to the Translations Tools Session on Thursday at 11AM where we discuss these results with the rest of the Infrastructure team and work out some next steps.


Me, Carlos and Andreas!

From there I went down to the HP Sponsored track where lightning talks were being run during the last two sessions of the day. The room was packed! There were a lot of great presentations which I hope were recorded since I missed the first few. My talk was one of the last, and with a glowing introduction from my boss I gave a 5 minute whirlwind description of elastic-recheck. I fear the jetlag made my talk a bit weaker than I intended, but I was delighted to have 3 separate conversations about elastic-recheck and general failure tracking on CI systems that evening with people from different companies trying to do something similar. My slides are available here: Automated failure aggregation & detection with elastic-recheck slides (pdf).

On Tuesday morning I was up bright and early for the Women of OpenStack breakfast. Waking up with a headache made me tempted to skip it, but I’m glad I didn’t. The event kicked off with some stats from a recent poll of members of the Women of OpenStack LinkedIn group. It was nice to see that 50% of those who responded were OpenStack ATCs (Active Technical Contributor) and many of those who weren’t identified themselves as having other technical roles (not that I don’t value non-technical women in our midst, but the technical ones are My Tribe!).

Following the results summaries, we split into 4 groups to talk about some of the challenges facing us as a minority in the OpenStack community and came up with 4 problems and solutions: Coaching for building confidence, increasing profile and communication for and around the Women of OpenStack group, working to get more women in our community doing public speaking and helping women rejoin the community after a gap in involvement (bonus: this can directly help men too, but more women go through it when taking time off for children). The group decided on focusing on getting the word out about the community for now, seeking to improve our communication mechanisms and see about profiling some women in our community, as well as creating some space where we can put our basic information about what we’re working on and how to contact us. I was really happy with how this session went, kudos to all the amazing women who I got to interact with there, and sorry for being so shy!

After keynotes, I headed back over to the Design Summit venue to attend a couple cross-project testing-focused sessions: “DefCore, RefStack, Interoperability, and Tempest” (etherpad here) and Moving Functional Tests to Projects (etherpad here). One of the most valuable things I got out of these sessions was that projects really need to do a better job of communicating directly with each other. Currently so much is funneled through the Quality Assurance team (and Infrastructure team) because they run the test harness where things fail. Instead, it would be great to see some more direct communication between these projects, and splitting out some of the functional tests may be one way to help socially engineer this.

Following lunch and a quick meeting, I was off to “Changes to our Requirements Management Policy” (etherpad here) and then “Log Rationalization” (etherpad here). There seemed to be more work accomplished on the latter, which was nice to see since there’s a stalled specification up that it would be great to see moved along so that the project can come up with some guidelines for log levels. Operators have been reporting both that they often run logging at DEBUG level all the time so they can see even some of the more basic problems that crop up, AND are frustrated by some “non-issues” being promoted to WARNING and filling their logs with unnecessary stack traces.

Next up was the Gerrit third-party CI discussion session. I wasn’t sure what to expect from this session, but the self-selected group (many were more involved with OpenStack than was assumed, but they did come all the way to the summit…) was much more engaged than I had feared. Talk in the session centered around how to get more third party operators involved with the growing third party community, one suggestion being moving the meeting time to a more European friendly time every other week. There was also discussion around the need for improved documentation and I raised my hand about helping with a more dynamic dashboard for automatically determining the status of third party systems without manual notifications from operators. Etherpad here.

The last session of my very long day was “Translators / I18N team meetup” where the group sought to promote translations to grow the community and recognize translators, etherpad here. As I mentioned earlier, I’m working on some of the new tooling that the team will use, so in spite of only speaking English, I was able to chime in a bit on the technical side of making some of the recognitions and other statistics available once we switch back to an open source platform for translations.

Then it was off to the HP party at Musée des Arts Forains. Open for private events only, the venue hosts a collection of antique/vintage (dating from 1850-1950) games, rides and other fair-related objects. I played a couple of the games and enjoyed snacks and wine throughout the evening. It was certainly busy and some areas were quite loud and crowded, but it was easy to find large areas where the volume was quite conducive to conversations – of which I had many.

Social events and parties are not really my thing, but this one I really enjoyed. Transportation to the venue included an optional guided bus tour past many of the stunning sights of Paris at night. And they began running shuttles back to the conference center at 9PM – which I figured I’d catch then, but it was after 10PM before I made my way back to the bus. I think what I really don’t like are club-like parties with loud music and nothing interesting to occupy myself with when I find myself frequently wandering around solo (apparently I’m a lousy pack animal). The ability to stop and play games, explore the interesting food offerings and run into lots of people I know made the evening fly by.

Huge thanks to my friends and colleagues at HP for putting on such a comfortable and exciting event, this one will be hard to top in my awesome-events-at-conferences ledger.

Tomorrow we begin the hardcore part of the conference for me, kicking off with an Infrastructure session at 9AM and moving through various QA and Infrastructure sessions going on through the rest of the week. Since it’s nearing 1AM, I should get some sleep!

by pleia2 at November 04, 2014 11:47 PM

November 03, 2014

Jono Bacon

Dealing With Disrespect: The Video

A while back I wrote and released a free e-book called Dealing With Disrespect.

It is a book that provides a short, simple-to-read, free guide for handling personalized, mean-spirited, disrespectful, and in some cases malicious feedback and criticism of your work. I wrote it because this kind of feedback is increasingly common in online communities and places such as YouTube, reddit, and elsewhere, and I am tired of seeing good people give up sharing their creative efforts because of it.

My goal with the book is that when someone reads mean-spirited feedback and criticism and feels demotivated, someone else can point them to the book as a means of helping to put things in perspective.

Well, to make this content even easier to consume, I recorded a presentation version of it and put it up on my YouTube channel:

Can’t see it? Watch it here!

by jono at November 03, 2014 10:13 PM

Elizabeth Krumbach

Wedding and week in Florida

All this travel is leaving me in the unfortunate position of having a growing pile of blog posts queuing up, which will only get worse as the OpenStack Summit continues this week, so I’d better get these out! I’m now in Paris for the summit, but last week I was in Florida for MJ’s cousin Stephanie’s wedding.

I arrived on Friday afternoon from Raleigh and MJ picked me up at the airport, getting us to the hotel just in time to get changed for a family and friends gathering the evening before the wedding.

Saturday we were able to enjoy the beach and pools at the hotel with some of MJ’s cousins. The weather was great, even the humidity was quite low, relative to what I tend to expect from Florida.

As the day wound down, we got ready for the wedding!

The ceremony and reception took place at a beautiful country club not far from the hotel. As an attendee, it seemed like everything went very well. The reception was fun, lots of great food, a fun, sparkly signature drink and some stunning centerpieces decorating the dinner tables. I even danced a little.

Unfortunately I picked up a cold somewhere along the way, and spent all of Sunday in bed while MJ spent more time with family and pools. By Monday I was feeling a bit better and was able to see MJ off and get moved over to the beach motel where I spent the rest of the week.

My beach motel wasn’t the greatest place, but it was inexpensive, clean and ultimately quite tolerable. The plan to stay in Florida, in spite of my general “I don’t like Florida” attitude, was to avoid going all the way back to California prior to my Paris trip. And I have to say, with nice October weather and the views at sunset, I think it was the right choice.

My days were spent catching up with work post-conference and preparing for the summit this week. Thankfully it wasn’t very hot out, so I was able to open the windows during the day and let fresh air into my room. I also made plans to visit with family in the area, managing to meet up with my cousin Shannon and her family, my Aunt Pam, and my Aunt Meg and cousin Melissa throughout the week.


At dinner with Shannon, Rich & Frankie

I also was able to take some long lunch breaks to enjoy a few quick dips in the ocean.

The San Francisco Giants won the World Series while I was in Florida too! I was able to watch the games in my room each night. I was disappointed not to be in town for the win, as the whole city explodes in celebration when there’s a win like this. My week wrapped up on Friday when I checked out of the motel and headed toward the airport for my redeye flight to Paris. And since I was also disappointed to be missing Halloween in San Francisco again, I dressed up for my flight, as Carmen Sandiego.

by pleia2 at November 03, 2014 09:57 PM

November 01, 2014

Elizabeth Krumbach

All Things Open 2014

From Oct 22-23rd I had the pleasure of speaking at and attending All Things Open in Raleigh, North Carolina. Of all the conferences I’ve attended this year, this one was among the most amazing when it comes to how well they treated their speakers. When I submitted my talk I received an email from the conference organizer thanking me for the submission. Frequent emails were sent keeping us informed about all the speaker-focused conference details. Leading up to the event I woke up one morning to this flattering profile on their news feed. A series of interviews featuring speakers was also published by the OpenSource.com folks. Once there, I was thanked about 100 times throughout the 2-day event. In short, they really did a remarkable job making me feel valued.

Thankfulness aside, the conference was top notch. Several months back I read The foundation for an open source city by Jason Hibbets, so I was excited to go to Raleigh (where much of the work Hibbets talked about is centered) and doubly amused when Jason said hello to me and I got to say “hey, I read your book!” During the conference introduction they said the attendance last year (their first year) was around 700 and that they were looking at 1,100 this year. The conference was opened by Raleigh Mayor Nancy McFarlane, which was pretty exciting; I’d seen cities send CTOs or supervisors, but having the mayor herself show up was quite the showing of support.

After her keynote came Jeffrey Hammond, VP & Principal Analyst at Forrester Research. I really enjoyed the statistics his company put together regarding the amount of open source software being used today. For instance, of the developers surveyed, 4/5 are using open source software and 73% are programming outside of their paid job, 27% of them on open source.

Right after the keynotes I headed downstairs to give my talk, Open Source Systems Administration. A blending of my passion for open source and love of systems administration, this is one of my favorite talks to give; I really enjoy being able to present on how the OpenStack infrastructure itself is an open source project. It was a lot of fun chatting with people throughout the rest of the conference who had attended (or missed) my talk. While there is less surprise these days that a project would open source an infrastructure, there’s a lot of interest in learning that there are projects which actually have and how we’ve done it. Slides from my talk here: ATO-opensource_sysadmin.pdf (2.3M).


Giving my talk! Thanks to Charlie Reisinger for this photo.

The schedule made it hard to select talks, but I next decided to head over to the Design track to learn from Garth Braithwaite why Open Source Needs Design. I’ll start off by saying that it’s wonderful that there are some designers participating in open source these days, but as Garth points out in his talk they are generally: paid by a company as a designer to focus on the product (open sourceyness of it doesn’t matter, it’s a job), a designer friend of someone in the project who is helping out, or a developer on the project who happens to have some design expertise (or is willing to get some in order to help the project). He explored some of the history of how developers made their way to open source and the tools we used, and explained that the “story” doesn’t exist for designers, so why would they get involved? They’re not fixing a printer or solving some tricky problem. The tools for open collaboration for designers also don’t really exist; popular sites for design sharing like Dribbble don’t have source upload options and portfolio sites like BeHance lack any ability for collaboration. The new DesignOpen.org seeks to help change that, so it was interesting to learn about. From there he detailed different types of design work, UX, IxD and UI, and the tools and deliverables for each type of work. As someone who has really never worked with design it was an interesting tour of that space. His slides from the talk are available here: speakerdeck.com/garthdb/open-source-needs-design (the first few slides are all images, but stick with it, some great slides with bullet points come later!).

Then it was off to see Lessons Learned with Distributed Systems at bit.ly presented by Sean O’Connor (it was a pleasure to meet him and colleague Peter Herndon during the keynote earlier in the day). The talk centered around some of the concerns when architecting systems at scale, from time synchronization to having codebases that are debuggable. At bit.ly they adopted a codebase that is broken out into many small pieces, allowing ops to dig into and learn about specific components when something goes wrong, not necessarily having to learn everything all at once in order to do their job effectively. He also went into how they’ve broken their workload up into what has to be done synchronously and what can be shifted into an asynchronous job, which is preferred because it’s easier to do well. Finally, he talked some about how they deal with failure, starting off with actually having a plan for failure, and doing things like backoffs, where the retries end up spaced out over time rather than hammering the service constantly until it has returned.

After lunch I decided to check out the Messaging Standards and Systems – AMQP & RabbitMQ talk by Gavin M. Roy. I’ve used RabbitMQ a fair amount, but that doesn’t mean I’ve ever paid attention to AMQP (Advanced Message Queuing Protocol). I was pretty surprised to learn that releases 0-8 and 0-9-1 are very different from the 1.0 release and are effectively overseen by different people, with many users still intentionally on 0-9-1. Good to know, I imagine that causes a ridiculous amount of confusion. He went through some of the architecture of how RabbitMQ can be used and things it does to “fix” issues encountered with the default AMQP 0-9-1. Slides from his talk here: speakerdeck.com/gmr/messaging-standards-and-systems-amqp-and-rabbitmq (the exchange slides about halfway through are quite helpful).

I was then off to Saving the World with Open Source and Science presented by Dr. Marcus Hanwell. Given my job working on OpenStack, I perhaps have the distinct benefit of being exposed to scientists who understand how to store, process and present big data, plus who understand open source. I assumed this was ubiquitous, so this talk was quite the wake-up call. Not only are publicly-funded papers not available for free (perhaps a whole different rant), the papers often don’t have enough data for the results to be reproducible. Sources from which data was processed aren’t released (whether it be raw data, source code used to make computations or, seriously, an Excel spreadsheet with some data+formulas), and images are shrunk and stripped of all metadata so it can be impossible to determine whether you’re actually seeing the same thing. Worse, most institutions have no way to index this source material at all, so something as simple as a hard drive failure on a laptop can mean loss of this precious data. Wow, how depressing. But the talk was actually a call for action in this space. As technologists there are things we can do to provide solutions to scientists, and scientists working in research can make social changes so that releasing full sources, code and more becomes something valued and validation of results is something that once again becomes central to all scientific research.

Day one completed with a keynote by Doug Cutting that he titled “Pax Data”, which was a fascinating look into the world we’re building where the collection of data is What We Do. He began by talking about how in most science fiction the collectors of data end up being the Bad Guys in a future dystopia, but the fact is that sectors from Education to Healthcare to Climate can benefit from the collection and analysis of big data. He posed the question to the audience: How do we do this without becoming those Bad Guys? He admitted not having a full answer, but provided some guidance on key things that would be required, including transparency, best practices around data handling, definition of data usage abuse so we can protect against it, and either government or industry oversight and/or regulation. Fascinating talk for me, particularly as I was in the middle of reading both a SciFi dystopia book where big data becomes really scary (The Circle by Dave Eggers) and a non-fiction book about our overuse of technology (Program or be Programmed).

Day 2! Keynotes began with a talk by James Pearce of Facebook. I know Facebook is pretty much built on open source (just like everyone else) but this talk was about the open source program he and his team have built within Facebook starting about a year ago. As is standard for many companies starting with open source, they’d just “throw things over the wall” and expect the code to be useful to the community. It wasn’t. So they then began seriously working to develop the code they were open sourcing, assigning people internally to be the caretakers of projects, judging the health of projects based on metrics like forks and commits from community members outside of Facebook. They also run much of the same code versions internally as they release in the community. The Github profile for Facebook is here: https://github.com/facebook. Very nice work!

The next keynote was by DeLisa Alexander of Red Hat on Women in Open Source. She started out with a history lesson about how the first real programmers were women and stressed why diversity is important in our industry. She shared stories about how the most successful women in open source have had encouragement of some form from their peers, and how important it is that everyone in the audience seek to do that with newcomers to their community, particularly women. It was also interesting to hear her talk about how children now often think of computers as opaque black boxes that they can’t influence, so it’s important to engage children (including girls) at a young age to teach them that they can make changes to the software and platforms they use.

Alexander also hosted a panel at lunch which I participated in on this topic. I was really honored to be a part of the panel, it was packed with very successful women in tech and open source. Jen Wike Huger wrote up some of her notes in a great article here: Keys to diversity in tech are more simple than you think. My own biggest takeaway from the panel was the realization that everyone on the panel has spent a significant amount of time being a mentor in some formal capacity. We’ve all supported students and other women in technology via organizations that we either work or volunteer for, or run ourselves.

Getting back to sessions, I went to Steven Vaughan-Nichols’ talk on Open Source, Marketing, and Using the Press. Now, technically I’m the Marketing Lead for Xubuntu, but I somewhat joke to people that it’s “only because I know how to use Twitter.” Amusingly, during his talk he covered people just like myself, project contributors who end up with the Marketing role. I gained a number of great insights from this talk, including defining your marketing audience properly – there’s your community and then there’s the rest of the world. He also offered tips on knowing your customer; maybe we should do a more formal survey in Xubuntu about some of the decisions we make rather than relying upon sporadic social media feedback and expecting users to participate in development discussions? He also drove home the importance of branding, which thanks to our logo designer Pasi Lallinaho I believe we have done a good job of. There was also a crash course in communicating with the press: know who you’re contacting and what their focus is, be clear and concise in emails and explain the context in which your news is exciting. Oh, and be friendly and reply promptly when reporters contact you. I also realized I should add our press contact to our website; that’s a good idea! I have some updates to make to the Xubuntu Marketing blueprint this cycle.

Perhaps one of my favorite talks of the event was presented by Dr. Megan Squire: Case Study: We’re Watching You: How and Why Researchers Study Open Source And What We’ve Found So Far. I think what I found most interesting is that while I see polls from time to time put out by people claiming to do research on open source, I never see the results of that research. Using what I now know from Dr. Marcus Hanwell (many academic papers are locked behind journal pay walls) this suddenly makes sense. But Dr. Squire’s talk dove into the other side of research that doesn’t include polls: research done on data, or “artifacts”, that open source projects create. Artifacts are pretty much anything that is public as a result of a project existing, from obvious things like code to the public communication methods we use like IRC and mailing lists. This is what is at the heart of a duo of websites she runs, the first being FLOSSmole, which connects well-formatted data about projects with researchers interested in doing data mining against it, and FLOSShub, which is a collection of papers she’s collected about open source so it’s all in one place and we can see what kind of research is being done. Aside from her great presentation style, I think what made this one of my favorites was the fact that I didn’t know this was happening. I make FOSS artifacts all day long, both in my day job and with my open source hobbies, and sure, I know it’s out there for anyone to find, piles of IRC logs, code reviews, emails, but learning that academics are actively processing them is another thing entirely. For instance, to take an example from a project I work on, I had no idea this existed: Estimating Development Effort in Free/Open Source Software Projects by Mining Software Repositories: A Case Study of OpenStack. It made me a bit tin-foil-hat for about 5 minutes until I once again realized that I’m not just fine, but happy to be putting my work out there. Huge thanks to her for doing this presentation and maintaining these really valuable websites.

Slides from her presentation are up on Google docs here and are well worth the browse for examples she uses to illustrate how our artifacts are being used in research.

After lunch I attended my last three talks for the conference, the first one being Software Development as a Civic Service presented by Ben Balter. I’ve attended a number of civic hacking focused talks at events over the past couple years, but this one wasn’t strictly talking about a specific project or organization in this space. Instead he focused on the challenges that confront governments and us as technologists as we attempt to enter the government space, and it led to one of my favorite (sad!) slides of the event, in which you will note that doing anything remotely modern (use of public package repositories, configuration management or source control) doesn’t factor in.

He talked about how some government organizations are simply blinded by proprietary sales talk and FUD around open source, while others actually are bound by specific governmental requirements in their software that industries have figured out, but open source projects don’t think to include (i.e. an open source CMS may get us 99% of the way there, but this company is offering something that satisfies everything because it’s their job to do so). He also talked some about the “Command and Control” structure inside of government and how transparency can often be seen as a liability rather than the strength that we’ve come to trust in within the open source community. He wrapped up with some success stories from the government, like petitions.whitehouse.gov and GOV.UK, and shared some stats about the increase of known government employees collaborating on Github.

The next talk was by Phil Shapiro on Open Sourcing the Public Library. He began by talking about how open source has a major opportunity as libraries move from the analog to the digital space. He then moved into a fact he wanted to stress: libraries are owned by all of us. There is an effort to transform them from the community “reading room” into the community “living room” where people share ideas and collaborate on projects, bringing in more educational resources in the form of classes and the building of maker spaces. I love this idea; I find Hackerspaces to be unintentionally hostile places for many young women, so providing a different option to accomplish similar goals is appealing to me. I think what struck me most about this was how “open sourcey” it felt, people coming together to build something new in the open in their community, which is why I work on any of this at all. He shared a link of some collected writings about the future of Libraries here: https://sites.google.com/site/librarywritings/

The final talk of the day I attended was Your Company Culture is Awesome (But is Company Culture a Lie?) by Pamela Vickers. In her talk she identified the trend in technology of offering “perks” in lieu of an actual healthy work environment for workers. These perks often end up masking real underlying unhappiness for employees, and ultimately lead to loss of talent. She suggested that companies take a step back from their pile of perks and look to make sure they’re actually meeting the core needs of their employees. Are your developers happy? How do you know? Are you asking them? You should, and your employees should be able to trust you enough to be honest with you, while you at least professionally acknowledge their feedback. She also highlighted some of the key places where companies fall down on making their developers happy, including forcing them to use the wrong tools, upsetting a healthy work-life balance, giving them too much work or projects that don’t feel achievable and giving them boring or unimportant projects.

To wrap this up, huge thanks to everyone who worked on and participated in this conference. As a conference sponsor, my employer (HP) had a booth, but unfortunately I was the only one who was able to attend. I spent breaks and lunches at the booth (leaving a friendly note when I was away) and had some great chats with folks looking for Python jobs and who were more generally interested in the work we’re doing in the open source space. It still can strike people as unusual that HP is so committed to open source, so it’s nice to be available to not only give numbers, but be a living, breathing example of someone HP pays to contribute to open source.

by pleia2 at November 01, 2014 10:31 PM

October 31, 2014

Akkana Peck

Simulating a web page timeout

Today dinner was a bit delayed because I got caught up dealing with an RSS feed that wasn't feeding. The website was down, and Python's urllib2, which I use in my "feedme" RSS fetcher, has an inordinately long timeout.

That certainly isn't the first time that's happened, but I'd like it to be the last. So I started to write code to set a shorter timeout, and realized: how does one test that? Of course, the offending site was working again by the time I finished eating dinner, went for a little walk then sat down to code.

I did a lot of web searching, hoping maybe someone had already set up a web service somewhere that times out for testing timeout code. No such luck. And discussions of how to set up such a site always seemed to center around installing elaborate heavyweight Java server-side packages. Surely there must be an easier way!

How about PHP? A web search for that wasn't helpful either. But I decided to try the simplest possible approach ... and it worked!

Just put something like this at the beginning of your HTML page (assuming, of course, your server has PHP enabled):

<?php sleep(500); ?>

Of course, you can adjust that 500 to be any delay you like.

Or you can even make the timeout adjustable, with a few more lines of code:

<?php
 if (isset($_GET['timeout']))
     sleep($_GET['timeout']);
 else
     sleep(500);
?>

Then surf to yourpage.php?timeout=6 and watch the page load after six seconds.

Simple once I thought of it, but it's still surprising no one had written it up as a cookbook formula. It certainly is handy. Now I just need to get some Python timeout-handling code working.
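For what it's worth, the client side of a short timeout looks roughly like this; just a minimal sketch, with the URL and the 10-second value as placeholders:

#!/usr/bin/env python
# Minimal sketch: fetch a page with a short timeout using Python 2's urllib2.
import socket
import urllib2

url = "http://example.com/yourpage.php?timeout=30"   # placeholder URL

try:
    response = urllib2.urlopen(url, timeout=10)
    print(response.read()[:200])
except urllib2.URLError as e:
    # Connection timeouts usually arrive as a URLError wrapping socket.timeout.
    print("Failed to fetch %s: %s" % (url, e))
except socket.timeout:
    # read() can also raise socket.timeout directly mid-transfer.
    print("Timed out reading from %s" % url)

Point it at the PHP page above with a sleep longer than the timeout and you should see the error path fire instead of hanging.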

October 31, 2014 01:38 AM

October 24, 2014

Akkana Peck

Partial solar eclipse, with amazing sunspots

[Partial solar eclipse, with sunspots] We had perfect weather for the partial solar eclipse yesterday. I invited some friends over for an eclipse party -- we set up a couple of scopes with solar filters, put out food and drink and had an enjoyable afternoon.

And what views! The sunspot group right in the center of the sun's disk was the largest and most complex I'd ever seen, and there were some much smaller, more subtle spots in the path of the eclipse. Meanwhile, the moon's limb gave us a nice show of mountains and crater rims silhouetted against the sun.

I didn't do much photography, but I did hold the point-and-shoot up to the eyepiece for a few shots about twenty minutes before maximum eclipse, and was quite pleased with the result.

An excellent afternoon. And I made too much blueberry bread and far too many oatmeal cookies ... so I'll have sweet eclipse memories for quite some time.

October 24, 2014 03:15 PM

Jono Bacon

Bad Voltage Turns 1

Today Bad Voltage celebrates our first birthday. We plan on celebrating it by having someone else blow out our birthday candles while we smash a cake and quietly defecate on ourselves.

For those of you unaware of the show, Bad Voltage is an Open Source, technology, and “other things we find interesting” podcast featuring Stuart Langridge (LugRadio, Shot Of Jaq), Bryan Lunduke (Linux Action Show), Jeremy Garcia (Linux Questions), and myself (LugRadio, Shot Of Jaq). The show takes a fun but informed look at various topics, and includes interviews, reviews, competitions, and challenges.

Over the last year we have covered quite the plethora of topics. This has included VR, backups, atheism, ElementaryOS, guns, bitcoin, biohacking, PS4 vs. XBOX, kids and coding, crowdfunding, genetics, Open Source health, 3D printed weapons, the GPL, work/life balance, Open Source political parties, the right to be forgotten, smart-watches, equality, Mozilla, tech conferences, tech on TV, and more.

We have interviewed some awesome guests including Chris Anderson (Wired), Tim O’Reilly (O’Reilly Media), Greg Kroah-Hartman (Linux), Miguel de Icaza (Xamarin/GNOME), Stormy Peters (Mozilla), Simon Phipps (OSI), Jeff Atwood (Discourse), Emma Marshall (System76), Graham Morrison (Linux Voice), Matthew Miller (Fedora), Ilan Rabinovitch (Southern California Linux Expo), Daniel Foré (Elementary), Christian Schaller (Redhat), Matthew Garrett (Linux), Zohar Babin (Kaltura), Steven J. Vaughan-Nichols (ZDNet), and others.

…and then there are the competitions and challenges. We had a debate where we had to take the opposite viewpoints of what we think, we had a rocking poetry contest, challenged our listeners to mash up the shows to humiliate us, ran a selfie competition, and more. In many cases we punished each other when we lost and even tried to take on a sausage company.

It is all a lot of fun, and if you haven’t checked the show out, be sure to head over to www.badvoltage.org and load up on some shows.

One of the most awesome aspects of Bad Voltage is our community. Our pad is at community.badvoltage.org and we have a fantastically diverse community of different ideas, perspectives and viewpoints. In many cases we have discussed a topic on the show and there has been a long and interesting (and always respectful) debate on the forum. It is so much fun to be around.

I just want to say a huge thank-you to everyone who has supported the show and stuck with us through our first year. We have a lot of fun doing it, but the Bad Voltage community make every ounce of effort worthwhile. I also want to thank my fellow presenters, Bryan, Stuart, and Jeremy; it is a pleasure getting to shoot the proverbial with you guys every few weeks.

Live Voltage!

Before I wrap up, I need to share an important piece of information. The Bad Voltage team will be performing our very first live show at the Southern California Linux Expo on the evening of Friday 20th Feb 2015 in Los Angeles.

We can’t think of a better place to do our first live show than SCALE, and we hope to see you there!

by jono at October 24, 2014 04:39 AM

October 22, 2014

Elizabeth Krumbach

3 weeks at home

I am sitting in a hotel room in Raleigh where I’m staying for a conference, but prior to this I had a full 3 weeks at home! It was the longest stretch I’ve had in months; even my gallbladder removal surgery didn’t afford me a full 3 weeks. Unfortunately during this blessed 3 weeks home MJ was out of town for a full 2 weeks of it. It also decided to be summer time in San Francisco (typical of early October), with temperatures rising to 90F for several days and our condo not cooling off. Some days it made work a challenge as I sometimes fled to coffee shops. The cats didn’t seem amused by this either.

The time at home alone did give me a chance to chill out at home and listen to the Giants playoff games on the little AM radio I had set up in our living room. As any good pseudo-fan does I only loosely keep up with the team during the actual season, going to actual games only here and there as I have the opportunity, which I didn’t this year (too much travel + gallbladder). It felt nice to sit and listen to the games as I got some work done in the evenings. I did learn how much modern technology gets in the way of AM reception though, as I listened to the quality tank when I turned on the track lighting in my living room or random times when my highrise neighbors must have been doing something.

Fleet week also came to San Francisco while I was home. I think I’ve only actually been in town for it twice, so it was a nice treat. To add to the fun I was meeting up with a friend to work on some OpenStack stuff on Sunday when they were doing their final show and her office offers amazing floor to ceiling windows with a stunning view of the bay. Perfect for watching the show!

I also did manage to get out for some non-work social time with a couple friends, and finally made it out to Off the Grid in the Marina for some street food adventuring. I hadn’t been before because I’m not the biggest fan of food trucks: the food is fine, but you end up standing while eating and making a mess, and the meal isn’t all that much cheaper than it would be at a proper restaurant with tables. Maybe I’m just a giant snob, but it was an interesting experience, and I got to take the cable car home, so that’s always fun.

And now Raleigh. I’m here for All Things Open which I’ll be blogging about soon. This kicked off about 3 weeks away from home, so I had to pack accordingly:

After Raleigh I’ll be flying to Miami for a cousin’s wedding, then staying several extra days in a beach hotel where I’ll be working (and taking breaks to visit the ocean!). At the end of the week I’m flying to Paris for the OpenStack Summit for a week. I’ve never been to Paris before so I’m really looking forward to that. When the conference wraps up I’m flying back stateside for another wedding for a family member, this time in Philadelphia. So during this time I’ll get to see MJ twice, as we meet in cities for weddings. Thankfully I head home after that, but then we’re off for a proper vacation a few days later – to Jamaica! Then maybe I’ll spend all of December in a stay-at-home coma, but I’ll probably end up going somewhere because apparently I really like airplanes. Plus December would be the only month I didn’t fly, and I can’t have that.

by pleia2 at October 22, 2014 11:17 PM

Akkana Peck

A surprise in the mousetrap

I went out this morning to check the traps, and found the mousetrap full ... of something large and not at all mouse-like.

[young bullsnake] It was a young bullsnake. Now slender and maybe a bit over two feet long, it will eventually grow into a larger relative of the gopher snakes that I used to see back in California. (I had a gopher snake as a pet when I was in high school -- they're harmless, non-poisonous and quite docile.)

The snake watched me alertly as I peered in, but it didn't seem especially perturbed to be trapped. In fact, it was so non-perturbed that when I opened the trap, the snake stayed right where it was. It had found a nice comfortable resting place, and it wasn't very interested in moving on a cold morning.

I had to poke it gently through the bars, hold the trap vertically and shake for a while before the snake grudgingly let go and slithered out onto the ground.

I wondered if it had found its way into the trap by chasing a mouse, but I didn't see any swellings that looked like it had eaten recently. I'm fairly sure it wasn't interested in the peanut butter bait.

I released the snake in a spot near the shed where the mousetrap is set up. There are certainly plenty of mice there for it to eat, and gophers when it gets a little larger, and there are lots of nice black basalt boulders to use for warming up in the morning, and gopher holes to hide in. I hope it sticks around -- gopher/bullsnakes are good neighbors.

[young bullsnake caught in mousetrap]

October 22, 2014 01:37 AM

October 20, 2014

Jono Bacon

Happy Birthday Ubuntu!

Today is Ubuntu’s ten year anniversary. Scott did a wonderful job summarizing many of those early years and his own experience, and while I won’t be as articulate as him, I wanted to share a few thoughts on my experience too.

I heard of this super secret Debian startup from Scott James Remnant. When I worked at OpenAdvantage we would often grab lunch in Birmingham, and he filled me in on what he was working on, though he left a bunch of the blanks out due to confidentiality.

I was excited about this new mystery distribution. For many years I had been advocating at conferences for a consumer-facing desktop, and felt that Debian and GNOME, complete with the exciting Project Utopia work from Robert Love and David Zeuthen, made sense. This was precisely what this new distro would be shipping.

When Warty was released I installed it and immediately became an Ubuntu user. Sure, it was simple, but the level of integration was a great step forward. More importantly though, what really struck me was how community-focused Ubuntu was. There was open governance, a Code Of Conduct, fully transparent mailing lists and IRC channels, and they had the Oceans 11 of rock-star developers involved from Debian, GNOME, and elsewhere.

I knew I wanted to be part of this.

While at GUADEC in Stuttgart I met Mark Shuttleworth and had a short meeting with him. He seemed a pretty cool guy, and I invited him to speak at our very first LugRadio Live in Wolverhampton.

Mark at LugRadio Live.

I am not sure how many multi-millionaires would consider speaking to 250 sweaty geeks in a football stadium sports bar in Wolverhampton, but Mark did it, not once, but twice. In fact, one time he took a helicopter to Wolverhampton and landed at the dog racing stadium. We had to have a debate in the LugRadio team for who had the nicest car to pick him up in. It was not me.

This second LugRadio Live appearance was memorable because two weeks earlier I had emailed Mark to see if he had a spot for me at Canonical. OpenAdvantage was a three-year funded project that was wrapping up, and I was looking at other options.

Mark’s response was:

“Well, we are opening up an Ubuntu Community Manager position, but I am not sure it is for you.”

I asked him if he could send over the job description. When I read it I knew I wanted to do it.

Fast forward four interviews, the last of which being in his kitchen (which didn’t feel awkward, at all), and I got the job.

The day I got that job was one of the greatest days of my life. I felt like I had won the lottery: working on a project with a mission and meaning, and something that could grow my career and skill-set.

Canonical team in 2007

The day I got the job was not without worry though.

I was going to be working with people like Colin Watson, Scott James Remnant, Martin Pitt, Matt Zimmerman, Robert Collins, and Ben Collins. How on earth was I going to measure up?

A few months later I flew out to my first Ubuntu Developer Summit in Mountain View, California. Knowing little about California in November, I packed nothing but shorts and t-shirts. Idiot.

I will always remember the day I arrived, going to a bar with Scott and some others, meeting the team, and knowing absolutely nothing about what they were saying. It sounded like gibberish, and I felt like I was a fairly technical guy at this point. Obviously not.

What struck me though was how kind, patient, and friendly everyone was. The delta in technical knowledge was narrowed with kindness and mentoring. I met some of my heroes, and they were just normal people wanting to make an awesome Linux distro, and wanting to help others get in on the ride too.

What followed was an incredible seven and a half years. I travelled to Ubuntu Developer Summits, sprints, and conferences in more than 30 countries, helped create a global community enthused by a passion for openness and collaboration, experimented with different methods of getting people to work together, and met some of the smartest and kindest people walking on this planet.

The awesome Ubuntu community

Ubuntu helped to define my career, but more importantly, it helped to define my perspective and outlook on life. My experience in Ubuntu helped me learn how to think, to manage, and to process and execute ideas. It helped me to be a better version of me, and to fill my world with good people doing great things, all of which inspired my own efforts.

This is the reason why Ubuntu has always been much more than just software to me. It is a philosophy, an ethos, and most importantly, a family. While some of us have moved on from Canonical, and some others have moved on from Ubuntu, one thing we will always share is this remarkable experience and a special connection that makes us Ubuntu people.

by jono at October 20, 2014 05:52 PM

October 17, 2014

Eric Hammond

Installing aws-cli, the New AWS Command Line Tool

consistent control over more AWS services with aws-cli, a single, powerful command line tool from Amazon

Readers of this tech blog know that I am a fan of the power of the command line. I enjoy presenting functional command line examples that can be copied and pasted to experience services and features.

The Old World

Users of the various AWS legacy command line tools know that, though they get the job done, they are often inconsistent in where you get them, how you install them, how you pass options, how you provide credentials, and more. Plus, there are only tool sets for a limited number of AWS services.

I wrote an article that demonstrated the simplest approach I use to install and configure the legacy AWS command line tools, and it ended up being extraordinarily long.

I’ve been using the term “legacy” when referring to the various old AWS command line tools, which must mean that there is something to replace them, right?

The New World

The future of the AWS command line tools is aws-cli, a single, unified, consistent command line tool that works with almost all of the AWS services.

Here is a quick list of the services that aws-cli currently supports: Auto Scaling, CloudFormation, CloudSearch, CloudWatch, Data Pipeline, Direct Connect, DynamoDB, EC2, ElastiCache, Elastic Beanstalk, Elastic Transcoder, ELB, EMR, Identity and Access Management, Import/Export, OpsWorks, RDS, Redshift, Route 53, S3, SES, SNS, SQS, Storage Gateway, Security Token Service, Support API, SWF, VPC.

Support for the following appears to be planned: CloudFront, Glacier, SimpleDB.

The aws-cli software is being actively developed as an open source project on Github, with a lot of support from Amazon. You’ll note that the biggest contributors to aws-cli are Amazon employees with Mitch Garnaat leading. Mitch is also the author of boto, the amazing Python library for AWS.

Installing aws-cli

I recommend reading the aws-cli documentation as it has complete instructions for various ways to install and configure the tool, but for convenience, here are the steps I use on Ubuntu:

sudo apt-get install -y python-pip
sudo pip install awscli

Add your Access Key ID and Secret Access Key to $HOME/.aws/config using this format:

[default]
aws_access_key_id = <access key id>
aws_secret_access_key = <secret access key>
region = us-east-1

Protect the config file:

chmod 600 $HOME/.aws/config

Optionally set an environment variable pointing to the config file, especially if you put it in a non-standard location. For future convenience, also add this line to your $HOME/.bashrc:

export AWS_CONFIG_FILE=$HOME/.aws/config

Now, wasn’t that a lot easier than installing and configuring all of the old tools?

Testing

Test your installation and configuration:

aws ec2 describe-regions

The default output is in JSON. You can try out other output formats:

 aws ec2 describe-regions --output text
 aws ec2 describe-regions --output table
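
aws-cli also accepts a --query option that takes a JMESPath expression, which is handy when you only want part of the output. As a rough sketch (assuming the describe-regions response keeps its usual Regions/RegionName structure), this should print just the region names:

 aws ec2 describe-regions --output text --query 'Regions[].RegionName'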

I posted this brief mention of aws-cli because I expect some of my future articles are going to make use of it instead of the legacy command line tools.

So go ahead and install aws-cli, read the docs, and start to get familiar with this valuable tool.

Notes

Some folks might already have a command line tool installed with the name “aws”. This is likely Tim Kay’s “aws” tool. I would recommend renaming that to another name so that you don’t run into conflicts and confusion with the “aws” command from the aws-cli software.

[Update 2013-10-09: Rename awscli to aws-cli as that seems to be the direction it’s heading.]

[Update 2014-10-16: Use new .aws/config filename standard.]

Original article: http://alestic.com/2013/08/awscli

by Eric Hammond at October 17, 2014 01:54 AM

October 16, 2014

Akkana Peck

Aspens are turning the mountains gold

Last week both of the local mountain ranges turned gold simultaneously as the aspens turned. Here are the Sangre de Cristos on a stormy day:

[Sangre de Cristos gold with aspens]

And then over the weekend, a windstorm blew a lot of those leaves away, and a lot of the gold is gone now. But the aspen groves are still beautiful up close ... here's one from Pajarito Mountain yesterday.

[Sangre de Cristos gold with aspens]

October 16, 2014 07:37 PM

October 14, 2014

iheartubuntu

Tomboy The Original Note App


When I first started using Ubuntu back in early 2007 (Ubuntu 6.10) I fell in love with a pre-installed app called Tomboy. I used Tomboy for several years, until Ubuntu One notified users a couple of years ago that it would stop syncing Tomboy, followed by the finality of Ubuntu One shutting down entirely earlier this year. I rushed to find alternatives like Evernote, Gnotes, etc., but none of them were as simple or as easily integrated.

The Tomboy description is as follows... "Tomboy is a simple & easy to use desktop note-taking application for Linux, Unix, Windows, and Mac OS X. Tomboy can help you organize the ideas and information you deal with every day."

Some of Tomboy's notable features are text highlighting, inline spell checking, auto-linking of web & email addresses, undo/redo, font styling & sizing, and bulleted lists.

I am creating new notes as well as manually importing a few of my old notes from a couple years ago. Tomboy used to sync easily with Ubuntu One. Since that is no longer an option, you can do it with your Dropbox folder or your Google Drive folder (I'm using Insync).

Tomboy hasn't been updated in a while, but it installs and works great on Ubuntu 14.04 using:

sudo apt-get install tomboy

When you start Tomboy there will be a little square note-and-pen icon in your top bar. Clicking the icon will show you the Tomboy menu options. To sync your notes across your computers, go to the Tomboy preferences, click the Synchronization tab, and pick a local folder in your Dropbox or Google Drive. That's pretty much it! Start writing those notes! On the other computers where you want to sync your notes, select the same sync folder you chose on your first computer.

A few quick points. When you sync your notes, it will create a folder titled "0" in whatever folder you have chosen to sync your notes in.

If you want to launch Tomboy with your system startup (in Ubuntu 14.04) in Unity search for "Startup Applications" and run it. Add a new app titled "Tomboy" with the command "tomboy", save and close. Next time you log on, your Tomboy notes will be ready to use.
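
If you prefer doing this from a terminal, roughly the same effect can be had by dropping a small .desktop file into ~/.config/autostart (this is approximately what the Startup Applications tool creates for you; the exact fields it writes may differ):

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/tomboy.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Tomboy
Exec=tomboy
EOF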

Tomboy also works with Windows and Mac OS X and installation instructions can be found here:

Windows ... https://wiki.gnome.org/Apps/Tomboy/Installing/Windows
Mac ... https://wiki.gnome.org/Apps/Tomboy/Installing/Mac

- - - - -

If you are still looking for syncing options, this comes in from Christian....

You can self-host your note sync server with either Rainy or Grauphel...

Learn more here...

http://dynalon.github.io/Rainy/

http://apps.owncloud.com/content/show.php?action=content&content=166654

by iheartubuntu (noreply@blogger.com) at October 14, 2014 11:42 AM

October 13, 2014

iheartubuntu

MAT - Metadata Anonymisation Toolkit


This is a great program used to help protect your privacy.

Metadata consists of information that characterizes data. Metadata is used to provide documentation for data products. In essence, metadata answers who, what, when, where, why, and how about every facet of the data that is being documented.

Metadata within a file can tell a lot about you. Cameras record data about when a picture was taken and what camera was used. Office documents like PDFs and word processor files automatically add author and company information to documents and spreadsheets.

Maybe you don't want to disclose that information on the web.

MAT can only remove metadata from your files; it does not anonymise their content, nor can it handle watermarking, steganography, or overly customized metadata fields/systems.

If you really want to be anonymous, use a format that does not contain any metadata, or better yet, use plain-text.

These are the formats supported to some extent:

Portable Network Graphics (PNG)
JPEG (.jpeg, .jpg, ...)
Open Document (.odt, .odx, .ods, ...)
Office Openxml (.docx, .pptx, .xlsx, ...)
Portable Document Fileformat (.pdf)
Tape ARchive (.tar, .tar.bz2, .tar.gz)
ZIP (.zip)
MPEG Audio (.mp3, .mp2, .mp1, .mpa)
Ogg Vorbis (.ogg)
Free Lossless Audio Codec (.flac)
Torrent (.torrent)

The President of the United States and his birth certificate would have greatly benefited from software such as MAT.

You can install MAT with this terminal command:

sudo apt-get install mat
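
Once installed, the basic usage is to point mat at the files you want scrubbed from a terminal; something along these lines should clean the files in place (the filenames here are just examples -- run mat --help to see exactly which options your version supports):

mat vacation-photo.jpg resume.pdf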

Look for more articles about privacy soon, or find existing ones by searching our site for "privacy".

by iheartubuntu (noreply@blogger.com) at October 13, 2014 12:05 PM

October 12, 2014

iheartubuntu

Tasque TODO List App


We're getting back to some of the old basic apps that a lot of people used to use in Ubuntu. Many of them still work great, and without needing any internet connection.

Tasque (pronounced like “task”) is a simple task management app (TODO list) for the Linux Desktop and Windows. It supports syncing with the online service Remember the Milk or simply storing your tasks locally.

The main window has the ability to complete a task, change the priority, change the name, and change the due date without additional property dialogs.

When a user clicks on a task priority, a list of possible priorities is presented and when selected, the task is re-prioritized in the order you wish.

When you click on the due date, a list of the next seven days is presented along with an option to remove the date or select a date from a calendar.

A user completes a task by clicking the check box on a task. The task is crossed out indicating it is complete and a timer begins counting down to the right of the task. When the timer is done, the task is removed from view.

As mentioned, Tasque can save tasks locally or use Remember the Milk, a free online to-do list, as its backend. On one of my computers, saving my tasks using RTM works great; on my computer at work, it won't sync my tasks. I haven't figured out why, but I will post any updates here once I get it working or find a workaround.

You can install Tasque from the Ubuntu Software Center or with this terminal command:

sudo apt-get install tasque

All in all, Tasque is a great little task app. Really simple to use!

by iheartubuntu (noreply@blogger.com) at October 12, 2014 05:02 AM

October 11, 2014

Akkana Peck

Railroading exponentially

or: Smart communities can still be stupid

I attended my first Los Alamos County Council meeting yesterday. What a railroad job!

The controversial issue of the day was the town's "branding". Currently, as you drive into Los Alamos on highway 502, you pass a tasteful rock sign proclaiming "LOS ALAMOS: WHERE DISCOVERIES ARE MADE". But back in May, the county council announced the unanimous approval of a new slogan, for which they'd paid an ad agency some $55,000: "LIVE EXPONENTIALLY".

As you might expect in a town full of scientists, the announcement was greeted with much dismay. What is it supposed to mean, anyway? Is it a reference to exponential population growth? Malignant tumor growth? Gaining lots of weight as we age?

The local online daily, tired of printing the flood of letters protesting the stupid new slogan, ran a survey about the "Live Exponentially" slogan. The results were that 8.24% liked it, 72.61% didn't, and 19.16% didn't like it and offered alternatives or comments. My favorites were Dave's suggestion of "It's Da Bomb!", and a suggestion from another reader, "Discover Our Secrets"; but many of the alternate suggestions were excellent, or hilarious, or both -- follow the link to read them all.

For further giggles, try a web search on the term. If you search without quotes, Ebola tops the list. With quotes, you get mostly religious tracts and motivational speakers.

The Council Meeting

(The rest of this is probably only of interest to Los Alamos folk.)

Dave read somewhere -- it wasn't widely announced -- that Friday's council meeting included an agenda item to approve spending $225,000 -- yes, nearly a quarter of a million dollars -- on "brand implementation". Of course, we had to go.

In the council discussion leading up to the call for public comment, everyone spoke vaguely of "branding" without mentioning the slogan. Maybe they hoped no one would realize what they were really voting for. But in the call for public comment, Dave raised the issue and urged them to reconsider the slogan.

Kristin Henderson seemed to have quite a speech prepared. She acknowledged that "people who work with math" universally thought the slogan was stupid, but she said that people from a liberal arts background, like herself, use the term to mean hiking, living close to nature, listening to great music, having smart friends and all the other things that make this such a great place to live. (I confess to being skeptical -- I can't say I've ever heard "exponential" used in that way.)

Henderson also stressed the research and effort that had already gone into choosing the current slogan, and dismissed the idea that spending another $50,000 on top of the $55k already spent would be "throwing money after bad." She added that showing the community some images to go with the slogan might change people's minds.

David Izraelevitz admitted that being an engineer, he initially didn't like "Live Exponentially". But he compared it to Apple's "Think Different": though some might think it ungrammatical, it turned out to be a highly successful brand because it was coupled with pictures of Gandhi and Einstein. (Hmm, maybe that slogan should be "Live Exponential".)

Izraelevitz described how he convinced a local business owner by showing him the ad agency's full presentation, with pictures as well as the slogan, and said that we wouldn't know how effective the slogan was until we'd spent the $50k for logo design and an implementation plan. If the council didn't like the results they could choose not to go forward with the remaining $175,000 for "brand implementation". (Councilor Fran Berting had previously gotten clarification that those two parts of the proposal were separate.)

Rick Reiss said that what really mattered was getting business owners to approve the new branding -- "the people who would have to use it." It wasn't so important what people in the community thought, since they didn't have logos or ads that might incorporate the new branding.

Pete Sheehey spoke up as the sole dissenter. He pointed out that most of the community input on the slogan has been negative, and that should be taken into account. The proposed slogan might have a positive impact on some people but it would have a negative impact on others, and he couldn't support the proposal.

Fran Berting said she was "not all that taken" with the slogan, but agreed with Izraelevitz that we wouldn't know if it was any good without spending the $50k. She echoed the "so much work has already gone into it" argument. Reiss also echoed "so much work", and that he liked the slogan because he saw it in print with a picture.

But further discussion was cut off. It was 1:30, the fixed end time for the meeting, and chairman Geoff Rodgers (who had pretty much stayed out of the discussion to this point) called for a vote. When the roll call got to Sheehey, he objected to the forced vote while they were still in the middle of a discussion. But after a brief consultation on Robert's Rules of Order, chairman Rodgers declared the discussion over and said the vote would continue. The motion was approved 5-1.

The Exponential Railroad

Quite a railroading. One could almost think it had been planned that way.

First, the item was listed as one of two in the "Consent Agenda" -- items which were expected to be approved all together in one vote with no discussion or public comment. It was moved at the last minute into "Business"; but that put it last on the agenda.

Normally that wouldn't have mattered. But although the council more often meets in the evenings and goes as long as it needs to, Friday's meeting had a fixed time of noon to 1:30. Even I could see that wasn't much time for all the items on the agenda.

And that mid-day timing meant that working folk weren't likely to be able to listen or comment. Further, the branding issue didn't come up until 1 pm, after some of the audience had already left to go back to work. As a result, there were only two public comments.

Logic deficit

I heard three main arguments repeated by every council member who spoke in favor:

  1. the slogan makes much more sense when viewed with pictures -- they all voted for it because they'd seen it presented with visuals;
  2. a lot of time, effort and money has already gone into this slogan, so it didn't make sense to drop it now; and
  3. if they didn't like the logo after spending the first $50k, they didn't have to approve the other $175k.

The first argument doesn't make any sense. If the pictures the council saw were so convincing, why weren't they showing those images to the public? Why spend an additional $50,000 for different pictures? I guess $50k is just pocket change, and anyone who thinks it's a lot of money is just being silly.

As for the second and third, they contradict each other. If most of the board thinks now that the initial $55k contract was so much work that we have to go forward with the next $50k, what are the chances that they'll decide not to continue after they've already invested $100k?

Exponentially low, I'd say.

I was glad of one thing, though. As a newcomer to the area faced with a ballot next month, it was good to see the council members in action, seeing their attitudes toward spending and how much they care about community input. That will be helpful come ballot time.

If you're in the same boat but couldn't make the meeting, catch the October 10, 2014 County Council Meeting video.

October 11, 2014 06:54 PM

October 09, 2014

Akkana Peck

What's nesting in our truck's engine?

We park the Rav4 outside, under an overhang. A few weeks ago, we raised the hood to check the oil before heading out on an adventure, and discovered a nest of sticks and grass wedged in above the valve cover. (Sorry, no photos -- we were in a hurry to be off and I didn't think to grab the camera.)

Pack rats were the obvious culprits, of course. There are lots of them around, and we've caught quite a few pack rats in our live traps. Knowing that rodents can be a problem since they like to chew through hoses and wiring, we decided we'd better keep an eye on the Rav and maybe investigate some sort of rodent-repelling technology.

Sunday, we got back from another adventure, parked the Rav in its usual place, went inside to unload before heading out for an evening walk, and when we came back out, there was a small flock of birds hanging around under the Rav. Towhees! Not only hanging around under the still-warm engine, but several times we actually saw one fly between the tires and disappear.

Could towhees really be our engine nest builders? And why would they be nesting in fall, with the days getting shorter and colder?

I'm keeping an eye on that engine compartment now, checking every few days. There are still a few sticks and juniper sprigs in there, but no real nest has reappeared so far. If it does, I'll post a photo.

October 09, 2014 12:10 AM

October 08, 2014

iheartubuntu

Check Ubuntu Linux for Shellshock


Shellshock is a new vulnerability that allows attackers to put code onto your machine, which could put your Ubuntu Linux system at a serious risk for malicious attacks.

Shellshock exploits a flaw in how bash handles specially crafted environment variables, allowing attackers to run commands on your computer. From there, hackers can launch programs, enable features, and even access your files. The bug only affects UNIX-based systems (Linux and Mac).

You can test your system by running this test command from Terminal:

env x='() { :;}; echo vulnerable' bash -c 'echo hello'

If you're not vulnerable, you'll get this result:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
hello

If you are vulnerable, you'll get:

vulnerable
hello

You can also check the version of bash you're running by entering:

bash --version

If you get an old, unpatched version such as 3.2.51(1)-release as a result, you will need to update. Most Linux distributions already have patches available.

-----------

If your system is vulnerable, make sure your computer has all critical updates and it should be patched already. If you are using a version of Ubuntu that has already reached end-of-life status (12.10, 13.04, 13.10, etc.), you may be screwed and may need to start using a newer version of Ubuntu.

This should update Bash for you so your system is not vulnerable...

sudo apt-get update && sudo apt-get install --only-upgrade bash
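
Once the upgrade finishes, it is worth re-running the test from above to confirm that bash no longer executes the injected command. A rough one-liner wrapper around that same test (nothing more than the check above plus a grep) should print "patched" on a fixed system:

env x='() { :;}; echo vulnerable' bash -c 'echo test complete' 2>&1 | grep -q vulnerable && echo "still vulnerable" || echo "patched"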

by iheartubuntu (noreply@blogger.com) at October 08, 2014 11:48 PM

October 03, 2014

iheartubuntu

Ubuntu - 10 Years Strong


Ubuntu, the Debian-based Linux operating system, is approaching its 21st release in just a couple of weeks (October 23rd), moving forward 10 years strong!

Mark Shuttleworth invests in Ubuntu's parent company Canonical, which continues to lose money year after year. It's clear that profit isn't his main concern. There is still a clear plan for Ubuntu and Canonical. That plan appears to be very much 'cloud' and business based.

Shuttleworth is proud that the vast majority of cloud deployments are based on Ubuntu. The recent launch of Canonical's 'Cloud in a box' deployable Ubuntu system is another indication of where it sees things going.

Ubuntu Touch will soon appear on phones and tablets, which is really the glue for this cloud/mobile/desktop ecosystem. Ubuntu has evolved impressively over the last ten years and it will continue to develop in this new age.

Ubuntu provides a seamless ecosystem for devices deployed to businesses and users alike. Being able to run identical software on multiple devices and in the cloud, all sharing the same data, is very appealing.

Ubuntu will be at the heart of this with or without the survival of Canonical.

"I love technology, and I love economics and I love what’s going on in society. For me, Ubuntu brings those three things together in a unique way." - Mark Shuttleworth on the next 5 years of Ubuntu

by iheartubuntu (noreply@blogger.com) at October 03, 2014 07:48 PM

BirdFont Font Editor


If you have ever been interested in making your own fonts for fun or profit, BirdFont is an easy to use free font editor that lets you create vector graphics and export TTF, EOT & SVG fonts.

To install BirdFont, simply use the PPA below to ensure you always have the most updated version. Open the terminal and run the following commands:

sudo add-apt-repository ppa:ubuntuhandbook1/birdfont

sudo apt-get update

sudo apt-get install birdfont

If you don't like using a PPA repository, you can download the appropriate DEB package for your particular system....

http://ppa.launchpad.net/ubuntuhandbook1/birdfont/ubuntu/pool/main/b/birdfont/

If you need help developing a font, there is also an official tutorial here!

http://birdfont.org/doku/doku.php/tutorials

by iheartubuntu (noreply@blogger.com) at October 03, 2014 05:27 PM

Elizabeth Krumbach

33rd Birthday Weekend

I’m a big fan of trying new things and places, so it came as a surprise that when I decided on a birthday getaway this past weekend, we chose to go back to the Resort at Squaw Creek, where we had been last year. It wasn’t just travel exhaustion that made us choose this one: we knew we wanted to get some work done during the weekend, and the suite style was great for that. Honestly, we love everything about this place – beautiful views, amazing pools, good food. The price was also right for a quick getaway.

The drive up was a long one, Friday evening traffic combined with a thunderstorm. We stopped for dinner at Cottonwood Restaurant in Truckee. By the time we arrived at the driveway to the resort the rain had transformed… what is that, slush? By the time we got to the front door it was properly snowing!

Saturday morning we had breakfast brought to our room (heaven!) and enjoyed the stunning view outside our window.

The rain kept us inside for most of the day, which was wonderful. I was able to get some work done on my book (as planned!) and MJ did a bunch of research into our first proper vacation of the year coming up in November. Fireplace, hot chocolate, the man I love, perfect!

As 4PM rolled around the rain tapered off and we went down to the pool. It was 45°F out, so not exactly swimming weather, but the pools were heated and the trio of hot tubs were a popular spot for other folks visiting for the weekend. It turned out wonderful, particularly given the standard warm fall we’re having in San Francisco. That evening we had a wonderful dinner (and dessert!) at the on-site restaurant.

Sunday was even more rainy. We took advantage of their option to pay an extra $85 to get an 8pm checkout, giving us the whole day to enjoy before we had to go home. The rain did end up keeping us from the pool, but I did take a 2 mile walk through the woods with an umbrella after lunch. In spite of the rain, it was a beautiful walk up the sometimes steep and rocky terrain through the woods.

Alas, it had to end. On our way out we stopped at FiftyFifty Brewing Company for a casual dinner. They had the most amazing mussels appetizer, I kind of want to go back to have that again. Dinner wrapped up with some cake!

Fortunately the drive home was quicker (and drier!) than our drive to the mountains had been and we got in shortly before 1AM.

My actual 33rd birthday was on Monday. I ended up making plans with a friend who was in town to celebrate her own birthday the following day. We met up at the San Francisco Zoo around 11AM and I finally got to meet the wolverines! Even better, we caught them as a keeper was putting out some treats, so we got to see them uncharacteristically bounding about their enclosure as they attacked the treat bags that were put out for them. Alas, in spite of staying until the opening of the Lion House, I still managed to miss the sneaky two-toed sloth who decided to hide from me.

We wrapped up the afternoon with lunch over at the Beach Chalet.

It was a great birthday weekend+birthday, aside from the whole turning 33 part. Getting older hasn’t tended to bother me, but time is passing too quickly, much still to do.

by pleia2 at October 03, 2014 05:26 AM

October 02, 2014

Akkana Peck

Photographing a double rainbow

[double rainbow]

The wonderful summer thunderstorm season here seems to have died down. But while it lasted, we had some spectacular double rainbows. And I kept feeling frustrated when I took the SLR outside only to find that my 18-55mm kit lens was nowhere near wide enough to capture it. I could try stitching it together as a panorama, but panoramas of rainbows turn out to be quite difficult -- there are no clean edges in the photo to tell you where to join one image to the next, and automated programs like Hugin won't even try.

There are plenty of other beautiful vistas here too -- cloudscapes, mesas, stars. Clearly, it was time to invest in a wide-angle lens. But how wide would it need to be to capture a double rainbow?

All over the web you can find out that a rainbow has a radius of 42 degrees, so you need a lens that covers 84 degrees to get the whole thing.

But what about a double rainbow? My web searches came to naught. Lots of pages talk about double rainbows, but Google wasn't finding anything that would tell me the angle.

I eventually gave up on the web and went to my physical bookshelf, where Color and Light in Nature gave me a nice table of primary and secondary rainbow angles for various wavelengths of light. It turns out that the 42 degrees everybody quotes is for light of 600 nm wavelength, a blue-green or cyan color. At that wavelength, the primary angle is 42.0° and the secondary angle is 51.0°.

Armed with that information, I went back to Google and searched for double rainbow 51 OR 102 angle and found a nice Slate article on a Double rainbow and lightning photo. The photo in the article, while lovely (lightning and a double rainbow in the South Dakota badlands), only shows a tiny piece of the rainbow, not the whole one I'm hoping to capture; but the article does mention the 51-degree angle.

Okay, so 51°×2 captures both bows in cyan light. But what about other wavelengths? A typical eye can see from about 400 nm (deep purple) to about 760 nm (deep red). From the table in the book:

Wavelength (nm)    Primary    Secondary
400                40.5°      53.7°
600                42.0°      51.0°
700                42.4°      50.3°

Notice that while the primary angles get smaller with shorter wavelengths, the secondary angles go the other way. That makes sense if you remember that the outer rainbow has its colors reversed from the inner one: red is on the outside of the primary bow, but the inside of the secondary one.

So if I want to photograph a complete double rainbow in one shot, I need a lens that can cover at least 108 degrees.

What focal length lens does that translate to? Howard's Astronomical Adventures has a nice focal length calculator. If I look up my Rebel XSi on Wikipedia to find out that other countries call it a 450D, and plug that into the calculator, then try various focal lengths (the calculator offers a chart but it didn't work for me), it turns out that I need an 8mm lens, which will give me a 108° 26' 46" field of view -- just about right.
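
For the curious, the calculator is presumably just doing the usual horizontal field-of-view arithmetic, FOV = 2 × arctan(sensor width / (2 × focal length)). Assuming the 450D's sensor is about 22.2 mm wide (the commonly quoted Canon APS-C width), a quick check with bc gives roughly the same number:

echo "2 * a(22.2 / (2 * 8)) * 180 / (4 * a(1))" | bc -l

which prints about 108.4 degrees.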

[Double rainbow with the Rokinon 8mm fisheye] So that's what I ordered -- a Rokinon 8mm fisheye. And it turns out to be far wider than I need -- apparently the actual field of view in fisheyes varies widely from lens to lens, and this one claims to have a 180° field. So the focal length calculator isn't all that useful. At any rate, this lens is plenty wide enough to capture those double rainbows, as you can see.

About those books

By the way, that book I linked to earlier is apparently out of print and has become ridiculously expensive. Another excellent book on atmospheric phenomena is Light and Color in the Outdoors by Marcel Minnaert (I actually have his earlier version, titled The Nature of Light and Color in the Open Air). Minnaert doesn't give the useful table of frequencies and angles, but he has lots of other fun and useful information on rainbows and related phenomena, including detailed instructions for making rainbows indoors if you want to measure angles or other quantities yourself.

October 02, 2014 07:37 PM

September 30, 2014

Elizabeth Krumbach

PuppetConf 2014

Wow, so many conferences lately! Fortunately for me, PuppetConf was local, so I didn’t need to catch any flights or deal with hotel hassle; it was just a 2-block walk from home each day.

My focus for this conference was learning more about how people are using code-driven infrastructures similar to ours in the OpenStack Infrastructure project and meeting up with some colleagues, several of whom I’ve only communicated with online. I succeeded on both counts and it ended up being a great conference for me.

There was a keynote by Gene Kim, one of the authors of the “devops novel” The Phoenix Project, which I first learned about from my colleague Robert Collins. His talk, The Phoenix Project: Lessons Learned, focused on the book. In spite of having read the book, it was great to hear from Kim on the topic more directly as he talked about technical debt and outlined his 4 top lessons learned:

  • The business value of DevOps is higher than we thought.
  • DevOps Is As Good For Dev… …As It Is For Ops
  • The Need For High-Trust Management (can’t bog people down)
  • DevOps is not just for the unicorns… DevOps is for horses, too. (ie – not just for tech stars like Facebook)

Talk slides here.

The next keynote was by Kate Matsudaira of Popforms, who gave a talk titled Trust Me. I wasn’t sure what to expect with this one, but I was pleasantly surprised. She covered some of what one may call “soft skills” in the tech industry, including helping others, supporting your colleagues, and in general being a resourceful person whom people enjoy working with. Over the years I’ve seen far too many people assume these skills aren’t valuable, even as they look around and identify folks with these skills as the colleagues they like working with the most. Huge thanks to Kate for bringing attention to these skills. She also talked a lot about building trust within your organization, since it can often be hard for managers to evaluate employees who have the freedom to work unobstructed (as we want!), and about mechanisms to build that trust, including reporting what you do to your boss and teammates. Slides from her talk available here: Keynote: Trust Me slides

After the keynote I headed over to Evan Scheessele’s talk on Infrastructure-as-Code with Puppet Enterprise in the Cloud. He works in HP’s Printing & Personal Systems division and shared the evolution and use of a code-driven infrastructure on HP Cloud along with Puppet Enterprise. The driving vision in his organization was boiled down to a series of points:

  • Infrastructure as “Cattle” not “Pets”
  • Modern configuration-management means: Executable Documentation
  • “Infrastructure as Code”
  • Focus on the production-pattern, and automate it end-to-end
  • Everything is consistently reproducible

He also went application-specific, discussing their use of Jenkins, and hiera and puppetdb in PE. It was a great talk and a pleasure to catch up with him afterwards. Slides available here.


Thanks to Evan Scheessele for the photo

My talk was on How to Open Source Your Puppet Configuration and I brought along Monty Taylor and James E. Blair stick puppets I made to demonstrate the rationale of running our infrastructure as an open source project. I walked the audience through some of the benefits of making Puppet configurations public (or at least public within an organization), the importance of licensing and documentation and some steps for splitting up your configuration so it’s understandable and consumable by others. My slides are here.

On Wednesday I attended Gareth Rushgrove’s talk on Continuous Integration for Infrastructure as Code. He skipped over a lot of the more common individual testing mechanisms (puppet-lint, puppet-syntax, rspec-puppet, beaker) and dove into higher-level topics like testing of images and containers and test-driven infrastructure (analogous to test-driven development). Throughout his talk he gave several examples of how this is accomplished, from the use of Serverspec, to the need to write tests before infrastructure, to writing tests to enforce policy and pulling data from PuppetDB to run tests. Slides here.

After lunch I headed over to Chris Hoge’s talk about Understanding OpenStack Deployments with the Puppet modules available. In spite of all my work with OpenStack, I haven’t had a very close look at these modules, so it was nice meeting up with him and Colleen Murphy from the puppet-openstack team. In his talk he walked the audience through some of the basic design decisions of OpenStack and then pulled in examples of how the Puppet modules for OpenStack are used to bring this all together. Slides here.

Two talks I’ll have to catch once the videos are online, Continuous Infrastructure: Modern Puppet for the Jenkins Project – R.Tyler Croy, Jenkins (slides) and Infrastructure as Software – Dustin J. Mitchell, Mozilla, Inc. (slides). Both of these are open source infrastructures that I mentioned during my own talk! I wish I had taken the opportunity while we were all in one spot to meet together, fortunately I was able to chat with R.Tyler Croy prior to my talk, but his talk conflicted with mine and Dustin’s with the OpenStack talk.

In all, this was a very valuable event. I learned some interesting new things about how others are using code-driven infrastructures and I made some great connections.

More photos from PuppetConf here: https://www.flickr.com/photos/pleia2/sets/72157648049231891/

by pleia2 at September 30, 2014 08:50 PM

September 28, 2014

Akkana Peck

Petroglyphs, ancient and modern

In the canyons below White Rock there are many wonderful petroglyphs, some dating back many centuries, like this jaguar: [jaguar petroglyph in White Rock Canyon]

as well as collections like these:
[pictographs] [petroglyph collection]

Of course, to see them you have to negotiate a trail down the basalt cliff face. [Red Dot trail]

Up the hill in Los Alamos there are petroglyphs too, on trails that are a bit more accessible ... but I suspect they're not nearly so old. [petroglyph face]

September 28, 2014 03:47 AM

September 27, 2014

iheartubuntu

Ubuntu Kylin Wallpapers


Looking for some new wallpapers these days? The Chinese version of Ubuntu, Ubuntu Kylin, has some beautiful new wallpapers for the 14.10 release. Download and install the DEB to put them on your computer (a total of 24 wallpapers)...

http://security.ubuntu.com/ubuntu/pool/universe/u/ubuntukylin-wallpapers/ubuntukylin-wallpapers-utopic_14.10.0_all.deb
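
For example, from a terminal you could fetch and install it with something like this (the filename is taken straight from the URL above):

wget http://security.ubuntu.com/ubuntu/pool/universe/u/ubuntukylin-wallpapers/ubuntukylin-wallpapers-utopic_14.10.0_all.deb
sudo dpkg -i ubuntukylin-wallpapers-utopic_14.10.0_all.deb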



 

by iheartubuntu (noreply@blogger.com) at September 27, 2014 12:30 PM

September 25, 2014

Eric Hammond

Throw Away The Password To Your AWS Account

reduce the risk of losing control of your AWS account by not knowing the root account password

As Amazon states, one of the best practices for using AWS is

Don’t use your AWS root account credentials to access AWS […] Create an IAM user for yourself […], give that IAM user administrative privileges, and use that IAM user for all your work.

The root account credentials are the email address and password that you used when you first registered for AWS. These credentials have the ultimate authority to create and delete IAM users, change billing, close the account, and perform all other actions on your AWS account.

You can create a separate IAM user with near-full permissions for use when you need to perform admin tasks, instead of using the AWS root account. If the credentials for the admin IAM user are compromised, you can use the AWS root account to disable those credentials to prevent further harm, and create new credentials for ongoing use.
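
If you prefer the command line over the IAM console, a rough sketch of creating such an admin user with aws-cli might look like the following (the user name and password here are made up, and the inline policy simply allows everything -- adjust to taste):

aws iam create-user --user-name admin
aws iam create-login-profile --user-name admin --password 'replace-with-a-long-random-password'
aws iam put-user-policy --user-name admin --policy-name admin-everything \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"*","Resource":"*"}]}'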

However, if the credentials for your AWS root account are compromised, the person who stole them can take over complete control of your account, change the associated email address, and lock you out.

I have consulted for companies that lost control of the AWS root account containing their assets. You want to avoid this.

Proposal

Given:

  • The AWS root account is not required for regular use as long as you have created an IAM user with admin privileges

  • Amazon recommends not using your AWS root account

  • You can’t accidentally expose your AWS root account password if you don’t know it and haven’t saved it anywhere

  • You can always reset your AWS root account password as long as you have access to the email address associated with the account

Consider this approach to improving security:

  1. Create an IAM user with full admin privileges. Use this when you need to do administrative tasks. Activate IAM user access to account billing information so that the IAM user can also read and modify billing, payment, and account information.

  2. Change the AWS root account password to a long, randomly generated string. Do not save the password. Do not try to remember the password. On Ubuntu, you can use a command like the following to generate a random password for copy/paste into the change password form:

    pwgen -s 24 1
    
  3. If you need access to the AWS root account at some point in the future, use the “Forgot Password” function on the signin form.

It should be clear from this that protecting access to your email account is critical to your overall AWS security, as that is all that is needed to change your password, but that has been true for many online services for many years.

Caveats

You currently need to use the AWS root account in the following situations:

  • to change the email address and password associated with the AWS root account

  • to deactivate IAM user access to account billing information

  • to cancel AWS services (e.g., support)

  • to close the AWS account

  • to buy stuff on Amazon.com, Audible.com, etc. if you are using the same account (not recommended)

  • anything else? Let folks know in the comments.

MFA

For completeness, I should also reiterate Amazon’s constant and strong recommendation to use MFA (multi-factor authentication) on your root AWS account. Consider buying the hardware MFA device, associating it with your root account, then storing it in a lock box with your other important things.

You should also add MFA to your IAM accounts that have AWS console access. For this, I like to use Google Authenticator software running on a locked down mobile phone.

MFA adds a second layer of protection beyond just knowing the password or having access to your email account.

Original article: http://alestic.com/2014/09/aws-root-password

by Eric Hammond at September 25, 2014 11:04 PM

AWS Community Heroes Program

Amazon Web Services recently announced an AWS Community Heroes Program where they are starting to recognize publicly some of the many individuals around the world who contribute in so many ways to the community that has grown up around the services and products provided by AWS.

It is fun to be part of this community and to share the excitement that so many have experienced as they discover and promote new ways of working and more efficient ways of building projects and companies.

Here are some technologies I have gotten the most excited about over the decades. Each of these changed my life in a significant way as I invested serious time and effort learning and using the technology. The year represents when I started sharing the “good news” of the technology with people around me, who at the time usually couldn’t have cared less.

  • 1980: Computers and Programming - “You can write instructions and the computer does what you tell it to! This is going to be huge!”

  • 1987: The Internet - “You can talk to people around the world, access information that others make available, and publish information for others to access! This is going to be huge!”

  • 1993: The World Wide Web - “You can view remote documents by clicking on hyperlinks, making it super-easy to access information, and publishing is simple! This is going to be huge!”

  • 2007: Amazon Web Services - “You can provision on-demand disposable compute infrastructure from the command line and only pay for what you use! This is going to be huge!”

I feel privileged to have witnessed amazing growth in each of these and look forward to more productive use on all fronts.

There are a ton of local AWS meetups and AWS user groups where you can make contact with other AWS users. AWS often sends employees to speak and share with these groups.

A great way to meet thousands of people in the AWS community (and to spend a few days in intense learning about AWS no matter your current expertise level) is to attend the AWS re:Invent conference in Las Vegas this November. Perhaps I’ll see you there!

Original article: http://alestic.com/2014/09/aws-community-heroes

by Eric Hammond at September 25, 2014 11:03 PM

iheartubuntu

Inside Bitcoins Las Vegas Is Two Weeks Away - 10% OFF!


I Heart Ubuntu is excited to be partnering with Inside Bitcoins Conference and Expo once again, which will be returning to Las Vegas at the Flamingo Hotel on October 5-7, 2014!

The event will explore the way that cryptocurrency has been affecting the payments industry, and will cover a wide range of topics including mainstream adoption, compliance, bitcoin startups, investing, mining, altcoins, equipment, and more. The first 300 paid attendees will receive US$50 in bitcoin.

New to Inside Bitcoins Las Vegas will be a half day of small classroom-style workshops taught by cryptocurrency veterans, which will provide attendees with an interactive, informative setting to learn about various facets of the bitcoin ecosystem.

Recently announced is a keynote by Patrick Byrne, CEO of Overstock.com, who will be leading a session titled, “Cryptosecurities: the Next Decentralized Frontier” on October 6 at 3:30pm. Byrne will also be making an exciting announcement at the event regarding Overstock’s latest development on the Bitcoin front.

Featured speakers include:

  • Patrick Byrne, CEO, Overstock.com 
  • Bobby Lee, CEO, BTC China & Board Member, Bitcoin Foundation
  • Daniel Larimer, Founder, Bitshares.org
  • Perianne Boring, Founder & President, Chamber of Digital Commerce

And many more! See the full roster of speakers here.

I Heart Ubuntu is once again partnering with Inside Bitcoins to offer all readers 10% OFF Gold and Silver Passports. Enter code HEART at checkout to redeem your discount. Register now!

by iheartubuntu (noreply@blogger.com) at September 25, 2014 01:34 AM

September 23, 2014

iheartubuntu

ONIONSHARE - Send Big Files Securely and Anonymously


OnionShare lets you securely and anonymously share files of any size. It works by starting a web server, making it accessible as a Tor hidden service, and generating an unguessable URL to access and download the files. It doesn't require setting up a server on the internet somewhere or using a third party filesharing service. You host the file on your own computer and use a Tor hidden service to make it temporarily accessible over the internet. The other user just needs to use Tor Browser to download the file from you.

Features include:

  • A user-friendly drag-and-drop graphical user interface that works in Windows, Mac OS X, and Linux
  • Ability to share multiple files and folders at once
  • Support for multiple people downloading files at once
  • Automatically copies the unguessable URL to your clipboard
  • Shows you the progress of file transfers
  • When a file is done transferring, OnionShare automatically closes to reduce the attack surface
  • Localized into several languages, and supports international unicode filenames
  • Designed to work in Tails, for high risk users

You can learn more about OnionShare here: https://onionshare.org/

To install OnionShare on Ubuntu, open a terminal and type:

sudo add-apt-repository ppa:micahflee/ppa

sudo apt-get update && sudo apt-get install onionshare

Before you can share files, you need to open Tor Browser in the background. This will provide the Tor service that OnionShare uses to start the hidden service.

Open OnionShare and drag and drop files and folders you wish to share, and start the server. It will show you a long, random-looking URL such as http://cfxipsrhcujgebmu.onion/7aoo4nnzj3qurkafvzn7kket7u and copy it to your clipboard. This is the secret URL that can be used to download the file you're sharing. If you'd like multiple people to be able to download this file, uncheck the "close automatically" checkbox.

Send this URL to the person you're trying to send the files to. If the files you're sending aren't secret, you can use normal means of sending the URL: emailing it, posting it to Facebook or Twitter, etc. If you're trying to send secret files then it's important to send this URL securely.

The person who is receiving the files doesn't need OnionShare. All they need is to open the URL you send them in Tor Browser to be able to download the file.



by iheartubuntu (noreply@blogger.com) at September 23, 2014 12:00 PM

Jono Bacon

Bringing Literacy to Millions of Kids With Open Source

Today we are launching the Global Learning XPRIZE, complete with an Indiegogo crowdfunding campaign.

This is a $15 million competition in which teams are challenged to create Open Source software that will teach a child to read, write, and perform arithmetic in 18 months without the aid of a teacher. This is not designed to replace teachers but to instead provide an educational solution where little or none exists.

There are 57 million children aged 5 – 12 in the world today who have no access to education. There are 250 million children below basic literacy levels, even after several years of school. You may think the solution to this is to build more schools, but we would need an extra 1.6 million teachers by next year to provide universal primary education.

This is a tragedy.

This new XPRIZE is designed to help fix this.

Every child should have a right to the core ingredient that is literacy. It unlocks their potential and opens up opportunity. Just think of all the resources we depend on today for growth and education…the Internet, books, Wikipedia, collaborative tools…without literacy all of these are inaccessible. It is time to change this. Too many suffer from a lack of literacy, and sadly girls bear much of the brunt of this too.

This prize is open to anyone to participate in. Professional developers, hobbyists, students, scientists, teachers…everyone is welcome to join in and compete. While the $15 million purse is attractive in itself, just think of the impact of potentially changing the lives of hundreds of millions of kids.

Coopetition For a Great Cause

What really excites me about this new XPRIZE is that it is the first Open Source XPRIZE. The winning team and the four runner-up teams will be expected to Open Source their entire code-base, assets, and material. This will create a solid foundation of education technology that can live on…long past the conclusion of this XPRIZE.

That isn’t the only reason why this excites me though. The Open Source nature of this prize provides an incredible opportunity for coopetition; where teams can collaborate around common areas of interest and problem-solving, while keeping their secret sauce to themselves. The impact of this could be profound.

I will be working hard to build an environment in which we encourage this kind of collaboration. It makes no sense for 100 teams to all solve the same problems privately in their own silo. Let’s get everyone up and running in GitHub, collaborating around these common challenges, so all the teams benefit from that pooling of resources.

Let’s also open this up so everyone can help us be successful. Let’s invite designers, translators, testers, teachers, scientists, musicians, artists and more…everyone has something they can bring to solve one of our grandest challenges, and help create a more literate and peaceful world.

Everyone Can Play a Part

As part of this new XPRIZE we are also launching a crowdfunding campaign that is designed to raise additional resources so we can do even more as part of the prize. We have already funded the $15 million prize purse and some field-testing, but this crowdfunding campaign will provide the resources for us to do so much more.

This will help us broaden the field-testing in more countries, with more kids, to grow a global community around solving these grand challenges, build a collaborative environment for teams to work together on common problems, and optimize this new XPRIZE to be as successful as possible. Every penny contributed helps us to do more and ultimately squeeze the most value out of this important XPRIZE.

There are ten things you can do to help:

  1. Contribute! – a great place to start is to buy one of our awesome perks from the crowdfunding campaign. Find out more here.
  2. Join the Community – come and meet the new XPRIZE community at http://forum.xprize.org and share ideas, brainstorm, and collaborate around new projects.
  3. Refer Friends and Win Prizes – want to win an expenses-paid trip to our Visioneering event where we create new XPRIZEs while also helping spread the word? To find out more click here.
  4. Download the Street Team Kit – head over to our Get Involved page and download a free kit with avatars, banners, posters, presentations, FAQs and more. The page also includes videos for how to get started!
  5. Create and Share Digital Content – we are encouraging authors, developers, content-creators and more to create content that will spread the word about literacy, the Global Learning XPRIZE, and more!
  6. Share and Tag Your Fave Children’s Book – which children’s books have been the most memorable for you? Share your favorite (and preferably post a picture of the cover), complete with a link to http://igg.me/at/learningxprize and tag 5 friends to share theirs too! When using social media, be sure to use the #learningprize hashtag.
  7. Show Your Pride –  go and download the Street Team Kit and use the images and avatars in there to change your profile picture and banner on your favorite social media networks (e.g. Twitter, Facebook, Google+).
  8. Create Your ‘Learning Moment’ Video – record a video about how learning has really impacted your life and share it on a video website (such as YouTube). Give the video the title “Global Learning XPRIZE: My Learning Moment”. Be sure to share your video on social media too with the #learningprize hashtag!
  9. Put Posters up in Your Community – go and download the Street Team Kit, print the posters out and put them up in your local coffee shops, universities, colleges, schools, and elsewhere!
  10. Organize a Local Event – create a local event to share the Global Learning XPRIZE. Fortunately we have a video on our Get Involved page that explains how you can do this, and we have a presentation deck with notes ready for you to use!

I know a lot of people who read my blog are Open Source folks, and I believe this prize offers an incredible opportunity for us to come together to have a very real profound impact on the world. Come and join the community, support the crowdfunding campaign, and help us to all be successful in bringing literacy to millions.

by jono at September 23, 2014 05:00 AM

September 22, 2014

Akkana Peck

Pi Crittercam vs. Bushnell Trophycam

I had the opportunity to borrow a commercial crittercam for a week from the local wildlife center. [Bushnell Trophycam vs. Raspberry Pi Crittercam] Having grown frustrated with the high number of false positives on my Raspberry Pi based crittercam, I was looking forward to seeing how a commercial camera compared.

The Bushnell Trophycam I borrowed is a nicely compact, waterproof unit, meant to strap to a tree or similar object. It has an 8-megapixel camera that records photos to the SD card -- no wi-fi. (I believe there are more expensive models that offer wi-fi.) The camera captures IR as well as visible light, like the PiCam NoIR, and there's an IR LED illuminator (quite a bit stronger than the cheap one I bought for my crittercam) as well as what looks like a passive IR sensor.

I know the TrophyCam isn't immune to false positives; I've heard complaints along those lines from a student who's using them to do wildlife monitoring for LANL. But how would it compare with my homebuilt crittercam?

I put out the TrophyCam the first night, with bait (sunflower seeds) in front of the camera. In the morning I had ... nothing. No false positives, but no critters either. I did have some shots of myself, walking away from it after setting it up, walking up to it to adjust it after it got dark, and some sideways shots while I fiddled with the latches trying to turn it off in the morning, so I know it was working. But no woodrats -- and I always catch a woodrat or two in PiCritterCam runs. Besides, the seeds I'd put out were gone, so somebody had definitely been by during the night. Obviously I needed a more sensitive setting.

I fiddled with the options, changed the sensitivity from automatic to the most sensitive setting, and set it out for a second night, side by side with my Pi Crittercam. This time it did a little better, though not by much: one nighttime shot with something in it, plus one shot of someone's furry back and two shots of a mourning dove after sunrise.

[blown-out image from Bushnell Trophycam] What few nighttime shots there were were mostly so blown out that you couldn't see enough detail to be sure what you were looking at. Doesn't this camera know how to adjust its exposure? The shot here has a creature in it. See it? I didn't either, at first. It's just to the right of the bush. You can just see the curve of its back and the beginning of a tail.

Meanwhile, the Pi cam sitting next to it caught eight reasonably exposed nocturnal woodrat shots and two dove shots after dawn. And 369 false positives where a leaf had moved in the wind or a dawn shadow was marching across the ground. The TrophyCam only shot 47 photos total: 24 were of me, fiddling with the camera setup to get them both pointing in the right direction, leaving 20 false positives.

So the Bushnell, clearly, gives you fewer false positives to hunt through -- but you're also a lot less likely to catch an actual critter. It also doesn't deal well with exposures in small areas and close distances: its IR light source seems to be too bright for the camera to cope with. I'm guessing, based on the name, that it's designed for shooting deer walking by fifty feet away, not woodrats at a two-foot distance.

Okay, so let's see what the camera can do in a larger space. The next two nights I set it up in large open areas to see what walked by. The first night it caught four rabbit shots, with only five false positives. The quality wasn't great, though: all long exposures of blurred bunnies. The second night it caught nothing at all overnight, but three rabbit shots the next morning. No false positives.

[coyote caught on the TrophyCam] The final night, I strapped it to a piñon tree facing a little clearing in the woods. Only two morning rabbits, but during the night it caught a coyote. And only 5 false positives. I've never caught a coyote (or anything else larger than a rabbit) with the PiCam.

So I'm not sure what to think. It's certainly a lot more relaxing to go through the minimal output of the TrophyCam to see what I caught. And it's certainly a lot easier to set up, and more waterproof, than my jury-rigged milk carton setup with its two AC cords, one for the Pi and one for the IR sensor. Being self-contained and battery operated makes it easy to set up anywhere, not just near a power plug.

But it's made me rethink my pessimistic notion that I should give up on this homemade PiCam setup and buy a commercial camera. Even on its most sensitive setting, I can't make the TrophyCam sensitive enough to catch small animals. And the PiCam gets better picture quality than the Bushnell, not to mention the option of hooking up a separate camera with flash.

So I guess I can't give up on the Pi setup yet. I just have to come up with a sensible way of taming the false positives. I've been doing a lot of experimenting with SimpleCV image processing, but alas, it's no better at detecting actual critters than my simple pixel-counting script was. But maybe I'll find the answer, one of these days. Meanwhile, I may look into battery power.
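
For the record, "simple pixel-counting" amounts to something like the sketch below (not my actual script; PIL/Pillow, the file names and the threshold numbers are just placeholders): compare two consecutive frames and count how many pixels changed by more than some threshold. Leaves moving in the wind and shadows marching across the ground change plenty of pixels too, which is exactly where all those false positives come from.

from PIL import Image, ImageChops

def changed_pixels(path_a, path_b, threshold=30):
    # Convert both frames to grayscale and count the pixels whose
    # brightness changed by more than the threshold.
    a = Image.open(path_a).convert("L")
    b = Image.open(path_b).convert("L")
    diff = ImageChops.difference(a, b)
    return sum(1 for px in diff.getdata() if px > threshold)

# Hypothetical file names and trigger count:
if changed_pixels("prev.jpg", "curr.jpg") > 2000:
    print("possible critter -- save this frame")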

September 22, 2014 08:29 PM

September 19, 2014

Akkana Peck

Mirror, mirror

A female hummingbird -- probably a black-chinned -- hanging out at our window feeder on a cool cloudy morning.

[female hummingbird at the window feeder]

September 19, 2014 01:04 AM

September 18, 2014

Elizabeth Krumbach

Offline, CLI-based Gerrit code review with Gertty

This past week I headed to Florida to present at Fossetcon and thought it would be a great opportunity to do a formal review of a new tool recently released by the OpenStack Infrastructure team (well, mostly James E. Blair): Gertty.

The description of this tool is as follows:

As compared to the web interface, the main advantages are:

  • Workflow — the interface is designed to support a workflow similar to reading network news or mail. In particular, it is designed to deal with a large number of review requests across a large number of projects.
  • Offline Use — Gertty syncs information about changes in subscribed projects to a local database and local git repos. All review operations are performed against that database and then synced back to Gerrit.
  • Speed — user actions modify locally cached content and need not wait for server interaction.
  • Convenience — because Gertty downloads all changes to local git repos, a single command instructs it to checkout a change into that repo for detailed examination or testing of larger changes.

For me the two big ones were the CLI-based workflow and offline use: I could review patches while on a plane or on terrible hotel wifi!

I highly recommend reading the announcement email to learn more about the features, but to get going here’s a quick rundown for the currently released version 1.0.2:

First, you’ll need to set a password in Gerrit so you can use the REST API. Do that by logging into Gerrit and going to https://review.openstack.org/#/settings/http-password

From there:

pip install gertty

wget https://git.openstack.org/cgit/stackforge/gertty/plain/examples/openstack-gertty.yaml -O ~/.gertty.yaml

Edit ~/.gertty.yaml and update anything that says “CHANGEME”

A couple things worthy of note:

  • Be aware that by default Gertty uses ~/git/ as the git-root; I had to change this in my ~/.gertty.yaml so it didn’t touch my existing ~/git/ directory.
  • You can also run it in a venv, as described on the pypi page.

Now run gertty from your terminal!

When you first load it up, you get a welcome screen with some hints on how to use it, including the all important “press F1 for help”:

Note: I use xfce4-terminal and F1 is bound to terminal help, see the Xfce FAQ to learn how to disable this so you can actually read the Gertty help and don’t have to ask on IRC how to do simple things like I did ;)

As instructed, from here you hit “L” to list projects; this is the page where you can subscribe to them:

You subscribe to projects by pressing “s” and they will show up as bright white; then you can navigate into them to list open reviews:

Go to a review you want to look at and hit enter, bringing up the review screen. This should look very familiar, just text only. I’ve expanded my standard 80×24 terminal window here so you can get a good look at what the full screen looks like:

Navigate down to < Diff > to see the diff. This is pretty cool: instead of showing it on separate pages like the web UI, it shows you a unified page with all of the file diffs, so you just need to scroll through them to see them all:

Finally, you review! Select < Review > back on the main review page and it will pop up a screen that allows you to select your +2, +1, -1, etc and add a message:

Your reviews are synced along with everything else when Gertty knows it’s online and can pull down review updates and upload your changes. At any time you can look at the top right of your screen to see how many pending sync requests it has.

When you want to quit, press CTRL-q.

I highly recommend giving it a spin. Feel free to ask questions about usage in #openstack-infra and bugs are tracked in Storyboard here: https://storyboard.openstack.org/#!/project/698. The code lives in a stackforge repo at: http://git.openstack.org/cgit/stackforge/gertty

by pleia2 at September 18, 2014 12:46 AM

September 17, 2014

Elizabeth Krumbach

Fossetcon 2014

As I wrote in my last post I attended Fossetcon this past weekend. The core of the event kicked off on Friday with a keynote by Iris Gardner on how Diversity Creates Innovation and the work that the CODE2040 organization is doing to help talented minorities succeed in technology. I first heard about this organization back in 2013 at OSCON, so it was great to hear more about their recent successes with their summer Fellows Program. It was also great to hear that their criteria for talent not only included coding skills, but also sought out a passion for engineering and leadership skills.

After a break, I went to see PJ Hagerty give his talk, Meetup Groups: Act Locally – Think Globally. I’ve been running open source related groups for over a decade, so I’ve been in this space for quite a long time and was hoping to get some new tips, and PJ didn’t disappoint! He led off with the need to break out of the small “pizza and a presentation by a regular” grind, which is indeed important to growing a group and making people show up. Some of his suggestions for doing this included:

  • Seek out students to attend and participate in the group, they can be some of your most motivated attendees and will bring friends
  • Seek out experienced programmers (and technologists) not necessarily in your specific field to give more agnostic talks about general programming/tech practices
  • Do cross-technology meetups – a PHP and Ruby night! Maybe Linux and BSD?
  • Bring in guest speakers from out of town (if they’re close enough, many will come for the price of gas and/or train/bus ticket – I would!)
  • Send members to regional conferences… or run your own conference
  • Get kids involved
  • Host an OpenHack event

I’ll have to see what my co-conspirators (er, organizers) at some local groups think of these ideas; it certainly would be fun to spice up some of the groups I regularly attend.

From there I went to MySQL Server Performance Tuning 101 by Ligaya Turmelle. Her talk centered around the fact that MySQL tuning is not simple, but went through a variety of mechanisms to tune it in different ways for specific cases you may run into. Perhaps most useful to me were her tips for gathering usage statistics from MySQL, I was unfamiliar with many of the metrics she pulled out. Very cool stuff.

After lunch and some booth duty, I headed over to Crash Course in Open Source Cloud Computing presented by Mark Hinkle. Now, I work on OpenStack (referred to as the “Boy Band” of cloud infrastructures in the talk – hah!), so my view of the cloud world is certainly influenced by that perspective. It was great to see a whirlwind tour of other and related technologies in the open source ecosystem.

The closing keynote for the day was by Deb Nicholson, Style or substance? Free Software is Totally the 80’s. She gave a bit of a history of free software and speculated as to whether our movement would be characterized by a shallow portrayal of “unconferences and penguin swag” (like 80s neon clothes and extravagance) or how free software communities are changing the world (like groups in the 80s who were really seeking social change or the fall of the Berlin wall). Her hope is that by stepping back and taking a look at our community that perhaps we could shape how our movement is remembered and focus on what is important to our future.

Saturday I had more booth duty with my colleague Yolanda Robla who came in from Spain to do a talk on Continuous integration automation. We were joined by another colleague from HP, Mark Atwood, who dropped by the conference for his talk How to Get One of These Awesome Open Source Jobs – one of my favorites.

The opening keynote on Saturday was Considering the Future of Copyleft by Bradley Kuhn. I always enjoy going to his talks because I’m considerably more optimistic about the health and future of free software, so his strong copyleft stance makes me stop and consider where I truly stand and what that means. He worries that an ecosystem of permissive licenses (like Apache, MIT, BSD) will lead to companies doing the least possible for free software and keeping all their secret sauces secret, diluting the ecosystem and making it less valuable for future consumers of free software since they’ll need the proprietary components. I’m more hopeful than that, particularly as I see real free software folks starting to get jobs in major companies and staying true to their free software roots. Indeed, these days I spend a vast majority of my time working on Apache-licensed software for a large company who pays me to do the work. Slides from his talk are here, I highly recommend having a browse: http://ebb.org/bkuhn/talks/FOSSETCON-2014/copyleft-future.html

After some more boothing, I headed over to Apache Mesos and Aurora, An Operating System For The Datacenter by David Lester. Again, being on the OpenStack bandwagon these past few years I haven’t had a lot of time to explore the ecosystem elsewhere, and I learned that this is some pretty cool stuff! Lester works for Twitter and talked some about how Twitter and other companies in the community are using both the Mesos and Aurora tools to build their efficient, fault tolerant datacenters and how it’s led to impressive improvements in the reliability of their infrastructures. He also did a really great job explaining the concepts of both, hooray for diagrams. I kind of want to play with them now.

Introduction to The ELK Stack: Elasticsearch, Logstash & Kibana by Aaron Mildenstein was my next stop. We run an ELK stack in the OpenStack Infrastructure, but I’ve not been very involved in the management of that, instead focusing on how we’re using it in elastic-recheck so I hoped this talk would fill in some of the fundamentals for me. It did do that so I was happy with that, but I have to admit that I was pretty disappointed to see demos of plugins that required a paid license.

As the day wound down, I finally had my talk: Code Review for Systems Administrators.


Code Review for Sysadmins talk, thanks to Yolanda Robla for taking the photo

I love giving this talk. I’m really proud of the infrastructure that has been built for OpenStack and it’s one that I’m happy and excited to work with every day – in part because we do things through code review. Even better, my excitement during this presentation seemed contagious, with an audience that seemed really engaged with the topic and impressed. Huge thanks to everyone who came and particularly to those who asked questions and took time to chat with me after. Slides from my talk are available here: fossetcon-code-review-for-sysadmins/

And then we were at the end! The conference wrapped up with a closing keynote on Open Source Is More than Code by Jordan Sissel. I really loved this talk. I’ve known for some time that the logstash community was one of the friendlier ones, with their mantra of “If a newbie has a bad time, it’s a bug.” This talk dove further into that ethos in their community and how it’s impacted how members of the project handle unhappy users. He also talked about improvements made to documentation (both inline in code and formal documentation) and how they’ve tried to “break away from text” some and put more human interaction in their community so people don’t feel so isolated and dehumanized by a text only environment (though I do find this is where I’m personally most comfortable, not everyone feels that way). I hope more projects will look to the logstash community as a good example of how we all can do better, I know I have some work to do when it comes to support.

Thanks again to conference staff for making this event such a fun one, particularly as it was their first year!

by pleia2 at September 17, 2014 12:44 AM

September 16, 2014

Elizabeth Krumbach

Ubuntu at Fossetcon 2014

Last week I flew out to the east coast to attend the very first Fossetcon. The conference was on the smaller side, but I had a wonderful time meeting up with some old friends, meeting some new Ubuntu enthusiasts and finally meeting some folks I’ve only communicated with online. The room layout took some getting used to, but the conference staff was quick to put up signs and direct attendees in the right direction, in general leading to a pretty smooth conference experience.

On Thursday the conference hosted a “day zero” that had training and an Ubucon. I attended the Ubucon all day, which kicked off with Michael Hall doing an introduction to the Ubuntu on Phones ecosystem, including Mir, Unity8 and the Telephony features that needed to be added to support phones (voice calling, SMS/MMS, cell data, SIM card management). He also talked about the improved developer portal with more resources aimed at app developers, including the Ubuntu SDK and simplified packaging with click packages.

He also addressed the concern of many about whether Ubuntu could break into the smartphone market at this point, arguing that it’s a rapidly developing and changing market, with every current market leader only having been there for a handful of years, and that new ideas are needed to play to win. Canonical feels that convergence between phone and desktop/laptop gives Ubuntu a unique selling point, and that users will like it because its intuitive design, with lots of swiping and scrolling actions, gives apps the most screen space possible. It was interesting to hear that partners/OEMs can offer operator differentiation as a layer without fragmenting the actual operating system (something that Android struggles with), leaving the core operating system independently maintained.

This was followed up by a more hands on session on Creating your first Ubuntu SDK Application. Attendees downloaded the Ubuntu SDK and Michael walked through the creation of a demo app, using the App Dev School Workshop: Write your first app document.

After lunch, Nicholas Skaggs and I gave a presentation on 10 ways to get involved with Ubuntu today. I had given a “5 ways” talk earlier this year at the SCaLE in Los Angeles, so it was fun to do a longer one with a co-speaker and have his five items added in, along with some other general tips for getting involved with the community. I really love giving this talk, the feedback from attendees throughout the rest of the conference was overwhelmingly positive, and I hope to get some follow-up emails from some new contributors looking to get started. Slides from our presentation are available as pdf here: contributingtoubuntu-fossetcon-2014.pdf


Ubuntu panel, thanks to Chris Crisafulli for the photo

The day wrapped up with an Ubuntu Q&A Panel, which had Michael Hall and Nicholas Skaggs from the Community team at Canonical, Aaron Honeycutt of Kubuntu and myself. Our quartet fielded questions from moderator Alexis Santos of Binpress and the audience, on everything from the Ubuntu phone to challenges of working with such a large community. I ended up drawing from my experience with the Xubuntu community a lot in the panel, especially as we drilled down into discussing how much success we’ve had coordinating the work of the flavors with the rest of Ubuntu.

The next couple of days brought Fossetcon proper, which I’ll write about later. The Ubuntu fun continued though! I was able to give away 4 copies of The Official Ubuntu Book, 8th Edition, which I signed, and got José Antonio Rey to sign as well since he had joined us for the conference from Peru.

José ended up doing a talk on Automating your service with Juju during the conference, and Michael Hall had the opportunity to give a talk on Convergence and the Future of App Development on Ubuntu. The Ubuntu booth also looked great and was one of the most popular of the conference.

I really had a blast talking to Ubuntu community members from Florida, they’re a great and passionate crowd.

by pleia2 at September 16, 2014 05:01 PM

September 14, 2014

Akkana Peck

Global key bindings in Emacs

Global key bindings in emacs. What's hard about that, right? Just something simple like

(global-set-key "\C-m" 'newline-and-indent)
and you're all set.

Well, no. global-set-key gives you a nice key binding that works ... until the next time you load a mode that wants to redefine that key binding out from under you.

For many years I've had a huge collection of mode hooks that run when specific modes load. For instance, python-mode defines \C-c\C-r, my binding that normally runs revert-buffer, to do something called run-python. I never need to run python inside emacs -- I do that in a shell window. But I fairly frequently want to revert a python file back to the last version I saved. So I had a hook that ran whenever python-mode loaded to override that key binding and set it back to what I'd already set it to:

(defun reset-revert-buffer ()
  (define-key python-mode-map "\C-c\C-r" 'revert-buffer) )
(setq python-mode-hook 'reset-revert-buffer)

That worked fine -- but you have to do it for every mode that overrides key bindings and every binding that gets overridden. It's a constant chase, where you keep needing to stop editing whatever you wanted to edit and go add yet another mode-hook to .emacs after chasing down which mode is causing the problem. There must be a better solution.

A web search quickly led me to the StackOverflow discussion Globally override key bindings. I tried the techniques there; but they didn't work.

It took a lot of help from the kind folks on #emacs, but after an hour or so they finally found the key: emulation-mode-map-alists. It's only barely documented -- the key there is "The “active” keymaps in each alist are used before minor-mode-map-alist and minor-mode-overriding-map-alist" -- and there seem to be no examples anywhere on the web for how to use it. It's a list of alists mapping names to keymaps. Oh, clears it right up! Right?

Okay, here's what it means. First you define a new keymap and add your bindings to it:

(defvar global-keys-minor-mode-map (make-sparse-keymap)
  "global-keys-minor-mode keymap.")

(define-key global-keys-minor-mode-map "\C-c\C-r" 'revert-buffer)
(define-key global-keys-minor-mode-map (kbd "C-;") 'insert-date)

Now define a minor mode that will use that keymap. You'll use that minor mode for basically everything.

(define-minor-mode global-keys-minor-mode
  "A minor mode so that global key settings override annoying major modes."
  t "global-keys" 'global-keys-minor-mode-map)

(global-keys-minor-mode 1)

Now build an alist consisting of a list containing a single dotted pair: the name of the minor mode and the keymap.

;; A keymap that's supposed to be consulted before the first
;; minor-mode-map-alist.
(defconst global-minor-mode-alist (list (cons 'global-keys-minor-mode
                                              global-keys-minor-mode-map)))

Finally, set emulation-mode-map-alists to a list containing only the global-minor-mode-alist.

(setf emulation-mode-map-alists '(global-minor-mode-alist))

There's one final step. Even though you want these bindings to be global and work everywhere, there is one place where you might not want them: the minibuffer. To be honest, I'm not sure if this part is necessary, but it sounds like a good idea so I've kept it.

(defun my-minibuffer-setup-hook ()
  (global-keys-minor-mode 0))
(add-hook 'minibuffer-setup-hook 'my-minibuffer-setup-hook)

Whew! It's a lot of work, but it'll let me clean up my .emacs file and save me from endlessly adding new mode-hooks.

September 14, 2014 10:46 PM

September 11, 2014

Akkana Peck

Making emailed LinkedIn discussion thread links actually work

I don't use web forums, the kind you have to read online, because they don't scale. If you're only interested in one subject, then they work fine: you can keep a browser tab for your one or two web forums perennially open and hit reload every few hours to see what's new. If you're interested in twelve subjects, each of which has several different web forums devoted to it -- how could you possibly keep up with that? So I don't bother with forums unless they offer an email gateway, so they'll notify me by email when new discussions get started, without my needing to check all those web pages several times per day.

LinkedIn discussions mostly work like a web forum. But for a while, they had a reasonably usable email gateway. You could set a preference to be notified of each new conversation. You still had to click on the web link to read the conversation so far, but if you posted something, you'd get the rest of the discussion emailed to you as each message was posted. Not quite as good as a regular mailing list, but it worked pretty well. I used it for several years to keep up with the very active Toastmasters group discussions.

About a year ago, something broke in their software, and they lost the ability to send email for new conversations. I filed a trouble ticket, and got a note saying they were aware of the problem and working on it. I followed up three months later (by filing another ticket -- there's no way to add to an existing one) and got a response saying be patient, they were still working on it. 11 months later, I'm still being patient, but it's pretty clear they have no intention of ever fixing the problem.

Just recently I fiddled with something in my LinkedIn prefs, and started getting "Popular Discussions" emails every day or so. The featured "popular discussion" is always something stupid that I have no interest in, but it's followed by a section headed "Other Popular Discussions" that at least gives me some idea what's been posted in the last few days. Seemed like it might be worth clicking on the links even though it means I'd always be a few days late responding to any conversations.

Except -- none of the links work. They all go to a generic page with a red header saying "Sorry it seems there was a problem with the link you followed."

I'm reading the plaintext version of the mail they send out. I tried viewing the HTML part of the mail in a browser, and sure enough, those links worked. So I tried comparing the text links with the HTML:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken=AQEqep2nxSZJIg&amp;ek=b2_anet_digest&amp;li=82&amp;m=group_discussions&amp;ts=textdisc-6&amp;itemID=5914453683503906819&amp;itemType=member&amp;anetID=98449
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken=AQEqep2nxSZJIg&ek=b2_anet_digest&li=17&m=group_discussions&ts=grouppost-disc-6&itemID=5914453683503906819&itemType=member&anetID=98449

Well, that's clear as mud, isn't it?

HTML entity substitution

I pasted both links one on top of each other, to make it easier to compare them one at a time. That made it fairly easy to find the first difference:

Text version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&amp;t=gde&amp;midToken= ...
HTML version:
http://www.linkedin.com/e/v2?e=3x1l-hzwzd1q8-6f&t=gde&midToken= ...

Time to die laughing: they're doing HTML entity substitution on the plaintext part of their email notifications, changing & to &amp; everywhere in the link.

If you take the link from the text email and replace &amp; with &, the link works, and takes you to the specific discussion.

Pagination

Except you can't actually read the discussion. I went to a discussion that had been open for 2 days and had 35 responses, and LinkedIn only showed four of them. I don't even know which four they are -- are they the first four, the last four, or some Facebook-style "four responses we thought you'd like"? There's a button to click on to show the most recent entries, but then I only see a few of the most recent responses, still not the whole thread.

Hooray for the web -- of course, plenty of other people have had this problem too, and a little web searching unveiled a solution. Add a pagination token to the end of the URL that tells LinkedIn to show 1000 messages at once.

&count=1000&paginationToken=
It won't actually show 1000 (or all) responses -- but if you start at the beginning of the page and scroll down reading responses one by one, it will auto-load new batches. Yes, infinite scrolling pages can be annoying, but at least it's a way to read a LinkedIn conversation in order.

Making it automatic

Okay, now I know how to edit one of their URLs to make it work. Do I want to do that by hand any time I want to view a discussion? Noooo!

Time for a script! Since I'll be selecting the URLs from mutt, they'll be in the X PRIMARY clipboard. And unfortunately, mutt adds newlines so I might as well strip those as well as fixing the LinkedIn problems. (Firefox will strip newlines for me when I paste in a multi-line URL, but why rely on that?)

Here's the important part of the script:

import subprocess, sys, gtk

# Grab the X PRIMARY selection (whatever was last selected in mutt).
primary = gtk.clipboard_get(gtk.gdk.SELECTION_PRIMARY)
if not primary.wait_is_text_available() :
    sys.exit(0)
link = primary.wait_for_text()

# Strip mutt's added newlines, undo LinkedIn's &amp; substitution,
# and append the pagination token so the whole thread loads.
link = link.replace("\n", "").replace("&amp;", "&") + \
       "&count=1000&paginationToken="
subprocess.call(["firefox", "-new-tab", link])

And here's the full script: linkedinify on GitHub. I also added it to pyclip, the script I call from Openbox to open a URL in Firefox when I middle-click on the desktop.

Now I can finally go back to participating in those discussions.

September 11, 2014 07:10 PM

Jono Bacon

Ubuntu for Smartwatches?

I read an interesting article on OMG! Ubuntu! about whether Canonical will enter the wearables business, now the smartwatch industry is hotting up.

On one hand (pun intended), it makes sense. Ubuntu is all about convergence; a core platform from top to bottom that adjusts and expands across different form factors, with a core developer platform, and a focus on content.

On the other hand (pun still intended), the wearables market is another complex economy that is heavily tethered, both technically and strategically, to existing markets and devices. If we think success in the phone market is complex, success in the burgeoning wearables market is going to be just as complex too.

Now, to be clear, I have no idea whether Canonical is planning on entering the wearables market or not. It wouldn’t surprise me if this is a market of interest though as the investment in Ubuntu over the last few years has been in building a platform that could ultimately scale. It is logical to think this could map to a smartwatch as “another form factor”.

So, if technically it is doable, Canonical should do it, right?

No.

I want to see my friends and former colleagues at Canonical succeed, and this needs focus.

Great companies focus on doing a small set of things and doing them well. Spiraling off in a hundred different directions means dividing teams, dividing focus, and limiting opportunities. To use a tired saying…being a “jack of all trades and master of none”.

While all companies can be tempted in this direction, I am happy that on the client side of Canonical, the focus has been firmly placed on phone. TV has taken a back seat, tablet has taken a back seat. The focus has been on building a featureful, high-quality platform that is focused on phone, and bringing that product to market.

I would hate to think that Canonical would get distracted internally by chasing the smartwatch market while it is young. I believe it would do little but direct resources away from the major push now, which is phone.

If there is something we can learn from Apple here is that it isn’t important to be first. It is important to be the best. Apple rarely ships the first innovation, but they consistently knock it out of the park by building brilliant products that become best in class.

So, I have no doubt that the exciting new convergent future of Ubuntu could run on a watch, but let's keep our heads down and get the phone out there and rocking, and the wearables and other form factors can come later.

by jono at September 11, 2014 05:11 AM

September 09, 2014

Jono Bacon

One Simple Request

I do a podcast called Bad Voltage with a bunch of my pals. In it we cover Open Source and technology, we do interviews, reviews, and more. It is a lot of fun.

We started a contest recently in which the presenters have to take part in a debate, but with a viewpoint that is actually the opposite of what we actually think.

In the first episode of this three part series, Bryan Lunduke and Stuart Langridge duked it out. Lunduke won (seriously).

In the most recent episode, Jeremy Garcia and I went up against each other.

Sadly, my tiny opponent is beating me right now.

Thus, I ask for a favor. Go here and vote for Bacon. Doing so will make you feel great about your life, save a puppy, and potentially get you that promotion you have been wanting.

Also, for my Ubuntu friends…a vote for Bacon…is a vote for Ubuntu.

UPDATE: The stakes have been increased. Want to see me donate $300 to charity, have an awkward avatar, and pour a bucket of ice/ketchup/BBQ sauce/waste vegetables on me? Read more and then vote.

by jono at September 09, 2014 02:28 AM

September 08, 2014

Akkana Peck

Dot Reminders

I read about cool computer tricks all the time. I think "Wow, that would be a real timesaver!" And then a week later, when it actually would save me time, I've long since forgotten all about it.

After yet another session where I wanted to open a frequently opened file in emacs and thought "I think I made a bookmark for that a while back", but then decided it's easier to type the whole long pathname rather than go re-learn how to use emacs bookmarks, I finally decided I needed a reminder system -- something that would poke me and remind me of a few things I want to learn.

I used to keep cheat sheets and quick reference cards on my desk; but that never worked for me. Quick reference cards tend to be 50 things I already know, 40 things I'll never care about and 4 really great things I should try to remember. And eventually they get buried in a pile of other papers on my desk and I never see them again.

My new system is working much better. I created a file in my home directory called .reminders, in which I put a few -- just a few -- things I want to learn and start using regularly. It started out at about 6 lines but now it's grown to 12.
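
Just to give a flavor, the entries are short one-liners; a couple of examples in the spirit of mine (the emacs bookmark keys and the diff trick are two I mention below, though the exact wording here is made up):

emacs bookmarks: C-x r m to set one, C-x r b to jump to one
diff <(cmd1) <(cmd2)    -- compare the output of two commands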

Then I put this in my .zlogin (of course, you can do this for any shell, not just zsh, though the syntax may vary):

if [[ -f ~/.reminders ]]; then
  cat ~/.reminders
fi

Now, in every login shell (which for me is each new terminal window I create on my desktop), I see my reminders. Of course, I don't read them every time; but I look at them often enough that I can't forget the existence of great things like emacs bookmarks, or diff <(cmd1) <(cmd2).

And if I forget the exact keystroke or syntax, I can always cat ~/.reminders to remind myself. And after a few weeks of regular use, I finally have internalized some of these tricks, and can remove them from my .reminders file.

It's not just for tech tips, either; I've used a similar technique for reminding myself of hard-to-remember vocabulary words when I was studying Spanish. It could work for anything you want to teach yourself.

Although the details of my .reminders are specific to Linux/Unix and zsh, of course you could use a similar system on any computer. If you don't open new terminal windows, you can set a reminder to pop up when you first log in, or once a day, or whatever is right for you. The important part is to have a small set of tips that you see regularly.

September 08, 2014 03:10 AM

Elizabeth Krumbach

Simcoe’s August 2014 Checkup

This upcoming December will mark Simcoe living with the CRF diagnosis for 3 years. We’re happy to say that she continues to do well, with this latest batch of blood work showing more good news about her stable levels.

Unfortunately we brought her in a few weeks early this time following a bloody sneeze. As I’ve written earlier this year, they’ve both been a bit sneezy this year with an as yet undiagnosed issue that has been eluding all tests. Every month or so they switch off who is sneezing, but this was the first time there was any blood.

Simcoe at vet
“I still don’t like vet visits.”

Following the exam, the vet said she wasn’t worried. The bleeding was a one time thing and could have just been caused by rawness brought on by the sneezing and sniffles. Since the appointment on August 26th we haven’t seen any more problems (and the cold seems to have migrated back to Caligula).

As for her levels, it was great to see her weight come up a bit, from 9.62 to 9.94lbs.

Her BUN and CRE levels have both shifted slightly, from 51 to 59 on BUN and 3.9 to 3.8 on CRE.

BUN: 59 (normal range: 14-36)
CRE: 3.8 (normal range: .6-2.4)

by pleia2 at September 08, 2014 12:57 AM

September 02, 2014

Akkana Peck

Using strace to find configuration file locations

I was using strace to figure out how to set up a program, lftp, and a friend commented that he didn't know how to use it and would like to learn. I don't use strace often, but when I do, it's indispensable -- and it's easy to use. So here's a little tutorial.

My problem, in this case, was that I needed to find out what configuration file I needed to modify in order to set up an alias in lftp. The lftp man page tells you how to define an alias, but doesn't tell you how to save it for future sessions; apparently you have to edit the configuration file yourself.

But where? The man page suggested a couple of possible config file locations -- ~/.lftprc and ~/.config/lftp/rc -- but neither of those existed. I wanted to use the one that already existed. I had already set up bookmarks in lftp and it remembered them, so it must have a config file already, somewhere. I wanted to find that file and use it.

So the question was, what files does lftp read when it starts up? strace lets you snoop on a program and see what it's doing.

strace shows you all system calls being used by a program. What's a system call? Well, it's anything in section 2 of the Unix manual. You can get a complete list by typing: man 2 syscalls (you may have to install developer man pages first -- on Debian that's the manpages-dev package). But the important thing is that most file access calls -- open, read, chmod, rename, unlink (that's how you remove a file), and so on -- are system calls.

You can run a program under strace directly:

$ strace lftp sitename
Interrupt it with Ctrl-C when you've seen what you need to see.

Pruning the output

And of course, you'll see tons of crap you're not interested in, like rt_sigaction(SIGTTOU) and fcntl64(0, F_GETFL). So let's get rid of that first. The easiest way is to use grep. Let's say I want to know every file that lftp opens. I can do it like this:

$ strace lftp sitename |& grep open

I have to use |& instead of just | because strace prints its output on stderr instead of stdout.

That's pretty useful, but it's still too much. I really don't care to know about lftp opening a bazillion files in /usr/share/locale/en_US/LC_MESSAGES, or libraries like /usr/lib/i386-linux-gnu/libp11-kit.so.0.

In this case, I'm looking for config files, so I really only want to know which files it opens in my home directory. Like this:

$ strace lftp sitename |& grep 'open.*/home/akkana'

In other words, show me just the lines that have the word "open" followed later by the string "/home/akkana".

Digression: grep pipelines

Now, you might think that you could use a simpler pipeline with two greps:

$ strace lftp sitename |& grep open | grep /home/akkana

But that doesn't work -- nothing prints out. Why? Because grep, under certain circumstances that aren't clear to me, buffers its output, so in some cases when you pipe grep | grep, the second grep will wait until it has collected quite a lot of output before it prints anything. (This comes up a lot with tail -f as well.) You can avoid that with

$ strace lftp sitename |& grep --line-buffered open | grep /home/akkana
but that's too much to type, if you ask me.

Back to that strace | grep

Okay, whichever way you grep for open and your home directory, it gives:

open("/home/akkana/.local/share/lftp/bookmarks", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.netrc", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/home/akkana/.local/share/lftp/rl_history", O_RDONLY|O_LARGEFILE) = 5
open("/home/akkana/.inputrc", O_RDONLY|O_LARGEFILE) = 5
Now we're getting somewhere! The file where it's getting its bookmarks is ~/.local/share/lftp/bookmarks -- and I probably can't use that to set my alias.

But wait, why doesn't it show lftp trying to open those other config files?

Using script to save the output

At this point, you might be sick of running those grep pipelines over and over. Most of the time, when I run strace, instead of piping it through grep I run it under script to save the whole output.

script is one of those poorly named, ungoogleable commands, but it's incredibly useful. It runs a subshell and saves everything that appears in that subshell, both what you type and all the output, in a file.

Start script, then run lftp inside it:

$ script /tmp/lftp.strace
Script started on Tue 26 Aug 2014 12:58:30 PM MDT
$ strace lftp sitename

After the flood of output stops, I type Ctrl-D or Ctrl-C to exit lftp, then another Ctrl-D to exit the subshell script is using. Now all the strace output was in /tmp/lftp.strace and I can grep in it, view it in an editor or anything I want.

So, what files is it looking for in my home directory and why don't they show up as open attempts?

$ grep /home/akkana /tmp/lftp.strace

Ah, there it is! A bunch of lines like this:

access("/home/akkana/.lftprc", R_OK)    = -1 ENOENT (No such file or directory)
stat64("/home/akkana/.lftp", 0xbff821a0) = -1 ENOENT (No such file or directory)
mkdir("/home/akkana/.config", 0755)     = -1 EEXIST (File exists)
mkdir("/home/akkana/.config/lftp", 0755) = -1 EEXIST (File exists)
access("/home/akkana/.config/lftp/rc", R_OK) = 0

So I should have looked for access and stat as well as open. Now I have the list of files it's looking for. And, curiously, it creates ~/.config/lftp if it doesn't exist already, even though it's not going to write anything there.

So I created ~/.config/lftp/rc and put my alias there. Worked fine. And I was able to edit my bookmark in ~/.local/share/lftp/bookmarks later when I had a need for that. All thanks to strace.

September 02, 2014 07:06 PM