Planet Ubuntu California

May 27, 2015

Jono Bacon

#ISupportCommunity

So the Ubuntu Community Council has asked Jonathan Riddell to step down as a leader in the Kubuntu community. The reasoning for this can be broadly summarized as “poor conduct”.

Some members of the community have concluded that this is something of a hatchet job by the Community Council, and that Jonathan’s insistence on getting answers to tough questions (e.g. licensing, donations) has resulted in the Community Council booting him out.

I don’t believe this is true.

Just because the Community Council has not provided an extensive docket of evidence behind their decision does not equate to wrong-doing. It does not equate to corruption or malpractice.

I do sympathize with the critics though. I spent nearly eight years pretty close to the politics of Ubuntu and when I read the Community Council’s decision I understood and agreed with it. For all of Jonathan’s tremendously positive contributions to Kubuntu, I do believe his conduct and approach has sadly had a negative impact on parts of our community too.

This has nothing to do with the questions he raised; it was the way he raised them, and the inferences and accusations he made in raising such questions. We can’t have our leaders behaving like that: it sets a bad example.

As such, I understood the Community Council’s decision because I have seen these politics both up front and behind the scenes due to my close affiliation with Ubuntu and Canonical. For those people who haven’t been so close to the coalface, though, this decision from the CC feels heavy-handed, without due evidence, and emotive in response.

Thus, in conclusion, I don’t believe the CC have acted inappropriately in making this decision, but I do believe that their decision needs to be explained further. The decision needs to feel complete and authoritative; until we see further material, we are not going to improve the situation if everyone assumes the Community Council is some shadowy cabal against Jonathan and Kubuntu.

We are a community. We have more in common than what differs between us. Let’s put the hyperbole to one side and have a conversation about how we resolve this. There is an opportunity for a great outcome here: for better understanding and further improvement, but the first step is everyone understanding the perspectives of the people with opposing viewpoints.

As such #ISupportCommunity; our wider Ubuntu and Kubuntu family. Let’s work together, not against each other.

by jono at May 27, 2015 05:25 PM

May 26, 2015

Eric Hammond

Schedule Recurring AWS Lambda Invocations With The Unreliable Town Clock (UTC)

public SNS Topic with a trigger event every quarter hour

Scheduled execution of AWS Lambda functions on an hourly/daily/etc. basis has been a frequently requested feature ever since Amazon introduced the service at AWS re:Invent 2014.

Until Amazon releases a reliable, premium cron feature for AWS Lambda, I’m offering a community-built alternative which may be useful for some non-critical applications.

arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

Background

Beyond its event-driven convenience, the primary attraction of AWS Lambda is eliminating the need to maintain infrastructure to run and scale code. The AWS Lambda function code is simply uploaded to AWS and Amazon takes care of providing systems to run on, keeping it available, scaling to meet demand, recovering from infrastructure failures, monitoring, logging, and more.

The available methods to trigger AWS Lambda functions already include some powerful and convenient events like S3 object creation, DynamoDB changes, Kinesis stream processing, and my favorite: the all-purpose SNS Topic subscription.

Even so, there is a glaring need for code that wants to run at regular intervals: time-triggered, recurring, scheduled event support for AWS Lambda. Attempts to do this yourself generally end up requiring you to maintain your own supporting infrastructure, when your original goal was to eliminate infrastructure worries.

Unreliable Town Clock (UTC)

The Unreliable Town Clock (UTC) is a new, free, public SNS Topic (Amazon Simple Notification Service) that broadcasts a “chime” message every quarter hour to all subscribers. It can send the chimes to AWS Lambda functions, SQS queues, and email addresses.

You can use the chime attributes to run your code every fifteen minutes, or only run your code once an hour (e.g., when minute == "00") or once a day (e.g., when hour == "00" and minute == "00") or any other series of intervals.

You can even subscribe a function you want to run only once, at a specific time in the future: have the function ignore all invocations until the target time has passed. When it is time, it can perform its job, then unsubscribe itself from the SNS Topic.
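The filtering itself is just a comparison on the chime fields. Here is a minimal sketch of that logic in Python (field names follow the chime message format shown below; the one-shot target time is a made-up value for illustration):

def should_run_hourly(chime):
    # Run once an hour, on the hour.
    return chime.get("minute") == "00"

def should_run_daily(chime):
    # Run once a day, at midnight UTC.
    return chime.get("hour") == "00" and chime.get("minute") == "00"

def target_time_reached(chime, not_before="2015-06-01 00:00 UTC"):
    # One-shot pattern: ignore chimes until the target time has passed.
    # Lexicographic comparison works because the timestamp format is
    # fixed-width and zero-padded. After doing its work, the function
    # can unsubscribe itself from the SNS Topic.
    return chime.get("timestamp", "") >= not_before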

Connecting your code to the Unreliable Town Clock is fast and easy. No application process or account creation is required:

Example: AWS Lambda Function

These commands subscribe an AWS Lambda function to the Unreliable Town Clock:

# AWS Lambda function
lambda_function_name=YOURLAMBDAFUNCTION
account=YOURACCOUNTID
lambda_function_arn="arn:aws:lambda:us-east-1:$account:function:$lambda_function_name"

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Allow the SNS Topic to invoke the AWS Lambda function
aws lambda add-permission \
  --function-name "$lambda_function_name"  \
  --action lambda:InvokeFunction \
  --principal sns.amazonaws.com \
  --source-arn "$sns_topic_arn" \
  --statement-id $(uuidgen)

# Subscribe the AWS Lambda function to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol lambda \
  --notification-endpoint "$lambda_function_arn"

Example: Email Address

These commands subscribe an email address to the Unreliable Town Clock (useful for getting the feel, testing, and debugging):

# Email address
email=YOUREMAIL@YOURDOMAIN

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Subscribe the email address to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol email \
  --notification-endpoint "$email"

Example: SQS Queue

These commands subscribe an SQS queue to the Unreliable Town Clock:

# SQS Queue
sqs_queue_name=YOURQUEUE
account=YOURACCOUNTID
sqs_queue_arn="arn:aws:sqs:us-east-1:$account:$sqs_queue_name"
sqs_queue_url="https://queue.amazonaws.com/$account/$sqs_queue_name"

# Unreliable Town Clock public SNS Topic
sns_topic_arn=arn:aws:sns:us-east-1:522480313337:unreliable-town-clock-topic-178F1OQACHTYF

# Allow the SNS Topic to post to the SQS queue
sqs_policy='{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "*" },
    "Action": "sqs:SendMessage",
    "Resource": "'$sqs_queue_arn'",
    "Condition": {
      "ArnEquals": {
        "aws:SourceArn": "'$sns_topic_arn'"
}}}]}'
sqs_policy_escaped=$(echo $sqs_policy | perl -pe 's/"/\\"/g')
aws sqs set-queue-attributes \
  --queue-url "$sqs_queue_url" \
  --attributes '{"Policy":"'"$sqs_policy_escaped"'"}'

# Subscribe the SQS queue to the SNS Topic
aws sns subscribe \
  --topic-arn "$sns_topic_arn" \
  --protocol sqs \
  --notification-endpoint "$sqs_queue_arn"
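Once the queue is subscribed, any worker can poll it on its own schedule. Here is a rough sketch in Python using boto3 (my assumption; use whatever SQS client you prefer), with the queue URL from the commands above. Note that SNS wraps the chime in its own JSON envelope:

import json
import boto3  # assumed SDK; any SQS client works the same way

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://queue.amazonaws.com/YOURACCOUNTID/YOURQUEUE"

while True:
    resp = sqs.receive_message(QueueUrl=queue_url,
                               WaitTimeSeconds=20,
                               MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        envelope = json.loads(msg["Body"])       # SNS envelope
        chime = json.loads(envelope["Message"])  # the chime itself
        print(chime["timestamp"])
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])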

Chime message

The chime message includes convenient attributes like the following:

{
  "type" : "chime",
  "timestamp": "2015-05-26 02:15 UTC",
  "year": "2015",
  "month": "05",
  "day": "26",
  "hour": "02",
  "minute": "15",
  "day_of_week": "Tue",
  "unique_id": "2d135bf9-31ba-4751-b46d-1db6a822ac88",
  "region": "us-east-1",
  "sns_topic_arn": "arn:aws:sns:...",
  "reference": "...",
  "support": "...",
  "disclaimer": "UNRELIABLE SERVICE {ACCURACY,CONSISTENCY,UPTIME,LONGEVITY}"
}

You should only run your code’s primary function when the message type == "chime".

Other values are reserved for other message types which may include things like service notifications or alerts. Those message types may have different attributes.

It might make sense to forward non-chime messages to a human (e.g., post to an SNS Topic where you have an email address subscribed).
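Putting it together, a function subscribed with the commands above might look roughly like this. This is only a sketch, written in Python for illustration (adapt it to whichever Lambda runtime you actually use); the chime arrives wrapped in the standard SNS event record, and non-chime messages are simply logged here rather than forwarded:

import json

def handler(event, context):
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])

        if message.get("type") != "chime":
            # Reserved message types (service notices, alerts, ...):
            # log them, or forward them somewhere a human will see them.
            print("non-chime message: %s" % json.dumps(message))
            continue

        # Only do real work once an hour, on the hour.
        if message.get("minute") == "00":
            do_work(message)

def do_work(chime):
    # Placeholder for your actual job.
    print("running at %s" % chime.get("timestamp"))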

Regions

The Unreliable Town Clock is currently available in the following AWS Regions:

  • us-east-1

If you would like to use it in other regions, please let me know.

Cost

The Unreliable Town Clock is free for unlimited “lambda” and “sqs” subscriptions.

Yes. Unlimited. Amazon takes care of the scaling and does not charge for sending to these endpoints through SNS.

You may currently add “email” subscriptions, especially to test and see the message format, but if there are too many email subscribers, new subscriptions may be disabled, as it costs the sending account $0.70/year for each address at the current chime frequency.

You are naturally responsible for any charges that occur in your own accounts.

Running an AWS Lambda function four times an hour for a year results in roughly 35,000 invocations, which costs little if anything, but you do need to take care what your functions do and what resources they consume, as they run in your own AWS account.
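For reference, the arithmetic behind that figure:

# Four chimes an hour, around the clock, for a year:
print(4 * 24 * 365)   # 35040, i.e. roughly 35,000 invocations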

Source

The source code for the infrastructure of the Unreliable Town Clock is available on GitHub

https://github.com/alestic/alestic-unreliable-town-clock

You are welcome to run your own copy, but note that the current code marks the SNS Topic as public so that anybody can subscribe.

Support

The following Google Group mailing list can be used for discussion, questions, enhancement requests, and alerts about problems.

http://groups.google.com/d/forum/unreliable-town-clock

If you plan to use the Unreliable Town Clock, you should subscribe to this mailing list so that you receive service notifications (e.g., if the public SNS Topic ARN is going to change).

Disclaimer

The Unreliable Town Clock service is intended but not guaranteed to be useful. As the name explicitly states, you should consider it unreliable and should not use it for anything you consider important.

Here are some, but not all, of the dimensions in which it is unreliable:

  • Accuracy: The times messages are sent may not be the true times they indicate. Messages may be delayed, get sent early, or be duplicated.

  • Uptime: Chime messages may be skipped for short or long periods of time.

  • Consistency: The formats or contents of the messages may change without warning.

  • Longevity: The service may disappear without warning at any time.

There is no big company behind this service, just a human being. I have experience building and supporting public services used by individuals, companies, and other organizations around the world, but I’m still just one fellow, and this is just an experimental service for the time being.

Comments

What are you thinking of using recurring AWS Lambda invocations for?

Any other features you would like to see?

Original article and comments: https://alestic.com/2015/05/aws-lambda-recurring-schedule/

May 26, 2015 09:01 AM

May 25, 2015

Eric Hammond

Debugging AWS Lambda Invocations With An Echo Function

As I create architectures that include AWS Lambda functions, I find there are situations where I just want to know that the AWS Lambda function is getting invoked and to review the exact event data structure that is being passed in to it.

I found that a simple “echo” function can be dropped in to copy the AWS Lambda event to the console log (CloudWatch Logs). It’s easy to review this output to make sure the function is getting invoked at the right times and with the right data.

There are probably dozens of debug/echo AWS Lambda functions floating around out there, but for my own future reference, I have created a GitHub repo with a four line echo function that does the trick for me. I’ve included a couple scripts to install and uninstall the AWS Lambda function in an account, including the required IAM role and policies.
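The repo is the canonical version, but the idea is small enough to sketch. Something along these lines (shown in Python purely for illustration; the function in the repo targets whatever Lambda runtime it was written for):

import json

def handler(event, context):
    # Dump the incoming event to CloudWatch Logs so you can see exactly
    # what the invoking service passed in, then echo it back.
    print(json.dumps(event, indent=2, default=str))
    return event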

Here’s the repo for the lambda-echo AWS Lambda function:

https://github.com/alestic/lambda-echo

The README.md provides instructions on how to install and test.

Note: Once you install an AWS Lambda function, there is no reason to delete it if you think it might be useful in the future. It costs nothing to let Amazon store it for you and keep it available for when you want to run it again.

Amazon has indicated that they may prune AWS Lambda functions that go unused for long periods of time, but I haven’t seen this happen in practice yet.

Is there a standard file structure yet for a directory with AWS Lambda function source and the related IAM role/policies? Should I convert this to the format expected by Mitch Garnaat’s kappa perhaps?

Original article and comments: https://alestic.com/post/2015/05/aws-lambda-echo/

May 25, 2015 08:03 AM

May 24, 2015

Elizabeth Krumbach

Liberty OpenStack Summit days 3-5

Summiting continued! The final three days of the conference offered two days of OpenStack Design Summit discussions and working sessions on specific topics, and Friday was spent doing a contributors meetup so we could have face time with people we’re working with on projects.

Wednesday began with a team breakfast, where over 30 of us descended upon a breakfast restaurant and had a lively morning. Unfortunately it ran a bit long and made us a bit late for the beginning of summit stuff, but the next Infrastructure work session was fully attended! The session sought to take some next steps with our activity tracking mechanisms, none of which are currently part of the OpenStack Infrastructure. Currently there are several different types of stats being collected: reviewstats, which is hosted by a community member and focuses specifically on reviews; the stats produced by Bitergia (here), which are somewhat generic but help compare OpenStack to other open source projects; and Stackalytics, which is crafted specifically for the OpenStack community. There seems to be value in hosting various metric types, mostly so comparisons can be made across platforms if they differ in any way. The consensus of the session was to first move forward with moving Stackalytics into our infrastructure, since so many projects find such value in it. Etherpad here: YVR-infra-activity-tracking


With this view from the work session room, it’s amazing we got anything done

Next up was QA: Testing Beyond the Gate. In OpenStack there is a test gate that all changes must pass in order for a change to be merged. In the past cycle periodic and post-merge tests have also been added, but it’s been found that if a code merge isn’t dependent upon these passing, not many people pay attention to these additional tests. The result of the session is a proposed dashboard for tracking these tests so that there’s an easier view into what they’re doing and whether they’re failing, empowering developers to fix them up. Tracking of third party testing in this, or a similar, tracker was also discussed as a proposal once the infra-run tests are being accounted for. Etherpad here: YVR-QA-testing-beyond-the-gate

The QA: DevStack Roadmap session covered some of the general cleanup that typically needs to be done in DevStack, but then also went into some of the broader action items, including improving the reliability of the Centos tests run against it that are currently non-voting, pulling some things out of DevStack to support them as plugins as we move into a Big Tent world, and working out how to move forward with Grenade. Etherpad here: YVR-QA-Devstack-Roadmap

I then attended QA: QA in the Big Tent. In the past cycle, OpenStack dropped the long process of being accepted into OpenStack as an official project and streamlined it so that competing technologies are now all in the mix; we’re calling it the Big Tent, as we’re now including everyone. This session focused on how to support the QA needs now that OpenStack is not just a slim core of a few projects. The general idea from a QA perspective is that they can continue to support the things-everyone-uses (nova, neutron, glance… an organically evolving list) and improve pluggable support for projects beyond that so they can help themselves to the QA tools at their disposal. Etherpad here: YVR-QA-in-the-big-tent

With sessions behind me, I boarded a bus for the Core Reviewer Party, hosted at the Museum of Anthropology at UBC. As party venues go, this was a great one. The museum was open for us to explore, and they also offered tours. The main event took place outside, where they served design-your-own curry seafood dishes, bison, cheeses and salmon. Of course no OpenStack event would be complete without a few bars around serving various wines and beer. There was an adjacent small building where live music was playing and there was a lot of space to walk around, catch the sunset and enjoy some gardens. I spent much of my early evening with friends from Time Warner Cable, and rounded things off with several of my buddies from HP. This ended up being a get-back-after-midnight event for me, but it was totally worth it to spend such a great time with everyone.

Thursday morning kicked off with a series of fishbowl sessions where the Infrastructure team was discussing projects we have in the works. First up was Infrastructure: Zuul v3. Zuul is our pipeline-oriented project gating system, which currently works by facilitating the running of tests and automated tasks in response to Gerrit events. Right now it sends jobs off to Gearman for launching via Jenkins to our fleet of waiting nodes, but we’re really using Jenkins as a shim here, not really taking advantage of the built in features that Jenkins offers. We’re also in need of a system that better supports multi-tenancy and multi-node jobs and which can scale as OpenStack continues to grow, particularly with Big Tent. This session discussed the end game of phasing out Jenkins in favor of a more Zuul-driven workflow and more immediate changes that may be made to Nodepool and smaller projects like Zuul-merger to drive our vision. Etherpad here: YVR-infra-zuulv3

Everyone loves bug reporting and task tracking, right? In the next session, Infrastructure: Task tracking, that was our topic. We did an experiment with the creation of Storyboard as our homebrewed solution to bug and task tracking, but in spite of valiant efforts by the small team working on it, they were unable to gain more contributors and the job was simply too big for the size of the team doing the work. As a result, we’re now back to looking at solutions other than Canonical’s hosted Launchpad (which is currently used). The session went through some basic evaluation of a few tools, and at the end there was some consensus to work toward bringing up a more battle-hardened and Puppetized instance of Maniphest (from Phabricator) so that teams can see if it fits their needs. Etherpad here: YVR-infra-task-tracking

The morning continued with an Infrastructure: Infra-cloud session. The Infrastructure team has about 150 machines in a datacenter that have been donated to us by HP. The session focused on how we can put these to use as Nodepool instances by running OpenStack on our own and adding that “infra-cloud” to the providers in Nodepool. I’m particularly interested in this, given some of my history with getting TripleO into testing (so I have deployed OpenStack many, many times!), and in general I’m eager to learn even more about production OpenStack deployments. So it looks like I’ll be providing Infra-brains to Clint Byrum, who is otherwise taking the lead here. To keep in sync with other things we host, we’ll be using Puppet to deploy OpenStack, so I’m thankful for the expertise of people like Colleen Murphy who just joined our team to help with that. Etherpad here: YVR-infra-cloud

Next up was the Infrastructure: Puppet testing session. It was great to have some of the OpenStack Puppet folks in the room so they could talk some about how they’re using beaker-rspec in our infra for testing the OpenStack modules themselves. Much of the discussion centered around whether we want to follow their lead, or do something else, leveraging our current system of node allocation to do our own module testing. We also have a much commented on spec up for proposal here. The result of the discussion was that it’s likely that we’ll just follow the lead of the OpenStack Puppet team. Etherpad here: kilo-infra-puppet-testing

That afternoon we had another Infrastructure: Work session where we focused on the refactor of portions of the system-config OpenStack module puppet scripts, and some folks worked on getting the testing infrastructure that was talked about earlier up and running. I took the opportunity to do some reviews of the related patches and help a new contributor do some review – she even submitted a patch that was merged the next morning! Etherpad for the work session here: YVR-infra-puppet-openstackci

The last session I attended that day was QA: Liberty Priorities. It wasn’t one I strictly needed to be in, but I hadn’t attended a session in room 306 yet, and it was the famous gosling room! The room had a glass wall that looked out onto a roof where a couple of geese had their babies and would routinely walk by and interrupt the session, because everyone would stop, coo and take pictures of them. So I finally got to see the babies! The actual session collected the pile of to-do list items generated at the summit, which I got roped into helping with, and prioritized them. Oh, and they gave me a task to help with. I just wanted to see the geese! Etherpad with the priorities is here: YVR-QA-Liberty-Priorities


Photo by Thierry Carrez (source)

Thursday night I ended up having dinner with the moderator of our women of OpenStack panel, Beth Cohen. We went down to Gastown to enjoy a dinner of oysters and seafood and had a wonderful time. It was great to swap tech (and women in tech) stories and chat about our work.

Friday! The OpenStack conference itself ended on Thursday, so it was just ATCs (Active Technical Contributors) attending for the final day of the Design Summit. So things were much quieter and the agenda was full of contributors meetups. I spent the day in the Infrastructure, QA and Release management contributors meetup. We had a long list of things to work on, but I focused on the election tooling, which I followed up on the list and then later chatted about with the author of the proposed tooling. My afternoon was spent working on the translations infrastructure with Steve Kowalik, who works with me on OpenStack infra, and Carlos Munoz of the Zanata team. We were able to work through the outstanding Zanata bugs and make some progress with how we’re going to tackle everything; it was a productive afternoon, and it’s always a pleasure to get together with the folks I work with online every day.

That evening, as we left the closing conference center, I met up with several colleagues for an amazing sushi dinner in downtown Vancouver. A perfect, low-key ending to an amazing event!

by pleia2 at May 24, 2015 02:15 AM

May 21, 2015

Elizabeth Krumbach

Liberty OpenStack Summit day 2

My second day of the OpenStack summit came early with the Women of OpenStack working breakfast at 7AM. It kicked off with a series of lightning talks that covered impostor syndrome, growing as a technical leader (get yourself out there, ask questions) and suggestions from a tech start-up founder about being an entrepreneur. From there we broke up into groups to discuss what we’d like to see from the Women of OpenStack group in the next year. The big take-aways were around mentoring women who are new to our community as they start to get involved with all the OpenStack tooling, and more generally giving voice to the women in our community.

Keynotes kicked off at 9AM with Mark Collier announcing the next OpenStack Summit venues: Austin for the spring 2016 summit and Barcelona for the fall 2016 summit. He then went into a series of chats and demos related to using containers, which may be the Next Big Thing in cloud computing. During the session we heard from a few companies who are already using OpenStack with containers (mostly Docker and Kubernetes) in production (video). The keynotes continued with one by Intel, where the speaker took time to talk about how valuable feedback from operators has been in the past year, and appreciation for the new diversity working group (video). The keynote from EBay/Paypal showed off the really amazing progress they’ve made with deploying OpenStack, which now runs on over 300k cores and pretty much powers Paypal at this point (video). Red Hat’s keynote focused on customer engagement as OpenStack matures (video). The keynotes wrapped up with one from NASA JPL, which mostly talked about the awesome Mars projects they’re working on and the massive data requirements therein (video).


OpenStack at EBay/Paypal

Following keynotes, Tuesday really kicked off the core OpenStack Design Summit sessions, where I focused on a series of Cross Project Workshops. First up was Moving our applications to Python 3. This session focused on the migration to Python 3 for functional and integration testing in OpenStack projects now that Oslo libraries are working in Python 3. The session mostly centered around strategy: how to incrementally move projects over and the requirements for the move (2.x dependencies, changes to Ubuntu required to effectively use Python 3.4 for gating, etc). Etherpad here: liberty-cross-project-python3. I then attended Functional Testing Show & Tell, which was a great session where projects shared their stories about how they do functional (and some unit) testing in their projects. The Etherpad for this one is super valuable for seeing what everyone reports; it’s available here: liberty-functional-testing-show-tell.

My Design Summit sessions were broken up nicely with a lunch with my fellow panelists, and then the Standing Tall in the Room – Sponsored by the Women of OpenStack panel itself at 2PM (video). It was wonderful to finally meet my fellow panelists in person; the session was well-attended and we got a lot of positive feedback from it. I tackled a question about shyness with regard to giving presentations here at the OpenStack Summit, where I pointed at a webinar about submitting a proposal, published by the Women of OpenStack in January. I also talked about difficulties related to the first time you write to the development mailing list, participate on IRC and submit code for review. I used an example of one of my early patches needing 28 revisions, and audience member Steve Martinelli helpfully tweeted about a change that took 63. Diving in to all these things helps, as does supporting the ideas of and doing code review for others in your community. Of course my fellow panelists had great things to say too, watch the video!


Thanks to Lisa-Marie Namphy for the photo!

Panel selfie by Rainya Mosher

Following the panel, it was back to the Design Summit. The In-team scaling session was an interesting one with regard to metrics. We’ve learned that regardless of project size, socially within OpenStack it seems difficult for any project to rise above 14 core reviewers while keeping enough common culture, focus and quality. The solutions presented during the session tended to be heavy on technology (changes to ACLs, splitting up the repo to trusted sub-groups). It’ll be interesting to see how the scaling actually pans out, as there seem to be many more social and leadership solutions to the problem of patches piling up and not having enough core folks to review them. There was also some discussion about the specs process, but the problems and solutions seem to vary heavily between teams, so it seemed unlikely that a single solution to unprocessed specs would be universal; it does seem like the process is often valuable for certain things, though. Etherpad here: liberty-cross-project-in-team-scaling.

My last session of the day was OpenStack release model(s). Changing the time-based release cadence itself would require broader participation, so much of the discussion centered around the ability for projects to independently do intermediary releases outside of the release cycle and how that could be supported, but I think the jury is still out on a solution there. There was also talk about how to generally handle release tracking, as it’s difficult to predict what will land, so much so that people have stopped relying on the predictions, and that bled into a discussion about release content reporting (release changelogs). In all, an interesting session with some good ideas about how to move forward. Etherpad here: liberty-cross-project-release-models.

I spent the evening with friends and colleagues at the HP+Scality hosted party at Rocky Mountaineer Station. BBQ, food trucks and getting to see non-Americans/non-Canadians try s’mores for the first time, all kinds of fun! Fortunately I managed to make it back to my hotel at a reasonable hour.

by pleia2 at May 21, 2015 10:03 PM

May 20, 2015

Elizabeth Krumbach

Liberty OpenStack Summit day 1

This week I’m at the OpenStack Summit. It’s the most wonderful, exhausting and valuable-to-my-job event I go to, and it happens twice a year. This time it’s being held in the beautiful city of Vancouver, BC, and the conference venue is right on the water, so we get to enjoy astonishing views throughout the day.


OpenStack Summit: Clouds inside and outside!

Jonathan Bryce, Executive Director of the OpenStack Foundation, kicked off the event with an introduction to the summit, the success that OpenStack has built in the Process, Store and Move digital economy, and some announcements, among which was the success found with federated identity support in Keystone, where Morgan Fainberg, PTL of Keystone, helped show off a demonstration. The first company keynote was presented by Digitalfilm Tree, who did a really fun live demo of shooting video at the summit here in Vancouver, using their OpenStack-powered cloud so it was accessible in Los Angeles for editorial review and then retrieving and playing the resulting video. They shared that a recent show that was shot in Vancouver used this very process for the daily editing and that they had previously used courier services and staff-hopping-on-planes to do the physical moving of digital content because it was too much for their previous systems. Finally, Comcast employees rolled onto the stage on a couch to chat about how they’ve expanded their use of OpenStack since presenting at the summit in Portland, Oregon. Video of all of this is available here.

Next up for keynotes was Walmart, who talked about how they moved to OpenStack and used it to handle all the load their sites experienced over the 2014 holiday season, and how OpenStack has met their needs, video here. Then came HP’s keynote, which really focused on the community and the choices available in OpenStack, where speaker Mark Interrante said “OpenStack should be simpler, you shouldn’t need a PhD to run it.” Bravo! He also pointed out that HP’s booth had a demonstration of OpenStack running on various hardware, an impressively inclusive step for a company that also sells hardware. Video for HP’s keynote here (I dig the Star Wars reference). Keynotes continued with one from TD Bank, which I became familiar with when they bought up the Commerce branches in the Philadelphia region, but have since learned are a major Canadian bank (oooh, TD stands for Toronto Dominion!). The most fascinating thing about their move to the cloud for me is how they’ve imposed a cloud-first policy across their infrastructure, where teams must have a really good reason and approval in order to do more traditional bare-metal, one-off deployments for their applications, so it’s rare, video. Cybera was the next keynote and perhaps the most inspiring from a humanitarian standpoint. As one of the earliest OpenStack adopters, Cybera is a non-profit that seeks to improve access to the internet and the valuable resources therein, which presenter Robin Winsor stressed in his keynote is as important now as the physical infrastructure (railroads, highways, etc.) built in North America in the 19th and 20th centuries, video here. The final keynote was from SolidFire, who discussed the importance of solid storage as a basis of a successful deployment, video here.

Following the keynotes, I headed over to the Virtual Networking in OpenStack: Neutron 101 session (video), where Kyle Mestery and Mark McClain gave a great overview of how Neutron works, with various diagrams showing the agents, and covered the improvements made in Kilo with various new drivers and plugins. The video is well worth the watch.

A chunk of my day was then reserved for translations. My role here is as the Infrastructure team contact for the translations tooling, so it’s also been a crash course in learning about translations workflows, since I only speak English. Each session, even those unrelated to the actual infrastructure-focused tooling, has been valuable for learning. In the first translation team working session the focus was translations glossaries, which are used to help give context/meaning to certain English words where the meaning can be unclear or otherwise needs to be defined in terms of the project. There was representation from the Documentation team, which was valuable as they maintain a docs-focused glossary (here) that is better maintained and has a bigger team behind it than the proposed separate translations glossary would have. Interesting discussion, particularly as my knowledge of translations glossaries was limited. Etherpad here: Vancouver-I18n-WG-session.

I hosted the afternoon session on Building Translation Platform. We’re migrating the team to Zanata and have been fortunate to have Carlos Munoz, one of the developers on Zanata, join us at every summit since Atlanta. They’ve been one of the most supportive upstreams I’ve ever worked with, prioritizing our bug reports and really working with us to make sure our adoption is a success. The session itself reviewed the progress of our migration and set some deadlines for having translators begin the testing/feedback cycle. We also talked about hosting a Horizon instance in infra, refreshed daily, so that translators can actually see where translations are most needed via the UI and can prioritize appropriately. Finally, it was a great opportunity to get feedback from translators about what they need from the new workflow and have Carlos there to answer questions and help prioritize bugs. Etherpad here: Vancouver-I18n-Translation-platform-session.

My last translations-related thing of the day was Here be dragons – Translating OpenStack (slides). This was a great talk by Łukasz Jernaś that began with some benefits of translations work and then went into best practices and tips for working with open source translations and OpenStack specifically. It was another valuable session for me as the tooling contact because it gave me insight into some of the pain points and how appropriate it would be to address these with tooling vs. social changes to translations workflows.

From there I went back to general talks, attending Building Clouds with OpenStack Puppet Modules by Emilien Macchi, Mike Dorman and Matt Fischer (video). The OpenStack Infrastructure team is looking at building our own infra-cloud (we have a session on it later this week) and the workflows and tips that this presentation gave would also be helpful to me in other work I’ve been focusing on.

The final session I wandered into was a series of Lightning Talks, put together by HP. They had a great lineup of speakers from various companies and organizations. My evening was then spent at an HP employee gathering, but given my energy level and planned attendance at the Women of OpenStack breakfast at 7AM the following morning I headed back to my hotel around 9PM.

by pleia2 at May 20, 2015 11:26 PM

May 18, 2015

Eric Hammond

Alestic.com Site Redesign

The Alestic.com web site has been redesigned. The old design was going on 8 years old. The new design is:

Ok, so I still have a little improvement remaining in the fast dimension, but at least the site is static now and served through a CDN.

Since fellow geeks care, here are the technologies currently employed:

Simple, efficient, and gets the job done.

The old site is available at http://old.alestic.com for a while.

Questions, comments, and suggestions in the comments below.

Original article and comments: https://alestic.com/post/2015/05/blog-redesign/

May 18, 2015 07:10 AM

May 16, 2015

Elizabeth Krumbach

Xubuntu sweatshirt, Wily, & Debian Jessie Release

People like shirts, stickers and goodies to show support of their favorite operating system, and though the Xubuntu project has been slower than our friends over at Kubuntu at offering them, we now have a decent line-up offered by companies we’re friendly with. Several months ago the Xubuntu team was contacted by Gabor Kum of HELLOTUX to see if we’d be interested in offering shirts through their site. We were indeed interested! So after he graciously sent our project lead a polo shirt to evaluate, we agreed to start offering his products on our site, alongside the others. See all products here.

Polos aren’t really my thing, so when the Xubuntu shirts went live I ordered the Xubuntu sweater. Now a language difference may be in play here, since I’d call it a sweatshirt with a zipper, or a light jacket, or a hoodie without a hood. But it’s a great shirt, I’ve been wearing it regularly since I got it in my often-chilly city of San Francisco. It fits wonderfully and the embroidery is top notch.

Xubuntu sweatshirt
Close-up of HELLOTUX Xubuntu embroidery

In other Ubuntu things, given my travel schedule Peter Ganthavorn has started hosting some of the San Francisco Ubuntu Hours. He hosted one last month that I wasn’t available for, and then another this week which I did attend. Wearing my trusty new Xubuntu sweatshirt, I also brought along my Wily Werewolf to his first Ubuntu Hour! I picked up this fluffy-yet-fearsome werewolf from Squishable.com, which is also where I found my Natty Narwhal.

When we wrapped up the Ubuntu Hour, we headed down the street to our favorite Chinese place for Linux meetings, where I was hosting a Bay Area Debian Meeting and Jessie Release Party! I was pretty excited about doing this: since the Toy Story character Jessie is a popular one, I jumped at the opportunity to pick up some party supplies to mark the occasion, and ended up with a collection of party hats and notepads:

There were a total of 5 of us there, long time BAD member Michael Paoli being particularly generous with his support of my ridiculous hats:

We had a fun time, welcoming a couple of new folks to our meeting as well. A few more photos from the evening here: https://www.flickr.com/photos/pleia2/sets/72157650542082473

Now I just need to actually upgrade my servers to Jessie!

by pleia2 at May 16, 2015 03:09 AM

May 15, 2015

Akkana Peck

Of file modes, umasks and fmasks, and mounting FAT devices

I have a bunch of devices that use VFAT filesystems. MP3 players, camera SD cards, SD cards in my Android tablet. I mount them through /etc/fstab, and the files always look executable, so when I ls -F them, they all have asterisks after their names. I don't generally execute files on these devices; I'd prefer the files to have a mode that doesn't make them look executable.

I'd like the files to be mode 644 (or 0644 in most programming languages, since it's an octal, or base 8, number). 644 in binary is 110 100 100, or as the Unix ls command puts it, rw-r--r--.

There's a directive, fmask, that you can put in fstab entries to control the mode of files when the device is mounted. (Here's Wikipedia's long umask article.) But how do you get from the mode you want the files to be, 644, to the mask?

The mask (which corresponds to the umask command) represents the bits you don't want to have set. So, for instance, if you don't want the world-execute bit (1) set, you'd put 1 in the mask. If you don't want the world-write bit (2) set, as you likely don't, put 2 in the mask. So that's already a clue that I'm going to want the rightmost octal digit to be 3: I don't want files mounted from my MP3 player to be either world writable or executable.

But I also don't want to have to puzzle out the details of all nine bits every time I set an fmask. Isn't there some way I can take the mode I want the files to be -- 644 -- and turn them into the mask I'd need to put in /etc/fstab or set as a umask?

Fortunately, there is. It seemed like it ought to be straightforward, but it took a little fiddling to get it into a one-line command I can type. I made it a shell function in my .zshrc:

# What's the complement of a number, e.g. the fmask in fstab to get
# a given file mode for vfat files? Sample usage: invertmask 755
invertmask() {
    python -c "print '0%o' % (~(0777 & 0$1) & 0777)"
}

This takes whatever argument I give to it -- $1 -- and takes only the three rightmost octal digits from it, (0777 & 0$1). It takes the bitwise NOT of that, ~. But the result of that is a negative number, and we only want the three rightmost octal digits of the result, (result) & 0777, expressed as an octal number -- which we can do in python by printing it as %o. Whew!

Here's a shorter, cleaner looking alias that does the same thing, though it's not as clear about what it's doing:

invertmask1() {
    python -c "print '0%o' % (0777 - 0$1)"
}
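As a quick sanity check, here's the same computation written out for mode 644, using Python 3's 0o octal syntax rather than the Python 2 literals the shell functions use:

mode = 0o644                        # rw-r--r--
fmask = ~mode & 0o777               # complement, keeping only the low nine bits
print(format(fmask, "o"))           # -> 133
print(format(0o777 - mode, "o"))    # same answer by subtraction -> 133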

So now, for my MP3 player I can put this in /etc/fstab:

UUID=0000-009E /mp3 vfat user,noauto,exec,fmask=133,shortname=lower 0 0

May 15, 2015 04:27 PM

May 11, 2015

Elizabeth Krumbach

OpenStack events, anniversary & organization, a museum and some computers & cats

I’ve been home for just over 3 weeks. I thought things would be quieter event-wise, but I have attended 2 OpenStack meetups since getting home, the first right after getting off the plane from South Carolina. My colleague and Keystone PTL Morgan Fainberg was giving a presentation on Keystone, and I had the rare opportunity to finally meet a scholarship winner who I’ve been mentoring at work. It was great to meet up and see some of the folks who I only see at conferences, including other colleagues from HP. Plus, Morgan’s presentation on Keystone was great and the audience had a lot of good questions. Video of the presentation is here and slides are available here.


With my Helion mentee!

This past week I went to the second meetup, this time over at Walmart Labs, just a quick walk from the Sunnyvale Caltrain station. For this meetup I was on a mainstage panel where discussions covered improvements to OpenStack in the Kilo release (including the continued rise of third party testing, which I was able to speak to), the new Big Tent approach to OpenStack project adoption and how baremetal is starting to change the OpenStack landscape. I was also able to meet some of the really smart people working at Walmart Labs, and learned that all of walmart.com is running on top of OpenStack (this article from March talks about it and they’ll be doing a session on it at the upcoming OpenStack Summit in Vancouver).


Panel at Walmart Labs

In other professional news, the work I did in Oman earlier this year continues to bear fruit. On April 20th issue #313 of the Sultan Qaboos University Horizon newsletter was published with my interview (8M PDF here). They were kind enough to send me a few paper copies which I received on Friday. The interview touched upon key points that I spoke on during my presentation back in February, focusing on personal and business reasons for open source contributions.

Personally, MJ and I celebrated our second wedding anniversary with a fantastic meal at Murray Circle Restaurant where we sat on the porch and enjoyed our dinner with a nighttime view of the Golden Gate Bridge. We also recently agreed to start a diet together, largely going back to our pre-wedding diet that we both managed to lose a lot of weight on. Health-wise I continue to go out running, but running isn’t enough to help me to lose weight. I’m largely replacing starches with vegetables and reducing the sugar in my diet. Finally, we’ve been hacking our way through a massive joint to do list that’s been haunting us for several months now. Most of the tasks are home-based, from things like painting we need to get done to storage clean-outs. I don’t love that we have so much to do (don’t other adults get to have fun on weekends?), but finally having it organized and a plan for tackling it has reduced my stress incredibly.


2nd anniversary dinner

We do actually get to have fun on weekends, Saturday at least. We’ve continued to take Saturdays off together to attend services, have a nice lunch together and spend some time relaxing, whether that’s catching up on some shows together or visiting a local museum. Last weekend we had the opportunity of finally going to the Cable Car Museum here in San Francisco. Given my love for all things rail, it’s astonishing that I never made it up there before. The core of the museum is the above-ground, in-building housing for the four cables that run the three cable car lines, and then exhibits are built around it. It’s a fantastic little museum, and entrance is free.

I also picked up some beautifully 3d printed cable car earrings and matching necklace produced by Freeform Ind. I loved their stuff so much that I found their shop online and picked up some other local landmark jewelry.

More photos from our trip to the Cable Car Museum are available here: https://www.flickr.com/photos/pleia2/sets/72157652325687332

We’ve had some computer fun lately. MJ has finally ordered a replacement 1U server for the old one that he has co-located in Fremont. Burn-in testing happened this weekend but there are some more harddrive-related pieces that we’re still waiting on to get it finished up. We’re aiming for getting it installed at the datacenter in June. I also replaced the old Pentium 4 that I’ve been using as a monitoring server and backups machine. It was getting quite old and unusable as a second desktop, even when restricted to following social media accounts and watching videos here and there. It’s now been replaced with a refurbished HP DC6200 from 2011, which has an i3 processor and I bumped it up to 8G of RAM that I had laying around from when I maxed out my primary desktop with 16G. So far so good, I moved over the harddrive from the old machine and it’s been running great.


HP DC6200

In the time between work and other things, I’ve been watching The Good Wife on my own and Star Trek: Voyager with MJ. Also, hanging out with my darling kitties. One evening I got this epic picture of Caligula:

This week I’m hosting an Ubuntu Hour and Debian Dinner where we’ll celebrate the release of Debian 8 “Jessie”. I’ve purchased Jessie (cowgirl from Toy Story 2 and 3) party hats to mark the occasion. At the break of dawn on Sunday I’ll be boarding a plane to go to the OpenStack Summit in Vancouver. I’ve never been to Vancouver, so I’m spending Sunday there and staying until late on the following Saturday night, so I hope to have time to see some of the city. After this trip, I’m staying home until July! Thank goodness, I can definitely use the down time to work on my book.

by pleia2 at May 11, 2015 03:07 AM

May 06, 2015

Akkana Peck

Tips for passing Google's "Mobile Friendly" tests

I saw on Slashdot that Google is going to start down-rating sites that don't meet its criteria of "mobile-friendly": Are you ready for Google's 'Mobilegeddon' on Tuesday?. And from the Slashdot discussion, it was pretty clear that Google's definition included some arbitrary hoops to jump through.

So I headed over to Google's Mobile-friendly test to check out some of my pages.

Now, most of my website seemed to me like it ought to be pretty mobile friendly. It's size agnostic: I don't specify any arbitrary page widths in pixels, so most of my pages can resize down as far as necessary (I was under the impression that was what "responsive design" meant for websites, though I've been doing it for many years and it seems now that "responsive design" includes a whole lot of phone-specific tweaks and elaborate CSS for moving things around based on size.) I also don't set font sizes that might make the page less accessible to someone with vision problems -- or to someone on a small screen with high pixel density. So I was pretty confident.

[Google's mobile-friendly test page] I shouldn't have been. Basically all of my pages failed. And in chasing down some of the problems I've learned a bit about Google's mobile rules, as well as about some weird quirks in how current mobile browsers render websites.

Basically, all of my pages failed with the same three errors:

  • Text too small to read
  • Links too close together
  • Mobile viewport not set

What? I wasn't specifying text size at all -- if the text is too small to read with the default font, surely that's a bug in the mobile browser, not a bug in my website. Same with links too close together, when I'm using the browser's default line spacing.

But it turned out that the first two points were meaningless. They were just a side effect of that third error: the mobile viewport.

The mandatory meta viewport tag

It turns out that any page that doesn't add a new meta tag, called "viewport", will automatically fail Google's mobile friendly test and be downranked accordingly. What's that all about?

Apparently it's originally Apple's fault. iPhones, by default, pretend their screen is 980 pixels wide instead of the actual 320 or 640, and render content accordingly, and so they shrink everything down by a factor of 3 (980/320). They do this assuming that most website designers will set a hard limit of 980 pixels (which I've always considered to be bad design) ... and further assuming that their users care more about seeing the beautiful layout of a website than about reading the website's text.

And Google apparently felt, at some point during the Android development process, that they should copy Apple in this silly behavior. I'm not sure when Android started doing this; my Android 2.3 Samsung doesn't do it, so it must have happened later than that.

Anyway, after implementing this, Apple then introduced a meta tag you can add to an HTML file to tell iPhone browsers not to do this scaling, and to display the text at normal text size. There are various forms for this tag, but the most common is:

<meta name="viewport" content="width=device-width, initial-scale=1">
(A lot of examples I found on the web at first suggested this: <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"> but don't do that -- it prevents people from zooming in to see more detail, and hurts the accessibility of the page, since people who need to zoom in won't be able to. Here's more on that: Stop using the viewport meta tag (until you know how to use it).)

Just to be clear, Google is telling us that in order not to have our pages downgraded, we have to add a new tag to every page on the web to tell mobile browsers not to do something silly that they shouldn't have been doing in the first place, and which Google implemented to copy a crazy thing Apple was doing.

How width and initial-scale relate

Documentation on how width and initial-scale relate to each other, and which takes precedence, is scant. Apple's documentation on the meta viewport tag says that setting initial-scale=1 automatically sets width=device-width. That implies that the two are basically equivalent: that they're only different if you want to do something else, like set a page width in pixels (use width=) or set the width to some ratio of the device width other than 1 (use initial-scale=).

That means that using initial-scale=1 should imply width=device-width -- yet nearly everyone on the web seems to use both. So I'm doing that, too. Apparently there was once a point to it: some older iPhones had a bug involving switching orientation to landscape mode, and specifying both initial-scale=1 and width=device-width helped, but supposedly that's long since been fixed.

initial-scale=2, by the way, sets the viewport to half what it would have been otherwise; so if the width would have been 320, it sets it to 160, so you'll see half as much. Why you'd want to set initial-scale to anything besides 1 in a web page, I don't know.

If the width specified by initial-scale conflicts with that specified by width, supposedly iOS browsers will take the larger of the two, while Android won't accept a width directive less than 320, according to Quirks mode: testing Meta viewport.

It would be lovely to be able to test this stuff; but my only Android device is running Android 2.3, which doesn't do all this silly zooming out. It does what a sensible small-screen device should do: it shows text at normal, readable size by default, and lets you zoom in or out if you need to.

(Only marginally related, but interesting if you're doing elaborate stylesheets that take device resolution into account, is A List Apart's discussion, A Pixel Identity Crisis.)

Control width of images

[Image with max-width 100%] Once I added meta viewport tags, most of my pages passed the test. But I was seeing something else on some of my photo pages, as well as blog pages where I have inline images:

  • Content wider than screen
  • Links too close together

Image pages are all about showing an image. Many of my images are wider than 320 pixels ... and thus get flagged as too wide for the screen. Note the scrollbars, and how you can only see a fraction of the image.

There's a simple way to fix this, and unlike the meta viewport thing, it actually makes sense. The solution is to force images to be no wider than the screen with this little piece of CSS:

<style type="text/css">
  img { max-width: 100%; height: auto; }
</style>

[Image with max-width 100%] I've been using similar CSS in my RSS reader for several months, and I know how much better it made the web, on news sites that insist on using 1600 pixel wide images inline in stories. So I'm happy to add it to my photo pages. If someone on a mobile browser wants to view every hair in a squirrel's tail, they can still zoom in to the page, or long-press on the image to view it at full resolution. Or rotate to landscape mode.

The CSS rule works for those wide page banners too. Or you can use overflow: hidden if the right side of your banner isn't all that important.

Anyway, that takes care of the "page too wide" problem. As for the "Links too close together" warning that remained even after I added the meta viewport tag, that was just plain bad HTML and CSS, showing that I don't do enough testing on different window sizes. I fixed it so the buttons lay out better and don't draw on top of each other on super narrow screens, which I should have done long ago. Likewise for some layout problems I found on my blog.

So despite my annoyance with the whole viewport thing, Google's mandate did make me re-examine some pages that really needed fixing, and should have improved my website quite a bit for anyone looking at it on a small screen. I'm glad of that.

It'll be a while before I have all my pages converted, especially that business of adding the meta tag to all of them. But readers, if you see usability problems with my site, whether on mobile devices or otherwise, please tell me about them!

May 06, 2015 09:48 PM

April 30, 2015

iheartubuntu

How To Install BitMessage


If you are ever concerned about private messaging, BitMessage offers an easy solution. Bitmessage is a P2P communications protocol used to send encrypted messages to another person or to many subscribers. It is decentralized and trustless, meaning that you need not inherently trust any entities like root certificate authorities. It uses strong authentication, which means that the sender of a message cannot be spoofed, and it aims to hide "non-content" data, like the sender and receiver of messages, from passive eavesdroppers like those running warrantless wiretapping programs. If Bitmessage is completely new to you, you may wish to start by reading the whitepaper:

https://bitmessage.org/bitmessage.pdf

Windows, Mac and Source Code available here:

https://bitmessage.org/wiki/Main_Page

A community-based forum for questions, feedback, and discussion is also available on the subreddit:

http://www.reddit.com/r/bitmessage/

To install BitMessage on Ubuntu (and other linux distros) go to your terminal and type:

git clone git://github.com/Bitmessage/PyBitmessage.git

Once it's finished, run this...

python2.7 PyBitmessage/src/bitmessagemain.py

BitMessage should now be installed, with a link in your menu or dash, or you can start it by running that last line in your terminal window again.

* You may need to install git and python for the commands above to work.
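
On Ubuntu, something along these lines should pull in those prerequisites (package names may differ slightly between releases; the PyQt package for the GUI is my assumption, not part of the original instructions):

sudo apt-get install git python2.7
# if the GUI complains about missing Qt bindings, this is the likely fix (assumption):
sudo apt-get install python-qt4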

Give it a try and good luck!

by iheartubuntu (noreply@blogger.com) at April 30, 2015 09:42 PM

Akkana Peck

Stile style

On a hike a few weeks ago, we encountered an unusual, and amusing, stile across the trail.

[Normal stile] It isn't uncommon to see stiles along trails. There are lots of different designs, but their purpose is to allow humans, on foot, an easy way to cross a fence, while making it difficult for vehicles and livestock like cattle to pass through. A common design looks like this, with a break in the fence and "wings" so that anything small enough to make the sharp turn can pass through.

On a recent hike starting near Buckman, on the Rio Grande, we passed a few stiles with the "wings" design; but one of the stiles we came to had a rather less common design:

[Wrongly-built stile]

It was set up so that nothing could pass without climbing over the fence -- and one of the posts which was supposed to hold fence rails was just sitting by itself, with nothing attached to it. [Pathological stile]

I suspect someone gave a diagram to a welder, and the welder, not being an outdoor person and having no idea of the purpose of a stile, welded it up without giving it much thought. Not very functional ... and not very stilish, either!

I'm curious whether the error was in the spec, or in the welder's interpretation of it. But alas, I suspect I'll never learn the story behind the stile.

Giggling, we climbed over the fence and proceeded on our hike up to the very scenic Otowi Peak.

April 30, 2015 05:38 PM

April 21, 2015

Akkana Peck

Finding orphaned files on websites

I recently took over a website that's been neglected for quite a while. As well as some bad links, I noticed a lot of old files, files that didn't seem to be referenced by any of the site's pages. Orphaned files.

So I went searching for a link checker that also finds orphans. I figured that would be easy. It's something every web site maintainer needs, right? I've gotten by without one for my own website, but I know there are some bad links and orphans there and I've often wanted a way to find them.

An intensive search turned up only one possibility: linklint, which has a -orphan flag. Great! But, well, not really: after a few hours of fiddling with options, I couldn't find any way to make it actually find orphans. Either you run it on an http:// URL, and it says it's searching for orphans but didn't find any (because it ignores any local directory you specify); or you can run it just on a local directory, in which case it finds a gazillion orphans that aren't actually orphans, because they're referenced by files generated with PHP or other web technology. Plus it flags all the bad links in all those supposed orphans, which get in the way of finding the real bad links you need to worry about.

I tried asking on a couple of technical mailing lists and IRC channels. I found a few people who had managed to use linklint, but only by spidering an entire website to local files (thus getting rid of any server-side dependencies like PHP, CGI or SSI) and then running linklint on the local directory. I'm sure I could do that one time, for one website. But if it's that much hassle, there's not much chance I'll keep using it to keep websites maintained.

What I needed was a program that could look at a website and local directory at the same time, and compare them, flagging any file that isn't referenced by anything on the website. That sounded like it would be such a simple thing to write.

So, of course, I had to try it. This is a tool that needs to exist -- and if for some bizarre reason it doesn't exist already, I was going to remedy that.

Naturally, I found out that it wasn't quite as easy to write as it sounded. Reconciling a URL like "http://mysite.com/foo/bar.html" or "../asdf.html" with the corresponding path on disk turned out to have a lot of twists and turns.

But in the end I prevailed. I ended up with a script called weborphans (on github). Give it both a local directory for the files making up your website, and the URL of that website, for instance:

$ weborphans /var/www/ http://localhost/

It's still a little raw, certainly not perfect. But it's good enough that I was able to find the 10 bad links and 606 orphaned files on this website I inherited.

April 21, 2015 08:55 PM

April 20, 2015

Elizabeth Krumbach

POSSCON 2015

This past week I had the pleasure of attending POSSCON in the beautiful capital city of South Carolina, Columbia. The great event kicked off with a social at Hickory Tavern, which I arranged to be at by tolerating a tight connection in Charlotte. It all worked out and, in spite of generally being really shy at these kinds of socials, I found some folks I knew and had a good time. Late in the evening several of us even had the opportunity to meet the Mayor of Columbia, who had come down to the event, and talk with him about our work and the importance of open source in the economy today. It’s really great to see that kind of support for open source in a city.

The next morning the conference actually kicked off. Organizer Todd Lewis opened the event and quickly handed things off to Lonnie Emard, the President of IT-oLogy. IT-oLogy is a non-profit that promotes initial and continued learning in technology through events targeting everyone from children in grade school to professionals who are seeking to extend their skill set; there's more on their About page. As a partner for POSSCON, they were a huge part of the event, even hosting the second day at their offices.

We then heard from aforementioned Columbia Mayor Steve Benjamin. A keynote from the city mayor was a real treat; taking time out of what I’m sure is a busy schedule showed a clear commitment to building technology in Columbia. It was really inspiring to hear him talk about Columbia; with political support and work from IT-oLogy it sounds like an interesting place to be for building or growing a career in tech. There was then a welcome from Amy Love, the South Carolina Department of Commerce Innovation Director. Talk about local support! Go South Carolina!

The next keynote was from Andy Hunt, who was speaking on “A New Look at Openness” which began with a history of how we’ve progressed with development, from paying for licenses and compilers for proprietary development to the free and open source tool set and their respective licenses we work with today. He talked about how this all progresses into the Internet of Things, where we can now build physical objects and track everything from keys to pets. Today’s world for developers, he argued, is not about inventing but innovating, and he implored the audience to seek out this innovation by using the building blocks of open source as a foundation. In the idea space he proposed five steps for innovative thinking (plus an obligatory sixth):

  1. Gather raw material
  2. Work it
  3. Forget the whole thing
  4. Eureka/My that’s peculiar
  5. Refine and develop
  6. profit!

Directly following the keynote I gave my talk on Tools for Open Source Systems Administration in the Operations/Back End track. It had the themes of many of my previous talks on how the OpenStack Infrastructure team does systems administration in an open source way, but I refocused this talk to be directly about the tools we use to accomplish this as a geographically distributed team across several different companies. The talk went well and I had a great audience; huge thanks to everyone who came out for it. It was a real pleasure to talk with folks throughout the rest of the conference who had questions about specific parts of how we collaborate. Slides from my presentation are here (pdf).

The next talk in the Operations/Back End track was Converged Infrastructure with Sanoid by Jim Salter. With SANOID, he was seeking to bring enterprise-level predictability, minimal downtime and rapid recovery to small-to-medium-sized businesses. Using commodity components, from hardware through software, he’s built a system that virtualizes all services and runs on ZFS for Linux to take hourly (by default) snapshots of running systems. When something goes wrong, from a bad upgrade to a LAN infected with a virus, he has the ability to quickly roll users back to the latest snapshot. It also has a system for easily creating on- and off-site backups and uses Nagios for monitoring, which is how I learned about aNag, a Nagios client for Android; I’ll have to check it out! I had the opportunity to spend more time with Jim as the conference went on, which included swinging by his booth for a SANOID demo. Slides from his presentation are here.

For lunch they served BBQ. I don’t really care for typical red BBQ sauce, so when I saw a yellow sauce option at the buffet I covered my chicken in that instead. I had discovered South Carolina Mustard BBQ sauce. Amazing stuff. Changed my life. I want more.

After lunch I went to see a talk by Isaac Christofferson on Assembling an Open Source Toolchain to Manage Public, Private and Hybrid Cloud Deployments. With a focus on automation, standardization and repeatability, he walked us through his usage of Packer, Vagrant and Ansible to interface with a variety of different clouds and VMs. I’m also apparently the last systems administrator alive who hadn’t heard of devopsbookmarks.com, but he shared the link and it’s a great site.

The rooms for the talks were spread around a very walkable area in downtown Columbia. I wasn’t sure how I’d feel about this and worried it would be a problem, but with speakers staying on schedule we were afforded a full 15 minutes between talks to switch tracks. The venue I spoke in was a Hilton, and the next talk I went to was in a bar! It made for quite the enjoyable short walks outside between talks and a diversity in venues that was a lot of fun.

That next talk I went to was Open Source and the Internet of Things presented by Erica Stanley. I had the pleasure of being on a panel with Erica back in October during All Things Open (see here for a great panel recap) so it was really great running into her at this conference as well. Her talk was a deluge of information about the Internet of Things (IoT) and how we can all be makers for it! She went into detail about the technology and ideas behind all kinds of devices, and on slides 41 and 42 she gave a quick tour of hardware and software tools that can be used to build for the IoT. She also went through some of the philosophy, guidelines and challenges for IoT development. Slides from her talk are online here; the wealth of knowledge packed into that slide deck is definitely worth spending some time with if you’re interested in the topic.

The last pre-keynote talk I went to was by Tarus Balog with a Guide to the Open Source Desktop. A self-confessed former Apple fanboy, he had quite the sense of humor about his past where “everything was white and had an apple on it” and his move to using only open source software. As someone who has been using Linux and friends for almost a decade and a half, I wasn’t at this talk to learn about the tools available, but instead to see how a long-time Mac user could actually make the transition. It’s also interesting to me as a member of the Ubuntu and Xubuntu projects to see how newcomers view entrance into the world of Linux and how they evaluate and select tools. He walked the audience through the process he used to select a distro and desktop environment and then all the applications: mail, calendar, office suite and more. Of particular interest, he showed a preference for Banshee (it reminded him of old iTunes), as well as digiKam for managing photos. Accounting-wise he is still tied to Quickbooks, but runs it either under wine or over VNC from a Mac.

The day wound down with a keynote from Jason Hibbets. He wrote The foundation for an open source city and is a Project Manager for opensource.com. His keynote was all about stories, and why it’s important to tell our open source stories. I’ve really been impressed with the development of opensource.com over the past year (disclaimer: I’ve written for them too), they’ve managed to find hundreds of inspirational and beneficial stories of open source adoption from around the world. In this talk he highlighted a few of these, including the work of my friend Charlie Reisinger at Penn Manor and Stu Keroff with students in the Asian Penguins computer club (check out a video from them here). How exciting! The evening wrapped up with an afterparty (I enjoyed a nice Palmetto Amber Ale) and a great speakers and sponsors dinner, huge thanks to the conference staff for putting on such a great event and making us feel so welcome.

The second day of the conference took place across the street from the South Carolina State House at the IT-oLogoy office. The day consisted of workshops, so the sessions were much longer and more involved. But the day also kicked off with a keynote by Bradley Kuhn, who gave a basic level talk on Free Software Licensing: Software Freedom Licensing: What You Must Know. He did a great job offering a balanced view of the licenses available and the importance of selecting one appropriate to your project and team from the beginning.

After the keynote I headed upstairs to learn about OpenNMS from Tarus Balog. I love monitoring, but as a systems administrator and not a network administrator, I’ve mostly been using service-based monitoring tooling and hadn’t really looked into OpenNMS. The workshop was an excellent tour of the basics of the project, including a short history and their current work. He walked us through the basic installation and setup, and some of the configuration changes needed for SNMP and XML-based changes made to various other parts of the infrastructure. He also talked about static and auto-discovery mechanisms for a network, how events and alarms work and details about setting up the notification system effectively. He wrapped up by showing off some interesting graphs and other visualizations that they’re working to bring into the system for individuals in your organization who prefer to see the data presented in a less technical format.

The afternoon workshop I attended was put on by Jim Salter and went over Backing up Android using Open Source technologies. This workshop focused on backing up content and not the Android OS itself, but happily for me, that’s what I wanted to back up, as I run stock Android from Google otherwise (easy to install again from a generic source as needed). Now, Google will happily back up all your data, but what if you want to back it up locally and store it on your own system? By using rsync backup for Android, Jim demonstrated how to configure your phone to send backups to Linux, Windows and Mac using ssh+rsync. For Linux at least, so far this is a fully open source solution, which I quite like and have started using at home. The next component makes it automatic, which is where we get into a proprietary bit of software, Llama – Location Profiles. Based on various types of criteria (battery level, location, time, and lots more), Llama allows you to identify when it runs certain actions, like automatically running rsync to do backups. In all, it was a great and informative workshop and I’m happy to finally have a useful solution for pulling photos and things off my phone periodically without plugging it in and using MTP, which I apparently hate and so never do. Slides from Jim’s talk, which also include specific instructions and tools for Windows and Mac, are online here.

The conference concluded with Todd Lewis sending more thanks all around. By this time in the day rain was coming down in buckets and there were no taxis to be seen, so I grabbed a ride from Aaron Crosman, who I was happy to learn earlier was a local who had come from Philadelphia, and we had great Philly tech and city vs. country tech stories to swap.

More of my photos from the event available here: https://www.flickr.com/photos/pleia2/sets/72157651981993941/

by pleia2 at April 20, 2015 06:07 PM

Jono Bacon

Announcing Chimp Foot.

I am delighted to share my new music project: Chimp Foot.

I am going to be releasing a bunch of songs, which are fairly upbeat rock and roll (no growly metal here). The first tune is called ‘Line In The Sand’ and is available here.

All of these songs are available under a Creative Commons Attribution ShareAlike license, which means you can download, share, remix, and sell them. I am also providing a karaoke version with vocals removed (great for background music) and all of the individual instrument tracks that I used to create each song. This should provide a pretty comprehensive archive of open material.

Please follow me on SoundCloud and/or on Twitter, Facebook, and Google+.

Shares would be much appreciated, and feedback on the music is welcome!

by jono at April 20, 2015 04:22 PM

April 16, 2015

Akkana Peck

I Love Small Town Papers

I've always loved small-town newspapers. Now I have one as a local paper (though more often, I read the online Los Alamos Daily Post). The front page of the Los Alamos Monitor yesterday particularly caught my eye:

[Los Alamos Monitor front page]

I'm not sure how they decide when to include national news along with the local news; often there are no national stories, but yesterday I guess this story was important enough to make the cut. And judging by font sizes, it was considered more important than the high school debate team's bake sale, but of the same importance as the Youth Leadership group's day for kids to meet fire and police reps and do arts and crafts. (Why this is called "Wild Day" is not explained in the article.)

Meanwhile, here are a few images from a hike at Bandelier National Monument: first, a view of the Tyuonyi Pueblo ruins from above (click for a larger version):

[View of Tyuonyi Pueblo ruins from above]

[Petroglyphs on the rim of Alamo Canyon] Some petroglyphs on the wall of Alamo Canyon. We initially called them spirals but they're actually all concentric circles, plus one handprint.

[Unusually artistic cairn in Lummis Canyon] And finally, a cairn guarding the bottom of Lummis Canyon. All the cairns along this trail were fairly elaborate and artistic, but this one was definitely the winner.

April 16, 2015 08:01 PM

April 14, 2015

Jono Bacon

Open Source, Makers, and Innovators

Recently I started writing a column on opensource.com called Six Degrees.

They just published my latest column on how Open Source could provide the guardrails for a new generation of makers and innovators.

Go and read the column here.

You can read the two previous columns here:

by jono at April 14, 2015 03:59 PM

April 13, 2015

iheartubuntu

Free Ubuntu Stickers


I have only 3 sheets of Ubuntu stickers to give away! So if you are interested in one of them, I will randomly pick (via random.org) three people. I'll ship each page of stickers anywhere in the world along with an official Ubuntu 12.04 LTS disc.

To enter into our contest, please "like" our Facebook page for a chance to win. Contest ends Friday, April 17, 2015. I'll announce the three winners the day after. Thanks for the like!

https://www.facebook.com/iheartubuntu

by iheartubuntu (noreply@blogger.com) at April 13, 2015 09:55 PM

Eric Hammond

Subscribing AWS Lambda Function To SNS Topic With aws-cli

The aws-cli documentation and command line help text have not been updated yet to include the syntax for subscribing an AWS Lambda function to an SNS topic, but it does work!

Here’s the format:

aws sns subscribe \
  --topic-arn arn:aws:sns:REGION:ACCOUNT:SNSTOPIC \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:REGION:ACCOUNT:function:LAMBDAFUNCTION

where REGION, ACCOUNT, SNSTOPIC, and LAMBDAFUNCTION are substituted with appropriate values for your account.

For example:

aws sns subscribe --topic-arn arn:aws:sns:us-east-1:012345678901:mytopic \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:us-east-1:012345678901:function:myfunction

This returns an SNS subscription ARN like so:

{
    "SubscriptionArn": "arn:aws:sns:us-east-1:012345678901:mytopic:2ced0134-e247-11e4-9da9-22000b5b84fe"
}

You can unsubscribe with a command like:

aws sns unsubscribe \
  --subscription-arn arn:aws:sns:us-east-1:012345678901:mytopic:2ced0134-e247-11e4-9da9-22000b5b84fe

where the subscription ARN is the one returned from the subscribe command.
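
One gotcha worth noting: the subscription alone may not be enough for the topic to actually invoke the function; the Lambda function also needs a resource policy allowing SNS to call it. A rough sketch using the same example account, topic, and function names as above (the statement id is just an arbitrary label):

aws lambda add-permission \
  --function-name myfunction \
  --statement-id sns-invoke-myfunction \
  --action lambda:InvokeFunction \
  --principal sns.amazonaws.com \
  --source-arn arn:aws:sns:us-east-1:012345678901:mytopic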

I’m using the latest version of aws-cli as of 2015-04-15 on the GitHub “develop” branch, which is version 1.7.22.

Original article and comments: https://alestic.com/post/2015/04/aws-cli-sns-lambda/

April 13, 2015 05:35 PM

April 12, 2015

Elizabeth Krumbach

Spring Trip to Philadelphia and New Jersey

I didn’t think I’d be getting on a plane at all in March, but plans shifted and we scheduled a trip to Philadelphia and New Jersey that left my beloved San Francisco on Sunday March 29th and returned us home on Monday, April 6th.

Our mission: Deal with our east coast storage. Without getting into the boring and personal details, we had to shut down a storage unit that MJ has had for years and go through some other existing storage to clear out donatable goods and finally catalog what we have so we have a better idea what to bring back to California with us. This required movers, almost an entire day devoted to donations and several days of sorting and repacking. It’s not all done, but we made pretty major progress, and did close out that old unit, so I’m calling the trip a success.

Perhaps what kept me sane through it all was the fact that MJ has piles of really old hardware, which is a delight to share on social media. Geeks from all around got to gush over goodies like the 32-bit SPARC lunchboxes (and commiserate with me as I tried to close them).


Notoriously difficult to close, but it was done!

Now admittedly, I do have some stuff in storage too, including my SPARC Ultra 10 that I wrote about here, back in 2007. I wanted to bring it home on this trip, but I wasn’t willing to put it in checked baggage and the case is a bit too big to put in my carry-on. Perhaps next trip I’ll figure out some way to ship it.


SPARC Ultra 10

More gems were collected in my album from the trip: https://www.flickr.com/photos/pleia2/sets/72157651488307179/

We also got to visit friends and family and enjoy some of our favorite foods we can’t find here in California, including east coast sweet & sour chicken, hoagies and chicken cheese steaks.

Family visits began on Monday afternoon as we picked up the plastic storage totes we were using to replace boxes, many of which were hard to go through in their various states of squishedness and age. MJ had them delivered to his sister in Pennsylvania and they were immensely helpful when we did the move on Tuesday. We also got to visit with MJ’s father and mother, and on Saturday met up with his cousins in New Jersey to have my first family Seder for Passover! Previously I’d gone to ones at our synagogue, but this was the first time I’d done one in someone’s home, and it meant a lot to be invited and to participate. Plus, the Passover diet restrictions did nothing to stem the exceptional dessert spread, there was so much delicious food.

We were fortunate to be in town for the first Wednesday of the month, since that allowed us to attend the Philadelphia area Linux Users Group meeting in downtown Philadelphia. I got to see several of my Philadelphia friends at the meeting, and brought along a box of books from Pearson to give away (including several copies of mine), which went over very well with the crowd gathered to hear from Anthony Martin, Keith Perry, and Joe Rosato about ways to get started with Linux, and freed up space in my closet here at home. It was a great night.


Presentation at PLUG

Friend visits included a fantastic dinner with our friend Danita and a quick visit to see Mike and Jessica, who had just welcomed little David into the world, awww!


Staying in New Jersey meant we could find Passover-friendly meals!

Sunday wrapped up with a late night at storage, finalizing some of our sorting and packing up the extra suitcases we brought along. We managed to get a couple hours of sleep at the hotel before our flight home at 6AM on Monday morning.

In all, it was a productive trip, but exhausting and I spent this past week making up for sleep debt and the aches and pains. Still, it felt good to get the work done and visit with friends we’ve missed.

by pleia2 at April 12, 2015 04:26 PM

iheartubuntu

Edward Snowden on Passwords

Just a friendly reminder on developing stronger passwords...


by iheartubuntu (noreply@blogger.com) at April 12, 2015 01:30 PM

April 11, 2015

iheartubuntu

Elementary Freya Released

FREYA. The next generation of Elementary OS is here. Lightweight and beautiful. All-new apps. A refined look. You can help support the devs and name your price or download it for free.

Based on the countdown on their website, the new Freya version of Elementary OS has now arrived!

Download it here:

They will be having a Special LIVE Elementary OS Hangout here as well for the launch...


I have the beta version of Freya Elementary OS installed on one of my laptops and it works great. It's easy to install and it's beautiful. It is crafted by designers and developers who believe that computers can be easy, fun, and gorgeous. By putting design first, Elementary ensures they won't compromise on quality or usability. It's also based on Ubuntu 14.04, making it easy to add PPAs.

You can get a feel of the new Elementary OS Freya by checking out this video on Youtube...



and this review also on Youtube...



Elementary OS is definitely worth a look!

by iheartubuntu (noreply@blogger.com) at April 11, 2015 03:00 PM

April 10, 2015

iheartubuntu

Please Take Our Survey

We would love your input. Please take our short little survey. We'll take what you say to "heart" and make I Heart Ubuntu awesome!


by iheartubuntu (noreply@blogger.com) at April 10, 2015 06:08 PM

We Are Back!


We are trying to sort out some graphics and artwork and other stuff so please bear with us. Hope to see everyone again very soon.

by iheartubuntu (noreply@blogger.com) at April 10, 2015 06:07 PM

LMDE 2 “Betsy” MATE & CINNAMON Released


Today, the Linux Mint team announced the release of LMDE 2 “Betsy” in both MATE and Cinnamon desktop editions.

LMDE (Linux Mint Debian Edition) is a very exciting distribution, targeted at experienced users, which provides the same environment as Linux Mint but uses Debian as its package base, instead of Ubuntu.

LMDE is less mainstream than Linux Mint: it has a much smaller user base, it is not compatible with PPAs, and it lacks a few features. That makes it a bit harder to use and harder to find help for, so it is not recommended for novice users.

Important release info, system requirements, upgrade instructions and more about these releases can be found directly on the Linux Mint website.

by iheartubuntu (noreply@blogger.com) at April 10, 2015 06:07 PM

Torchlight 2 Now on Steam


Torchlight II is a dungeon crawler game that lets you choose to play as one of a few different classes. The basic concept is the same as nearly all dungeon crawlers: explore, level up, find gear, beat the boss, rinse and repeat.

A few years ago I really enjoyed playing the original Torchlight. It worked great on Ubuntu. There were some shading issues with the 3D rendering making your characters' faces invisible, but those little problems are of no worry in this new version. Torchlight 2 has improved almost every aspect of the original game.

About a month ago Steam launched Torchlight 2 with Linux support. The new version supports cross-platform multiplayer, game saves work across all platforms, and game modding is even supported.

Installation through Steam is simple. The download was about 1GB in size. At work I have a slimline computer with a Pentium G2020 processor, 4 GB RAM, and a 1GB nVidia GeForce 210 video card. Graphics are superb. It doesn't get much better than this. I even maxed out all of the graphics settings. Game play is smooth and enjoyable. I've just been having a fun time going deeper and deeper into the dungeons fighting new bad guys. The scenery alone is worth it. Can't wait to try multiplayer!

You can even zoom in with the mouse wheel and fight your battles up close!


Here are the recommended system specs...

  • Ubuntu 12.04 LTS (or similar, Debian-based distro)
  • x86/x86_64-compatible, 2.0GHz or better processor
  • 2GB System RAM
  • 1.7 GB hard drive space (subject to change)
  • OpenGL 2.0 compatible 3D graphics card* with at least 256 MB of addressable memory (ATI Radeon x1600 or NVIDIA equivalent)
  • A broadband Internet connection (For Steam download and online multiplayer)
  • Online multiplayer requires a free Runic Account.
  • Requires Steam.

The game itself costs $20 on Steam, but you can get it as part of the Humble Bundle 14 set of games if you pay at least $6.15. If you are considering Torchlight 2, you have until April 14 when the Humble Bundle deal expires.

Torchlight II is a great hack-n-slash that every fan of this type of game should own. It will entertain you for hours and hours and you will hardly repeat boring actions like farming and grinding. A must have, for such a low price.

I enjoyed this game so much I'm giving it a 5 out of 5 rating :)


by iheartubuntu (noreply@blogger.com) at April 10, 2015 05:58 PM

April 09, 2015

Eric Hammond

AWS Lambda Event-Driven Architecture With Amazon SNS

Today, Amazon announced that AWS Lambda functions can be subscribed to Amazon SNS topics.

This means that any message posted to an SNS topic can trigger the execution of custom code you have written, but you don’t have to maintain any infrastructure to keep that code available to listen for those events and you don’t have to pay for any infrastructure when the code is not being run.

This is, in my opinion, the first time that Amazon can truly say that AWS Lambda is event-driven, as we now have a central, independent, event management system (SNS) where any authorized entity can trigger the event (post a message to a topic) and any authorized AWS Lambda function can listen for the event, and neither has to know about the other.

Making this instantly useful is the fact that there already are a number of AWS services and events that can post messages to Amazon SNS. This means there are a lot of application ideas that are ready to be implemented with nothing but a few commands to set up the SNS topic, and some snippets of nodejs code to upload as an AWS Lambda function.
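
For example, setting up the topic itself is a single aws-cli command; something like this should do it (the topic name here is just a placeholder):

aws sns create-topic --name my-lambda-events

which returns the topic ARN that you then subscribe the Lambda function to.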

Unfortunately…

I was unable to find a comprehensive list of all the AWS services and events that can post messages to Amazon SNS (Simple Notification Service).

I’d like to try an experiment and ask the readers of this blog to submit pointers to AWS and other services which can be configured to post events to Amazon SNS. I will collect the list and update this blog post.

Here’s the list so far:

You can either submit your suggestions as comments on this blog post, or tweet the pointer mentioning @esh

Thanks for contributing ideas:

[2015-04-13: Updated with input from comments and Twitter]

Original article and comments: https://alestic.com/post/2015/04/aws-lambda-sns/

April 09, 2015 12:43 PM

iheartubuntu

Ubuntu Artwork on Flickr


I am always in search of new and interesting wallpapers. For many years Ubuntu has had a great Flickr group that is used to help decide which wallpapers make it into each new Ubuntu release. Most of them don't make it, but there are definitely some great quality images in this Flickr group. You can easily spend an hour here and pick favorites.

Check it out:

https://www.flickr.com/groups/ubuntu-artwork/pool/page1

by iheartubuntu (noreply@blogger.com) at April 09, 2015 07:49 AM

April 08, 2015

iheartubuntu

MAT - Metadata Anonymisation Toolkit


This is a great program used to help protect your privacy.

Metadata consists of information that characterizes data. Metadata is used to provide documentation for data products. In essence, metadata answers who, what, when, where, why, and how about every facet of the data that is being documented.

Metadata within a file can tell a lot about you. Cameras record data about when a picture was taken and what camera was used. Document formats like PDF and Office automatically add author and company information to documents and spreadsheets.

Maybe you don't want to disclose that information on the web.

MAT can only remove metadata from your files; it does not anonymise their content, nor can it handle watermarking, steganography, or overly custom metadata fields and systems.

If you really want to be anonymous, use a format that does not contain any metadata, or better yet, use plain-text.

These are the formats supported to some extent:

  • Portable Network Graphics (PNG)
  • JPEG (.jpeg, .jpg, ...)
  • Open Document (.odt, .odx, .ods, ...)
  • Office Openxml (.docx, .pptx, .xlsx, ...)
  • Portable Document Fileformat (.pdf)
  • Tape ARchive (.tar, .tar.bz2, .tar.gz)
  • ZIP (.zip)
  • MPEG Audio (.mp3, .mp2, .mp1, .mpa)
  • Ogg Vorbis (.ogg)
  • Free Lossless Audio Codec (.flac)
  • Torrent (.torrent)

The President of the United States and his birth certificate would have greatly benefited from software such as MAT.

You can install MAT with this terminal command:

sudo apt-get install mat
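
Once it's installed, basic usage is one command per file; a rough sketch (run mat --help for the options your version actually supports):

mat --help       # list available options
mat photo.jpg    # strip the supported metadata from photo.jpg in place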

Look for more articles about privacy soon, or find existing ones by searching our site for "privacy".

by iheartubuntu (noreply@blogger.com) at April 08, 2015 08:00 PM

Tasque TODO List App


We're getting back to some of the old basic apps that a lot of people used to use in Ubuntu. Many of them still work great and work great without any internet connection needed.

Tasque (pronounced like “task”) is a simple task management app (TODO list) for the Linux Desktop and Windows. It supports syncing with the online service Remember the Milk or simply storing your tasks locally.

The main window has the ability to complete a task, change the priority, change the name, and change the due date without additional property dialogs.

When a user clicks on a task priority, a list of possible priorities is presented and when selected, the task is re-prioritized in the order you wish.

When you click on the due date, a list of the next seven days is presented along with an option to remove the date or select a date from a calendar.

A user completes a task by clicking the check box on a task. The task is crossed out indicating it is complete and a timer begins counting down to the right of the task. When the timer is done, the task is removed from view.

As mentioned, Tasque has the ability to save tasks locally or to use Remember the Milk, a free online to-do list, as a backend. On one of my computers, saving my tasks using RTM works great; on my computer at work, it won't sync my tasks. I haven't figured out why, but I will post any updates here once I get it working or find a workaround.

You can install Tasque from the Ubuntu Software Center or with this terminal command:

sudo apt-get install tasque

All in all, Tasque is a great little task app. Really simple to use!

by iheartubuntu (noreply@blogger.com) at April 08, 2015 08:00 PM

Tomboy The Original Note App


When I first started using Ubuntu back in early 2007 (Ubuntu 6.10) I fell in love with a pre-installed app called Tomboy. I had used Tomboy for several years, until Ubuntu One notified users it would stop syncing Tomboy a couple of years ago, followed by the finality of Ubuntu One shutting down earlier this year. I rushed to find alternatives like Evernote, Gnotes, etc., but none of them were as simple and easily integrated.

The Tomboy description is as follows... "Tomboy is a simple & easy to use desktop note-taking application for Linux, Unix, Windows, and Mac OS X. Tomboy can help you organize the ideas and information you deal with every day."

Some of Tomboy's notable features are highlighting text, inline spell checking, auto-linking web & email addresses, undo/redo, font styling & sizing and bulleted lists.

I am creating new notes as well as manually importing a few of my old notes from a couple years ago. Tomboy used to sync easily with Ubuntu One. Since that is no longer an option, you can do it with your Dropbox folder or your Google Drive folder (I'm using Insync).

Tomboy hasn't been updated in a while, but it installs and works great on Ubuntu 14.04 using:

sudo apt-get install tomboy

When you start Tomboy there will be a little square note icon with a pen up on your top bar. Clicking the icon will show you the Tomboy menu options. To sync your notes across your computers, go to the Tomboy preferences, click the Synchronization tab, and pick a local folder in your Dropbox or Google Drive. That's pretty much it! Start writing those notes! On the other computers you want to sync your notes to, select the same sync folder you chose on your first computer.

A few quick points. When you sync your notes, it will create a folder titled "0" in whatever folder you have chosen to sync your notes in.

If you want to launch Tomboy with your system startup (in Ubuntu 14.04), in Unity search for "Startup Applications" and run it. Add a new app titled "Tomboy" with the command "tomboy", then save and close. The next time you log on, your Tomboy notes will be ready to use.
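
If you prefer the command line, the same effect can be had with a standard XDG autostart entry; a minimal sketch (the path follows the freedesktop autostart convention, nothing Tomboy-specific):

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/tomboy.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Tomboy
Exec=tomboy
EOF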

Tomboy also works with Windows and Mac OS X and installation instructions can be found here:

Windows ... https://wiki.gnome.org/Apps/Tomboy/Installing/Windows
Mac ... https://wiki.gnome.org/Apps/Tomboy/Installing/Mac

- - - - -

If you are still looking for syncing options, this comes in from Christian....

You can self-host your note sync server with either Rainy or Grauphel...

Learn more here...

http://dynalon.github.io/Rainy/

http://apps.owncloud.com/content/show.php?action=content&content=166654

by iheartubuntu (noreply@blogger.com) at April 08, 2015 08:00 PM

Ubuntu - 10 Years Strong


Ubuntu, the Debian-based Linux operating system, is approaching its 21st release in just a couple of weeks (October 23rd), moving forward 10 years strong now!

Mark Shuttleworth invests in Ubuntu's parent company Canonical, which continues to lose money year after year. It's clear that profit isn't his main concern. There is still a clear plan for Ubuntu and Canonical. That plan appears to be very much 'cloud' and business based.

Shuttleworth is proud that the vast majority of cloud deployments are based on Ubuntu. The recent launch of Canonical's 'Cloud in a box' deployable Ubuntu system is another indication of where it sees things going.

Ubuntu Touch will soon appear on phones and tablets, which is really the glue for this cloud/mobile/desktop ecosystem. Ubuntu has evolved impressively over the last ten years and it will continue to develop in this new age.

Ubuntu provides a seamless ecosystem for devices deployed to businesses and users alike. Being able to run the identical software on multiple devices and in the cloud, all sharing the same data is very appealing.

Ubuntu will be at the heart of this with or without the survival of Canonical.

"I love technology, and I love economics and I love what’s going on in society. For me, Ubuntu brings those three things together in a unique way." - Mark Shuttleworth on the next 5 years of Ubuntu

by iheartubuntu (noreply@blogger.com) at April 08, 2015 07:59 PM

BirdFont Font Editor


If you have ever been interested in making your own fonts for fun or profit, BirdFont is an easy to use free font editor that lets you create vector graphics and export TTF, EOT & SVG fonts.

To install Birdfont, simply use the PPA below to ensure you always have the most updated version. Open the terminal and run the following commands:

sudo add-apt-repository ppa:ubuntuhandbook1/birdfont

sudo apt-get update

sudo apt-get install birdfont

If you don't like using a PPA repository, you can download the appropriate DEB package for your particular system....

http://ppa.launchpad.net/ubuntuhandbook1/birdfont/ubuntu/pool/main/b/birdfont/
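
If you go the DEB route, the usual dpkg steps apply; a sketch with a hypothetical filename (substitute the actual file for your release and architecture from the directory above):

wget http://ppa.launchpad.net/ubuntuhandbook1/birdfont/ubuntu/pool/main/b/birdfont/birdfont_X.Y.Z_amd64.deb   # hypothetical filename
sudo dpkg -i birdfont_X.Y.Z_amd64.deb
sudo apt-get install -f   # pull in any missing dependencies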

If you need help developing a font, there is also an official tutorial here!

http://birdfont.org/doku/doku.php/tutorials

by iheartubuntu (noreply@blogger.com) at April 08, 2015 07:59 PM

Check Ubuntu Linux for Shellshock


Shellshock is a new vulnerability that allows attackers to put code onto your machine, which could put your Ubuntu Linux system at a serious risk for malicious attacks.

Shellshock exploits a flaw in how Bash handles specially crafted environment variables, letting attackers run commands on your computer. From there, hackers can launch programs, enable features, and even access your files. The vulnerability only affects UNIX-based systems (Linux and Mac).

You can test your system by running this test command from Terminal:

env x='() { :;}; echo vulnerable' bash -c 'echo hello'

If you're not vulnerable, you'll get this result:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
hello

If you are vulnerable, you'll get:

vulnerable
hello

You can also check the version of bash you're running by entering:

bash --version

If you get version 3.2.51(1)-release as a result, you will need to update. Most Linux distributions already have patches available.

-----------

If your system is vulnerable, make sure your computer has all critical updates and it should be patched already. If you are using a version of Ubuntu that has already reached end-of-life status (12.10, 13.04, 13.10, etc.), you may be screwed and may need to start using a newer version of Ubuntu.

This should update Bash for you so your system is not vulnerable...

sudo apt-get update && sudo apt-get install --only-upgrade bash
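
After updating, it's worth re-running the test command from above to confirm you now get the "not vulnerable" output (the warning/error lines and "hello", with no "vulnerable"):

env x='() { :;}; echo vulnerable' bash -c 'echo hello'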

by iheartubuntu (noreply@blogger.com) at April 08, 2015 07:59 PM

Ubuntu Kylin Wallpapers


Looking for some new wallpapers these days? The Chinese version of Ubuntu, Ubuntu Kylin, has some beautiful new wallpapers for the 14.10 release. Download and install the DEB to put them on your computer (a total of 24 wallpapers)...

http://security.ubuntu.com/ubuntu/pool/universe/u/ubuntukylin-wallpapers/ubuntukylin-wallpapers-utopic_14.10.0_all.deb
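
From the terminal, downloading and installing it looks like this (using the exact URL above):

wget http://security.ubuntu.com/ubuntu/pool/universe/u/ubuntukylin-wallpapers/ubuntukylin-wallpapers-utopic_14.10.0_all.deb
sudo dpkg -i ubuntukylin-wallpapers-utopic_14.10.0_all.deb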

by iheartubuntu (noreply@blogger.com) at April 08, 2015 07:58 PM

Welcome to the New I Heart Ubuntu


And here we go! The NEW I Heart Ubuntu website takes a more modern, magazine-style look. Some of the new features include an easy categories section at the top, an easy search feature at the top right, the six top featured articles right on the main part of your screen, and popular posts that are easy to find on the right side too. We now have a "Video of the Week" on the right-hand side and easy-to-see social icons at the bottom of the page. We have our RSS link easily available at the top and bottom of our site now. Commenting on articles is available via Disqus and Facebook. Our website also (finally) looks great and is easy to use on mobile devices.

We will also begin covering other linux distros but will focus primarily on Ubuntu, Linux Mint and Elementary OS.

We have a bit more work to do yet such as tagging & labeling 460+ articles correctly, building up our Facebook page (you can "like us" on the right side), as well as focusing on the key points you have all recommended based on our survey.

Thanks again for everyone's input! Our three websites I Heart Ubuntu, Daily Ubuntu and Crypto Reporter combined have well over 4 million views. Stay tuned & let's make I Heart Ubuntu a destination once again!

by iheartubuntu (noreply@blogger.com) at April 08, 2015 07:55 PM

TAILS The Privacy Distro


TAILS, the anonymizing distribution, released its version 1.1 about two weeks ago – which means you can download it now. The Tails 1.1 release is largely a catalog of security fixes and bug fixes, limiting itself otherwise to minor improvements such as to the ISO upgrade and installer, and the Windows 8 camouflage. This is one to grab to keep your online privacy intact.

https://tails.boum.org/

by iheartubuntu (noreply@blogger.com) at April 08, 2015 06:10 PM

April 07, 2015

Elizabeth Krumbach

Puppet Camp San Francisco 2015

On Tuesday, March 24th I woke up early and walked down the street to a regional Puppet Camp, this edition held not only in my home city of San Francisco, but just a few blocks from home. The schedule for the event can be found up on the Eventbrite page.

The event kicked off with a keynote by Ryan Coleman of Puppet Labs, who gave an overview of how configuration management tools like Puppet have shaped our current systems administration landscape. Our work will only continue to grow in scale as we move forward, and based on results of the 2014 DevOps Report more companies will continue to move their infrastructures to the cloud, where automation is key to a well-functioning system. He went on to talk about the work that has been going into Puppet 4 RC and some tips for attendees on how they can learn more about Puppet beyond the event, including free resources like Learn Puppet (which also links to paid training resources) and the Puppet Labs Documentation site, for which they have a dedicated documentation team working to make great docs.

Next up was a great talk by Jason O’Rourke of Salesforce who talked about his infrastructure of tens of thousands of servers and how automation using Puppet has allowed his team to do less of the boring, repetitive tasks and more interesting things. His talk then focused in on “Puppet Adoption in a Mature Environment” where he quickly reviewed different types of deployments, from fresh new ones where it’s somewhat easy to deploy a new management framework to old ones where you may have a lot of technical debt, compliance and regulatory considerations and an inability to take risks in a production environment. He walked through the strategies they used to make changes in the most mature environments, including the creation of a DevOps team who were responsible for focusing on the “infrastructure is code” mindset, use of tools like Vagrant so identical test environments can be deployed by developers without input from IT, and the development of best practices for managing the system (including code review, testing, and more). One of the interesting things they also did was give production access to their DevOps team so they could run limited read/test-only commands against Puppet. This new system was then slowly rolled out, typically when hardware or datacenters were rolled out, or when audits or upgrades were being conducted. They also rolled out specific “roles” in their infrastructure separately, from the less risky internal-only services to partner- and customer-facing ones. The rest of the talk was mostly about how they actually deploy into production on a set schedule and do a massive amount of testing for everything they roll out, nice to see!


Jason O’Rourke of Salesforce

Tray Torrance of NCC Group rounded out the morning talks by giving a talk on MCollective (Marionette Collective). He began the talk by covering some history of the orchestration space that MCollective seeks to cover, and how many of the competing solutions are ssh-based, including Ansible, which we’ve been using in the OpenStack infrastructure. It was certainly interesting to learn how it integrates with Puppet and is extendable with Ruby code.

After lunch I presented a talk on “Puppet in the Open Source OpenStack Infrastructure” where I walked through how and why we have an open source infrastructure, and steps for how other organizations and projects can adopt similar methods for managing their infrastructure code. This is similar to some other “open sourcing all our Puppet” talks I have given, but with this audience I definitely honed in on the DevOps-y value of making the code for infrastructure more broadly accessible, even if it’s just within an organization. Slides here.

The next couple of talks were by Nathan Valentine and David Lutterkort of Puppet Labs. Nathan did several live demos of Puppet Enterprise, mostly working through the dashboard to demonstrate how services can be associated with servers and each other for easy deployment. David’s presentation went into a bit of systems administration history in the world before ever-present configuration management and virtualization to discuss how containerization software like Docker has really changed the landscape for testing and deployments. He walked through usage of the Puppet module for Docker written by Gareth Rushgrove and his corresponding proof of concept for a service deployment in Docker for ASP.NET, available here.

The final talk of the day was by Aaron Stone (nickname “sodabrew”) of BrightRoll on “Dashboard 2 and External Node Classification” where he walked through the improvements to the Puppet Dashboard with the release of version 2. I myself had been exposed to Puppet Dashboard when I first joined the OpenStack Infrastructure team a couple years ago and we were using it to share read-only data to our community so we’d have insight into when Puppet changes merged and whether they were successful. Unfortunately, a period of poor support for the dashboard caused us to go through several ideas for an alternative dashboard (documented in this bug) until we finally settled on using a simple front end for PuppetDB, PuppetBoard. We’re really happy with the capabilities for our team, since read-only access is what we were looking for, but it was great to hear from Aaron about work he’s resumed on the Dashboard, should I have a need in the future. Some of the improvements he covered included some maintenance fixes, including broader support for newer versions of Ruby and updating of the libraries (gems) it’s using, an improved REST API and some UI tweaks. He said that upgrading should be easy, but in an effort to focus on development he wouldn’t be packaging it for all the distros, though the files needed (i.e. debian/ for .deb packages) are available if someone else is able to do that work.

In all, this was a great little event, and at the low ticket price of $50 it was quite the cost-effective way to learn about a few new technologies in the Puppet ecosystem and meet fellow, local systems administrators and engineers.

A few more photos from the event are here: https://www.flickr.com/photos/pleia2/sets/72157649225111213

by pleia2 at April 07, 2015 01:47 AM

April 06, 2015

Akkana Peck

Quickly seeing bird sightings maps on eBird

The local bird community has gotten me using eBird. It's sort of social networking for birders -- you can report sightings, keep track of what birds you've seen where, and see what other people are seeing in your area.

The only problem is the user interface for that last part. The data is all there, but asking a question like "Where in this county have people seen broad-tailed hummingbirds so far this spring?" is a lengthy process, involving clicking through many screens and typing the county name (not even a zip code -- you have to type the name). If you want some region smaller than the county, good luck.

I found myself wanting that so often that I wrote an entry page for it.

My Bird Maps page is meant to be used as a smart bookmark (also known as a bookmarklet or keyword bookmark), so you can type birdmap hummingbird or birdmap golden eagle in your location bar as a quick way of searching for a species. It reads the bird you've typed in, and looks through a list of species, and if there's only one bird that matches, it takes you straight to the eBird map to show you where people have reported the bird so far this year.

If there's more than one match -- for instance, for birdmap hummingbird or birdmap sparrow -- it will show you a list of possible matches, and you can click on one to go to the map.

Like every Javascript project, it was both fun and annoying to write. Though the hardest part wasn't programming; it was getting a list of the nonstandard 4-letter bird codes eBird uses. I had to scrape one of their HTML pages for that. But it was worth it: I'm finding the page quite useful.

How to make a smart bookmark

I think all the major browsers offer smart bookmarks now, but I can only give details for Firefox. Here's a page about using them in Chrome, though.

Firefox has made it increasingly difficult with every release to make smart bookmarks. There are a few extensions, such as "Add Bookmark Here", which make it a little easier. But without any extensions installed, here's how you do it in Firefox 36:

[Firefox bookmarks dialog] First, go to the birdmap page (or whatever page you want to smart-bookmark) and click on the * button that makes a bookmark. Then click on the = next to the *, and in the menu, choose Show all bookmarks. In the dialog that comes up, find the bookmark you just made (maybe in Unsorted bookmarks?) and click on it.

Click the More button at the bottom of the dialog.
(Click on the image at right for a full-sized screenshot.)
[Firefox bookmarks dialog showing keyword]

Now you should see a Keyword entry under the Tags entry in the lower right of that dialog.

Change the Location to http://shallowsky.com/birdmap.html?bird=%s.

Then give it a Keyword of birdmap (or anything else you want to call it).

Close the dialog.

Now, you should be able to go to your location bar and type:
birdmap common raven or birdmap sparrow and it will take you to my birdmap page. If the bird name specifies just one bird, like common raven, you'll go straight from there to the eBird map. If there are lots of possible matches, as with sparrow, you'll stay on the birdmap page so you can choose which sparrow you want.

How to change the default location

If you're not in Los Alamos, you probably want a way to set your own coordinates. Fortunately, you can; but first you have to get those coordinates.

Here's the fastest way I've found to get coordinates for a region on eBird:

  • Click "Explore a Region"
  • Type in your region and hit Enter
  • Click on the map in the upper right

Then look at the URL: a part of it should look something like this: env.minX=-122.202087&env.minY=36.89291&env.maxX=-121.208778&env.maxY=37.484802 If the map isn't right where you want it, try editing the URL, hitting Enter for each change, and watch the map reload until it points where you want it to. Then copy the four parameters and add them to your smart bookmark, like this: http://shallowsky.com/birdmap.html?bird=%s&minX=-122.202087&minY=36.89291&maxX=-121.208778&maxY=37.484802

Note that all of the "env." prefixes have been removed.
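
If you'd rather not edit out the prefixes by hand, a quick sed one-liner (not part of the original page, just a convenience) strips them:

echo 'env.minX=-122.202087&env.minY=36.89291&env.maxX=-121.208778&env.maxY=37.484802' | sed 's/env\.//g'
# prints: minX=-122.202087&minY=36.89291&maxX=-121.208778&maxY=37.484802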

The only catch is that I got my list of 4-letter eBird codes from an eBird page for New Mexico. I haven't found any way of getting the list for the entire US. So if you want a bird that doesn't occur in New Mexico, my page might not find it. If you like birdmap but want to use it in a different state, contact me and tell me which state you need, and I'll add those birds.

April 06, 2015 08:30 PM

April 02, 2015

Akkana Peck

One-antlered stags

[mule deer stag with one antler] This fellow stopped by one evening a few weeks ago. He'd lost one of his antlers (I'd love to find it in the yard, but no luck so far). He wasn't hungry; just wandering, maybe looking for a place to bed down. He didn't seem to mind posing for the camera.

Eventually he wandered down the hill a bit, and a friend joined him. I guess losing one antler at a time isn't all that uncommon for mule deer, though it was the first time I'd seen it. I wonder if their heads feel unbalanced.
[two mule deer stags with one antler each]

Meanwhile, spring has really sprung -- I put a hummingbird feeder out yesterday, and today we got our first customer, a male broad-tailed hummer who seemed quite happy with the fare here. I hope he stays around!

April 02, 2015 01:25 AM

March 28, 2015

Elizabeth Krumbach

Simcoe’s March 2015 Checkup

Our little Siamese, Simcoe, has Chronic Renal Failure (CRF). She has been doing well for over 3 years now with subcutaneous fluid injections every other day to keep her hydrated and quarterly check-ins with the vet to make sure her key blood levels and weight are staying within safe parameters.

On March 14th she went in for her latest visit and round of blood work. As usual, she wasn’t thrilled about the visit and worked hard to stay in her carrier the whole time.

She came out long enough for the exam, and the doctor was happy with her physical, though her weight had dropped a little again, going from 9.74lbs to 9.54lbs.

Both her BUN and CRE levels remained steady.

Unfortunately her Calcium levels continue to come back a bit high, so the vet wants her in for an ionized Calcium test. She has explained that it’s only the ionized Calcium that is a concern because it can build up in the kidneys and lead to more rapid deterioration, so we’d want to get her on something to reduce the risk if that was the case. We’ll probably be making an appointment once I return from my travels in mid April to get this test done.

In the meantime, she gets to stay at home and enjoy a good book.

…my good book.

by pleia2 at March 28, 2015 02:07 AM

The spaces between

It’s been over 2 months since I’ve done a “miscellaneous life stuff” blog post. Anyone reading this blog recently might think I only write about travel and events! Since that last post I have had other things pop up here and there, but I am definitely doing too many events. That should calm down a bit in the 2nd quarter of the year and almost disappear in the third, with the notable exception of a trip to Peru, part work and part pleasure.

Unfortunately it looks like the stress I mentioned in that last post flipped the switch on my already increasing-in-frequency migraines. I’ve seen my neurologist twice this year and we’ve worked through several medications, finally finding one that seems to work. And at least a visit to my neurologist affords me some nice views.

So I have been working on stress reduction, part of which is making sure I keep running. It doesn’t reduce stress immediately but a routine of exercise does help even me out in the long term. To help clear my head, I’ve also been refining my todo lists to make them more comprehensive. I’m also continuing to let projects go when I find they’re causing my stress levels to spike for little gain. This is probably the hardest thing to do: I care about everything I work on and I know some things will just drop on the ground if I don’t do them, but I really need to be more realistic about what I can actually get done and focus my energy accordingly.

And to clear the way in this post for happier things, I did struggle with the loss of Eric in January. My Ubuntu work here in San Francisco simply won’t be the same without him, and every time I start thinking about planning an event I am reminded that he won’t be around to help or attend. Shortly after learning of his passing, several of us met up at BerkeleyLUG to share memories. Then on March 18th a more organized event was put together to gather friends from his various spheres of influence to celebrate his life at one of his favorite local pizzerias. It was a great event, I met some really good people and saw several old friends. It also brought some closure for me that I’d been lacking in dealing with this on my own.

On to happier things! I actually spent 30 days in a row off a plane in March. Home time means I got to do lots of enjoyable home things, like actually spending time with my husband over some fantastic meals, as well as finally finishing watching Breaking Bad together. I also think I’ve managed to somewhat de-traumatize my cats, who haven’t been thrilled about all my travel. We’ve been able to take some time to do some “home things” – like get some painting estimates so we can get some repairs done around the condo. I also spent a day down in Mountain View so I could meet up with a local colleague who I hadn’t yet met to kick off a new project, and then have dinner with a friend who was in the area visiting. Plus, I got to see cool things like a rare storm colliding with a sunset one evening:

I’ve been writing some. In January my article 10 entry points to tech (for girls, women, and everyone) went live on opensource.com. In early March I was invited to publish an article on Tech Talk Live Blog on Five Ways to Get Involved with Ubuntu as an Educator, based on my experience working with teachers over the past several years. I’ve also continued work toward a new book in progress, which has been time-consuming but I’m hoping will be ready for more public discussion in the coming months. Mark G. Sobell’s A Practical Guide to Ubuntu Linux, 4th Edition also came out earlier this year, and while I didn’t write that, I did spend a nice chunk of time last summer doing review for it. I came away with a quote on the cover endorsing the great work Mark did with the book!

Work-wise, aside from travel and conferences I’ve talked about in previous posts, I was recently promoted to root and core for OpenStack Infrastructure. This has meant a tremendous amount to me, both the trust the team has placed in me and the increased ability for me to contribute to the infrastructure I’ve spent so much time with over these past couple of years. It also means I’ve been learning a lot and sorting through the tribal knowledge that should be formally documented. I was also able to participate as a Track Chair for selecting talks for the Related OSS Projects track at the OpenStack Summit in Vancouver in May; I did this for Atlanta last year but ended up not being able to attend due to being too sick (stupid gallbladder). And while on the topic of Vancouver, a panel proposed by the Women of OpenStack that I’m participating in has been accepted, Standing Tall in the Room, where we hope to give other women in our community some tips for success. My next work trip comes before Vancouver: I’m heading off to South Carolina for Posscon where I’ll be presenting on Tools for Open Source Systems Administration, a tour of tools we use in order to make collaborating online with a distributed team of systems administrators from various companies possible (and even fun!).

In the tech goodies department, I recently purchased a Nexus 6. I was compelled to after I dropped my Galaxy S3 while sitting up on the roof deck. I was pretty disappointed by the demise of my S3, it was a solid phone and the stress of replacement wasn’t something I was thrilled to deal with immediately upon my return from Oman. I did a bunch of research before I settled on the Nexus 6 and spent my hard-earned cash on retail price for a phone for the first time in my life. It’s now been almost a month and I’m still not quite used to how BIG the Nexus 6 is, but it is quite a pleasure to use. I still haven’t quite worked out how to carry it on my runs; it’s too big for my pockets and the arm band solution isn’t working (too bulky, and other reasons), I might switch to a small backpack that can carry water too. It’s a great phone though, so much faster than my old one, which honestly did deserve to be replaced, even if not in the way I face-planted it on the concrete, sorry S3.


Size difference: Old S3 in new Nexus 6 case

I also found my old Chumby while searching through the scary cave that is our storage unit for the paint that was used for previous condo painting. They’ve resurrected the service for a small monthly fee, now I just need to find a place to plug it in near my desk…

I actually made it out of the house to be social a little too. My cousin Steven McCorry is the lead singer in a band called Exotype, which signed a record deal last year and has since been on several tours. This one brought him to San Francisco, so I finally made my way out to the famous DNA Lounge to see the show. It was a lot of fun, but as much as I can appreciate metal, I’m pleased with their recent trend toward rock, which I prefer. It was also great to visit with my cousin and his band mates.

This week it was MJ’s turn to be out of the country for work. While I had Puppet Camp to keep me busy on Tuesday, I did a poor job of scheduling social engagements and it’s been a pretty lonely time. It gave me space to do some organization and get work done, but I wasn’t as productive as I really wanted to be and I may have binge watched the latest slew of Mad Men episodes that landed on Netflix one night. Was nice to have snuggle time with the kitties though.

MJ comes home Sunday afternoon, at which time we have to swap out the contents of his suitcase and head back to the airport to catch a red eye flight to Philadelphia. We’re spending next week moving a storage unit, organizing our new storage situation and making as many social calls as possible. I’m really looking forward to visiting PLUG on Wednesday to meet up with a bunch of my old Philadelphia Linux friends. And while I’m not actively looking forward to the move, it’s something we’ve needed to do for some time now, so it’ll be nice for that to be behind us.

by pleia2 at March 28, 2015 01:58 AM

March 27, 2015

Akkana Peck

Hide Google's begging (or any other web content) via a Firefox userContent trick

Lately, Google is wasting space at the top of every search with a begging plea to be my default search engine.

[Google begging: Switch your default search engine to Google] Google already is my default search engine -- that's how I got to that page. But if you don't have persistent Google cookies set, you have to see this begging every time you do a search. (Why they think pestering users is the way to get people to switch to them is beyond me.)

Fortunately, in Firefox you can hide the begging with a userContent trick. Find the chrome directory inside your Firefox profile, and edit userContent.css in that directory. (Create a new file with that name if you don't already have one.) Then add this:

#taw { display: none !important; }

Restart Firefox, do a Google search and the begs should be gone.
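
If you prefer doing this from a terminal, here's a rough sketch of the same edit (the profile directory name differs on every machine, so xxxxxxxx.default below is only a placeholder):

# Create the chrome directory in your Firefox profile and append the rule.
cd ~/.mozilla/firefox/xxxxxxxx.default
mkdir -p chrome
cat >> chrome/userContent.css << 'EOF'
/* Hide Google's "switch your default search engine" plea */
#taw { display: none !important; }
EOF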

In case you have any similar pages where there's pointless content getting in the way and you want to hide it: what I did was to right-click inside the begging box and choose Inspect element. That brings up Firefox's DOM inspector. Mouse over various lines in the inspector and watch what gets highlighted in the browser window. Find the element that highlights everything you want to remove -- in this case, it's a div with id="taw". Then you can write CSS to address that: hide it, change its style or whatever you're trying to do.

You can even use Inspect element to remove elements immediately. That won't help you prevent them from showing up later, but it can be wonderful if you need to use a page that has an annoying blinking ad on it, or a mis-designed page that has images covering the content you're trying to read.

March 27, 2015 02:17 PM

March 20, 2015

kdub

A few years of Mir TDD

We started the Mir project a few years ago guided by the principles in the book Growing Object Oriented Software Guided by Tests. I recommend a read, especially if you’ve never been exposed to “test-driven development”.

Compared to other projects that I’ve worked on, I find that, as a greenfield TDD project, Mir has really benefitted from the TDD process in terms of ease of development and reliability. Just a few quick thoughts:

  • I’ve found the Mir code to be ready to ship as soon as code lands. There’s very little going back and figuring out how the new feature has caused regressions in other parts of the code.
  • There’s much less debugging in the initial rounds of development, as you’ve already planned and written out tests for what you want the code to do.
  • It takes a bit more faith when you’re starting a new line of work that you’ll be able to get the code completed. Test-driven development encourages more exploratory spikes (which tend to have exploratory interfaces), followed by revisiting the code to methodically introduce refactorings and new interfaces that are clearer than the ropey interfaces seen in the ‘spike’ branches. That is, the interfaces that land tend to be the second-attempt interfaces, selected with a fuller understanding of the problem, and tend to be more coherent.
  • You end up with more modular, object-oriented code, because generally you’re writing a minimum of two implementations of any interface you’re working on (the production code, and the mock/stub)
  • The reviews tend to be less about whether things work, and more about the sensibility of the interfaces.

by Kevin at March 20, 2015 11:31 PM

March 19, 2015

Akkana Peck

Hints on migrating Google Code to GitHub

Google Code is shutting down. They've sent out notices to all project owners suggesting they migrate projects to other hosting services.

I moved all my personal projects to GitHub years ago, back when Google Code still didn't support git. But I'm co-owner on another project that was still hosted there, and I volunteered to migrate it. I remembered that being very easy back when I moved my personal projects: GitHub had a one-click option to import from Google Code. I assumed (I'm sure you know what that stands for) that it would be just as easy now.

Nope. Turns out GitHub no longer has any way to import from Google Code: it tells you it can't find a repository there when you give it the address to Google's SVN repository.

Google's announcement said they were providing an exporter to GitHub. So I tried that next. I had the new repository ready on GitHub -- under the owner's account, not mine -- and I expected Google's exporter to ask me for the repository.

Not so. As soon as I gave it my OAuth credentials, it immediately created a new repository on GitHub under my name, using the name we had used on Google Code (not the right name, since Google Code project names have to be globally unique while GitHub projects don't).

So I had to wait for the export to finish; then, on GitHub, I went to our real repository, and did an import there from the new repository Google had created under my name. I have no idea how long that took: GitHub's importer said it would email me when the import was finished, but it didn't, so I waited several hours and decided it was probably finished. Then I deleted the intermediate repository.

That worked fine, despite being a bit circuitous, and we're up and running on GitHub now.

If you want to move your Google Code repository to GitHub without the intermediate step of making a temporary repository, or if you don't want to give Google OAuth access to your GitHub account, here are some instructions (which I haven't tested) on how to do the import via a local copy of the repo on your own machine, rather than going directly from Google to GitHub: krishnanand's steps for migrating Google code to GitHub
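
The rough shape of that route, for the curious, is a git-svn clone followed by a push to GitHub. This is only an untested sketch, with placeholder project and account names:

# Clone the Google Code SVN history into a local git repository,
# then push it to a GitHub repository you've already created.
git svn clone https://yourproject.googlecode.com/svn/trunk yourproject
cd yourproject
git remote add origin git@github.com:youraccount/yourproject.git
git push -u origin master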

March 19, 2015 07:11 PM

March 18, 2015

Elizabeth Krumbach

Elastic{ON} 2015

I’m finally home for a month, so I’ve taken advantage of some of this time to attend and present at some local events. The first of these was Elastic{ON}, the first user conference for Elasticsearch and related projects now under the Elastic project umbrella. The conference venue was Pier 27, a cruise terminal on the bay. It was a beautiful venue with views of the bay, and a clever use of the terminal while there aren’t ships coming in.

The conference kicked off with a keynote where they welcomed attendees (of which there were over 1300 from 35 countries!) and dove into project history from the first release in 2010. A tour of old logos and websites built up to the big announcement, the “Elastic” rebranding, as the scope of their work now goes beyond the search implied by the former Elasticsearch name. The opening keynotes continued with several leads from projects within the Elastic family, including updates from Logstash and Kibana.

At lunch I ended up sitting with 3 other women who were attending the conference on behalf of their companies (when gender ratios are skewed, this type of congregation tends to happen naturally). We all got to share details about how we were using Elasticsearch, so that was a lot of fun. One woman was doing data analysis against it for her networking-related work, another was using it to store metadata for videos and the third was actually speaking that afternoon on how they’re using it to supplement the traditional earthquake data with social media data about earthquakes at the USGS.

Track sessions began after lunch, and I spent my afternoon camped out in the Demo Theater. The first talk was by the Elastic Director of Developer Relations, Leslie Hawthorn. She talked about the international trio of developer evangelists that she works with, focusing on their work to support and encourage meetup groups worldwide, noting that 75 cities now have meetups with a total of over 17,000 individual members. She shared some tips from successful meetup groups, including offering a 2nd track during meetups for beginners, using an unconference format rather than a set schedule and mixing things up sometimes with hack nights on Elastic projects. It was interesting to learn how they track community metrics (code/development stats, plus IRC and mailing list activity) and she wrapped up by noting the new site at https://www.elastic.co/community where they’re working to add more how-tos and on-ramping content, helped along by their recent acquisition of Found, which has maintained a lot of that kind of material.


Leslie Hawthorn on “State of the Community”

The next session was “Elasticsearch Data Journey: Life of a Document in Elasticsearch” by Alexander Reelsen & Boaz Leskes. When a document enters Elasticsearch as json output from a service like Logstash, it can seem like a bit of a black box as far as what exactly happens to it in order for it to be added to Elasticsearch. This talk went through what happens. The document is first stored in Elasticsearch; which node it lands on is based on several criteria analyzed as it comes in, and the data is normalized and sorted. While the data is coming in, it’s stored in a buffer and also written to a transaction log until it’s actually committed to disk, and it remains in the transaction log until it can be replicated across the Elasticsearch cluster. From there, they went into data retrieval, cluster scaling and, while stressing that replication is NOT backups, how to actually do backups of each node and how to restore from them. Finally, they talked about the data deletion process, how it queues data for deletion on each node in segments, and noted that this is not a reversible operation.

Continuing in the “Life of” theme, I also attended “Life of an Event in Logstash” by Colin Surprenant. In perhaps my favorite talk of the day, Colin did an excellent job of explaining and defining all the terms he used. Contrary to popular belief, this isn’t just useful to folks new to the project: as a systems administrator who maintains dozens of different types of applications over hundreds of servers, I am not necessarily familiar with what Logstash in particular calls everything terminology-wise, so having it made clear during the talk was great. His talk walked us through the 3 stages that events coming into Logstash go through: Input, Filter and Output, and the sized queues between each of them. The Input stage takes whatever data you’re feeding into Logstash and uses plugins to transform it into a Logstash event. The Filter stage actually modifies the data from the event so that the data is made uniform. The Output stage translates the uniform data into whatever output you’re sending it to, whether it’s STDOUT or sending it off to Elasticsearch as json. Knowing the bits of this system is really valuable for debugging loss of documents; I look forward to having the video online to share with my colleagues. EDIT 3/20/2015: Detailed slides online here.


Colin Surprenant on “Life of an Event in Logstash”

I tended to avoid many of the talks by Elasticsearch users talking about how they use it. While I’m sure there are valuable insights to be gained by learning how others use it, we’re pretty much convinced about our use and things are going well. So use cases were fresh to me when the day 2 keynotes kicked off with a discussion with Don Duet, Co-head of Technology at Goldman Sachs. It was interesting to learn that nearly 1/3 of the employees at Goldman Sachs are in engineering or working directly with engineering in some kind of technical analysis capacity. They were also framed as a very tech-conscious company and a long-time open source advocate. In exploring some of their work with Elasticsearch he used legal documents as an example: previously they were difficult to search and find, but using Elasticsearch an engineer was empowered to work with the legal department to make the details about contracts and more both searchable and easier to find.

The next keynote was a surprising one, from Microsoft! As a traditional proprietary, closed-source company, they haven’t historically been known for their support of open source software, at least in public. This has changed in recent years as the world around them has changed and they’ve found themselves needing not only to support open source software in their stacks but to contribute to things like the Linux kernel as well. Speaker Pablo Castro had a good sense of humor about this all as he walked attendees through three major examples of Elasticsearch use at Microsoft. It was fascinating to learn that it’s used for content on MSN.com, which gets 18 billion hits per month. They’re using Elasticsearch on the Microsoft Dynamics CRM for social media data, and in this case they’re actually using Ubuntu as well. Finally, they’re using it for the search tool in their cloud offering, Azure. They’ve come a long way!


Pablo Castro of Microsoft

The final keynote was from NASA JPL. The room was clearly full of space fans, so this was a popular presentation. They talked about how they use Elasticsearch to hold data about user behavior from browsers on internal sites so they can improve them for employees. They also noted the terribly common practice of putting data (in this case, for the Mars rover) into Excel or Powerpoint and emailing it around as a mechanism for data sharing, and how they’ve managed to get this data into Elasticsearch instead, clearly improving the experience for everyone.

After the keynotes, it was time to do my presentation! The title of my talk was “elastic-Recheck Yourself Before You Wreck Yourself: How Elasticsearch Helps OpenStack QA” and I can’t take credit for the title: my boring title was replaced by a suggestion from the talk selection staff. The talk was fun: I walked through our use of Elasticsearch to power our elastic-recheck (status page, docs) tooling in OpenStack. It’s been valuable not only for developer feedback (“your patch failed tests because of $problem, not your code”), but also by giving the QA and Infrastructure teams a much better view into what the fleet of test VMs is up to in the aggregate so we can fix problems more efficiently. Slides from my talk are here (pdf).


All set up for elastic-Recheck Yourself Before You Wreck Yourself

Following my talk, I ended up having lunch with the excellent Carol Willing. We got to geek out on all kinds of topics from Python to clouds as we enjoyed an outdoor lunch by the bay. Until it started drizzling.

The most valuable talk in the afternoon for me was “Resiliency in Elasticsearch and Lucene” with Boaz Leskes & Igor Motov. They began by talking about how with scale came the realization that more attention needed to be paid to recovering from various types of failures, and that they show up more often when you have more workers. The talk walked through various failure scenarios and how they’ve worked (and are working) on making improvements in these areas, including “pulling the plug” for a full shutdown, various hard disk failures, data corruption, several types of cluster and HA failures (splitbrain and otherwise), out-of-memory resiliency and external pressures. This is another one I’m really looking forward to the video from.

The event wrapped up with a panel from GuideStar, kCura and E*Trade on how they’re using Elasticsearch and several “war stories” from their experiences working with the software itself, open source in general and Elastic the company.

In all, the conference was a great experience for me, and it was an impressive inaugural conference, though perhaps I should have expected that given the expertise and experience of the community team they have working there! They plan on doing a second one, and I recommend attendance to folks working with Elasticsearch.

More of my photos from the conference here: https://www.flickr.com/photos/pleia2/sets/72157650940379129/

by pleia2 at March 18, 2015 10:58 PM

March 14, 2015

Akkana Peck

Making a customized Firefox search plug-in

It's getting so that I dread Firefox's roughly weekly "There's a new version -- do you want to upgrade?" With every new upgrade, another new crucial feature I use every day disappears and I have to spend hours looking for a workaround.

Last week, upgrading to Firefox 36.0.1, it was keyword search: the feature where, if I type something in the location bar that isn't a URL, Firefox would instead search using the search URL specified in the "keyword.URL" preference.

In my case, I use Google but I try to turn off the autocomplete feature, which I find distracting and unhelpful when typing new search terms. (I say "try to" because complete=0 only works sporadically.) I also add the prefix allintext: to tell Google that I only want to see pages that contain my search term. (Why that isn't the default is anybody's guess.) So I set keyword.URL to: http://www.google.com/search?complete=0&q=allintext%3A+ (%3A is URL code for the colon character).

But after "up"grading to 36.0.1, search terms I typed in the location bar took me to Yahoo search. I guess Yahoo is paying Mozilla more than Google is now.

Now, Firefox has a Search tab under Edit->Preferences -- but that just gives you a list of standard search engines' default searches. It would let me use Google, but not with my preferred options.

If you follow the long discussions in bugzilla, there are a lot of people patting each other on the back about how much easier the preferences window is, with no discussion of how to specify custom searches except vague references to "search plugins". So how do these search plugins work, and how do you make one?

Fortunately a friend had a plugin installed, acquired from who knows where. It turns out that what you need is an XML file inside a directory called searchplugins in your profile directory. (If you're not sure where your profile lives, see Profiles - Where Firefox stores your bookmarks, passwords and other user data, or do a systemwide search for "prefs.js" or "search.json" or "cookies.sqlite" and it should lead you to your profile.)

Once you have one plugin installed, it's easy to edit it and modify it to do anything you want. The XML file looks roughly like this:

<SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/" xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
<os:ShortName>MySearchPlugin</os:ShortName>
<os:Description>The search engine I prefer to use</os:Description>
<os:InputEncoding>UTF-8</os:InputEncoding>
<os:Image width="16" height="16">data:image/x-icon;base64,ICON GOES HERE</os:Image>
<SearchForm>http://www.google.com/</SearchForm>
<os:Url type="text/html" method="GET" template="https://www.google.com/search">
  <os:Param name="complete" value="0"/>
  <os:Param name="q" value="allintext: {searchTerms}"/>
  <!--os:Param name="hl" value="en"/-->
</os:Url>
</SearchPlugin>

There are four things you'll want to modify. First, and most important, os:Url and os:Param control the base URL of the search engine and the list of parameters it takes. {searchTerms} in one of those Param arguments will be replaced by whatever terms you're searching for. So <os:Param name="q" value="allintext: {searchTerms}"/> gives me that allintext: parameter I wanted.

(The other parameter I'm specifying, <os:Param name="complete" value="0"/>, used to make Google stop the irritating autocomplete every time you try to modify your search terms. Unfortunately, this has somehow stopped working at exactly the same time that I upgraded Firefox. I don't see how Firefox could be causing it, but the timing is suspicious. I haven't been able to figure out another way of getting rid of the autocomplete.)

Next, you'll want to give your plugin a ShortName and Description so you'll be able to recognize it and choose it in the preferences window.

Finally, you may want to modify the icon: I'll tell you how to do that in a moment.

Using your new search plugin

[Firefox search prefs]

You've made all your modifications and saved the file to something inside the searchplugins folder in your Firefox profile. How do you make it your default?

I restarted firefox to make sure it saw the new plugin, though that may not have been necessary. Then Edit->Preferences and click on the Search icon at the top. The menu near the top under Default search engine is what you want: your new plugin should show up there.

Modifying the icon

Finally, what about that icon?

In the plugin XML file I was copying, the icon line looked like:

<os:Image width="16"
height="16">data:image/x-icon;base64,AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAAAAAAA
... many more lines like this then ... ==</os:Image>
So how do I take that and make an image I can customize in GIMP?

I tried copying everything after "base64," and pasting it into a file, then opening it in GIMP. No luck. I tried base64 decoding it (you do this with base64 -d filename >outfilename) and reading it in with GIMP. Still no luck: "Unknown file type".

The method I found is roundabout, but works:

  1. Copy everything inside the tag: data:image/x-icon;base64,AA ... ==
  2. Paste that into Firefox's location bar and hit return. You'll see the icon from the search plugin you're modifying.
  3. Right-click on the image and choose Save image as...
  4. Save it to a file with the extension .ico -- GIMP won't open it without that extension.
  5. Open it in GIMP -- a 16x16 image -- and edit to your heart's content.
  6. File->Export as...
  7. Use the type "Microsoft Windows icon (*.ico)"
  8. Base64 encode the file you just saved, like this: base64 yourfile.ico >newfile
  9. Copy the contents of newfile and paste that into your os:Image line, replacing everything after data:image/x-icon;base64, and before </os:Image>

Whew! Lots of steps, but none of them are difficult. (Though if you're not on Linux and don't have the base64 command, you'll have to find some other way of encoding and decoding base64.)
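
If you'd rather script steps 8 and 9, something along these lines should work on Linux (just a sketch, assuming the icon you exported from GIMP is named icon.ico):

# Base64-encode the icon without line wrapping and wrap it in the os:Image tag.
b64=$(base64 -w0 icon.ico)
printf '<os:Image width="16" height="16">data:image/x-icon;base64,%s</os:Image>\n' "$b64"

Paste the printed line into your plugin XML in place of the old os:Image element.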

But if you don't want to go through all the steps, you can download mine, with its lame yellow smiley icon, as a starting point: Google-clean plug-in.

Happy searching! See you when Firefox 36.0.2 comes out and they break some other important feature.

March 14, 2015 06:35 PM

March 10, 2015

kdub

Mir Android-platform Multimonitor

My latest work on the mir android platform includes multimonitor support! It should work with slimport/mhl; Mir happily sits at an abstraction level above the details of mhl/slimport. This should be available in the next release (probably mir 0.13), or you can grab lp:mir now to start tinkering.
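
If you want to try it, grabbing the code is a one-liner (a sketch assuming you have bzr installed):

# Branch the Mir development trunk from Launchpad.
bzr branch lp:mir
cd mir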

by Kevin at March 10, 2015 01:33 PM

March 08, 2015

Akkana Peck

GIMP: Turn black to another color with Screen mode

[20x20 icon, magnified 8 times] I needed to turn some small black-on-white icons to blue-on-white. Simple task, right? Except, not really. If there are intermediate colors that are not pure white or pure black -- which you can see if you magnify the image a lot, like this 800% view of a 20x20 icon -- it gets trickier.

[Bucket fill doesn't work for this] You can't use anything like Color to Alpha or Bucket Fill, because all those grey antialiased pixels will stay grey, as you see in the image at left.

And the Hue-Saturation dialog, so handy for changing the hue of a sky, a car or a dress, does nothing at all -- because changing hue has no effect when saturation is zero, as for black, grey or white. So what can you do?

I fiddled with several options, but the best way I've found is the Screen layer mode. It works like this:

[Make a new layer] In the Layers dialog, click the New Layer button and accept the defaults. You'll get a new, empty layer.

[Set the foreground color] Set the foreground color to your chosen color.

[Set the foreground color] Drag the foreground color into the image, or do Edit->Fill with FG Color.

Now it looks like your whole image is the new color. But don't panic!

[Use screen mode] Use the menu at the top of the Layers dialog to change the top layer's mode to Screen.

Layer modes specify how to combine two layers. (For a lot more information, see my book, Beginning GIMP). Multiply mode, for example, multiplies each pixel in the two layers, which makes light colors a lot more intense while not changing dark colors very much. Screen mode is sort of the opposite of Multiply mode: GIMP inverts each of the layers, multiplies them together, then inverts them again. All those white pixels in the image, when inverted, are black (a value of zero), so multiplying them doesn't change anything. They'll still be white when they're inverted back. But black pixels, in Screen mode, take on the color of the other layer -- exactly what I needed here.
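
To put rough numbers on it (with channel values scaled to a 0 to 1 range), Screen mode computes approximately: result = 1 - (1 - top) × (1 - bottom). A white pixel underneath (value 1) gives 1 - 0 = 1, so white stays white no matter what the color layer holds; a black pixel (value 0) gives 1 - (1 - top) = top, so black takes on the color layer's value -- exactly the behavior described above.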

Intensify the effect with contrast

[Mars sketch, colorized orange] One place I use this Screen mode trick is with pencil sketches. For example, I've made a lot of sketches of Mars over the years, like this sketch of Lacus Solis, the "Eye of Mars". But it's always a little frustrating: Mars is all shades of reddish orange and brown, not grey like a graphite pencil.

Adding an orange layer in Screen mode helps, but it has another problem: it washes out the image. What I need is to intensify the image underneath: increase the contrast, make the lights lighter and the darks darker.

[Colorized Mars sketch, enhanced  with brightness/contrast] Fortunately, all you need to do is bump up the contrast of the sketch layer -- and you can do that while keeping the orange Screen layer in place.

Just click on the sketch layer in the Layers dialog, then run Colors->Brightness/Contrast...

This sketch needed the brightness reduced a lot, plus a little more contrast, but every image will be different. Experiment!

March 08, 2015 01:22 AM

March 02, 2015

Elizabeth Krumbach

Tourist in Muscat, Oman

I had the honor of participating in FOSSC Oman this February, which I wrote about here. Our gracious hosts were very accommodating to all of our needs, starting with arranging assistance at the airport and lodging at a nearby Holiday Inn.

The Holiday Inn was near the airport without much else around, so it was my first experience with a familiar property in a foreign land. It was familiar enough for me to be completely comfortable, but different enough to never let me forget that I was in a new, interesting place. In keeping with standards of the country, the hotel didn’t serve alcohol or pork, which was fine by me.

During my stay we had one afternoon and evening to visit the sights with some guides from the conference. Speakers and other guests convened at the hotel and boarded a bus which first took us to the Sultan Qaboos Grand Mosque. Visiting hours for non-Muslims were in the morning, so we couldn’t go inside, but we did get to visit the outside gardens and take some pictures in front of the beautiful building.

From there we went to a downtown area of Muscat and were able to browse through some shops that seemed aimed at tourists and enjoy the harbor for a bit. Browsing the shops allowed me to identify some of the standard pieces I may want to purchase later, like the style of traditional incense burner. The harbor was quite enjoyable, a nice breeze coming in to take the edge off the hot days, which topped out around 90F while we were there (and it was their winter!).

We were next taken to Al Alam Palace, where the Sultan entertains guests. This was another outside only tour, but the walk through the plaza up to the palace and around was well worth the trip. There were also lit up mountainside structures visible from the palace which looked really stunning in the evening light.

That evening we headed up to the Shangri-La resort area on what seemed like the outskirts of Muscat. It was a whole resort complex, where we got to visit a beach before meeting up with other conference folks for a buffet dinner and musical entertainment for the evening.

I really enjoyed my time in Oman. It was safe and beautiful, and in spite of the heat, the air conditioning in all the buildings made up for the time we spent outdoors, and the mornings and evenings were nice and cool. There was some apprehension as it was my first trip to the Middle East and as a woman traveling alone, but I had no problems and everyone I worked with throughout the conference and my stay was professional, welcoming and treated me well. I’d love the opportunity to go back some day.

More photos from my trip here: https://www.flickr.com/photos/pleia2/sets/72157650553216248/

by pleia2 at March 02, 2015 02:47 AM

February 24, 2015

Akkana Peck

Tips for developing on a web host that offers only FTP

Generally, when I work on a website, I maintain a local copy of all the files. Ideally, I use version control (git, svn or whatever), but failing that, I use rsync over ssh to keep my files in sync with the web server's files.

But I'm helping with a local nonprofit's website, and the cheap web hosting plan they chose doesn't offer ssh, just ftp.

While I have to question the wisdom of an ISP that insists that its customers use insecure ftp rather than a secure encrypted protocol, that's their problem. My problem is how to keep my files in sync with theirs. And the other folks working on the website aren't developers and are very resistant to the idea of using any version control system, so I have to be careful to check for changed files before modifying anything.

In web searches, I haven't found much written about reasonable workflows on an ftp-only web host. I struggled a lot with scripts calling ncftp or lftp. But then I discovered curlftpfs, which makes things much easier.

I put a line in /etc/fstab like this:

curlftpfs#user:password@example.com/ /servername fuse rw,allow_other,noauto,user 0 0

Then all I have to do is type mount /servername and the ftp connection is made automagically. From then on, I can treat it like a (very slow and somewhat limited) filesystem.

For instance, if I want to rsync, I can

rsync -avn --size-only /servername/subdir/ ~/servername/subdir/
for any particular subdirectory I want to check. A few things to know about this:
  1. I have to use --size-only because timestamps aren't reliable. I'm not sure whether this is a problem with the ftp protocol, or whether this particular ISP's server has problems with its dates. I suspect it's a problem inherent in ftp, because if I ls -l, I see things like this:
    -rw-rw---- 1 root root 7651 Feb 23  2015 guide-geo.php
    -rw-rw---- 1 root root 1801 Feb 14 17:16 guide-header.php
    -rw-rw---- 1 root root 8738 Feb 23  2015 guide-table.php
    
    Note that a file modified a week ago shows a modification time, but files modified today show only a day and year, not a time. I'm not sure what to make of this.
  2. Note the -n flag. I don't automatically rsync from the server to my local directory, because if I have any local changes newer than what's on the server they'd be overwritten. So I check the diffs by hand with tkdiff or meld before copying.
  3. It's important to rsync only the specific directories you're working on. You really don't want to see how long it takes to get the full file tree of a web server recursively over ftp.

How do you change and update files? It is possible to edit the files on the curlftpfs filesystem directly. But at least with emacs, it's incredibly slow: emacs likes to check file modification dates whenever you change anything, and that requires an ftp round-trip so it could be ten or twenty seconds before anything you type actually makes it into the file, with even longer delays any time you save.

So instead, I edit my local copy, and when I'm ready to push to the server, I cp filename /servername/path/to/filename.

Of course, I have aliases and shell functions to make all of this easier to type, especially the long pathnames: I can't rely on autocompletion like I usually would, because autocompleting a file or directory name on /servername requires an ftp round-trip to ls the remote directory.
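
Such functions might look something like this (just a sketch; adjust the paths to your own layout):

# Helpers for the local-copy / ftp-mount workflow described above.
SERVERDIR=/servername
LOCALDIR=~/servername

# Dry-run rsync of one subdirectory from the server, comparing sizes only:
ftpdiff() {
    rsync -avn --size-only "$SERVERDIR/$1/" "$LOCALDIR/$1/"
}

# Push a single edited file back up to the server:
ftppush() {
    cp "$LOCALDIR/$1" "$SERVERDIR/$1"
}

# Example: ftpdiff subdir; ftppush subdir/somefile.php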

Oh, and version control? I use a local git repository. Just because the other people working on the website don't want version control is no reason I can't have a record of my own changes.

None of this is as satisfactory as a nice git or svn repository and a good ssh connection. But it's a lot better than struggling with ftp clients every time you need to test a file.

February 24, 2015 02:46 AM

February 23, 2015

Elizabeth Krumbach

FOSSC Oman 2015

This past week I had the honor of speaking at FOSSC Oman 2015 in Muscat, following an invitation last fall from Professor Hadj Bourdoucen and the organizing team. Prior to my trip I was able to meet up with 2013 speaker Cat Allman who gave me invaluable tips about visiting the country, but above all made me really excited to visit the middle east for the first time and meet the extraordinary people putting on the conference.


Some of the speakers and organizers meet on Tuesday, from left: Wolfgang F. Finke, Matthias Stürmer, Khalil Al Maawali, me and Hadj Bourdoucen

My first observation was that the conference staff really went out of their way to be welcoming to all the speakers, greeting us at the hotel the day before the conference and making sure all our needs were met. My second was that the conference was really well planned and funded. They did a wonderful job finding a diverse speaker list (both topic and gender-wise) from around the world. I was really happy to learn that the conference was also quite open and free to attend, so there were participants from other nearby companies, universities and colleges. I’ll also note that there were more women at this conference than I’ve ever seen at an open source conference, at least half the audience, perhaps slightly more.

The conference itself began on Wednesday morning with several introductions and welcome speeches from officials of Sultan Qaboos University (SQU), the Information Technology Authority (ITA) and Professor Hadj Bourdoucen who gave the opening FOSSC 2015 speech. These introductions were all in Arabic and we were all given headsets for live translations into English.

The first formal talk of the conference was Patrick Sinz on “FOSS as a motor for entrepreneurship and job creation.” In this talk he really spoke to the heart of why the trend has been leaning toward open source, with companies tired of being beholden to vendors for features, being surprised by changes in contracts, and the general freedom of not needing “permission” to alter the software that’s running your business, or your country. After a break, his talk was followed by one by Jan Wildeboer titled “Open is default.” He covered a lot in his talk, first talking about how 80% of most software stacks can easily be shared between companies without harming any competitive advantage, since everyone needs all the basics of hardware interaction, basic user interaction and more, thus making use of open source for this 80% an obvious choice. He also talked about open standards and how important it is to innovation that they exist. While on the topic of innovation he noted that instead of trying to make copies of proprietary offerings, open source is now leading innovation in many areas of technology, and has been for the past 5 years.

My talk came up right after Jan’s, and with a topic of “Building a Career in FOSS” it nicely worked into things that Patrick and Jan had just said before me. In this world of companies who need developers for features and where they’re paying good money for deployment of open source, there are a lot of jobs cropping up in the open source space. My talk gave a tour of some of the types of reasons one may contribute (aside from money, there’s passion for openness, recognition, and opportunity to work with contributors from around the world), types of ways to get involved (aside from programming, people are paid for deployments, documentation, support and more) and companies to aim for when looking to find a job working on open source (fully open source, open source core, open source division of a larger company). Slides from my talk are available here (pdf).

Directly following my talk, I participated in a panel with Patrick, Jan and Matthias (who I’d met the previous day) where we talked about some more general issues in the open source career space, including how language barriers can impact contributions, how the high profile open source security issues of 2014 have impacted the industry and some of the biggest mistakes developers make regarding software licenses.

The afternoon began with a talk by Hassan Al-Lawati on the “FOSS Initiative in Oman, Facts and Challenges” where he outlined the work they’ve been doing in their multi-year plan to promote the use and adoption of FOSS inside of Oman. Initiatives began with awareness campaigns to familiarize people with the idea of open source software, development of training material and programs, in addition to existing certificate programs in the industry, and the deployment of Open Source Labs where classes on and development of open source can be promoted. He talked about some of the plans further out, including more advanced training. He wrapped up his talk by discussing some of the challenges, including continued fears about open source among established technologists and IT managers working with proprietary software, and in general less historical demand for open source solutions. Flavia Marzano spoke next on “The role and opportunities of FOSS in Public Administrations” where she drew upon her 15 years of experience working in the public sector in Italy to promote open source solutions. Her core points centered around the importance of governments releasing data in open formats and the value of laws that make government organizations consider FOSS solutions, if not compel them. She also stressed that business leaders need to understand the value of using open source software: even if they themselves aren’t the ones who will actually read the source code, it’s important that someone in your organization can. Afternoon sessions wrapped up with a panel on open source in government, which talked about how cost is often not a motivator and that much of the work with governments is not a technical issue, but a political one.


FOSS in Government panel: David Hurley, Hassan Al-Lawati, Ali Al Shidhani and Flavia Marzano

The conference wrapped up with lunch around 2:30PM and then we all headed back to our hotels before an evening out, which I’ll talk more about in an upcoming post about my tourist fun in Muscat.

Thursday began a bit earlier than Wednesday, with the bus picking us up at the hotel at 7:45AM and first talks beginning at 8:30AM.

Matthias Stürmer kicked off the day with a talk on “Digital sustainability of open source communities” where he outlined characteristics of healthy open source communities. He first talked about the characteristics that defined digital sustainability, including transparency and lack of legal or policy restrictions. The characteristics of healthy open source communities included:

  • Good governance
  • Heterogeneous community (various motivations, organizations involved)
  • Nonprofit foundation (doing marketing)
  • Ecosystem of commercial service providers
  • Opportunity for users to get things done

It was a really valuable presentation, and his observations were similar to mine when it comes to healthy communities, particularly as they grow. His slides are pretty thorough with main points clearly defined and are up on slideshare here.

After his presentation, several of us speakers were whisked off to have a meeting with the Vice-chancellor of SQU to talk about some of the work that’s been done locally to promote open source education, adoption and training. Can’t say I was particularly useful at this session, lacking experience with formal public sector migration plans, but it was certainly interesting for me to participate in.

I then met up with Khalil for another adventure, over to Middle East College to give a short open source presentation to students in an introductory Linux class. The class met in one of the beautiful Open Source Labs that Hassan had mentioned in his talk; it was a real delight to go to one. It was also fascinating to see that the vast majority of the class was made up of women, with only a handful of men – quite the opposite from what I’m used to! My presentation quickly covered the basics of open source, the work I’ve done both as a paid and volunteer contributor, examples of some types of open source projects (different sizes, structures and volunteer-to-paid ratios) and common motivations for companies and individuals to get involved. The presentation concluded with a great Q&A session, followed by a bunch of pictures and chats with students. Slides from my talk are here (pdf).


Khalil and me at the OSL at MEC

My day wound down back at SQU by attending the paper sessions that concluded the conference and then lunch with my fellow speakers.

Now for some goodies!

There is a YouTube video of each day up, so you can skim through them along with the schedule to find specific talks.

There was also press at the conference, so you can see one release published on Zawya: FOSSC-Oman Kicks Off; Forum Focuses on FOSS Opportunities and Communities and an article by the Oman Tribune: Conference on open source software begins at SQU.

And more of my photos from the conference are here: https://www.flickr.com/photos/pleia2/sets/72157650553205488/

by pleia2 at February 23, 2015 02:15 AM

February 19, 2015

Akkana Peck

Finding core dump files

Someone on the SVLUG list posted about a shell script he'd written to find core dumps.

It sounded like a simple task -- just locate core | grep -w core, right? I mean, any sensible packager avoids naming files or directories "core" for just that reason, don't they?

But not so: turns out in the modern world, insane numbers of software projects include directories called "core", including projects that are developed primarily on Linux so you'd think they would avoid it ... even the kernel. On my system, locate core | grep -w core | wc -l returned 13641 filenames.

Okay, so clearly that isn't working. I had to agree with the SVLUG poster that using "file" to find out which files were actual core dumps is now the only reliable way to do it. The output looks like this:

$ file core
core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (375)

The poster was using a shell script, but I was fairly sure it could be done in a single shell pipeline. Let's see: you need to run locate to find any files with "core" in the name.

Then you pipe it through grep to make sure the filename is actually core: since locate gives you a full pathname, like /lib/modules/3.14-2-686-pae/kernel/drivers/edac/edac_core.ko or /lib/modules/3.14-2-686-pae/kernel/drivers/memstick/core, you want lines where only the final component is core -- so core has a slash before it and an end-of-line (in grep that's denoted by a dollar sign, $) after it. So grep '/core$' should do it.

Then take the output of that locate | grep and run file on it, and pipe the output of that file command through grep to find the lines that include the phrase 'core file'.

That gives you lines like

/home/akkana/geology/NorCal/pinnaclesGIS/core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), too many program headers (523)

But those lines are long and all you really need are the filenames; so pass it through sed to get rid of anything to the right of "core" followed by a colon.

Here's the final command:

file `locate core | grep '/core$'` | grep 'core file' | sed 's/core:.*/core/'

On my system that gave me 11 files, and they were all really core dumps. I deleted them all.
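
If locate turns up more matches than will fit on one command line, or any of the paths contain spaces, the backtick substitution can fall over. One way around that (just a sketch, assuming GNU xargs for the -r and -d options) is to hand the list to file via xargs instead; the rest of the pipeline stays the same:

# xargs runs file in batches, so very long lists are fine, and -d '\n'
# treats each line as a single filename even if it contains spaces;
# -r skips running file at all when nothing matched
locate core | grep '/core$' \
  | xargs -r -d '\n' file \
  | grep 'core file' \
  | sed 's/core:.*/core/'

The output is the same list of core dump paths, one per line.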

February 19, 2015 07:54 PM

February 18, 2015

Jono Bacon

Bobbing for Influence

Companies, communities, families, clubs, and other clumps of humans all have some inherent social dynamics. At a simple level there are leaders and followers, but in reality the lines are rarely as clear as that.

Many leaders, with a common example being some founders, have tremendous vision and imagination, but lack the skills to translate that vision into actionable work. Many followers need structure to their efforts, but are dynamic and creative in the execution. Thus, the social dynamic in organizations needs a little more nuance.

This is where traditional management hierarchies break down in companies. You may have your SVPs, then your VPs, then your Senior Directors, then your Directors, and so on, but in reality most successful companies don’t observe those hierarchies stringently. In many organizations a junior-level employee who has been there for a while can have as much influence and value as a brand new SVP, if not more.

As such, the dream is that we build organizations with crisp reporting lines but in which all employees feel they have the ability to bring their creativity and ideas to logically influence the scope, work, and culture of the organization.

Houston, we have a problem

Sadly, this is where many organizations run into trouble. It seems to be the same ol’ story time after time: as the organization grows, the divide between the senior leadership and the folks on the ground widens. Water cooler conversations and bar-side grumblings fuel the fire, and resentment, frustration, and resume-editing often set in.

So much of this is avoidable though. Of course, there will always be frustration in any organization: this is part and parcel of people working together. Nothing will be perfect, and it shouldn’t be…frustration and conflict can often lead to organizations re-pivoting and taking a new approach. I believe though, that there are a lot of relatively simple things we can do to make organizations feel more engaging.

Influence

A big chunk of the problems many organizations face is around influence. More specifically, the problems set in when employees and contributors feel that they no longer have the ability to influence or have an impact in the organization, and thus their work feels mechanical, unappreciated, and unvalidated.

Now, influence here is subtle. It is not always about being involved in the decision-making or being in the cool meetings. Some people won’t, and frankly shouldn’t, be involved in certain decisions: when we have too many cooks in the kitchen, you get a mess. Or Arby’s. Choose your preferred mess.

The influence I am referring to here is the ability to feed into the overall culture and to help shape and craft the organization. If we want to build truly successful organizations, we need to create a culture in which the very best ideas and perspectives bubble to the surface. These ideas may come from SVPs or from the dude who empties out the bins.

The point being, if we can figure out a formula in which people can feel they can feed into the culture and help shape it, you will build a stronger sense of belonging and people will stick around longer. A sense of empowerment like this keeps people around for the long haul. When people feel unengaged or pushed to the side, they will take the next shiny opportunity that bubbles up on LinkedIn.

Some Practical Things To Do

So, we understand the challenge ahead. How do we beat it? Well, while there are many books written on the subject, I believe there are ten simple approaches we can get started with.

You don’t have to execute them in this order (in fact, these are not in any specific order), and you may place different levels of importance in some of them. I do believe though, they are all important. Let’s take a spin through them.

1. Regularly inform

A lack of information is a killer in an organization. If an organization has problems and is working to resolve them, the knowledge and assurance of solving these challenges is of critical importance to share.

In the Seven Habits, Covey talks about the importance of working on Important, and not just Urgent, things. In the rush to solve problems we often forget to communicate where changes, improvements, and engagement are happening. No one ever cried about getting too much clarity, but the inverse has resulted in a few gin and tonics in an evening.

There are two key types of updates here: informational and engagement. For the former, this is the communication to the wider organization. It is the memo, or if you are more adventurous, the podcast, video presentation, all-hands meeting or otherwise. These updates are useful, but everyone expects them to be very formal, lack specifics, and speak in generalities.

The latter, engagement updates, are within specific teams or with individuals. These should be more specific, and where appropriate, share some of the back-story. This gives a sense of feeling “in” on the story. Careful use of both approaches can do wondrous things to build a sense of engagement with leadership.

2. Be collaborative around the mission and values

Remember that mission statement you wrote and stuck on a web page or plaque somewhere? Yeah, so do we. Looked at it recently? Probably not.

Mission statements are often broad and ambiguous: written once, then mostly forgotten. They are typically drafted by a select group of people, and everyone on the ground in service of that very mission typically feels rather disconnected from it.

Let’s change that. Dig out the mission statement and engage with your organization to bring it up to date. Have an interactive conversation about what people feel the broader goals and opportunities are, and take practical input from people and merge it into the mission. You will end up with a mission that is more specific, more representative, and that people really feel a part of.

Do the same for your organizational values, code of conduct, and other key documents.

3. Provide opportunities for success

The very best organizations are ones where everyone has the opportunities to bring their creativity to the fold and further our overall mission and goals. The very worst organizations shut their people down because their business card doesn’t have the right thing written on it, or because of a clique of personalities.

We want an environment where everyone has the opportunity to step up to the plate. An example of this was when I hired a translations coordinator for my team at Canonical. He did great work, so I offered him opportunities to challenge himself and his skills. That same guy filled my shoes when I left Canonical a few years later.

Now, let’s be honest. This is tough. It relies on leaders really knowing their teams. It relies on seeing potential, not just ticked-off work items. If you create a culture though where you can read potential, tap it, and bring it into new projects, it will create an environment in which everyone feels opportunity is around the corner if they work hard.

4. If you Make Plans, Action Them

This is going to sound like a doozy, but it blows me away how much this happens. This is one for the leaders of organizations. Yes, you reading this: this includes you.

If you create a culture in which people can be more engaged, this will invariably result in new plans, ideas, and platforms. When these plans are shared, those people will feel engaged and excited about contributing to the wider team.

If that then goes into a black hole never to be assessed, actioned, or approved, discontentment will set in.

So, if you want to have a culture of engagement, take the time to actually follow up and make sure people can actually do something. Accepting great ideas, agreeing to them, and not following up will merely spark frustration for those who take the initiative to think holistically about the organization.

5. Regularly survey

It never ceases to amaze me how valuable surveys can be. You often have an idea of what you think people have a perspective on, you decide to survey them, and the results are in many cases enlightening.

Well-structured surveys are an incredibly useful tool. You don’t need to do any crazy data analysis on these things: you often just need to see the general trends and feedback. It is important in these surveys to always have a general open-ended question that can gather all the feedback that didn’t fit neatly into your question matrix.

Of course, there is a whole science around running great surveys, and some great books to read, but my primary point here is to do them, do them often, and learn from and action the results.

One final point: surveys will often freak managers out as they will worry about accountability. Don’t treat these worries with a sledgehammer: help them to understand the value of learning from feedback and to embrace a culture in which we constantly improve. This is not about yelling about mistakes, it is about exploring how we improve.

6. Create a thoughtful management culture

OK, that title might sound a little fluffy, but this is a key recommendation.

I learned from an old manager a style of management that I have applied subsequently and that I feel works well.

The idea is simple: when someone joins my team, I tell them that I want to help them in two key ways. Firstly, I want them to be successful in their role, to have all the support they need, to get the answers they need, and to be able to do a great job and enjoy doing it. Most managers focus their efforts here.

What is important is the second area of focus as a manager. I tell my team members that I want to help them be the very best they can be in their career; to support, mentor, and motivate them to not just do a great job here at the organization, but to feel that this time working here was one that was a wider investment in their career.

I believe both of these pledges from a manager are critical. Think about the best managers and teachers you have had: they paid attention to your immediate as well as long-term success.

If you are on the executive team of a company, you should demand that your managers make both of these pledges to their teams. This should be real and authentic, not just words.

7. Surprise your staff

This is another one for leaders in an organization.

We are all people and in business we often forget we are people. We all have hobbies, interests, ideas, jokes, stories, experiences to share. When we infuse our organizations with this humanity they feel more real and more engaging.

In any melting pot of an organization, some people will freely share their human side…their past experiences, stories, families, hobbies, favorite movies and bands…but in many cases, the more senior up the chain you go, the more these human elements become isolated and shared only with people of a similar rank in the organization. This creates leadership cliques.

In many cases, seeing leaders surprise their staff and be relaxed, open, and engaging can send remarkably positive messages. It shows the human side of someone who may be primarily experienced by staff as merely giving directives and reviewing performance. Remember, folks, we are all animals.

8. Set expectations

Setting expectations is a key thing in many successful projects. Invariably though, we often think about the expectations of consumers of our work; stakeholders, customers, partners etc.

It is equally important to set expectations with our teams that we welcome input, ideas, and perspectives for how the team and the wider organization works.

I like to make this bluntly clear to anyone I work with: I want all feedback, even if that feedback is deeply critical of me or the work I am doing. I would rather have an uncomfortable conversation and be able to tend to those concerns than never hear them in the first place and keep screwing up.

Thus, even if you think it is well understood that feedback and engagement is welcome, make it bluntly clear, from the top level and throughout the ranks that this is not only welcome, but critical for success.

9. Focus on creativity and collaboration

I hated writing that title. It sounds so buzzwordy, but it is an important point. The most successful organizations are ones that feel creative and collaborative, and where people have the ability to explore new ideas.

Covey talks about the importance of synergy and that working with others not only brings the best out of us, but helps us to challenge broken or misaligned assumptions. As such, getting people together to creatively solve problems is not just important for the mission, but also for the wellbeing of the people involved.

As discussed earlier though, we want to infuse specific teams with this, but also create a general culture of collaboration. To do this on a wider level you could have organization-wide discussions, online/offline planning events, incentive competitions and more.

10. Should I stay or should I go?

This is going to be a tough pill to swallow for some founders and leaders, but sometimes you just need to get out of the way and let your people do their jobs.

Organizations that are too directed and constrained by leadership, either senior or middle-management, feel restrictive and limiting. Invariably this will quash the creativity and enthusiasm in some staff.

We want to strike a balance where teams are provided the parameters of what success looks like, and then leadership trusts them to succeed within those parameters. Regular gate reviews make perfect sense, but daily whittering over specifics does not.

This means that for some leaders, you just need to get out of the way. I learned this bluntly when a member of my team at Canonical told me over a few beers one night that I needed to stop meddling and leave the team alone to get on with a project. They were right: I was worried about my team’s delivery and projecting that worry down by micro-managing them. I gave them the air they needed, and they succeeded.

On the flip side, we also need to ensure leadership is there for support and guidance when needed. Regular check-ins, 1-on-1s, and water-cooler time are a comfortable way to provide this.

I hope this was useful and if nothing else, provided some ideas for further thinking about how we build organizations where we can tap into the rich chemistry of ideas, creativity, and experience in our wider teams. As usual, feedback is always welcome. Thanks for reading!

by jono at February 18, 2015 05:45 PM