Planet Ubuntu California

July 26, 2016

Jono Bacon

The Risks of Over-Rewarding Communities

Incentives play an important role in communities. We see them everywhere: community members receive extrinsic rewards such as t-shirts, stickers, gadgets, and other merchandise, or intrinsic rewards such as increased responsibilities, kudos, reputation, and other benefits.


The logic seems sound: if someone is the bee’s knees and doing a great job, they deserve to be rewarded. People like rewards, and rewards make people want to stick around and contribute more. What’s not to love?

There is, though, some interesting evidence to suggest that over-rewarding your communities, whether internal to an organization or external, carries some potent risks. Let’s explore the evidence and then see how we can harness it.

The Research

Back in 1908, the psychologists Yerkes and Dodson (and potential prog rock band) developed the Yerkes-Dodson Law. It suggests that performance on a task increases with arousal, but only up to a point. Now, before you get too hot under the collar, this study refers to mental or physiological arousal, such as motivation. The study highlights a “peak arousal” point: the right amount of arousal to hit maximal performance.

Dan Ariely, in The Upside of Irrationality, took this research and built on it to test the effect of extrinsic rewards on performance. He asked a group of people in India to perform tasks with varying levels of financial reward (from very small up to very high). His results were interesting:

Relative to those in the low- or medium-bonus conditions, they achieved good or very good performance less than a third of the time. The experience was so stressful to those in the very-large-bonus condition that they choked under the pressure.

I found this choke point insight interesting. We often see an inverse choke point when the stress of joining a community is too high (e.g. submitting a first code pull request to your peers). But do we also see choke points for community members facing a high level of pressure to perform?

Community Strategy Implications

I am not so sure. Many communities have high performing community members with high levels of responsibility (e.g. release managers, security teams, and core maintainers) who perform with predictably high quality results.

Where we often see the ugly side of community is with entitlement; that is, when some community members expect to be treated differently from others.

When I think back to the cases where I have seen examples of this entitlement (which shall remain anonymous to protect the innocent), it has invariably been due to an imbalance of expectations and rewards. In other words, when members’ expectations don’t match their level of influence on a community, and/or they feel rewarded beyond that suitable level of influence, entitlement tends to brew.

As such, my graph looks a little like this:

[Graph: an annotated Yerkes-Dodson curve]

This shows the Yerkes-Dodson curve but subdivides the timeline into three distinctive areas. The first area is for growth, where we use rewards as a means to encourage participation. The middle area is for maintenance: ensuring regular contribution over an extended period of time. The final area is the danger zone – this is where entitlement can set in, so we want to ensure that we manage expectations and rewards carefully. In this end zone we want to reward great work but ultimately cap the size of the reward – lavish gifts and experiences are probably not going to have as much impact and may even risk the dreaded entitlement phenomenon.
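
Since the graph is an image, here is a minimal sketch of the shape it describes (my own illustration in Python, using a quadratic inverted U and hypothetical zone boundaries, not the post’s actual figure):

    import numpy as np

    # Model performance as an inverted U over arousal (0..1), peaking at 0.5.
    # This is a common simplification of the Yerkes-Dodson curve.
    arousal = np.linspace(0.0, 1.0, 11)
    performance = 1 - 4 * (arousal - 0.5) ** 2

    def zone(a):
        # Hypothetical boundaries for the three areas described above.
        if a < 0.33:
            return "growth"       # reward to encourage participation
        if a < 0.66:
            return "maintenance"  # sustain regular contribution
        return "danger"           # cap rewards to avoid entitlement

    for a, p in zip(arousal, performance):
        print(f"arousal={a:.1f}  performance={p:.2f}  zone={zone(a)}")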

This narrative matches a hunch I have had for a while that rewards have a direct line to expectations. If we can map our rewards to effectively mitigate the inverse choke point for new members (thus make it easier to get involved) and reduce the latter choke point (thus reduce entitlement), we will have a balanced community.

Things You Can Do

So, dear reader, this is where I give you some homework you can do to harness this research:

  1. Define what a ‘good’ contribution is – before you can reward people well, you need to decide what a good contribution is. As an example, is a good code contribution a well-formed, submitted, reviewed, and merged pull request? Decide what it is and write it down.
  2. Create a platform for effectively tracking capabilities – while you can always throw out rewards willy-nilly based on observations of performance, this risks accusations of rewarding some but not others. As such, implement an independent way of mapping this good contribution to some kind of automatically generated numeric representation (e.g. reputation/karma); a minimal sketch follows this list.
  3. Front-load intrinsic rewards – for new contributors in the growth stage, intrinsic rewards (such as respect, support, and mentoring) are more meaningful as these new members are often nervous about getting started. You want these intrinsic rewards primarily at the beginning of a new contributor on-ramp – it will build a personal sense of community with them.
  4. Carefully distribute extrinsic rewards – extrinsic rewards such as clothing, gadgets, and money should be carefully distributed along the curve in the graph above. In other words, give out great material, but don’t make it too opulent otherwise you may face the latter choke point.
  5. Create a distribution curve of expectations – in the same way we are mapping rewards to the above graph, we should do the same with expectations. At different points in the community lifecycle we need to provide different levels of expectations and information (e.g. limited scope for new contributions, much wider for regular participants). Map this out and design systems for delivering it.
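
For the second item, here is a minimal sketch of how a ‘good’ contribution could be mapped to an automatically generated reputation score. The event names and weights are hypothetical placeholders of mine, not any real platform’s API:

    # Hypothetical weights favoring the full "well formed, submitted,
    # reviewed, and merged" definition of a good code contribution.
    WEIGHTS = {
        "pr_submitted": 1,
        "pr_reviewed": 2,
        "pr_merged": 5,
    }

    def karma(events):
        # Sum reputation for a list of (member, event) pairs.
        scores = {}
        for member, event in events:
            scores[member] = scores.get(member, 0) + WEIGHTS.get(event, 0)
        return scores

    # Example: a merged pull request outweighs repeated raw submissions.
    log = [("alice", "pr_submitted"), ("alice", "pr_reviewed"),
           ("alice", "pr_merged"), ("bob", "pr_submitted"),
           ("bob", "pr_submitted")]
    print(karma(log))  # {'alice': 8, 'bob': 2}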

If we can be mindful of the Yerkes-Dodson curve and balance expectations and rewards well, we have the ability to build truly engaging and incentivized communities and organizations.

I would love to have a discussion about this in the comments. Do you think this makes sense? What am I missing in my thinking here? What are great examples of effective rewards? How have you reduced entitlement? Share your thoughts…

by Jono Bacon at July 26, 2016 04:04 PM

July 25, 2016

Elizabeth Krumbach

The Official Ubuntu Book, 9th Edition released!

Back in 2014 I had the opportunity to lend my expertise to the 8th edition of The Official Ubuntu Book and began my path into authorship. Since then, I’ve completed the first edition of Common OpenStack Deployments, coming out in September. I was thrilled this year when Matthew Helmke invited me back to work on the 9th edition of The Official Ubuntu Book. We also had José Antonio Rey joining us for this edition as a third co-author.

One of the things we focused on with the 8th edition, knowing that it would have a shelf life of 2 years, was future-proofing. With the 9th edition we continued this focus, but also wanted to add a whole new chapter: Ubuntu, Convergence, and Devices of the Future.

Taking a snippet from the book’s sample content, the chapter gives a whirlwind tour of where Ubuntu on desktops, servers and devices is going:

Chapter 10: Ubuntu, Convergence, and Devices of the Future 261

The Convergence Vision 262
Unity 263
Ubuntu Devices 264
The Internet of Things and Beyond 268
The Future of the Ubuntu Desktop 272
Summary 273

The biggest challenge with this chapter was the future-proofing. We’re in an exciting point in the world of Ubuntu and how it’s moved far beyond “Linux for Human Beings” on the desktop and into powering servers, tablets, robots and even refrigerators. With the Snappy and Ubuntu Core technologies both powering much of this progress and changing rapidly, we had to be cautious about how in depth we covered this tooling. With the help of Michael Hall, Nathan Haines and Sergio Schvezov I believe we’ve succeeded in presenting a chapter that gives the reader a firm overview of these new technologies, while being general enough to last us until the 10th edition of this book.

Thanks also to Thomas Mashos of the Mythbuntu team and Paul Mellors, who pitched in with this edition as well. Finally, as with the last edition, it was a pleasure to work with Matthew and José on this book. I hope you enjoy it!

by pleia2 at July 25, 2016 08:27 PM

Jono Bacon

Audio Interview: On Building Communities, Careers, and Traversing Challenges


Last week I was interviewed by the wonderful Eric Wright for the GC On-Demand Podcast.

Over the years I have participated in various interviews, and this was a particularly fun, meaty, and meaningful discussion. I think it could be worth a listen, particularly if you are interested in community growth, but also in leadership and in facing and traversing challenges.

Some of the topics we discussed included:

  • How I got into this business.
  • What great communities look like and how to build them.
  • How to keep communities personal, particularly when dealing with scale.
  • Managing the expectations of different parts of an organization.
  • My 1/10/100 rule for mentoring and growing your community.
  • How to evolve and grow the skills of your community members and teams in a productive way.
  • My experiences working at Canonical, GitHub and XPRIZE.
  • Increasing retention and participation in a community.
  • Building effective leadership and leading by example.
  • Balancing open source consumption and contribution.
  • My recommended reading list.
  • Lots of fun anecdotes and stories.

So, go and grab a cup of coffee, and use the handy player below to listen to the show:

You can also find the show here.

Eric is a great guy and has a great podcast. I encourage you to check out his website and subscribe to the podcast feed to stay up to date with future episodes.

by Jono Bacon at July 25, 2016 03:00 PM

July 22, 2016

Elizabeth Krumbach

Ubuntu 16.04 in the SF Bay Area

Back in June I gave a presentation on the 16.04 release down at FeltonLUG, which I wrote about here.

Making my way closer to home, I continued my tour of Ubuntu 16.04 talks in the San Francisco Bay Area. A couple weeks ago I gave the talk at SVLUG (Silicon Valley Linux Users Group) and on Tuesday I spoke at BALUG (Bay Area Linux Users Group).

I hadn’t been down to an SVLUG meeting in a couple of years, so I appreciated the invitation. They have a great space set up for presentations, and the crowd was very friendly. I particularly enjoyed that folks came with a lot of questions, which made for an engaging evening and stretched what is on its own a pretty short talk into one that filled the whole presentation time. Slides: svlug_ubuntu_1604.pdf (6.0M), svlug_ubuntu_1604.odp (5.4M)


Presentation, tablets and giveaways at SVLUG

At BALUG this week things were considerably more casual. The venue is a projector-less Chinese restaurant these days and the meetings tend to be on the small side. After family style dinner, attendees gathered around my big laptop running Ubuntu as I walked through my slide deck. It worked better than expected, and the format definitely lent itself to people asking questions and having discussions throughout too. Very similar slides to the ones I had at SVLUG: balug_ubuntu_1604.pdf (6.0M), balug_ubuntu_1604.odp (5.4M)


Setup and giveaways at BALUG

Next week my Ubuntu 16.04 talk adventures culminate in the event I’m most excited about: the San Francisco Ubuntu 16.04 release party at the OpenDNS office, located at 135 Bluxome St in San Francisco!

The event is on Thursday, July 28th from 6:30 – 8:30PM.

It’s right near the Caltrain station, so wherever you are in the bay it should be easy to get to. We’ll have:

  • Laptops running Ubuntu and Xubuntu 16.04.
  • Tablets running the latest Ubuntu build, including the bq Aquaris M10 that shipped with Ubuntu and demonstrates convergence.
  • Giveaways, including the 9th edition of The Official Ubuntu Book (new release!), pens, stickers and more.

I’ll need to plan for food, so I need folks to RSVP. There are a few options for RSVP:

Need more convincing? It’ll be fun! And I’m a volunteer whose systems engineering job is unrelated to the Ubuntu project. In order to continue putting the work into hosting these events, I need the satisfaction of having people come.

Finally, event packs from Canonical are now being shipped out to LoCos! It’s noteworthy that for this release, instead of shipping DVDs, which have sharply declined in popularity over the past couple of years, they are now shipping USB sticks. These are really nice, but the distribution is limited to just 25 USB sticks in the shipment for the team. This is an order of magnitude fewer than we got with DVDs, but they’re also much more expensive.


Event pack from Canonical

Not in the San Francisco Bay Area? If you feel inspired to give an Ubuntu 16.04 presentation, you’re welcome to use my slides, and I’d love to see pictures from your event!

by pleia2 at July 22, 2016 12:17 AM

July 21, 2016

Jono Bacon

Hack The World


As some of you will know, recently I have been consulting with HackerOne.

I just wanted to share a new competition we launched yesterday called Hack The World. I think it could be interesting to those of you already hacking, but also those of you interested in learning to hack.

The idea is simple. HackerOne provides a platform where you can go and hack on popular products/services (e.g. Uber, Adobe, GitHub, Square, Slack, Dropbox, GM, Twitter, Yahoo!, and many more) and submit vulnerability reports. This is awesome for hackers as they can safely hack on products/services, try out new hacking approaches/tools, build relationships with security teams, build a resume of experience, and earn some cold hard cash.

Currently HackerOne has 550+ customers, has paid out over $8.9 million in bounties, and has helped fix over 25,000 vulnerabilities, which makes for a safer Internet.

Hack The World

Hack The World is a competition that runs from 20th July 2016 to 19th September 2016. During that period we are encouraging people to hack programs on HackerOne and submit vulnerability reports.

When you submit a valid vulnerability report, the program may award you a bounty payment (many people all over the world earn significant buckets of money from bounties). In addition, you will be awarded reputation and signal. Reputation is an indicator of your activity and participation, and signal is the average reputation across your reports (for example, three reports earning 7, 7, and 1 reputation would give a signal of 5).

Put simply, whoever earns the most reputation in the competition can win some awesome prizes including $1337 in cash, a hackable FPV drone kit, awesome limited edition swag, and bragging rights as being one of the most talented hackers in the world.

To ensure the competition is fair for everyone, we have two brackets – one for experienced hackers and one for new hackers. There will be 1st, 2nd, and runner-up prizes in each bracket. This means that those of you new to hacking have a fighting chance to win!

Joining in the fun

Getting started is simple. Just go and register an account, or sign in if you already have one.

To get you started, we are providing a free copy of Peter Yaworski’s awesome Web Hacking 101 book. Ensure you are logged in and then go here to grab the book. It will then be emailed to you.

Now go and find a program, start hacking, learn how to write a great report, and submit reports.

When your reports are reviewed by the security teams of the programs you are hacking on, reputation will be awarded. You will then start appearing on the Hack The World leaderboard, which at the time of writing looks a little like this:

[Screenshot: the Hack The World leaderboard]

This data is almost certainly out of date as you read this, so go and see the leaderboard here!

So that’s the basic idea. You can read all the details about Hack The World by clicking here.

Hack The World is a great opportunity to hack safely, explore new hacking methods/tools, make the Internet safer, earn some money, and potentially be crowned as a truly l33t hacker. Go hack and prosper, people!

by Jono Bacon at July 21, 2016 03:00 PM

July 18, 2016

Elizabeth Krumbach

CodeConf 2016

In the last week of June I had the pleasure of attending CodeConf in sunny Hollywood, Los Angeles. As I wrote in my tourist account of this trip, it was my first visit to Hollywood.

The event commenced on Monday with a series of tutorials, and I took the opportunity to pick up my badge and get acquainted with the event staff. In the early evening I went to the venue, AVALON Hollywood, to complete my A/V check. My laptop is restricted to DisplayPort and VGA, and I think they mostly expected Macs, so I had a pile of adapters at my side to figure out which would work best. That’s when I got my first glimpse of the historic venue hosting the conference; it was a beautiful space for a single-track conference. Also, there was a really nice ceiling piece.


Performer-eye view of the stage at Avalon Hollywood

That evening I met up with the organizers and my fellow speakers at the EP Lounge. I really enjoyed this gathering: it was small enough that I felt comfortable, everyone was super friendly and eager to include shy, introverted me in their conversations, and I met a whole slew of brilliant people. That’s also where they presented speakers with our speaker gift, CodeConf Vans SHOES! I’m still breaking them in, but they fit really well.

Tuesday kicked off the post-tutorials conference. Breakfast was provided via a series of food trucks in an adjacent lot. In spite of the heat, it was a great setup.

Conference-wise, I can’t possibly cover all the talks, but there were several notable ones in which I learned something new or was somehow inspired. Michael Bernstein got us started with a talk about “The Perfect Programming Language”, where he told a story about an old notebook that outlined the key features of “the perfect programming language” but taught us that perfection goes beyond the code. Not only does the perfect programming language not exist; what makes a language great is also about things less glamorous than language mechanics, like documentation, testing, packaging, and practical adoption. The perfect programming language, he posits, is the one you’re using now. He also implored the audience to rise above language wars and to instead appreciate the strengths of other languages and adopt from them what they do right.

Mid-day I had the pleasure of listening to E. Dunham talk about the community processes in the Rust community. What I particularly loved about her talk was that she addressed how both the social and the technical components of a community create a better atmosphere for contributors. The social components included having high expectations for the behavior of your community members (including a Code of Conduct), providing simple methods of communication for all contributors, and being culturally supportive of showing appreciation for contributions people have made, especially newcomers. On the technical side, she talked a lot about robots! Bots that send a welcome message to new contributors, bots that test the code before it’s merged, pull request templates on GitHub to help guide new contributors, and more. There’s no replacing the personal touch, but there’s a lot of routine work that can be done by bots.


E. Dunham on Rust community processes

After lunch Anton McConville presented a talk about natural language processing using his David Bowie personas tooling. The heart of the talk was the modern ability to process natural language (say, your tweets) to draw conclusions. He demonstrated this with his Ziggy website, which analyzes Bowie personas through lyrics and is powered by IBM’s Watson and IBM’s natural language analysis tooling. Through his tooling and website he did an analysis of David Bowie lyrics across albums and decades to track various emotions and map them to the artist’s public personal history. In addition, there’s a feature where you can put in your own Twitter handle to see which David Bowie persona you most closely match.

Another notable talk was by Mike McQuaid on The Contributor Funnel. He used the well-known sales funnel as an analogy to present the different, fluid groups of people in your community: users, contributors, and trusted maintainers. The point of his talk was that efforts should continually be made to “upsell” community members to the next level of contribution. You want your users to become contributors, contributors to become maintainers, and maintainers to have the mindset to foster an environment where they can continually accept and welcome the newest generation of maintainers. He suggested not making assumptions about users (like assuming they know how to use Git and GitHub) and having a new-maintainer checklist so you don’t have to remember what resources and tooling new folks need to be added to. He also talked about avoiding bikeshedding in communications, having a code of conduct, and making constant growth of your community a priority.

I really enjoyed the next trio of talks. First up was Anjuan Simmons on Lending Privilege. What he meant by this was to work not only toward building up diversity in your organization, but also toward inclusion. His talk stressed the importance of what people in the majority populations in tech can do to help minorities, including lending them your credibility, helping them gain the access to tooling and the levels of trust that you have, encouraging them in their roles, and sharing expertise. On a personal note, I’ll emphasize that it’s easier to be a mentor to people who share your background, race, and gender, which results in minorities struggling to find mentors. We must do better than what is easy and work to mentor people who are different than we are.

David Molina then presented what was probably the most inspirational talk of the conference: What Happens When Military Veterans Learn to Code. Through the organization he founded, Operation Code, he is seeking to put veterans in touch with the resources they need to get into code camps and launch a new career in programming. The organization accomplishes this through scholarships for veterans to attend code camps, recruitment of industry mentors (like us!), open source projects within the organization where their code camp graduates can publicly demonstrate expertise, and job placement. It was interesting to learn that the GI Bill does not support code camps since they aren’t accredited, so in addition to handling the status quo through external scholarships, he’s also working with organizations to get accreditation and petitioning for modernization of the 1940s-era requirements of the GI Bill, many of which don’t help veterans get job-ready skills today. I’m incredibly appreciative to David for his own service to our country as a veteran, his commitment to his fellow veterans, and for bringing this to our attention.


David Molina on Operation Code

The final talk of the day was about the Let’s Encrypt initiative. I’ve known about this initiative since the beta launch last year, but I’ve been cautious about moving away from CAcert for my own domains. Josh Aas, one of the project’s founders, spoke on the history and rationale of the project, which seeks to enable all sites to have at least the most basic SSL certificate, which they provide free of charge. They also have a goal of making the process much easier, as the current one tends to be very technical and complicated, and varies greatly based on the SSL certificate vendor. I have to say that after seeing this talk, I’m much more inclined to seriously consider it the next time I renew my certificates.

Wednesday began with an excellent talk by Nadia Eghbal, on Emerging Models for Open Source Contributions. She walked us through the history of open source with an eye toward leadership models. She covered the following:

  1. Early days (1980s through early 90s) where Benevolent Dictator For Life (BDFL) was common. Leadership was centralized and there were a limited number of contributors and users. This model is simple, but tended to make companies nervous about control of the project and its ability to continue should the BDFL cease involvement.
  2. Maturing of open source era (late 1990s through 2010) where meritocracy ruled and commitment to the project was still required. This helped highly competent (though non-diverse) communities grow and started to get companies involved.
  3. Modern open source communities (2010 through today) where many projects have adopted a liberal contributions model. With tooling like GitHub, contributors have a common set of tools for contributing, and one-off contributors are common. She shared that among the largest projects on GitHub, many had a large percentage of contributors who only contributed to the project once. This style of contribution was more difficult in the past, when you may have needed to be known by the BDFL or to have earned merit in the community before your contributions were reviewed and accepted.

I also liked that she didn’t just put all these methods into boxes and say there was a one size fits all model for all projects. While the leadership models have mapped to time and eras in Open Source, it didn’t necessarily mean that the newer models were appropriate everywhere. Instead, each project now has these models to seek ideas from and evaluate for their communities. I found further details about her talk here.

There was then a talk by Mitchell Hashimoto of HashiCorp on The HashiCorp Formula to Open Source. Having produced a series of successful open source projects, of which I’ve used two (Vagrant and Terraform), Mitchell spoke on the formula his company has used to continually produce successful projects. His path to success was the following:

  1. Find a problem and evaluate other solutions on the market
  2. Design a solution with human language (don’t write code yet!)
  3. Build and release the 0.1 version based on a basic reference use case and spend 3-6 months on it (no more, no less)
  4. Write human-centric documentation and a landing page (these are different things!), partially so you can effectively collect and respond to 0.1 feedback
  5. Ship and share (0.2 should come quickly; aim for production-ready 0.3-0.5 releases and then give talks about it!)

Of course his excellent talk dove into a considerable amount of detail on each step, which is worth watching if the video is made available.

My talk was at noon, where I spoke on building an open source cloud (slides, PDF). The focus of my talk was squarely on OpenStack, but my recommendations for use of configuration management for maintainability and expertise you want on your cloud-building team were universal.


Thanks to E. Dunham for snapping a photo during my talk! (source)

After lunch I really enjoyed a talk by Tracy Osborn on Design for Non-designers. I’ll begin by saying that I have the utmost respect for designers who not only have an educational background in design, but have made design their career. I have paid web designers before for this very reason. That said, as a systems engineer I can use all the help I can get with design! The talk format was a brief introduction to how design is taught (and why she wasn’t going into that), followed by a demonstration of the considerable improvements that could be made to a dialog window using the suggestions she outlined. She covered: cutting down on clutter, lining things up, use of color (see palettes like those at ColourLovers.com for inspiration), use of a maximum of two fonts (use TypeWolf to find open source fonts), use of white space, use of bright colors for important things on your page, and a super quick tutorial on migrating paragraphs to a series of bullet points. I’m really taking these recommendations to heart, thanks Tracy!

The final two talks that really spoke to me were on public data and a tooling-agnostic look at debugging. First up was Tyrone Grandison from the US Department of Commerce. I’ll start off by saying I love open data talks. They always make me want to learn more programming so I can come up with fun and interesting ways to use data, and this talk was no exception. Tyrone himself is a self-proclaimed data geek, and that showed through; his relatively new team has been really productive. They’ve been supporting US government organizations releasing their data in a public, usable form and in turn writing tutorials to help organizations use the data effectively. I’m really impressed by their work. A link dump of resources he shared: US Commerce Data Service, US Commerce Data Usability Project and US Commerce Data Service on GitHub, which includes the aforementioned tutorials.

The last talk was by Kerri Miller on Crescent Wrenches, Socket Sets, and Other Tools For Debugging. I was somewhat worried this talk would be about specific technical tools (maybe crescent wrenches and socket sets are open source tools I don’t know about?), but I was pleasantly surprised to hear a very humor-filled, entertaining talk offering a high-level view of debugging instead. By keeping the talk high-level, she presented us with a world where you don’t make assumptions and are methodical about finding solutions, but still leave a lot of room for creativity.

To conclude, I had a wonderful time at this conference. I also want to applaud the CodeConf LA team for presenting such a diverse program of speakers. I have a great appreciation for the variety of perspectives that such a diverse conference speaker lineup includes. It also proved yet again that you don’t need to “lower the bar” to have a diverse lineup. All the speakers were world-class.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157667666687824

by pleia2 at July 18, 2016 08:46 PM

Jono Bacon

Reducing Texting and Driving: An Idea

This weekend I dropped Erica off at the airport. Driving through San Francisco, we saw an inventive billboard designed to reduce texting and driving. Driver distraction is a big problem, with a 2012 study suggesting that over 3,000 deaths and 421,000 injuries were a result of distraction. I am pretty confident those shiny, always-connected cellphones are indeed a common distraction during a boring drive, or in times when you are anxious for information.

So anyway, we were driving past this billboard designed to reduce texting and driving, and it included an Apple messages icon with a message awaiting. It was similar to, but not the same as, this:

[Photo: a billboard showing a message-notification icon]

While these billboards are good to have, I suspect they are only effective when they go beyond advocating a behavior and actually trigger a real behavioral change. Rory Sutherland’s example of Scotland changing speeding signs from a number to an unhappy face is a prime case: instead of telling drivers to drive more slowly, they tapped into the psychology of initiating that behavioral change.

When I saw this sign, it actually had the opposite effect on me. Seeing the notification icon with a message waiting caused a cognitive discomfort that something needed checking, tending to, and completing. You guessed it: it made me actually want to check my phone.

The Psychology of Notifications

This got me thinking about the impact of notifications on our lives, and whether part of the reason people text and drive is not that they voluntarily pick up the phone and screw around with it, but instead that they are either (a) notified by audio, or (b) feel the notification itch to regularly check their phone for new notifications and then action them. Given how both Android and Apple phones display notifications on the lock screen, it is particularly easy to see a notification, action it by tapping on it and loading the app, and then potentially smash your car into a Taco Bell sign.

There is of course some psychology that supports this. Classical conditioning demonstrates that we can associate regularly exposed stimuli with key responses. As such, we could potentially come to associate time away from our computers, travel, or other cognitive contexts such as driving with thinking about our relationships and our work, and therefore feel the urge to use our phones. In addition to this, research in Florida demonstrated that any kind of audio notification fundamentally disrupts productivity and is thus distracting.

A Software Solution?

As such, it strikes me that a simple solution for reducing texting and driving could be to simply reduce notifications while driving.

For this to work, I think a solution would need to be:

  • Automatic – it detects when you are traveling and suitably disengages notifications.
  • Contextual – sometimes we are speeding along but not driving (such as taking a subway, or as a passenger in a car).
  • Incentivized – it is unlikely we can expect all phone makers to switch this on by default and not make it able to be disabled (nor should we). As such, we need to incentivize people to use a feature like this.

For the automatic piece, some kind of manual installation would likely be needed, but then the app could actively block notifications when it automatically detects that the phone is above a given speed threshold. This could be done via transition times between GPS waypoints and/or wifi hotspots (if in a database). If the app detects someone going faster than a given speed, it kicks in.
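
To make that concrete, here is a minimal sketch in Python (rather than any real mobile SDK; the fix format and the speed threshold are assumptions of mine) that estimates speed from two consecutive GPS fixes and decides whether notifications should be suppressed:

    import math

    EARTH_RADIUS_M = 6371000
    DRIVING_SPEED_MPS = 6.7  # roughly 15 mph; a hypothetical threshold

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two GPS coordinates.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def should_block_notifications(fix1, fix2):
        # Each fix is (lat, lon, unix_seconds); True if moving at driving speed.
        lat1, lon1, t1 = fix1
        lat2, lon2, t2 = fix2
        if t2 <= t1:
            return False
        speed_mps = haversine_m(lat1, lon1, lat2, lon2) / (t2 - t1)
        return speed_mps > DRIVING_SPEED_MPS

    # Two fixes roughly 200 m apart, taken 20 s apart -> ~10 m/s, so block.
    print(should_block_notifications((37.7700, -122.42, 0),
                                     (37.7718, -122.42, 20)))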

For the contextual piece, I am running thin on ideas for how to do this. One option could be to use the accelerometer to determine whether the phone is stationary (most people seem to put their phones in a cup holder or phone holder when they drive). If the accelerometer is wiggling around, it might suggest the person is a passenger with the phone on their lap, in a pocket, or in their hand. Another option could be an additional device that connects to the phone over Bluetooth and determines the position of the person in the car (e.g. a wrist-band, camera, sensor on the seat, or something else), but this would get away from the goal of it being automatic.
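
And a correspondingly rough sketch of the accelerometer idea (the threshold is invented for illustration): if the magnitude of recent accelerometer samples varies a lot, the phone is probably being handled rather than sitting in a cup holder or mount:

    from statistics import pstdev

    HANDLING_STDEV_THRESHOLD = 0.5  # m/s^2; a made-up cutoff

    def likely_in_hand(magnitudes):
        # magnitudes: recent accelerometer magnitude samples in m/s^2.
        # High variation suggests the phone is being held (maybe by a
        # passenger); low variation suggests it is mounted or resting.
        return pstdev(magnitudes) > HANDLING_STDEV_THRESHOLD

    # A phone resting in a mount reads close to gravity (~9.8) constantly.
    print(likely_in_hand([9.80, 9.81, 9.79, 9.80]))  # False
    # A phone in someone's hand wobbles around much more.
    print(likely_in_hand([9.2, 10.5, 8.7, 11.0]))    # True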

For the incentive piece, this is a critical component. With teenagers a common demographic, and thus first-time drivers, money could be an incentive. Lower insurance fees (particularly given how expensive teenagers are to insure), discounts/offers at stores teenagers care about (e.g. Hot Topic for the greebos out there), free food, and other ideas could all be incentives. For older drivers the same benefits could apply, just in a different context.

Conclusion

While putting up billboards to tell people to be responsible human beings is one tool for reducing accidents, we are better positioned than ever to use a mixture of technology and psychology to creatively influence behavior more effectively. I would love to work on something like this, but I don’t have the time, so I figured I would share the idea here as a means to inspire some discussion and ideas.

So, comments, feedback, and ideas welcome!

by Jono Bacon at July 18, 2016 03:29 PM

July 15, 2016

Jono Bacon

Scratch Community Manager Position Available

A while back Mako introduced me to Mitchel Resnick, LEGO Papert Professor of Learning Research and head of the Lifelong Kindergarten group at the MIT Media Lab. Mitchel is a tremendous human being; warm, passionate, and terribly creative in solving interesting problems.

Mitchel introduced me to some members of his team and the conversation was focused on how they can find a good community manager for the Scratch learning environment. For the cave-dwellers among you, Scratch is a wonderful platform for teaching kids programming and the core principles involved.

So, we discussed the role and I helped to shape the role description somewhat.

It is a really awesome and important opportunity, particularly if you are passionate about kids and technology. It is a role that is calling for a creative thinker to take Scratch to the next level and impact a whole new generation of kids and how they can build interesting things with computers. While some community managers focus a lot on the outreach pieces (blogging, social media, and events), I encourage those of you interested in this role to also think of it from a deeper perspective of workflow, building different types of community, active collaboration, and more.

Check out the role description here and apply. If you and I know each other, feel free to let them know this and I am happy to share with them more about you. Good luck!

by Jono Bacon at July 15, 2016 04:27 AM

July 08, 2016

Jono Bacon

Building a Safer Internet with HackerOne

Recently I started doing some work with HackerOne and I thought many of you would find it interesting enough for me to share.

A while back my friend Mårten Mickos joined HackerOne as CEO. Around that time we had lunch and he shared with me more about the company. Mårten has an impressive track record, and I could see why he was so passionate about his new gig.

The idea is pretty neat: HackerOne provides a service where companies (e.g. Uber, Slack, General Motors, and even The Pentagon) can run a bug bounty program that invites hackers to find security flaws in their products and services. The company specifies the scope of the program (e.g. which properties/apps), and hackers are encouraged to find and submit vulnerability reports. When a report is approved, the hacker is often issued a payment.

HackerOne is interesting for a few reasons. Firstly, it is helping to build a safer and more secure world. As we have seen in open source, crowdfunding, and crowdsourcing, a productive and enabled community can deliver great results and expand the scope of operations far beyond that of a single organization. This is such a logical fit when it comes to security as the potential attack surface is growing larger and larger every day as more of our lives move into a digital realm.

What I also love about HackerOne is the opportunity it opens up for those passionate about security. It provides a playground where hackers can safely explore vulnerabilities, report them responsibly, build experience and relationships with security teams at popular companies, and earn some money. Some hackers on HackerOne are earning significant amounts of money (some even doing this full-time), and some are just having a blast on evenings and weekends earning some extra cash while having fun hacking.

I am working with HackerOne on the community strategy and execution side and it has been interesting exploring the different elements of building an engaged community of hackers. One of the things I have learned over the years building communities is that every one is different, and that is very much the case for HackerOne.

Familiar Ground

More broadly, it is also interesting to see echoes of similar challenges that faced open source in the early days, but now applied to hacking. Back then the world was presented with the open source model in which anyone, anywhere, could contribute their skills and talents to improve software. Many organizations back then were pretty weirded out by this. They worried about their intellectual property, the impact on their customers, losing control, and how they would manage the PR.

[Photo: still from WarGames]

Believe it or not, WarGames is not a documentary.

In a similar way, HackerOne is presenting a model in which organizations can tap the talents of a distributed community of hackers. While some organizations will have similar concerns to the ones back in the early days of open source, I am confident we will traverse those. This will be great for the Internet, great for organizations, and great for hackers.

Get Involved

If you are a hacker, or a programmer who would like to learn about security and try your hand, go and sign up, then find a program, and submit a report.

If you are an existing HackerOne user, I would also love to hear your feedback, thoughts, and ideas about how we can build the very best community. Feel free to send me an email to jono@hackerone.com – let’s build a powerful, engaged, global community that is making the world more secure and making hackers more successful.

by Jono Bacon at July 08, 2016 05:42 PM

July 06, 2016

Akkana Peck

GIMP at Texas LinuxFest

I'll be at Texas LinuxFest in Austin, Texas this weekend. Friday, July 8 is the big day for open source imaging: first a morning Photo Walk led by Pat David, from 9-11, after which Pat, an active GIMP contributor and the driving force behind the PIXLS.US website and discussion forums, gives a talk on "Open Source Photography Tools". Then after lunch I'll give a GIMP tutorial. We may also have a Graphics Hackathon/Q&A session to discuss all the open-source graphics tools in the last slot of the day, but that part is still tentative. I'm hoping we can get some good discussion especially among the people who go on the photo walk.

Lots of interesting-looking talks on Saturday, too. I've never been to Texas LinuxFest before: it's a short conference, just two days, but they're packing a lot into those two days and it looks like it'll be a lot of fun.

July 06, 2016 12:37 AM

July 04, 2016

Elizabeth Krumbach

Simcoe’s March and June Checkups

I missed a checkup post! Looking back to January, she had been prescribed Atopica for some scabbing that kept occurring around her eyes, and she continues to be on that. Since starting that daily pill, she has had only one very mild breakout of scabbing around her eyes, and it cleared up quickly after we bumped up the dose.

We also started giving her an appetite stimulant to get her to eat more and put on some weight. It’s working so far, though she still isn’t the biggest eater, so I think the pill bothers her because she has to eat a lot. We’re planning to switch over to a lower dose that she can take a bit more often to even out her eating schedule. She’s also been on Calcitriol, an active form of Vitamin D (info about usage in renal failure cats here). This spring she also suffered a UTI, which is pretty common in renal failure felines, though we’d gotten lucky so far. Thankfully a batch of antibiotics knocked it out without much trouble and it hasn’t returned. Finally, I mentioned in January that she’d been suffering from constipation. That has continued, and the dermatologist assured us it is unrelated to any of her medication. Since it hasn’t subsided, we’re now giving both cats a small dollop of wet K/D food every night, with some fiber mixed into Simcoe’s. When she’s not stubborn about eating it, it seems to be doing the trick.

Levels! First up, her weight. In January she was at 8.8 lbs. In March she dropped to the lowest she’s been, 8.3. By her appointment on June 29th she was up a bit, to 8.4. Keeping her at a healthy weight is incredibly important; hopefully the new appetite stimulant regimen will continue to help with that.

[Chart: weight (lbs)]

Her BUN dipped a bit in March, going to 71 from 85. In June it had risen again, now sitting at 100. As levels go, the vet seems to be less concerned about this one, looking more at her CRE levels.

[Chart: BUN]

…which are also continuing to rise. 4.6 in January, 5.1 in March and now at 5.5. As expected, this is simply the renal failure continuing to progress like we always knew it would.

[Chart: CRE]

As always we’re enjoying our time together and making sure she’s continuing to live a healthy, active life. She certainly doesn’t care for all the traveling I do, including during the last vet appointment (MJ ended up taking her). I am home most of the time though since I work from home, so I can keep an eye on her and spend lots of quality time together.

Simcoe with plants

by pleia2 at July 04, 2016 02:51 AM

July 03, 2016

Elizabeth Krumbach

Tourist in Los Angeles

I’ve been to Los Angeles several times for the Southern California Linux Expo, but the first few trips only took me to the LAX airport area, and without a car I wasn’t venturing too far beyond it. This year it wasn’t even in Los Angeles, having moved over to nearby Pasadena (good move!).

At CodeConf this past week my experience finally changed! The event took place in the heart of Hollywood, and my nearby hotel was a lovely jumping off point for my Hollywood adventures.

Sunday morning I flew down to Burbank airport on a little regional jet (a CRJ-200), putting me at my hotel around 10AM. I stashed my suitcase at the hotel and grabbed an Uber over to my first tourist stop, the Griffith Observatory. I’ve seen it in movies and shows, most recently as MJ and I made our way through the Star Trek Voyager series, but I didn’t know a whole lot about it as a place. It turns out that it is actually a public observatory, built specifically for the public to use. It was built in the early 20th century at Griffith J. Griffith’s direction, after he saw how life-changing seeing the sky through a telescope could be and wanted to share this experience with everyone. I really enjoyed the free showing of the observatory’s history in the new Leonard Nimoy Event Horizon Theater. In the movie you learn that, in addition to the original structure that we enjoy today, in the early 2000s they shut down the entire observatory to do a multimillion-dollar restoration of the interior and built the underground addition that houses the theater and a massive new exhibit space. It’s pretty astonishing that they were able to do such a change with virtually no alteration to the original structure or the look of the place from the outside.

The observatory also has a planetarium, where during my three hour visit I was able to get in a couple shows, Water is Life and Centered in the Universe, both of which I’d recommend seeing.

The hill the observatory is perched upon also offered great views of the Hollywood sign, so I was able to get my obligatory Hollywood sign photos out of the way early in my adventures. I really missed MJ on this observatory visit, I think it would have been a great place to explore together. I sent him a postcard to help share the experience, even if it was just a little bit.

I swung by my hotel to check into my room, where I snagged a corner room that offered views of the Hollywood sign, the Capitol Records building, and the Pantages Theatre. Quite nice! I could also see one of the three Dunkin’ Donuts in California from my room, but I suppose that’s not quite as noteworthy unless you’re me. Yes, I did get some coffee and donuts during my stay. OK, I got more than “some” coffee. I drank more iced coffee this past week than I have in years.

My day continued with a trip to the TCL Chinese Theatre. I decided to pay for a VIP tour ($15) and also see Independence Day: Resurgence in the classic theater, fitted with the third largest IMAX screen in North America ($22.75). I’ll say right off the bat that the tour isn’t worth it if you’re going to see a movie in that theater anyway. Half the tour was reading the labels of clothing displayed in the lobby, and the rest walked us through common areas while we were told rather droll facts about the theater that are easy to find online. Since I had access to the theater with my movie ticket anyway, it wasn’t a very good use of my time or money.

Seeing a movie in that theater is totally worth doing, though. It’s the most famous movie theater in the world, and the screen and sound system were great, something I was initially skeptical about given the theater’s age. The curtains that cover the screen are beautiful, faithful reproductions of the long-worn originals, and always novel to see in a movie theater. The movie itself? It was pretty silly, but if you’re going to see it, IMAX is the way to get the full level of enjoyment out of it. I joked that I was going to see a ridiculous movie in a ridiculous movie theater. It all felt appropriate.

After the movie I walked down Hollywood Boulevard for about a mile to get back to my hotel. Along the way I passed through some hyper-touristy areas, with wax museums, people dressed up as various characters for photos, and tourist goodie shops selling t-shirts, magnets, and the like. The stars along the sidewalks are worth seeing, but a single walk through the area was plenty for me. There’s also lots of great food around. Los Angeles is famous for fresh sushi, and I managed to get some before I left on Wednesday night.

The conference took up the rest of my week, except that I did sneak out on Tuesday evening for an event I’d been waiting months for AND pledged on a Kickstarter for: the MST3K reunion show! I picked up tickets on Fandango for a theater in downtown LA. It was a shame to go alone, and I missed my San Francisco MSTies, but I’m glad I was able to make time for it in spite of being away from home; it was a lot of fun.

In all, I enjoyed Los Angeles on this trip. I’m glad I was finally able to make it beyond a conference venue, the city has a lot to offer. Next time I’ll have to check out the zoo.

More pictures from my adventures here: https://www.flickr.com/photos/pleia2/sets/72157669821167402/

by pleia2 at July 03, 2016 10:15 PM

Akkana Peck

Midsummer Nature Notes from Traveling

A few unusual nature observations noticed over the last few weeks ...

First, on a trip to Washington DC a week ago (my first time there). For me, the big highlight of the trip was my first view of fireflies -- bright green ones, lighting once or twice then flying away, congregating over every park, lawn or patch of damp grass. What fun!

Predatory grackle

[grackle]

But the unusual observation was around mid-day, on the lawn near the Lincoln Memorial. A grackle caught my attention as it flashed by me -- a male common grackle, I think (at least, it was glossy black, relatively small and with only a moderately long tail).

It turned out it was chasing a sparrow, which was dodging and trying to evade, but unsuccessfully. The grackle made contact, and the sparrow faltered, started to flutter to the ground. But the sparrow recovered and took off in another direction, the grackle still hot on its tail. The grackle made contact again, and again the sparrow recovered and kept flying. But the third hit was harder than the other two, and the sparrow went down maybe fifteen or twenty feet away from me, with the grackle on top of it.

The grackle mantled over its prey like a hawk and looked like it was ready to begin eating. I still couldn't quite believe what I'd seen, so I stepped out toward the spot, figuring I'd scare the grackle away and I'd see if the sparrow was really dead. But the grackle had its eye on me, and before I'd taken three steps, it picked up the sparrow in its bill and flew off with it.

I never knew grackles were predatory, much less capable of killing other birds on the wing and flying off with them. But a web search on grackles killing birds got quite a few hits about grackles killing and eating house sparrows, so apparently it's not uncommon.

Daytime swarm of nighthawks

Then, on a road trip to visit friends in Colorado, we had to drive carefully past the eastern slope of San Antonio Mountain as a flock of birds wheeled and dove across the road. From a distance it looked like a flock of swallows, but as we got closer we realized they were far larger. They turned out to be nighthawks -- at least fifty of them, probably considerably more. I've heard of flocks of nighthawks swarming around the bugs attracted to parking lot streetlights. And I've seen a single nighthawk, or occasionally two, hawking in the evenings from my window at home. But I've never seen a flock of nighthawks during the day like this. An amazing sight as they swoop past, just feet from the car's windshield.

Flying ants

[Flying ant courtesy of Jen Macke]

Finally, the flying ants. The stuff of a bad science fiction movie! Well, maybe if the ants were 100 times larger. For now, just an interesting view of the natural world.

Just a few days ago, Jennifer Macke wrote a fascinating article in the PEEC Blog, "Ants Take Wing!" letting everyone know that this is the time of year for ants to grow wings and fly. (Jen also showed me some winged lawn ants in the PEEC ant colony when I was there the day before the article came out.) Both males and females grow wings; they mate in the air, and then the newly impregnated females fly off, find a location, shed their wings (leaving a wing scar you can see if you have a strong enough magnifying glass) and become the queen of a new ant colony.

And yesterday morning, as Dave and I looked out the window, we saw something swarming right below the garden. I grabbed a magnifying lens and rushed out to take a look at the ones emerging from the ground, and sure enough, they were ants. I saw only black ants. Our native harvester ants -- which I know to be common in our yard, since I've seen the telltale anthills surrounded by a large bare area where they clear out all vegetation -- have sexes of different colors (at least when they're flying): females are red, males are black. These flying ants were about the size of harvester ants but all the ants I saw were black. I retreated to the house and watched the flights with binoculars, hoping to see mating, but all the flyers I saw seemed intent on dispersing. Either these were not harvester ants, or the females come out at a different time from the males. Alas, we had an appointment and had to leave so I wasn't able to monitor them to check for red ants. But in a few days I'll be watching for ants that have lost their wings ... and if I find any, I'll try to identify queens.

July 03, 2016 03:28 PM

July 02, 2016

Elizabeth Krumbach

Ubuntu 16.04 at FeltonLUG and the rest of California

On Saturday, June 25th my husband and I made our way south to Felton, California so I could give a presentation to the Felton Linux Users Group on Ubuntu 16.04.

I brought along my demo systems:

  • Lenovo G575 running Ubuntu 16.04, which I presented from
  • Dell mini9 running Xubuntu 16.04
  • Nexus 7 2013 running Ubuntu OTA-11
  • bq Aquaris M10 running Ubuntu OTA-11

All of these were pristine systems, with no personal data loaded on them. The Nexus 7 took some prep, though: I had to swing by #ubuntu-touch on freenode to get some help with re-flashing it after it got stuck on a version from February and wouldn’t upgrade beyond that in the UI. Thanks to popey for being so responsive there and helping me out.

The presentation was pretty straightforward. I walked attendees through screenshots and basic updates of the flavors, and then dove into a variety of changes in the 16.04 release of Ubuntu itself, including the disabling of Amazon search by default, the replacement of the Ubuntu Software Center with GNOME Software, the replacement of Upstart with systemd (new since the last LTS release), the ability to move the Unity launcher to the bottom of the screen, the inclusion of ZFS, and the introduction of Ubuntu Snappy.

Slides from my presentation are available for other folks to use as they see fit (but you probably want to introduce yourself, rather than me!): feltonlug_ubuntu_1604.pdf (3.1M), feltonlug_ubuntu_1604.odp (5.4M). If you’d like a smaller version of this slide deck, drop me a message at lyz@ubuntu.com and I’ll send you one without all the flavor screenshots.

After the presentation portion of the event, I answered questions and gave folks the opportunity to play with the laptops and tablets I brought along. About half the meeting was spent casually chatting with attendees about their experiences and their plans to debug and flash the Ubuntu image onto supported tablets.

Huge thanks to the group for being the welcoming crowd they always are, and Bob Lewis for inviting me down.

I’ll continue my presentation roadshow through July, presenting on Ubuntu 16.04 at the following Bay Area groups and events where I’m also bringing along Ubuntu pens, stickers and other goodies:

Bonus: At the release party in San Francisco I’ll also have copies of The Official Ubuntu Book, 9th Edition, which I’ll be signing and giving away!

Looking forward to these events, it should be a nice adventure around the bay area.

by pleia2 at July 02, 2016 12:30 AM

June 30, 2016

Elizabeth Krumbach

Family, moose, beer and cryptids

Our trip to Maine over Memorial Day weekend was quite the packed one. I wrote already about the trains, but we also squeezed in a brewery tour, a trip to a museum, a wildlife park visit and more.

We took an overnight (red eye) flight across the country to arrive in New Hampshire and drive up to Maine on Thursday morning. We had to adjust our plans away from the trolley museum when we learned it hadn’t opened yet, so we instead drove up to Portland to stop by one of my favorite breweries for a tour and tasting, Allagash Brewing. As a lover of Belgian style ales, I discovered Allagash in Pennsylvania several years ago, starting with their standard White and quickly falling in love with the Curieux. I left Maine before I could drink, and their tasting room and tour didn’t open until long after I moved away, so this was my first opportunity to visit. Now, you can drop by for a tasting flight at any time, but you have to reserve tour tickets in advance. It being a weekday was a huge help here, I was able to grab some of the last tickets for early in the afternoon as we drove up from Kennebunkport.

Our arrival coincided with lunchtime, and since they can’t serve food in their tasting room, they have a mutually beneficial relationship with a food truck called Mothah Truckah, which sits in their parking lot on days they’re open and serves delicious sandwiches. We ordered our sandwiches and wandered inside to eat them with their house beer. The tour itself was your typical brewery tour, with brewery history and tidbits throughout about what makes this brewery unique.

Having been party to hop growing and beer home brewing back when I lived in Pennsylvania, I’m quite familiar with the process, but I’m really interested to learn how breweries differ. Allagash does a huge business in kegs, with something like 70% of their beer ending up in kegs that are shipped to bars all over the country. The rest goes into one of two main bottling lines: the first handles all their standard beers, and the second their sours, which due to their nature require some special handling so they don’t contaminate each other. After seeing the keg and bottling lines, we went into their aging building, where we had a series of other brews to taste: White, Saison, Little Brett, and Golden Brett. I like Saisons a lot, and the Bretts trended toward the sour; I strongly preferred the Golden. We purchased a bottle of the Golden Brett and Uncommon Crow after the tour. An Allagash bottle opener keychain also came home with me. More photos from the brewery tour are here.

This pretty much took up our afternoon. From there we stopped by the grocery store to pre-order a Spiderman cake for my nephew’s birthday on Saturday and then checked into The Westin in Portland. After a couple of snafus with the room choice, we were finally put into a room with a beautiful view of the Portland Art Museum and the harbor. We rested for a bit and then we went out in search of my lobster! We ended up at the locals-friendly J’s Oyster, where I was able to order my steamed clams (steamers) and lobster, plus watch the Penguins win the game that put them in the Stanley Cup finals with our beloved Sharks. I slept well that night.

The next morning I let MJ sleep in while I made my way over to the International Cryptozoology Museum. It’s an interesting place for such a museum, but Maine is where museum founder and Cryptozoology legend Loren Coleman lives, so there we were. As I walked into the museum I was immediately met by Loren, who happily obliged my request for a photo together (he gets this a lot). He also signed some books for me, one of which went directly in the mail to one of my fellow cryptid lovers.

I keep talking about cryptids and cryptozoology. Cryptozoology is the search for creatures whose existence has not been proven due to lack of evidence, and cryptids are what we call these creatures. Think the Loch Ness Monster and the various incarnations of Bigfoot. The field has a coelacanth as a mascot, since the coelacanth was thought long extinct until its modern existence was confirmed pretty recently. The okapi also tends to show up a lot in their literature, being probably the last large mammal to be confirmed by science. To be strictly honest with myself, it’s a pseudoscience and I’m a skeptic. Like many skeptics I like to see my ideas challenged, and if I were the less skeptical type, I totally would be out there in the woods searching for Bigfoot. I can’t, but I want to believe. The museum itself was an important visit for me: a variety of casts of Bigfoot feet, and lots of kitsch and memorabilia from various cryptids, with Nessie being one of my favorites. They also had exhibits showcasing some of the lesser known and more local cryptids. I think these smaller exhibits were my favorite, since they walked the fine line between seriousness and self-deprecation on the part of cryptid seekers. With the “head of a moose, the body of a man and the wings and feet of an eagle,” I’m not sure most people could honestly say they believe that the Pamola actually exists as an animal you may encounter.

The visit to this museum was definitely a memorable highlight of my trip, and I’m glad I was able to visit it before they moved to their new location. The museum is closed for a few weeks this summer to do the move; I made it just in time! More photos from the museum here.

After my morning cryptid adventures, we made our way over to Fort Williams State Park, home of the very famous Portland Head Light. Our reason for going was not to see the lighthouse, though; we wanted lobster rolls at Bite into Maine. They remain my favorite lobster rolls. Going here is kind of a pilgrimage now.

After lunch we drove up to Freeport to meet up with my family and do a bit of shopping at L.L. Bean. We met up with my mother, youngest sister and my nephew. With my nephew I got my first glimpse at a moose! A stuffed moose.

We had dinner together at Jameson Tavern, where I got a beloved slice of blueberry pie, a la mode. We then swung by the L.L. Bean outlet and did one last stop at the main retail store, where I ordered a snazzy new travel pouch for toiletries.

As I wrote about previously, we spent Saturday morning at the Seashore Trolley Museum. Afterwards we swung by the bakery to pick up the cake we had ordered and drove to my sister’s place. The evening was spent with pizza, cake and birthday presents! It’s hard to believe my nephew is almost four already.

Sunday was moose day. This trip marked MJ’s second visit to Maine. I’d always told him tongue-in-cheek stories about all the moose in Maine, and the first time we visited the only moose he saw were stuffed ones at L.L. Bean. This time I was determined to show him a real, live moose! Alas, unless you go up to some of the northern or western parts of the state, they are actually pretty rare. In the 15 years I spent in Maine in my youth I could probably count my moose encounters on my hands.

Instead I “discovered” the Maine Wildlife Park. I put discovered in quotes because my mother informed me that I had actually been there as a child. Oh. She did say that it has changed a lot since then, so going again was a different adventure even for her. We met up with my mother, sister and nephew for lunch and then made our way out to the park in the early afternoon.

The park has improved enclosures for the animals, in keeping with the modernization of many such facilities. They also specialize in caring for wild animals and keeping the ones that can’t survive in the wild, writing: “Many of the animals at the Maine Wildlife Park were brought here because they were injured or orphaned, or because they were human dependent – raised, sometimes illegally, in captivity.” The collection of local animals is worth seeing. In addition to my lovely moose, their most popular exhibit, they have a pair of black bears, several eagles, mountain lions and more. Plus, it was a great place to take my nephew, who switched pretty often between riding in his stroller and running around to see the next animal.

And I got my moose selfie:

More photos from the Maine Wildlife Park here.

That evening MJ and I enjoyed dinner at Congress Squared at the hotel, and drinks upstairs in their Top of the East bar. With dinner we got to have some fried fiddleheads. So Maine!

Monday was Memorial Day, and that morning we met my family in Portland and went to the Narrow Gauge Railroad, which I already wrote about. The afternoon was spent getting some more lobster rolls and taking pictures throughout Cape Elizabeth, my home town. We rounded out the day with a visit to my old neighborhood, and even stopped for ice cream at the ice cream shop I frequented as a youth.

Monday concluded with MJ and me having another quiet evening out together, this time going to Eventide Oyster Co. in Portland, not too far from our hotel. As much as I love the Pacific, and living in San Francisco, I still prefer east coast oysters. It was a nice opportunity to sample a larger variety than I’ve had before. The rest of the meal was a couple of small plates and cocktails, but it was plenty after that late afternoon ice cream we indulged in.

Tuesday I saw MJ off, as he needed to return home, and my mother picked me up at the hotel when I checked out. We spent some time walking around downtown Portland, drifted into some book shops and had some lunch. In the mid afternoon we drove up to her place, where I got to see all her kitties! She has… several cats.

Eventually we went over to my sister’s place, where I’d be staying for the rest of the week. On the way she took me to a tractor supply store, where I marveled at all the country things (“raise your own chickens!”) and realized I’d turned into a city slicker. Hah! I was pretty out of my element.

I spent Wednesday through Friday working from my sister’s couch. My nephew went to his school program in the morning and my sister kept herself busy. I had to work late on Friday as we handled a maintenance window, but otherwise it all worked out. Working from there allowed them to not feel the need to keep me entertained, and I didn’t have to miss much work for my visit. The evenings I spent hanging out with my sister and mother, watching movies, drinking some adult root beer. It was nice to spend time with them.

On Thursday night my mother’s boyfriend took the three of us out to The Red Barn in Augusta. I’d never been to this place before, but they had top notch whole fried clams. Yummy!

Working from my sister’s couch and looking out her window at the forest view was also a nice change of pace. With just some finishing touches needed on my book, I had reached a place where I could finally relax. Being in such a quiet place helped me transition into a more peaceful spot.

Saturday was my flight day. I had planned a whale watching tour with my mother, but after we woke up at 7AM and left before 8, the tour company called at 9:15 to cancel our 10AM tour! I was terribly disappointed. I’d never been on a whale watching tour, and with how much both my mother and I love animals it seemed like a perfect way to spend the day together. Since we were already so far down south, we made a detour to the Old Orchard Beach area, where we spent the morning walking around the seaside shops, walking barefoot in the warm sands (it was over 80 degrees out!) and visiting the beautiful historic carousel they have there. We had lunch at Bugaboo Creek steakhouse, and then killed time at the Maine Mall, where I picked up both Star Wars and Star Trek pajama pants, much to my delight.

It was then time for my flight out of the little Portland jetport. Connecting through Philadelphia, I had an easy time getting home, made even easier with a pair of complimentary upgrades!

This trip was a very busy one, but it was a special one for me. Between my travel and work schedule, and splitting time with friends and family in Philadelphia, I don’t have the opportunity to spend a lot of time with my family in Maine. It was also nice to play the tourist, which I hadn’t felt super comfortable doing until this trip. I finally don’t have anxiety about visiting my home town, and can appreciate it all for the beautiful place it is.

More photos from my trip, including some lighthouses, ocean views and our beach morning in Old Orchard, are here.

by pleia2 at June 30, 2016 04:56 AM

June 26, 2016

Akkana Peck

How to un-deny a host blocked by denyhosts

We had a little crisis Friday when our server suddenly stopped accepting ssh connections.

The problem turned out to be denyhosts, a program that looks for things like failed login attempts and blacklists IP addresses.

But why was our own IP blacklisted? It was apparently because I'd been experimenting with a program called mailsync, which used to be a useful program for synchronizing IMAP folders with local mail folders. But at least on Debian, it has broken in a fairly serious way, so that it makes three or four tries with the wrong password before it actually uses the right one that you've configured in .mailsync. These failed logins are a good way to get yourself blacklisted, and there doesn't seem to be any way to fix mailsync or the c-client library it uses under the covers.

Okay, so first, stop using mailsync. But then how to get our IP off the server's blacklist? Just editing /etc/hosts.deny didn't do it -- the IP reappeared there a few minutes later.

A web search found lots of solutions -- you have to edit a long list of files, but no two articles had the same file list. It appears that it's safest to remove the IP from every file in /var/lib/denyhosts.

So here are the step-by-step instructions.

First, shut off the denyhosts service:

service denyhosts stop

Go to /var/lib/denyhosts/ and grep for any file that includes your IP:

grep aa.bb.cc.dd *

(If you aren't sure what your IP is as far as the outside world is concerned, Googling what's my IP will helpfully tell you, as well as giving you a list of other sites that will also tell you.)
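If you'd rather stay in the terminal, one of the many "what's my IP" web services works too; for example (ifconfig.me is one such service, queried here with curl):

curl ifconfig.me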

Then edit each of these files in turn, removing your IP from them (it will probably be at the end of the file).
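If the IP turns up in a lot of files, a little shell can save some hand-editing. A minimal sketch, assuming GNU sed (substitute your real address for aa.bb.cc.dd):

cd /var/lib/denyhosts
# delete every line mentioning the IP from each file that contains it
for f in $(grep -l 'aa\.bb\.cc\.dd' *); do
    sed -i '/aa\.bb\.cc\.dd/d' "$f"
done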

When you're done with that, you have one more file to edit: remove your IP from the end of /etc/hosts.deny.

You may also want to add your IP to /etc/hosts.allow, but it may not make much difference, and if you're on a dynamic IP it might be a bad idea since that IP will eventually be used by someone else.
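Before restarting the service, a quick check that nothing was missed doesn't hurt:

# look for any remaining occurrences; no output means you're clean
grep -r 'aa\.bb\.cc\.dd' /var/lib/denyhosts /etc/hosts.deny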

Finally, you're ready to re-start denyhosts:

service denyhosts start

Whew, un-blocked. And stay away from mailsync. I wish I knew of a program that actually worked to keep IMAP and mbox mailboxes in sync.

June 26, 2016 06:59 PM

June 24, 2016

Elizabeth Krumbach

Trains in Maine

I grew up just outside of Portland, Maine. About 45 minutes south of there is the Seashore Trolley Museum. I went several times as a kid, having been quite the little rail fan. But it wasn’t until I moved to San Francisco that I really picked up my love for rails again, with all the historic transit here in the city. With my new love for San Francisco streetcars, I made plans during our last trip back to visit the beloved trolley museum of my youth.

I’ll pause for a moment now to talk about terminology. Here in San Francisco we call that colorful fleet of cars that ride down Market and along the Embarcadero “streetcars,” but in Maine, and in various other parts of the world, they’re known as “trolleys” instead. I don’t know why this distinction exists, and both terms are pretty broad, so a dictionary is no help here. Since I was visiting the trolley museum, I’ll be referring to the ones I saw there as trolleys.

Before my trip I became a member of the museum, which gave us free entrance to the museum and a discount at the gift shop. We had originally intended to go to the museum upon arrival in Maine on the 26th of May, but learned when we showed up that they hadn’t opened on weekdays yet since it was still before Memorial Day. Whoops! We adjusted our plans and went back on Saturday.

Saturday was a hot day, but not intolerable. We had a little time to kill before the next trolley was leaving, so we made our way over to the Burton B Shaw South Boston Car House to start checking out some of the trolleys they had on display. These were pretty far into rust territory, and it was the smallest barn of them all, but I was delighted to find one of their double deckers inside. The streetcar lines in San Francisco don’t have the electric overhead infrastructure to support these cars, so it was a real treat for me. Later in the day we also saw another double decker that I was actually able to go up inside!

It was then time to board! With the windows open on the Boston 5821 trolley we enjoyed a nice loop around the property. The car itself was unfamiliar to me, but here in San Francisco we have the 1059, a PCC that is painted in honor of the Boston Elevated Railway so I was familiar with the transit company and livery. During the ride around the loop we had a pair of very New England tour guides who enjoyed bantering (think Car Talk). I caught a video of a segment of our trolley car ride. Riding through the beautiful green woods of Maine is certainly a different experience than the downtown streets of San Francisco that I’m used to!

On this ride I learned that many of the early amusement parks were created by the rail companies in an effort to increase ridership on Sundays, and transit companies in Maine were no exception. We also stopped by a rock formation with evidence of how builders split rocks to make way for the railroad tracks: water poured into the stone froze and expanded over the winter, cracking it apart. The rocks were then crushed and used to help build the foundation of the tracks. The route from Biddeford to Kennebunkport, which included the tracks we rode on, slopes downhill toward the south, so we also heard tales of the electricity being shut off at midnight and the last train of the day sometimes relying on speeding up near midnight and coasting the rest of the way to the final station. I think the jury is out on how much exaggeration is to be expected in stories like this.


5821, Boston Elevated Railway

After the loop, we were met by a tour guide who took us around the other two transit barns that they have on the property. For most of the tour I popped ahead of the tour group to take photos, while staying within auditory range to hear what he had to say. I think this explains the 250+ pictures I took throughout the day. The barns had trolleys going at least 4 deep, in 3-4 rows. They had cars from all over the world, ranging from a stunning open top car from Montreal to that double decker from Glasgow that I got to go up to the top of. Some of the trolleys had really stunning interiors, like the Liberty Bell Limited from Philadelphia; I wouldn’t mind riding in one of those! They also had a handful of other trains that weren’t passenger trolleys, like a snow sweeper from Ottawa and a very familiar cable car from San Francisco.

Our walk around the property concluded with a visit to the restoration shop where they do work on the trolleys. Inside we saw some of the trolley skeletons and a bunch of the tools and machines they use to do work on the cars.

As you may expect, I had a blast. They have an impressive assortment of trolleys, and I enjoyed learning about them and taking pictures. The museum also has a small assortment of vintage buses and train cars from various transit agencies, with a strong bias toward Boston. It was fun to see some trains that looked eerily similar to the BART trains that still run here in the bay area, along with some of Philadelphia’s SEPTA trains. I even caught a glimpse of a SEPTA PCC trolley with livery that was somewhat modern, but it was under a cover and likely not yet restored.

The icing on the cake was their gift shop. I picked up a book for my nephew, along with my standard “tourist stuff” shot glass and magnet. The real gems were the model trains. I selected a couple of toys that will accompany the others I have from Philadelphia and San Francisco on the standard wooden track that many children have. The adult model trains are where my heart was; I was able to get one of the F-Line train models (1063 Baltimore) that I didn’t have yet, along with a much larger (1:48 scale) and more impressive 2352 Connecticut Company, Destination Middletown, Birney Safety Car. I’ll be happy when I finally have a place to display all of these, but for now my little F-Line cars are hanging out on top of my second monitor.

As I mentioned, I took a lot of photos during our adventure, a whole bunch more can be browsed in an album on Flickr, and I do recommend it if you’re interested! https://www.flickr.com/photos/pleia2/albums/72157669023849545

My visit to Maine was also to visit family and as I was making plans I tried to figure out things that would be fun, but not too tiring for my nearly four year old nephew. The Seashore Trolley Museum will be great when he’s a bit older, but could I sneak in a different train trip that would be more his speed? Absolutely! The Narrow Gauge Railroad Museum in Portland, Maine was perfect.

The train ride itself takes about 40 minutes total, and takes you on a 1.5 mile (3 miles round trip) voyage along Portland Harbor. This meant it was about 15 minutes each way, with a stop at the end of the line for about 10 minutes for the engine to detach and re-attach to the other side of the train. I took a video of the reattachment, which took a few tries that day. The timing was perfect for someone so young, and I was delighted to see how much he enjoyed the ride.

I enjoyed it too; it was a beautiful spring day, and Portland Harbor is a lovely place to ride a train along.

We spent about a half hour in the small accompanying museum. Narrow gauge is a broad term for a variety of gauges, and I learned the one that ran there in Portland had a 2 foot gauge. As I understand it, wider gauges tend to make for a smoother ride, and though these trains were very clearly passenger trains (and vintage ones at that), the ride was a bumpy one. They had a couple other passenger and freight cars in the museum, and my nephew enjoyed playing with some of the train toys.

I hadn’t really intended for this trip to Maine to be so train-heavy, but I’m glad we were able to take advantage of the stunning weather and make it so! More photos from the Narrow Gauge Railroad, including things like the telegraph and inside of the cars they had on display are here: https://www.flickr.com/photos/pleia2/sets/72157669122685275

by pleia2 at June 24, 2016 03:57 AM

June 20, 2016

Jono Bacon

Announcing Jono Bacon Consulting

A little while back I shared that I decided to leave GitHub. Firstly, thanks to all of you for your incredible support. I am blessed to have such wonderful people in my life.

Since that post I have been rather quiet about what my next adventure is going to be, and some of the speculation has been rather amusing. Now I am finally ready to share more details.

In a nutshell, I have started a new consultancy practice to provide community management, innersourcing, developer workflow/relations, and other related services. To keep things simple right now, this new practice is called Jono Bacon Consulting (original, eh?).

As some of you know, I have actually been providing community strategy and management consultancy for quite some time. Previously I have worked with organizations such as Deutsche Bank, Sony Mobile, ON.LAB, Open Networking Foundation, Intel and others. I am also an active advisor for organizations such as AlienVault, Open Networking Foundation, Open Cloud Consortium, Mycroft AI and I also advise some startup accelerators.

I have always loved this kind of work. My wider career ambitions have always been to help organizations build great communities and to further the wider art and science of collaboration and community development. I love the experience and insight I gain with each new client.

When I made the decision to move on from GitHub I was fortunate to have some compelling options on the table for new roles. After spending some time thinking about what I love doing and these wider ambitions, it became clear that consulting was the right step forward. I would have shared this news earlier but I have already been busy traveling and working with clients. 😉

I am really excited about this new chapter. While I feel I have a lot I can offer my clients today, I am looking forward to continuing to broaden my knowledge and expertise, and the diversity of my community strategy and leadership work. I am also excited to share these learnings with you all in my writing, presentations, and elsewhere. This has always been a journey, and each new road opens up interesting new questions and potential, and I am thirsty to discover and explore more.

So, if you are interested in building a community, either inside or outside (or both) of your organization, feel free to discover more and get in touch, and we can talk.

by Jono Bacon at June 20, 2016 02:45 PM

June 18, 2016

Akkana Peck

Cave 6" as a Quick-Look Scope

I haven't had a chance to do much astronomy since moving to New Mexico, despite the stunning dark skies. For one thing, those stunning dark skies are often covered with clouds -- New Mexico's dramatic skyscapes can go from clear to windy to cloudy to hail or thunderstorms and back to clear and hot over the course of a few hours. Gorgeous to watch, but distracting for astronomy, and particularly bad if you want to plan ahead and observe on a particular night. The Pajarito Astronomers' monthly star parties are often clouded or rained out, as was the PEEC Nature Center's moon-and-planets star party last week.

That sort of uncertainty means that the best bet is a so-called "quick-look scope": one that sits by the door, ready to be hauled out if the sky is clear and you have the urge. Usually that means some kind of tiny refractor; but it can also mean leaving a heavy mount permanently set up (with a cover to protect it from those thunderstorms) so it's easy to carry out a telescope tube and plunk it on the mount.

I have just that sort of scope sitting in our shed: an old, dusty Cave Astrola 6" Newtonian on an equatorial mount. My father got it for me on my 12th birthday. Where he got the money for such a princely gift -- we didn't have much in those days -- I never knew, but I cherished that telescope, and for years spent most of my nights in the backyard peering through the Los Angeles smog.

Eventually I hooked up with older astronomers (alas, my father had passed away) and cadged rides to star parties out in the Mojave desert. Fortunately for me, parenting standards back then allowed a lot more freedom, and my mother was a good judge of character and let me go. I wonder if there are any parents today who would let their daughter go off to the desert with a bunch of strange men? Even back then, she told me later, some of her friends ribbed her -- "Oh, 'astronomy'. Suuuuuure. They're probably all off doing drugs in the desert." I'm so lucky that my mom trusted me (and her own sense of the guys in the local astronomy club) more than her friends.

The Cave has followed me through quite a few moves, heavy, bulky and old fashioned as it is; even when I had scopes that were bigger, or more portable, I kept it for the sentimental value. But I hadn't actually set it up in years. Last week, I assembled the heavy mount and set it up on a clear spot in the yard. I dusted off the scope, cleaned the primary mirror and collimated everything, replaced the finder which had fallen out somewhere along the way, set it up ... and waited for a break in the clouds.

[Hyginus Rille by Michael Karrer] I'm happy to say that the optics are still excellent. As I write this (to be posted later), I just came in from beautiful views of Hyginus Rille and the Alpine Valley on the moon. On Jupiter the Great Red Spot was just rotating out. Mars, a couple of weeks before opposition, is still behind a cloud (yes, there are plenty of clouds). And now the clouds have covered the moon and Jupiter as well. Meanwhile, while I wait for a clear view of Mars, a bat makes frenetic passes overhead, and something in the junipers next to my observing spot is making rhythmic crunch, crunch, crunch sounds. A rabbit chewing something tough? Or just something rustling in the bushes?

I just went out again, and now the clouds have briefly uncovered Mars. It's the first good look I've had at the Red Planet in years. (Tiny achromatic refractors really don't do justice to tiny, bright objects.) Mars is the most difficult planet to observe: Dave likes to talk about needing to get your "Mars eyes" trained for each Mars opposition, since they only come every two years. But even without my "Mars eyes", I had no trouble seeing the North pole with dark Acidalia enveloping it, and, in the south, the sinuous chain of Sinus Sabaeus, Meridiani, Margaritifer, and Mare Erythraeum. (I didn't identify any of these at the time; instead, I dusted off my sketch pad and sketched what I saw, then compared it with XEphem's Mars view afterward.)

I'm liking this new quick-look telescope -- not to mention the childhood memories it brings back.

June 18, 2016 02:53 PM

June 14, 2016

Elizabeth Krumbach

Spike, Dino and José

In the fall of 2014 we attended a wedding for one of MJ’s cousins and guests got to bring home their own little succulent plant wedding favor. At the time we didn’t even know what a succulent was, but we dutifully carted it home on the flight.

For the first few months we kept it in the temporary container it came in and I didn’t have a lot of faith in my ability to keep it alive. We managed though, MJ did some research to learn what it was and how to move it into a pot, and it’s been growing ever since.

One day, Caligula wasn’t feeling well. After months of ignoring the plant, he decided that a plant was just the thing to soothe his upset stomach. He tried to bite it, but couldn’t find a good spot because the leaves have a spike at the end. We named the plant Spike.


Simcoe and Spike

In December of last year our dear Spike had a brush with fame! I snapped a picture of the rain one afternoon, and a glimpse of Spike was included in an article.

In April I attended an OpenStack Summit in Austin, Texas. At one of the parties Canonical was giving out succulents in dinosaur planters. How could I resist that? Plus, I’d continue the trend of free plants traveling home in carry on luggage. Having a succulent in a dinosaur planter sticking out of my purse was quite the conversation starter as I traveled home.


Simcoe sits with Dino and Spike

Spike has grown since it was that little wedding favor plant, and it never grew straight. Perhaps because we didn’t turn it enough and it grew towards the sun, or because succulents just keep growing and that just happens. We weren’t sure what to do though, as it eventually got to the point where it was too top-heavy to properly support its own weight! Spike now has scaffolding.

We’d grown quite fond of our little plant, and wanted to see how we could save him and avoid doing the same to Dino. MJ did some research and found the Cactus & Succulent Society of San Jose, which appeared to be very welcoming to folks like us looking for help identifying their plants. We went last Sunday, upon my return from Maine, and brought Spike and Dino along.

The society meets at a meetinghouse in a park in San Jose, and the members were just as welcoming as their website led us to believe! We were welcomed as we walked in and immediately had a few of our questions answered. As the presentation began they gave us chairs and raffle tickets for later in the meeting. The meeting had a presentation from a woman who sells a lot of succulents and also does a lot of craft projects that use the plants, putting them in living wreaths and various types of cages. I had worried that Dino living in a plastic dinosaur planter would offend them (what are you doing to your precious plant?!), but it turns out that putting succulents into interesting planters is quite a popular hobby. We learned that Dino is perfectly happy in that planter for now.


From the presentation, a succulent box that hangs on the wall, and some cages and various types of succulents

After the meeting they gave out awards for the mini-show that they had for members who brought in plants they wanted to share. We were able to get all the rest of our questions answered as well. We learned a bunch.

  • Both Spike and Dino are of the Echeveria genus. We can do our own research online or at plant shows to compare ours to others to figure out exactly what kind of Echeveria they are. Spike has a purple tint to the leaves and Dino has red.
  • We didn’t actually destroy Spike, in spite of the lop-sidedness. Succulents grow and grow and grow. One method of reproduction: when a leaf drops off, it can grow into another plant! Spike is ready to become lots of mini-Spikes!
  • One of the reasons societies like this exist is so people can give away and sell their ever-growing population of succulents.
  • We picked up some Miracle-Gro soil for cacti and succulents at a home improvement store. That’s fine, but our plants likely don’t actually need fertilizer. Something to think about.
  • We should be watering these succulents every week or two, but need to keep an eye on how moist the soil is, since root rot is one of the only things that does kill these hardy plants.
  • It’s pretty hard to kill a succulent, so people use them in all kinds of craft projects and inventive ways.

They gave us some advice about how to handle Spike. They recommended cutting off the top(!), drying it out for a few weeks and replanting that. The bottom of the plant will also grow a new top of the plant. Assuming all goes well, we’ll at least end up with two Spikes that will hopefully grow straight this time, plus as many of the leaves as we want to grow into new plants. We have four leaves drying out now. We haven’t done the scary cutting and replanting yet, but it may be a project for this upcoming weekend, along with picking up a few more pots.

As the meeting wound down they did a raffle. The final ticket called was mine! We ended up going home with a Notocactus Roseoluteus, a flowering cactus. We certainly hadn’t planned on adding to our plant family at this meeting, but it’s a nice plant, and hopefully as a cactus it’ll be another plant that we can keep alive. Since we got this cactus in San Jose, we named it José.


José, the Notocactus Roseoluteus

So far José is doing OK; we watered it yesterday morning and it’s now sitting on the windowsill with the other plants.

by pleia2 at June 14, 2016 12:11 AM

June 10, 2016

Akkana Peck

Visual diffs and file merges with vimdiff

I needed to merge some changes from a development file into the file on the real website, and discovered that the program I most often use for that, meld, is in one of its all too frequent periods where its developers break it in ways that make it unusable for a few months. (Some of this is related to GTK, which is a whole separate rant.)

That led me to explore some other diff/merge alternatives. I've used tkdiff quite a bit for viewing diffs, but when I tried to use it to merge one file into another I found its merge just too hard to use. Likewise for emacs: it's a wonderful editor but I never did figure out how to get ediff to show diffs reliably, let alone merge from one file to another.

But vimdiff looked a lot easier and had a lot more documentation available, and actually works pretty well.

I normally run vim in an xterm window, but for a diff/merge tool, I want a very wide window which will show the diffs side by side. So I used gvimdiff instead of regular vimdiff:

gvimdiff docs.dev/filename docs.production/filename

Configuring gvimdiff to see diffs

gvimdiff initially pops up a tiny little window, and it ignores Xdefaults. Of course you can resize it, but who wants to do that every time? You can control the initial size by setting the lines and columns variables in .vimrc. About 180 columns by 60 lines worked pretty well for my fonts on my monitor, showing two 80-column files side by side. But clearly I don't want to set that in .vimrc so that it runs every time I run vim; I only want that super-wide size when I'm running a side-by-side diff.

You can control that by checking the &diff variable in .vimrc:

if &diff
    set lines=58
    set columns=180
endif

If you do decide to resize the window, you'll notice that the separator between the two files doesn't stay in the center: it gives you lots of space for the right file and hardly any for the left. Inside that same &diff clause, this somewhat arcane incantation tells vim to keep the separator centered:

    autocmd VimResized * exec "normal \<C-w>="

I also found that the colors, in the vim scheme I was using, made it impossible to see highlighted text. You can go in and edit the color scheme and make your own, of course, but an easy quick fix is to set all highlighting to one color, like yellow, inside the if &diff section:

    highlight DiffAdd    cterm=bold gui=none guibg=Yellow
    highlight DiffDelete cterm=bold gui=none guibg=Yellow
    highlight DiffChange cterm=bold gui=none guibg=Yellow
    highlight DiffText   cterm=bold gui=none guibg=Yellow

Merging changes

Okay, once you can view the differences between the two files, how do you merge from one to the other? Most online sources are quite vague on that, but it's actually fairly easy:

]c jumps to the next difference
[c jumps to the previous difference
dp makes them both look like the left side (apparently it stands for "diff put")
do makes them both look like the right side (apparently it stands for "diff obtain")

(The last two assume your cursor is in the left-hand window: dp pushes the hunk under the cursor to the other window, while do pulls the hunk from the other window.)

The only difficult part is that it's not really undoable. u (the normal vim undo keystroke) works inconsistently after dp: the focus is generally in the left window, so u applies to that window, while dp modified the right window and the undo doesn't apply there. If you put this in your .vimrc:

nmap du :wincmd w<cr>:normal u<cr>:wincmd w<cr>

then you can use du to undo changes in the right window, while u still undoes in the left window. So you still have to keep track of which direction your changes are going.

Worse, neither undo nor this du command restores the highlighting showing there's a difference between the two files. So, really, undoing should be reserved for emergencies; if you try to rely on it much you'll end up being unsure what has and hasn't changed.

In the end, vimdiff probably works best for straightforward diffs, and it's probably best to get in the habit of always merging from right to left, using do. In other words, run vimdiff file-to-merge-to file-to-merge-from, and think about each change before making it, so it's less likely that you'll need to undo.
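Concretely, using the same two files as earlier, that habit looks like this:

# the file being updated (the merge target) comes first, the file with
# the new work second; step through with ]c and pull hunks in with do
gvimdiff docs.production/filename docs.dev/filename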

And hope that whatever silly transient bug in meld drove you to use vimdiff gets fixed quickly.

June 10, 2016 02:10 AM

June 06, 2016

Elizabeth Krumbach

Hashtag FirstJob

Back in February Gareth Rushgrove started the fantastic Twitter hashtag, #FirstTechJob. The responses were inspiring for many people, from those starting out to people like me who “fell into” a tech career. I had a natural love for computers, various junior tech jobs, and years of volunteering in open source, but no formal education in computer science. While my story is not uncommon in tech, it can still be isolating and embarrassing at the academic conferences I participate in.

Lots of IT/software jobs ask for experience, but everyone starts somewhere. Lets encourage new folks with a tweet about our #FirstTechJob

I chimed in myself.

Contract web developer at a web development firm. Turned static designs into layouts on sites! Browser compatibility issues... #FirstTechJob

I knew when I posted it that 140 characters was not enough to provide context. For me and so many others it wasn’t just about having that first job and going through the prerequisite grunt work, but the long journey I had before getting said first tech job.

This week, two things inspired me to write more about this. First, I visited my old hometown. Second, several folks I know went to AlterConf and I saw a bunch of tweets about how tech workers should be more compassionate toward support/building/cleaning staff working around them. Don’t disrupt their work, but learn their names, engage them in conversation, treat them with respect.

I’ll begin by setting my privilege stage:

  1. I’m a white woman.
  2. Though we weren’t wealthy, I grew up in an affluent town with great public schools.
  3. I always had clothes, healthy food and a house to live in.
  4. Even though it was 10 years old (and so was I!) when it came to our house in 1991, I had a desktop computer at home and I could use it as much as I wanted. We got online at home in 1998.
  5. In addition to a supportive Linux User Group community in Philadelphia, my white 20-something boyfriend referred and recommended me to the employer who gave me my first tech job.
  6. I had, and continue to have, time to learn, hack and experiment outside of work hours.

In spite of any of the other challenges I encountered as a child, youth and young adult, my life was a lot better than many others then and now. I had a lot going for me.


The same age as I am, I found the first computer we had in a museum

So what did my visit back home do?

My husband and I stayed in one of the nicest hotels in Portland, Maine. It was at the top of the highest hill in the city and had a beautiful view of the harbor and the Portland Art Museum.

My most vivid memory from that art museum was not visiting it, though I’m sure I did on a school trip, but working there as catering staff for a wedding when I was a teen. Looking out at the museum from our hotel room, I remembered the 16 hour day that left me dead on my feet and vowing never to do it again (though of course I did). I woke up early to help cart everything to the museum, helped to make sure the chefs and servers had everything they needed behind the scenes and washed the fancy champagne soup dishes, watching most of the soup go down the drain. We rushed around the venue after the event concluded to clean and pack everything back into the van.

It brought back memories of other catering jobs I did too. At one of these jobs in my home town I served hors d’oeuvres to the extended family of people I went to school with. Being friendly and outgoing enough to offer food and carry around those trays while handing out the little napkins is a skill that I still have a lot of respect for.


The Portland Art Museum is in the center of my photo

It wasn’t just catering that I did as soon as I was old enough to work. As we drove through my home town in Cape Elizabeth my verbal tour to my husband included actual historical landmarks that make the town a tourist destination and “I babysat there!” and “I used to clean that house!” It turns out I worked a lot during high school and over those summer vacations.

All these hashtags and discussions really hit home. A formidable amount of my youth was spent as “the help” and I know what it’s like to be invisible to and disrespected by people I serve.

If nothing else, I’ll add my voice to those imploring my fellow techies to make an effort to be more compassionate to the support staff around them. After all, you know me, you can relate to me, and I spent time in their hard-working, worn out shoes.

by pleia2 at June 06, 2016 09:05 PM

June 04, 2016

Akkana Peck

Walking your Goat at the Summer Concert

I love this place. We just got back from this week's free Friday concert at Ashley Pond. Not a great band this time (the previous two were both excellent). But that's okay -- it's still fun to sit on the grass on a summer evening and watch the swallows wheeling over the pond and the old folks dancing up near the stage and the little kids and dogs dashing pell-mell through the crowd, while Dave, dredging up his rock-star past, explains why this band's sound is so muddy (too many stacked effects pedals).

And then on the way out, I'm watching appreciatively as the teen group, who were earlier walking a slack line strung between two trees, have now switched to juggling clubs. (I know old people are supposed to complain about "kids today", but honestly, the kids here seem smart and fit and into all kinds of cool activities.) One of the jugglers has just thrown three clubs and a ball, and is mostly keeping them all in the air, when I hear a bleat to my right -- it's a girl walking by with a goat on a leash.

Just another ordinary Friday evening in Los Alamos.

June 04, 2016 02:45 AM

May 29, 2016

Elizabeth Krumbach

Toys and Cats in Austin

It’s been a month since returning from my trip to Austin for the OpenStack Summit, but I’ve been overwhelmed with work and finishing my book, more on that in another post. Not much time for writing here in my blog! I had some side adventures in Austin that I’d hate to see go unmentioned.

The OpenStack Summits are pretty exhausting, so what better way to unwind than to snuggle up with some kitties? As we wrapped up our work on Friday afternoon I gathered a crew to join me at the Blue Cat Cafe, which was just under a mile from the conference venue. A bit after 5PM we made our way over there.

Along the way, we discovered the Austin Toy Museum. It was a small place, but it was a fun detour. I got my picture taken with R2-D2.

They had a relatively big Star Wars exhibit with a bunch of toys that my colleagues and I enjoyed pointing to and saying we had as kids. The museum definitely skewed toward toys from the 1980s, and the fellow who sold us our tickets waxed poetic about how the 1980s were the golden age of toys. Who am I to argue? I sure enjoyed my toys as a kid in the 1980s.


Hoth toys have always been a favorite of mine

The museum distinguishes itself by the video games, which you get to play as much as you want for the price of admission. They have a whole wall of consoles, plus several arcade games. I enjoyed getting smashed to pieces in Asteroids and playing a bit of Pac-Man, both on arcade machines. Plus, my 1980s flashback journey was completed by seeing a couple of Popples hanging out on top of the Q*bert game.

From there we finally made our way over to the cat cafe! Cat cafes have been popping up in major cities, including one in San Francisco, but this was the first time I’d made it to one. Like many of them, their focus is on adoption and care for cats that don’t have homes. They’re also great for cat lovers who can’t have one at home, or are traveling for a conference and missing their own kitties!

The inside of this cafe was definitely the domain of kitties. An old drum set was transformed into kitty sleeping areas. An old furniture-style CRT TV had the mechanical components removed to make way for a nice cat bed. There were also plenty of places to climb!

There were also some unintentional cat toys. When someone left the bathroom door open we learned why you don’t leave the bathroom door open.

The cafe component of this establishment was served by a food truck in front of the building. You can order from inside with the kitties, but they take your order out to the food truck to be prepared and then you pick it up at a window inside, or they bring it to you. I enjoyed some hot cider while we petted the cats that wandered through where we were sitting on some couches.

Our adventure to the cat cafe was my perfect relaxing activity post-conference. Next time I’m in Austin I plan on checking out the Museum of the Weird and Austin Books & Comics, which I had planned on visiting but didn’t make it to.

A few more photos from the cat cafe here: https://www.flickr.com/photos/pleia2/albums/72157668283330182

by pleia2 at May 29, 2016 03:59 PM

May 25, 2016

Elizabeth Krumbach

Sharks and Giants

Six years ago sports weren’t on my radar. I’d been to a couple minor league baseball games (Sea Dogs in Portland when I was young, and the Reading Phillies a few years earlier) but it wasn’t until 2010 that I went to a major sporting event.

I’m not sure if it was the stunning AT&T Park or I was just at a point in my life where I could chill out and enjoy a game, but I fell in love that night in 2010 when we watched the Philadelphia Phillies play the San Francisco Giants. Since then I’ve attended a bunch more San Francisco Giants games, several Oakland A’s games, and MJ and I have branched out into hockey too by going to San Jose Sharks games. Back in December I went to my first football game. Baseball still holds my heart, and so does AT&T Park, but I do enjoy a good hockey game.

A couple of weeks ago, when we learned that the Sharks were going into game 7 of their second-round playoff series, we snapped up tickets. On May 12th we took Caltrain down to San Jose to see them play against Nashville.

It was the first time I’d ever been to a playoff game for any sport. Going to a sold out game with the energy that a playoff brings was quite the experience. It was a really enjoyable game for Sharks fans.

Nashville had lots of great passes, but the Sharks won 4-0, sealing their spot in the conference finals. Nice! This week will determine how far they go; as I write this they hold a 3-2 series lead in the conference finals.

More pictures from the evening and the game: https://www.flickr.com/photos/pleia2/albums/72157666141427753

The only downside to the evening was the trek home. I’d love for Caltrain to be a good option both ways. Going down is pretty easy and quick on a bullet train during rush hour, but coming home is pretty rough. The game ended around 8:30, we were on the train platform by 9 to catch a 9:30 train, and by the time we got home it was 11:30PM. Three hours from the end of the game to getting home was a bit much, especially since I was also recovering from a nasty cold that had sapped my energy pretty severely.

I hadn’t planned on going to another game this month, but a friend and colleague who is staying in town for a few weeks contacted me to see if I’d be interested in catching a baseball game this week. Count me in. Last night MJ and I met up with my buddy Spencer and we caught a Giants game down at my beloved AT&T Park.

The weather was a bit gloomy, but we only had a little misting toward the end of the game. The Giants were in their first game against the San Diego Padres, and the Padres put up a fight. The game was 0-0 until the bottom of the 9th. It was actually a little painful, but I had good company… who I dragged halfway across the stadium so we could get decent beer during the game. Happy to report that I enjoyed a Mango Wheat and Go West! IPA by Anchor Brewing Company along with my obligatory ball game hot dogs.

It was in the bottom of the 9th inning, just as we were all getting ready for extra innings, that the Giants scored a run. It sure made for an exciting final inning!

More photos here: https://www.flickr.com/photos/pleia2/albums/72157668743195896

No complaints about the commute home from AT&T Park. We live less than a mile from the stadium so just needed to use our feet to get home, along with dozens of other fans headed in the same direction.

by pleia2 at May 25, 2016 07:45 AM

May 23, 2016

Jono Bacon

Moving on From GitHub

Last year I joined GitHub as Director Of Community. My role has been to champion and manage GitHub’s global, scalable community development initiatives. Friday was my last day as a hubber and I wanted to share a few words about why I have decided to move on.

My passion has always been about building productive, engaging communities, particularly focused on open source and technology. I have devoted my career to understanding the nuances of this work and which workflow, technical, psychological, and leadership ingredients can deliver the most effective and rewarding results.

As part of this body of work I wrote The Art of Community, founded the annual Community Leadership Summit, and I have led the development of community at Canonical, XPRIZE, OpenAdvantage, and for a range of organizations as a consultant and advisor.

I was attracted to GitHub because I was already a fan and was excited by the potential within such a large ecosystem. GitHub’s story has been a remarkable one and it is such a core component in modern software development. I also love the creativity and elegance at the core of GitHub and the spirit and tone in which the company operates.

Like any growing organization though, GitHub will from time to time need to make adjustments in strategy and organization. One component in some recent adjustments sadly resulted in the Director of Community role going away.

The company was enthusiastic about my contributions and encouraged me to explore some other roles that included positions in product marketing, professional services, and elsewhere. So, I met with these different teams to explore some new and existing positions and see what might be a good fit. Thanks to everyone in those conversations for your time and energy.

Unfortunately, I ultimately didn’t feel they matched my passion and skills for building powerful, productive, engaging communities, as I mentioned above. As such, I decided it was time to part ways with GitHub.

Of course, I am sad to leave. Working at GitHub was a blast. GitHub is a great company and is working on some valuable and important areas that strike right at the center of how we build great software. I worked with some wonderful people and I have many fond memories. I am looking forward to staying in touch with my former colleagues and executives and I will continue to be an ardent supporter, fan, and user of both GitHub and Atom.

So, what is next? Well, I have a few things in the pipeline that I am not quite ready to share yet, so stay tuned and I will share this soon. In the meantime, to my fellow hubbers, live long and prosper!

by Jono Bacon at May 23, 2016 03:20 PM

May 18, 2016

Jono Bacon

Kindness and Community

On Friday last week I flew out to Austin to run the Community Leadership Summit and join OSCON. When I arrived in Austin, I called home and our son, Jack, was rather upset. It was clear he wasn’t just missing daddy; he also wasn’t feeling very well.

As the week unfolded he developed strep throat. While a fairly benign issue in the scheme of things, it is clearly uncomfortable for him and pretty scary for a 3-year-old. With my wife, Erica, flying out today to also join OSCON and perform one of the keynotes, it was clear that I needed to head home to take care of him. So, I packed my bag, wrestled to keep the OSCON FOMO at bay, and headed to the airport.

Coordinating the logistics was no simple feat, and stressful. We both feel awful when Jack is sick, and we had to coordinate new flights, reschedule meetings, notify colleagues and handover work, coordinate coverage for the few hours in-between her leaving and me landing, and other things. As I write this I am on the flight heading home and at some point she will zoom past me on another flight heading to Austin.

Now, none of this is unusual. Shit happens. People face challenges every day, and many far worse than this. What struck me so notably today though was the sheer level of kindness from our friends, family, and colleagues.

People wrapped around us like a glove. Countless people offered to take care of responsibilities, help us with travel and airport runs, share tips for helping Jack feel better, provide sympathy and support, and more.

This was all after a weekend of running the Community Leadership Summit, an event that solicited similar levels of kindness. There were volunteers who got out of bed at 5am to help us set up, people who offered to prepare and deliver keynotes and sessions, coordinate evening events, equipment, sponsorship contributions, and help run the event itself. Then, to top things off, there were remarkably generous words and appreciation for the event as a whole when it drew to a close.

This is the core of what makes community so special, and so important. While at times it can seem the world has been overrun with cynicism, narcissism, negativity, and selfishness, we are instead surrounded by an abundance of kindness. What helps this kindness bubble to the surface are great relationships, trust, respect, and clear ways in which people can play a participatory role and support each other. Whether it is something small like helping Erica and me take care of our little man or something more involved such as an open source project, it never ceases to inspire and amaze me how innately kind and collaborative we are.

This is another example of why I have devoted my life to understanding every nuance I can of how we can tap into and foster these fundamental human instincts. This is how we innovate, how we make the world a better place, and how we build opportunity for everyone, no matter what their background is.

When we harness these instincts, understand the subtleties of how we think and operate, and wrap them in effective collaborative workflows and environments, we create the ability to build and disrupt things more effectively than ever.

It is an exciting journey, and I am thankful every day to be joined on it by so many remarkable people. We are going to build an exciting future together and have a rocking great time doing so.

by Jono Bacon at May 18, 2016 07:48 PM

May 12, 2016

Elizabeth Krumbach

My Yakkety Yak has arrived!

I like toys, but I’m an adult who lives in a small condo, so I need to behave myself when it comes to bringing new friends into our home. I made an agreement with myself to try and limit my stuffed toy purchases to two per year, one for each Ubuntu release.

Even so, I now have quite the collection.

These toys serve the purpose of brightening up our events with some fun, and I enjoy the search for a new animal to match Mark Shuttleworth’s latest animal announcement. Truth be told, my tahr is a goat that I found that kind of looks like a tahr. The same goes for my xerus. My pangolin ended up having to be a plastic toy, though awareness about the animal (and conservation efforts) has grown since 2012, so I’d likely be able to find one now. The quetzal was the trickiest; I had to admit defeat and bought an ornament instead, but I did find and buy some quetzal earrings during our honeymoon in Mexico.

I’ve had fun as well and learned more about animals, which I love anyway. For the salamander I bought a $55 Hellbender Salamander Adoption Kit from the World Wildlife Fund, an organization my husband and I now donate to annually. Learning about pangolins led me to visit one in San Diego and made me aware of the Save Pangolins organization.

It is now time for a Yakkety Yak! After some indecisiveness, I went with an adorable NICI yak, which I found on Amazon and shipped from Shijiazhuang, China. He arrived today.

Here he is!

…though I did also enjoy the first photo I took, where trusty photobombed us.

by pleia2 at May 12, 2016 01:38 AM

May 10, 2016

Elizabeth Krumbach

Newton OpenStack Summit Days 3-5

On Monday and Tuesday I was pretty focused on the conference side of the OpenStack Summit, but with all the keynotes behind us, when Wednesday rolled around I found myself much more focused on the Design Summit side.

Our first session of the day was on Community Task Tracking, which we jokingly called the “task tracking bake-off.” As background, a couple of years ago the OpenStack Infrastructure team placed our bets on an in-project developed task tracker called StoryBoard. The hope had been that the intention to move off of Launchpad and onto this new platform would bring support from companies looking to help with development. Unfortunately this didn’t pan out. Development landed on the shoulders of a single poor, overworked soul. At this point we started looking at the Maniphest component of Phabricator. Simultaneously, we ended up with a contributor putting together configuration management for Maniphest, and a team popped up to continue support of StoryBoard for a downstream that had begun using it. A few weeks ago I organized a bug day where the team got together to do a serious once-through of outstanding bugs and provide feedback to the StoryBoard team about what we need in order to use it; we went from 571 active bugs down to 414.

This set the stage for our session. We could stand up a Maniphest server or place our bets with StoryBoard again. We had a lot to consider.

StoryBoard:
  Pros: strong influence over direction; already running and being used in our infra; good API.
  Cons: we need to invest in development ourselves; little support for design/UI folks (though we could run a standalone Pholio).

Maniphest:
  Pros: investment is made by a large existing development team; feature rich, with pluggable components like Pholio for design folks.
  Cons: little influence over direction (as with Gerrit); we would still have to stand it up and migrate to it; weak API.

Both had a few things lacking that we’d need before we go full steam into use by all of OpenStack, so there seemed to be consensus that they were similar in terms of work and time needed to get to that point. Of all the considerations, the need to develop our own vs. depending on upstream is the one that weighed most heavily upon me. Will companies really step up and help with development once we move everyone into production? What happens if our excellent current StoryBoard developers are reassigned to other projects? Having an active upstream certainly is a benefit. The session didn’t end with a formal selection, but we will be discussing it more over the next couple weeks so we can move toward making a recommendation to the Technical Committee (TC). Read-only session etherpad here.

The next session I attended was in the QA track, for the DevStack Roadmap. The session centered around finally making DevStack use Neutron by default. It’s been some time since nova-networking was deprecated, so this switch was a long time in coming. In addition to the technical components of this switch, the documentation needs to be updated around the networking decisions. Since I’ve just recently done some deep dives into OpenStack networking, somehow I ended up volunteering to help with this bit! Read-only session etherpad here.

Before the very busy lunch I had coming up, there was one more morning session, on Landing Page for Contributors. The current pages we have, like the Main page on the wiki itself and the How To Contribute wiki, aren’t the most welcoming; they’re walls of text that a new contributor has to sift through. This session talked through a lot of the tooling that could be used to make a more inviting, approachable page, drawing from other projects that have forged this path in the past. Of course it is also important that the content is reviewed and maintainable from the project perspective too, so something that can be held in revision control is key. Read-only session etherpad here.

As lunch rolled around I rushed upstairs to assist with the Git and Gerrit – Lunch and Learn. The event began by identifying and separating out about 1/3 of the folks in the room who hadn’t completed the prerequisites. It was the job of myself and the other helpers to start working with these folks to get their accounts set up and git-review installed. This wasn’t a trivial task: in spite of my intimate knowledge of how our system works and years of using it, almost all the attendees used Windows or Mac, while I use Linux full time, and we don’t maintain good (or any) documentation of the OpenStack development workflow for these other operating systems.

A lot of folks did make it through configuration, and it was nice to be reminded about how our community is growing and that our tools need to do as well. A patch was submitted several months back to add a video of how to set things up on Windows, but that’s inconsistent with the rest of our documentation and has not been accepted. It would be great to see some folks using these other operating systems help us get the written documentation into better shape. Beyond the prerequisites, session leaders Amy Marrich and Tamara Johnston walked folks through setting up their environment, submitting a patch to the sandbox repo, submitting a test bug, reviewing a change and more. The slide deck they used has been uploaded to Amy’s AustinSummit GitHub project. I also took a few minutes to explain the Zuul Status page and a bit about each of the pipelines that a change may go through on the way to being merged.


Git and Gerrit – Lunch and Learn
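For anyone who missed the session, the heart of the workflow we teach boils down to a handful of commands. This is a minimal sketch, assuming a Gerrit account is already configured and git-review is installed, with openstack-dev/sandbox standing in for the sandbox repo mentioned above:

  # Clone the sandbox repository used for practicing the workflow
  git clone https://git.openstack.org/openstack-dev/sandbox
  cd sandbox
  # Make a change on a topic branch
  git checkout -b my-first-change
  echo "practicing the Gerrit workflow" > hello.txt
  git add hello.txt
  git commit -m "Add a test file to practice the Gerrit workflow"
  # Push the change to Gerrit for review
  git review

From there the change appears in Gerrit, where reviewers and our CI pipelines leave feedback.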

Directly after lunch I was in another infrastructure session, this time to talk about Launch-Node, Ansible and Puppet. Launching new, long-lived servers in our infrastructure is one of those tasks that has remained frustratingly hands-on. This manual work has been a time sink and a lot of it can be automated, so we as a team consider this situation a bug. Our Launch-Node script has been developed to start tackling this, and the session went through some of the things we need to be careful of, including handling of DNS and duplicate hostnames (what if we’re spinning up a replacement server?), and when to unmount and disassociate cinder volumes and trove databases from the old server and bring them up on the new one. Lots of great discussion around all of this was had. Fixes were already coming in by the end of this session and we have a good path moving forward. Read-only session etherpad here.
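To give a sense of what the script is automating, the manual flow for a replacement server looks roughly like the following. This is an illustrative sketch using the standard OpenStack client rather than our actual launch-node script; the server, image, flavor, key, and volume names are made up:

  # Boot the replacement server
  openstack server create --image ubuntu-trusty --flavor performance-8 \
      --key-name infra-root wiki01.openstack.org
  # Once the old server releases it, attach the existing data volume
  openstack server add volume wiki01.openstack.org wiki-data
  # ...then update DNS to point at the new address and retire the old host

The session was about teasing apart exactly these steps: which are safe to automate blindly, and which (like DNS and volume moves) need care.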

The next infrastructure session focused on Wiki upgrades. We’ve been struggling with spam problems for several months. We need to do an upgrade to get some of the latest anti-spam tooling, which also requires upgrading the operating system in order to get a newer version of PHP. The people-power we have for this is limited, as we all have a lot of other projects on our plates. The session began by outlining what we need to do to get this done, and wound down with the proposal to shut down the wiki in a year. The OpenStack project has great, collaborative tooling for publishing documentation, and we also use etherpads a lot for notes and to-do lists, so is there really still an active need for a wiki? Thierry Carrez sent an email today that started work on socializing our options, whether to carry on with the wiki or not. As the discussions continue on list, I hope to help find tooling for teams whose needs the current tools don’t satisfy. While we do that over the next year, Paul Belanger has bravely stepped forward to lead the ongoing maintenance of the wiki until its possible retirement. Read-only session etherpad here.

Thursday morning kicked off bright and early with a session on Proposal jobs. As some quick background, proposal jobs are run on a privileged server in the OpenStack infrastructure that has the credentials to publish to a few places, like translations files up to Zanata. With this in mind, and as general good policy, we like to keep the jobs we’re running here down to a minimum, using non-privileged servers as much as possible to complete tasks. The session walked through several of the existing jobs and new ones that were being proposed to sort through how they could be done differently, and to make sure we’re all on the same page as a team when it comes to approving new jobs on these servers. Read-only session etherpad here.

It was then on to a session to “Robustify” Ansible-Puppet. Several months back we switched over to a system of triggering Puppet runs with Ansible instead of using the Puppetmaster software. This process quickly became complicated, so much so that even I struggled to trace the whole path of how everything works. Thankfully Monty Taylor and Spencer Krum started off the session by whiteboarding how everything works together, or doesn’t, as the case may be. It was a huge help to see it sketched out so that the pain points could be identified, one of those rare times when it was super valuable to be together in a room as a team rather than trying to explain things over IRC. We learned that inventory creation for Ansible is one of our pain points, but the complexity of the whole system has made fixing problems tricky: you pull one thread and something else gets undone! We also discussed the status of logging, and how we can better prepare for edge cases where things Really Go Wrong and we can’t access the server to see the logs to find out what happened. There’s also some Puppetboard debugging to do, as folks rely on the data from that and it hasn’t been entirely accurate in reporting failures lately. In all, a great session, read-only session etherpad here.


Monty and Spencer explain our Ansible-Puppet setup
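For readers who weren’t in the room: the basic pattern, with all of the complexity we were untangling stripped away, is an Ansible run that fans out across the inventory and applies our Puppet manifests. A minimal ad-hoc sketch, not our real playbooks (the manifest path is illustrative):

  # Apply the site manifest on every host, ten at a time;
  # --detailed-exitcodes distinguishes "changed" from "failed" in the logs
  ansible all --forks 10 -m shell \
      -a 'puppet apply --detailed-exitcodes /opt/system-config/manifests/site.pp'

The pain points in the session were everything around this one-liner: building the inventory, ordering the runs, and collecting the logs.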

Next up for Infrastructure was a fishbowl session about OpenID/SSO for Community Systems. The OpenStack Foundation invested in the development of OpenStackID when few other options that fit our needs were mature in this space. Today we have the option of using ipsilon, which has a bigger development community and is already in use by another major open source project (Fedora). The session outlined the benefits of consuming an upstream tool instead, including its development model and security considerations, weighed against the resources that have been spent rolling our own solution. The session also outlined exactly what our needs are to move all of our authentication away from the Canonical-hosted Launchpad. I think it was a good session with some healthy discussion about where we are with our tooling, read-only session etherpad here.

I spent my time after lunch with the translations/internationalization (i18n) folks in a 90 minute work session on Translation Processes and Tools (read-only session etherpad here). My role in this session, along with Steve Kowalik and Andreas Jaeger, was to represent the infrastructure team and the tooling we could provide to help the i18n team get their work done. Of particular focus were the translations check site that we need to work toward bringing online and our plan to upgrade Zanata and the underlying operating system it’s running on. We also discussed some of the other requirements of the team, like automated polling of Active Technical Contributor (ATC) status for translators and improved statistics on Stackalytics for translations. Andreas was also able to take time to show off the new translations procedure for reno-driven release notes, which allows for translations throughout the cycle as they’re committed to the repositories rather than a mad rush to complete them at the end. It was also nice to catch up with Alex Eng from the Zanata team and former i18n PTL Daisy (Ying Chun Guo), who I had such a great time with in Tokyo; I wish I’d had more time to grab a meal with them.

In our final Infrastructure-focused session of the day, we met to discuss operating system upgrades. With the release of the latest Ubuntu LTS (16.04) the week prior to the summit, we find ourselves in a world with three Ubuntu LTS releases in the mix. We decided to first carve out some time to get all of our 12.04 systems upgraded to 14.04. From there we’ll work to get our Puppet modules updated and services prepared for running on 16.04. Of particular interest to me is getting the Zanata server on 16.04 soon so we can upgrade the version of Zanata it’s running, which requires a newer version of Java than 14.04 provides. We also spent a little time splitting out the easier servers to upgrade from the more difficult ones, especially since some systems have very little data and don’t actually need an in-place upgrade; we can simply redeploy those workers. We will do a more thorough evaluation when we’re closer to upgrade time, which we’re scheduling for some time this month. Read-only session etherpad here.

Thursday evening meant it was time for our Infrastructure Team dinner! Over 20 self-proclaimed infrastructure team members piled into cars to make it across town to Freedmans to enjoy piles of BBQ. I had to pass on all things bready (including beer) but later in the evening we made our way inside to the bar where we found agave tequila that was not forbidden for me. The rest was history. Lots of fun and great chats with everyone, including a bunch of non-infra people who had been clued into our late night shenanigans and decided to join us.


Infra evening gathering, photo by Monty Taylor

Friday was our day for team work session gatherings. Infrastructure ended up in room 404 (which, in fact, was difficult to find). Jeremy Stanley (fungi) kicked the day off by outlining topics for Infra and QA that we may find valuable to work on together while we were in the room. I worked on a few things with folks for about an hour before switching tracks to join my translations friends again over in their work session.

Steve, Andreas and I made our way over to the i18n session to chat with them about the ability to translate more things (like DevStack documentation) and to give them an update from our upgrades session for an idea of when they could expect the Zanata upgrade. Perhaps the most exciting part of the morning was their request for us to finally shut down the OpenStack Transifex project. We switched to Zanata when Transifex went closed source, but our hosted account had lingered around for a year since, “just in case” we needed something from it. With two OpenStack cycles on Zanata behind us, it was time to shut it down. We were all delighted when we saw the email: [Transifex] The organization OpenStack has been deleted by the user jaegerandi.


Cheerful crowd of i18n contributors!

After one more lunch at Cooper’s BBQ, I made it back to the Infrastructure room for more afternoon work, but I could feel the cloud of exhaustion hitting me by then. Most of what I managed was informally chatting with my fellow contributors and sketching out work to be done rather than actually getting much done. There’d be plenty of time for that once I returned home!

I concluded my time in Austin with a few colleagues with a visit to the Austin Toy Museum, some leisurely time at the Blue Cat Cafe (my first cat cafe!) and a quiet sushi dinner. With that, another great OpenStack Summit was behind me. My flight home left at 6AM Saturday morning.

Edit: Infrastructure PTL Jeremy Stanley has also written summaries of sessions here: Newton Summit Infra Sessions Recap

by pleia2 at May 10, 2016 07:39 PM

May 08, 2016

Akkana Peck

Setting "Emacs" key theme in gtk3 (and Firefox 46)

I recently let Firefox upgrade itself to 46.0.1, and suddenly I couldn't type anything any more. The emacs/readline editing bindings, which I use probably thousands of times a day, no longer worked. So every time I typed a Ctrl-H to delete the previous character, or Ctrl-B to move back one character, a sidebar popped up. When I typed Ctrl-W to delete the last word, it closed the tab. Ctrl-U, to erase the contents of the urlbar, opened a new View Source tab, while Ctrl-N, to go to the next line, opened a new window. Argh!

(I know that people who don't use these bindings are rolling their eyes and wondering "What's the big deal?" But if you're a touch typist, once you've gotten used to being able to edit text without moving your hands from the home position, it's hard to imagine why everyone else seems content with key bindings that require you to move your hands and eyes way over to keys like Backspace or Home/End that aren't even in the same position on every keyboard. I map CapsLock to Ctrl for the same reason, since my hands are too small to hit the PC-positioned Ctrl key without moving my whole hand. Ctrl was to the left of the "A" key on nearly all computer keyboards until IBM's 1986 "101 Enhanced Keyboard", and it made a lot more sense than IBM's redesign since few people use Caps Lock very often.)

I found a bug filed on the broken bindings, and lots of people commenting online, but it wasn't until I found out that Firefox 46 had switched to GTK3 that I understood what had actually happened. And adding gtk3 to my web searches finally put me on the track to finding the solution, after trying several other supposed fixes that weren't.

Here's what actually worked: edit ~/.config/gtk-3.0/settings.ini and add, inside the [Settings] section, this line:

gtk-key-theme-name = Emacs
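(If the file doesn't exist yet, you can create it from scratch; the whole file can be as small as these two lines:)

  [Settings]
  gtk-key-theme-name = Emacs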

I think that's all that was needed. But in case that doesn't do it, here's something I had already tried, unsuccessfully, and it's possible that you actually need it in addition to the settings.ini change (I don't know how to undo magic Gnome settings so I can't test it):

gsettings set org.gnome.desktop.interface gtk-key-theme "Emacs"

May 08, 2016 12:11 AM

May 01, 2016

Elizabeth Krumbach

Newton OpenStack Summit Days 1-2

This past week I attended my sixth OpenStack Summit. This one took us to Austin, Texas. I was last in Austin in 2014 when I quickly stopped by to give a talk at the Texas LinuxFest, but I wasn’t able to stay long during that trip. This trip gave me a chance (well, several) to finally have some local BBQ!

I arrived Sunday afternoon and took the opportunity to meet up with Chris Aedo and Paul Belanger, who I’d be on the stage with on Monday morning. We were able to meet up together for the first time and do a final once-through of our slides to make sure they had all the updates we wanted and that we were clear on where the transitions were. Gathering at the convention center also allowed us to pick up our badges before the mad rush that would come with the opening of the conference itself on Monday morning.

With Austin being the Live Music Capital of the World, we were greeted in the morning by live music from the band Soul Track Mind. I really enjoyed the vibe it brought to the morning, and we had a show to watch as we settled in and waited for the keynotes.

Jonathan Bryce and Lauren Sell of the OpenStack Foundation opened the conference and gave us a tour of numbers. The first OpenStack summit was held in Austin just under six years ago with 75 people and they were proud to announce that this summit had over 7,500. It’s been quite the ride that I’m proud to have been part of since the beginning of 2013. In Jonathan’s keynote we were able to get a glimpse into the real users of OpenStack, with highlights including the fact that 65% of respondents to the recent OpenStack User Survey are using OpenStack in production and that half of the Fortune 100 companies are using OpenStack in some capacity. It was also interesting to learn how important the standard APIs for interacting with clouds was for companies, a fact that I always hoped would shine through as this open source cloud was being adopted. The video from his keynote is here: Embracing Datacenter Diversity.

As the keynotes continued, the ones that really stood out for me were by AT&T (video: AT&T’s Cloud Journey with OpenStack) and Volkswagen Group (video: Driving the Future of IT Infrastructure at Volkswagen Group).

The AT&T keynote was interesting from a technical perspective. It’s clear that the rise of mobile devices and the internet of things has put pressure on telecoms to grow much more quickly than they have in the past to handle this new mobile infrastructure. Their keynote shared that they expect this to grow an additional ten times by 2020. To meet this need, the networking aspects of technologies like OpenStack are important to their strategy as they move away from “black box” hardware from networking vendors and toward more software-driven infrastructure that can grow more quickly to fit their needs. We learned that they’re currently using 10 OpenStack projects in their infrastructure, with plans to add 3 more in the near future, and learned about their in-house AT&T Integrated Cloud (AIC) tooling for managing OpenStack. When the morning concluded, all their work was rewarded with a Super User award, which they wrote about here.

The Volkswagen Group keynote was a lot of fun. As the world of electric and automated cars quickly approaches they have recognized the need to innovate more quickly and use technology to get there. They still seem to be in the early days of OpenStack deployments, but have committed a portion of one of their new data centers to just OpenStack. Ultimately they see a hybrid cloud future, leveraging both public and private hosting.

The keynote sessions concluded with the announcement of the 2017 OpenStack Summit locations: Boston and Sydney!

Directly after the keynote I had to meet Paul and Chris for our talk on OpenStack Infrastructure for Beginners (video, slides). We had a packed room. I led off the presentation by giving an overview of our work and a high-level tour of the OpenStack project infrastructure. Chris picked up by speaking to how things work from a developer perspective, tying that back into how and why we set things up the way we did. Paul rounded out the presentation by diving into more of the specifics around Zuul and Jenkins, including how our testing jobs are defined and run. I think the talk went well; we certainly had a lot of fun as we went into lunch chatting with folks about specific components that they were looking either to get involved with or replicate in their own continuous integration systems.


Chris Aedo presenting, photo by Donnie Ham (source)

After a delicious lunch at Cooper’s BBQ, I went over to a talk on “OpenStack Stable: What It Actually Means to Maintain Stable Branches” by Matt Riedemann, Matthew Treinish and Ihar Hrachyshka in the Upstream Development track of the conference. This was a new track for this summit, and it was great to see how well-attended the sessions ended up being. The goal of this talk was to inform members of the community what exactly is involved in the management of stable releases, which has a lot more moving pieces than most people tend to expect. Video from the session is up here. It was then over to “From Upstream Documentation To Downstream Product Knowledge Base” by Stefano Maffulli and Caleb Boylan of DreamHost. They’ve been taking OpenStack documentation and adjusting it to be easier and more targeted for consumption by their customers. They talked about the toolchain that gets it from raw upstream OpenStack source into the proprietary knowledge base at DreamHost. It’ll be interesting to see how this scales long term through releases and documentation changes, video here.

My day concluded by participating in a series of Lightning Talks. My talk was first, during which I spent 5 minutes giving a tour of status.openstack.org. I was inspired to give this talk after realizing that even though the links are right there, most people are completely unaware of what things like Reviewday (“Reviews” link) are. It also gave me the opportunity to take a closer, current look at OpenStack Health prior to my presentation; I had intended to go to “OpenStack-Health Dashboard and Dealing with Data from the Gate” (video) but it conflicted with the talk we were giving in the morning. The lightning talks continued with talks by Paul Belanger on Grafyaml, James E. Blair on Gertty and Andreas Jaeger on the steps for adding a project to OpenStack. The lightning talks from there drifted away from Infrastructure and into more general upstream development. Video of all the lightning talks here.

Day two of the summit began with live music again! It was nice to see that it wasn’t a single day event. This time Mark Collier of the OpenStack Foundation kicked things off by talking about the explosion of growth in infrastructure needed to support the growing Internet of Things. Of particular interest was learning about how operators are particularly seeking seamless integration of virtual machines, containers and bare metal, and how OpenStack meets that need today as a sort of integration engine, video here.

The highlights of the morning for me included a presentation from tcp cloud in the Czech Republic. They’re developing a Smart City in the small Czech city of Písek. The presenter gave an overview of the devices they were using and presented a diagram demonstrating how all the data they collect from around the city gets piped into an OpenStack cloud that they run. He concluded his presentation by revealing that they’d turned the summit itself into a mini city by placing devices around the venue to track temperature and CO2 levels throughout the rooms, very cool. Video of the presentation here.


tcp cloud presentation

I also enjoyed seeing Dean Troyer on stage to talk about improving user experience (UX) with OpenStackClient (OSC). As someone who has put a lot of work into converting the documented commands in my book to use OSC rather than the individual project clients, I certainly appreciate his dedication to this project. The video from the talk is here. It was also great to hear from OVH, an ISP and cloud hosting provider who currently donates OpenStack instances to our infrastructure team for running CI testing.

Tuesday also marked the beginning of the Design Summit. This is when I split off from the user conference and spent the rest of my time in the development space. This time the Design Summit was held across the street from the convention center in the Hilton where I was staying. This area of the summit takes us away from presentation-style sessions and into discussions and work sessions. This first day focused on cross-project sessions.

This was the lightest day of the week for me, since I had a much stronger commitment to the infrastructure sessions happening later in the week. Still, I went to several sessions, starting off with one led by Doug Hellmann about how to improve the situation around global requirements. The session actually seemed to be an attempt to define the issues around requirements, get more contributors to help with requirements project review, and chat about improvements to tests. We’d really like to see requirements changes have a lower chance of breaking things, so trying to find folks to sign up to do this test-writing work is really important.

I had lunch with my book-writing co-conspirator Matt Fischer to chat about some of the final touches we’re working on before it’s all turned in. We ended up with a meaty lunch again at Moonshine Grill just across the street from the convention center, after which I went into a “Stable Branch End of Life Policy” session led by Thierry Carrez and Matt Riedemann. The stable situation is a tough one. Many operators want stable releases with longer lifespans, but the commitment from companies to put engineers on it is extremely limited. This session explored the resources required to continue supporting releases for longer (infra, QA, etc.) and there were musings around extending the support period to up to 24 months (from 18) for projects meeting certain requirements. Ultimately, by the end of the summit it seemed that 18 months will continue to be the lifespan of all releases.

I then went over to the Textile building across from the conference center where my employer, HPE, had set up their headquarters. I had a great on-camera chat with Stephen Spector about how open source has evolved from hobbyist to corporate since I became involved in 2001. I then followed some of the marketing folks outside to shoot some snippits for video later.

The day of sessions continued with a “Brainstorm format for design summit split event” session that talked a lot about dates. As a starting point, Thierry Carrez wrote a couple blog posts about the proposal to split the design summit from the user summit:

With these insightful blog posts in mind, the discussion moved forward on the assumption that the events would be split, focusing on how to handle the timing. When in the cycle would each event happen for maximum benefit to our entire community? In the first blog post he had a graphic with a proposed timeline, which the discussions mostly stuck to, while diving deeper into what goes on during each week of the release cycle and when the best time would be for developers to gather together to start planning the next release. While there was good discussion on the topic, it was clear that there continues to be apprehension around travel for some contributors. There are fears that they would struggle to attend multiple events funding-wise, especially when questions arose around whether mid-cycle events would still be needed. Change is tough, but I’m on board with the plan to split out these events. Even as I write this blog post, I notice the themes and feel of the different parts of our current summit are very different.

My session day concluded with a session about cross-project specifications led by Shamail Tahir and Carol Barrett from the Product Working Group. I didn’t know much about OpenStack user stories, so this session was informative for seeing how those should be used in specs. In general, planning work in a collaborative way, especially across different projects that have diverse communities, is tricky. Having some standards in place for these specs so teams are on the same page and have the same expectations for format seems like a good idea.

Tuesday evening meant it was time for the StackCity Community Party. Instead of individual companies throwing big, expensive parties, a street was rented out and companies were able to sponsor the bars and eateries in order to throw their branded events in them. Given my dietary restrictions this week, I wasn’t able to partake in much of the food being offered, so I only spent about an hour there before joining a friend with similar dietary restrictions over at Iron Works BBQ. But not before I picked up a dinosaur with a succulent in it from Canonical.

I called it an early night after dinner, and I’m glad I did. Wednesday through Friday were some busy days! But those days are for another post.

More photos from the summit here: https://www.flickr.com/photos/pleia2/albums/72157667572682751

by pleia2 at May 01, 2016 04:28 PM

April 29, 2016

Akkana Peck

Vermillion Cliffs trip, and other distractions

[Red Toadstool, in the Paria Rimrocks] [Cobra Arch, in the Vermillion Cliffs] I haven't posted in a while. Partly I was busy preparing for, enjoying, then recovering from, a hiking trip to the Vermillion Cliffs, on the Colorado River near the Arizona/Utah border. We had no internet access there (no wi-fi at the hotel, and no data on the cellphone). But we had some great hikes, and I saw my first California Condors (they have a site where they release captive-bred birds). Photos (from the hikes, not the condors, which were too far away): Vermillion Cliffs trip.

I've also been having fun welding more critters, including a roadrunner, a puppy and a rattlesnake. I'm learning how to weld small items, like nail legs on spark plug dragonflies and scorpions, which tend to melt at the MIG welder's lowest setting.

[ Welded puppy ] [ Welded Roadrunner ] [ Welded rattlesnake ]

New Mexico's weather is being charmingly erratic (which is fairly usual): we went for a hike exploring some unmapped cavate ruins, shivering in the cold wind and occasionally getting lightly snowed upon. Then the next day was a gloriously sunny hike out Deer Trap Mesa with clear long-distance views of the mountains and mesas in all directions. Today we had graupel -- someone recently introduced me to that term for what Dave and I have been calling "snail" or "how" since it's a combination of snow and hail, soft balls of hail like tiny snowballs. They turned the back yard white for ten or fifteen minutes, but then the sun came out for a bit and melted all the little snowballs.

But since it looks like much of today will be cloudy, it's a perfect day to use up that leftover pork roast and fill the house with good smells by making a batch of slow-cooker green chile posole.

April 29, 2016 06:28 PM

April 23, 2016

Elizabeth Krumbach

FOSSASIA 2016

A few weeks ago I had the pleasure of flying to Singapore to participate in FOSSASIA 2016, which is billed as Asia’s Premier Open Technology Event. I was able to spend a little time prior to the event doing some touristing but Friday morning came quickly and I met up with a colleague to make our way to the conference. We took the Singapore MRT (Mass Rapid Transit, rails!) from the station near our hotel to the Science Centre Singapore where the conference was being held. I was really pleased with how fast, frequent, clean and easy to navigate the MRT is during rush hour. Though the trains did tend to fill up, we had very easy rides to and from the venue each day.

This was my second open source conference in a science museum, and I really like the association. As conference attendees we were free to visit the museum (photos here). It was quite an honor to be welcomed to the center by Lim Tit Meng, the museum’s Chief Executive, during the keynotes on Friday morning. That morning I also had the pleasure of meeting FOSSASIA founder Hong Phuc, who I had been exchanging emails with leading up to the event; it was very clear that she has continued to be very hands-on with the organization of the conference since its founding.

The theme of the conference this year centered around the Internet of Things, so the Friday morning keynotes drew from a diverse group of people and organizations. I was particularly impressed that they didn’t just call upon open source developers to give presentations. Keynotes came from folks working on hardware, design and fascinating programs that used IoT devices.

Highlights of the morning included a talk by Bunnie Huang, who made electronic, lighted badges for Burning Man that changed their light patterns based on how they “mated” with other badges to mix their blinkome (think genome). Talks continued with a really fun one from Bernard Leong of the Singapore Post, who explained how they’ve been experimenting with drones for small package delivery, particularly to remote areas, using Pulau Ubin as an example in the demonstration run.

I was then really delighted to hear about UNESCO’s YouthMobile program from Davide Storti and ITO Misako. YouthMobile is encouraging children to shift from being mere users of mobile devices to actually developing applications for them. I find this project to be particularly important as I know I wouldn’t be the technologist I am today without being able to fiddle with my early computers. We need to grow that next generation of tinkerers, but increasingly kids tend to only have access to mobile rather than the big old desktops that I grew up on. I believe projects aimed at inspiring the tinkerer in children on these new devices will grow in importance as we move into the future. It was also nice to hear that the project hasn’t just been creating all its own curriculum to accomplish its goals; it has been partnering with existing initiatives and programs. Kudos to them for doing it right.


Davide Storti and ITO Misako on YouthMobile

Cat Allman continued keynotes as she talked about the work Google has started to do in the Maker and Science space. Their work includes Google Summer of Code accepting more science-focused programs, support of Maker events and “road trips” with students to science museums. The final keynote came from Jan Nikolai Nelles who spoke on the The Other Nefertiti, where a team visited a German museum and created a not-strictly-authorized 3D rendering of a famous Nefertiti bust. It was a valuable thing unto itself, and interesting for raising awareness about how museums share data about artifacts, or don’t, as the case may be.

The conference continued as I went to a talk titled “Why are we (Still) wasting food? How technology can help,” which sounds interesting, but the presenter didn’t seem to understand his audience or what the conference was about. The talk was pretty much a sales pitch about the success of their product in saving food in restaurant and other industry kitchens. A noble effort, and it was fun to brainstorm how some of the components he talked about could be used in other open source projects. I visited their website during the talk and was perplexed to be unable to find a link to their source code. During the Q&A I specifically asked whether the software was actually open source. The presenter struggled to answer my question; he claimed that it was, but that he is not a developer, so he wasn’t sure which parts or where I could find it. He gave me his business card so I could send him an email about it after the conference. My email follow-up received this response:

“We are not using any open source code. Everything is developed in house.”

How disappointing! I’m not sure how their talk ended up at a Free and Open Source Software conference, though their selection of a non-technical presenter who couldn’t answer a simple question that strikes at the core of what the conference is about does hint at their obliviousness. I certainly didn’t appreciate being tricked into attending a sales talk about a suite of proprietary software. Thankfully, the conference improved after this.

I attended a talk by U-Zyn Chua about how he reverse engineered an API in a taxi app for his Singapore Taxi data project. His talk was fascinating for two reasons. First, he walked us through the work that had to be done to use an undocumented API. Second, the taxi data he collected was itself fascinating: high-traffic areas, times of day when taxis were busy. Plus, between this talk and the Singapore Post talk I learned a lot about the geography and population centers of Singapore.

Official Group Photograph - FOSSASIA 2016
Official Group Photograph – FOSSASIA 2016 by Michael Cannon

The conference continued the next day and I made sure I made time to attend Sayan Chowdhury’s “Dive deep into Fedora Infra” talk. Fedora was an early project on my open source infra list and it’s always exciting to chat with their engineers and swap stories about running infra in the open. Sayan’s talk gave an overview of several of the key services that they’ve developed and deployed, including projects like the Fedora Infrastructure Message Bus (fedmsg), which was also deployed by the open source infra team for the Debian project. Unfortunately I had to quickly depart from that talk in order to make it over to my own just after.

I gave a talk on “Code Review for DevOps” which I had a lot of fun modifying for the 20 minute slot and for a devops rather than systems administration audience. I put a firmer emphasis on the development of tooling in our team and was able to tighten up the presentation a lot to deliver a whirlwind tour of how we do almost everything through a code review system and with testing. Slides from the presentation are here (PDF).


Photo of my presentation by Dong (Vincent) Ma (source)

I mentioned that my talk was 20 minutes long, and that makes this a good time to pause and reflect on that format. Almost all the talks at this conference were 20-minute slots, which is about half the length I’m accustomed to. I really like this length. If a talk is not interesting, at least it’s short. If it is interesting, 20 minutes does actually give enough time for a good presentation. The schedule also allowed 10 minutes between sessions so that people could get to their next room. In reality, all this timing could have used a bit more policing. Q&As, and even talks themselves by speakers used to longer slots, frequently overflowed beyond their 20-minute window, often making it difficult to finish seeing one talk and still get to the next. For a volunteer-run event, they did a good job overall of sticking to at least the schedule of when talks started in each room, so if I planned accordingly I rarely missed the beginning of a talk in an alternate track because the schedule had drifted.

Saturday afternoon I spent some time going to lightning talks, including one about “Continuous Integration and Continuous Deployment (CI/CD) for Open Source and Free Software Development” by my colleague Dong Ma. With only 5 minutes, he was quickly able to contrast some of the features of the FOSSology open source CI/CD workflow with that of the model the OpenStack community has developed.


Dong Ma on open source CI/CD

I was then off to Sundeep Anand’s presentation, “Using Python Client to talk with Zanata Server.” Last autumn we launched translate.openstack.org running on Zanata and have been using the Java client along with a series of scripts to handle manipulation of the translations in the OpenStack project. It was interesting to learn about his strides with the Python client, which is making its way up to feature parity with the Java one. Since OpenStack itself is written in Python, switching to this Python client may make sense for us at some point, as it would make it easier for developers on our project to contribute to it. During his talk he also gave a demonstration of Zanata itself as he walked through the use of the client.

These talks were all very practical for me and applicable to my work, but that doesn’t mean I didn’t go off and have fun too. Later that afternoon I attended a talk on “A trip to Pluto with OpenSpace,” where the team developing OpenSpace took public images of the Pluto flyby and showed us how their software turns them into a fascinating, animated visualization. I also got to learn about the New Palmyra project, where people are getting together to create 3D models of famous monuments in Syria that have been or are at risk of being destroyed by the ongoing military conflict in the region. I enjoyed learning about the passion that everyone on that team brings to the project, and I have a lot of respect for and interest in their goals of preserving history.

On Sunday the first talk I attended was by François Cartegnie on the newest features of the popular, cross-platform VLC software project. As a user of multiple platforms (Linux and Android) it was nice to hear that with the 3.0 release they’re aiming to standardize on that release number, as the differing version numbers across platforms have been confusing. He also spent a great deal of time explaining the challenges they continually overcome to be the best player on the market, not just by supporting encoding standards, but also by coping with encodings that are poorly or improperly implemented. This can’t be an easy task. I was also interested to learn that the uPnP support has been revamped and should be working better these days.

My colleague and tourism buddy for the week Matthew Treinish spoke next, on “QA in the Open.” Drawing from his experience as the QA project lead for OpenStack for several cycles, he talked about the plugin-driven model that OpenStack QA has adopted. This model has helped individual projects take ownership of their testing requirements and has helped scale the very small core QA team, which now spans over a thousand repositories and dozens of projects that make up OpenStack.


Matthew Treinish on QA in the Open

Sunday afternoon had a talk that was one of the conference highlights for me: “Reproducible Builds – fulfilling the original promise of free software” by Chris Lamb. I had an interest in the topic before joining the session, but it was one of those talks where I was really pulled in and became even more interested. The idea on the surface seems pretty simple: you want to be able to exactly replicate builds over time and space. But there are a number of challenges when it comes to actually doing it, which he outlined:

  • Timestamps
  • Timezones and locales
  • Different versions of libraries
  • Non-deterministic file ordering
  • Users, groups, umask, environment variables
  • Random behavior (eg. hash ordering)
  • Build paths

Chris Lamb on Reproducible Builds

As soon as he enumerated these things it was obvious that they all would be problems, and still surprising that it would be so difficult. From this talk I learned about the reproducible-builds.org project which seeks to document and discuss these issues and find solutions for all of them. Additionally, Chris himself is a participant in Debian and he was able to share statistics about how most Debian packages are now being created in a way that adheres to the reproducible model. Very cool stuff, I hope to learn more about it.
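One concrete way to test this yourself is to build twice and compare checksums, pinning timestamps with the SOURCE_DATE_EPOCH environment variable that the reproducible-builds.org project standardized. A minimal sketch, with output.bin standing in for whatever your build produces:

  # Pin "now" to a deterministic value, e.g. the time of the last commit
  export SOURCE_DATE_EPOCH=$(git log -1 --format=%ct)
  # Build twice from clean and compare the artifacts bit for bit
  make clean && make && sha256sum output.bin > first.sum
  make clean && make && sha256sum output.bin > second.sum
  diff first.sum second.sum && echo "reproducible (on this machine, at least)"

Of course this only catches the timestamp class of problems on a single machine; varying the build path, locale, and user between the two runs shakes out more of the issues listed above.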

My afternoon continued with a talk about btrfs by Anand Jain. His focus was on the basics, then on upcoming features in development. The talk may have convinced me to start using it in a basic way on one of my systems soon, as the support for the core components is actually quite stable these days. I then went to an Asciidoc talk, where the presenter, George Goh, compiled his presentation from Asciidoc just before he began presenting, nicely done! He stressed the importance of documentation and making it easy to keep updated, with automated updates of references to things like figures that live inline in the text. He also explored the use of template systems in Asciidoc to easily export portions of your document to different projects and organizations while preserving the appropriate branding for each.

In what seemed much too soon, the conference conclusion came on Sunday evening. There were thanks and words from several of the organizers. Various attendees also spoke; my favorite words came from young (middle school, by my US rendering) students visiting from Saudi Arabia. Several had feared that the conference would be boring and too technical for the level they were at, but they expressed excitement about how much fun they had and how many presenters had succeeded in presenting topics that they could understand. It was thrilling to hear this from these students. I want the architects of our future to start young, be exposed to free and open source software, and be excited by the possibilities.

More of my photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157666299641355

Thanks to all the organizers and volunteers for putting this conference together. I had a wonderful time and hope to participate again in the future!

by pleia2 at April 23, 2016 05:50 PM

April 21, 2016

Jono Bacon

Dan Ariely on Building More Human Technology, Data, Artificial Intelligence, and More

Behavioral economics is an exciting skeleton on which to build human systems such as technology and communities.

One of the leading minds in behavioral economics is Dan Ariely, New York Times best-selling author of Predictably Irrational, The Upside Of Irrationality, and frequent TED speaker.

I recently interviewed Dan for my Forbes column to explore how behavioral economics is playing a role in technology, data, artificial intelligence, and preventing online abuse. Predictably, his insight was irrationally interesting. OK, that was a stretch.

Read the piece here

by Jono Bacon at April 21, 2016 08:59 PM

Nathan Haines

Ubuntu 16.04 LTS FAQ

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Ubuntu 16.04 LTS is here! Let's take a look at some of the most exciting features and common questions around this new operating system.

Ubuntu 16.04 LTS

  1. When does Ubuntu 16.04 LTS come out?

    • Ubuntu 16.04 LTS will reach general release on April 21st, 2016.
  2. I meant at what time will the release happen?

    • Ubuntu is actively being developed until the actual release happens, minus a small delay to help the mirrors propagate first. The release will be announced on the ubuntu-announce mailing list. (This page will not exist until the release.)
  3. What does "16.04 LTS" mean?

    • Ubuntu is released on a regular schedule every six months. The first release was in October 2004, and was named Ubuntu 4.10. For Ubuntu, the major version number is the year of release and the minor version number is the month of release. Ubuntu 16.04 is released on 2016-04-21, so the version number is 16.04.
    • Ubuntu releases are supported for 9 months, but many computing activities require stability. Every two years, an Ubuntu release is developed with long term support in mind. These releases, designated with "LTS" after the version number, are supported for 5 years on the server and desktop.
  4. What does "Xenial Xerus" mean?

    • Every version of Ubuntu has an alliterative development codename. After Ubuntu 6.06 LTS was released, the decision was made to choose new codenames in alphabetical order. Ubuntu 16.04 LTS is codenamed the Xenial Xerus release, or xenial for short.
    • "Xenial" is an adjective that means "friendly to others, especially foreigners, guests, or strangers." With lxd being perfect for "guest" containers, Snappy Ubuntu Core being perfect for IoT developers, snap packages being perfect for third-party software developers, and Ubuntu on Windows perfect for Windows developers who use Ubuntu in the cloud (or Ubuntu developers who are forced to use Windows at work!), xenial is a perfect description of Ubuntu 16.04!
    • "Xerus" is the genus name of the African ground squirrel. They collaborate and are not aggressive to other mammals, so they fit the description of xenial. It also makes for an adorable mascot!
  5. How long will Ubuntu 16.04 LTS be supported?

    • Ubuntu 16.04 LTS will be supported on desktops, servers, and in the cloud for 5 years, until April 2021. After this time, 16.04 LTS will enter end-of-life and no more security updates will be released.

Getting Ubuntu 16.04 LTS

  1. Where can I download Ubuntu 16.04 LTS?

    • Once released, Ubuntu 16.04 LTS will be available for download at http://www.ubuntu.com/download/. This URL will help you select the right architecture and will automatically link you to a mirror for the download. Please don't constantly refresh the direct download site!
  2. What if I find ubuntu-16.04-desktop-amd64.iso on an Ubuntu server before the official release is announced?

    • Then you've found a final release candidate that is being used to seed the mirrors before release. Downloading or linking to it will interfere with the mirrors and delay the release.
  3. What if I post a link to it anyway?

    • If you do it on /r/Ubuntu, your post or comment will be removed and you will be banned for a day. The release team works hard enough as it is!
  4. What if I want to help others get Ubuntu 16.04 LTS faster?

    • Thank you for your help! Consider using BitTorrent (Ubuntu comes with Transmission) and seeding the final release.
  5. What if I'm already running Ubuntu 14.04.4 LTS or Ubuntu 15.10?

    • Then you can simply upgrade to Ubuntu 16.04 using Software Updater.

Upgrading to Ubuntu 16.04 LTS

  1. Is upgrading to a new version of Ubuntu easy?

    • Yes, the upgrade process is supported and automated. However, you should always back up your files and data before upgrading Ubuntu. Actually, you should always keep recent backups even when you're not upgrading Ubuntu.
    • Ubuntu checks for software updates once a day, and Software Updater will inform you once a new version of Ubuntu is available. The upgrade will download a large amount of data--anywhere from 0.5 to 1.5 GB depending on the packages you have installed--and the upgrade process can take some time. Don't do any serious work on your computer during the upgrade process. Light web browsing or a simple game such as Aisleriot, Mahjongg, or Mines is safe.
  2. Should I upgrade to Ubuntu 16.04 LTS right away or wait?

    • It should be safe to upgrade immediately, and as long as you back up your home folder and have install media for your current version of Ubuntu in case you want to reinstall, there's very little risk involved.
  3. Is it better to wait until later?

    • Probably not, but waiting does have some benefits. Ubuntu 16.04 will receive newer release images with bug fixes about 3 months after its initial release. In addition, downloading updates can be much faster after release week. (Be sure to set up your Ubuntu mirror in Software & Updates!) Ubuntu 14.04 LTS is supported until April 2019 and Ubuntu 15.10 is supported until July 2016, so you have nothing to lose by waiting a couple weeks.
  4. I'm running Ubuntu 15.10. How do I upgrade to Ubuntu 16.04 LTS?

    • After Ubuntu 16.04 LTS is released, Software Updater will inform you that a new version of Ubuntu is available. Make sure that all available updates for Ubuntu 15.10 have been installed first, then click the "Upgrade..." button.
  5. I'm running Ubuntu 14.04.4 LTS. How do I upgrade to Ubuntu 16.04 LTS?

    • After Ubuntu 16.04.1 LTS is released in July 2016, Software Updater will inform you that a new version of Ubuntu is available. Make sure that all available updates for Ubuntu 14.04 LTS have been installed first, then click the "Upgrade..." button.
  6. I'm running Ubuntu 12.04 LTS. How do I upgrade to Ubuntu 16.04 LTS?

    • You can't upgrade directly to Ubuntu 16.04 LTS, so you have two options:
      • Use Update Manager to upgrade to Ubuntu 14.04 LTS, then reboot and use Software Updater to upgrade again to Ubuntu 16.04 LTS.
      • Back up your computer and install Ubuntu 16.04 LTS from scratch.
  7. What is Ubuntu 16.04.1 and why can't I update Ubuntu 14.04 LTS immediately?

    • A new version of Ubuntu is released every six months, but LTS releases are used for years. So Ubuntu offers "point releases" of LTS versions. Starting 3 months after the release and then every 6 months thereafter, new install images are created that include the latest updates to all of the default software. This allows new installations to run the latest software immediately and decreases the time it takes to download updates after a new install.
    • Because LTS users depend on stability, Ubuntu 14.04 LTS will not automatically offer an update to Ubuntu 16.04 LTS until the first point release. After three months, any show-stopper bugs should be solved and the upgrade process will have been tested by many others and improved if necessary.
  8. What if I want to upgrade right now?

    • Upgrading from Ubuntu 14.04 LTS to Ubuntu 16.04 LTS should be safe and easy. If you have a recent backup of your files and data, simply open Terminal and type update-manager -d. This will tell Ubuntu to upgrade to the next release early (see the short sketch after this list).
  9. What if I already ran update-manager -d and upgraded to a beta or pre-release version of Ubuntu 16.04 LTS?

    • If you run Software Updater after the release of Ubuntu 16.04 LTS, your version of xenial will be the same as the released version of Ubuntu.
  10. What if I don't believe that?

    • When xenial is being developed, it is constantly being improved. Milestones such as Alpha 1, Beta 2, and so on are simply points in time where developers can check progress. If you install Ubuntu from a Beta 2 image (for example), the moment you apply updates, you are no longer running Beta 2. Updates to xenial continue until release, when the Ubuntu archive is locked, images are spun, and the xenial archive is finalized and released as Ubuntu 16.04 LTS. After the release of Ubuntu 16.04 LTS, all further updates come from the xenial-updates and xenial-security repositories and the xenial repository remains unchanged. Updating from the Ubuntu repositories during and after the xenial development and release brings you along through these moments in time.
      • TRIVIA: As implied above, this means that Ubuntu 16.04 LTS doesn't exist until the Release Team names the final product. Until then, the release is simply Xenial Xerus or xenial for short.

Coming next:

Details on new features!

  • How do snap packages and deb packages work together?
  • DAE Unity 8?
  • Y U NO AMD fglrx drivers?
  • And other questions you ask in the /r/Ubuntu comments (https://redd.it/4frg4a)!

April 21, 2016 10:17 AM

April 16, 2016

Elizabeth Krumbach

Color an Ubuntu Xenial Xerus

Last cycle I reached out to artist and creator of Full Circle Magazine Ronnie Tucker to see if he’d create a coloring page of a werewolf for some upcoming events. He came through and we had a lot of fun with it (blog post here).

With the LTS release coming up, I reached out to him again.

He quickly turned my request around, and now we have a xerus to color!

Xerus coloring page
Click the image or here to download the full size version for printing.

Huge thanks to Ronnie for coming through with this. It’s shared with a CC-SA license, so I encourage people to print and share it at their release events and beyond!

While we’re on the topic of our African ground squirrel friend, thanks to Tom Macfarlane of the Canonical Design Team I was able to update the Animal SVGs section of the Official Artwork page on the Ubuntu wiki. For those of you who haven’t seen the mascot image, it’s a real treat.

Xerus official mascot

It’s a great accompaniment to your release party. Download the SVG version for printing from the wiki page or directly here.

by pleia2 at April 16, 2016 05:03 PM

April 14, 2016

Jono Bacon

Mycroft and Building a Future of Open Artificial Intelligence

Last year a new project hit Kickstarter called Mycroft that promises to build an artificial intelligence assistant. The campaign set out to raise $99,000 and raised just shy of $128,000.

Now, artificial intelligence assistants are nothing particularly new. There are talking phones and tablets such as Apple’s Siri and Google Now, and of course the talking trash can, the Amazon Echo. Mycroft is different though and I have been pretty supportive of the project, so much so that I serve as an advisor to the team. Let me tell you why.

Here is a recent build in action, demoed by Ryan Sipes, Mycroft CTO and all round nice chap:

Mycroft is interesting both for the product it is designed to be and the way the team are building it.

For the former, artificial intelligence assistants are going to be a prevalent part of our future. Where these devices will be judged though is in the sheer scope of the functions, information, and data they can interact with. They won’t be judged by what they can do, but instead what they can’t do.

This is where the latter piece, how Mycroft is being built, really interests me.

Firstly, Mycroft is open source in not just the software, but also the hardware and service it connects to. You can buy a Mycroft, open it up, and peek into every facet of what it is, how it works, and how information is shared and communicated. Now, for most consumers this might not be very interesting, but from a product development perspective it offers some distinctive benefits:

  • A community can be formed that can play a role in the future development and success of the product. This means that developers, data scientists, advocates, and more can play a part in Mycroft.
  • Capabilities can be crowdsourced to radically expand what Mycroft can do. In much the same way OpenStreetMap has been able to map the world, developers can scratch their own itch and create capabilities to extend Mycroft.
  • The technology can be integrated far beyond the white box sitting on your kitchen counter and into Operating Systems, devices, connected home units, and beyond.
  • The hardware can be iterated by people building support for Mycroft on additional boards. This could potentially lower costs for future units with the integration work reduced.
  • Improved security for users with a wider developer community wrapped around the project.
  • A partner ecosystem can be developed where companies can use and invest in the core Mycroft open source projects to reduce their costs and expand the technology.

There is though a far wider set of implications with Mycroft too. Much has been written about the concerns from people such as Elon Musk and Stephen Hawking about the risks of artificial intelligence, primarily if it is owned by a single company, or a small set of companies.

While I don’t think Skynet is taking over anytime soon, these concerns are valid, and they underline how important it is that artificial intelligence be open, not proprietary. I think Mycroft can play a credible role in building a core set of services around AI that are part of an open commons that companies can invest in. Think of this as the OpenStack of AI, if you will.

Hacking on Mycroft

So, I would be remiss if I didn’t share a few details of how the curious among you can get involved. Mycroft currently has three core projects:

  • The Adapt Intent Parser converts natural language into machine readable data structures.
  • Mimic takes in text and reads it out loud to create a high quality voice.
  • OpenSTT is aimed at creating an open source speech-to-text model that can be used by individuals and companies to allow for high accuracy, low-latency conversion of speech into text.

You can also find the various projects here on GitHub and find a thriving user and developer community here.

Mycroft are also participating in the IBM Watson AI XPRIZE where the goal is to create an artificial intelligence platform that interacts with people so naturally that when people speak to it they’ll be unable to tell if they’re talking to a machine or to a person. You can find out more about how Mycroft is participating here.

I know the team are very interested in attracting developers, docs writers, translators, advocates, and more to play a role across these different parts of the project. If this all sounds very exciting to you, be sure to get started by posting to the forum.

by Jono Bacon at April 14, 2016 05:01 AM

April 13, 2016

Jono Bacon

Going Large on Medium

I just wanted to share a quick note to let you know that I will be sharing future posts both on jonobacon.org and on my Medium site.

I would love to hear what kind of content you would find interesting for me to share. Feel free to share in the comments!

Thanks!

by Jono Bacon at April 13, 2016 07:19 PM

April 12, 2016

Jono Bacon

Upcoming Speaking at Interop and Abstractions

I just wanted to share a couple of upcoming speaking engagements going on:

  • Interop in Las Vegas – 5th May 2016 – I will be participating in the keynote panel at Interop this year. The panel is called How Open-Source Changes the IT Equation and I am looking forward to participating with Colin McNamara, Greg Ferro, and Sean Roberts.
  • Abstractions in Pittsburgh – 18-20 Aug 2016 – I will be delivering one of the headlining talks at Abstractions. This looks like an exciting new conference and my first time in Pittsburgh. Looking forward to getting out there!

Some more speaking gigs are in the works. More details soon.

by Jono Bacon at April 12, 2016 03:30 PM

April 10, 2016

Jono Bacon

Community Leadership Summit 2016

On 14th – 15th May 2016 in Austin, Texas the Community Leadership Summit 2016 will be taking place. For the 8th year now, community leaders and managers from a range of different industries, professions, and backgrounds will meet together to share ideas and best practice. See our incredible registered attendee list that is shaping up for this year’s event.

This year we also have many incredible keynotes that will cover topics such as building developer communities, tackling imposter syndrome, gamification, governance, and more. Of course CLS will incorporate the popular unconference format where the audience determine the sessions in the schedule.

We are also delighted to host the FLOSS Community Metrics event as part of CLS this year too!

The event is entirely free and everyone is welcome! CLS takes place the weekend before OSCON in the same venue in Austin. Be sure to go and register to join us and we hope to see you in Austin in May!

Many thanks to O’Reilly, Autodesk, and the Linux Foundation for their sponsorship of the event!

by Jono Bacon at April 10, 2016 09:35 PM

April 05, 2016

Akkana Peck

Modifying a git repo so you can pull without a password

There's been a discussion in the GIMP community about setting up git repos to host contributed assets like scripts, plug-ins and brushes, to replace the long-stagnant GIMP Plug-in Repository. One of the suggestions involves having lots of tiny git repos rather than one that holds all the assets.

That got me to thinking about one annoyance I always have when setting up a new git repository on GitHub: the repository is initially configured with an ssh URL, so I can push to it; but that means I can't pull from the repo without typing my ssh password (more accurately, the password to my ssh key).

Fortunately, there's a way to fix that: a git configuration can have one url for pulling source, and a different pushurl for pushing changes.

These are defined in the file .git/config inside each repository. So edit that file and take a look at the [remote "origin"] section.

For instance, in the GIMP source repositories, hosted on git.gnome.org, instead of the default of url = ssh://git.gnome.org/git/gimp I can set

pushurl = ssh://git.gnome.org/git/gimp
url = git://git.gnome.org/gimp
(disclaimer: I'm not sure this is still correct; my gnome git access stopped working -- I think it was during the Heartbleed security fire drill, or one of those -- and never got fixed.)

For GitHub the syntax is a little different. When I initially set up a repository, the url comes out something like url = git@github.com:username/reponame.git (sometimes the git@ part isn't included), and the password-free pull URL is something you can get from github's website. So you'll end up with something like this:

pushurl = git@github.com:username/reponame.git
url = https://github.com/username/reponame.git
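Instead of editing .git/config by hand, you can make the same split with git's own remote command; a quick sketch:

git remote set-url origin https://github.com/username/reponame.git
git remote set-url --push origin git@github.com:username/reponame.git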

Automating it

That's helpful, and I've made that change on all of my repos. But I just forked another repo on github, and as I went to edit .git/config I remembered what a pain this had been to do en masse on all my repos; and how it would be a much bigger pain to do it on a gazillion tiny GIMP asset repos if they end up going with that model and I ever want to help with the development. It's just the thing that should be scriptable.

However, the rules for what constitutes a valid git passwordless pull URL, and what constitutes a valid ssh writable URL, seem to encompass a lot of territory. So the quickie Python script I whipped up to modify .git/config doesn't claim to handle everything; it only handles the URLs I've encountered personally on Gnome and GitHub. Still, that should be useful if I ever have to add multiple repos at once. The script: repo-pullpush (yes, I know it's a terrible name) on GitHub.
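Until something more thorough exists, here's a rough shell sketch of the same idea for the common GitHub case (hypothetical directory layout; it assumes origin URLs in the git@github.com: form and git 2.7 or newer for remote get-url):

for d in ~/src/*/; do
    url=$(git -C "$d" remote get-url origin 2>/dev/null) || continue
    case $url in
        git@github.com:*)
            path=${url#git@github.com:}
            # Pull over password-free HTTPS, push over ssh.
            git -C "$d" remote set-url origin "https://github.com/$path"
            git -C "$d" remote set-url --push origin "git@github.com:$path"
            ;;
    esac
done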

April 05, 2016 06:28 PM

March 29, 2016

Elizabeth Krumbach

Tourist in Singapore

Time flies, and my recent trip to Singapore to speak at FOSSASIA snuck up on me. I wasn’t able to make time to research local attractions, and so I found myself there the day before the conference with only one thing on my agenda: the Night Safari. MJ told me about it years ago when he visited Singapore, and how he thought I’d enjoy it given my love for animals and zoos.

I flew Singapore Air, frequently ranked the best airline in the world, and for good reason. Even in coach, the service is top notch and the food is edible, sometimes even good. My itinerary took me through Seoul on the way out, which felt like the long way of doing things, but my layover was short and I had a contiguous flight number, so passengers were mostly just shuffled through security and loaded onto the next plane. I seem to have cashed in all my travel karma this trip and ended up with an entire center row to myself, which meant I could lie down and get some sleep during the flights even though I was in coach. Heavenly! I arrived in Singapore at the bright and early time of 2AM and caught a taxi to my hotel. Thankfully I was able to get some sleep there too, so I was ready for my jet lag adjustment day on Wednesday.

In the morning I met up with a colleague who was also in town for the conference. With neither of us having plans, I dragged him along with me as we bought tickets for the Night Safari that evening, including transport from a tour company that included priority boarding inside the park once we arrived. And then on to a touristy hop on/hop off bus to give us an overview of the city.


On the tourist bus!

The first thing I’ll say about Singapore: It’s hot and humid. I’m not built for this kind of weather. As much as I enjoyed my adventures, it was a struggle each day to keep up with my “I went to school in Georgia, this is fine!” colleague and to stay hydrated.

Then there’s their love for greenery. As a city-state there is a prevalence of what they refer to as the “concrete jungle” but they also seem keen on striking a balance. Many buildings have green gardens, and even full trees, on various balconies and roofs of their tall buildings. Even throughout areas of the city you could find larger green spaces than I’m accustomed to seeing, and bigger trees that they’ve clearly made an effort to ensure could still thrive. It was nice to see in a city.

The tourist bus took us through the heart of downtown where we were staying, then down to Chinatown, where we saw the Sri Mariamman Temple (which is actually a Hindu temple). The financial and business districts were next, and then we decided to leave the bus for a time when we got to the Gardens by the Bay. This was a huge complex. There were several outdoor gardens with various themes surrounding the main area, which has a couple of indoor complexes as well as the outdoor tree-like structures that loom large. I got some great pictures of them.

We decided to go into the Cloud Forest, seeing as we were in town to speak about our work on cloud software. I was worried it would be even hotter inside, but it was amusing to discover that it was actually cooler, quite the welcome break for me. The massive dome structure enclosed what I would compare to the rain forest dome inside the California Academy of Sciences building in San Francisco, but much bigger and with a strong focus on flora rather than fauna. You enter the building at ground level and take the elevator to the top to walk down several stories through exhibits showing plant life of all sorts. It made for some nice views of the whole complex, and outside too.

After the dome, it was back out in the heat. We walked through some of the outdoor gardens before hopping on the tourist bus again. We took it through the Indian neighborhood, where we saw the Sri Veeramakaliamman Temple, and the Arab section, which included getting to see the beautiful Masjid Sultan (mosque), near where we had dinner later in the week at an Indian place that advertised being Halal.

By the time the bus got back to the stop near our hotel it was time for me to take a break before the Night Safari. We were picked up at 6PM in a van that brought us to the bus that took us up to the part of the island where the Night Safari is. The tour guide gave an interesting take on history and the social benefits of living in Singapore on our journey up. It did make me reflect upon the fact that while there was traffic, the congestion was nothing like I’d expect for a city of Singapore’s size. I hadn’t yet experienced the public transit, but as I’d learn later in my trip it was quite good for the southern parts of the island.

The Night Safari! First impression: Tourist trap. But it got better. Once you make your way past the crowds, shops and food places, and beyond the goofy welcome show that has various animals doing tricks, things get better. The adventure begins on a tram through the park. With the tour we didn’t have to wait in line, which, combined with the bus ride there, made it worth the extra fee. The tram takes you through various habitats from around the world where nocturnal animals dwell. Big cats, various types of deer, wolves and hippos were among the star attractions. I was delighted to finally get to see some tahrs, which the last Ubuntu LTS release was named after.

After the tram tour I was feeling pretty tired, heat and jet lag hitting me hard. But I decided to go on a couple of the walking trails anyway. It was worth it. The walking trails are by far the best part of the park! More animals, and you can take as much time as you want to see them. Exhaustion started hitting me when we completed half the trails, but I got to see fishing cats, otters, bats, a sleeping pangolin (another Ubuntu LTS animal!) and my favorite of the night, the binturong, otherwise known as a bear cat. I didn’t take any pictures of the animals, because night safari. By the end of our walking I was pretty tired and just wanted to get back to my bed, so we forwent the tour bus back to the hotel and just got a taxi.

Thursday evening the first conference events kicked off with a hot pot dinner, but prior to that we had more time for touristing. During our city tour the day before I saw the Mint – Museum of Toys. Casting away thoughts of Toy Story 2’s plot line of being sold to a Japanese toy museum, I was delighted to visit an actual toy museum. Sadly, their floor on Space and Sci-Fi toys was closed, but the rest of the museum mostly made up for it. The open parts of the museum had 5 floors of toy displays spanning about one hundred years. Most of the toys were cartoon-related, with Popeye, super heroes, various popular Anime and Disney characters all making a respectable showing. Some of the toys packed into displays had surprisingly high appraisals attached to them, and there were notes here and there about their rarity. I had a lot of fun!

After toys, we decided to find lunch. It turns out that a number of places aren’t open for lunch, so we wandered around for a bit until around noon when we found ourselves in the Raffles Hotel courtyard in front of a menu that looked lovely for lunch. It was outdoors, so no escaping the heat, but the shade made things a bit more tolerable. It didn’t take long for us to eye the list of Slings on the cocktail menu and learn via a Google search that we were sitting where Singapore Slings were invented. How cool! Hydration took a back seat; I had to have a Singapore Sling where they were invented.

After lunch we continued our walk to make our way to the newly opened National Gallery. I had actually read about this one incidentally before arriving in Singapore, as it just opened in November and the opening was briefly covered in a travel magazine I read. This new gallery is housed in the historical former Supreme Court and City Hall buildings, and they didn’t do anything to hide this. Particularly in the Supreme Court building, it was very obvious that it was a courthouse, with much of what looked like original benches throughout and rooms that still looked like court rooms with big wooden chairs and (jury?) boxes. In all, they were amazing buildings. The contents within made it that much better; these were some of the most impressive galleries I’ve ever had the pleasure of walking through. Art spanned centuries and styles of southern Asian talent, as well as art from colonials. I do admit enjoying the older, more realistic art rather than the modern and abstract, but there was something for everyone there. I’ll definitely go again the next time I’m in Singapore.

The National Gallery visit concluded my tourist adventures. That evening we met up with fellow FOSSASIA speakers at a hot pot restaurant not far from our hotel. It was my first time having hot pot; collecting raw meats, vegetables and fish from a buffet and dumping them in various boiling pots with seasonings was an experience I’m glad I had, but the weather got me there too. Sitting over a boiling pot in the evening heat and humidity certainly took its toll on me. Later in the week I had the opposite culinary adventure when I ended up at Swensen’s, an ice cream chain that started in San Francisco. I’d never been to the one in San Francisco, but apparently they’ve been a big hit in south Asia. It was fascinating to be in a San Francisco-themed restaurant and order a Golden Gate Bridge sundae while sitting halfway around the world from my city by the bay. Maybe I should visit the one in San Francisco now.

More photos from my tour around Singapore here: https://www.flickr.com/photos/pleia2/albums/72157666098884052

Two days isn’t nearly enough in Singapore. Even though I don’t shop (and shopping is BIG there!) I only got a small taste of what the city had to offer.

Next stop was on to the conference at the Singapore Science Centre, which was quite the inspired venue selection for an open source conference, especially one that attracted a number of younger attendees, but that’s a story for another day.

by pleia2 at March 29, 2016 02:31 AM

March 27, 2016

Elizabeth Krumbach

Wine and dine in Napa Valley

In 2008, when I was visiting MJ on my first trip to San Francisco, we had plans to go up to Napa Valley. Given the distance and crowds, the driver MJ hired for the day made an alternate suggestion: “How about Sonoma Valley instead?” That day was the beginning of us being Sonoma Valley fans. Tastings weren’t over-crowded, the wine was excellent, and at the time traffic was tolerable even coming back to the city. We visited a winery with a wine cave, where we’d get engaged three years later. Last year we joined a wine club, sealing our fate to visit regularly.

We never did make it to Napa, until a couple weeks ago.

For MJ’s birthday last year I promised him a meal at the most coveted restaurant in California, The French Laundry. I worked with a concierge to complete the herculean effort to get reservations, and then rescheduled a couple of times to work around our shifting travel schedules. Finally they were firmed up for Sunday, March 13th. The timing worked out, with all our travel lately we hadn’t seen much of each other, so it was a nice excuse to get out of town and spend the weekend together. We drove up Friday night and checked into the Harvest Inn, catching a late dinner at the lovely restaurant there, Harvest Table.


Dinner at Harvest Table

Saturday morning we began our wine trail. We didn’t have a lot of time to plan this trip, so we depended upon the recommendations of my recent house guest, George Mulak (and remotely, his wife Vicki), who supplied us with a list of their favorites. Their recommendations were spot on. Our first stop was Heitz Cellar, which was conveniently almost across the street from where we were staying. They have a relatively small tasting area, and sadly when we arrived the skies had opened up to give us piles of rain, so there was no enjoying the grounds. They did have a couple things I really liked though. The first was a bit of a surprise: I don’t typically care for Zinfandels, but we bought a bottle of theirs, it was very good. Two bottles of their port also came home with us. Next on our list was one of several Rutherford <Noun> wineries, and we ended up at the wrong one, in what was a lovely mistake. We found ourselves at Rutherford Hill, a famous winery known for their Merlots, and I love Merlot. They also had wine caves and did tours! On the rainy day that it ended up being, a wine cave tour was a fantastic shelter from the weather. Our bartender and tour guide was super friendly and inviting, and there’s a reason they are world-renowned: their wines are wonderful. We even joined their club.


Drinking wine in the Rutherford Hills wine caves

For lunch we went to Rutherford Grill, which we quickly noticed looked a lot like one of our Silicon Valley favorites, Los Altos Grill, and San Francisco haunt Hillstone. Turns out they’re all related. The familiarity was a welcome surprise, and an enjoyable lunch.

Wine adventures continued in the same parking lot as the grill when we made our way across to Beaulieu Vineyard (BV). I think planning ahead would have served us better here; we just did the basic tasting, which was pretty run of the mill. A day with better weather and a planned historic wine tour would have been a better experience, maybe next time. From there we made our final stop of the day back near our hotel at Franciscan Estate Winery. We had a lovely time chatting with the Philadelphia native pouring our wines and did a couple flights covering their range of types and qualities. A fine way to round out our afternoon. We picked up some snacks and water (time to hydrate!) at the lovely Dean & DeLuca shop (purveyors of fine food) and went back to the hotel to spend some time relaxing before dinner.


Final tasting of the day at Franciscan

In preparation for our exciting French Laundry reservation the following day, we booked late (9:45PM) dinner reservations at a related restaurant, Bouchon. Another French restaurant by Thomas Keller, the meal was delicious and the atmosphere was both fancy and casual, a lovely mix of how at home a really nice Napa Valley restaurant can make you feel. Highly recommended, and quite a bit easier to get reservations at than The French Laundry, though I still did need to plan a couple weeks ahead.


Appetizers at Bouchon

Sunday morning concluded our stay at the Harvest Inn. In spite of the rainy weekend, I did get to enjoy walking through their grounds a bit and appreciated the spacious room we had and the real wood fireplace. The location was great too, giving us a nice home base for the loop of wineries we visited. We’d stay here again. Check out was quick and then we were dressed up and on our way to the gem of our Napa adventure: Tasting menu lunch at The French Laundry!

In case I haven’t drilled this home enough, The French Laundry was named the Best Restaurant in the World multiple times. Even when it’s not at the top, pretty much any top 10 list for the past decade will have it listed as well. Going here was a really big, once-in-a-lifetime kind of deal.

The rainy weekend continued as we were seated downstairs and settled in with a glass of champagne to start our meal. A half bottle of red wine later joined us mid-meal. What struck me first about the meal there was the environment. French restaurants I’ve been to are either very modern or very stuffy, neither of which I’m a huge fan of. The French Laundry was a lovely mix of the two, much like Bouchon of the previous night, it seemed to reflect its home in Napa Valley. The restaurant was truly laundry themed in a very classy way, with a clothes pin as their logo and the lamps on the walls tastefully boasting clothes laundry symbols. The staff was professional, charming and witty. The food was spectacular, quickly making it into one of the top three meals I’ve ever had. The meal took about three hours, with small plates coming at a nice pace to keep us satisfied but also relaxed so we could enjoy the time there. I was definitely full at the end, especially after the stream of beautiful and delicious desserts that filled our table at the end. At the conclusion of the meal we were given a copy of the menu and gifted the wooden clothes pins that were at our table upon arrival. In all, it was an exceptional experience.


Meal at The French Laundry

With some time on our hands following our long lunch at The French Laundry we decided to add one more winery to our itinerary before driving home, Hagafen Cellars. Their wines are Kosher, even for Passover, which makes them great for us during that no-bread time and a star at the White House during major Jewish and Israeli-focused events. Best of all, their wines are wonderful. Having not grown up Jewish, I was not aware of the disappointment found with the standard Manischewitz wine until a couple years ago, so it was refreshing to learn we have other options during Passover! We were pretty close to joining their wine club, but in the end preferred making our own selections, and with a trunk full of wine we figured we’d had enough for now.


Final stop, Hagafen Cellars

With that, our fairy tale weekend together in Napa Valley came to a close. MJ flew out to Seattle that night for work. My trip to Singapore had me leaving the next morning.

More photos from our weekend in Napa Valley here: https://www.flickr.com/photos/pleia2/albums/72157665313725990

by pleia2 at March 27, 2016 06:26 PM

March 26, 2016

Akkana Peck

Debian: Holding packages you build from source, and rebuilding them easily

Recently I wrote about building the Debian hexchat package to correct a key binding bug.

I built my own version of the hexchat packages, then installed the ones I needed:

dpkg -i hexchat_2.10.2-1_i386.deb hexchat-common_2.10.2-1_all.deb hexchat-python_2.10.2-1_i386.deb hexchat-perl_2.10.2-1_i386.deb

That's fine, but of course, a few days later Debian had an update to the hexchat package that wiped out my changes.

The solution to that is to hold the packages so they won't be overwritten on the next apt-get upgrade:

aptitude hold hexchat hexchat-common hexchat-perl hexchat-python

If you forget which packages you've held, you can find out with aptitude:

aptitude search '~ahold'
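If you use plain apt rather than aptitude, apt-mark offers an equivalent hold (note that aptitude and apt-mark/dpkg track holds separately, so it's best to pick one tool and stick with it):

apt-mark hold hexchat hexchat-common hexchat-perl hexchat-python
apt-mark showhold    # list packages currently held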

Simplifying the rebuilding process

But now I wanted an easier way to build the package. I didn't want to have to search for my old blog post and paste the lines one by one every time there was an update -- then I'd get lazy and never update the package, and I'd never get security fixes.

I solved that with a zsh function:

newhexchat() {
    # Can't set errreturn yet, because that will cause mv and rm
    # (even with -f) to exit if there's nothing to remove.
    cd ~/outsrc/hexchat
    echo "Removing what was in old previously"
    rm -rf old
    echo "Moving everything here to old/"
    mkdir old
    mv *.* old/

    # Make sure this exits on errors from here on!
    setopt localoptions errreturn

    echo "Getting source ..."
    apt-get source hexchat
    cd hexchat-2*
    echo "Patching ..."
    patch -p0 < ~/outsrc/hexchat-2.10.2.patch
    echo "Building ..."
    debuild -b -uc -us
    echo
    echo 'Installing' ../hexchat{,-python,-perl}_2*.deb
    sudo dpkg -i ../hexchat{,-python,-perl}_2*.deb
}

Now I can type newhexchat and pull a new version of the source, build it, and install the new packages.

How do you know if you need to rebuild?

One more thing. How can I find out when there's a new version of hexchat, so I know I need to build new source in case there's a security fix?

One way is the Debian Package Tracking System. You can subscribe to a package and get emails when a new version is released. There's supposed to be a package tracker web interface, e.g. package tracker: hexchat with a form you can fill out to subscribe to updates -- but for some packages, including hexchat, there's no form. Clicking on the link for the new package tracker goes to a similar page that also doesn't have a form.

So I guess the only option is to subscribe by email. Send mail to pts@qa.debian.org containing this line:

subscribe hexchat [your-email-address]
You'll get a reply asking for confirmation.

This may turn out to generate too much mail: I've only just subscribed, so I don't know yet. There are supposedly keywords you can use to limit the subscription, such as upload-binary and upload-source, but the instructions aren't at all clear on how to include them in your subscription mail -- you say keyword, or keyword your-email, so where do you put the actual keywords you want to accept? They offer no examples.

Use apt to check whether your version is current

If you can't get the email interface to work or suspect it'll be too much email, you can use apt to check whether the current version in the repository is higher than the one you're running:

apt-cache policy hexchat

You might want to automate that, to make it easy to check on every package you've held to see if there's a new version. Here's a little shell function to do that:

# Check on status of all held packages:
check_holds() {
    for pkg in $( aptitude search '~ahold' | awk '{print $2}' ); do
        policy=$(apt-cache policy $pkg)
        # Quote $policy so its newlines survive: zsh doesn't word-split
        # unquoted variables, but bash does, and quoting works in both.
        installed=$(echo "$policy" | grep Installed: | awk '{print $2}')
        candidate=$(echo "$policy" | grep Candidate: | awk '{print $2}')
        if [[ "$installed" == "$candidate" ]]; then
            echo $pkg : nothing new
        else
            echo $pkg : new version $candidate available
        fi
    done
}

March 26, 2016 05:11 PM

Elizabeth Krumbach

Six years in San Francisco

February 2016 marked six years of me living here in San Francisco. It’s hard to believe that much time has passed, but at the same time I feel so at home in my latest adopted city. I sometimes find myself struggling to remember what it was like to live in the suburbs, drive every day, and not be able to just walk to the dentist or take in the beautiful sights along the Embarcadero as I go for a run. I’ve grown accustomed to the weather and seasons (or lack thereof), and barely think twice when making plans. Of course the weather will be beautiful!

I love you, California, I adore spending my time on The Dock of the Bay.

Our travel schedules this year have been a bit crazy though. I just returned from my second overseas conference of the year on Monday, and MJ has been spending almost half his time traveling for work. We’ve tried to plan things so that we’re not out of town at the same time, but we haven’t always succeeded. Being out of town at the same time is great for the cats and our need for a pet sitter, but it’s less great for getting time together. We ended up celebrating Valentine’s Day a day early, on February 13th, to work around these schedules and MJ’s plan to leave for a trip on Sunday.

It was a fabulous Valentine’s Day dinner though. We went to Jardinière over in Hayes Valley and both ordered the tasting menu, and I went with the wine pairing since I didn’t have a flight to catch the next day. Everything was exceptional, from the sea urchin to the beautifully prepared, marbled steak that melts in your mouth. I hope we can make it back at some point.

With MJ out of town I’ve had to fight the temptation to slip into workaholic mode. I definitely have a lot of work to do, especially as my for-real-this-time book deadline approaches. But I’ve grown appreciative of the need to take a break, and how it untangles the mind to be fresh again the next day and more effective at solving problems. On Presidents’ Day I treated myself to an afternoon at the zoo.

More photos from the zoo here: https://www.flickr.com/photos/pleia2/albums/72157662402671763

I’ve also gotten to make time to spend with friends here and there, recently making it out to the cinema with a friend to see the Oscar Nominated Animation Shorts. I grew to appreciate these shorts years ago after learning my beloved Wallace & Gromit films had been nominated and won in the past, but it had been some time since I’d gone to a theater to enjoy them.

While MJ has been in town, I’ve reflected on my six years here in the city and realized there were still things I’ve wanted to do in the area but haven’t had the opportunity to, so I’ve been slowly checking them off my list. Even small changes to accommodate new things have been worth it. One afternoon we took a slight detour from going to the Beach Chalet and instead went downstairs to the Park Chalet where we had never been before.


High Tide Hefeweizen at the Park Chalet

While on the topic of food, we also finally made it over to Zachary’s Chicago Pizza over in Oakland, near the Rockridge BART station. I’m definitely a New York pizza girl, but I hear so many good things about Zachary’s every time I moan about the state of California pizza. We went around 2:30 on a Saturday afternoon and were seated immediately. Eating there is a bit of an event: you order and wait a half hour for your giant wall of deep dish pizza to cook. I had the Spinach & Mushroom. The toppings and cheese are buried inside the pizza, with the sauce covering the top. It was really good, even if I could barely finish two pieces (leftovers!).

After Zachary’s I had planned to take BART up to downtown Berkeley to hit up a comic book store, since the one I used to go to here in San Francisco has closed due to increasing rent. I was delighted to learn that there was a comic book store within walking distance of where we already were. That’s how I was introduced to The Escapist in Berkeley, just over the Oakland/Berkeley border. I picked up most of the backlog of comics I was looking for, and then hit up Dark Carnival next door, a great Sci-Fi and Fantasy book store that I’d been to in the past. I’ll be returning to both stores in the near future.

And now it’s time to take an aforementioned break. Saturday off, here I come!

by pleia2 at March 26, 2016 02:32 AM

March 25, 2016

Jono Bacon

Suggestions for Donating a Speaker fee

In August I am speaking at Abstractions and the conference organizers very kindly offered to provide a speaker fee.

Thing is, I have a job and so I don’t need the fee as much as some other folks in the world. As such, I would like to donate the speaker fee to an open source / free software / social good organization and would love suggestions in the comments.

I probably won’t donate to the Free Software Foundation, EFF, or Software Freedom Conservancy as I have already financially contributed to them this year.

Let me know your suggestions in the comments!

by Jono Bacon at March 25, 2016 04:30 PM

March 17, 2016

Akkana Peck

Changing X brightness and gamma with xrandr

I switched a few weeks ago from unstable ("Sid") to testing ("Stretch") in the hope that my system, particularly X, would break less often. The very next day, I updated and discovered I couldn't use my system at night any more, because the program I use to reduce the screen brightness by tweaking X gamma no longer worked. Neither did other related programs, such as xgamma and xcalib.

The Dell monitor I use doesn't have reasonable hardware brightness controls: strangely, the brightness button works when the monitor is connected over VGA, but if I want to use the sharper HDMI connection, brightness adjustment no longer works. So I depend on software brightness adjustment in order to use my computer at night when the room is dim.

Fortunately, it turns out there's a workaround. xrandr has options for both brightness and gamma:

xrandr --output HDMI1 --brightness .5
xrandr --output HDMI1 --gamma .5:.5:.5

I've always put xbrightness on a key, so I can use a function key to adjust brightness interactively up and down according to conditions. So a command that sets brightness to .5 or .8 isn't what I need; I need to get the current brightness and set it a little brighter or a little dimmer. xrandr doesn't offer that, so I needed to script it.

You can get the current brightness with

xrandr --verbose | grep -i brightness

But I was hoping there would be a more straightforward way to get brightness from a program. I looked into Python bindings for xrandr; there are some, but with no documentation and no examples. After an hour of fiddling around, I concluded that I could waste the rest of the day poring through the source code and trying things hoping something would work; or I could spend fifteen minutes using subprocess.call() to wrap the command-line xrandr.

So subprocesses it was. It made for a nice short script, much simpler than the old xbrightness C program that used <X11/extensions/xf86vmode.h> and XF86VidModeGetGammaRampSize(): xbright on github.
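For anyone who'd rather stay in shell, here's a minimal sketch of the relative-adjustment idea (assuming a single output named HDMI1, and bc for the floating-point math):

brightdelta() {
    out=HDMI1    # change to match your xrandr output name
    # Note: -m1 grabs the first Brightness line, so with multiple
    # outputs you'd want to grep for the right one instead.
    cur=$(xrandr --verbose | grep -i -m1 brightness | awk '{print $2}')
    xrandr --output $out --brightness $(echo "$cur + $1" | bc -l)
}
# usage: brightdelta 0.1 to brighten, brightdelta -0.1 to dim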

March 17, 2016 05:01 PM

March 09, 2016

Akkana Peck

Juniper allergy season

It's spring, and that means it's the windy season in New Mexico -- and juniper allergy season.

When we were house-hunting here, talking to our realtor about things like local weather, she mentioned that spring tended to be windy and a lot of people got allergic. I shrugged it off -- oh, sure, people get allergic in spring in California too. Little did I know.

A month or two after we moved, I experienced the worst allergies of my life. (Just to be clear, by allergies I mean hay fever, sneezing, itchy eyes ... not anaphylaxis or anything life threatening, just misery and a morbid fear of ever opening a window no matter how nice the temperature outside might be.)

[Female (left) and male junipers in spring]
I was out checking the mail one morning, sneezing nonstop, when a couple of locals passed by on their morning walk. I introduced myself and we chatted a bit. They noticed my sneezing. "It's the junipers," they explained. "See how a lot of them are orange now? Those are the males, and that's the pollen."

I had read that juniper plants were either male or female, unlike most plants which have both male and female parts on every plant. I had never thought of junipers as something that could cause allergies -- they're a common ornamental plant in California, and also commonly encountered on trails throughout the southwest -- nor had I noticed the recent color change of half the junipers in our neighborhood.

But once it's pointed out, the color difference is striking. These two trees, growing right next to each other, are the same color most of the year, and it's hard to tell which is male and which is female. But in spring, suddenly one turns orange while the other remains its usual bright green. (The other season when it's easy to tell the difference is late fall, when the female will be covered with berries.)

Close up, the difference is even more striking. The male is dense with tiny orange pollen-laden cones.

[Female juniper closeup] [male juniper closeup showing pollen cones]

A few weeks after learning the source of my allergies, I happened to be looking out the window on a typically windy spring day when I saw an alarming sight -- it looked like the yard was on fire! There were dense clouds of smoke billowing up out of the trees. I grabbed binoculars and discovered that what looked like fire smoke was actually clouds of pollen blowing from a few junipers. Since then I've gotten used to seeing juniper "smoke" blowing through the canyons on windy spring days. Touching a juniper that's ready to go will produce similar clouds.

The good news is that there are treatments for juniper allergies. Flonase helps a lot, and a lot of people have told me that allergy shots are effective. My first spring here was a bit miserable, but I'm doing much better now, and can appreciate the fascinating biology of junipers and the amazing spectacle of the smoking junipers (not to mention the nice spring temperatures) without having to hide inside with the windows shut.

March 09, 2016 03:20 AM

March 08, 2016

kdub

Mir and Vulkan Demo

This week the Mir team got a Vulkan demo working on Mir! (youtube link to demo)

I’ve been working on replumbing Mir’s internals a bit to give more fine-grained control over buffers, and my tech lead Cemil has been working on hooking that API into the Vulkan/Mir WSI.

The tl;dr on Vulkan is that it’s a recently finalized hardware-accelerated graphics API from Khronos (who also provide the OpenGL APIs). It doesn’t supplant OpenGL, but can give better performance (especially in multithreaded environments) and better debugging, in exchange for more explicit control of the GPU.

Some links:

  • Khronos Vulkan page
  • Wikipedia Vulkan entry
  • short video from Intel at SIGGRAPH with a quick explanation
  • longer video from NVIDIA at SIGGRAPH on Vulkan

If you’re wondering when this will appear in a repository near you, probably right after the Ubuntu Y series opens up (we’re in a feature freeze for xenial/16.04 LTS at the moment).

by kdub at March 08, 2016 07:31 PM

March 06, 2016

Elizabeth Krumbach

Xubuntu 16.04 ISO testing tips

As we get closer to the 16.04 LTS release, it’s becoming increasingly important for people to be testing the daily ISOs to catch any problems. This past week saw the landing of GNOME Software to replace the Ubuntu Software Center, and this will definitely need folks looking at it and reporting bugs (current ones are tracked here: https://bugs.launchpad.net/ubuntu/+source/gnome-software).

In light of this, I thought I’d quickly share a few of my own tips and stumbling points. My focus is typically on Xubuntu testing, but things I talk about are applicable to Ubuntu too.


ISO testing on a rainy day

1. Downloading the ISO

Downloading an ISO every day, or even once a week, can be tedious. Helpfully, the team provides the images via zsync, which will only download the differences in the ISO between days, saving you a lot of time and bandwidth. Always use this option when you’re downloading ISOs; you can even use it the first time you download one, as it will notice that none exists.

The zsync URL is right alongside all the others when you choose “Link to the download information” in the ISO tracker:

You then use a terminal to cd into the directory where you want the ISO to be (or where it already is) and copy the zsync line into the terminal and hit enter. It will begin by examining the current ISO and then give you a progress bar for what it needs to download.
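For example, fetching the Xubuntu daily image might look like this (the exact .zsync URL varies by flavor and release, so treat this one as illustrative):

cd ~/isos
zsync http://cdimage.ubuntu.com/xubuntu/daily-live/current/xenial-desktop-amd64.iso.zsync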

2. Putting the image on a USB stick

I have struggled with this for several releases. At first I was using UNetbootin (unetbootin), then usb-creator (usb-creator-gtk). Then I’d switch off between the two per release when one or the other wasn’t behaving properly. What a mess! How can we expect people to test if they can’t even get the ISO on a USB stick with simple instructions?

The other day flocculant, the Xubuntu QA Lead, clued me into using GNOME Disks to put an ISO on a USB stick for testing. You pop in the USB stick, launch gnome-disks (you’ll need to install the gnome-disk-utility package in Xubuntu), select your USB stick in the list on the left and choose the “Restore Disk Image…” option in the top right to select the ISO image you want to use:

I thought about doing a quick screencast of it, but Paul W. Frields over at Fedora Magazine beat me to it by more than a year: How to make a Live USB stick using GNOME Disks

This has worked beautifully with both the Xubuntu and Ubuntu ISOs.
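If you’re comfortable in a terminal, dd is another option, though it’s unforgiving, so double-check the device name first; a hedged sketch (/dev/sdX is a placeholder):

lsblk    # identify your USB stick, e.g. /dev/sdX
sudo dd if=xenial-desktop-amd64.iso of=/dev/sdX bs=4M status=progress
sync     # make sure all writes finish before unplugging
# (drop status=progress on coreutils older than 8.24)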

3. Reporting bugs

The ISO tracker, where you report testing results, is easy enough to log into, but a fair number of people quit the testing process when it gets to actually reporting bugs. How do I report bugs? What package do I report them against? What if I do it wrong?

I’ve been doing ISO testing for several years, and have even run multiple events with a focus on ISO testing, and STILL struggle with this.

How did I get over it?

First, I know it’s a really long page, but this will get you familiar with the basics of reporting a bug using the ubuntu-bug tool: Ubuntu ReportingBugs
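The gist: ubuntu-bug (part of Apport) gathers the relevant system information and opens a Launchpad report against whatever package you name, for example:

ubuntu-bug gnome-software    # file a bug against GNOME Software
ubuntu-bug xorg              # a common choice for display problems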

Oftentimes being familiar with the basic tooling isn’t enough. It’s pretty common to run into a bug that’s manifesting in the desktop environment rather than in a specific application. A wallpaper is gone, a theme looks wrong, you’re struggling to log in. Where do those get submitted? And is this bad enough for me to classify it as “Critical” in the ISO Tracker? This is when I ask. For Xubuntu I ask in #xubuntu-devel and for Ubuntu I ask in #ubuntu-quality. Note: people don’t hover over their keyboards on IRC, so explain what you’re doing, ask your question and be patient.

This isn’t just for bugs; we want to see more people testing, and it’s great when new testers come into our IRC channels to share their experiences and where they’re getting stuck. You’re part of our community :)


Simcoe thinks USB sticks are cat toys

Resources

I hope you’ll join us.

by pleia2 at March 06, 2016 05:54 PM

March 04, 2016

Akkana Peck

Recipe: Easy beef (or whatever) jerky

You don't need a special smoker or dehydrator to make great beef jerky.

Winter is the time to make beef jerky -- hopefully enough to last all summer, because in summer we try to avoid using the oven, cooking everything outside so as not to heat up the house. In winter, having the oven on for five hours is a good thing.

It took some tuning to get the flavor and the amount of saltiness right, but I'm happy with my recipe now.

Beef jerky

Ingredients

  • thinly sliced beef or pork: about a pound or two
  • 1-1/2 cups water
  • 1/4 cup soy sauce
  • 3/4 tbsp salt
  • Any additional seasonings you desire: pepper, chile powder, sage, ginger, sugar, etc.

Directions

Heat water slightly (30-40 sec in microwave) to help dissolve salt. Mix all ingredients except beef.

Cut meat into small pieces, trimming fat as much as possible.

Marinate in warm salt solution for 15 min, stirring occasionally. (For pork, you might want a shorter marinating time. I haven't tried other meats.)

Set the oven on its lowest temperature (170F here).

Lay out beef on a rack, with pieces not touching or overlapping.
Nobody seems to sell actual cooking racks, but you can buy "cooling racks" for cooling cookies, which seem to work fine for jerky. They're small so you probably need two racks for a pound of beef.

Ideally, put the rack on one oven shelf with a layer of foil on the rack below to catch the drips.
You want as much air space as possible under the meat. You can put the rack on a cookie sheet, but it'll take longer to cook and you'll have to turn the meat halfway through. Don't lay the beef directly on cookie sheet or foil unless you absolutely can't find a rack.

Cook until sufficiently dry and getting hard, about 4 to 4-1/2 hours at 170F depending on how dry you like your jerky. Drier jerky will keep longer unrefrigerated, but it's not as tasty. I cook mine a little less and store it in the fridge when I'm not actually carrying it hiking or traveling.

If you're using a cookie sheet, turn the pieces once at around 2-3 hours when the tops start to look dry and dark.

Tip: if you're using a rack without a cookie sheet, a fork wedged between the bars of the rack makes it easy to remove a rack from the oven.

March 04, 2016 07:24 PM

March 01, 2016

Elizabeth Krumbach

OpenStack infra-cloud sprint

Last week at the HPE offices in Fort Collins, members of the OpenStack Infrastructure team who are focused on getting an infra-cloud into production met from Monday through Thursday.

The infra-cloud is an important project for our team, so important that it has a Mission!

The infra-cloud’s mission is to turn donated raw hardware resources into expanded capacity for the OpenStack infrastructure nodepool.

This means that in addition to companies who Contribute Cloud Test Resources in the form of OpenStack instances, we’ll be running our own OpenStack-driven cloud that will provide additional instances for the pool of servers we run tests on. We’re using the OpenStack Puppet Modules (since the rest of our infra uses Puppet) and bifrost, which is a series of Ansible playbooks that use Ironic to automate the task of deploying a base image onto a set of known hardware.

Our target for infra-cloud was a few racks of HPE hardware provided to the team by HPE that resides in a couple HPE data centers. When the idea for a sprint came together, we thought it might be nice to have the sprint itself hosted at an HPE site where we could meet some of the folks who handle servers. That’s how we ended up in Fort Collins, at an HPE office that had hosted several mid-cycle and sprint events for OpenStack in the past.

Our event kicked off with an overview by Colleen Murphy of work that’s been done to date. The infra-cloud team that Colleen is part of has been architecting and deploying the infra-cloud over the past several months with an eye toward formalizing the process and landing it in our git repositories. Part of the aim of this sprint was to get everyone on the broader OpenStack Infrastructure team up to speed with how everything works so that the infra cores could intelligently review and provide feedback on the patches being deployed. Colleen’s slides (available here) also gave us an overview of the baremetal workflow with bifrost, the characteristics of the controller and compute nodes, networking (and differences found between the US East and US West regions) and her strategy for deploying locally for a development environment (GitHub repo here). She also spent time getting us up to speed with the HPE iLO management interfaces that we’ll have to use if we’re having trouble with provisioning.

This introduction took up our morning. After lunch it was time to talk about our plan for the rest of our time together. We discussed the version of OpenStack we wanted to focus on and broadly how and if we planned to do upgrades, along with the goals of this project. Of great importance was also that we build something that could be redeployed if we changed something; we don’t want this infrastructure to bit rot and cause a major hassle if we need to rebuild the cloud for some reason. We then went through the architecture section of the infra-cloud documentation to confirm that the assumptions there continued to be accurate, and made notes accordingly on our etherpad when they were not.

Our discussion then shifted into broad goals for our week, and out came the whiteboard! It was decided that we’d focus on getting all the patches landed to support US West so that by the end of the sprint we’d have at least one working cloud. It was during this discussion that we learned how valuable hosting our sprint at an HPE facility was. An attendee at our sprint, Phil Jensen, works in the Fort Collins data center and updated us on the plans for moving systems out of US West. The timeline he was aware of was considerably sooner than we’d been planning for. A call was scheduled for Thursday to sort out those details, and we’re thankful we did, since it turned out we effectively had to be ready to shut down the systems by the end of our sprint.

Goals continued for various sub-tasks, all coalescing into the main goal of the sprint: get a region added to Nodepool so we could run a test on it.

Tuesday morning we began tackling our tasks, and at 11:30 Phil came by to give us a tour of the local data center there in Fort Collins. Now, if we’re honest, there was no technical reason for this tour. All the systems engineers on our team have been in data centers before, most of us have even worked in them. But there’s a reason we got into this: we like computers. Even if we mostly interact with clouds these days, a tour through a data center is always a lot of fun for us. Plus it got us out of the conference room for a half hour, so it was a nice pause in our day. Huge thanks to Phil for showing us around.

The data center also had one of the server types we’re using in infra-cloud, the HP SL390. While we didn’t get to see the exact servers we’re using, it was fun to get to see the size and form factor of the servers in person.


Spencer Krum checks out a rack of HP SL390s

Tuesday was spent heads down, landing patches. People moved around the room as we huddled in groups, and there was some collaborative debugging on the projector as we learned more about the deployment, learned a whole lot more about OpenStack itself and worked through some unfortunate issues with Puppet and Ansible.


Not so much glamour, sprints are mostly spent working on our laptops

Wednesday was the big day for us. The morning was spent landing more patches and in the afternoon we added our cloud to the list of clouds in Nodepool. We then eagerly hovered over the Jenkins dashboard and waited for a job to need a trusty node to run a test…

Slave ubuntu-trusty-infracloud-west-8281553 Building swift-coverage-bindep #2

The test ran! And completed successfully! Colleen grabbed a couple screenshots.


We watch on Clark Boylan’s laptop as the test runs

Alas, it was not all roses. Our cloud struggled to obey the deletion command and the test itself ran considerably slower than we would have expected. We spent some quality time looking at disk configurations and settings together to see if we could track down the issue and do some tuning. We still have more work to do here to get everything running well on this hardware once it has moved to the new facility.

Thursday we spent some time getting US East patches to land before the data center moves. We also had a call mid-day to firm up the timing of the move. Our timing for the sprint ended up working out well for the move schedule: we were able to complete a considerable amount of work before the machines had to be shut down. The call was also valuable for chatting with some of the key parties involved and learning what we needed to hand off to them with regard to our requirements for the servers' new home, an HPE POD (cool!) in Houston. This allowed us to kick off a Network Requirements for Infracloud Relocation Deployment thread, and Cody A.W. Somerville captured notes from the rest of the conversation here.

The day concluded with a chat about how the sprint went. The feedback was pretty positive and we all got a lot of work done; Spencer summarized our feedback on-list here.

Personally, I liked that the HPE campus in Fort Collins has wild rabbits. Also, it snowed a little and I like snow.

I could have done without the geese.

It was also enjoyable to visit downtown Fort Collins in the evenings and meet up with some of the OpenStack locals. Plus, at Coopersmith’s I got a beer with a hop pillow on top. I love hops.

More photos from the week here: https://www.flickr.com/photos/pleia2/sets/72157662730010623/

David F. Flanders also Tweeted some photos: https://twitter.com/dfflanders/status/702603441508487169

by pleia2 at March 01, 2016 02:01 AM

February 27, 2016

Akkana Peck

Learning to Weld

I'm learning to weld metal junk into art!

I've wanted to learn to weld since I was a teen-ager at an LAAS star party, lusting after somebody's beautiful homebuilt 10" telescope on a compact metal fork mount. But building something like that was utterly out of reach for a high school kid. (This was before John Dobson showed the world how to build excellent alt-azimuth mounts out of wood and cheap materials ... or at least before Dobsonians made it to my corner of LA.)

Later the welding bug cropped up again as I worked on modified suspension designs for my X1/9 autocross car, or fiddled with bicycles, or built telescopes. But it still seemed out of reach, too expensive and I had no idea how to get started, so I always found some other way of doing what I needed.

But recently I had the good fortune to hook up with Los Alamos's two excellent metal sculptors, David Trujillo and Richard Swenson. Mr. Trujillo was kind enough to offer to mentor me and let me use his equipment to learn to make sculptures like his. (Richard has also given me some pointers.)

[My first metal art piece] MIG welding is both easier and harder than I expected. David Trujillo showed me the basics and got me going welding a little face out of a gear and chain on my very first day. What a fun start!

In a lot of ways, MIG welding is actually easier than soldering. For one thing, you don't need three or four hands to hold everything together while also holding the iron and the solder. On the other hand, the craft of getting a good weld is something that's going to require a lot more practice.

Setting up a home workshop

I knew I wanted my own welder, so I could work at home on my own schedule without needing to pester my long-suffering mentors. I bought a MIG welder and a bottle of gas (and, of course, safety equipment like a helmet, leather apron and gloves), plus a small welding table. But then I found that was only the beginning.

[Metal art: Spoon cobra] Before you can weld a piece of steel you have to clean it. Rust, dirt, paint, oil and anti-rust coatings all get in the way of making a good weld. David and Richard use a sandblasting cabinet, but that requires a big air compressor, making it as big an investment as the welder itself.

At first I thought I could make do with a wire brush wheel on a drill. But it turned out to be remarkably difficult to hold the drill firmly enough while brushing a piece of steel -- that works for small areas but not for cleaning a large piece or for removing a thick coating of rust or paint.

A bench grinder worked much better, with a wire brush wheel on one side for easy cleaning jobs and a regular grinding stone on the other side for grinding off thick coats of paint or rust. The first bench grinder I bought at Harbor Freight had a crazy amount of vibration that made it unusable, and their wire brush wheel didn't center properly and added to the wobble problem. I returned both, and bought a Ryobi from Home Depot and a better wire brush wheel from the local Metzger's Hardware. The Ryobi has a lot of vibration too, but not so much that I can't use it, and it does a great job of getting rust and paint off.

[Metal art: grease-gun goony bird] Then I had to find a place to put the equipment. I tried a couple of different spots before finally settling on the garage. Pro tip: welding on a south-facing patio doesn't work -- sunlight glints off the metal and makes the auto-darkening helmet flash frenetically, and any breeze from the south disrupts everything. And it's hard to get motivated to go outside and weld when it's snowing. The garage is working well, though it's a little cramped and I have to move the Miata out whenever I want to weld if I don't want to risk my baby's nice paint job to welding fumes. I can live with that for now.

All told, it was over a month after I bought the welder before I could make any progress on welding. But I'm having fun now. Finding good junk to use as raw materials is turning out to be challenging, but with the junk I've collected so far I've made some pieces I'm pretty happy with, I'm learning, and my welds are getting better all the time.

Earlier this week I made a goony bird out of a grease gun. Yesterday I picked up some chairs, a lawnmower and an old exercise bike from a friend, and just came in from disassembling them. I think I see some roadrunner, cow, and triceratops parts in there.

Photos of everything I've made so far: Metal art.

February 27, 2016 09:02 PM

February 25, 2016

Akkana Peck

Migrating from xchat: a couple of hexchat fixes

I decided recently to clean up my Debian "Sid" system, using apt-get autoclean, apt-get purge `deborphan`, aptitude purge ~c, and aptitude purge ~o. It gained me almost two gigabytes of space. On the other hand, it deleted several packages I had long depended on. One of them was xchat.
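For reference, that cleanup sequence, run as root (or with sudo), was:

apt-get autoclean
apt-get purge `deborphan`
aptitude purge ~c
aptitude purge ~o

(In aptitude's search syntax, ~c matches packages that were removed but still have config files on disk, and ~o matches installed packages no longer available from any configured archive.)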

I installed hexchat, the fully open replacement for xchat. Mostly, it's the same program ... but a few things didn't work right.

Script fixes

The two xchat scripts I use weren't loading. Turns out hexchat wants to find its scripts in .config/hexchat/addons, so I moved them there. But xchat-inputcount.pl still didn't work; it was looking for a widget called "xchat-inputbox". That was fairly easy to patch: I added a line to print the name of each widget it saw, determined the name had changed in the obvious way, and changed

    if( $child->get( "name" ) eq 'xchat-inputbox' ) {
to
    if( $child->get( "name" ) eq 'xchat-inputbox' ||
        $child->get( "name" ) eq 'hexchat-inputbox' ) {
That solved the problem.
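In case it helps anyone doing the same migration, the temporary debugging line can be as simple as a print inside the loop that walks the child widgets, something like this (a sketch; the accessor is the same one the script already uses):

    print $child->get( "name" ), "\n";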

Notifying me if someone calls me

The next problem: when someone mentioned my nick in a channel, the channel tab highlighted; but when I switched to the channel, there was no highlight on the actual line of conversation so I could find out who was talking to me. (It was turning the nick of the person addressing me to a specific color, but since every nick is a different color anyway, that doesn't make the line stand out when you're scanning for it.)

The highlighting for message lines is set in a dialog you can configure: Settings→Text events...
Scroll down to Channel Msg Hilight and click on that elaborate code on the right: %C2<%C8%B$1%B%C2>%O$t$2%O
That's the code that controls how the line will be displayed.

Some of these codes are described in Hexchat: Appearance/Theming, and most of the rest are described in the dialog itself. $t is an exception: I'm not sure what it means (maybe I just missed it in the list).

I wanted hexchat to show the nick of whoever mentioned my name in inverse video. (Xchat always made it bold, but sometimes that's subtle; inverse video would be a lot easier to find when scrolling through a busy channel.) %R is reverse video, %B is bold, and %O removes any decorations and sets the text back to normal, so I set the code to: %R%B<$1>%O $t$2 That seemed to work, though after I exited hexchat and started it up the next morning it had magically changed to %R%B<$1>%O$t$2%O.

Hacking hexchat source to remove hardwired keybindings

But the big problem was the hardwired keybindings. In particular, Ctrl-F, the longstanding key sequence that moves forward one character, brings up a search window in hexchat. (Xchat had this problem for a little while, many years ago, but they fixed it, or at least made it sensitive to whether the GTK key theme is "Emacs".)

Ctrl-F doesn't appear in the list under Settings→Keyboard shortcuts, so I couldn't fix it that way. I guess they should rename that dialog to Some keyboard shortcuts. Turns out Ctrl-F is compiled in. So the only solution is to rebuild from source.

I decided to use the Debian package source:

apt-get source hexchat

The search for the Ctrl-F binding turned out to be harder than it had been back in the xchat days. I was confident the binding would be in one of the files in src/fe-gtk, but grepping for key, find and search all gave way too many hits. Combining them was the key:

egrep -i 'find|search' *.c | grep -i key

That gave a bunch of spurious hits in fkeys.c -- I had already examined that file and determined that it had to do with the Settings→Keyboard shortcuts dialog, not the compiled-in key bindings. But it also gave some lines from menu.c including the one I needed:

    {N_("Search Text..."), menu_search, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_f},

Inspection of nearby lines showed that the last GDK_KEY_ argument is optional -- there were quite a few lines that didn't have a key binding specified. So all I needed to do was remove that GDK_KEY_f. Here's my patch:

--- src/fe-gtk/menu.c.orig      2016-02-23 12:13:55.910549105 -0700
+++ src/fe-gtk/menu.c   2016-02-23 12:07:21.670540110 -0700
@@ -1829,7 +1829,7 @@
        {N_("Save Text..."), menu_savebuffer, GTK_STOCK_SAVE, M_MENUSTOCK, 0, 0, 1},
 #define SEARCH_OFFSET (70)
        {N_("Search"), 0, GTK_STOCK_JUSTIFY_LEFT, M_MENUSUB, 0, 0, 1},
-               {N_("Search Text..."), menu_search, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_f},
+               {N_("Search Text..."), menu_search, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1},
                {N_("Search Next"   ), menu_search_next, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_g},
                {N_("Search Previous"   ), menu_search_prev, GTK_STOCK_FIND, M_MENUSTOCK, 0, 0, 1, GDK_KEY_G},
                {0, 0, 0, M_END, 0, 0, 0},

After making that change, I rebuilt the hexchat package and installed it:

sudo apt-get build-dep hexchat
sudo apt-get install devscripts
cd hexchat-2.10.2/
debuild -b -uc -us
sudo dpkg -i ../hexchat_2.10.2-1_i386.deb

Update: I later wrote about how to automate this here: Debian: Holding packages you build from source, and rebuilding them easily.
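The short version of the holding part, for the impatient, is Debian's generic hold mechanism, which keeps apt from replacing a locally built package on the next upgrade (this is the standard tool, not necessarily the exact recipe from that post):

sudo apt-mark hold hexchat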

And the hardwired Ctrl-F key binding was gone, and the normal forward-character binding from my GTK key theme took over.

I still have a couple of minor things I'd like to fix, like the too-large font hexchat uses for its channel tabs, but those are minor. At least I'm back to where I was before foolishly deciding to clean up my system.

February 25, 2016 02:00 AM

February 19, 2016

Akkana Peck

GIMP ditty: change font size and face on every text layer

A silly little GIMP ditty:
I had a Google map page showing locations of lots of metal recycling places in Albuquerque. The Google map shows stars for each location, but to find out the name and location of each address, you have to mouse over each star. I wanted a printable version to carry in the car with me.

I made a screenshot in GIMP, then added text for the stars over the places that looked most promising. But I was doing this quickly, and as I added text for more locations, I realized that it was getting crowded and I wished I'd used a smaller font. How do you change the font size for ALL font layers in an image, all at once?

Of course GIMP has no built-in method for this -- it's not something that comes up very often, and there's no reason it would have a filter like that. But the GIMP PDB (Procedural DataBase, part of the GIMP API) lets you change font size and face, so it's an easy script to write.

In the past I would have written something like this in script-fu, but now that Python is available on all GIMP platforms, there's no reason not to use it for everything.

Changing font face is just as easy as changing size, so I added that as well.

I won't bother to break it down line by line, since it's so simple. Here's the script: changefont.py: Mass change font face and size in all GIMP text layers.
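For a taste of how little PDB code this takes, here's a minimal Python-Fu sketch of the idea. This is my rough reconstruction for illustration, not the actual changefont.py linked above; the procedure name, menu path, and defaults are made up, but the pdb calls are the standard GIMP 2.8 text-layer API:

    #!/usr/bin/env python
    # Set one font face and size on every text layer in the image.
    from gimpfu import *

    def change_all_fonts(image, drawable, font, size):
        # Walk the image's top-level layers; text layers answer True here.
        for layer in image.layers:
            if pdb.gimp_item_is_text_layer(layer):
                pdb.gimp_text_layer_set_font(layer, font)
                pdb.gimp_text_layer_set_font_size(layer, size, UNIT_PIXEL)
        gimp.displays_flush()

    register(
        "python_fu_change_all_fonts",   # made-up procedure name
        "Change font face and size on every text layer",
        "Set the same font face and size on all text layers in the image",
        "example", "example", "2016",
        "<Image>/Filters/Text/Change All Fonts...",
        "*",
        [
            (PF_FONT, "font", "Font face", "Sans"),
            (PF_SPINNER, "size", "Font size (px)", 24, (1, 500, 1)),
        ],
        [],
        change_all_fonts)

    main()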

February 19, 2016 06:11 PM

February 18, 2016

Jono Bacon

Supporting Beep Beep Yarr!

Some of you may be familiar with LinuxVoice magazine. They put an enormous amount of effort into creating a high quality, feature-packed magazine with a small team. They are led by Graham Morrison, who I have known for many years and who is one of the most thoughtful, passionate, and decent human beings I have ever met.

Well, the same team is starting an important new project called Beep Beep Yarr!. It is essentially a Kickstarter crowd-funded children's book designed to teach core principles of programming to kids. The project involves not just the creation of the book, but also a parent's guide and an interactive app to help kids engage with the principles in the book.

They are not looking to raise a tremendous amount of money ($28,684 is the goal converted to mad dollar) and they have already raised $15,952 at the time of writing. I just went and added my support – I can’t wait to read this to our 3 year-old, Jack.

I think this campaign is important for a few reasons. Firstly, I am convinced that learning to program and all the associated pieces (logic flow, breaking problems down into smaller pieces, maths, collaboration) is going to be a critical skill in the future. Programming is not just important for teaching people how to control computers but it also helps people to fundamentally understand and synthesize logic which has knock-on benefits in other types of thinking and problem-solving too.

Beep Beep Yarr! is setting out to provide an important first introduction to these principles for kids. It could conceivably play an essential role in jumpstarting this journey for lots of kids, our own included.

So, go and support the campaign, not just because it is a valuable project, but also because the team behind it are good people who do great work.

by Jono Bacon at February 18, 2016 06:05 PM

February 15, 2016

Elizabeth Krumbach

Simcoe’s January 2016 Checkups

First up, as I first wrote about back in August, since July Simcoe has been struggling with some sores and scabbing around her eyes and inside her ear. This typically goes away after a few weeks, but it keeps coming back. Over the winter holidays she started developing more scabbing, this time showing up near her tail and back legs in addition to her eyes and ears. She was also grooming excessively. What could be going on?

We went through some rounds of antibiotics and then some Neo-Poly-Dex Ophthalmic for treatment of bacterial infections around her eyes throughout the fall. Unfortunately this didn't help much, so at the beginning of January we scheduled an appointment with a dermatologist at SFVS, where most of her care has been transferred for more specialized treatment of her renal failure as it progresses. The dermatologist determined that she's actually suffering from allergies, which are causing the breakouts. She's now on a daily anti-allergy pill, Atopica. The outbreaks haven't returned, but now she seems to be suffering from increasing constipation, which we're currently trying to treat by supplementing her diet with pumpkin mixed with a renal diet wet food she likes. It's pretty clear that it's causing her distress every time it happens. It's unclear whether the two are related, but I have a call with the dermatologist and possibly the vet this week to find out.

As for her renal failure, we had an appointment on January 16th with the specialist to look at her levels and see how she's doing. Due to the constipation we're reluctant to put her on appetite stimulants just yet, but she is continuing to lose weight, which is a real concern. Since November she is down from 8.9 to 8.8.

Simcoe weight

Her BUN and CRE levels are also on the rise, so we're keeping a close eye on her.

Simcoe BUN levels
Simcoe CRE levels

Her next formal appointment is scheduled for April, so we’ll see how things go over the next month and a half. Behavior-wise she’s still the active and happy kitty we’re accustomed to, aside from the constipation.

Simcoe on Laundry
Simcoe on Suitcase

Still getting into my freshly folded laundry and claiming my suitcases every time I dare bring them out for a trip away from her!

by pleia2 at February 15, 2016 03:58 AM

February 12, 2016

Elizabeth Krumbach

Highlights from LCA 2016 in Geelong

Last week I had the pleasure of attending my second linux.conf.au. This year it took place in Geelong, a port city about an hour train ride southwest of Melbourne. After my Melbourne-area adventures earlier in the week, I made my way to Geelong via train on Sunday afternoon. That evening I met up with a whole bunch of my HPE colleagues for dinner at a restaurant next to my hotel.

Monday morning the conference began! Every day the 1km walk from my hotel to the conference venue at Deakin University's Waterfront Campus and back was a pleasure, as it took me along the shoreline. I passed a beach, a marina, and even a Ferris wheel and a carousel.

I didn’t make time to enjoy the beach (complete with part of Geelong’s interesting post-people art installation), but I know many conference attendees did.

With that backdrop, it was time to dive into some Linux! I spent much of Monday in the Open Cloud Symposium miniconf run by my OpenStack Infra colleague over at Rackspace, Joshua Hesketh. I really enjoyed the pair of talks by Casey West, The Twelve-Factor Container (video) and Cloud Anti-Patterns (video). In both talks he gave engaging overviews of best practices and common gotchas with each technology. With containers, it's a temptation during the initial adoption phase to treat them like "tiny VMs" rather than compute-centric, storage-free containers for horizontally-scalable applications. He also stressed the importance of a consolidated code base for development and production, of keeping any persistent storage out of containers, and more generally of Repeatability, Reliability and Resiliency. The second talk focused on how to bring applications into a cloud-native environment by using the five stages of grief, repurposed for cloud-native. Key themes in this talk walked you from a legacy application being crammed into a container through the eventual modernization of that software into a series of microservices, including an automated build pipeline and continuous delivery with automated testing.

Unfortunately I was ill on Tuesday, so my conferencing picked up on Wednesday with a keynote by Catarina Mota who spoke on open hardware and materials, with a strong focus on 3D printing. It’s a topic that I’m already well-versed in, so the talk was mostly review for me, but I did enjoy one of the videos that she shared during her talk: Full Printed by nueveojos.

The day continued with a couple of talks that were some of my favorites of the conference. The first was Going Faster: Continuous Delivery for Firefox by Laura Thomson. Continuous Delivery (CD) has become increasingly popular for server-side applications that are served up to users, but this talk was an interesting take: delivering a client in a CD model. She didn’t offer a full solution for a CD browser, but instead walked through the problem space, design decisions and rationale behind the tooling they are using to get closer to a CD model for client-side software. Firefox is in an interesting space for this, as it already has add-ons that are released outside of the Firefox release model. What they decided to do was leverage this add-on tooling to create system add-ons, which are core to Firefox and to deliver microchanges, improvements and updates to the browser online. They’re also working to separate the browser code itself from the data that ships with it, under the premise that things like policy blacklists, dictionaries and fonts should be able to be updated and shipped independent of a browser version release. Indeed! This data would instead be shipped as downloadable content, and could also be tuned to only ship certain features upon request, like specific language support.


Laura Thomson, Director of Engineering, Cloud Services Engineering and Operations at Mozilla

The next talk that I got a lot out of was Wait, ?tahW: The Twisted Road to Right-to-Left Language Support (video) by Moriel Schottlender. Much like the first accessibility and internationalization talks I attended in the past, this is one of those talks that sticks with me because it opened my eyes to an area I'd never thought much about, as an English-only speaking citizen of the United States. She was also a great speaker who delivered the talk with humor and intrigue: "can you guess the behavior of this right-to-left feature?" The talk began by making the case for more UIs supporting right-to-left (RTL) languages, citing that there are 800 million RTL speakers in the world who we should be supporting. She walked us through the concepts of visual and logical rendering, how "obvious" solutions like flipping all content are flawed, and considerations regarding the relationship of content and the interface itself when designing for RTL. She also gave us a glimpse into the behavior of the Unicode Bidirectional Algorithm and the fascinating ways it behaves when mixing LTR and RTL languages. She concluded by sharing that the expectations of RTL language users are pretty low since most software gets it wrong, but this means there's a great opportunity for projects that do support it to get it right. Her website on the topic, which has everything she covered in her talk and more, is at http://rtl.wtf.


Moriel Schottlender, Software Engineer at Wikimedia

Wednesday night was the Penguin Dinner, which is the major, all attendees welcome conference dinner of the event. The venue was The Pier, which was a restaurant appropriately perched on the end of a very long pier. It was a bit loud, but I had some interesting discussions with my fellow attendees and there was a lovely patio where we were able to get some fresh air and take pictures of the bay.

On Thursday a whole bunch of us enjoyed a talk about a Linux-driven Microwave (video) by David Tulloh. What I liked most about his talk was that while he definitely was giving a talk about tinkering with a microwave to give it more features and make it more accessible, he was also “encouraging other people to do crazy things.” Hack a microwave, hack all kinds of devices and change the world! Manufacturing one-off costs are coming down…

In the afternoon I gave my own talk, Open Source Tools for Distributed Systems Administration (video, slides). I was a bit worried that attendance wouldn't be good because of who I was scheduled against, but I was mistaken: the room was quite full! After the talk I was able to chat with some folks who are also working on distributed systems teams, and with someone from another major project who was seeking to put more of their infrastructure work into open source. In all, a very effective gathering. Plus, my colleague Masayuki Igawa took a great photo during the talk!


Photo by Masayuki Igawa (source)

The afternoon continued with a talk by Rikki Endsley on Speaking their language: How to write for technical and non-technical audiences (video). Helpfully, she wrote an article on the topic so I didn't need to take notes! The talk walked through various audiences (lay, managerial, and expert) and gave examples of how to craft postings for each. The announcement of a development change, for instance, will look very different when presenting it to existing developers than to newcomers (perhaps "X process changed, here's how" vs. "dev process made easier for new contributors!"), and different again when you're approaching a media outlet to provide coverage for a change in your project. The article dives deep into her key points, but I will say that she delivered the talk with such humor that it was fun to learn directly from hearing her speak on the topic.


Also got my picture with Rikki! (source)

Thursday night was the Speakers' dinner, which took place at a lovely little restaurant about 15 minutes from the venue via bus. I'm shy, so it's always a bit intimidating to rub shoulders with some of the high profile speakers that they have at LCA. Helpfully, I'm terrible with names, so I managed to chat away with a few people and not realize that they were A Big Deal until later. Hah! So the dinner was nice, but it having been a long week, I was somewhat thankful when the buses came at 10PM to bring us back.

Friday began with my favorite keynote of the conference! It was by Genevieve Bell (video), an Intel fellow with a background in cultural anthropology. Like all of my favorite talks, hers was full of humor and wit, particularly around the fact that she’s an anthropologist who was hired to work for a major technology company without much idea of what that would mean. In reality, her job turned out to be explaining humans to engineers and technologists, and using their combined insight to explore potential future innovations. Her insights were fascinating! A key point was that traditional “future predictions” tend to be a bit near-sighted and very rooted in problems of the present. In reality our present is “messy and myriad” and that technology and society are complicated topics, particularly when taken together. Her expertise brought insight to human behavior that helps engineers realize that while devices work better when connected, humans work better while disconnected (to the point of seeking “disconnection” from the internet on our vacations and weekends).

Additionally, many devices and technologies aim to provide a "seamless" experience, but humans actually prefer seamful interactions so we can split up our lives into contexts. Finally, she spent a fair amount of time talking about our lives in the world of the Internet of Things, and how some serious rules will need to be put in place to make us feel safe and supported by our devices rather than vulnerable and spied upon. Ultimately, technology has to be designed with the human element in mind, and her plea to us, as the architects of the future, is to be optimistic about the future and make sure we're getting it right.

After her talk I now believe every technology company should have a staff cultural anthropologist.


Intel Fellow and cultural anthropologist Genevieve Bell

My day continued with a talk by Andrew Tridgell on Helicopters and rocket-planes (video), one on Copyleft For the Next Decade: A Comprehensive Plan (video) by Bradley Kuhn, a talk by Matthew Garrett on Troublesome Privacy Measures: using TPMs to protect users (video) and an interesting dive into handling secret data with Tollef Fog Heen’s talk on secretd – another take on securely storing credentials (video).

With that, the conference came to a close with a closing session that included raffle prizes, thanks to everyone and the hand-off to the team running LCA 2017 in Hobart next year.

I went to more talks than highlighted in this post, but with a whole week of conferencing it would have been a lot to cover. I'm also typically not the biggest fan of the "hallway track" (introvert, shy) and long breaks, but I knew enough people at this conference to find company during breaks and meals. I could also get a bit of work done during the longer breaks without skipping too many sessions, and it was easy to switch rooms between sessions without disruption. Plus, all the room moderators I saw did an excellent job of keeping things on schedule.

Huge thanks to all the organizers and everyone who made me feel so welcome this year. It was a wonderful experience and I hope to do it again next year!

More photos from the conference and beautiful Geelong here: https://www.flickr.com/photos/pleia2/albums/72157664277057411

by pleia2 at February 12, 2016 09:20 PM

February 10, 2016

iheartubuntu

OpenShot 2.0.6 (Beta 3) Released!


The third beta of OpenShot 2.0 has been officially released! To install it, add the PPA by using the Terminal commands below:

sudo add-apt-repository ppa:openshot.developers/ppa
sudo apt-get update
sudo apt-get install openshot openshot-doc

Now that OpenShot is installed, you should be able to launch it from your Applications menu, or from the terminal ($ openshot-qt). Every time OpenShot has an update, you will be prompted to update to the newest version. It's a great way to test our latest features.

Smoother Animation
Animations are now silky smooth because of improved anti-aliasing support in the libopenshot compositing engine. Zooming, panning, and rotation all benefit from this change.

Audio Quality Improvements
Audio support in this new version is vastly superior to previous versions. Popping, crackling, and other related audio issues have been fixed.

Autosave
A new autosave engine has been built for OpenShot 2.0, and it’s fast, simple to configure, and will automatically save your project at a specific interval (if it needs saving). Check the Preferences to be sure it’s enabled (it will default to enabled for new users).

Automatic Backup and Recovery
Along with our new autosave engine, a new automatic backup and recovery feature has also been integrated into the autosave flow. If your project is not yet saved… have no fear, the autosave engine will make a backup of your unsaved project (as often as autosave is configured for), and if OpenShot crashes, it will recover your most recent backup on launch.


Project File Improvements
Many improvements have been made to project file handling, including relative paths for built-in transitions and improvements to temp files being copied to project folders (i.e. animated titles). Projects should be completely portable now, between different versions of OpenShot and on different Operating Systems. This was a key design goal of OpenShot 2.0, and it works really well now.

Improved Exception Handling
Integration between libopenshot (our video editing library) and openshot-qt (our PyQt5 user interface) has been improved. Exceptions generated by libopenshot are now passed to the user interface, and no longer crash the application. Users are now presented with a friendly error message with some details of what happened. Of course, there is still the occasional "hard crash" which kills everything, but many, many crashes will now be avoided, and users will be better informed about what has happened.

Preferences Improvements
There are more preferences available now (audio preview settings - sample rate, channel layout, debug mode, etc…), including a new feature to prompt users when the application will “require a restart” for an option to take effect.


Improved Stability on Windows
A couple of pretty nasty bugs were fixed for Windows, although in theory they should have crashed on other platforms as well. But for whatever reason, certain types of crashes relating to threading only seem to happen on Windows, and many of those are now fixed.

New Version Detection
OpenShot will now check for the most recent released version on launch (from the openshot.org website) and discreetly prompt the user by showing an icon in the top right of the main window. This has been a requested feature for a really long time, and it's finally here. It will also quietly give up if no Internet connection is available, and it runs in a separate thread, so it doesn't slow anything down.

Metrics and Anonymous Error Reporting
A new anonymous metric and error reporting module has been added to OpenShot. It can be enabled / disabled in the Preferences, and it will occasionally send out anonymous metrics and error reports, which will help me identify where crashes are happening. It’s very basic data, such as “WEBM encoding error - Windows 8, version 2.0.6, libopenshot-version: 0.1.0”, and all IP addresses are anonymized, but will be critical to help improve OpenShot over time.

Improved Precision when Dragging
Dragging multiple clips around the timeline has been improved. There were many small issues that would sometimes occur, such as extra spacing being added between clips, or transitions being slightly out of place. These issues have been fixed, and moving multiple clips now works very well.

Debug Mode
In the preferences, one of the new options is “Debug Mode”, which outputs a ton of extra info into the logs. This might only work on Linux at the moment, because it requires the capturing of standard output, which is blocked in the Windows and Mac versions (due to cx_Freeze). I hope to enable this feature for all OSes soon, or at least to provide a “Debug” version for Windows and Mac, that would also pop open a terminal/command prompt with the standard output visible.

Updated Translations
Updates to 78 supported languages have been made. A huge thanks to the translators who have been hard at work helping with OpenShot translations. There are over 1000 phrases which require translation, and seeing OpenShot run so seamlessly in different languages is just awesome! I love it!

Lots of Bug Fixes

In addition to all the above improvements and fixes, many other smaller bugs and issues have been addressed in this version:
  • Prompt before overwriting a video on export
  • Fixed regression while previewing videos (causing playhead to hop around)
  • Default export format set to MP4 (regardless of language)
  • Fixed regression with Cutting / Split video dialog
  • Fixed Undo / Redo bug with new project
  • Backspace key now deletes clips (useful with certain keyboards and laptop keyboards)
  • Fixed bug on Animated Title dialog not updating progress while rendering
  • Added multi-line and unicode support to Animated Titles
  • Improved launcher to use distutils entry_points


Get Involved
Please report bugs and suggestions here: https://github.com/OpenShot/openshot-qt/issues. Please contribute language translations here (if you are a non-English speaking user): https://translations.launchpad.net/openshot/2.0/+translations.

by iheartubuntu (noreply@blogger.com) at February 10, 2016 01:23 PM

Elizabeth Krumbach

Kangaroos, Penguins, a Koala and a Platypus

On the evening of January 27th I began my journey to visit Australia for the second time in my life. My first visit to the land down under was in 2014 when I spoke at and attended my first linux.conf.au in Perth. Perth was beautiful, in addition to the conference (which I wrote about here, here and here), I took some time to see the beach and visit the zoo during my tourist adventures.

This time I was headed for Melbourne to once again attend and speak at linux.conf.au, this year held in the port city of Geelong. I arrived the morning of Friday the 29th to spend a couple days adjusting to the time zone and visiting some animals. However, I was surprised by the unexpected discovery of something else I love in Melbourne: historic street cars. Called trams there, the city runs a free City Circle Tram that uses the historic cars! There's even The Colonial Tramcar Restaurant, which allows you to dine inside one as you make your way along the city rails. Unfortunately my trip was not long enough to ride a tram or enjoy a meal, but this alone puts Melbourne right on my list of cities to visit again.

At the Perth Zoo I got my first glimpse of a wombat (they are BIG!) and enjoyed walking through an enclosure where the kangaroos roamed freely. This time I had some more animals on my checklist, and wanted to get a bit closer to some others. After checking into my hotel in Melbourne, I went straight to the Melbourne Zoo.

I love zoos. I've visited zoos in countries all over the world. But there's something special you should know about the Melbourne Zoo: they have a platypus. Everything I've read indicates that they don't do very well in captivity and captive breeding is very rare. As a result, no zoos outside of Australia have platypuses, so if I wanted to see one it had to be in Australia. I bought my zoo ticket and immediately asked "where can I find the platypus?" With that, I got to see a platypus! The platypus was swimming in its enclosure and I wasn't able to get a photo (it was moving too fast), but I did get a lovely video. They are funny creatures, and very cute!

The rest of the zoo was very nice. I didn't see everything, but I spent a couple hours visiting the local animals and checking out some of the bigger exhibits. I almost skipped the seals (seals live at home!) and penguins (I'd see wild ones the next day!), but I'm glad I didn't, since it was a very nice setup. Plus, while I wasn't able to take pictures of the wild fairy penguins, so as not to disturb them in their natural habitat, photographing the ones at the zoo was fine.

I also got a video of the penguins!

More photos from the Melbourne Zoo here: https://www.flickr.com/photos/pleia2/albums/72157664216488166

When I got into a cab to return to my hotel it began to rain. I was able to pick up an early dinner and spend the evening catching up on some work and getting to bed early.

Saturday was animal tour day! I booked an AAT Kings full day Phillip Island – Penguins, Kangaroos & Koalas tour, with a tour bus picking me up right at my hotel. I selected the Viewing Platform Upgrade and it was well worth it.

Phillip Island is about two hours from Melbourne, and it's where the penguins live. They come out onto the beach at sunset and all rush back to their homes. The rest of the tour was a series of activities leading up to this grand event, beginning with a stop at MARU Koala & Animal Park. We were in the bus for nearly two hours to get to the small park, during which the tour guide told us about the history of Melbourne and about the penguins we'd see later in the evening.

The tour included entrance fees, but I paid an extra $20 to pet a koala and get some food for the kangaroos and other animals. First up, koala! The koala I got to pet was an active critter. It sat still during my photo, but between people it could be seen reaching toward the keepers to get back the stem of eucalyptus that it got to munch on during the tourist photos. It was fun to learn that instead of being really soft like they look, their fur feels a lot more like wool.

The rest of my time at the park was spent with the kangaroos. Not only are they hopping around for everyone to see, like at the Perth Zoo, but when you have a container of food you get to feed them! And pet them! In case you're wondering, it's one of the best things ever. They're all very used to being around human tourists all day, and when you lay your hand flat as instructed to let them eat from your hand, they don't bite.

I got to feed and pet lots of kangaroos!

The rest of the afternoon was spent visiting a couple of scenic outlooks and a beach before stopping for dinner in the town of Cowes on Phillip Island, where I enjoyed a lovely fish dinner with a stunning view at Harry's on the Esplanade. The weather was so nice!


Selfies were made for the solo tourist

As we approached the "skinny tip of the island," the tour guide told us a bit about the history of the island and the nature preserve where the penguins live. The area had once been heavily populated with vacation homes, but with the accidental introduction of foxes, which kill penguins, and an increasing human population, the island quickly saw its penguin (and other local wildlife) populations drop. We learned that a program was put in place to buy back all the private property and turn it into a preserve, and work was also done to rid the island of foxes. The program seems to have worked: the preserve no longer has private homes, and we saw dozens of wild wallabies as well as some of the large native geese that were also targets of the foxes. Most exciting for me was that the penguin population was preserved for us to enjoy.

As the bus made its way through the park, we could see little penguin homes throughout the landscape. Some were natural holes dug by the penguins, and others were man-made houses put in place when a private home was torn down and penguins were discovered to have been using it for a burrow, requiring some kind of replacement. The hills were also covered in deep trails that we learned were little penguin highways, used for centuries (millennia?) by the little penguins to make their way from the ocean, where they hunt throughout the day, to their nests, where they spend the nights. The bus then stopped at the top of a hill that looked down onto the beach where we'd spend the evening watching the penguins come ashore. I took a photo from inside the bus; if you look closely you can see the big rows of stadium-style seating, and then, to the left and closer, some curvy benches. The stadium-style seating was general admission, and the curvy benches are the viewing platform upgrade I paid for.

The penguins come ashore when it gets dark (just before 9PM while I was there), so we had about an hour before then to visit the gift shop and get settled in to our seats. I took the opportunity to send post cards to my family, featuring penguins and sent out right there from the island. I also picked up a blanket, because in spite of the warm day and my rain jacket, the wind had picked up to make it a bit chilly and it was threatening rain by the time dusk came around.

It was then time for the penguins. With the viewing platform upgrade the penguins were still a bit far away when they came out of the ocean, but we got a nice view of them as they approached up the beach, walking right past our seating area! They come out of the ocean in big clumps of a couple dozen, so each time we saw another grouping the human crowd would pipe up and take notice. I think from general admission it would be a lot harder to see them come up on the beach. The rest of the penguin parade is fun for everyone though: they waddle and scuttle up the island to their little homes, passing all the trails regardless of where you were seated. Along the pathways the penguins get so close that you could reach out and touch them (of course, you don't!). Photos are strictly prohibited since the risk is too high that someone would accidentally use a flash and disturb them, but it was kind of refreshing to just soak in the time with the penguins without a camera/phone. All told, I understand nearly 1,500 penguins come out of the ocean at that spot each night.

The hills then come alive with penguin noises as they enjoy their evenings, chatting away and settling in with their chicks. Apparently this parade lasts well into the night, though most of them do come out of the ocean during the hour or so that I spent there with the tour group. At 10PM it was time to meet back at the bus to take us back to Melbourne. The timing was very good; about 10 minutes after getting in the bus it started raining. We got to watch the film Oddball on our journey home, about another island of penguins in Victoria that was at risk from foxes but was saved.

In all, the day was pretty overwhelming for me. In a good way. Petting some of these incredibly cute Australian animals! Seeing adorable penguins in the wild! A day that I’ll cherish for a lifetime.

More photos from the tour here: https://www.flickr.com/photos/pleia2/albums/72157664216521696

The next day it was time to take a train to Geelong for the Linux conference. An event with a whole different type of penguins!

by pleia2 at February 10, 2016 08:58 AM