Planet Ubuntu California

February 12, 2016

Elizabeth Krumbach

Highlights from LCA 2016 in Geelong

Last week I had the pleasure of attending my second linux.conf.au. This year it took place in Geelong, a port city about an hour train ride southwest of Melbourne. After my Melbourne-area adventures earlier in the week, I made my way to Geelong via train on Sunday afternoon. That evening I met up with a whole bunch of my HPE colleagues for dinner at a restaurant next to my hotel.

Monday morning the conference began! Every day, the 1km walk from my hotel to the conference venue at Deakin University’s Waterfront Campus and back was a pleasure, as it took me along the shoreline. I passed a beach, a marina and even a Ferris wheel and a carousel.

I didn’t make time to enjoy the beach (complete with part of Geelong’s interesting post-people art installation), but I know many conference attendees did.

With that backdrop, it was time to dive into some Linux! I spent much of Monday in the Open Cloud Symposium miniconf run by my OpenStack Infra colleague over at Rackspace, Joshua Hesketh. I really enjoyed the pair of talks by Casey West, The Twelve-Factor Container (video) and Cloud Anti-Patterns (video). In both talks he gave engaging overviews of best practices and common gotchas with each technology. With containers, it’s tempting during the initial adoption phase to treat them like “tiny VMs” rather than compute-centric, storage-free containers for horizontally-scalable applications. He also stressed the importance of a consolidated code base for development and production, keeping any persistent storage out of containers and, more generally, Repeatability, Reliability and Resiliency. The second talk focused on how to bring applications into a cloud-native environment by using the five stages of grief, repurposed for cloud-native. Key themes walked you through starting with a legacy application crammed into a container and the eventual modernization of that software into a series of microservices, including an automated build pipeline and continuous delivery with automated testing.
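
To make the “storage-free container” point a little more concrete, here’s a tiny sketch of the pattern using Docker (not necessarily the tooling Casey used; the image name, paths and environment variable are invented for illustration):

# The container stays disposable; persistent data lives on the host
# and configuration comes in through the environment (twelve-factor style).
docker run -d --name web1 \
  -v /srv/appdata:/var/lib/app \
  -e DATABASE_URL=postgres://db.example.com/app \
  myorg/myapp:1.0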

Unfortunately I was ill on Tuesday, so my conferencing picked up on Wednesday with a keynote by Catarina Mota who spoke on open hardware and materials, with a strong focus on 3D printing. It’s a topic that I’m already well-versed in, so the talk was mostly review for me, but I did enjoy one of the videos that she shared during her talk: Full Printed by nueveojos.

The day continued with a couple of talks that were some of my favorites of the conference. The first was Going Faster: Continuous Delivery for Firefox by Laura Thomson. Continuous Delivery (CD) has become increasingly popular for server-side applications that are served up to users, but this talk was an interesting take: delivering a client in a CD model. She didn’t offer a full solution for a CD browser, but instead walked through the problem space, design decisions and rationale behind the tooling they are using to get closer to a CD model for client-side software. Firefox is in an interesting space for this, as it already has add-ons that are released outside of the Firefox release model. What they decided to do was leverage this add-on tooling to create system add-ons, which are core to Firefox and to deliver microchanges, improvements and updates to the browser online. They’re also working to separate the browser code itself from the data that ships with it, under the premise that things like policy blacklists, dictionaries and fonts should be able to be updated and shipped independent of a browser version release. Indeed! This data would instead be shipped as downloadable content, and could also be tuned to only ship certain features upon request, like specific language support.


Laura Thomson, Director of Engineering, Cloud Services Engineering and Operations at Mozilla

The next talk that I got a lot out of was Wait, ?tahW: The Twisted Road to Right-to-Left Language Support (video) by Moriel Schottlender. Much like the first accessibility and internationalization talks I attended, this is one of those talks that sticks with me because it opened my eyes to an area I’d never thought much about as an English-only speaking citizen of the United States. She was also a great speaker who delivered the talk with humor and intrigue: “can you guess the behavior of this right-to-left feature?” The talk began by making the case for more UIs supporting right-to-left (RTL) languages, citing that there are 800 million RTL speakers in the world who we should be supporting. She walked us through the concepts of visual and logical rendering, how “obvious” solutions like flipping all content are flawed, and considerations regarding the relationship of content and the interface itself when designing for RTL. She also gave us a glimpse into the behavior of the Unicode Bidirectional Algorithm and the fascinating ways it behaves when mixing LTR and RTL languages. She concluded by sharing that the expectations of RTL language users are pretty low since most software gets it wrong, but this means there’s a great opportunity for projects that do support it to get it right. Her website on the topic, which has everything she covered in her talk and more, is at http://rtl.wtf.


Moriel Schottlender, Software Engineer at Wikimedia

Wednesday night was the Penguin Dinner, which is the major, all attendees welcome conference dinner of the event. The venue was The Pier, which was a restaurant appropriately perched on the end of a very long pier. It was a bit loud, but I had some interesting discussions with my fellow attendees and there was a lovely patio where we were able to get some fresh air and take pictures of the bay.

On Thursday a whole bunch of us enjoyed a talk about a Linux-driven Microwave (video) by David Tulloh. What I liked most about his talk was that while he definitely was giving a talk about tinkering with a microwave to give it more features and make it more accessible, he was also “encouraging other people to do crazy things.” Hack a microwave, hack all kinds of devices and change the world! Manufacturing one-off costs are coming down…

In the afternoon I gave my own talk, Open Source Tools for Distributed Systems Administration (video, slides). I was a bit worried that attendance wouldn’t be good because of who I was scheduled against, but I was mistaken, the room was quite full! After the talk I was able to chat with some folks who are also working on distributed systems teams, and with someone from another major project who was seeking to put more of their infrastructure work into open source. In all, a very effective gathering. Plus, my colleague Masayuki Igawa took a great photo during the talk!


Photo by Masayuki Igawa (source)

The afternoon continued with a talk by Rikki Endsley on Speaking their language: How to write for technical and non-technical audiences (video). Helpfully, she wrote an article on the topic so I didn’t need to take notes! The talk walked through various audiences (lay, managerial and expert) and gave examples of how to craft postings for each. The announcement of a development change, for instance, will look very different when presenting it to existing developers than it may look to newcomers (perhaps “X process changed, here’s how” vs. “dev process made easier for new contributors!”), and completely different when you’re approaching a media outlet to provide coverage for a change in your project. The article dives deep into her key points, but I will say that she delivered the talk with such humor that it was fun to learn directly from hearing her speak on the topic.


Also got my picture with Rikki! (source)

Thursday night was the Speakers’ dinner, which took place at a lovely little restaurant about 15 minutes from the venue via bus. I’m shy, so it’s always a bit intimidating to rub shoulders with some of the high profile speakers that they have at LCA. Helpfully, I’m terrible with names, so I managed to chat away with a few people and not realize that they are A Big Deal until later. Hah! So the dinner was nice, but it having been a long week, I was somewhat thankful when the buses came at 10PM to bring us back.

Friday began with my favorite keynote of the conference! It was by Genevieve Bell (video), an Intel fellow with a background in cultural anthropology. Like all of my favorite talks, hers was full of humor and wit, particularly around the fact that she’s an anthropologist who was hired to work for a major technology company without much idea of what that would mean. In reality, her job turned out to be explaining humans to engineers and technologists, and using their combined insight to explore potential future innovations. Her insights were fascinating! A key point was that traditional “future predictions” tend to be a bit near-sighted and very rooted in problems of the present. In reality our present is “messy and myriad”, and technology and society are complicated topics, particularly when taken together. Her expertise brought insight to human behavior that helps engineers realize that while devices work better when connected, humans work better while disconnected (to the point of seeking “disconnection” from the internet on our vacations and weekends).

Additionally, many devices and technologies aim to provide a “seamless” experience, but humans actually prefer seamful interactions so we can split up our lives into contexts. Finally, she spent a fair amount of time talking about our lives in the world of the Internet of Things, and how some serious rules will need to be put in place to make us feel safe and supported by our devices rather than vulnerable and spied upon. Ultimately, technology has to be designed with the human element in mind, and her plea to us, as the architects of the future, is to be optimistic about the future and make sure we’re getting it right.

After her talk I now believe every technology company should have a staff cultural anthropologist.


Intel Fellow and cultural anthropologist Genevieve Bell

My day continued with a talk by Andrew Tridgell on Helicopters and rocket-planes (video), one on Copyleft For the Next Decade: A Comprehensive Plan (video) by Bradley Kuhn, a talk by Matthew Garrett on Troublesome Privacy Measures: using TPMs to protect users (video) and an interesting dive into handling secret data with Tollef Fog Heen’s talk on secretd – another take on securely storing credentials (video).

With that, the conference came to a close with a closing session that included raffle prizes, thanks to everyone and the hand-off to the team running LCA 2017 in Hobart next year.

I went to more talks than highlighted in this post, but with a whole week of conferencing it would have been a lot to cover. I’m also typically not the biggest fan of the “hallway track” (introvert, shy) and long breaks, but I knew enough people at this conference to find people to spend time with during breaks and meals. I could also get a bit of work done during the longer breaks without skipping too many sessions, and it was easy to switch rooms between sessions without disruption. Plus, all the room moderators I saw did an excellent job of keeping things on schedule.

Huge thanks to all the organizers and everyone who made me feel so welcome this year. It was a wonderful experience and I hope to do it again next year!

More photos from the conference and beautiful Geelong here: https://www.flickr.com/photos/pleia2/albums/72157664277057411

by pleia2 at February 12, 2016 09:20 PM

February 10, 2016

iheartubuntu

OpenShot 2.0.6 (Beta 3) Released!


The third beta of OpenShot 2.0 has been officially released! To install it, add the PPA by using the Terminal commands below:

sudo add-apt-repository ppa:openshot.developers/ppa
sudo apt-get update
sudo apt-get install openshot openshot-doc

Now that OpenShot is installed, you should be able to launch it from your Applications menu, or from the terminal ($ openshot-qt). Every time OpenShot has an update, you will be prompted to update to the newest version. It's a great way to test our latest features.

Smoother Animation
Animations are now silky smooth because of improved anti-aliasing support in the libopenshot compositing engine. Zooming, panning, and rotation all benefit from this change.

Audio Quality Improvements
Audio support in this new version is vastly superior to previous versions. Popping, crackling, and other related audio issues have been fixed.

Autosave
A new autosave engine has been built for OpenShot 2.0, and it’s fast, simple to configure, and will automatically save your project at a specific interval (if it needs saving). Check the Preferences to be sure it’s enabled (it will default to enabled for new users).

Automatic Backup and Recovery
Along with our new autosave engine, a new automatic backup and recovery feature has also been integrated into the autosave flow. If your project is not yet saved… have no fear, the autosave engine will make a backup of your unsaved project (as often as autosave is configured for), and if OpenShot crashes, it will recover your most recent backup on launch.


Project File Improvements
Many improvements have been made to project file handling, including relative paths for built-in transitions and improvements to temp files being copied to project folders (i.e. animated titles). Projects should be completely portable now, between different versions of OpenShot and on different Operating Systems. This was a key design goal of OpenShot 2.0, and it works really well now.

Improved Exception Handling
Integration between libopenshot (our video editing library) and openshot-qt (our PyQt5 user interface) has been improved. Exceptions generated by libopenshot are now passed to the user interface, and no longer crash the application. Users are now presented with a friendly error message with some details of what happened. Of course, there is still the occasional “hard crash” which kills everything, but many, many crashes will now be avoided, and users more informed on what has happened.

Preferences Improvements
There are more preferences available now (audio preview settings - sample rate, channel layout, debug mode, etc…), including a new feature to prompt users when the application will “require a restart” for an option to take effect.


Improved Stability on Windows
A couple of pretty nasty bugs were fixed for Windows, although in theory they should have crashed on other platforms as well. But for whatever reason, certain types of crashes relating to threading only seem to happen on Windows, and many of those are now fixed.

New Version Detection
OpenShot will now check the most recent released version on launch (from the openshot.org website) and discreetly prompt the user by showing an icon in the top right of the main window. This has been a requested feature for a really long time, and it’s finally here. It will also quietly give up if no Internet connection is available, and it runs in a separate thread, so it doesn’t slow down anything.

Metrics and Anonymous Error Reporting
A new anonymous metric and error reporting module has been added to OpenShot. It can be enabled / disabled in the Preferences, and it will occasionally send out anonymous metrics and error reports, which will help me identify where crashes are happening. It’s very basic data, such as “WEBM encoding error - Windows 8, version 2.0.6, libopenshot-version: 0.1.0”, and all IP addresses are anonymized, but will be critical to help improve OpenShot over time.

Improved Precision when Dragging
Dragging multiple clips around the timeline has been improved. There were many small issues that would sometimes occur, such as extra spacing being added between clips, or transitions being slightly out of place. These issues have been fixed, and moving multiple clips now works very well.

Debug Mode
In the preferences, one of the new options is “Debug Mode”, which outputs a ton of extra info into the logs. This might only work on Linux at the moment, because it requires the capturing of standard output, which is blocked in the Windows and Mac versions (due to cx_Freeze). I hope to enable this feature for all OSes soon, or at least to provide a “Debug” version for Windows and Mac, that would also pop open a terminal/command prompt with the standard output visible.
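
In the meantime, if you’re on Linux and want to capture that output yourself, launching OpenShot from a terminal and teeing the output works well enough (the log file name here is just an example):

# run OpenShot from a terminal and keep a copy of everything it prints
openshot-qt 2>&1 | tee ~/openshot-debug.log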

Updated Translations
Updates to 78 supported languages have been made. A huge thanks to the translators who have been hard at work helping with OpenShot translations. There are over 1000 phrases which require translation, and seeing OpenShot run so seamlessly in different languages is just awesome! I love it!

Lots of Bug fixes

In addition to all the above improvements and fixes, here are many other smaller bugs and issues that have been addressed in this version:
  • Prompt before overwriting a video on export
  • Fixed regression while previewing videos (causing playhead to hop around)
  • Default export format set to MP4 (regardless of language)
  • Fixed regression with Cutting / Split video dialog
  • Fixed Undo / Redo bug with new project
  • Backspace key now deletes clips (useful with certain keyboards and laptop keyboards)
  • Fixed bug on Animated Title dialog not updating progress while rendering
  • Added multi-line and unicode support to Animated Titles
  • Improved launcher to use distutils entry_points


Get Involved
Please report bugs and suggestions here: https://github.com/OpenShot/openshot-qt/issues. Please contribute language translations here (if you are a non-English speaking user): https://translations.launchpad.net/openshot/2.0/+translations.

by iheartubuntu (noreply@blogger.com) at February 10, 2016 01:23 PM

Elizabeth Krumbach

Kangaroos, Penguins, a Koala and a Platypus

On the evening of January 27th I began my journey to visit Australia for the second time in my life. My first visit to the land down under was in 2014, when I spoke at and attended my first linux.conf.au in Perth. Perth was beautiful; in addition to the conference (which I wrote about here, here and here), I took some time to see the beach and visit the zoo during my tourist adventures.

This time I was headed for Melbourne to once again attend and speak at linux.conf.au, this time in the port city of Geelong. I arrived the morning of Friday the 29th to spend a couple days adjusting to the time zone and visiting some animals. However, I was surprised by the discovery of something else I love in Melbourne: historic street cars. Called trams there, they run a free City Circle Tram that uses the historic cars! There’s even The Colonial Tramcar Restaurant, which allows you to dine inside one as you make your way along the city rails. Unfortunately my trip was not long enough to ride in a tram or enjoy a meal, but this alone puts Melbourne right on my list of cities to visit again.

At the Perth Zoo I got my first glimpse of a wombat (they are BIG!) and enjoyed walking through an enclosure where the kangaroos roamed freely. This time I had some more animals on my checklist, and wanted to get a bit closer to some others. After checking into my hotel in Melbourne, I went straight to the Melbourne Zoo.

I love zoos. I’ve visited zoos in countries all over the world. But there’s something special you should know about the Melbourne Zoo: they have a platypus. Everything I’ve read indicates that they don’t do very well in captivity and captive breeding is very rare. As a result, no zoos outside of Australia have platypuses, so if I wanted to see one it had to be in Australia. I bought my zoo ticket and immediately asked “where can I find the platypus?” With that, I got to see a platypus! The platypus was swimming in its enclosure and I wasn’t able to get a photo of it (moving too fast), but I did get a lovely video. They are funny creatures, and very cute!

The rest of the zoo was very nice. I didn’t see everything, but I spent a couple hours visiting the local animals and checking out some of their bigger exhibits. I almost skipped their seals (seals live at home!) and penguins (I’d see wild ones the next day!), but I’m glad I didn’t since it was a very nice setup. Plus, I wasn’t able to take pictures of the wild fairy penguins so as not to disturb them in their natural habitat, but the ones at the zoo were fine.

I also got a video of the penguins!

More photos from the Melbourne Zoo here: https://www.flickr.com/photos/pleia2/albums/72157664216488166

When I got into a cab to return to my hotel it began to rain. I was able to pick up an early dinner and spend the evening catching up on some work and getting to bed early.

Saturday was animal tour day! I booked an AAT Kings full-day Phillip Island – Penguins, Kangaroos & Koalas tour that had a tour bus picking me up right at my hotel. I selected the Viewing Platform Upgrade and it was well worth it.

Phillip Island is about two hours from Melbourne, and it’s where the penguins live. They come out onto the beach at sunset and all rush back to their homes. The rest of the tour was a series of activities leading up to this grand event, beginning with a stop at MARU Koala & Animal Park. We were in the bus for nearly two hours to get to the small park, during which the tour guide told us about the history of Melbourne and about the penguins we’d see later in the evening.

The tour included entrance fees, but I paid an extra $20 to pet a koala and get some food for the kangaroos and other animals. First up, koala! The koala I got to pet was an active critter. It sat still during my photo, but between people it could be seen reaching toward the keepers to get back the stem of eucalyptus that it got to munch on during the tourist photos. It was fun to learn that instead of being really soft like they look, their fur feels a lot more like wool.

The rest of my time at the park was spent with the kangaroos. Not only are they hopping around for everyone to see like in the Perth Zoo, but when you have a container of food you get to feed them! And pet them! In case you’re wondering, it’s one of the best things ever. They’re all very used to being around human tourists all day, and when you lay your hand flat as instructed to have them eat from your hand, they don’t bite.

I got to feed and pet lots of kangaroos!

The rest of the afternoon was spent visiting a couple of scenic outlooks and a beach before stopping for dinner in the town of Cowes on Phillip Island, where I enjoyed a lovely fish dinner with a stunning view at Harry’s on the Esplanade. The weather was so nice!


Selfies were made for the solo tourist

As we approached the “skinny tip of the island” the tour guide told us a bit about the history of the island and the nature preserve where the penguins live. The area had once been heavily populated with vacation homes, but with the accidental introduction of foxes, which kill penguins, and an increased human population, the island quickly saw its penguin (and other local wildlife) populations drop. We learned that a program was put in place to buy back all the private property and turn it into a preserve, and work was also done to rid the island of foxes. The program seems to have worked: the preserve no longer has private homes, and we saw dozens of wild wallabies as well as some of the large native geese that were also targets of the foxes. Most exciting for me was that the penguin population was preserved for us to enjoy.

As the bus made its way through the park, we could see little penguin homes throughout the landscape. Some were natural holes built by the penguins, and others were man-made houses put in place when they tore down a private home and discovered penguins had been using it for a burrow and required some kind of replacement. The hills were also covered in deep trails that we learned were little penguin highways, used for centuries (millennia?) for the little penguins to make their way from the ocean where they hunted throughout the day, to their nests where they spend the nights. The bus then stopped at the top of a hill that looked down onto the beach where we’d spend the evening watching the penguins come ashore. I took the picture from inside the bus, but if you look closely at this picture you see the big rows of stadium seating, and then to the left, and closer, there are some benches that are curvy. The stadium like seating was general admission and the curvy ones are the viewing platform upgrade I paid for.

The penguins come ashore when it gets dark (just before 9PM while I was there), so we had about an hour before then to visit the gift shop and get settled in to our seats. I took the opportunity to send post cards to my family, featuring penguins and sent out right there from the island. I also picked up a blanket, because in spite of the warm day and my rain jacket, the wind had picked up to make it a bit chilly and it was threatening rain by the time dusk came around.

It was then time for the penguins. With the viewing platform upgrade the penguins were still a bit far when they came out of the ocean, but we got a nice view of them as they approached up the beach, walking right past our seating area! They come out of the ocean in big clumps of a couple dozen, so each time we saw another grouping the human crowd would pipe up and notice. I think for the general admission it would be a lot harder to see them come up on the beach. The rest of the penguin parade is fun for everyone though, they waddle and scuttle up the island to their little homes, and they pass all the trails, regardless of where you were seated. Along the pathways the penguins get so close to you that you could reach out and touch them (of course, you don’t!). Photos are strictly prohibited since the risk is too high that someone would accidentally use a flash and disturb them, but it was kind of refreshing to just soak in the time with the penguins without a camera/phone. All told, I understand there are nearly 1,500 penguins each night that come out of the ocean at that spot.

The hills then come alive with penguin noises as they enjoy their evenings, chatting away and settling in with their chicks. Apparently this parade lasts well into the night, though most of them do come out of the ocean during the hour or so that I spent there with the tour group. At 10PM it was time to meet back at the bus to take us back to Melbourne. The timing was very good, about 10 minutes after getting in the bus it started raining. We got to watch the film Oddball on our journey home, about another island of penguins in Victoria that was at risk from foxes but were saved.

In all, the day was pretty overwhelming for me. In a good way. Petting some of these incredibly cute Australian animals! Seeing adorable penguins in the wild! A day that I’ll cherish for a lifetime.

More photos from the tour here: https://www.flickr.com/photos/pleia2/albums/72157664216521696

The next day it was time to take a train to Geelong for the Linux conference. An event with a whole different type of penguins!

by pleia2 at February 10, 2016 08:58 AM

February 08, 2016

Akkana Peck

Attack of the Killer Titmouse!

[Juniper titmouse attacking my window] For the last several days, when I go upstairs in mid-morning I often hear a strange sound coming from the bedroom. It's a juniper titmouse energetically attacking the east-facing window.

He calls, most often in threes, as he flutters around the windowsill, sometimes scratching or pecking the window. He'll attack the bottom for a while, moving from one side to the other, then fly up to the top of the window to attack the top corners, then back to the bottom.

For several days I've run down to grab the camera as soon as I saw him, but by the time I get back and get focused, he becomes camera-shy and flies away, and I hear EEE EEE EEE from a nearby tree instead. Later in the day I'll sometimes see him down at the office windows, though never as persistently as upstairs in the morning.

I've suspected he's attacking his reflection (and also assumed he's a "he"), partly because I see him at the east-facing bedroom window in the morning and at the south-facing office window in the early afternoon. But I'm not sure about it, and certainly I hear his call from trees scattered around the yard.

Something I was never sure of, but am now: titmice definitely can raise and lower their crests. I'd never seen one with its crest lowered, but this one flattens his crest while he's in attack mode.

His EEE EEE EEE call isn't very similar to any of the calls listed for juniper titmouse in the Stokes CD set or the Audubon Android app. So when he briefly attacked the window next to my computer yesterday afternoon while I was sitting there, I grabbed a camera and shot a video, hoping to capture the sound. The titmouse didn't exactly cooperate: he chirped a few times, not always in the group of three he uses so persistently in the morning, and the sound in the video came out terribly noisy; but after some processing in Audacity I managed to edit out some of the noise. And then this morning as I was brushing my teeth, I heard him again and he was more obliging, giving me a long video of him attacking and yelling at the bedroom window. Here's the Juniper titmouse call as he attacks my window this morning, and yesterday's Juniper titmouse call at the office window. Today's video is on YouTube: Titmouse attacking the window, but that's without the sound edits, so it's tough to hear him.

(Incidentally, since Audacity has a super confusing user interface and I'm sure I'll need this again, what seemed to work best was to highlight sections that weren't titmouse and use Edit→Delete; then use Effects→Amplify, checking the box for Allow clipping and using Preview to amplify it to the point where the bird is audible. Then find a section that's just noise, no titmouse, select it, run Effects→Noise Reduction and click Get Noise Profile. The window goes away, so click somewhere to un-select, call up Effects→Noise Reduction again and this time click OK.)
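
If you'd rather script that cleanup than click through Audacity, something roughly equivalent can be done with SoX. This is only a sketch: it assumes the noisy recording is titmouse.wav, that the first second is background noise only, and the amounts would need tuning by ear.

$ sox titmouse.wav -n trim 0 1 noiseprof noise.prof      # build a noise profile from a titmouse-free first second
$ sox titmouse.wav denoised.wav noisered noise.prof 0.2  # subtract that noise from the whole recording
$ sox denoised.wav final.wav gain -n -3                  # normalize, leaving a little headroom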

I feel a bit sorry for the little titmouse, attacking windows so frenetically. Titmice are cute, excellent birds to have around, and I hope he's saving some energy for attracting a mate who will build a nest here this spring. Meanwhile, he's certainly providing entertainment for me.

February 08, 2016 06:10 PM

February 05, 2016

Akkana Peck

Updating Debian under a chroot

Debian's Unstable ("Sid") distribution has been terrible lately. They're switching to a version of X that doesn't require root, and apparently the X transition has broken all sorts of things in ways that are hard to fix and there's no ETA for when things might get any better.

And, being Debian, there's no real bug system so you can't just CC yourself on the bug to see when new fixes might be available to try. You just have to wait, try every few days and see if the system has been fixed.

That's hard when the system doesn't work at all. Last week, I was booting into a shell but X wouldn't run, so at least I could pull updates. This week, X starts but the keyboard and mouse don't work at all, making it hard to run an upgrade.

Fortunately, I have an install of Debian stable ("Jessie") on this system as well. When I partition a large disk I always reserve several root partitions so I can try out other Linux distros, and when running the more experimental versions, like Sid, sometimes that's a life saver. So I've been running Jessie while I wait for Sid to get fixed. The only trick is: how can I upgrade my Sid partition while running Jessie, since Sid isn't usable at all?

I have an entry in /etc/fstab that lets me mount my Sid partition easily:

/dev/sda6 /sid ext4 defaults,user,noauto,exec 0 0
So I can type mount /sid as myself, without even needing to be root.

But Debian's apt upgrade tools assume everything will be on /, not on /sid. So I'll need to use chroot /sid (as root) to change the root of the filesystem to /sid. That only affects the shell where I type that command; the rest of my system will still be happily running Jessie.

Mount the special filesystems

That mostly works, but not quite, because I get a lot of errors like permission denied: /dev/null.

/dev/null is a device: you can write to it and the bytes disappear, as if into a black hole except without Hawking radiation. Since /dev is implemented by the kernel and udev, in the chroot it's just an empty directory. And if a program opens /dev/null in the chroot, it might create a regular file there and actually write to it. You wouldn't want that: it eats up disk space and can slow things down a lot.

The way to fix that is before you chroot: mount --bind /dev /sid/dev which will make /sid/dev a mirror of the real /dev. It has to be done before the chroot because inside the chroot, you no longer have access to the running system's /dev.

But there is a different syntax you can use after chrooting:

mount -t proc proc proc/
mount --rbind /sys sys/
mount --rbind /dev dev/

It's a good idea to do this for /proc and /sys as well, and Debian recommends adding /dev/pts (which must be done after you've mounted /dev), even though most of these probably won't come into play during your upgrade.

Mount /boot

Finally, on my multi-boot system, I have one shared /boot partition with kernels for Jessie, Sid and any other distros I have installed on this system. (That's somewhat hard to do using grub2, but easy on Debian, though you may need to turn off auto-update, and Debian is making it harder to use extlinux now.) Anyway, if you have a separate /boot partition, you'll want it mounted in the chroot, in case the update needs to add a new kernel. Since you presumably already have the same /boot mounted on the running system, use mount --bind for that as well.

So here's the final set of commands to run, as root:

mount /sid
mount --bind /proc /sid/proc
mount --bind /sys /sid/sys
mount --bind /dev /sid/dev
mount --bind /dev/pts /sid/dev/pts
mount --bind /boot /sid/boot
chroot /sid

And then you can proceed with your apt-get update, apt-get dist-upgrade etc. When you're finished, you can unmount everything with one command:

umount --recursive /sid
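
For the lazy (or forgetful), all of that wraps up nicely into a little script. This is just a sketch using the same mount points as above, to be run as root from the working system:

#!/bin/sh
# upgrade-sid.sh: upgrade the Sid partition from a chroot
set -e

mount /sid
for fs in proc sys dev dev/pts boot; do
    mount --bind /$fs /sid/$fs
done

chroot /sid apt-get update
chroot /sid apt-get dist-upgrade

umount --recursive /sid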


February 05, 2016 06:43 PM

February 02, 2016

Nathan Haines

Ubuntu Free Culture Showcase submissions are now open again!

It’s time once again for the Ubuntu Free Culture Showcase!

The Ubuntu Free Culture Showcase is a way to celebrate the Free Culture movement, where talented artists across the globe create media and release it under licenses that encourage sharing and adaptation. We're looking for content which shows off the skill and talent of these amazing artists and will greet Ubuntu 16.04 LTS users.

Not only will the chosen content be featured on the next set of pressed Ubuntu discs shared worldwide across the next two years, but it will also serve the joint purposes of providing a perfect test for new users trying out Ubuntu’s live session or new installations and celebrating the fantastic talents of artists who embrace Free content licenses.

While we hope to see contributions from the video, audio, and photographic realms, I also want to thank the artists who have provided wallpapers for Ubuntu release after release. Ubuntu 15.10 shipped with wallpapers from the following contributors:

I'm looking forward to seeing the next round of entrants, and to the difficult time I'll have picking final choices to ship with Ubuntu 16.04 LTS.

For more information, please visit the Ubuntu Free Culture Showcase page on the Ubuntu wiki.

February 02, 2016 11:33 AM

February 01, 2016

Jono Bacon

The Hybrid Desktop

OK, folks, I want to share a random idea that cropped up after a long conversation with Stuart Langridge a few weeks back. This is merely food for thought and designed to trigger some discussion.

Today my computing experience is comprised of Ubuntu and Mac OS X. On Ubuntu I am still playing with GNOME Shell and on Mac I am using the standard desktop experience.

I like both. Both have benefits and disadvantages. My Mac has beautiful hardware and anything I plug into it just works out of the box (or has drivers). While I spend most of my life in Chrome and Atom, I use some apps that are not available on Ubuntu (e.g. Bluejeans and Evernote clients). I also find multimedia is just easier and more reliable on my Mac.

My heart will always be with Linux though. I love how slick and simple Shell is and I depend on the huge developer toolchain available to me in Ubuntu. I like how customizable my desktop is and that I can be part of a community that makes the software I use. There is something hugely fulfilling about hanging out with the people who make the tools you use.

So, I have two platforms and use the best of both. The problem is, they feel like two different boxes of things sat on the same shelf. I want to jumble the contents of those boxes together and spread them across the very same shelf.

The Idea

So, imagine this (this is total fantasy, I have no idea if this would be technically feasible.)

You want the very best computing experience, so you first go out and buy a Mac. They have arguably the nicest overall hardware combo (looks, usability, battery etc) out there.

You then download a distribution from the Internet. This is shipped as a .dmg and you install it. It then proceeds to install a bunch of software on your computer. This includes things such as:

  • GNOME Shell
  • All the GNOME 3 apps
  • Various command line tools commonly used on Linux
  • An ability to install Linux packages (e.g. Debian packages, RPMs, snaps) natively

When you fire up the distribution, GNOME Shell appears (or Unity, KDE, Elementary etc) and it is running natively on the Mac, full screen like you would see on Linux. For all intents and purposes it looks and feels like a Linux box, but it is running on top of Mac OS X. This means hardware issues (particularly hardware that needs specific drivers) go away.

Because Shell is native, it integrates with the Mac side of the fence. All the Mac applications can be browsed and started from Shell. Nautilus shows your Mac filesystem.

If you want to install more software you can use something such as apt-get, snappy, or another service. Everything is pulled in and available natively.

Of course, there will be some integration points where this may not work (e.g. alt-tab might not be able to display Shell apps as well as Mac apps), but importantly you can use your favorite Linux desktop as your main desktop yet still use your favorite Mac apps and features.

I think this could bring a number of benefits:

  • It would open up a huge userbase as a potential audience. Switching to Linux is a big deal for most people. Why not bring the goodness to the Mac userbase?
  • It could be a great opportunity for smaller desktops to differentiate (e.g. Elementary).
  • It could be a great way to introduce people to open source in a more accessible way (it doesn’t require a new OS).
  • It could potentially bring lots of new developers to projects such as GNOME, Unity, KDE, or Elementary.
  • It could significantly increase the level of testing, translations and other supplemental services due to more people being able to play with it.

Of course, from a purely Free Software perspective it could be seen as a step back. Then again, with Darwin being open source and the desktop and apps you install in the distribution being open source, it would be a mostly free platform. It wouldn’t be free in the eyes of the FSF, but then again, neither is Ubuntu. 😉

So, again, just wanted to throw the idea out there to spur some discussion. I think it could be a great project to see. It wouldn’t replace any of the existing Linux distros, but I think it could bring an influx of additional folks over to the open source desktops.

So, two questions for you all to respond to:

  1. What do you think? Could it be an interesting project?
  2. If so, technically how do you think this could be accomplished?

by Jono Bacon at February 01, 2016 03:17 AM

January 31, 2016

Elizabeth Krumbach

SCALE14x

I have already written about the UbuCon Summit and Ubuntu booth at SCALE14x (14th annual Southern California Linux Expo), but the conference went far beyond Ubuntu for me!

First of all, I love this new venue. SCALE had previously been held at hotels near LAX, with all the ones I’d attended being at the Hilton LAX. It was a fine venue itself, but the conference was clearly outgrowing it even when I last attended in 2014 and there weren’t many food options around, particularly if you wanted a more formal meal. The Pasadena Convention Center was the opposite of this. Lots of space, lots of great food of all kinds and price ranges within walking distance! A whole plaza across from the venue made a quick lunch at a nice place quite doable.

It’s also worth mentioning that with over 3000 attendees this year, the conference has matured well. My first SCALE was 9x back in 2011, and with every year the growth and professionalism have continued, but without losing the feel of a community-run, regional conference that I love so much. The expo hall has continued to show a strong contingent of open source project and organization booths among the flashy company-driven ones, and even the company booths weren’t overdone. Kudos to the SCALE crew for their work and efforts that make SCALE continue to be one of my favorite open source conferences.

As for the conference itself, MJ and I were both able to attend for work, which was a nice change for us. Plus, given how much conference travel I’ve done on my own, it’s nice to travel and enjoy an event together.

Thursday was taken up pretty much exclusively by the UbuCon Summit, but Friday we started to transition into more general conference activities. The first conference-wide keynote was on Friday morning with Cory Doctorow presenting No Matter Who’s Winning the War on General Purpose Computing, You’re Losing, where he explored security and Digital Rights Management (DRM) in the exploding field of the Internet of Things. His premise was that we did largely win the open source vs. proprietary battle, but now we’re in a whole different space where DRM is threatening our safety and stifling innovation. Security vulnerabilities in devices are going undisclosed when discovered by third parties under threat of prosecution for violating DRM-focused laws which have popped up worldwide. Depending on the device, this fear of disclosure could actually result in vulnerabilities causing physical harm to someone if compromised in a malicious way. He also dove into a more dystopian future where smart devices are given away for free/cheap but then phone home and can be controlled remotely by an entity that doesn’t have your personal best interest in mind. The talk certainly gave me a lot to think about. He concluded by presenting the Apollo 1201 Project, “a mission to eradicate DRM in our lifetime” that he’s working on at the EFF, article here.

Later that morning I made my way over to the DevOpsDayLA track to present on Open Source tools for distributed systems administration. Unfortunately, the projectors in the room weren’t working. Thankfully my slides were not essential to the talk, so even though I did feel a bit unsettled to present without slides, I made it through. People even said nice things afterwards, so I think it went pretty well in spite of the technology snafu. The slides that should have been seen during the talk are available here (PDF) and since I am always asked, I do maintain a list of other open source infras. Thanks to @scalexphotos for capturing a photo during my talk.

In the afternoon I spent some time in the expo hall, where I was able to see many more familiar faces! Again, the community booths are the major draw for me, so it was great visiting with participants of projects and groups there. It was nice to swing by the Ubuntu booth to see how polished everything was looking. I also got to see Emma of System76, who I hadn’t seen in quite some time.

Friday evening had a series of Birds of a Feather (BoF) sessions. I was able to make my way over to one on OpenStack before wrapping up my evening.

Saturday morning began with a welcome from Pasadena City Council member Andy Wilson, who was enthusiastic about SCALE14x coming to Pasadena and quickly dove into his technical projects and the work being done in Pasadena around tech. I love this trend of city officials welcoming open source conferences to their area; it means a lot that the work we’re doing is being taken seriously by the cities we’re in. Then it moved into a keynote by Mark Shuttleworth on Open Source in the World of App Stores, which had many similarities to his talk at the UbuCon Summit, but was targeted more generally at how distributions can help keep pace with today’s computing that deploys “at the speed of git.”

I then went to Akkana Peck’s talk on Stupid GIMP tricks (and smart ones, too). It was a very visual talk, so I’m struggling to do it justice in written form, but she demonstrated various tools for photo editing in GIMP that I knew nothing about; I learned a lot. She concluded by talking about the features that came out in the 2.8 release and then the features planned and being worked on in the upcoming 2.9 release. Video of the talk is here. In the afternoon I attended a Kubernetes talk, noting quickly that the containers track was pretty packed throughout the conference.


Akkana Peck on GIMP

Between “hallway track” chats about everything from the Ubuntu project to the OpenStack project infrastructure tooling, Saturday afternoon also gave me the opportunity to do a bit more wandering through the expo hall. I visited my colleagues at the HPE booth and was able to see their cloud in a box. It was amusing to see the suitcase version and the Ubuntu booth with an Orange box. Putting OpenStack clouds in a single demonstration deployment for a conference is a popular thing!

My last talk of the day was by OpenStack Magnum Project Technical Lead Adrian Otto on Docker, Kubernetes, and Mesos: Compared. He walked us through some of the basics of Magnum first, then dove into each technology. Docker Swarm is good for simple tooling that you’re comfortable with and doing exactly what you tell it (imperative) and have 100s-1000s machines in the cluster. Kubernetes is more declarative (you tell it what you want, it figures out how to do it) and currently has some scaling concerns that make it better suited for a cluster of up to 200 nodes. Mesos is a more complicated system that he recommended using if you have a dedicated infrastructure team and can effectively scale to over 10k nodes. Video of the talk here

Sunday began with a keynote by Sarah Sharp on Improving Diversity with Maslow’s Hierarchy of Needs. She spoke about diversity across various angles, from income and internet bandwidth restrictions to gender and race, and the intersection of these things. There are many things that open source projects assume: unlimited ability to download software, ability for contributors to have uninterrupted “deep hack mode” time, access to fast systems to do development on. These assumptions fall apart when a contributor is paying for the bandwidth they use, is a caretaker who doesn’t have long periods without interruptions, or doesn’t have access to a fast system. Additionally, there are opportunities that are simply denied to many genders, as studies have shown that mothers and daughters don’t have as many opportunities or as much access to technology as the fathers and sons in their household. She also explored safety in a community, demonstrating how even a single sexist or racist contributor can single-handedly destroy diversity for your project by driving away potential contributors. Having a well-written code of conduct with a clear enforcement plan is also important, and she cited resources for organizations and people who could help you with that, warning that you shouldn’t roll your own. She concluded by asking audience members to recognize the problem and take action in their communities to help improve diversity. Her excellent slides (with notes) are here and a video of the talk here.

I then made my way to the Sysadmin track to see Jonah Horowitz and Albert Tobey on From Sys Admin to Netflix SRE. First off, their slides were hilarious. Lots of 80s references to things that were outdated as they made their way through how they’re doing Site Reliability Engineering (SRE) at Netflix and inside their CORE (Cloud Operations Reliability Engineering) team. In their work, they’ve moved past configuration management, preferring to deploy baked AMIs (essentially, golden images). They also don’t see themselves as “running applications for the developers” and instead empower developers to do their own releases and application-level monitoring. In this new world of managing fleets of servers rather than individual systems, they’ve worked to develop a blameless culture where they do postmortems so that anything that is found to be done manually or otherwise error-prone can be fixed so the issue doesn’t happen again. They also shared the open source tooling that they use to bypass traditional monitoring systems and provide SREs with a high level view of how their system is working, noting that no one in the organization “knows everything” about the infrastructure. This tooling includes Spinnaker, Atlas and Vector, along with their well-known Simian Army, which services within Netflix must run (unless they have a good reason not to) to test tolerance of random instance failures. Video of the talk can be found here and slides here.

After lunch I made my way to A fresh look at SELinux… by Daniel Walsh. I’d seen him speak on SELinux before, and found his talk valuable then too. This time I was particularly interested in how it’s progressed in RHEL7/CentOS7, like the new rules for a file type, such as knowing what permissions /home/user/.ssh should have and having a semanage command to set those permissions to that default instead of doing it manually. I also learned about semanage -e (equivalency) to copy permissions from one place to another and the new mv -Z which moves things while retaining the SELinux properties. Finally, I somehow didn’t have a good grasp on the improvements to the man pages; doing things like `man httpd_selinux` works and is very helpful! I was also amused to learn about stopdisablingselinux.com (especially since our team does not turn it off, and that took some work on my part!). In closing, there’s also an SELinux Coloring Book (which I’ve written about before), and though I didn’t get to the session in time to get one, MJ picked one up for me in the expo hall. Video of the talk here
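
For anyone who wants to poke at these on their own RHEL7/CentOS7 machine, here’s a rough sketch of the kinds of commands covered; the exact invocations are my guesses rather than a transcript of the talk:

$ man httpd_selinux                                # the improved per-domain man pages
$ semanage fcontext -l | grep '/home/[^/]*/\.ssh'  # show the default context rules for ~/.ssh
$ restorecon -R -v ~/.ssh                          # reset the directory to those defaults
$ semanage fcontext -a -e /home /export/home       # equivalency: label /export/home like /home
$ mv -Z notes.txt /srv/www/html/                   # move a file, handling its SELinux context for the new location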

With that, we were at the last talk of the conference. I went over to Dustin Kirkland’s talk on “adapt install [anything]” on your Ubuntu LTS server/desktop! Adapt is a wrapper around LXD containers that allows you, as an unprivileged user, to install Ubuntu software from various releases and run it locally on your system. The script handles provisioning the container, many default settings and keeping it updated automatically, so you really can “adapt install” and then run a series of adapt commands to interact with it as if it were installed locally. It all reminded me of the pile of chroot-building scripts I had back when I was doing Debian packaging, but more polished than mine ever were! He wrote a blog post following up his talk here: adapt install [anything], which includes a link to his slides. Video from the talk here (link at 4 hours 42 minutes).

With the conference complete, it was sad to leave, but I had an evening flight out of Burbank. Amusingly, even my flight was full of SCALE folks, so there were some fun chats in the boarding area before our departure.

Huge thanks to everyone who made SCALE possible, I’m looking forward to next year!

More photos from SCALE14x here: https://www.flickr.com/photos/pleia2/albums/72157663821501532

by pleia2 at January 31, 2016 09:15 PM

Akkana Peck

Setting mouse speed in X

My mouse died recently: the middle button started bouncing, so a middle button click would show up as two clicks instead of one. What a piece of junk -- I only bought that Logitech some ten years ago! (Seriously, I'm pretty amazed how long it lasted, considering it wasn't anything fancy.)

I replaced it with another Logitech, which turned out to be quite difficult to find. Turns out most stores only sell cordless mice these days. Why would I want something that depends on batteries to use every day at my desktop?

But I finally found another basic corded Logitech mouse (at Office Depot). Brought it home and it worked fine, except that the speed was way too fast, much faster than my old mouse. So I needed to find out how to change mouse speed.

X11 has traditionally made it easy to change mouse acceleration, but that wasn't what I wanted. I like my mouse to be fairly linear, not slow to start then suddenly zippy. There's no X11 property for mouse speed; it turns out that to set mouse speed, you need to call it Deceleration.

But first, you need to get the ID for your mouse.

$ xinput list| grep -i mouse
⎜   ↳ Logitech USB Optical Mouse                id=11   [slave  pointer  (2)]

Armed with the ID of 11, we can find the current speed (deceleration) and its ID:

$ xinput list-props 11 | grep Deceleration
        Device Accel Constant Deceleration (259):       3.500000
        Device Accel Adaptive Deceleration (260):       1.000000

Constant deceleration is what I want to set, so I'll use that ID of 259 and set the new deceleration to 2:

$ xinput set-prop 11 259 2

That's fine for doing it once. But what if you want it to happen automatically when you start X? Those constants might all stay the same, but what if they don't?

So let's build a shell pipeline that should work even if the constants aren't.

First, let's get the mouse ID out of xinput list. We want to pull out the digits immediately following "id=", and nothing else.

$ xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/'
11

Save that in a variable (because we'll need to use it more than once) and feed it in to list-props to get the deceleration ID. Then use sed again, in the same way, to pull out just the thing in parentheses following "Deceleration":

$ mouseid=$(xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/')
$ xinput list-props $mouseid | grep 'Constant Deceleration'
        Device Accel Constant Deceleration (262):       2.000000
$ xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/'
262

Whew! Now we have a way of getting both the mouse ID and the ID for the "Constant Deceleration" parameter, and we can pass them in to set-prop with our desired value (I'm using 2) tacked onto the end:

$ xinput set-prop $mouseid $(xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/') 2

Add those two lines (setting the mouseid, then the final xinput line) wherever your window manager will run them when you start X. For me, using Openbox, they go in .config/openbox/autostart. And now my mouse will automatically be the speed I want it to be.
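
For reference, here are those two lines as they go into ~/.config/openbox/autostart (with 2 as the target deceleration, as above):

mouseid=$(xinput list | grep Mouse | sed 's/.*id=\([0-9]*\).*/\1/')
xinput set-prop $mouseid $(xinput list-props $mouseid | grep 'Constant Deceleration' | sed 's/.* Deceleration (\([0-9]*\)).*/\1/') 2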

January 31, 2016 08:42 PM

January 30, 2016

Elizabeth Krumbach

Ubuntu at SCALE14x

I spent a long weekend in Pasadena from January 21-24th to participate in the 14th Annual Southern California Linux Expo (SCALE14x). As I mentioned previously, a major part of my attendance was focused on the Ubuntu-related activities. Wednesday evening I joined a whole crowd of my Ubuntu friends at a pre-UbuCon meet-and-greet at a wine bar (all ages were welcome) near the venue.

It was at this meet-and-greet where I first got to see several folks I hadn’t seen since the last Ubuntu Developer Summit (UDS) back in Copenhagen in 2012. Others I had seen recently at other open source conferences and still more I was meeting for the first time, amazing contributors to our community who I’d only had the opportunity to get to know online. It was at that event that the excitement and energy I used to get from UDS came rushing back to me. I knew this was going to be a great event.

The official start of this first UbuCon Summit began Thursday morning. I arrived bright and early to say hello to everyone, and finally got to meet Scarlett Clark of the Kubuntu development team. If you aren’t familiar with her blog and are interested in the latest updates to Kubuntu, I highly recommend it. She’s also one of the newly elected members of the Ubuntu Community Council.


Me and Scarlett Clark

After morning introductions, we filed into the ballroom where the keynote and plenaries would take place. It was the biggest ballroom of the conference venue! The SCALE crew really came through with support of this event, it was quite impressive. Plus, the room was quite full for the opening and Mark Shuttleworth’s keynote, particularly when you consider that it was a Thursday morning. Richard Gaskin and Nathan Haines, familiar names to anyone who has been to previous UbuCon events at SCALE, opened the conference with a welcome and details about how the event had grown this year. Logistics and other details were handled now too, and then they quickly went through how the event would work, with a keynote, series of plenaries and then split User and Developer tracks in the afternoon. They concluded by thanking sponsors and various volunteers and Canonical staff who made the UbuCon Summit a reality.


UbuCon Summit introduction by Richard Gaskin and Nathan Haines

The welcome, Mark’s keynote and the morning plenaries are available on YouTube, starting here and continuing here.

Mark began his keynote by acknowledging the technical and preference diversity in our community, from desktop environments to devices. He then reflected upon his own history in Linux and open source, starting in university when he first installed Linux from a pile of floppies. It’s been an interesting progression to see where things were twenty years ago, and how many of the major tech headlines today are driven by Linux and Ubuntu, from advancements in cloud technology to self-driving cars. He continued by talking about success on a variety of platforms; from the tiny Raspberry Pi 2 to supercomputers and the cloud, Ubuntu has really made it.

With this success story, he leapt into the theme of the rest of his talk: “Great, let’s change.” He dove into the idea that today’s complex, multi-system infrastructure software is “too big for apt-get” once you consider the relationships and dependencies between services. Juju is what he called “apt-get for the cloud/cluster,” and he explained how LXD, the next evolution of LXC running as a daemon, gives developers the ability to run a series of containers to test deployments of some of these complex systems. This means that just as the developers and systems engineers of the 90s and 00s were able to use open source software to deploy demonstrations of standalone software on their laptops, containers allow the students of today to deploy complex systems locally.
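
As a rough illustration of the workflow he was describing (this is my own minimal sketch, not something from the talk, and it assumes a machine with LXD installed and the default "ubuntu:" image remote available):

# Launch a couple of lightweight Ubuntu containers to stand in for separate services
lxc launch ubuntu:trusty web1
lxc launch ubuntu:trusty db1

# See what is running
lxc list

# Run commands inside a container, for example to start installing a service to test
lxc exec web1 -- apt-get update

# Throw the test environment away when finished
lxc stop web1 db1
lxc delete web1 db1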

He then talked about Snappy, the new software packaging tooling. His premise was that even a six month release cycle is too long when many people are continuously delivering software from sources like GitHub. Most systems have a solid foundation of packages we rely upon, plus a handful of newer tools that can be packaged quickly with Snappy rather than going through the traditional Debian packaging route, which is considerably more complicated. It was interesting to listen to this; as a former Debian package maintainer myself, I always wanted to believe that we could teach everyone to do software packaging. However, watching these efforts play out in the community’s work with app developers, it became clear from their reluctance and the backlog felt by the App Review Board that it really wasn’t working. Snappy moves us away from PyPI, PPAs and such into an easier, but still packaged and managed, way to handle software on our systems. It’ll be fascinating to see how this goes.


Mark Shuttleworth on Snappy

He concluded by talking about the popular Internet of Things (IoT) and how Ubuntu Core with Snappy is so important here. DJI, “the market leader in easy-to-fly drones and aerial photography systems,” now offers an Ubuntu-driven drone. The Open Source Robotics Institute uses Ubuntu. GE is designing smart kitchen appliances powered by Ubuntu, and many (all?) of the publicly known self-driving cars use Ubuntu somewhere inside them. There was also a business model here: a company produces the hardware and the minimal feature set that comes with it, sells a more advanced version itself, and leaves room for industry-expert third parties to build further upon it and sell industry-specific software.

After Mark’s talk there were a series of plenaries that took place in the same room.

First up was Sergio Schvezov, who followed Mark’s keynote nicely with a demo of Snapcraft, the tool used to turn software into a .snap package for Ubuntu Core.

Next up was Jorge Castro, who gave a great talk about the state of gaming on Ubuntu, which he summed up as “Not bad.” Having just had this discussion with my sister, the timing was great for me. On the day of his talk there were 1,516 games on Steam that would run natively on Linux, a nice selection of which are modern games that are new and exciting across multiple platforms today. He acknowledged the pre-made Steam Boxes but also made the case for homebrewed Steam systems with graphics card recommendations, explaining that Intel does fine, AMD still lags behind in high performance with their open source drivers, and several models of NVIDIA cards do very well today (from low to high quality, and cost: 750Ti, 950, 960, 970, 980, 980Ti). He also passed a Linux-compatible controller around the audience. He concluded by talking about some issues remaining with Linux gaming, including regressions in drivers that cause degraded performance, the general performance gap compared to some other gaming systems, and the lingering stigma that there are “no games” on Linux, which talks like this are seeking to reverse.

Plenaries continued with Didier Roche introducing Ubuntu Make, a project which makes it much easier to turn Ubuntu into a developer platform with several SDKs, reducing bootstrapping time for developers. His blog has a lot of great posts on the tooling. The last talk of the morning was by Scarlett Clark, who gave us a quick update on Kubuntu development, explaining that the team had recently joined forces with KDE packagers in Debian to share resources more effectively in their work.

It was then time for the group photo! It included my xerus, and I had a nice chat (and selfie!) with Carla Sella as we settled in for the picture.


Me and Carla Sella

In the afternoon I attended the User track, starting off with Nathan Haines on The Future of Ubuntu. He talked about what device convergence means for Ubuntu and warded off concerns that the work on the phone was being done in isolation and wouldn’t help the traditional (desktop, server) Ubuntu products. With Ubuntu Core and Snappy, he explained, all the work done on phones is being rolled back into progress made on the other systems, and even IoT devices, that will use them in the future. Following Nathan was the Ubuntu Redux talk by Jono Bacon. His talk could largely be divided into two parts: the history of Ubuntu and how we got here, and five recommendations for the Ubuntu community. He had lots of great stories and photos, including one of a very young Mark, and moved right along to today with Unity 8 and the convergence story. His five recommendations were interesting, so I’ll repeat them here:

  1. Focus on core opportunities. Ubuntu can run anywhere, but should it? We have finite resources, focus efforts accordingly.
  2. Rethink what community in Ubuntu is. We didn’t always have Juju charmers and app developers, but they are now a major part of our community. Understand that our community has changed and adjust our vision as to where we can find new contributors.
  3. Get together more in person. The Ubuntu Online Summit works for technical work, but we’ve missed out on the human component. In person interactions are not just a “nice to have” in communities, they’re essential.
  4. Reduce ambiguity. In a trend that would continue in our leadership panel the next day, some folks (including Jono) argue that there is still ambiguity around Intellectual Property and licensing in the Ubuntu community (Mark disagrees).
  5. Understand people who are not us.

Nathan Haines on The Future of Ubuntu

The next presentation was my own, Building a career with Ubuntu and FOSS, where I drew upon examples from my own career and those of others I’ve worked with in the Ubuntu community to share recommendations for folks looking to contribute to Ubuntu and FOSS as a way to develop skills and tools for their career. Slides here (PDF). David Planella on The Ubuntu phone and the road to convergence followed my talk. He walked audience members through the launch plan for the phone, going through the device launch with BQ for Ubuntu enthusiasts, the second phase for “innovators and early adopters” where they released the Meizu devices in Europe and China, and how they’re tackling phase three: general customer availability. He talked about the Ubuntu Phone Insiders, a group of 30 early-access individuals from a diverse crowd who provided early feedback and shared details (via blog posts and social media) with others. He then gave a tour of the phones themselves, including how scopes (“like mini search engines on your phone”) change how people interact with their device. He concluded with a note that the SDK for phones is available at developer.ubuntu.com, and that they’re working to make it easy for developers to upload and distribute their applications.

Video from the User track can be found here. The Developer track was also happening, video for that can be found here. If you’re scanning through these to find a specific talk, note that each is 1 hour long.

Presentations for the first day concluded with a Q&A with Richard Gaskin and Nathan Haines back in the main ballroom. Then it was off to the Thursday evening drinks and appetizers at Porto Alegre Churrascaria! Once again, a great opportunity to catch up with friends old and new in the community. It was great running into Amber Graner and getting to talk about our respective paid roles these days; we even touched upon key things we worked on in the Ubuntu community that helped us get there.

The UbuCon Summit activities continued after a SCALE keynote with an Ubuntu Leadership panel, which I participated in along with Oliver Ries, David Planella, Daniel Holbach, Michael Hall, Nathan Haines and José Antonio Rey, with Jono Bacon as moderator. Jono had prepared a great set of questions, exploring the strengths and weaknesses in our community, things we’re excited about and eager to work on, and more. We also took questions from the audience. Video for this panel and the plenaries that followed, which I had to miss in order to give a talk elsewhere, is available here. The link takes you to 1hr 50min in, where the Leadership panel begins.

The afternoon took us off into unconference mode, which allowed us to direct our own conference setup. Due to the aforementioned talk I was giving elsewhere, I wasn’t able to participate in scheduling, but I did attend a couple sessions in the afternoon. The first, proposed by Brendan Perrine, covered strategies for keeping the Ubuntu documentation up to date; we also talked about the status of the Community Help wiki, which has been locked down due to spam for nearly a month(!). I then joined cm-t arudy to chat about an idea the French team is floating around to have people quickly share stories and photos about Ubuntu in some kind of community forum. The conversation was a bit tool-heavy, but everyone was also conscious of how it would need to be moderated. I hope to see something come of this; it sounds like a great project.

With the UbuCon Summit coming to a close, the booth was the next great task for the team. I couldn’t make time to participate this year, but it featured lots of great goodies and a fleet of contributors who did a fantastic job of talking to people as the crowds continued to flow through each day.

Huge thanks to everyone who spent months preparing for the UbuCon Summit and the booth in the SCALE14x expo hall. It was a really amazing event that I was proud to be a part of. I’m already looking forward to the next one!

Finally, I took responsibility for the @ubuntu_us_ca Twitter account throughout the weekend. It was the first time I’ve done such a comprehensive live-tweeting of an event from a team/project account. I recommend a browse through the tweets if you’re interested in hearing more from other great people live-tweeting the event. It was a lot of fun, but also surprisingly exhausting!

More photos from my time at SCALE14x (including lots of Ubuntu ones!) here: https://www.flickr.com/photos/pleia2/albums/72157663821501532

by pleia2 at January 30, 2016 11:40 PM

Jono Bacon

Happy Birthday, Stuart

About 15 years ago I met Stuart ‘Aq’ Langridge when, with his trademark bombastic personality and humor, he walked into the new Wolverhampton Linux Users Group I had just started. Ever since those first interactions we have become really close friends.

Today Stuart turns 40 and I just wanted to share a few words about how remarkable a human being he is.

Many of you who have listened to Stuart on Bad Voltage, seen him speak, worked with him, or socialized with him will know him for his larger than life personality. He is funny, warm, and passionate about his family, friends, and technology. He is opinionated, and many of you will know him for the amusing, insightful, and tremendously articulate way in which he expresses his views.

He is remarkably talented and has an incredible level of insight and perspective. He is not just a brilliant programmer and software architect, but he has a deft knowledge and understanding of people, how they work together, and the driving forces behind human interaction. What I have always admired is that while bombastic in his views, he is always open to fresh ideas and new perspectives. For him life is a journey and new ways of looking at the road are truly thrilling for him.

As I have grown as a person in my career, with my family, and particularly when moving to America, he has always supported yet challenged me. He is one of those rare friends that can enthusiastically validate great steps forward yet, with the same enthusiasm, illustrate mistakes too. I love the fact that we have a relationship that can be so open and honest, yet underlined with respect. It is his personality, understanding, humor, thoughtfulness, care, and mentorship that will always make him one of my favorite people in the world.

Stuart, I love you, pal. Have an awesome birthday, and may we all continue to cherish your friendship for many years to come.

by Jono Bacon at January 30, 2016 09:05 PM

January 29, 2016

Jono Bacon

Heading Out To linux.conf.au

On Saturday I will be flying out to linux.conf.au taking place in Geelong. Because it is outrageously far away from where I live, I will arrive on Monday morning. :-)

I am excited to be joining the conference. The last time I made the trip was sadly way back in 2007 and I had an absolutely tremendous time. Wonderful people, great topics, and well worth the trip. Typically I have struggled to get out with my schedule, but I am delighted to be joining this year.

I will also be delivering one of the keynotes this year. My keynote will be on Thu 4th Feb 2016 at 9am. I will be delving into how we are at potentially the most exciting time ever for building strong, collaborative communities, and sharing some perspectives on how we empower a new generation of open source contributors.

So, I hope to see many of you there. If you want to get together for a meeting, don’t hesitate in getting in touch. You can contact me at jono@github.com for GitHub related discussions, or jono@jonobacon.org for everything else. See you there!

by Jono Bacon at January 29, 2016 07:10 AM

January 23, 2016

iheartubuntu

Brave Browser on Ubuntu

Brendan Eich, one of the co-founders of the Mozilla project (the Firefox browser), is developing a new web browser that promises to block intrusive ads and third-party trackers. Enter Brave, built for Linux, Windows, OS X, iOS and Android...

https://brave.com/

Brave's browser, still in early development, speeds up web pages by stripping out not just ads but also other page elements that track online behavior and web elements used to deliver ads. By removing advertisements and trackers, Brave's browser speeds up page loading considerably: it loads pages 2x to 4x faster than other smartphone browsers and up to 2x faster than other browsers for personal computers.


Blocking ads, however, could be a challenge for the Brave team, as advertising helps fund websites and bloggers' content. The browser's workaround is to eventually display new ads from its own pool of advertisers and to connect Bitcoin as a method for users to pay website owners directly for the content they currently view for free. It's a new way of doing things for sure, and it could disrupt Google's ad network, which is Google's biggest source of revenue.

Brave has been built out of the open source Chromium browser, which is the foundation for Google's Chrome browser. An interesting choice, considering Brave is essentially trying to take market share away from Google. Have Eich and his Brave team averted the ad blocking war, or started a new kind of war?

All of Brave's source packages have been made available on GitHub. We managed to compile it on Ubuntu 16.04, but ran into problems. The GitHub page includes a README file for installation; however, it's really incomplete as of 1/22/16. We also ran into problems because the newest version of Node.js (5.x) isn't yet supported on our newest version of Ubuntu. Installation of Node.js 5.x may work fine on Ubuntu 15.10 or older, though, which should get the Brave browser installed on Ubuntu.
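
For the adventurous, building it follows the usual Node.js project pattern, roughly along these lines. This is only a sketch, assuming a working Node.js 5.x and npm are already installed; the repository's README is the authority here and the exact steps may change.

# Grab the source
git clone https://github.com/brave/browser-laptop.git
cd browser-laptop

# Install the JavaScript dependencies declared in package.json
npm install

# Launch the development build (assumes the project defines an npm start script)
npm start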

Keep your eyes open on their GitHub for updated installation information such as DEB files or PPAs for an easier way to install Brave.

https://github.com/brave/browser-laptop

by iheartubuntu (noreply@blogger.com) at January 23, 2016 04:56 PM

January 20, 2016

Elizabeth Krumbach

December events and a pair of tapestries

In my last post I talked some about the early December tourist stuff that I did. I also partook in several events that gave me a nice, fun distraction when I was looking for some down time after work and book writing.

It’s no secret that I like good food, so when a spot opened up with some friends to check out Lazy Bear here in San Francisco, I was pretty eager to go. They had two seatings per night; everyone sat together at long tables and was served each course at the same time. We had to skip the pork selections, but I was happy with the substitutions they provided for us. They also gave us pencils and notebooks to take notes about the dishes. An overall excellent dinner.

On December 2nd MJ and I met up with my friend Amanda to see Randall Munroe of XKCD fame talk about his new book, Thing Explainer. In this book he talks about complicated concepts using only the 1000 most common words. He shared stories about the process of writing the book and some things he had a lot of fun with. It was particularly amusing to hear how much he used the word “bag” when explaining the human body. We waited around pretty late for what ended up being a marathon signing session; huge thanks to him for staying around so we could get our copy signed!

The very next day I scored a ticket to a local Geek Girl Dinner here in SOMA. I’d only been to one before, and going alone always means I’m a bit on edge nervousness-wise. But it was a Star Wars themed dinner and I do enjoy hearing stories from other women in tech, so I donned my R2-D2 hoodie and made my way over. Turns out, not many people were there to celebrate Star Wars, but they did have R2-D2 cupcakes and some cardboard cutouts of the new characters, so they pulled it off. The highlight of the night for me was a technical career panel of women who were able to talk about their varied entry points into tech. As someone with a non-traditional background myself, it’s always inspiring to hear from other women who made major career changes after being inspired by technology in some way or another.


Twilio tech careers panel

I mentioned in an earlier post that our friend Danita was in town recently. The evening she arrived I was neck deep in book work… and the tail end of the Bring Back MST3K Kickstarter campaign. They hosted five hours of a telethon-style variety show with magicians, musicians, comedians and various cameos by past and future MST3K actors, writers and robots. I’m pretty excited about this reboot; MST3K was an oddly important show when I was a youth. A game based on riffing is what first brought me on to an IRC network and introduced me to a whole host of people who made major impacts in my life. We all loved MST3K. Today I still enjoy Rifftrax (including the live show I went to last week). In spite of technical difficulties it was fun to prop up my tablet while working and watch the stream of their final fundraising push as they broke the record for the biggest TV Kickstarter campaign ever. Congratulations everyone, I am delighted to have donated to the campaign and look forward to the new episodes!

Hanukkah was also in December. Unfortunately MJ had to be out of town for the first few days, so we did a Google Hangout video call each evening. I set the tablet up on the counter as I lit the lights. I also took pictures each night so I could share the experience further.

At the end of the month MJ had a couple of his cousins in town to visit over the Christmas holiday. I didn’t take much time off, but I did tag along on select adventures, enjoying several great meals together and snapping a bunch of tourist photos of the Golden Gate Bridge (album here). We also made our way to Pier 39 one afternoon to visit sea lions and MJ and I made a detour to the Aquarium of the Bay while the girls did some shopping. The octopus and sea otters were particularly lively that evening (album here) and I snapped a couple videos: Giant Pacific Octopus and River otters going away for the night. Gotta love the winter clothes the human family was wearing in the otter video, we had a brisk December!

To conclude, I’ll leave you with a pair of Peruvian tapestries that we picked up in Cusco in August. Peru was one of my favorite adventures to date, and it’s nice that we were able to bring home some woven keepsakes from the Center for Traditional Textiles. We bundled them together in a carry-on and then brought them to our local framing shop and art gallery for framing. It took a few months, but I think it was worth it; they did a very nice job.

And now that I’ve taken a breather, it’s time to pack for SCALE14x, which we’re leaving for tomorrow morning. I also need to see if I can tie off some loose ends with this chapter I’m working on before we go.

by pleia2 at January 20, 2016 02:04 AM

January 17, 2016

Elizabeth Krumbach

Local tourist: A mansion, some wine and the 49ers

Some will say that there are tourists and there are travelers. The distinction tends to be that tourists visit the common places and take selfies, while travelers wander off the beaten path and take a more peaceful and thoughtful approach to enjoying their chosen destination.

I’m a happy tourist. Even when I’m at home.

Back in December our friend Danita was in town and I took advantage of this fact by going full on Bay Area tourist with her.

Our first adventure was going down to the Winchester Mystery House in San Jose. Built continuously for decades by the widow Sarah Winchester (of Winchester rifle fame), the house is a maze of uneven floors, staircases that go nowhere and doors that could drop you a story or two if you don’t watch when stepping through them. It’s said that the spiritualist movement heavily influenced Mrs. Winchester’s decisions, from moving to California after her husband’s death to the need to continuously be doing construction. She had a private seance room, and after the house survived the 1906 earthquake (which destroyed the tower that had been a key feature of the house), she followed spirit-driven guidance: she stopped work on the main, highly decorated front part of the house and only worked on the back half, not even fixing up the sections damaged in the earthquake.

Door to nowhere
A “door to nowhere” in the Winchester House

There certainly are bits about this place that remind me of a tourist trap, including the massive gift shop and ghost stories. But it wasn’t shopping, spiritualism or ghosts that brought me here. As an armchair history and documentary geek, I’ve known about the Winchester House for years. When I moved to the Bay Area almost six years ago, it immediately went on my “to visit” list. The beautiful Victorian architecture, the oddity that was how she built it and her interest in the latest turn-of-the-20th-century innovations are what interested me. She had three elevators in the house, of varying types as the technology was developed, providing a fascinating snapshot into approximately 20 years of early elevator innovation history. She was an early adopter of electricity, and various types of the latest time and energy-saving gadgets and tools were installed to help her staff get their work done. Plus, in addition to having a car (with a chauffeur, obviously), the garage where it was kept had a car wash apparatus built in! We went on a behind-the-scenes tour to visit many of these things. The estate originally covered many acres, allowing for a large fruit orchard; fruit was actually processed on site, so we got to see the massive on-site evaporator used to prepare the fruit for distribution.


Fruit evaporator at Winchester House

When Mrs. Winchester died, her belongings were carefully distributed among her heirs, but no arrangements were made for the house. Instead, curious neighbors got together and made sure it was saved from demolition, effectively turning it into a tourist attraction just a few years after her passing. Still privately-owned, today it’s listed on the U.S. National Register of Historic Places.

Photos weren’t allowed inside the house, but I snapped away outside: https://www.flickr.com/photos/pleia2/albums/72157660011104133

My next round of local touristing took us north, to Sonoma county for some wine tasting! We’re members of a winery up there, so we had our shipment to pick up too, but it’s also always fun bringing our friends to our favorite stops in wine country. We started at Imagery Winery where we picked up our wine and enjoyed tastings of several of their sweeter wines, including their port. From there we picked up fresh sandwiches at a deli and grocery store before making our way to Benziger Family Winery, where MJ and I got engaged back in 2011. We ate lunch before the rain began and then went inside to do some more wine tastings. Thankfully, the weather cleared up before our 3PM tour, where we got to see the vineyards, their processing area and inside the wine caves. It was cold though, in the 40s with a stiff breeze throughout the day. Our adventure concluded with a stop at Jacuzzi Family Vineyards where we tasted some olive oils, vinegar and mustard.

More photos from our Sonoma adventure here: https://www.flickr.com/photos/pleia2/albums/72157661706977879

In slightly less tourism and more local experience, the last adventure I went on with Danita was a trip down the bay (via Amtrak) to the brand new NFL stadium for the 49ers on Sunday, December 20th. I’m not into football, but going to an NFL game was something I wanted to experience, particularly since this brand new stadium is the one the Super Bowl will be played in a few weeks from now. Nice experience to have! The forecast called for rain, but we lucked out and it was merely cold (40s and 50s). I picked up a winter hat there at the stadium, and they appeared to be doing brisk business for us Californians who are not accustomed to the chilly weather. We got to our seats before all the pre-game activities began, of which there are many; I had no idea the kind of pomp that accompanies a football game! We had really nice seats right next to the field, so close that Danita was able to find us upon watching game footage later:

The game itself? I am still no football fan. As someone who doesn’t watch much, I’ll admit that it was a bit hard for me to follow. Thankfully Danita is a big fan so she was able to explain things to me when I had questions. And regardless of the sport, it is fun to be piled into a stadium with fans. Hot dogs and pretzels, cheering and excitement, all good for the human spirit. I also found the cheerleaders to be a lot of fun; for all the stopping and starting the football players did, the cheerleaders were active throughout the game. I also learned that the stadium was near the San Jose airport, so I may have taken a lot of pictures of planes flying over the stadium. The halftime break featured some previous Super Bowl 49ers from the 80s; Joe Montana was among them. Even as someone who doesn’t pay attention to football, I recognized him!


Airplane, cheerleaders and probably some football happening ;)

The Amtrak trip home was also an adventure, but not the good kind. Our train broke down and we had to be rescued by the next train, an hour behind us. There were high spirits among our fellow passengers though… and lots of spirits, the train bar ran out of champagne. It was raining by the time we got on the next train and so we had a bit of a late and soggy trip back. Still, all in all I’m glad I went.

More photos from the game here: https://www.flickr.com/photos/pleia2/albums/72157662674446015

by pleia2 at January 17, 2016 08:01 PM

Color me Ubuntu at UbuCon Summit & SCALE14x

This week I’ll be flying down to Pasadena, California to attend the first UbuCon Summit, which is taking place at the Fourteenth Annual Southern California Linux Expo (SCALE14x). The UbuCon Summit was the brainchild of meetings we had over the summer that expressed concern over the lack of in-person collaboration and connection in the Ubuntu community since the last Ubuntu Developer Summit back in 2012. Instead of creating a whole new event, we looked at the community-run UbuCon events around the world and worked with the organizers of the one at SCALE14x to bring in funding and planning help from Canonical and travel assistance for project members and speakers, providing a full two days of conference and unconference content.

UbuCon Summit

As an attendee of and speaker at these SCALE UbuCons for several years, I’m proud to see the work that Richard Gaskin and Nathan Haines have put into this event over the years turn into something bigger and more broadly supported. The event will feature two tracks on Thursday, one for Users and one for Developers. Friday will begin with a panel and then lead into an unconference all afternoon with attendee-driven content (don’t worry if you’ve never done an unconference before, a full introduction on how to participate will be provided after the panel).

As we lead up to the UbuCon Summit (you can still register here, it’s free!) on Thursday and Friday, I keep learning that more people from the Ubuntu community will be attending, several of whom I haven’t seen since that last Developer Summit in 2012. Mark Shuttleworth will be coming in to give a keynote for the event, along with various other speakers. On Thursday at 3PM, I’ll be giving a talk on Building a Career with Ubuntu and FOSS in the User track, and on Friday I’ll be one of several panelists participating in an Ubuntu Leadership Panel at 10:30AM, following the morning SCALE keynote by Cory Doctorow. Check out the full UbuCon schedule here: http://ubucon.org/en/events/ubucon-summit-us/schedule/

Over the past few months I’ve been able to hop on some of the weekly UbuCon Summit planning calls to provide feedback from folks preparing to participate and attend. During one of our calls, Abi Birrell of Canonical held up an origami werewolf and said she’d be sending along instructions for making one. It turns out that back in October the design team held a competition that included origami instructions and gave an award for creating an origami werewolf. I joked that I didn’t listen to the rest of the call after seeing the origami werewolf, I had already gone into planning mode!

With instructions in hand, I hosted an Ubuntu Hour in San Francisco last week where I brought along the instructions. I figured I’d use the Ubuntu Hour as a testing ground for UbuCon and SCALE14x. Good news: We had a lot of fun, it broke the ice with new attendees and we laughed a lot. Bad news: We’re not very good at origami. There were no completed animals at the end of the Ubuntu Hour!

Origami werewolf attempt
The xerus helps at werewolf origami

With 40 steps to create the werewolf, only an hour, and a crowd inexperienced with origami, it was probably not the best activity if we wanted animals at the end, but it did give me a set of expectations. The success of how fun it was to try (and even fail) did get me thinking, though: what other creative things could we do at Ubuntu events? Then I read an article about adult coloring books. That’s it! I shot an email off to Ronnie Tucker to see if he could come up with a coloring page. Most people in the Ubuntu community know Ronnie as the creator of Full Circle Magazine: the independent magazine for the Ubuntu Linux community, but he’s also a talented artist whose skills were a perfect match for this task. Lucky for me, it was a stay-home snowy day in Glasgow yesterday and within a couple hours he had a werewolf draft to me. By this morning he had a final version ready for printing in my inbox.

Werewolf coloring page

You can download the Creative Commons-licensed original here to print your own. I have printed off several (and ordered some packets of crayons) to bring along to the UbuCon Summit and Ubuntu booth in the SCALE14x expo hall. I’m also bringing along a bunch of origami paper, so people can try their hand at the werewolf… and unicorn too.

Finally, lest we forget that my actual paid job is as a systems administrator on the OpenStack Infrastructure team, I’m also doing a talk at DevOpsDayLA on Open Source tools for distributed systems administration. If you think I geek out about Ubuntu and coloring werewolves, you should see how I act when I’m talking about the awesome systems work I get to do at my day job.

by pleia2 at January 17, 2016 06:32 PM

January 14, 2016

Akkana Peck

Snow hiking

[Akk on snowshoes crossing the Jemez East Fork]

It's been snowing quite a bit! Radical, and fun, for a California ex-pat. But it doesn't slow down the weekly hiking group I'm in. When the weather turns white, the group switches to cross-country skiing and snowshoeing.

A few weeks ago, I tried cross-country skiing for the first time. (I've downhill skied a handful of times, so I know how, more or less, but never got very good at it. Ski areas are way too far away and way too expensive in California.) It was fun, but I have a chronic rotator cuff problem, probably left over from an old motorcycle injury, and found my shoulder didn't deal well with skiing. Well, the skiing was probably fine. It was probably more the falling and trying to get back up again that it didn't like.

So for the past two weeks I've tried snowshoes instead. That went just fine. It doesn't take much learning: it's just like hiking, except a little bit harder work remembering not to step on your own big feet. "Bozo goes hiking!" Dave called it, but it isn't nearly as Bozo-esque as I thought it would be.

Last week we snowshoed from a campground out to the edge of Frijoles Canyon, in a snowstorm most of the way, and ice fog -- sounds harsh when described like that, but it was lovely, and we were plenty warm when we were moving. This week, we followed the prettiest trail in the area, the East Fork of the Jemez River. In summer, it's a vibrantly green meadow with the sparkling creek snaking through it. In winter, it turns into a green and sparkling white forest. Someone took a photo of me snowshoeing across one of the many log bridges spanning the East Fork. You can't see any hint of the river itself -- it's buried in snow.

But if you hike in far enough, there's a warm spring: we're on the edge of the Valles Caldera, an old supervolcano that still has plenty of low-level geothermal activity left. The river is warm enough here that it's still running even in midwinter ... and there was a dipper there. American dippers are little birds that dive into creeks and fly under the water in search of food. They're in constant motion, diving, re-emerging, bathing, shaking off, and this dipper went about its business fifteen feet from where we were standing watching it. Someone had told me that he saw two dippers at this spot yesterday, but we were happy to get such a good look at even one.

We had lunch in a sunny spot downstream from the dipper, then headed back to the trailhead. A lovely way to spend a winter day.

January 14, 2016 02:01 AM

January 11, 2016

Jono Bacon

SCALE14x Plans

In a week and a half I am flying out to Pasadena to the SCALE14x conference. I will be there from the evening of Wed 20th Jan 2016 to Sun 24th Jan 2016.

SCALE is a tremendous conference, as I have mentioned many times before. This is a busy year for me, so I wanted to share what I will be up to:

  • Thurs 21st Jan 2016 at 2pm – in Ballroom A – Ubuntu Redux – as part of the UbuCon Summit I will be delivering a presentation about the key patterns that have led Ubuntu to where it is today and my unvarnished perspective on where Ubuntu is going and what success looks like.
  • Thurs 21st Jan 2016 at 7pm – in Ballroom DE – FLOSS Reflections – I am delighted to be a part of a session that looks into the past, present, and future of Open Source. The past will be covered by the venerable Jon ‘Maddog’ Hall, the present by myself, and the future by Keila Banks.
  • Fri 22nd Jan 2016 at 10.30am – in Ballroom DE – Ubuntu Panel – I will be hosting a panel where Mark Shuttleworth (Ubuntu Founder), David Planella (Ubuntu Community Manager), Olli Ries (Engineering Manager), and Community Council and community members will be put under the spotlight to illustrate where the future of Ubuntu is going. This is a wonderful opportunity to come along and get your questions answered!
  • Fri 22nd Jan 2016 at 8pm – in Ballroom DE – Bad Voltage: Live – join us for a fun, informative, and irreverent live Bad Voltage performance. There will be free beer, lots of prizes (including a $2200 Pogo Linux workstation, Zareason Strata laptop, Amazon Fire Stick, Mycroft, Raspberry Pi 2 kit, plenty of swag and more), and plenty of audience participation and surprises. Be sure to join us!
  • Sat 23rd Jan 2016 at 4.30pm – in Ballroom H – Building Awesome Communities On GitHub – this will be my first presentation in my new role as Director Of Community at GitHub. In it I will be delving into how you can build great communities with GitHub and I will talk about some of the work I will be focused on in my new role and how this will empower communities around the world.

I am looking forward to seeing you all there and if you would like to have a meeting while I am there, please drop me an email at jono@github.com.

by Jono Bacon at January 11, 2016 05:48 PM

iheartubuntu

OpenShot 2.0 - Beta Released


The first beta release of OpenShot 2.0 is available to Kickstarter supporters and a much wider testing effort has started. Those who supported OpenShot's Kickstarter will gain early access and receive a separate update with links to installers. For everyone else, the source code has been published and is available online, but it's recommended to wait a little longer, until the installers are released for everyone.

More info here...

http://www.openshotvideo.com/2016/01/openshot-20-beta-released.html

For anyone who has looked for a video editor in Ubuntu, OpenShot is really, really nice. I have personally used it on several family occasions (weddings, birthdays, and a 50th anniversary) and it has produced great results.

One relative, who was a film director in the 70s before moving on to live stage show productions in the 80s and 90s, was really impressed with the work I did with OpenShot.

Definitely give OpenShot a try for your video editing needs!

If you really want to test the current version 1.1.3, there is a DEB installer here...

http://www.openshot.org/download/

...and OpenShot is in the Ubuntu Software Center as well with version 1.4.1. Or wait for further instructions for the newest OpenShot 2.0.

by iheartubuntu (noreply@blogger.com) at January 11, 2016 05:01 PM

January 09, 2016

Elizabeth Krumbach

Going to the theater

I typically don’t spend a lot of time in theaters, for either movies or plays. Aside from some obvious exceptions, I’m not a big movie person.

This was turned on its head recently, with a total of five visits to theaters in the past month!

It began quietly, when I had a friend in town and she suggested we make our way over to The Castro Theatre to see The Nightmare Before Christmas. We’d both seen it dozens (hundreds?) of times, but it’s a favorite and I adore that theater. It’s an older theater with substantial adornments throughout. They regularly have an organist playing as you are getting settled into your seats, along with slides of upcoming events. A much more relaxing and entertaining experience for me than a giant screen with a series of loud commercials. The theater also sells snacks and drinks (alcoholic and otherwise) that you can take to your seats. The movie itself was full of the usual charm, even if the copy they had was older and had a few instances of skipping where the film had probably torn or otherwise been degraded. We took the streetcar home, rounding off a wonderful evening.

The next theater was the A.C.T.’s Geary Theater. This is where MJ and I saw Between Riverside and Crazy a few months back, the first play I’d seen in San Francisco! Since it was close to the holidays they were playing A Christmas Carol and we picked up tickets for the high balcony seats. Another one of the old style theaters that is intricately ornamented, I love simply being in that space. Staring up at the decorated ceiling, inspecting the private boxes. I had never seen A Christmas Carol live before, and in spite of it not being a holiday I celebrate these days, it’s still a story I love. They did a beautiful job with it, I loved their interpretation of the various spirits! And there was no getting around falling in love with the main character, Ebenezer Scrooge.

Then there was Star Wars: The Force Awakens! MJ managed to get us tickets for opening night down in Mountain View with several of his colleagues. I may have dressed up.

And gotten a commemorative cup.

It was at a Cinemark (no beautiful theater to look at), but the theater did have big reclining seats. It was also the 2D version of the movie, which I much preferred for the first time seeing it. The movie pulled all the right nostalgic heart strings. I laughed, I cried (more than once) and I thoroughly enjoyed it.

A few days later I made my way over to the Sundance Kabuki theater to see it again, this time in 3D in their eat-in theater! We got there early to have dinner up on their balcony next to the theater. From there we picked up our 3D glasses and settled in to the big, comfy reserved seats. And I didn’t partake, but they did have a series of amusing cocktails to celebrate the release.

Next I’ll have to see it 3D in the IMAX!

And then there was last night. I made my way over to The Castro Theatre yet again, this time to see a live Rifftrax performance to kick off SF Sketchfest. I’d gone to one of these back in 2013 as well, so it was a real treat to yet again see Kevin Murphy, Bill Corbett and Michael J. Nelson joined by Mary Jo Pehl, Adam Savage and others to riff on a series of old shorts films. The theater was packed for this event, and so my friend Steve and I tried our luck up on the balcony, which I barely knew existed and had never been to. It was a brilliant decision, the balcony was really nice and gave us a great view of the show.

As I try to be less of a hermit while MJ is out of town next week, I’m hoping to see another proper in theater movie with a local friend soon. I hardly know myself!

by pleia2 at January 09, 2016 12:21 AM

January 07, 2016

Jono Bacon

We Need Your Answers

As I posted about the other day, we are doing Bad Voltage Live in Los Angeles in a few weeks. It takes place on Fri 22nd Jan 2016 at 8pm at the SCALE14x conference. Find out more about the show here.

Now, I need every one of you to help provide some answers for a quiz we are doing in the show. It should only take a few minutes to fill in the form and your input could be immortalized in the live show (the show will be recorded and streamed live so you can see it for posterity).

You don’t have to be at the live show or a Bad Voltage listener to share your answers here, so go ahead and get involved!

If you end up joining the show in person you also have the potential to win some prizes (Mycroft, Raspberry Pi 2 kit, and more!) by providing the most amusing/best answers. Irrespective of whether you join the show live though, we’d appreciate it if you fill it in:

Go and fill it in by clicking here

Thanks, everyone!

by Jono Bacon at January 07, 2016 11:00 PM

January 06, 2016

Akkana Peck

Speaking at SCALE 14x

I'm working on my GIMP talk for SCALE 14x, the Southern California Linux Expo in Pasadena.

[GIMP] My talk is at 11:30 on Saturday, January 23: Stupid GIMP tricks (and smart ones, too).

I'm sure anyone reading my blog knows that GIMP is the GNU Image Manipulation Program, the free open-source photo and image editing program which just celebrated its 20th birthday last month. I'll be covering an assortment of tips and tricks for beginning and intermediate GIMP users, and I'll also give a quick preview of some new and cool features that will be coming in the next GIMP release, 2.10.

I haven't finished assembling the final talk yet -- if you have any suggestions for things you'd love to see in a GIMP talk, let me know. No guarantees, but if I get any requests I'll try to accommodate them.

Come to SCALE! I've spoken at SCALE several times in the past, and it's a great conference -- plenty of meaty technical talks, but it's also the most newbie-friendly conference I've been to, with talks spanning the spectrum from introductions to setting up Linux or introductory Python programming all the way to kernel configuration and embedded boot systems. This year, there's also an extensive "Ubucon" for Ubuntu users, including a keynote by Mark Shuttleworth. And speaking of keynotes, the main conference has great ones: Cory Doctorow on Friday and Sarah Sharp on Sunday, with Saturday's keynote yet to be announced.

In the past, SCALE has been held at hotels near LAX, which is about the ugliest possible part of LA. I'm excited that the conference is moving to Pasadena this year: Pasadena is a much more congenial place to be, prettier, closer to good restaurants, and it's even close to public transportation.

And best of all, SCALE is fairly inexpensive compared to most conferences. Even more so if you use the promo-code SPEAK for a discount when registering.

January 06, 2016 11:32 PM

Jono Bacon

Bad Voltage Live in Los Angeles: Why You Should Be There

On Friday 22nd January 2016 the Bad Voltage team will be delivering a live show at the SCALE14x conference in Pasadena, California.

For those of you unfamiliar with Bad Voltage, it is a podcast that Stuart Langridge, Bryan Lunduke, Jeremy Garcia, and myself do every two weeks that delves into technology, open source, Linux, gaming, and more. It features discussions, interviews, reviews and more. It is fun, loose, and informative.

We did our very first Bad Voltage Live show last year at SCALE. To get a sense of it, you can watch it below:

Can’t see the video? Watch it here.

This year is going to be an awesome show, and here are some reasons you should join us.

1. A Fun Show

At the heart of Bad Voltage is a fun show. It is funny, informative, and totally irreverent. This is not just four guys sat on a stage talking. This is a show. It is about spectacle, audience participation, and having a great time.

While we discussed important topics in open source last year, we also had a quiz where we made video compilations of people insulting our co-hosts. We even had a half-naked Bryan review a bottle of shampoo in a hastily put together shower prop.

You never know what might happen at a Bad Voltage Live show and that is the fun of it. The audience make the evening so memorable. Be sure to be there!

2. Free Beer (and non-alcoholic beverages)

If there is one thing that people enjoy at conferences, it is an event with free beer. Well, thanks to our friends at Linode the beer will be flowing. We are also arranging to have soft drinks available too.

So, get along to the show, have a few cold ones and have a great time.

3. Lots of Prizes to be Won

We are firm believers in free stuff. So, we have pulled together a big pile of free stuff that you can all win by joining us at the show.

This includes such wonderful items as:

A Pogo Linux Verona 931H Workstation (thanks to Pogo Linux)

A Zareason Strata laptop (thanks to Zareason)

A Mycroft AI device (thanks to Mycroft)

We will also be giving away a Raspberry Pi 2 kit, Amazon Fire Stick and other prizes!

You can only win these prizes if you join us at the show, so be sure to get along!

4. Free Entry

So, how much does it cost to get into some fun live entertainment, free beer, and a whole host of prizes to be won?

Nothing.

That’s right, the entire show is free to attend.

5. At SCALE

One of the major reasons we like doing Bad Voltage Live at SCALE is because the SCALE conference is absolutely fantastic.

There are a wide range of tracks, varied content, and a wonderful community that attends every year. There are also a range of other events happening as part of SCALE such as the Ubuntu UbuCon Summit.

So, I hope all of this convinces you that Bad Voltage Live is the place to be. I hope to see you there on Friday 22nd January 2016 at 8pm. Find out more here.

Thanks to our sponsors, Linode, Microsoft, Pogo Linux, Zareason, and SCALE.

by Jono Bacon at January 06, 2016 04:30 AM

January 03, 2016

Elizabeth Krumbach

Celebrating the 1915 World’s Fair in SF, the PPIE

I’ve been fascinated with the World’s Fair ever since learning about them as a kid. The extravagance, the collections of art and innovation, the thrill of being part of something that was so widely publicized worldwide and the various monuments left over when the fairs concluded. As I learned about past fairs I was always disappointed that I had missed their heyday, and struggled to understand why their time had passed. The fairs still happen, now called Expositions, and 2015 marked one in Milan, but unless you’re local or otherwise look for them, you typically won’t know about them. Indeed, most people don’t realize they still exist. No longer does the media descend upon them for a flourish of publicity. The great companies, artists, innovators, cities and countries of our age no longer make grand investments in showing off their best over acres of temporary, but beautiful, pop-up cities.

In 2015 I learned a lot more about the fairs and how times have changed as San Francisco celebrated the centennial of the Panama-Pacific International Exposition (PPIE) of 1915. As I spent the year reading about the fair and walking through various exhibits around the city, all the things I knew about 1915 became so much more real. The first transcontinental telephone call from New York to San Francisco was made just prior to the fair opening. Planes were still quite new and it was common for these early pilots to die while operating them (aviation pioneer Lincoln J. Beachey actually died at the PPIE). Communication and travel that I take for granted simply didn’t exist. When I reflect on the “need” for a World’s Fair, I realized the major ones took place during a special time of intense innovation and cultural exchange, when we didn’t yet have a good way of sharing these things. The World’s Fair provided that space, and people would pay to see it.

I began my learning when MJ bought me Laura Ackley’s San Francisco’s Jewel City: The Panama-Pacific International Exposition of 1915. I read it cover to cover over a couple months, providing a nice foundation for exhibits I visited as the year went on. The first was the “Fair, Please” exhibit at the SF Railway Museum. The museum talks about the exhibit in their 1915 Fair Celebration blog post, which also links to a 2005 article that gets into more depth about how transit handled the increased ridership and new routes that the PPIE caused. I enjoyed the exhibit, though it was quite small (the museum and gift shop itself is only a single, large room). While I was at that exhibit I also picked up a copy of the Bay Area Electric Railroad Association Journal from spring 2007 that had a 25 page article by Grant Ute about transit and the PPIE. Predictably that was my next pile of reading.

Back in September I attended a panel about transit and the PPIE, which Grant Ute was a part of along with other transit representatives and historians who spoke about the fair and touched upon the future of transit as well. I wrote about it in a post here, excerpt:

I spent the evening at the California Historical Society, which is just a block or so away from where we live. They were hosting a lecture on City Rising for the 21st Century: San Francisco Public Transit 1915, now, tomorrow.

The talk [by Grant Ute] and panel were thoroughly enjoyable. Once the panel moved on from progress and changes made and made possible by transit changes surrounding the PPIE, topics ranged from the removal of (or refusal to build) elevated highways in San Francisco and how it’s created a beautiful transit and walk-friendly city, policies around the promotion of public transit and how funding has changed over the years.

In October I bought the book Jewel City: Art from San Francisco’s Panama-Pacific International Exposition in preparation for an exhibit at the De Young museum of the same name: Jewel City: Art from San Francisco’s Panama-Pacific International Exposition. It’s a beautiful book, and while I didn’t read it cover to cover like Laura Ackley’s, browsing through it at my own pace and focusing on parts I was most interested in was enjoyable. In early November we made it out to the exhibit at the de Young Museum.

From the Exhibit website:

To mark this anniversary, Jewel City revisits this vital moment in the inauguration of San Francisco as the West Coast’s cultural epicenter. The landmark exhibition at the de Young reassembles more than 200 works by major American and European artists, most of which were on display at this defining event.

No photos were allowed inside the exhibit, but it was a wonderful collection. As someone who is not very into modern or abstract art, it was nice to see a collection from 1915 where only a tiny sampling of these types were represented. There was a lot of impressionism (my favorite), as well as many portraits and a small corner that showed off some photographs, which at the time were still working to gain acceptance as art. The gift shop gave me the opportunity to pick up a beautiful silk scarf that has a drawn aerial view of the fair grounds.

In December I had to squeeze in a bunch of exhibits! On December 1st the Palace of Fine Arts’ Innovation Hangar opened their own exhibit, City Rising: San Francisco and the 1915 World’s Fair. The first time I visited the Palace of Fine Arts I didn’t quite realize what it was. What is now the Innovation Hangar used to house the Exploratorium and was the only indoor space in the area. The Palace of Fine Arts was otherwise an outdoor space, colonnades around a beautiful pond, culminating in a huge domed structure. Where were the fine arts? It turns out, this was an on-site re-creation of the Palace of Fine Arts from the PPIE! The art for the fair had been featured in galleries throughout the building that is now the (also reconstructed) Exploratorium/Innovation Hangar. As we entered the exhibit it was nice to think about how we were walking through the same place where so many pieces of art had been featured during the fair.

The exhibit took up a section of the massive space, a bit dwarfed by the high ceilings, but managed to make a bit of a cozy corner to enjoy it. A few artifacts, including big ones like a Model T (an on-site Ford factory producing them was a popular attraction at the fair), were on display. They also had a big map of the fair grounds, with the Palace of Fine Arts shown raised as the only remaining building on site. Various sections of the exhibit talked about different areas of the fair, and also different phases, from planning to events at the fair itself to the much later formal reconstruction of the Palace of Fine Arts. I picked up the book Panorama: Tales From San Francisco’s 1915 Pan-Pacific International Exposition, which is next in my reading pile.

MJ’s cousins were in town, so it was also a nice opportunity to take a peaceful walk through the grounds of the Palace of Fine Arts. It’s a beautifully stunning place, regardless of what you know of its history.

More photos from the exhibit at the Palace of Fine Arts here: https://www.flickr.com/photos/pleia2/albums/72157660512348743

A couple days later we went to the Conservatory of Flowers in Golden Gate Park. They opened their own PPIE exhibit, Garden Railway: 1915 Pan-Pacific, in November. Flowers, trains and the PPIE? I’m so there! Their exhibit room isn’t very large, but they did have a lovely recreation of part of the Joy Zone, the Palace of Fine Arts and, of course, the Palace of Horticulture.

From their website:

In an enchanting display landscaped with hundreds of dwarf plants and several water features, model trains wend their way through the festive fairgrounds, zipping past whimsical recreations of the fair’s most dazzling monuments and amusements, including the Tower of Jewels, Palace of Fine Arts, and more. Interpretive signs, memorabilia and interactive activities throughout help visitors to understand the colorful history of the grand fair that signaled San Francisco’s recovery from the 1906 earthquake.

Model trains are fun, so the movement they brought to the exhibit was enjoyable.

The center of the exhibit featured a recreation of the Tower of Jewels.

More photos from the Conservatory of Flowers and their PPIE exhibit here: https://www.flickr.com/photos/pleia2/albums/72157662883839265

On January 31st MJ and I went to the closest of the PPIE exhibits, at the California History Museum, which is just a block from home. The museum was colorfully painted, and the exhibit had you walk through three separate rooms, plus a section at the entrance and a large main area to explore.

I’d say this was the most comprehensive visit with regard to the history of the PPIE and its artifacts. I really enjoyed seeing the various kinds of keepsakes that were given away at the fair, including pamphlets put out by countries, companies and the fair itself. Collectors’ items of all kinds, from picture books to spoons and watches, were bought by fairgoers. At the end of the fair they even sold the Novagems that covered the Tower of Jewels, many in commemorative boxes with certificates of authenticity.

They also had a massive scale model of the main areas of the fair grounds. Produced in around 1938 for the Golden Gate International Exposition on nearby Treasure Island, it was brought out of storage for us to enjoy in this exhibit. It’s really nice that it’s been so well preserved!

More photos from the California History Museum exhibit here: https://www.flickr.com/photos/pleia2/albums/72157662962920716

As the year concluded, I found myself even more in love with these old World’s Fairs than I ever was. I’m still sad that I missed them, but I have a newfound appreciation for our lives today and the opportunities we have. In 2015 I visited three continents, spent my days working in real time with people all over the world and had immediate access to the latest news in the palm of my hand. None of this was possible for someone of my means 100 years ago. As much as I think it would have been a wonderful and fascinating experience, it turns out that I don’t actually need a World’s Fair to expose me to the world and technology outside my home town.

by pleia2 at January 03, 2016 08:34 PM

January 01, 2016

Elizabeth Krumbach

The adventures of 2015

I wasn’t sure what to expect from 2015. Life circumstances meant that I wanted to travel a bit less, which meant being more selective about the conferences I would speak at. At the same time, some amazing conference opportunities came up that I couldn’t bring myself to turn down. Meeting new people, visiting countries very foreign to me, plus a new continent (South America!), there was much to be excited about this year! Work has gone exceptionally well. I hit some major milestones in my career, particularly with regard to my technical level, all thanks to support from those around me, dedication to some important projects and hard work on the day-to-day stuff.

There were also struggles this year. Early in the year I learned of the passing of a friend and local open source advocate. MJ and I navigated our way through the frailness and loss of a couple family members. I was forced to pause, reflect upon and ultimately step away from some of my open source involvement as it was causing me incredible emotional pain and stress. I also learned a lot about my work habits and what it takes to put out a solid technical book. The book continues to be a real struggle, but I am thankful for support from those around me.

I’ve been diligent in continuing to enjoy this beautiful city we live in. We went on a streetcar tour, MJ took me to a Star Wars day Giants game for my birthday and we went to various Panama-Pacific International Exposition commemorative events. I finally made it down the bay to the Winchester House and to see a 49ers game. As friends and family came into town, I jumped at every opportunity to explore the new and the familiar. I also spoke at a few local conferences and events, which I wrote about: San Francisco Ubuntu Global Jam, Elastic{ON} 2015, Puppet Camp San Francisco 2015 and an OpenStack Meetup.


Enjoying San Francisco with a special tour on the Blackpool Boat Tram

At the Bay Bridge with visiting friend Crissi

Star Wars day at AT&T Park

At a 49ers game with visiting friend Danita

Visiting one of several PPIE15 exhibits

Health-wise, I had to go in for several diagnostic tests post-gallbladder to see why some of my liver levels are off. After a bit of stress, it all looks OK, but I do need to exercise on a more regular basis. The beautiful blue sky beckons me to make a return to running, so I plan on doing that while incorporating things I learned from the trainer I worked with this past year. We’ve also been tracking Simcoe’s health with her renal failure; it’s been four years since her diagnosis, and while her health isn’t what it was, our little Siamese continues to hang in there.

And then there was all my travel!


Manneken Pis in Brussels, Belgium

In front of the Sultan Qaboos Grand Mosque, Muscat, Oman

Beautiful views from the OpenStack Summit in Vancouver, Canada

With MJ in obligatory tourist photo at Machu Picchu, Peru

Kinkaku-ji (golden temple), Kyoto, Japan

Space Shuttle Discovery near Washington D.C.

I didn’t give as many talks as I did in 2014, but I felt I took a stronger aim at quality this year. Speaking at conferences like FOSSC Oman and Grace Hopper Celebration of Women in Computing exposed me to some amazing, diverse audiences that led to some fantastic conversations after my talks. Exploring new places and meeting people who enrich my life and technical expertise are why I do all of this, so it was important that I found so much value in both this year.


Speaking at FOSSC Oman in Muscat

As I kick off 2016, my book is front and center. I have an amazing contributing author working with me. A Rough Cuts version went up on Safari at the end of 2015 and I’ve launched the book website. As I work through the final technical challenges, I’m hopeful that the pieces will soon fall into place so I can push through to completion.

Most of all, as I reflect upon 2015, I see a lot of cheer and sorrow. High highs and low lows. I’m aiming at a more balanced 2016.

by pleia2 at January 01, 2016 07:40 PM

December 31, 2015

Akkana Peck

Weather musing, and poor insulation

It's lovely and sunny today. I was just out on the patio working on some outdoor projects; I was wearing a sweatshirt, but no jacket or hat, and the temperature seemed perfect.

Then I came inside to write about our snowstorm of a few days ago, and looked up the weather. NOAA reports it's 23°F at Los Alamos airport, last reading half an hour ago. Our notoriously inaccurate (like every one we've tried) outdoor digital thermometer says it's 26°.

Weather is crazily different here. In California, we were shivering and miserable when the temperature dropped below 60°F. We've speculated a lot on why it's so different here. The biggest difference is probably that it's usually sunny here. In the bay area, if the temperature is below 60°F it's probably because it's overcast. Direct sun makes a huge difference, especially the sun up here at 6500-7500' elevation. (It feels plenty cold at 26°F in the shade.) The thin, dry air is probably another factor, or two other factors: it's not clear what's more important, thin, dry, or both.

We did a lot of weather research when we were choosing a place to move. We thought we'd have trouble with snowy winters, and would probably want to take vacations in winter to travel to warmer climes. Turns out we didn't know anything. When we were house-hunting, we went for a hike on a 17° day, and with our normal jackets and gloves we were fine. 26° is lovely here if you're in the sun, and the rare 90° summer day, so oppressive in the Bay Area, is still fairly pleasant if you can find some shade.

But back to that storm: a few days ago, we had a snowstorm combined with killer blustery winds. The wind direction was whipping around, coming from unexpected directions -- we never get north winds here -- and it taught us some things about the new house that we hadn't realized in the nearly two years we've lived here.

[Snow coming under the bedroom door] For example, the bedroom was cold. I mean really cold. The windows on the north wall were making all kinds of funny rattling noises -- turned out some of them had leaks around their frames. There's a door on the north wall, too, that leads out onto a deck, and the area around that was pretty cold too, though I thought a lot of that was leakage through the air conditioner (which had had a cover over it, but the cover had already blown away in the winds). We put some towels around the base of the door and windows.

Thank goodness for lots of blankets and down comforters -- I was warm enough overnight, except for cold hands while reading in bed. In the morning, we pulled the towel away from the door, and discovered a small snowdrift inside the bedroom.

We knew the way that door was hung was fairly hopeless -- we've been trying to arrange for a replacement, but in New Mexico everything happens mañana -- but snowdrifts inside the room are a little extreme.

We've added some extra weatherstripping for now, and with any luck we'll get a better-hung door before the next rare north-wind snowstorm. Meanwhile, I'm enjoying today's sunshine while watching the snow melt in the yard.

December 31, 2015 06:28 PM

December 30, 2015

Jono Bacon

In Memory of Ian Murdock

Today we heard the sad news that Ian Murdock has passed away. He was 42.

Although Ian and my paths crossed relatively infrequently, over the years we became friends. His tremendous work in Debian was an inspiration for my own work in Ubuntu. At times when I was unsure of what to do in my work, Ian would share his guidance and wisdom. He never asked for anything in return. He never judged. He always supported the growth of Open Source and Free Software. He was precisely the kind of person that makes the Open Source and Free Software world so beautiful.

As such, when I heard about some of his erratic tweets a few days back as I landed back home from the UK for Christmas, I reached out with a friendly arm to see if there was anything I could do to help. Sadly, I got no response. I now know why: he had likely just passed away when I reached out to him.

While it is natural for us to grieve his passing, we should also take time to focus on what he gave us all. He gave us a sparkling personality, a passion for everyone to succeed, and a legacy of Open Source and Free Software that would be hard to match.

Ian, wherever you may be, rest in peace. We will miss you.

by Jono Bacon at December 30, 2015 08:06 PM

December 28, 2015

Elizabeth Krumbach

Simcoe’s November 2015 Hospital Checkup

It’s been quite a season for Simcoe. I mentioned back in September that the scabbing around her eyes had healed up, but unfortunately it keeps coming back. The other day we also noticed a sore and a chunk of missing fur at the base of the underside of her tail. She has a dermatologist appointment at the beginning of January, so hopefully we can get to the bottom of it. It would be very nice to know what’s going on, when we need to worry and what to do about it when it happens. Poor kitty!

This December marks four years with the renal failure diagnosis. With her BUN and CRE levels creeping up and her weight dropping a bit, we decided to go in for a consultation with the hospital doctor (rather than her great regular vet). The hospital vet has been really helpful with his industry contacts and experience with renal failure cats, and we trust his opinion. The bad news is that renal transplants for cats haven’t improved much since her diagnosis. The procedure is still risky, traumatic and expensive. Worst of all, the median survival time still lands at only about three years.

Fortunately she’s still acting normal and eating on her own, so we have a lot of options. One is supplementing her diet with wet food. Another is switching her subcutaneous fluid injections from 150ml every other day to 100ml daily. A third is giving her pills to stimulate her appetite so her weight doesn’t drop too low. We’re starting off with the food and fluid schedule adjustments, which we began this month. We also bought a small pet scale for here at home so we can keep a closer eye on her weight, and will likely start weekly weigh-ins next week.

During the checkup in November, they also ran her blood work, which showed the trend continuing for the most part. Her BUN levels went up a lot, but the doctor was more focused on and concerned about CRE increases and weight decreases (though she did put on a few ounces).

CRE dropped a little, from 4.8 to 4.4.

CRE graph

BUN spiked, going from 54 to 75.

BUN graph

She’s still under 9lbs, but drifting in a healthy area in the high 8s, going from 8.8lbs to 8.9lbs.

Weight graph

We’re thankful that we’ve had so much time with her post-diagnosis, she’s been doing very well all things considered and she’s still a happy and active cat. She just turned nine years old and we’re aiming for several more years with her.

by pleia2 at December 28, 2015 04:14 AM

Akkana Peck

Extlinux on Debian Jessie

Debian "Sid" (unstable) stopped working on my Thinkpad X201 as of the last upgrade -- it's dropping mouse and keyboard events. With any luck that'll get straightened out soon -- I hear I'm not the only one having USB problems with recent Sid updates. But meanwhile, fortunately, I keep a couple of spare root partitions so I can try out different Linux distros. So I decided to switch to the current Debian stable version, "Jessie".

The mouse and keyboard worked fine there. Except it turned out I had never fully upgraded that partition to Jessie; it was still on "Wheezy". So, with much trepidation, I attempted an apt-get update; apt-get dist-upgrade.

After an interminable wait for everything to download, though, I was faced with a blue screen asking this:

No bootloader integration code anymore.
The extlinux package does not ship bootloader integration anymore.
If you are upgrading to this version of EXTLINUX your system will not boot any longer if EXTLINUX was the only configured bootloader.
Please install GRUB.
<Ok>

No -- it's not okay! I have good reasons for not using grub2 -- besides which, extlinux on this exact machine has been working fine for years under Debian Sid. If it worked on Wheezy and works on Sid, why wouldn't it work on the version in between, Jessie?

And what does it mean not to ship "bootloader integration", anyway? That term is completely unclear, and googling was no help. There have been various Debian bugs filed, but of course no explanation from the developers of exactly what does and doesn't work.

My best guess is that what Debian means by "bootloader integration" is that there's a script that looks at /boot/extlinux/extlinux.conf, figures out which stanza corresponds to the current system, figures out whether there's a new kernel being installed that's different from the one in extlinux.conf, and updates the appropriate kernel and initrd lines to point to the new kernel.
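
For anyone who hasn't poked at it before, a stanza in /boot/extlinux/extlinux.conf looks roughly like this (the label, kernel version and root device below are made up for illustration, not copied from my config):

label Jessie
        menu label Debian GNU/Linux, Jessie
        kernel /boot/vmlinuz-3.16.0-4-686-pae
        append initrd=/boot/initrd.img-3.16.0-4-686-pae root=/dev/sda2 ro

The kernel and append initrd= lines are the ones such a script would presumably rewrite whenever a new kernel package is installed.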

If so, that's something I can do myself easily enough. But what if there's more to it? What would actually happen if I upgraded the extlinux package?

Of course, there's zero documentation on this. I found plenty of questions from people who had hit this warning, but most were from newbies who had no idea what extlinux was or why their systems were using it, and they were advised to install grub. I only found one hit from someone who was intentionally using extlinux. That person aborted the install, held back the package so the potentially nonbooting new version of extlinux wouldn't be installed, then updated extlinux.conf by hand, and apparently that worked fine.

It sounded like a reasonable bet. So here's what I did (as root, of course):

  • Open another terminal window and run ps aux | grep apt to find the apt-get dist-upgrade process and kill it. (sudo pkill apt-get is probably an easier approach.) Ensure that apt has exited and there's a shell prompt in the window where the scary blue extlinux warning was.
  • echo "extlinux hold" | dpkg --set-selections
  • apt-get dist-upgrade and wait forever for all the packages to install
  • aptitude search linux-image | grep '^i' to find out what kernel versions are installed. Pick one. I picked 3.14-2-686-pae because that happened to be the same kernel I was already running, from Sid.
  • ls -l /boot and make sure that kernel is there, along with an initrd.img of the same version.
  • Edit /boot/extlinux/extlinux.conf and find the stanza for the Jessie boot. Edit the kernel and append initrd lines to use the right kernel version.

It worked fine. I booted into jessie with the kernel I had specified. And hooray -- my keyboard and mouse work, so I can continue to use my system until Sid becomes usable again.
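
(One follow-up note: the hold is easy to undo later, once a fixed extlinux package shows up; this is just standard dpkg selections handling, nothing specific to this bug.

echo "extlinux install" | dpkg --set-selections
dpkg --get-selections extlinux

The second command simply reports whether the package is currently marked hold or install.)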

December 28, 2015 12:28 AM

December 26, 2015

Elizabeth Krumbach

Days in Kyoto

As I mentioned in my post about Osaka, we spent our nights in Osaka and days plus evenings on Friday and Saturday in Kyoto. Since our plans got squished a bit, we didn’t get to as many sights as I had wanted to in Kyoto, but we did get to visit some of the key ones, and were able to keep our plans to go to one of the best restaurants in the city.

On Friday we took a Japanese Rail train up to Kyoto early so we could make our lunch reservations at the famous Kichisen. This was probably the best meal we had on our trip. They serve the food in the Kaiseki tradition with their beautiful and fancy take on many of the traditional Kaiseki dishes. Upon arrival we were greeted by the hosts, took our shoes off and were led into our private dining room. We enjoyed tea as the courses began, and were impressed as each course was more dazzling and delicious than the last.

After that very long and satisfying lunch, we made our way to Kinkaku-ji, the Golden temple. Being the height of autumn tourist season it was incredibly busy. In order to get to the best views of the temple we actually had to wait and then work our way through the crowds. Fortunately the photos didn’t reflect the madness and I got some really great shots, like this one which is now my desktop background.

The temple complex closed around five and we made our way over to the Kyoto Imperial Palace complex. It’s a massive park, and while we didn’t have tickets for a tour inside the palace areas, we were able to walk around it and explore the trails in the park.


Outside the Imperial Palace

We also enjoyed finding other little temples and ponds. It was a beautiful way to spend time as the sun set.


Another small temple in Imperial park

From there we went to the Gion district and walked around for a while before stopping for some tea. We had a late evening dinner at Roan Kikunoi, which was another Kaiseki-style meal. This time we were seated at a bar with several other patrons and the courses came out mostly at the same time for all of us. The dishes were good, I particularly enjoyed the sashimi courses.

Saturday morning was spent in Osaka, but we made it to Kyoto in the afternoon to go to Ginkaku-ji, the Silver Temple. The temple is not silver, but it’s called that to distinguish it from the Gold Temple across town that we saw the day before.


MJ and I at the silver temple

It was a nice walk around the grounds of the temple, and then you climb a series of stairs to get a view of the city of Kyoto.


View from hill at silver temple

We had reservations at Tagoto Honten for dinner on Saturday. We once again had a Kaiseki-style meal but this one was much more casual than the day before. By this time we were getting a little tired of the style, but there was enough variation to keep us happy.

I’m sure our whirlwind tour of the city hardly did it justice. While we knocked out some of the key attractions, there are dozens of smaller temples, a castle to tour, plus the imperial palace, and I’ve heard there’s a long climb up a hill where you can see and feed monkeys! A dinner with a geisha was also on our list, but we couldn’t make those reservations with our time constraints either. Next time we’d also work to reserve far enough in advance to stay in Kyoto itself; while the train rides to Osaka were easy and short, all told we probably spent an hour in transit once you factor in deciding on a route and walking to and from the stations. On the topic of transit, we ended up taking cabs around Kyoto more than we did in the rest of Japan, partly because we were often short on time, and otherwise because the rail system just isn’t as comprehensive as in the other cities we went to (though buses were available). It’s worth noting that the cabs are metered, very clean and all had friendly, professional drivers.

We don’t often make solid plans to revisit a place we’ve been to together, as there are so many places in the world we want to see. Japan is certainly an exception. Not just because we missed our segment in Tokyo, but because a week isn’t nearly enough time to enjoy this country I unexpectedly fell in love with.

More photos from our adventures in Kyoto here: https://www.flickr.com/photos/pleia2/sets/72157659834169750

by pleia2 at December 26, 2015 05:08 PM

Shinkansen to Osaka

As I mentioned in my post about Tokyo, it’s taken me a couple of months to get around to writing about our journeys in Japan. But here we are! On October 22nd we landed back in Japan after our quick trip back to Philadelphia and took the N’EX train right to the high speed Shinkansen, which took us all the way to Osaka (about 300 miles) in approximately 3 hours.

Before getting on the Shinkansen we took the advice of one of MJ’s local colleagues and picked up a boxed meal on the railway platform. We had a written translation on hand explaining that we don’t eat pork, and the woman selling the boxes was very helpful in finding us a few that didn’t contain any. We were grateful for her help, as I made my way through the box with no idea what I was eating. It was all delicious though, and beautifully presented.

Our original plan had been to stay in Kyoto, but we booked later than anticipated and the reasonably priced hotels in Kyoto had already sold out. With the beautiful weather and changing leaves, autumn in Kyoto is second only to spring (when the cherry blossoms bloom) as a busy tourist season. Staying in Osaka worked out well though, especially since there was a lot to do there after things closed in Kyoto!

We stayed at the beautiful, if incredibly fancy and old-style European, Hotel Hankyu International. It was just a quick walk from Umeda Station, which made getting around pretty easy. We took trains everywhere we went.

Most of Friday was spent in Kyoto, but Saturday morning we began exploring Osaka a bit with a train ride over to the Osaka Aquarium Kaiyukan. I had read about this aquarium before our trip, and learned that it’s one of the best in Asia. As a fan of zoos and aquariums, I was glad we got to keep this visit on our agenda.


Osaka Aquarium Kaiyukan

The aquarium is laid out over several levels, and you begin by taking an elevator to the top floor. The top floor has a naturally lit forest area along with river otters, crabs and various fish and birds. As you make your way down through the aquarium you see penguins, seals, and all kinds of sharks and fish. For me, the major draw was getting to see some massive whale sharks, which I hadn’t seen in captivity before.


Whale shark

After the aquarium we needed some lunch. MJ is a big fan of okonomiyaki, a Japanese pancake that’s filled with vegetables (mostly cabbage) and your choice of meat or seafood. We did some searching near the train station and found Fukutaro. It was crowded, but we got a seat pretty quickly. It’s also hot inside, since they prepare the food on a big grill at the front of the restaurant (which we sat near), and there’s another hot grill in front of you that they deliver the okonomiyaki to so it stays warm as you eat. It was the best okonomiyaki I’ve ever had.

From there we made our way to Kyoto for the rest of the day and dinner. We came back to Osaka after dinner and returned to the area around the aquarium to go up on the Tempozan Ferris wheel and see the bay at night! The Ferris wheel was all lit up in blue, and since it was later in the evening there was no line; we even had no trouble waiting for the transparent car.

Sunday morning we had to pack up and head back to the Shinkansen for our trip back to Tokyo. After some false starts in finding lunch (it was terribly tempting to get okonomiyaki again) we found ourselves at a mall that had a tempura restaurant. We did a multi-course meal where they brought out an assorted selection of tempura meats and vegetables. My life is now complete: I’ve had tempura pumpkin, and it was amazing.

Our train ride in to Osaka had been later in the day, so it was mostly dark, but I fully enjoyed the daytime ride back; we passed lots of little towns and lots of solar panels!

More photos from Osaka here: https://www.flickr.com/photos/pleia2/albums/72157659829244819

And more photos from our trip on the Shinkansen: https://www.flickr.com/photos/pleia2/sets/72157660421552335

by pleia2 at December 26, 2015 12:12 AM

December 22, 2015

kdub

New Mir Release (0.18)


If a new Mir release was on your Christmas wishlist (like it was on mine), Mir 0.18 has been released! I’ve been working on this the last few days, and it’s out the door now. Full text of changelog. Special thanks to mir team members who helped with testing, and the devs in #ubuntu-ci-eng for helping move the release along.

Graphics

  • Internal preparation work needed for Vulkan, hardware decoded multimedia optimizations, and latency improvements for nested servers.
  • Started work on plugin renderers. This will better prepare mir for IoT, where we might not have a Vulkan/GLES stack on the device, and might have to use the CPU.
  • Fixes for graphics corruption affecting Xmir (blocky black bars)
  • Various fixes for multimonitor scenarios, as well as better support for scaling buffers to suit the monitor it’s on.

Input

  • Use libinput by default. We had been leaning on an old version of the Android input stack; that has now been completely removed in favor of libinput.

Bugs

  • Quite a long list of bug corrections. Some of these were never ‘in the wild’ but only existed during the course of 0.18 development.

What’s next?

It’s always tricky to pin down exactly what will make it into the next release, but I can at least comment on the stuff we’re working on, in addition to the normal rounds of bugfixing and test improvements:

  • various Internet-o-Things and convergence topics (e.g., snappy, figuring out different rendering options on smaller devices).
  • buffer swapping rework to accommodate different render technologies (Vulkan!), accommodations for multimedia, and improved latency for nested servers.
  • more flexible screenshotting support
  • further refinements to our window management API
  • refinements to our platform autodetection

How can I help?

Writing new Shells

A fun way to help would be to write new shells! Part of mir’s goal is to make this as easy to do as possible, so writing a new shell always helps us make sure we’re hitting that goal.

If you’re interested in the mir C++ shell API, then you can look at some of our demos, available in the ‘mir-demos’ package. (source here, documentation here)
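
On Ubuntu that’s just an apt install away, and listing the package contents is a quick way to find the demo binaries to try (exact binary names vary between releases, so treat this as a rough sketch and check what’s actually shipped):

sudo apt-get install mir-demos
dpkg -L mir-demos | grep bin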

Even easier than that might be writing a shell using QML like unity8 is doing via the qtmir plugin. An example of how to do that is here (instructions on running here).

Tinkering with technology

If you’re more of the nuts and bolts type, you can try porting a device, adding a new rendering platform to mir (OpenVG or pixman might be an interesting, beneficial challenge), or figuring out other features to take advantage of.

Standard stuff

Pretty much all open source projects recommend bug fixing or triaging, helping on irc (#ubuntu-mir on freenode) or documentation auditing as other good ways to start helping.

by Kevin at December 22, 2015 07:18 PM

December 20, 2015

Akkana Peck

Christmas Bird Count

Yesterday was the Los Alamos Christmas Bird Count.

[ Mountain chickadee ] No big deal, right? Most counties have a Christmas Bird Count, a specified day in late December when birders hit the trails and try to identify and count as many birds as they can find. It's coordinated by the Audubon Society, which collects the data so it can be used to track species decline, changes in range in response to global warming, and other scientific questions. The CBC has come a long way from when it split off from an older tradition, the Christmas "Side Hunt", where people would hit the trails and try to kill as many animals as they could.

But the CBC is a big deal in Los Alamos, because we haven't had one since 1953. It turns out that to run an official CBC, you have to be qualified by Audubon and jump through a lot of hoops proving that you can do it properly. Despite there being a very active birding community here, nobody had taken on the job of qualifying us until this year. There was a lot of enthusiasm for the project: I think there were 30 or 40 people participating despite the chilly, overcast weather.

The team I was on was scheduled to start at 7. But I had been on the practice count in March (running a practice count is one of the hoops Audubon makes you jump through), and after dragging myself out of bed at oh-dark-thirty and freezing my toes off slogging through the snow, I had learned that birds are mostly too sensible to come out that early in winter. I tried to remind the other people on the team of what the March morning had been like, but nobody was listening, so I said I'd be late, and I met them at 8. (Still early for me, but I woke up early that morning.)

[ Two very late-season sandhill cranes ] Sure enough, when I got there at 8, there was disappointment over how few birds there were. But actually that continued all day: the promised sun never came out, and I think the birds were hoping for warmer weather. We did see a good assortment of woodpeckers and nuthatches in a small area of Water Canyon, and later, a pair of very late-season sandhill cranes made a low flyover just above where we stood on Estante Way; but mostly, it was disappointing.

In the early afternoon, the team disbanded to go home and watch our respective feeders, except for a couple of people who drove down the highway in search of red-tailed hawks and to the White Rock gas station in search of rock pigeons. (I love it that I'm living in a place where birders have to go out of their way to find rock pigeons to count.)

I didn't actually contribute much on the walks. Most of the others were much more experienced, so mostly my role was to say "Wait, what's that noise?" or "Something flew from that tree to this one" or "Yep, sure enough, two more juncos." But there was one species I thought I could help with: scaled quail. We've been having a regular flock of scaled quail coming by the house this autumn, sometimes as many as 13 at a time, which is apparently unusual for this time of year. I had Dave at home watching for quail while I was out walking around.

When I went home for a lunch break, Dave reported no quail: there had been a coyote sniffing around the yard, scaring away all the birds, and then later there'd been a Cooper's hawk. He'd found the hawk while watching a rock squirrel that was eating birdseed along with the towhees and juncos: the squirrel suddenly sat up and stared intently at something, and Dave followed its gaze to see the hawk perched on the fence. The squirrel then resumed eating, having decided that a Cooper's hawk is too small to be much danger to a squirrel.

[ Scaled quail ] But what with all the predators, there had been no quail. We had lunch, keeping our eyes on the feeder area, when they showed up. Three of them, no, six, no, nine. I kept watch while Dave went over to another window to see if there were any more headed our way. And it turns out there was a whole separate flock, nine more, out in the yard. Eighteen quail in all, a record for us! We'd suspected that we had two different quail families visiting us, but when you're watching one spot with quail constantly running in and out, there's no way to know if it's the same birds or different ones. It needed two people watching different areas to get our high count of 18. And a good thing: we were the only bird counters in the county who saw any quail, let alone eighteen. So I did get to make a contribution after all.

I carried a camera all day, but my longest regular lens (a 55-250 f/4-5.6) isn't enough when it comes to distant woodpeckers. So most of what I got was blurry, underexposed "record shots", except for the quail, cranes, and an obliging chickadee who wasn't afraid of a bunch of binocular-wielding anthropoids. Photos here: Los Alamos Christmas Bird Count, White Rock team, 2015.

December 20, 2015 09:21 PM

iheartubuntu

Free Ubuntu Stickers


I have only 3 sheets of Ubuntu stickers to give away! So if you are interested in one of them, I will randomly pick (via random.org) three people. I'll ship each page of stickers anywhere in the world along with an official Ubuntu 12.04 LTS disc.

To enter into our contest, please "like" our Facebook page for a chance to win. Contest ends Friday, April 17, 2015. I'll announce the three winners the day after. Thanks for the like!

https://www.facebook.com/iheartubuntu

by iheartubuntu (noreply@blogger.com) at December 20, 2015 03:07 AM

December 19, 2015

iheartubuntu

Elementary OS 0.3.2 Freya Update

On December 10th, Elementary announced the release of the elementary OS 0.3.2 Freya computer operating system.

This new 0.3.2 release of Freya improves support for 64-bit UEFI (Unified Extensible Firmware Interface) and SecureBoot systems, as well as machines running BIOS and Legacy Boot, and the dreaded GRUB boot error has finally been patched.

There are some new user-visible features implemented in elementary OS 0.3.2 Freya as well. The Applications Menu received lots of updates: it now lists Settings separately from applications during searches, and it returns search results for actions from applications' quicklists, such as Scratch's "New Document" or Geary's "Compose Message."

The Archive Manager and Font Viewer utilities were hidden from the Applications Menu, but users can still access them via the Files file manager and the built-in search. Other minor visual issues related to dark applications have been fixed, along with refinements to the panel's and windows' shadows.

The elementary developers also managed to add improvements to the internationalization support in elementary OS, including the addition of 22 new language translations.

Existing Elementary OS 0.3 Freya users can easily upgrade to version 0.3.2 just by running the Software Updater utility and applying all available updates. Those of you new to Elementary OS can download the Elementary OS 0.3.2 Freya release right now via the project's website link below...

https://elementary.io
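
For those who prefer a terminal, on a standard apt-based elementary install the Software Updater route boils down to pulling in all available updates, roughly:

sudo apt-get update
sudo apt-get dist-upgrade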

by iheartubuntu (noreply@blogger.com) at December 19, 2015 09:41 PM

Elizabeth Krumbach

Fragmented travels in Tokyo

Back in October I flew directly from the Grace Hopper Celebration of Women in Computing in Houston, Texas to Tokyo to begin a vacation with MJ. As I wrote about here, a death in the family forced us to cut our trip short, but we were able to enjoy some of Tokyo.

The Tokyo side of travels began with a flight into Narita airport and a ride on the N’EX train to Shibuya station. Thankfully MJ had done research beforehand for me, so I was well-prepared for what tickets I needed to buy, train to take and station to arrive at. So far so good.

Leaving Shibuya station is where things got tricky. It was my first experience in a Tokyo station, and Shibuya is a big one. I had a big backpack and suitcase, and in the crowd of people I got lost almost instantly upon leaving the station and starting to look for the hotel. After some false starts, I did eventually make it to the Cerulean Tower Tokyu Hotel. I ordered some room service (sushi!). I was later joined by MJ, whose flight came in about 6 hours after mine. We planned our flight back to the US for the funeral and spent not nearly enough time sleeping before we had to check out the next day.

Lunch before our flight was had at one of the several restaurants in the hotel, Kanetanaka So, where we had a wonderful, multi-course Japanese lunch.

My travels in Tokyo didn’t properly resume until after returning from the US and then going to Osaka and Kyoto (which I’ll write about later). So fast forward 6 days and we’re on the Shinkansen high speed train on our way back to Tokyo. MJ and I spent the evening with a trip to Tokyo Skytree, the tallest tower in the world as of completion in 2011. From it we’d at least get a 450 meter high view of the amazing city we had to cut from our travel plans.

A very popular destination, Tokyo Skytree handles tickets by having you go ahead of time and get a reservation to buy tickets for a time slot later in the day. So at 7PM we got our reservations for the 8:30PM ticket window. In the meantime, we kept ourselves entertained with a visit to the nearby Sumida Aquarium. They had penguins!

We then waited in a very long line to buy our tickets for Tokyo Skytree, including time spent waiting while the elevators were shut down during some strong winds. Fortunately we did finally make it up to the 350m level, and once there bought the additional ticket to go up another 100m to the top at 450m. The observation decks provided 360 degree views of the city, lights stretching for miles around us. The upper level has a steady incline, so you slowly make your way up to the peak at 450 meters before taking the elevator back down.

It was nearly 11PM by the time we completed our visit, which was too late for anything in the mall surrounding the tower to be open for dinner. Instead we took a train over to the Roppongi district and found some late night sushi. As the only customers in the sushi bar, we had a lovely time chatting with the manager and sushi chef who was preparing our fish in the perfect way, including adding the appropriate amount of soy sauce to each piece for us. I need to find some fatty tuna again, it was delicious!

The next day we met up with some of MJ’s colleagues for lunch back in Roppongi before I saw MJ off for his flight home. For me, the next 4 days were filled with the OpenStack Summit right there in Tokyo. I wrote about it and my evening activities in Tokyo each night here, here and here.

Come Saturday I was on my own. My flight wasn’t until the evening, so I spent the morning in beautiful Ueno Park and then at Ueno Zoo for a couple hours. I’ve been to zoos all over the world, and Ueno was a first class zoo. Their animal stars are the Giant Pandas, who I was delighted to see. I arrived at opening time, so the crowds weren’t too bad and the pandas were awake and eating their leafy breakfasts.

It was a pleasant walk around the zoo, enjoying key attractions like the lions, tigers, polar bears and sea lions, along with all the smaller ones. Time was running short when I hopped on their bright and colorful monorail that took me to the other side of the zoo where the penguins and a few other animals lived. I had to depart around noon.

Lots more photos from Ueno Zoo here: https://www.flickr.com/photos/pleia2/albums/72157660680854861

From there I took a train back to my hotel to pick up my luggage and take the N’EX train back to the airport. I took a lot of trains while in Japan; it seemed like the most reasonable way to get around. The trains were frequent, clean and heavily used, and it was fascinating to see how well they operated. With a data-only SIM in my phone I had routes in my pocket, so I could make sure I was getting on the right train, or at least not get lost if I wasn’t. Their excellent train system goes beyond just the capital city; we took trains in Osaka and Kyoto as well. San Francisco has pretty good public transportation for a US city, but I now find myself frequently pining for what we saw in Japan.

In spite of all the traveling I do, I’ll admit right away that I was a bit nervous about this trip. I was worried it would be too foreign and I’d get lost or simply be afraid of all the crowds and in-your-face pop culture. I was wrong. It certainly was crowded, but Tokyo was amazing, and everything was so cute. I bought a pile of cute animal note cards, stickers and post-its at the zoo because it so well fit what I loved. Seeing Nintendo characters around and being there during Halloween compounded it all, I grew up on and loved all these things! Instead of it all feeling foreign, I felt comfortable and so many things made me smile. There were also enough English speakers and signs in English where we went to make me feel like I usually knew what I was doing. I want to go back.

More photos from generally around Tokyo (including more trains!) here: https://www.flickr.com/photos/pleia2/albums/72157659829235239

by pleia2 at December 19, 2015 07:51 PM

iheartubuntu

Self-Driving Car Powered by Ubuntu


George Hotz is a 26-year-old hacker who says he built a self-driving car in a month. Sounds absurd, right? George is using the Linux-based operating system Ubuntu to power it. Bloomberg's Ashlee Vance was skeptical too, so he went to test drive the 2016 Acura that Hotz retrofitted in his garage. (video by David Nicholson)

You can watch it here...



If the link doesn't work, head over here...

http://www.bloomberg.com/news/videos/2015-12-16/this-hacker-built-a-self-driving-car-in-his-garage

by iheartubuntu (noreply@blogger.com) at December 19, 2015 01:05 AM

December 13, 2015

Elizabeth Krumbach

Thanksgiving 2015 and family

Back in September I wrote about a trip to Philadelphia where we were visiting an ailing relative. That relative was MJ’s grandmother and during that trip we spent time with her and met with her caretakers. In mid-October she passed away. I’d known her for several years. Before MJ and I dated, I was still local to Philadelphia while MJ was in California and I’d routinely go over to her apartment to help her with various electronics, from phones to televisions. And even after some initial surprise (“You’re dating the phone girl?”) I believe she ultimately welcomed me into the family when MJ and I got married back in 2013.

I learned about her passing when I was in Tokyo. I had just arrived at the hotel and saw messages from MJ, who found out during a layover on his way to meet me there. When he joined me in Tokyo we immediately made plans to return to the US for her funeral the next day. It was a sad, difficult and exhausting time. To make things worse, when we did make it back to Japan after her funeral we learned that another relative had passed away. It was almost too shocking to believe. We continued our Japan trip, mostly because I had to be in the country anyway for a conference. If I’m honest, part of the reason I haven’t gotten around to writing about it yet is because of the intense, mixed feelings around it all.

With this stage set, MJ’s sister told us she was going to host Thanksgiving at her new home in Philadelphia. We initially said we couldn’t make it, but as we thought more about it, we concluded that we deserved a happy trip back east with family. It also gave us the opportunity to take care of some things for MJ’s grandmother, including moving her final possessions out of the nursing home and into storage. We flew to Philadelphia on the day before Thanksgiving, on what turned out to be a surprisingly easy trip, in spite of a layover and it being the busiest travel day of the year.

Given the logistics of our trip, we decided to stay at a hotel in downtown Philadelphia at Penn’s Landing. This gave us some beautiful views of the city and Penn’s Landing itself, especially at night. We also didn’t bother renting a car, instead depending upon cabs and inexpensive daily rentals that lived inside the hotel garage (so convenient!).

Thanksgiving itself was really enjoyable: gathering together for a festive holiday, eating lots of great food and enjoying a couple of bottles of Sonoma Valley wine. Given our travel schedules, the holidays tend to be when we stay home, choosing to visit family during less chaotic times. When I looked back, I realized the last time I had spent Thanksgiving with family was back in 2010, when I traveled to New England to visit my side of the family.


Thanksgiving! Thanks to Irina for posting this!

I also watched a bit of Mystery Science Theater 3000 at the hotel on Thanksgiving morning, keeping up the traditional “Turkey Day” celebrations. Good times.

We were only in town for 3 days, so the rest of our time was split between meals and visits with family, a couple trips to storage and a final meal with just the two of us on Saturday night at Moshulu on Penn’s Landing. Moshulu is “the world’s oldest and largest square rigged sailing vessel still afloat” (source) and I’ve wanted to visit the restaurant it contains for years. Our stay on Penn’s Landing gave us the perfect opportunity as it was just a quick walk down the landing from the hotel to get to it. Dinner was everything I expected and the quirkiness of it being on a ship made it that much more enjoyable. There may have been several hot spiced bourbon cocktails, and a Graham’s Tawny Port flight which included 10, 20, 30 and 40 year samples (we shared it!).

Sunday morning we took a couple of flights that finally brought us home. This trip concluded my travels for 2015, it was nice to end things on a high note.

by pleia2 at December 13, 2015 07:54 PM

December 12, 2015

Akkana Peck

Emacs rich-text mode: coloring and styling plain text

I use emacs a lot for taking notes, during meetings, while watching lectures in a MOOC, or while researching something.

But one place where emacs falls short is highlighting. For instance, if I paste a section of something I'm researching and then want to add a comment about it, differentiating the pasted part from my added comments means resorting to horrible hacks like "*********** My comment:". It's like the stuff Outlook users put in emails because they can't figure out how to quote.

What I really want is a simple rich-text mode, where I can highlight sections of text by changing color or making it italic, bold, underlined.

Enter enriched-mode. Start it with M-x enriched-mode and then you can apply some styles with commands like M-o i for italic, M-o b for bold, etc. These styles may or may not be visible depending on the font you're using; for instance, my font is already bold and emacs isn't smart enough to make it bolder, the way some programs are. So if one style doesn't work, try another one.

Enriched mode will save these styles when you save the file, with a markup syntax like <italic>This text is in italic.</italic> When you load the file, you'll just see the styles, not the markup.

Colors

But they're all pretty subtle. I still wanted colors, and none of the documentation tells you much about how to set them.

I found a few pages saying that you can change the color of text in an emacs buffer using the Edit menu, but I hide emacs's menus since I generally have no use for them: emacs can do everything from the keyboard, one of the things I like most about it, so why waste space on a menu I never use? I do that like this:

(tool-bar-mode 0)
(menu-bar-mode 0)

It turns out that although the right mouse button just extends the selection, Control-middleclick gives a context menu. Whew! Finally a way to change colors! But it's not at all easy to use: Control-middleclick, mouse over Foreground Color, slide right to Other..., click, and the menu goes away and now there's a prompt in the minibuffer where you can type in a color name.

Colors are saved in the file with a syntax like: <x-color><param>red</param>This text is in red.</x-color>

All that clicking is a lot of steps, and requires taking my hands off the keyboard. How do I change colors in an easier, keyboard driven way? I drew a complete blank with my web searches. A somewhat irritable person on #emacs eventually hinted that I should be using overlays, and I eventually figured out how to set overlay colors ((overlay-put (make-overlay ...)) turned out to be the way to do that) but it was a complete red herring: enriched-mode doesn't pay any attention to overlay colors. I don't know what overlays are useful for, but it's not that.

But in emacs, you can find out what's bound to a key with describe-key. Maybe that works for mouse clicks too? I ran describe-key, held down Control, clicked the middle button -- the context menu came up -- then navigated to Foreground Color and Other... and discovered that it's calling (facemenu-set-foreground COLOR &optional START END).

Binding to keys

Finally, a function I can bind to a key! COLOR is just a string, like "red". The documentation implies that START and END are optional, and that the function will apply to the selected region if there is one. But in practice, if you don't specify START and END, nothing happens, so you have to specify them. (region-beginning) and (region-end) work if you have a selected region.
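
For example, with a region selected, the keyboard-driven equivalent of that menu action is a call along these lines (just illustrating the argument order described above):

(facemenu-set-foreground "red" (region-beginning) (region-end))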

Similarly, I learned that Face->italic from that same menu calls (facemenu-set-italic), and likewise for bold, underline etc. They work on the selected region.

But what if there's no region defined? I decided it might be nice to be able to set styles for the current line, without selecting it first. I can use (line-beginning-position) and (line-end-position) for START and END. So I wrote a wrapper function. For that, I didn't want to use specific functions like (facemenu-set-italic); I wanted to be able to pass a property like "italic" to my wrapper function.

I found a way to do that: (put-text-property START END 'face 'italic). But that wasn't quite enough, because put-text-property replaces all properties; you can't make something both italic and bold. To add a property without removing existing ones, use (add-text-properties START END (list 'face 'italic)).

So here's the final code that I put in my .emacs. I was out of excuses to procrastinate, and my enriched-mode bindings worked fine for taking notes on the project which had led to all this procrastination.

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Text colors/styles. You can use this in conjunction with enriched-mode.
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;; rich-style will affect the style of either the selected region,
;; or the current line if no region is selected.
;; style may be an atom indicating a rich-style face,
;; e.g. 'italic or 'bold, using
;;   (put-text-property START END PROPERTY VALUE &optional OBJECT)
;; or a color string, e.g. "red", using
;;   (facemenu-set-foreground COLOR &optional START END)
;; or nil, in which case style will be removed.
(defun rich-style (style)
  (let* ((start (if (use-region-p)
                    (region-beginning) (line-beginning-position)))
                    
         (end   (if (use-region-p)
                    (region-end)  (line-end-position))))
    (cond
     ((null style)      (set-text-properties start end nil))
     ((stringp style)   (facemenu-set-foreground style start end))
     (t                 (add-text-properties start end (list 'face style)))
     )))

(defun enriched-mode-keys ()
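  ;; Keybindings in enriched-mode buffers: C-c i = italic, C-c B = bold,
  ;; C-c u = underline, C-c r = red foreground; the last binding clears styling.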
  (define-key enriched-mode-map "\C-ci"
    (lambda () (interactive)    (rich-style 'italic)))
  (define-key enriched-mode-map "\C-cB"
    (lambda () (interactive)    (rich-style 'bold)))
  (define-key enriched-mode-map "\C-cu"
    (lambda () (interactive)    (rich-style 'underline)))
  (define-key enriched-mode-map "\C-cr"
    (lambda () (interactive)    (rich-style "red")))
  ;; Repeat for any other colors you want from rgb.txt

  (define-key enriched-mode-map (kbd "C-c ")
    (lambda () (interactive)    (rich-style nil)))
  )
(add-hook 'enriched-mode-hook 'enriched-mode-keys)

December 12, 2015 09:48 PM

Elizabeth Krumbach

Pandas and Historical Adventures in Washington D.C.

Time flies, and I’m behind on writing about my recent trips! Back in November, when I was already in Washington D.C. for the LISA15 conference (which I wrote about here), I decided to take some time to see the sights and visit with my friend Danita, who came down for the weekend from Philadelphia.

It was great meeting up with her, we stayed at The George hotel on Capitol Hill, which was just a brisk walk away from The National Mall where all the Smithsonian Museums are. But first, there was the National Zoo!

I’d been to the Smithsonian’s National Zoo before as a youth, and on this trip I actually went multiple times since the LISA15 hotel was less than a mile away. I had a drizzle-filled adventure on Tuesday when I got in, walking the whole zoo. That’s the day I took most of my photos, including all the lions! Unfortunately I only got a quick glimpse of a panda right before it went inside to escape the rain, so I went back on Friday during a conference lunch break. The Friday trip gave me a chance to see a sleeping panda. Danita arrived Friday evening, so on Saturday morning we decided to go back together for one last glimpse, and that’s when I saw a very awake panda! I took a bunch of pictures of the panda eating, walking and playing with a toy. Lots of fun.

More photos from the zoo here: https://www.flickr.com/photos/pleia2/albums/72157661018394046

After swinging by our hotel to drop off our bags, Saturday continued with a visit to the National Museum of the American Indian. It’s one of the few Smithsonian museums I hadn’t been to, so I was really excited to see it. I had also recently read an article about The Great Inka Road: Engineering an Empire exhibit that I really wanted to see.

We took the advice of a friend and many guide books and first had lunch in the museum cafe. They had a wonderful assortment of Native American dishes spanning both continents – a far cry from most museum food! The permanent exhibits are worth the visit, but I also really enjoyed the Inka exhibit. We made it in time for one of their complimentary afternoon tours of the exhibit, where our tour guide Jay walked us through Inka history and geography along the incredible roads they created, which were recently made a World Heritage Site.

The rest of the afternoon was spent over at the Air and Space Museum. A classic, but one I barely remembered, so it was nice to go back. While I was there I also peeked in on the Art of the Airport Tower exhibit and picked up the book. I also picked up a new appreciation for airport towers and on subsequent flights (over Thanksgiving) have made a point to check them out upon landing. Our evening was spent at an Irish pub behind our hotel, where I had a whiskey and hard cider cocktail (it’s not “mixing” if they mix it for you!).

Sunday morning we were up bright and early to go to Ford’s Theatre. The theater itself is a nice one, and they still have plays in it, but of course the major draw is getting to see the presidential box where President Lincoln was shot. After exploring the theater and seeing the box, you go downstairs where they have a surprisingly thorough museum for the basement space it’s in, walking through Lincoln’s presidency with various artifacts, videos and stories.


Ford’s Theatre, presidential box

After the theater, the museum continues across the street at the Petersen House (a boarding house) where he actually died. You first see the downstairs rooms, where all the furniture was sadly unoriginal (contemporary and near contemporary collectors took pieces after his death) and the room where he died, along with a recreation of the bedspread and wallpaper painstakingly created from the only known photo taken at the time (see this article for the photo). Then you take an elevator up several floors to another museum that gives you an immersive and dark tour of the days following Lincoln’s death, including his funerary procession and a large section devoted to the hunt for his assassin, John Wilkes Booth.

As we walked through the gift shops at the conclusion of our tour I was forced to admit to my companion that Lincoln is not one of my favorite presidents. When reflecting on the powers our current presidents use in times of conflict, it’s frightening to think of them going as far as Lincoln did to preserve the union. Many argue that the ends justified the means, but I’m sure it was a terrifying time to be someone who didn’t agree with the government, regardless of north/south allegiance.

Speaking of our founding fathers, our next trip was a visit to the National Archives Museum where the Declaration of Independence, Constitution of the United States, and Bill of Rights are all housed. Before getting to the trio (known together as the Charters of Freedom) we explored the rest of the museum, which was surprisingly large! Even in the time we spent there, we only scratched the surface of the various American-themed displays, and I’d like to go back and resume the adventure some time. We were also surprised to learn about their Spirited Republic: Alcohol in American History exhibit, which provided a glimpse into how alcohol and the laws around it influenced the history of America, as well as our habits around consumption. Fascinating stuff. When we finally made it to the Rotunda to see the Charters of Freedom, the main draw of the museum, it was clear we had picked the right day: there was no wait to get in, and we only had to wait behind a person or two in order to see each of them.


The National Archive, D.C.

After grabbing some lunch we made our way over to the National Museum of Natural History. Another one of my favorites, the museum is full of taxidermied animals and nature-focused exhibits spanning the globe. Their dinosaur/fossil section was sadly closed for major renovation, but they did create a temporary dino hall where I got my selfie with a Tyrannosaurus rex. Awesome. Our evening concluded with some pizza and movies back at the hotel before Danita had to drive back home. I spent my final night in DC at the hotel.

My flight on Monday wasn’t until 3PM, so after getting a bit of work done in the morning I packed up and headed to a place that about a half dozen friends recommended when I mentioned I’d be going to the Air and Space Museum in downtown DC: The Steven F. Udvar-Hazy Center. A relatively new (opened in 2003) addition to the Smithsonian collection, it’s a huge series of hangars with dozens of planes, helicopters and space vehicles of all kinds. It’s also the final resting place of the Space Shuttle Discovery, which looms large over the other space exhibits in the hangar. The museum also notably has an Air France Concorde, an SR-71 Blackbird and historic planes through the years, including the old biplanes I love. Since I had a flight to catch, I only had a couple of hours to enjoy the museum, and this is one you could spend an entire day in. It’s really convenient to Dulles Airport, which I was flying out of. I was able to stash my luggage (just carry-on size) in one of the lockers at the museum and then take a local bus that runs a circuit from the Metro to the Museum and then the Airport – easy! And it only cost a couple bucks. I highly recommend swinging by before a flight or upon arrival; I certainly will make plans to go again.

And with that, my DC trip came to a close. My travels home were a bit of an adventure, with a late departure out of Dulles and then storms upon arrival in Dallas. The storms were so bad that they shut down the air train, and with only a short time to make my connection I dashed across the airport on foot. Exhausted and sweaty, I made it to the gate in time, only to then sit on the plane with the doors closed for nearly 3 hours as the storms caused more delays, which ultimately forced us back to the gate to refuel so we could take a longer route home. I did finally make it home, though, if a few hours later than I had planned. Fortunately I had scored complimentary upgrades on both flights, so as stressful and long as it was, there was at least that much comfort.

by pleia2 at December 12, 2015 08:12 PM

December 10, 2015

Jono Bacon

Why You Should Go To UbuCon in Los Angeles in January

The 21st – 22nd January 2016 are some important dates you need to pencil into your calendar. The reason? That is when the UbuCon Summit is happening in Pasadena, California, USA.

Many moons ago there used to be the Ubuntu Developer Summit events. They brought together Ubuntu developers, community members, Canonical employees, and partners from all over the world. They were important not just for sharing and evolving ideas and projects but also getting people together socially.

When the Ubuntu Developer Summits were switched to an online setting, it left a gaping hole for an in-person event that brings people together. This, my friends, is what the UbuCon Summit in Los Angeles is going to help provide.

Now, to be clear, this isn’t an Ubuntu Developer Summit. It is a different type of event, designed for a much wider demographic: users, developers, artists, educators, partners, businesspeople, and more. The goal is to have fun with Ubuntu, share ideas and opportunities, and build relationships.

The event will also offer some interesting and thought-provoking content. It will feature both keynote and presentation sessions, as well as an unconference where the audience can shape the content.

It is going to be phenomenal for content. There will be talks by Mark Shuttleworth, Elizabeth K Joseph, Stuart Langridge, David Planella, Nathan Haines, Manik Taneja, John Lea, and even my little old self. The unconference will be a wonderful opportunity to delve into a diverse range of topics that relate to the real-world conditions and needs of Ubuntu users.

Importantly though, it will be the networking and social side of the event that will be the most valuable. A big contingent from Canonical will be there, many LoCo teams and members, community members from around the world and more. It is going to be a wonderful opportunity to get the Ubuntu family together and I can’t wait to be a part of it.

Finally, the UbuCon in Los Angeles takes place right before the incredible SCALE14x conference. So, not only can you get out to UbuCon but you can also join what I feel is one of the greatest Linux and Open Source conferences in the world. Oh, and as a total sweetener, the rest of the Bad Voltage team and I will be doing a live show on the second evening of the UbuCon (which is the first day of SCALE14x).

Entrance to UbuCon (and the Bad Voltage live show) is entirely free, and entrance to SCALE14x is really inexpensive.

So, you basically have no excuses, people. Be sure to come out and join us for an incredible time at the UbuCon in Los Angeles.

Find Out More

by Jono Bacon at December 10, 2015 11:14 PM

December 04, 2015

Akkana Peck

Distclean part 2: some useful zsh tricks

I wrote recently about a zsh shell function to run make distclean on a source tree even if something in autoconf is messed up. In order to save any arguments you've previously passed to configure or autogen.sh, my function parsed the arguments from a file called config.log.

But it might be a bit more reliable to use config.status -- I'm guessing this is the file that make uses when it finds it needs to re-run autogen.sh. However, the syntax in that file is more complicated, and parsing it taught me some useful zsh tricks.

I can see the relevant line from config.status like this:

$ grep '^ac_cs_config' config.status
ac_cs_config="'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'"

--enable-foo --disable-bar are options I added purely for testing. I wanted to make sure my shell function would work with multiple arguments.

Ultimately, I want my shell function to call autogen.sh --prefix=/usr/local/gimp-git --enable-foo --disable-bar. The goal is to end up with $args being a zsh array containing those three arguments. So I'll need to edit out those quotes and split the line into an array.

Sed tricks

The first thing to do is to get rid of that initial ac_cs_config= in the line from config.status. That's easy with sed:

$ grep '^ac_cs_config' config.status | sed -e 's/ac_cs_config=//'
"'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'"

But since we're using sed anyway, there's no need to use grep to get the line: we can do it all with sed. First try:

sed -n '/^ac_cs_config/s/ac_cs_config=//p' config.status

Search for the line that starts with ac_cs_config (^ matches the beginning of a line); then replace ac_cs_config= with nothing, and p prints the resulting line. -n tells sed not to print anything except when told to with a p.

But it turns out that if you give a sed substitution a blank pattern, it uses the last pattern it was given. So a more compact version, using the search pattern ^ac_cs_config, is:

sed -n '/^ac_cs_config=/s///p' config.status

But there's also another way of doing it:

sed '/^ac_cs_config=/!d;s///' config.status

! after a search pattern matches every line that doesn't match the pattern. d deletes those lines. Then for lines that weren't deleted (the one line that does match), do the substitution. Since there's no -n, sed will print all lines that weren't deleted.

I find that version more difficult to read. But I'm including it because it's useful to know how to chain several commands in sed, and how to use ! to search for lines that don't match a pattern.

You can also use sed to eliminate the double quotes:

sed '/^ac_cs_config=/!d;s///;s/"//g' config.status
'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'

But it turns out that zsh has a better way of doing that.

Zsh parameter substitution

I'm still relatively new to zsh, but I got some great advice on #zsh. The first suggestion:

sed -n '/^ac_cs_config=/s///p' config.status | IFS= read -r; args=( ${(Q)${(z)${(Q)REPLY}}} ); print -rl - $args

I'll be using the final print -rl - $args for all these examples: it prints an array variable with one member per line. For the actual distclean function, of course, I'll be passing the variable to autogen.sh, not printing it out.

First, let's look at the heart of that expression: args=( ${(Q)${(z)${(Q)REPLY}}} ).

The heart of this is the expression ${(Q)${(z)${(Q)x}}}. The zsh parameter substitution syntax is a bit arcane, but each of the parenthesized letters does some operation on the variable that follows.

The first (Q) strips off a level of quoting. So:

$ x='"Hello world"'; print $x; print ${(Q)x}
"Hello world"
Hello world

(z) splits an expression and stores it in an array. But to see that, we have to use print -l, so array members will be printed on separate lines.

$ x="a b c"; print -l $x; print "....."; print -l ${(z)x}
a b c
.....
a
b
c

Zsh is smart about quotes, so if you have quoted expressions it will group them correctly when assigning array members:

$ x="'a a' 'b b' 'c c'"; print -l $x; print "....."; print -l ${(z)x}
'a a' 'b b' 'c c'
.....
'a a'
'b b'
'c c'

So let's break down the larger expression: this is best read from right to left, inner expressions to outer.

${(Q) ${(z) ${(Q) x }}}
   |     |     |   \
   |     |     |    The original expression, 
   |     |     |   "'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'"
   |     |     \
   |     |      Strip off the double quotes:
   |     |      '--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'
   |     \
   |      Split into an array of three items
   \
    Strip the single quotes from each array member,
    ( --prefix=/usr/local/gimp-git --enable-foo --disable-bar )
Neat!

For more on zsh parameter substitutions, see the Zsh Guide, Chapter 5: Substitutions.

Passing the sed results to the parameter substitution

There's still a little left to wonder about in our expression, sed -n '/^ac_cs_config=/s///p' config.status | IFS= read -r; args=( ${(Q)${(z)${(Q)REPLY}}} ); print -rl - $args

The IFS= read -r seems to be a common idiom in zsh scripting. It takes standard input and assigns it to the variable $REPLY. IFS is the input field separator: you can split variables into words by spaces, newlines, semicolons or any other character you want. IFS= sets it to nothing. But because the input expression -- "'--prefix=/usr/local/gimp-git' '--enable-foo' '--disable-bar'" -- has quotes around it, IFS is ignored anyway.
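As a quick, hypothetical check of that idiom in isolation (this relies on zsh running the last command of a pipeline in the current shell, so $REPLY survives after the pipe):

$ print -r -- "'--enable-foo' '--disable-bar'" | IFS= read -r
$ print -r -- $REPLY
'--enable-foo' '--disable-bar'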

So you can do the same thing with this simpler expression, to assign the quoted expression to the variable $x. I'll declare it a local variable: that makes no difference when testing it in the shell, but if I call it in a function, I won't have variables like $x and $args cluttering up my shell afterward.

local x=$(sed -n '/^ac_cs_config=/s///p' config.status); local args=( ${(Q)${(z)${(Q)x}}} ); print -rl - $args

That works in the version of zsh I'm running here, 5.1.1. But I've been warned that it's safer to quote the result of $(): without quotes, if you ever run the function in an older zsh, $x might end up being set only to the first word of the expression. (Declaring the variable local, as above, also means $x won't still be set once you've returned from the function.) So now we have:

local x="$(sed -n '/^ac_cs_config=/s///p' config.status)"; local args=( ${(Q)${(z)${(Q)x}}} ); print -rl - $args

You don't even need to use a local variable. For added brevity (making the function even more difficult to read! -- but we're way past the point of easy readability), you could say:

args=( ${(Q)${(z)${(Q)"$(sed -n '/^ac_cs_config=/s///p' config.status)"}}} ); print -rl - $args
or even
print -rl - ${(Q)${(z)${(Q)"$(sed -n '/^ac_cs_config=/s///p' config.status)"}}}
... but that final version, since it doesn't assign to a variable at all, isn't useful for the function I'm writing.
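Putting the pieces together, here's a rough sketch of what a config.status-based distclean function could look like, following the same structure as the config.log-based version from my earlier article (treat this as a starting point rather than tested code):

distclean() {
    setopt localoptions errreturn

    # Grab the previously used configure arguments out of config.status.
    local x="$(sed -n '/^ac_cs_config=/s///p' config.status)"
    local args=( ${(Q)${(z)${(Q)x}}} )
    echo "Saved args:" $args

    ./autogen.sh
    ./configure
    make clean

    echo
    echo "==========================="
    echo "Running ./autogen.sh" $args
    sleep 3
    ./autogen.sh $args
}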

December 04, 2015 08:25 PM

December 01, 2015

Jono Bacon

Recommendations For New Runners

Recently I started running. I blame Jason Hibbets for this. While at the Community Leadership Summit earlier this year we got chatting about running and through some gentle persuasion he convinced me to give it a shot.

So I did. Based on Jason’s recommendations I went and bought some running shoes and started using an app called Couch To 5K. The app basically provides a series of workouts that gradually increase in intensity. Each workout involves a mixture of warming up (walking), jogging, brisk walking, and cooling down (walking).

Things were going well. Surprisingly to myself I was really quite enjoying it (more on this later) and I was sticking to the three workouts each week.

Then I made a mistake. I got impatient with the app’s gradual increase in intensity and decided to notch it up a bit. I switched the app off and significantly increased my runs, clocking in some much better workouts.

It felt great but my celebration was short-lived. Knee pain set in. After some research it became clear that the knee pain was because I pushed myself too hard. Many runners warned me of this but I ignored them. Well, I was wrong.

Fortunately though, the experience of buggering up my knee helped me to learn a lot about running that I wish I had known when I started. Thus, I wanted to share five key learnings here.

Now, to be clear: I am not a running expert. I am still very much a novice, so some experts may challenge these recommendations, but I welcome all feedback in the comments!

1. Buy The Right Shoes

Even in my first conversation with Jason the importance of getting good shoes was emphasized. I presumed though that this was mainly about getting “running shoes” as opposed to bog-standard shoes (sneakers/trainers etc).

What I didn’t realize was that we all run in slightly different ways and the purpose of a good shoe is to support you to run in the most optimal way.

As an example, it seems I tend to put a lot of pressure on the inside of my foot. As such, I need shoes that provide a lot of support on the inside to keep my knee from rotating as much (this was the cause of my knee pain).

Interestingly, when I went to a running store to buy shoes after my knee injury the chap in there could tell a lot about how I run from the state of my first pair of shoes. They noticeably drooped on one side, showing the impact of my foot on the inside.

So, don’t just get running shoes, but pay attention to the impact you have on your shoes and use that to make a decision about future shoes too.

2. Take It Slow

Again, one thing that was made clear to me when I started was to take it slow. Although I was using Couch to 5K, I didn’t feel like much of a couch potato, so I jacked up my pace to get my heart racing.

As I mentioned above, this was a mistake.

Running places quite a bit of stress on different parts of your body. It affects your feet, ankles, knees, quads, hamstrings, and so on. Part of the reason why Couch To 5K and similar apps go so slowly is to ramp up your body gradually so it gets used to the impact running has on it.

As my case demonstrates, if you push yourself too hard you get injured. So, don’t do what I did: take it a step at a time (pun intended).

3. Stretch

When I started running my wife told me to stretch when I finished my run. She showed me a few stretches and I did them for a few minutes when I got home.

When the knee injury set in, I asked one of Erica’s friends, Tabitha, who is a runner, what she thought the problem may be. She asked me to do some simple stretches and it became obvious that my body was…well…not all that stretchy. Tabitha made it clear that I needed to stretch both before and after a run.

Fortunately there are plenty of videos on YouTube that show you how. Again, I wish I had realized the importance of stretching when I got started.

4. Strengthen Your Muscles

On a related note to both taking it slowly and stretching, my interest in running helped to illustrate some basic anatomy and the importance of building strength.

What became clear to me is that muscles throughout your body play a role in different elements of running. As an example, your quads play an important role in your knee working effectively. If you don’t have strong, stretched quads, it can result in some knee pain.

As such, I discovered that the actual run is not the only important piece. Stretching before and after and taking time to build strength is important too. Again, there are plenty of videos online that can help with this.

5. The Runner’s High

For quite some time friends of mine who are runners have talked about the runner’s high. Now, it seems different people have a different idea of what this is, but it is basically a special sense of pleasure during or after a run.

I have to admit, I was a bit cynical about this. I don’t enjoy exercise. I never have. It feels like a necessary evil I need to do to stay in shape and healthy.

That changed though. On my first few runs I noticed that I really enjoyed being out of the house and running. I enjoyed the feeling of the wind against my face as I ran. I felt a sense of decompression as the blood flowed and my mind detached from work. I found myself genuinely enjoying the 30 minutes or so that I was out running.

This sensation continued after the run too, particularly once I was out of the shower and dressed. I felt lighter, more nimble, and a real sense of accomplishment. I am not sure if this is the runner’s high others get, but it is a great feeling.

What was really odd was that when I got my knee injury I really started missing getting out there to run. I would never have imagined I would feel this way, but I love it.

So, for those of you who have read this far and are not convinced that running might be both good for you and enjoyable, give it a shot. You never know, you might just enjoy it.

Bonus: Shopping List

Before I wrap up I thought it might also be handy to share a few things I have purchased that can enhance the overall running experience:

  • Compression Underwear – this is important for the gentlemen out there (in much the same way a sports bra is important for the ladies). You just don’t want things bouncing around down there, so get some good compression undies. I have some Asics and Adidas undies.
  • Socks – good sock choice is important. Poor socks can result in blisters so get some runners socks. I use Asics white socks.
  • Belt – I got myself a FlipBelt which is a handy tubular belt you can put your phone, credit card, and ID into. It saves you having to carry them separately or have bulky pockets.
  • Bluetooth Headphones – I tend to listen to music sometimes when I run and sometimes I listen to audio books. I tried wired headphones on my first run and they kept falling out of my ears. So, I picked up a MPow Cheetah headset which I love. It pairs with my phone easily and sounds great.
  • Water Bottle – I didn’t buy anything fancy – just a cheap plastic bottle I picked up at a conference. I shove a little ice in there on a hot day and it works fine.

Anyway, I think that is about it. Be sure to share your additional running tips or questions in the comments!

by Jono Bacon at December 01, 2015 07:04 AM

November 30, 2015

Elizabeth Krumbach

Giving Tuesday (and every day) to support Linux in schools

The Tuesday following Cyber Monday has been designated Giving Tuesday. Whether you observe charitable giving on that day or any other day of the year, the following are organizations I’ve worked with and/or given to that promote one of my own passions: putting Free/Open Source Software into schools and into the hands of others in need.

Partimus

I’ve been on the Board of Directors for Partimus for the past 5 years. In that time we’ve done projects in public charter schools, after school programs and a library. This year our focus has been work at a homeless shelter in San Francisco. See an interview here with Elizabeth Pocock, our on-site contact responsible for oversight of the Partimus computer pilot project.

This is also the non-profit that gets a donation from Boutique Academia for sales of the Ubuntu necklaces and earrings. So purchase a shiny gift for someone this holiday and help out Partimus too!

Partimus is based in the San Francisco Bay Area. We’re also always looking for volunteers, so if you’re familiar with Ubuntu (or Linux in general) and are looking for a way to give back, please contact me at lyz@partimus.org. We’re especially looking for technical talent to help us organize and deliver on some of our technical goals, like creating custom ISOs for our schools and developing solutions to make it easier to deploy them and keep them updated (PXE boot servers, local proxies, etc). You can also hop on our tech-partimus mailing list and browse our archives if you’re interested.

Giving Tuesday post: On Giving Tuesday, help us give computers to low income shelters

Donate here.

Computer Reach

Based in Pittsburgh, Pennsylvania, Computer Reach not only does work in their region, but has deployed Ubuntu-based computers all over the world. This is the organization I went to Ghana with in 2012. Their counts page details the Linux and Mac computers provided to organizations worldwide.

Giving Tuesday post: #GivingTuesday

Donate here.

Reglue

Reglue is based in Austin, Texas. I met founder Ken Starks several years ago at a conference, and his work has always been an inspiration for Partimus. They recently completed a successful Indiegogo campaign to continue their work, but like all of our non-profits they can always use more funding to focus on their core efforts.

See the sidebar on the main site to donate; they also accept hardware donations.

And Beyond

This is just a sampling of organizations doing this work. If you want to donate or work locally, I strongly encourage looking in your area for computer recycling programs using Linux, for both donation and volunteer opportunities.

by pleia2 at November 30, 2015 11:21 PM

November 28, 2015

Jono Bacon

Perceptions and Methods of Good Customer Service

This week I had a rather frustrating customer experience. Now, in these kinds of situations some folks like to take to their blogs to spew their frustration in the direction of the Internet and feel a sense of catharsis.

To be honest, what I found frustrating about this experience was less the outcome and more the way the situation was managed. The frustration then turned into an interesting little thought experiment about the psychology going on in this experience and how it could potentially be improved.

So, I sat down and thought about why the experience was frustrating and came away with some conclusions that I thought might be interesting to share. This may be useful for those of you building your own customer service/engagement departments.

The Problem

A while ago I booked some flights to take my family to England for Christmas. Using an airline Erica and I are both big fans of, we managed to book the trip using miles. We had to be a little flexible on dates/times, but we figured this would be worth it to save the $1500+.

Like anyone picking flights, the times and dates were carefully considered. My parents live up in the north of England and it takes about four hours to get from Heathrow to their house (with a mixture of trains and taxis). Thus we wanted to arrive at Heathrow from San Francisco earlier in the day, and for our return flight to be later in the day to accommodate this four hour trip.

Recently I was sent an email that the airline had decided to change the times of our flights. More specifically, the return flight, which was due to leave at 3.15pm, was now shifted to closer to 12pm. As such, to get to Heathrow with the requisite few hours before our flight, it would have meant leaving my parents’ house at around 5am. Ugh.

Now, to help illustrate the severity of this issue, this would mean getting a 3 year-old up at 4.15am to embark on a four hour journey to London and of course the 11 hour flight back to San Francisco. The early morning would make the whole trip more difficult.

Expected reaction of Jack when this happens (credit)

As you can imagine, we were not particularly amused by this. So, I went to call the airline to see if we could figure out a better solution.

The Response

I called the airline and politely illustrated the problem, complete with all the details of the booking.

I was then informed that they couldn’t do anything to change the flight time (obviously), and there were no other flights that day (understandable).

So, I asked if they could simply re-book my family onto the same flight the following day. This would then mean we could head to the airport, stay in a hotel that evening near Heathrow, and make the noon flight…all without having to cut our holiday short by a day.

I was promptly informed that this was not going to work. The attendant told me that because we had purchased a miles-based ticket, they could only move us to miles-based ticketed seats the following day without a charge. I was also informed that the airline considers anything less than a 5 hour time change to be “insignificant” and thus are not obliged to provide any additional amendments or service. To cap things off I was told that if I had read the Terms Of Service this would have all been abundantly clear.

To explore all possible options I asked how much the change fees would be to move to the same flight the following day but in non-mileage based seats and the resulting cost was $1500; quite a number to swallow.

The airline’s perception of my house (credit)

As I processed this information I was rather annoyed. I booked these tickets in good faith and the airline had put us in this awkward position with the change of times. While I called to explore a flexible solution to the problem, I was instead told there would be no flexibility and that they were only willing to meet their own defined set of obligations.

As you can imagine, I was not particularly happy with this outcome so I felt it appropriate to escalate. I politely asked to speak to a manager and was informed that the manager would not take my call as this was merely a ticket-related issue. I pressed further to ask to speak to a manager and after a number of additional pushbacks about this not being important enough for a manager and that they may not take my call, I was eventually put through.

When I spoke to the manager the same response was re-iterated. We finished the conversation and I made it clear I was not frustrated with any of the staff who I spoke to (they were, after all, just doing their job and don’t set airline policy), but I was frustrated with the airline and I would not be doing business with them in future.

Now to be clear, I am not expecting to be treated like royalty. I just felt the overall situation could have possibly been handled better.

A Better Experience

Now, to be clear before we proceed, I am not an expert on customer service, how it is architected, and the methodology of delivering the best customer service while protecting the legal and financial interests of a company.

I am merely a customer, but I do think there were some underlying principles that exist in people and how we engage around problems such as this that the airline seems to be ignoring.

Let’s first look at what I think the key problems were in this engagement:

Accountability and Assurance

At no point throughout the discussion did one of the customer service reps say:

“Mr Bacon, we know we have put you in an awkward situation, but we assure you we are going to do our level best to find a solution that you and your family are happy with.”

A simple acknowledgement such as this serves three purposes. Firstly, it lets the customer feel the company is willing to accept responsibility. Secondly, it demonstrates a collaborative human side to the company. Finally, and as we will explore later, it equalizes the relationship between the customer and the company. This immediately gets the conversation off to a good start.

Obligations vs. Gestures Of Goodwill

Imagine your friend does something that puts you in an awkward position, for example, saying they will take care of part of a shared project which they then say they are not going to have time to deliver.

Now imagine the conversation looks like this:

You: you have kind of put me in an awkward situation here, friend. What do you think you can do to help resolve it?
Friend: well, based upon the parameters of the project and our friendship I am only obliged to provide you with a certain level of service, which is X.

This is not how human beings operate. When there is a sense that a shared agreement has been compromised, the general expectation is that the person who compromised it will (a) demonstrate a willingness to rectify the situation and (b) treat doing so as a priority.

When we replace thoughtful problem-solving with “obligations” and “terms of service” (which, while legally true and accurate, set a very different tone), the conversation becomes more pedantic and potentially adversarial. This is not what anyone wants. It essentially transforms the discussion from a collaboration into a sense that one party is covering their back and wants to put in minimal effort to solve the problem. This neatly leads me to…

Trust and Favors

Psychology has taught us that favors play an important role in the world. When we feel someone has treated us well we socially feel a responsibility to repay the favor.

Consequently, in business, when consumers feel a company has gone above and beyond, they will often repay that generosity significantly.

In this case the cost to me of reseating my family was $1500. Arguably this will be a lower actual cost to the airline, let’s say $1000.

Now, let’s say the airline said:

“Mr Bacon, as I mentioned it is difficult to move you to the seats on the flight the following day as you have a mileage ticket, but I have talked to my manager and we would be happy to provide a 30% discount.”

If this happened it would demonstrate a number of things. Firstly, the airline was willing to step outside of their published process to solve the customer’s problem. It demonstrates a willingness to find a middle-ground, and it shows that the airline wants to minimize the cost for the customer.

If this had occurred I would have come away singing the praises of the airline. I would be tweeting about how impressed I was, telling my friends that they are “different to the usual airlines”, and certainly keeping my business with them.

This is because I would feel that they took care of me and did me a favor. As such, and as we see elsewhere in the world, I would feel an urge to repay that favor, both with advocacy and future business.

Unfortunately, the actual response of what they are obliged to do and that they are covered by their terms of service shows an unwillingness to work together to find a solution.

Thus, the optimal solution would cost them a $500 loss but assure future business and customer advocacy. The current solution saves them $500 but means they are less likely to get my future business or advocacy.

Relativity and Expectations

People think largely in terms of relativity. We obviously compare products and services but we also compare social constructs and our positions in the world too.

This is important because a business transaction is often a power struggle. If you think about the most satisfying companies you have purchased a product or service from, invariably the ones where you felt like an equal in the transaction were the most rewarding. Compare, for example, the snooty restaurant waiter who looks down at you versus the chatty and talkative waiter who makes you feel at ease. The latter makes you feel more of an equal and thus feels like a better experience.

In this case the airline customer service department made it very clear from the outset that they considered themselves in a position of power. The immediate citing of obligations, terms of service, an unwillingness to escalate the call, and other components essentially put the customer in a submissive position, which rarely helps contentious situations.

The knock-on effect here is expectations: when a customer feels unequal, it sets low expectations in the business relationship and we tend to think less highly of the company. The world is littered with examples of this sense of an unequal relationship, with many cable companies getting a particularly bad reputation here.

Choice Architecture

Another interesting construct in psychology is the importance of choice. Choices provide a fulfilling experience and give people a sense of control and empowerment.

In this case the airline provided no real choices with the exception of laying down $1500 for full-price tickets for the non-mileage seats. If they had instead provided a few options (e.g. a discounted ticket, an option to adjust the flight time/date, or even choices for speaking to other staff members such as a manager to rectify the situation) the overall experience would feel more rewarding.

The Optimal Solution

So, based on all this, how would I have recommended the airline handled this? Well, imagine this conversation (this is somewhat paraphrased to keep it short, but you get the drift):

Me: Good afternoon. We have a bit of a problem where your airline has changed the time my family’s return flights. Now, we have a 3 year-old on this trip and this is going to result in getting up at 4.15am to make the new time. As you can imagine this is going to be stressful, particularly with such a long trip. Is there anything you can do to help?
Airline: I am terribly sorry to hear this. Can you let me know your booking ID please?
Me: Sure, it is ABCDEFG.
Airline: Thank-you, Mr Bacon. OK, I can see the problem now. Firstly, I want to apologize for this. We know that the times of reservations are important and I am sorry your family are in this position. Unfortunately we had to change the time due to XYZ factors, but I also appreciate you are in an uncomfortable situation. Rest assured I want to do everything to make your trip as comfortable as possible. Would you mind if I put you on hold and explore a few options?
Me: Sure.
Airline: OK, Mr Bacon. So the challenge we have is that because you booked a mileage-based ticket, our usual policy is that we can only move you to mileage-based seats. Now, for the day after we sadly don’t have any of these types of seats left. So, we have a few options. Firstly, I could explore a range of flight options across dates that work for you to see if there is something that works by moving the mileage-based seats free of charge. Secondly, we could explore a refund of your miles so you could explore another airline or ticket. Now, there are normal seats available the day after but the fee to switch to them would be around $1500. We do though appreciate you are in an uncomfortable position, particularly with a child, and we also appreciate you are a regular customer due to you booking mileage seats. Unfortunately while I am unable to provide these new seats free of charge…I wish I could but I am unable to…I can provide a discount so we provide a 1/3 off, so you pay $1000. Another option is that I can put you through to my manager if none of these options will work for you. What would you prefer?
Me: Thanks for the options. I think I will go for the $1000 switch, thanks.
Airline: Wonderful. Again, Mr Bacon, I apologize for this…I know none of us would want to be in this position, and we appreciate your flexibility in finding a solution.

If something approximating this outcome occurred, I would have been quite satisfied with the airline, I would have felt empowered, left with a sense that they took care of me, and I would be sharing the story with my friends and colleagues.

This would have also avoided taking up a manager’s time and reduced the overall call time to around 10 – 15 minutes as opposed to the hour that I was on the phone.

To put the cherry on top I would then recommend that the airline sends an email a few days later that says something like this:

Dear Mr Bacon,

One of my colleagues shared with me the issue you had with your recent booking and the solution that was sourced. I want to also apologize for the change in times (we try to minimize this as best we can because we know how disruptive this can be).

I just wanted to follow up and let you know that if you have any further issues or questions, please feel free to call me directly. You can just call the customer service line and use extension 1234.

Kind Regards,

Jane Bloggs, Customer Service Team Manager

This would send yet another signal of clear customer care. Also, while I don’t have any data on-hand to prove this, I am sure the actual number of customers that would call Jane would be tiny, thus you get the benefit of the caring email without the further cost of serving the customer.

Now, some of you may say “well, what if the airline can’t simply slash the cost by a third for the re-seating?”

In actuality I think the solution in many cases is secondary to the handling of the case. If the airline in this case had demonstrated a similar optimal approach that I outline here (acknowledging the issue, sympathizing with the customer, an eagerness to solve the problem creatively, providing choices etc), yet they could not provide any workable solution, I suspect most people would be reasonably satisfied with the effort.

In the end they never solved the problem in our case, so a 4.15am wake-up and a grumpy Jack it is. While rather annoying, in the scheme of things it is manageable. I just thought the psychology behind the story was interesting.

Anyway, sorry for the long post, but I hope this provides some interesting food for thought for those of you building customer service platforms for your companies.

by Jono Bacon at November 28, 2015 10:42 PM

November 27, 2015

Akkana Peck

Getting around make clean or make distclean aclocal failures

Keeping up with source trees for open source projects, it often happens that you pull the latest source, type make, and get an error like this (edited for brevity):

$ make
cd . && /bin/sh ./missing --run aclocal-1.14
missing: line 52: aclocal-1.14: command not found
WARNING: `aclocal-1.14' is missing on your system. You should only need it if you modified `acinclude.m4' or `configure.ac'. You might want to install the `Automake' and `Perl' packages. Grab them from any GNU archive site.

What's happening is that make is set up to run ./autogen.sh (similar to running ./configure except it does some other stuff tailored to people who build from the most current source tree) automatically if anything has changed in the tree. But if the version of aclocal has changed since the last time you ran autogen.sh or configure, then running configure with the same arguments won't work.

Often, running a make distclean, to clean out all local configuration in your tree and start from scratch, will fix the problem. A simpler make clean might even be enough. But when you try it, you get the same aclocal error.

Whoops! make clean runs make, which triggers the rule that configure has to run before make, which fails.

It would be nice if the make rules were smart enough to notice this and not require configure or autogen if the make target is something simple like clean or distclean. Alas, in most projects, they aren't.

But it turns out that even if you can't run autogen.sh with your usual arguments -- e.g. ./autogen.sh --prefix=/usr/local/gimp-git -- running ./autogen.sh by itself with no extra arguments will often fix the problem.

This happens to me often enough with the GIMP source tree that I made a shell alias for it:

alias distclean="./autogen.sh && ./configure && make clean"

Saving your configure arguments

Of course, this wipes out any arguments you've previously passed to autogen and configure. So assuming this succeeds, your very next action should be to run autogen again with the arguments you actually want to use, e.g.:

./autogen.sh --prefix=/usr/local/gimp-git

Before you ran the distclean, you could get those arguments by looking at the first few lines of config.log. But after you've run distclean, config.log is gone -- what if you forgot to save the arguments first? Or what if you just forget that you need to re-run autogen.sh again after your distclean?

To guard against that, I wrote a somewhat more complicated shell function to use instead of the simple alias I listed above.

The first trick is to get the arguments you previously passed to configure. You can parse them out of config.log:

$ egrep '^  \$ ./configure' config.log
  $ ./configure --prefix=/usr/local/gimp-git --enable-foo --disable-bar

Adding a bit of sed to strip off the beginning of the command, you could save the previously used arguments like this:

    args=$(egrep '^  \$ ./configure' config.log | sed 's_^  \$ ./configure __')

(There's a better place for getting those arguments, config.status -- but parsing them from there is a bit more complicated, so I'll follow up with a separate article on that, chock-full of zsh goodness.)

So here's the distclean shell function, written for zsh:

distclean() {
    setopt localoptions errreturn

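    # Save the previously used configure arguments from config.log before it gets overwritten.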
    args=$(egrep '^  \$ ./configure' config.log | sed 's_^  \$ ./configure __')
    echo "Saved args:" $args
    ./autogen.sh
    ./configure
    make clean

    echo
    echo "==========================="
    echo "Running ./autogen.sh $args"
    sleep 3
    ./autogen.sh $args
}

The setopt localoptions errreturn at the beginning is a zsh-ism that tells the shell to exit if there's an error. You don't want to forge ahead and run configure and make clean if your autogen.sh didn't work right. errreturn does much the same thing as the && between the commands in the simpler shell alias above, but with cleaner syntax.

If you're using bash, you could string all the commands on one line instead, with && between them, something like this: ./autogen.sh && ./configure && make clean && ./autogen.sh $args. Or perhaps some bash user will tell me of a better way.
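For anyone who wants the whole thing as a bash function rather than a one-liner, here's a rough, untested sketch (quoting and option handling differ a bit from zsh, so treat it as a starting point rather than a drop-in replacement):

distclean() {
    local args
    # Save the previously used configure arguments before config.log gets overwritten.
    args="$(egrep '^  \$ ./configure' config.log | sed 's_^  \$ ./configure __')"
    echo "Saved args: $args"

    ./autogen.sh && ./configure && make clean || return 1

    echo
    echo "Running ./autogen.sh $args"
    # $args is deliberately unquoted so the shell splits it back into
    # separate arguments; this breaks if any argument contains spaces.
    ./autogen.sh $args
}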

November 27, 2015 08:33 PM

November 25, 2015

Jono Bacon

Supporting Software Freedom Conservancy

There are a number of important organizations in the Open Source and Free Software world that do tremendously valuable work. This includes groups such as the Linux Foundation, Free Software Foundation, Electronic Frontier Foundation, Apache Software Foundation, and others.

One such group is the Software Freedom Conservancy. To get a sense of what they do, they explain it best:

Software Freedom Conservancy is a not-for-profit organization that helps promote, improve, develop, and defend Free, Libre, and Open Source Software (FLOSS) projects. Conservancy provides a non-profit home and infrastructure for FLOSS projects. This allows FLOSS developers to focus on what they do best — writing and improving FLOSS for the general public — while Conservancy takes care of the projects’ needs that do not relate directly to software development and documentation.

Conservancy performs some important work. Examples include bringing projects under their protection, providing input on and driving policy that relates to open software/standards, funding developers to do work, helping refine IP policies, protecting GPL compliance, and more.

This work comes at a cost. The team needs to hire staff, cover travel/expenses, and more. I support their work by contributing, and I would like to encourage you to do so too. It isn’t a lot of money but it goes a long way.

They just kicked off a fundraiser at sfconservancy.org/supporter/ and I would like to recommend you all take a look. They provide an important public service, they operate in a financially responsible way, and their work is well intended and executed.

by Jono Bacon at November 25, 2015 03:09 AM

November 23, 2015

Elizabeth Krumbach

LISA15 wrap-up

From November 11th through 13th I attended and spoke at Usenix’s LISA15 (Large Installation Systems Administration) conference. I participated in a women in tech panel back in 2012, so I’d been to the conference once before, but this was the first time I submitted a talk. A huge thanks goes to Tom Limoncelli for reaching out to me to encourage me to submit, and I was amused to see my response to his encouragement ended up being the introduction to a blog post earlier this year. LISA has changed!

The event program outlines two main sections of LISA, tutorials and conference. I flew in on Tuesday in order to attend the three conference days from Wednesday through Friday. I picked up my badge Tuesday night and was all ready for the conference come Wednesday morning.

Wednesday began with a keynote from Mikey Dickerson of the U.S. Digital Service. It was one of the best talks I’ve seen all year, and I go to a lot of conferences. Launched just over a year ago (August 2014), the USDS is a part of the US executive office tasked with doing work for and advising federal agencies about technology. His talk centered around the work he did post launch of healthcare.gov. He was working at Google at the time and was brought in as one of the experts to help rescue the website after the catastrophic failed launch. Long hours, a critical 24-hour news cycle that kept the pressure on to fix it, and the work of convincing everyone to use best practices refined by the industry made for an amusing and familiar tale. The reasons for the failure were painfully easy to predict: no monitoring, no incident response plan or post-mortems, no formal testing and release process. These things are fundamental to software development in the industry today, and for whatever reason (time? money?) were left off this critical launch. The happy ending was that the site now works (though he wouldn’t go as far as saying it was “completely fixed”) and their success could be measured by the lack of news about the website during the 2014-2015 enrollment cycle. He also discussed some of the other work the USDS was up to, including putting together Requirements for Federal Websites and Digital Services, improvements to VA disability processing and the creation of the College Scorecard.


A talk by Mikey Dickerson of the USDS opens up LISA15

I then went to see Supercomputing for Healthcare: A Collaborative Approach to Accelerating Scientific Discovery (slides linked on that page) presented by Patricia Kovatch of the Icahn School of Medicine at Mount Sinai. She started off by talking about the vast amounts of data collected by facilities like Mount Sinai and how important it is to have that data accessible and mine-able by researchers who are looking for cures to health problems. Then she dove into collaboration, the keystone of her talk, bringing up several important social points. Even as a technologist, you should understand the goals of everyone you work with, from the mission statement of your organization to yourself, your management, your clients and the clients (or patients!) served by the organization. Communication is key, and she recommended making non-tech friendly visualizations (that track metrics which are important – and re-evaluate those often), monthly reports and open meetings where interested parties can participate and build trust in your organization. She also covered some things that can be done to influence user behavior, like creating a “free” compute queue that’s lower priority but that a department doesn’t need to pay for, to encourage its use rather than taking over the high priority queue for everything (because everyone’s job is high priority when it’s all the same to them…). In case it’s not obvious, there was a lot of information squeezed into her time slot! I can’t imagine any team realistically going from being a poorly communicating department to adopting all of these suggestions at once, but she presents a fantastic array of helpful ideas that can be implemented slowly over time, each of which would help an organization. The slides are definitely worth a browse.

Next up was my OpenStack colleague Devananda van der Veen, who was talking about Ironic: A Modern Approach to Hardware Provisioning. Largely divorcing Ironic from OpenStack, he spent this talk showing how to use it as a standalone tool for hardware provisioning. He did begin by talking about how tools like OpenStack handle VMs, which are themselves abstractions of computers, and how Ironic takes that one step further: instead of a VM, you have hardware presented as an abstraction of a computer, putting bare metal and VMs on a similar footing within OpenStack’s tooling. He spent a fair amount of time talking about how much effort hardware manufacturers have put into writing hardware drivers, and how quickly adoption in production has taken off, with companies like Rackspace and Yahoo! being very public about their usage.

The hallway track was strong at this conference! The next talk I attended was in the afternoon, The Latest from Kubernetes by Tim Hockin. As an open source project, I feel like Kubernetes has moved very quickly since I first heard about it, so this was a really valuable talk that skipped over introductory details and went straight to new features and improvements in version 1.1. There’s an iptables kube-proxy (yay kernel!), support for a layer 7 load balancer (Ingress), namespaces, resource isolation, quota and limits, network plugins, persistent volumes, secrets handling and an alpha release of daemon sets. And his talk ran long, so he wasn’t able to get to everything! Slides, all 85 of them, are linked to the talk page and are valuable even without the accompanying talk.

My day wrapped up with My First Year at Chef: Measuring All the Things by Nicole Forsgren, the Director of Organizational Performance & Analytics at Chef. Nicole presented a situation where she joined a company that wanted to do better tracking of metrics within a devops organization, and outlined how she made this happen at Chef. The first step was just talking about metrics: do you have them? What should you measure? She encouraged making sure both dev and ops were included in the metrics discussions so you’re always on the same page and talking about the same things. In starting these talks, she also suggested the free ~20 page book Data Driven: Creating a Data Culture for framing the discussions. She then walked through creating a single-page scorecard for the organization: pick a few key things they want to see happen or improve, then work out how to set targets and measure progress and success. Benchmarks were also cited as important, so you can see how you’re doing compared to where you began and more generally in the industry. Advice was also given about what kinds of measurement numbers to look at: internal, external, cultural, whether subjective or objective makes the most sense for each metric, and how to go about subjective measuring.


Nicole Forsgren on “Measuring All the Things”

I had dinner with my local friend Mackenzie Morgan. I hadn’t seen her since my wedding 2.5 years ago, so it was fun to finally spend time catching up in person, and offered a stress-free conclusion to my first conference day.

The high-quality lineup of keynote speakers continued on Thursday morning with Christopher Soghoian of the ACLU, who came to talk about Sysadmins and Their Role in Cyberwar: Why Several Governments Want to Spy on and Hack You, Even If You Have Nothing to Hide. He led with the fact that many systems administrators are smart enough to know how to secure themselves, but many don’t take precautions at home: we use poor passwords, don’t encrypt our hard drives, etc. I’m proud to say that I’m paranoid enough that I actually am pretty cautious personally, but I think that stems from being a hobbyist first; it’s always been natural for my personal stuff to be just as secure as what I happen to be paid to work on. With that premise, he dove into the government spying that was made clear by Snowden’s documents and high profile cases of systems administrators and NOC workers being targeted personally to gain control of the systems they manage, whether through technical means (say, sloppy ssh key handling), social engineering or stalking and blackmail. Known targets have been people working for the government and sysadmins at energy and antivirus companies, but he noted any of us could be a target if the data we’re responsible for administering is valuable in any way. I can’t say any of the information in the talk was new to me, but it was presented in a way that was entertaining and makes me realize that I probably should pay more attention in my day to day work. Bottom line: even if you’re just an innocent, self-proclaimed boring geek who goes home and watches SciFi after work, you need to be vigilant. See, I have a reason to be paranoid!

I picked up talks in the afternoon by attending one on fwunit: Unit Testing and Monitoring Your Network Flows with Fwunit by Dustin J. Mitchell. The tool was specifically designed for workflows at Mozilla so only a limited set of routers and switches are supported right now (Juniper SRX, AWS, patches welcome for others), but the goal was to be able to do flow monitoring on their network in order to have a good view into where and how traffic moved through their network. They also wanted to be able to do this without inflexible proprietary tooling and in a way that could be scripted into their testing infrastructure. Did a change they make just cut off a bunch of traffic that is needed by one of their teams? Alert and revert! Future work includes improvements to tracking ACLs, optimized statistic gathering and exploring options to test prior to production so reverts aren’t needed.

Keeping with the networking thread, Dinesh G Dutt of Cumulus Networks spoke next on The Consilience Of Networking and Computing. The premise of his talk was that the networking world is stuck in a sea of proprietary tooling that isn’t trivial to use, and the industry there is losing out on a lot of the promises of devops since it’s difficult to automate everything in an effective manner. He calls for a more infrastructure-as-code-driven plan forward for networking and cited places where progress is being made, like the Open Compute Project. His talk reminded me of the OpenConfig working group that an acquaintance has been involved with, so it does sound like there is some consensus among network operators about where they want to see the future go.

The final talk I went to on Thursday was Vulnerability Scanning’s Not Good Enough: Enforcing Security and Compliance at Velocity Using Infrastructure As Code by Julian Dunn. He was preaching to the choir a bit as he introduced how useless standard vulnerability scanning is to us sysadmins (“I scanned for your version of Apache, and that version number is vulnerable” “…do you not understand how distro patches work?”) and expressed how challenging the scans are to keep up with. His proposal was twofold. First, that companies get more in the habit of prioritizing security in general rather than passing arbitrary compliance tests. Second, to consolidate the tooling used by everyone and integrate it into the development and deployment pipeline to make sure security standards are adhered to in the long run (not just when the folks testing for compliance are in the building). To this end, he promoted use of the Chef InSpec project.

Thursday evening was the LISA social, but I skipped that in favor of a small dinner I was invited to at a local Ethiopian restaurant. Fun fact: I’ve only ever eaten Ethiopian food when I’m traveling, and the first time I had it was in 2012 when I was in San Diego, following my first LISA conference!

The final day of the conference began with a talk by Jez Humble on Lean Configuration Management. He spent some time reflecting on modern methodologies for product development (agile, change management, scrum), and discussed how today with the rapid pace of releases (and sometimes continuous delivery) there is an increasing need to make sure quality is built in at the source and bugs are addressed quickly. He then went into the list of very useful indicators for a successful devops team:

  • Use of revision control
  • Failure alerts from properly configured logging and monitoring
  • Developers who merge code into trunk (not feature branches! small changes!) daily
  • Peer review driven change approval (not non-peer change review boards)
  • Culture that exhibits the Generative organizational structure as defined by R Westrum in his A typology of organisational cultures

He also talked a fair amount about team structures and the risks when not only dev and ops are segregated, but also product development and others in the organization. He proposed bringing them closer together, even putting an ops person on a dev team and making sure business interests and goals in the product are also clearly communicated to everyone involved.

It was a pleasure to have my talk following this one, as our team strives to tick off most of the boxes when it comes to having a successful team (though we don’t really do active, alerting monitoring). I spoke on Tools for Distributed, Open Source Systems Administration (slides are available on the linked page), where I walked through the key strategies and open source tools we’re using as a team that’s distributed geographically and across time zones. I talked about our Continuous Integration system (the heart of our work together), various IRC channels we use for different purposes (day to day sync-up, meetings, sprints, incidents), use of etherpads for collaborative editing and work and how we have started to address hand-offs between time zones (mostly our answer is “hire more people in that time zone so they have someone to work with”). After my talk I had some great chats with folks either doing similar work, or trying to nudge their organization into being productive across offices. The talk was also well attended, so huge thanks to everyone who came out to it.

At lunch time I had a quick meal with Ben Cotton before sneaking off to the nearby zoo to see if I could get a glimpse of the pandas. I saw a sleeping panda. I was back in time for the first talk after lunch, Thomas A. Limoncelli on Transactional System Administration Is Killing Us and Must be Stopped. Many systems administrators live in a world of tickets. Tickets come in, they are processed, we’re always stressed because we have too many tickets and are always running around to get them done with poor tooling for priority (everything is important!). It also leads to a very reaction-driven workflow instead of fixing fundamental long term issues, and it makes long term planning very hard. It also creates a bad power dynamic: sysadmins begin to see users as a nuisance, and users are always waiting on those sysadmins in order to get their work done. Plus, users hate opening tickets and sysadmins hate reading tickets opened by users. Perhaps worst of all, we created this problem by insisting upon usage of ticketing systems in the 90s. Whoops. In order to solve this, his recommendations are very much in line with what I’d been hearing at the conference all week: embed ops with dev, build self-service tooling so repeatable things are no longer manually done by sysadmins (automate, automate, automate!), have developers write their own monitors for their software (ops don’t know how it works, the devs do, they can write better monitoring than just pinging a server!). He also promoted the usage of Kanban and building your team schedule so that there is a rotating role for emergencies and others are able to focus on long term project work.

The final talk of the main conference I attended was The Care and Feeding of a Community by Jessica Hilt. I’ve been working with communities for a long time, even holding some major leadership positions, but I really envy the experience that Jessica brought to her talk, particularly since she’s considerably more outgoing and willing to confront conflict than I am. She began with an overview of different types of communities and how their goals matter so you can collect the right group of people for the community you’re building. She stressed that a goal like cooperative learning (in educational communities, tech communities and beyond) is a valuable use of a group’s time, helps build expertise and encourages retention when members are getting value. Continuing on a similar theme, networking and socialization are important, so that people have a bond with each other and provide a positive feedback loop that keeps the community healthy. During a particularly amusing part of her talk, she also mentioned that you want to include people who complain, since the complainers are often passionate about the group topic but just grumpy, and they can be a valuable asset. Once you have ideas and potential members identified, you can work on organizing. What are the best tools to serve this community? What rules need to be in place to make sure people are treated fairly and with respect? She concluded by talking about long term sustainability, which includes re-evaluating the purpose of the group from time to time, making sure it’s still attracting new members, confirming that the tooling is still effective and that the rules in place are being enforced.

During the break before the closing talks of the conference I had the opportunity to meet the current Fedora Project Lead, Matthew Miller. Incidentally, it was the same day that my tenure on the Ubuntu Community Council officially expired, so we were able to have an interesting chat about leadership and community dynamics in our respective Linux distributions. We have more in common than we tend to believe.

The conference concluded with a conference report from the LISA Build team that handled the network infrastructure for the conference. They presented all kinds of stats about traffic and devices and stories of their adventures throughout the conference. I was particularly amused when they talked about some of the devices connecting, including an iPod. I couldn’t have been the only one in the audience brainstorming what wireless devices I could bring next year to spark amusement in their final report. They then handed it off to a tech-leaning comedian who gave us a very unusual, meandering talk that kept the room laughing.

This is my last conference of the year and likely my last talk, unless someone local ropes me into something else. It was a wonderful note to land on in spite of being tired from so much travel this past month. Huge thanks to everyone who took time to say hello and invite me out, it went a long way to making me feel welcome.

More photos from the conference here: https://www.flickr.com/photos/pleia2/sets/72157660670374520

by pleia2 at November 23, 2015 04:35 AM

November 20, 2015

Elizabeth Krumbach

Ubuntu Community Appreciation Day

Often times, Ubuntu Community Appreciation Day sneaks up on me and I don’t have an opportunity to do a full blog post. This time I was able to spend several days reflecting on who has had an impact on my experience this year, and while the list is longer than I can include here (thanks everyone), there are some key people who I do need to thank.

José Antonio Rey

If you’ve been involved with Ubuntu for any length of time, you know José. He’s done extraordinary work as a volunteer across various areas in Ubuntu, but this year I got to know him just a little bit better. He and his father picked me up from the airport in Lima, Peru when I visited his home country for UbuCon Latinoamérica back in August. In the midst of preparing for a conference, he also played tour guide my first day as we traveled the city to pick up shirts for the conference and then took time to have lunch at one of the best ceviche places in town. I felt incredibly welcome as he introduced me to staff and volunteers and checked on me throughout the conference to make sure I had what I needed. Excellent conference with incredible support, thank you José!

Naudy Urquiola

I met Naudy at UbuCon Latinoamérica, and I’m so glad I did. He made the trip from Venezuela to join us all, and I quickly learned how passionate and dedicated to Ubuntu he was. When he introduced himself he handed me a Venezuelan flag, which hung off my backpack for the rest of the conference. Throughout the event he took photos and has been sharing them since, along with other great Ubuntu tidbits that he’s excited about, a constant reminder of the great time we all had. Thanks for being such an inspirational volunteer, Naudy!


Naudy, me, Jose

Richard Gaskin

For the past several years Richard has led UbuCon at the Southern California Linux Expo, rounding up a great list of speakers for each event and making sure everything goes smoothly. This year I’m proud to say it’s turning into an even bigger event, as the UbuCon Summit. He’s also got a great Google+ feed. But for this post, I want to call out that he reminds me why we’re all here. It can become easy to get burnt out as a volunteer on open source, feel uninspired and tired. During my last one-on-one call with Richard, his enthusiasm around Ubuntu for enabling us to accomplish great things brought back my energy. Thanks to Ubuntu I’m able to work with Partimus and Computer Reach to bring computers to people at home and around the world. Passion for bringing technology to people who lack access is one of the reasons I wake up in the morning. Thanks to Richard for reminding me of this.

Laura Czajkowski, Michael Hall, David Planella and Jono Bacon

What happens when you lock 5 community managers in a convention center for three days to discuss hard problems in our community? We laugh, we cry, we come up with solid plans moving forward! I wrote about the outcome of our discussions from the Community Leadership Summit in July here, but beyond the raw data dump provided there, I was able to connect on a very personal level with each of them. Whether it was over a conference table or over a beer, we were able to be honest with each other to discuss hard problems and still come out friends. No blame, no accusations, just listening, talking and more listening. Thank you all, it’s an honor to work with you.


Laura, David, Michael and me (Jono took the picture!)

Paul White

For the past several years, Paul White has been my right hand man with the Ubuntu Weekly Newsletter. If you enjoy reading the newsletter, you should thank him as well. As I’ve traveled a lot this year and worked on my next book, he’s been keeping the newsletter going, from writing summaries to collecting links, with me just swinging in to review, make sure all the ducks are lined up and that the release goes out on time. It’s often thankless work with only a small team (obligatory reminder that we always need more help, see here and/or email editor.ubuntu.news@ubuntu.com to learn more). Thank you Paul for your work this year.

Matthew Miller

Matthew Miller is the Fedora Project Lead; we were introduced last week at LISA15 by Ben Cotton in an amusing Twitter exchange. He may seem like an interesting choice for an Ubuntu appreciation blog post, but this is your annual reminder that as members of Linux distribution communities, we’re all in this together. In the 20 or so minutes we spoke during a break between sessions, we were able to dive right into discussing leadership and community, understanding each other’s jokes and pain points. I appreciate him today because his ability to listen and his insights have enriched my experience in Ubuntu by bringing in a valuable outside perspective and making me feel like we’re not in this alone. Thanks mattdm!


Matt holds my very X/Ubuntu laptop, I hold a Fedora sticker

You

If you’re reading this, you probably care about Ubuntu. Thank you for caring. I’d like to send you a holiday card!

by pleia2 at November 20, 2015 05:15 PM

November 18, 2015

Elizabeth Krumbach

Holiday cards 2015!

Every year I send out a big batch of winter-themed holiday cards to friends and acquaintances online.

Holiday cable car

Reading this? That means you! Even if you’re outside the United States!

Send me an email at lyz@princessleia.com with your postal mailing address. Please put “Holiday Card” in the subject so I can filter it appropriately. Please do this even if I’ve sent you a card in the past, I won’t be reusing the list from last year.

If you’re an Ubuntu fan, let me know and I’ll send along some stickers too :)

Typical disclaimer: My husband is Jewish and we celebrate Hanukkah, but the cards are non-religious, with some variation of “Happy holidays” or “Season’s greetings” on them.

by pleia2 at November 18, 2015 07:04 PM

November 16, 2015

Eric Hammond

Using AWS CodeCommit With Git Repositories In Multiple AWS Accounts

set up each local CodeCommit repository clone to use a specific cross-account IAM role with git clone --config and aws codecommit credential-helper

When I started testing AWS CodeCommit, I used the Git ssh protocol with uploaded ssh keys to provide access, because this is the Git access mode I’m most familiar with. However, using ssh keys requires each person to have an IAM user in the same AWS account as the CodeCommit Git repository.

In my personal and work AWS usage, each individual has a single IAM user in a master AWS account, and those users are granted permission to assume cross-account IAM roles to perform operations in other AWS accounts. We cannot use the ssh method to access Git repositories in other AWS accounts, as there are no IAM users in those accounts.

AWS CodeCommit comes to our rescue with an alternative https access method that supports Git Smart HTTP, and the aws-cli offers a credential-helper feature that integrates with the git client to authenticate Git requests to the CodeCommit service.

In my tests, this works perfectly with cross-account IAM roles. After the initial git clone command, there is no difference in how git is used compared to the ssh access method.

Most of the aws codecommit credential-helper examples I’ve seen suggest you set up a git config --global setting before cloning a CodeCommit repository. A couple even show how to restrict the config to AWS CodeCommit repositories only so as to not interfere with GitHub and other repositories. (See “Resources” below)

I prefer to have the configuration associated with the specific Git repositories that need it, not in the global setting file. This is possible by passing in a couple --config parameters to the git clone command.

Create/Get CodeCommit Repository

The first step in this demo is to create a CodeCommit repository, or to query the https endpoint of an existing CodeCommit repo you might already have.

Set up parameters:

repository_name=...   # Your repository name
repository_description=$repository_name   # Or more descriptive
region=us-east-1

If you don’t already have a CodeCommit repository, you can create one using a command like:

repository_endpoint=$(aws codecommit create-repository \
  --region "$region" \
  --repository-name "$repository_name" \
  --repository-description "$repository_description" \
  --output text \
  --query 'repositoryMetadata.cloneUrlHttp')
echo repository_endpoint=$repository_endpoint

If you already have a CodeCommit repository set up, you can query the https endpoint using a command like:

repository_endpoint=$(aws codecommit get-repository \
  --region "$region" \
  --repository-name "$repository_name" \
  --output text \
  --query 'repositoryMetadata.cloneUrlHttp')
echo repository_endpoint=$repository_endpoint
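
Either way, the value printed is an https clone URL. As a rough illustration (the repository name here is a hypothetical placeholder), it follows this pattern:

https://git-codecommit.us-east-1.amazonaws.com/v1/repos/myrepo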

Now, let’s clone the repository locally, using our IAM credentials. With this method, there’s no need to upload ssh keys or modify the local ssh config file.

git clone

The git command line client allows us to pass specific config options to a clone operation and will add those config settings to the repository for future git commands to use.

Each repository can have a specific aws-cli profile that you want to use when interacting with the remote CodeCommit repository through the local Git clone. The profile can specify a cross-account IAM role to assume, as I mentioned at the beginning of this article. Or, it could be a profile that specifies AWS credentials for an IAM user in a different account. Or, it could simply be "default" for the main profile in your aws-cli configuration file.
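
For reference, a cross-account profile of this kind is defined in the aws-cli configuration file (~/.aws/config). This is a minimal sketch; the profile name, account id, and role name are hypothetical placeholders, so adjust them to match your own setup:

[profile codecommit-crossaccount]
role_arn = arn:aws:iam::123456789012:role/codecommit-access
source_profile = default
region = us-east-1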

Here’s the command to clone a Git repository from CodeCommit, and for authorized access, associate it with a specific aws-cli profile:

profile=$AWS_DEFAULT_PROFILE   # Or your aws-cli profile name

git clone \
  --config 'credential.helper=!aws codecommit --profile '$profile' --region '$region' credential-helper $@' \
  --config 'credential.UseHttpPath=true' \
  $repository_endpoint
cd $repository_name

At this point, you can interact with the local repository, pull, push, and do all the normal Git operations. When git talks to CodeCommit, it will use aws-cli to authenticate each request transparently, using the profile you specified in the clone command above.
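
If you want to confirm that these settings were stored with this specific repository rather than in your global configuration, you can list the repository-local credential settings; the output should look roughly like the two values passed to git clone above:

git config --local --get-regexp '^credential'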

Clean up

If you created a CodeCommit repository to follow the example in this article, and you no longer need it, you can wipe it out of existence with this command:

# WARNING! DESTRUCTIVE! CAUSES DATA LOSS!
aws codecommit delete-repository \
  --region "$region" \
  --repository-name "$repository_name"

You might also want to delete the local Git repository.
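
For example, since the clone step above left us inside the repository directory:

cd ..
rm -rf "$repository_name"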

With the https access method in CodeCommit, there is no need to upload ssh keys to IAM or to delete them later, as all access control is performed seamlessly through standard AWS authentication and authorization controls.

Resources

Here are some other articles that talk about CodeCommit and the aws-cli credential-helper.

In Setup Steps for HTTPS Connections to AWS CodeCommit Repositories on Linux, AWS explains how to set up the aws-cli credential-helper globally so that it applies to all repositories you clone locally. This is the simplistic setting that I started with before learning how to apply the config rules on a per-repository basis.

In Using CodeCommit and GitHub Credential Helpers, James Wing shows how Amazon’s instructions cause problems if you have some CodeCommit repos and some GitHub repos locally and how to fix them (globally). He also solves problems with Git credential caches for Windows and Mac users.

In CodeCommit with EC2 Role Credentials, James Wing shows how to set up the credential-helper system wide in cloud-init, and uses CodeCommit with an IAM EC2 instance role.

Original article and comments: https://alestic.com/2015/11/aws-codecommit-iam-role/

November 16, 2015 06:06 PM

Jono Bacon

Atom: My New Favorite Code Editor

As a hobbyist Python programmer, over the years I have tried a variety of different editors. Back in the day I used to use Eclipse with the PyDev plugin. I then moved on to use GEdit with a few extensions switched on. After that I moved to Geany. I have to admit, much to the shock of some of you, I never really stuck with Sublime, despite a few attempts.

As some of you will know, this coming week I start at GitHub as Director of Community. Like many, when I was exploring GitHub as a potential next career step, I did some research into what the company has been focusing their efforts on. While I had heard of the Atom editor, I didn’t realize it came from GitHub. So, I thought I would give it a whirl.

Now, before I go on, I rather like Atom, and some of you may think that I am only saying this because of my new job at GitHub. I assure you that this is not the case. I almost certainly would have loved Atom if I had discovered it without the possibility of a role at GitHub, but you will have to take my word for that. Irrespective, you should try it yourself and make your own mind up.

My Requirements

Going into this I had a set of things I look for in an editor that tends to work well with my peanut-sized brain. These include:

  • Support for multiple languages.
  • A simple, uncluttered, editor with comprehensive key-bindings.
  • Syntax highlighting and auto-completion for the things I care about (Python, JSON, HTML, CSS, etc).
  • Support for multiple files, line numbers, and core search/replace.
  • A class/function view for easily jumping around large source files.
  • High performance in operation and reliable.
  • Cross-platform (I use a mixture of Ubuntu and Mac OS X).
  • Nice to have but not required: integrated terminal, version control tools.

Now, some of you will think that this mixture of ingredients sounds an awful lot like an IDE. This is a reasonable point, but what I wanted was a simple text editor, just with a certain set of key features…the ones above…built in. I wanted to avoid the IDE weight and clutter.

This is when I discovered Atom, and this is when it freaking rocked my world.

The Basics

Atom is an open source cross-platform editor. There are builds available for Mac, Windows, and Linux. The source is, of course, available on GitHub too. As a side point, and as an Ubuntu fan, I am hoping Atom is brought into Ubuntu Make and I am delighted to see didrocks is on it.

Atom is simple and uncluttered.

As a core editor it seems to deliver everything you might need. Auto-completion, multiple panes, line numbers, multiple file support, search/replace features etc. It has the uncluttered and simple user interface I have been looking for and it seems wicked fast.

Stock Atom also includes little niceties such as markdown preview, handy for editing README.md files on GitHub:

Editing Markdown is simple with the preview pane.

So, in stock form it ticks off most of the requirements listed above.

A Hackable Editor

Where it gets really neat is that Atom is a self-described hackable text editor. Essentially what this means is that Atom is a desktop application built with JavaScript, CSS, and Node.js. It uses another GitHub project called Electron that provides the ability to build cross-platform desktop apps with web technologies.

Consequently, basically everything in Atom can be customized. Now, there are core exposed customizations such as look and feel, keybindings, wrapping, invisibles, tabs/spaces etc, but then Atom provides an extensive level of customization via themes and packages. This means that if the requirements I identified above (or anything else) are not in the core of the editor, they can be switched on if there are suitable Atom packages available.

Now, for a long time text editors have been able to be tuned and tweaked like this, but Atom has taken it to a new level.

Firstly, the interface for discovering, installing, enabling, and updating plugins is incredibly simple. This is built right into Atom and there are thankfully over 3,000 packages available for expanding Atom in different ways.

Searching for and installing plugins is built right into Atom.

Thus, Atom at the core is a simple, uncluttered editor that provides the features the vast majority of programmers would want. If something is missing you can then invariably find a package or theme that implements it, and if you can’t, Atom is extensively hackable to create that missing piece and share it with the world. This arguably provides the ability for Atom to satisfy pretty much everyone while always retaining a core that is simple, sleek, and efficient.

My Packages

To give you a sense of how I have expanded Atom, and some examples of how it can be used beyond the default core that is shipped, here are the packages I have installed.

Please note: many of the screenshots below are taken from the respective plugin pages, so the credit is owned by those pages.

Symbols Tree View

Search for symbols-tree-view in the Atom package installer.

This package simply provides a symbols/class view on the right side of the editor. I find this invaluable for jumping around large source files.

symbols

Merge Conflicts

Search for merge-conflicts in the Atom package installer.

A comprehensive tool for unpicking merge conflicts that you may see when merging in pull requests or other branches. This makes handling these kinds of conflicts much easier.

merge

Pigments

Search for pigments in the Atom package installer.

A neat little package for displaying color codes inline in your code. This makes it simple to get a sense of what color that random stream of characters actually relates to.

pigments

Color Picker

Search for color-picker in the Atom package installer.

Another neat color-related package. Essentially, it makes picking a specific color as easy as navigating a color picker. Handy for when you need a slightly different shade of a color you already have.

color

Terminal Plus

Search for terminal-plus in the Atom package installer.

An integrated terminal inside Atom. I have to admit, I don’t use this all the time (I often just use the system terminal), but this adds a nice level of completeness for those who may need it.

term

Linter

Search for linter in the Atom package installer.

This is a powerful base Linter for ensuring you are writing, y’know, code that works. Apparently it has “cow powers” whatever that means.

linter
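
If you prefer the command line, all of the packages above can also be installed in one go with apm, the package manager that ships with Atom (assuming apm is available on your PATH):

apm install symbols-tree-view merge-conflicts pigments color-picker terminal-plus linter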

In Conclusion

As I said earlier, editor choice is a very personal thing. Some of you will be looking at this and won’t be convinced about Atom. That is totally cool. Live long and edit in whatever tool you prefer.

Speaking personally though, I love the simplicity, extensibility, and innovation that is going into Atom. It is an editor that lets me focus on writing code and doesn’t try to force me into a mindset that doesn’t feel natural. Give it a shot, you may quite like it.

Let me know what you think in the comments below!

by Jono Bacon at November 16, 2015 06:12 AM

November 13, 2015

Jono Bacon

Blogging, Podcasting, or Video?

Over the course of my career I have been fortunate to meet some incredible people and learn some interesting things. These have been both dramatic new approaches to my work and small insights that provide a different lens to look at a problem through.

When I learn these new insights I like to share them. This is the way we push knowledge forward: we share, discuss, and remix it in different ways. I have benefited from the sharing of others, so I feel I should do the same.

Therein lies a dilemma though: what is the best medium for transmitting thoughts? Do we blog? Use social media? Podcasting? Video? Presentations? How do we best present content for (a) wider consumption, (b) effectively delivering the message, and (c) simple sharing?

Back of the Napkin

In exploring this I did a little back of the napkin research. I asked a range of people where they generally like to consume media and what kind of media formats they are most likely to actually use.

The response was fairly consistent. Most of us seem to discover material on social media these days and while video is considered an enjoyable experience if done well, most people tend to consume content by reading. There were various reasons shared for this:

  • It is quicker to read a blog post than watch a video.
  • I can’t watch video at work, on my commute, etc.
  • It is easier to recap key points in an article.
  • I can’t share salient points in a video very easily.

While I was initially attracted to the notion of sharing some of these thoughts in an audio format, I have decided to focus instead more on writing. This was partially informed by my back of the napkin research, but also in thinking about how we best present thoughts.

Doing Your Thinking

I recently read online (my apologies, I forget the source) an argument that social media is making us lazy: essentially, that we tend to blast out thoughts on Twitter as it is quick and easy, as opposed to sitting down and presenting a cogent articulation of a position or idea.

This resonated with me. Yesterday at a conference, Jeff Atwood shared an interesting point:

“The best way to learn is to teach.”

This is a subtle but important point. The articulation and presentation of information is not just important for the reader, but for the author as well.

While I want to share the things I have learned, I also (rather selfishly) want to get better at those things and how I articulate and evolve those ideas in the future.

As such, it became clear that blogging is the best solution for me. It provides the best user interface for me to articulate and structure my thoughts (a text editor), it is easily consumable, easily shareable, and easily searchable on Google.

So, regular readers may notice that jonobacon.org has been spruced up a little. Specifically, my blog has been tuned quite a bit to be more readable, easier to participate in, and easier to share the content with.

I am not finished with the changes, but my goal is to regularly write and share content that may be useful for my readers. You can keep up to date with new articles by following me on Twitter, Facebook, or Google+. As with life, the cadence of this will vary, but I hope you will hop into the articles, share your thoughts and join the conversation.

by Jono Bacon at November 13, 2015 10:02 PM

November 11, 2015

Elizabeth Krumbach

Grace Hopper Celebration of Women in Computing 2015

After a quick trip to Las Vegas in October, I was off to Houston for my first Grace Hopper Celebration of Women in Computing! I sometimes struggle with women in computing events, and as a preamble to this post I wrote about it here. But I was excited to finally attend a Grace Hopper conference and honored to have my talk about the Continuous Integration system we use in the OpenStack project accepted in the open source track.

Since I’m an ops person and not a programmer, the agenda I was looking at leaned very much toward the keynotes, plenaries and open source, with a few talks just for fun thrown in. Internet of Things! Astronauts!

Wednesday kicked off with a series of keynotes. The introduction by Telle Whitney, CEO and President of the Anita Borg Institute for Women and Technology (ABI), included statistics about attendees, of which there were 12,000 from over 60 countries and over 1,000 organizations. She then introduced Alexander L. Wolf, president of the Association for Computing Machinery (ACM), who encouraged attendees to join professional organizations like the ACM in order to bring voice to our profession. I’ve been a member since 2007.

The big keynote for the morning was by Hilary Mason, a data scientist and Founder at Fast Forward Labs. She dove into the pace of computer technology, the progress of Artificial Intelligence and how data is driving an increasing amount of innovation. She explained that various mechanisms that make data available, along with the drop in computing prices, have helped drive this, and that what makes a machine intelligence technology interesting tends to follow four steps:

  1. A theoretical breakthrough
  2. A change in economics
  3. A capability to build a commodity
  4. New data is available

Slides from her talk are on slideshare.

From the keynotes I went to the first series of open source presentations which began with a talk by Jen Wike Huger on contributing to opensource.com. As a contributor already, it was fun to hear her talk and I was particularly delighted to see her highlight three of my favorite stories as examples of how telling your open source story can make a difference:


Jen Wike Huger on opensource.com

The next presentation was by Robin J. Goldstone, a Solutions Architect at Lawrence Livermore National Laboratory (LLNL) where they work on supercomputers! Her talk centered around the extreme growth of open source in the High Performance Computing (HPC) space by giving a bit of a history of supercomputing at LLNL and beyond, and how the introduction of open source into their ecosystem has changed things. She talked about their work on the CHAOS Linux clustering operating system that they’ve developed which allows them to make changes without consulting a vendor, many of whom aren’t authorized to access the data stored on the clusters anyway. It was fascinating to hear her speak to how it’s been working in production and she expressed excitement about the ability to share their work with other organizations.

From there, it was great to listen to Sreeranjani (Jini) Ramprakash of Argonne National Laboratory where they’re using Jenkins, the open source Continuous Integration system, in their supercomputer infrastructure. Most of her talk centered less around the nuts and bolts of how they’re using it, and more on why they chose to adopt it, including the importance of testing changes in a distributed team (can’t just tap on a shoulder to ask why and when something broke), richer reports when something does break and shorter debug time since all changes are tested. When talking about Jenkins specifically, we learned that it had already been used elsewhere in their organization, so adopting that hosted version was at first a no-brainer, but they eventually learned that they really had to run their own. The low bar created by it being open source software allowed them to run it themselves without too much of an issue.

That afternoon I attended the plenaries, kicked off by Clara Shih, the CEO and Founder at Hearsay Social. Her talk began with how involvement with the Grace Hopper conference and ABI helped prepare her early for success in her career, and quickly launched into 5 major points for working in and succeeding with technology:

  1. Listen carefully (to everyone: customers, employees)
  2. Be OK with being different (and you have to be yourself to truly be accepted, don’t fake it)
  3. Cherish relationships above all else (both personal and professional, especially as a minority)
  4. There is no failure, only learning
  5. Who? If not us. When? If not now. (And do your part to encourage other women in tech)

Clara Shih keynote

Her plenary was followed by a surprising one from Blake Irving, the CEO of GoDaddy. GoDaddy has a poor reputation when it comes to women, particularly with respect to their objectifying ad campaigns that made the company famous early on. In his talk, I felt a genuine commitment from him personally and the company to change this, from the retirement of those advertisements to making sure the female employees within GoDaddy are being paid fairly. Reflecting on company culture, he also said they wanted advertising to reflect the passion and great work that happens within the company, in spite of poor public opinion due to their ads. They’re taking diversity seriously and he shared various statistics about demographics and pay within the company to show gender pay parity in various roles, which is a step I hadn’t seen a company take before (there are diversity stats from several companies, but not very detailed or broken up by role in a useful way). The major take-away was that if a company with a reputation like GoDaddy can work toward turning things around, anyone can.

The final plenary of Wednesday was from Megan Smith, the Chief Technology Officer of the United States. The office was created by President Obama in 2009 and Smith is the third person to hold the post, and the first woman. Her talk covered the efforts being made by the US government to embrace the digital world, from White House Tech Meetups to the TechHire Initiative, White House Demo Days and Maker work. Even more exciting, she brought a whole crew of women from various areas of the government to speak on various projects. One spoke on simplifying access to Veteran Medical records through digital access, another on healthcare more broadly as they worked to fix HealthCare.gov after it was launched. A technology-driven modernization effort for the immigration system was particularly memorable: work to make it easier and cheaper for potential citizens to get the resources they need without the mountain of confusing and expensive forms that they often have to go through today to become citizens and bring family members to the United States. It was also interesting to learn about the open data initiatives from data.gov as well as how citizens can help bring more records online through the Citizen Archivist program. I was also really impressed with their commitment to open source throughout all of their talks. It seems obvious to me that any software developed with my tax dollars should be made available to me in an open source manner, but it’s only recently that this has actually started to gain traction, and this administration seems committed to making sure we continue to go in this direction.


Technologists in US Government!

A quick walk through the opening of the career fair and exposition hall finished up my day. It was a pretty overwhelming space. So many companies seeking to interview and hire from the incredible pool of talent that GHC brings together.

My Thursday was very busy. It began with an HP networking breakfast, where the 70 or so people from HP who came to the conference as attendees (not booth and interview staff) could meet up. I got a surprise at the breakfast by being passed the microphone after several VPs spoke, as I was one of the two speakers from HP attending the conference and the only one at the breakfast, no pressure! From there, it was off to the keynotes.

I really enjoyed hearing from Hadi Partovi, the Founder of Code.org, about his take on the importance of humans being taught about coding in the world today and how the work of Code.org is helping to make that happen on a massive scale. The statistics on growing demand versus the slower creation of computer science professionals were grim and he stressed the importance of computer science as a component of primary education. It was impressive to learn about some of the Code.org statistics from their mere 2 years of existence; going into their third year they’re hoping to reach hundreds of thousands more students.

It was a real pleasure to hear from Susan Wojcicki, the CEO of YouTube. She touched upon several important topics, including myths in computing that keep school age girls (even her own daughter!) away: Computer Science is boring, girls aren’t good at it and discomfort with associating with the stereotypical people who are portrayed in the field. She talked about the trouble with retention of women in CS, citing improvements to paid maternity leave as a huge factor in helping retention at Google.

Following the keynotes I attended the next round of open source sessions. Becka Morgan, a professor at Western Oregon University, began the morning with a very interesting talk about building mentorship programs for her students in partnership with open source projects. I learned that she initially had worked with the local Ubuntu team, even having some of her students attend an Ubuntu Global Jam in Portland, an experience she hoped to repeat in the future. She walked us through various iterations of her class structure and different open source projects she worked with in order to get the right balance of structure, support from project volunteers and clear expectations on all sides. It was great to hear about how she was then able to take her work and merge it with that of others in POSSE (Professors’ Open Source Summer Experience) so they could help build programs and curriculum together. Key take-aways for success in her classroom included:

  • Make sure students have concrete tasks to work on
  • Find a way to interact with open source project participants in the classroom, whether they visit or attend virtually through a video call or similar (Google Hangouts were often used)
  • Tell students to ask specific, concrete questions when they need help, never assume the mentors will stop their work to reach out and ask them if they need help (they’re busy, and often doing the mentoring as a volunteer!)
  • Seek out community opportunities for students to attend, like the Ubuntu Global Jam

Her talk was followed by one from Gina Likins of Red Hat, who talked about her career experience moving from a very proprietary company to one that is open and actually develops open source software. As someone who is familiar with structures of open organizations from my own work and open source experiences it was mostly information I was familiar with, but one interesting point she made was that in some companies people hoard information in an effort to make sure they have an edge over other teams. This stifles innovation and is very short-sighted; placing more importance on sharing knowledge so that everyone can grow is a valuable cultural trait for an organization. Billie Rinaldi followed Gina’s talk with one about working on an Apache Software Foundation project, sharing how a solid structure and clear paths for getting involved are important to open source projects and something that the foundation supports.

Prior to a partner lunch that I was invited to, I went to a final morning talk by Dr. Nadya Fouad, who published the famous Leaning in, but Getting Pushed Back (and Out) study, which cited culture, including the failure to provide clear and fair advancement opportunities, as a key reason women leave engineering. I’d read articles about her work, as it was widely covered when it first came out as one of the best studies covering the retention problem. Of particular note was that about $3.4 billion in US federal funds are spent on the engineering “pipeline problem” each year, and very little attention is paid to the near 50% of women who complete an engineering degree and don’t continue with an engineering career. I’ve known that culture was to blame for some time, so it was satisfying to see someone do a study on the topic to gather data beyond the unscientific anecdotal stories I had piled up in my own experience with female friends and acquaintances who have left or lost their passion for the tech industry. She helpfully outlined things that were indicators for a successful career path, of course noting that these things are good for everyone: good workload management, a psychologically safe environment, supportive leadership, a promotion path, equitable development opportunities and an actively supported work/life balance policy.


Dr. Nadya Fouad on retention in engineering

After lunch began the trio of open source presentations that included my own! The afternoon began with a talk by Irene Ros on Best Practices for Releasing and Choosing Open Source Software. This talk gave her an opportunity to attack evaluation of open source from both sides: what to look for in a project before adopting it and what you need to provide users and a community before you release your own open source project – predictably these are the same thing! She stressed the importance of using a revision control system, writing documentation, version tracking (see semver.org for a popular method), publishing release notes and changelogs, proper licensing, support and issue tracking and in general paying attention to feedback and needs of the community. I loved her presentation because it included a lot of valuable information packed into her short talk slot, not all of which is obvious to new projects.

My talk came next, where I talked about our Open Source Continuous Integration System. In 20 minutes I gave a whirlwind tour of our CI system, including our custom components (Zuul, Nodepool, Jenkins Job Builder) along with the most popular open source offerings for code review (Gerrit) and CI (Jenkins). I included a lot of links in my talk so that folks who were interested could dive deeper into whichever component from my quick overview interested them most. I was delighted to conclude my talk with several minutes of engaging Q&A before turning the microphone over to my OpenStack colleague Anne Gentle. Slides from my talk are here: 2015-ghc-Open-Source-Continuous-Integration-System.pdf


Thanks to Terri Oda for the photo! (source)

Anne’s talk was a great one on documentation. She stressed the importance of treating open source documentation just like you would code. Use revision control, track versions, make the format they are written in simple (like reStructuredText) and use the same tooling as developers so it’s easy for developers to contribute to documentation. She also spoke about the automated test and build tools we use in OpenStack (part of our CI system, yay!) and how they help the team continue to publish quickly and stay on top of the progress of documentation. It was also worth noting that writing documentation in OpenStack grants one Active Technical Contributor status, which gives you prestige in the community as a contributor (just like a developer) and a free ticket to the OpenStack summits that happen twice a year. That’s how documentation writers should be treated!

Since our trio of talks followed each other immediately, I spent the break after Anne’s talk following up with folks in the audience who were interested in learning more about our CI system and generally geeking out about various components. It was a whole lot of fun to chat with other Gerrit operators and to talk through the challenges that our Nodepool system solves when it comes to test node handling. I had a lot of fun, and it’s always great when these conversations follow me for the rest of the conference like they did at GHC.

The next session I attended was the Women Lead in Open Source panel. A fantastic lineup of women in open source explored several of the popular open source organizations and opportunities for women and others, including Systers, Google Summer of Code, Outreachy and OpenHatch. The panel then spent a lot of time answering great questions about breaking into open source, how to select a first project and searching for ways to contribute based on various skills, like knowledge of specific programming languages.

The plenary that wrapped up our day was a popular one by Sheryl Sandberg, which caused the main keynote and plenary room to fill up quickly. For all the criticism, I found myself to be the target audience of her book Lean In and found tremendous value in not holding back my career while waiting for other parts of my life to happen (or not). Various topics were covered in her plenary, from salary parity across genders and the related topic of negotiation, bringing back the word “feminism” and banning the word “bossy”, equal marriages, unconscious bias and the much too gradual progress on C-suite gender parity. She has placed a lot of work and hope into Lean In Circles and how they help build and grow the necessary professional networks for women. She advised us to undertake a positive mindfulness exercise before bed, writing down three things you did well during the day (“even if it’s something simple”). She concluded strongly by telling us to stay in technology, because these are the best jobs out there.

With the plenary concluded, I went back to my hotel to “rest for a few minutes before the evening events” and promptly fell asleep for 2.5 hours. I guess I had some sleep debt! In spite of missing out on some fun evening events, it was probably a wise move to just take it easy that evening.

Friday’s keynote could be summed up concisely with one word: Robots! Valerie Fenwick wrote a great post about the keynote by Manuela Veloso of Carnegie Mellon University here: GHC15: Keynote: Robotics as a Part of Society.

As we shuffled out of the last keynote, I was on my way back to the open source track for a star-studded panel (ok, two of them are my friends, too!) of brilliant security experts. The premise of the panel was exploring some of the recent high profile open source vulnerabilities and the role that companies now play in making sure this widely used tooling is safe, a task that all of the panelists work on. I found a lot of value in hearing from security experts what struggles they have when interacting with open source projects, like how to be diplomatic about reporting vulnerabilities and figuring out how to do it securely when a mechanism isn’t in place. They explored the fact that most open source projects simply don’t have security in mind, and they suggested some simple tooling and tips that can be used to evaluate security of various types of software, from Nmap and AFL to the OWASP Top 10 of 2013, which is a rundown of common security issues with software, many of which are still legitimate today, and the Mozilla wiki, which has a surprising amount of security information (I knew about it from their SSL pages, lots of good information there). They also recommended the book The Art of Software Security Assessment and concluded by mentioning that learning about security is a valuable skill, there are a lot of jobs!

I had a bit of fun after the security panel and went to one of the much larger rooms to attend a panel about Data Science at NASA. Space is cool, and astronaut Catherine Coleman was on the panel to talk about her work on the International Space Station (ISS)! It was also really fun to see photos of several women she’s worked with on the ISS and in the program, as female *nauts are still a minority (though there are a lot of female technologists working at NASA). I enjoyed hearing her talk about knowing your strengths and those of the people you’re working with, since your life could depend upon it; teams are vital at NASA. Annette Moore, CIO of the Johnson Space Center, then spoke about the incredible amount of data being sent from the ISS, from the results of experiments to the more human communications that the astronauts need to keep in contact with those of us back on Earth. I have to admit that it did sound pretty cool to be the leader of the team providing IT support for a space station. Dorothy Rasco, CFO at Johnson Space Center, then spoke about some of the challenges of a manned mission to Mars, including handling the larger, more protected lander required, making sure it gets there fast enough, and various questions about living in a different atmosphere and food (most doesn’t have a shelf life beyond 3 years, not long enough!). Panel moderator and CTO-IT of NASA Deborah Diaz then took time to talk more broadly about the public policy of data at NASA, which means some interesting and ongoing big data challenges around making sure it’s all made available effectively. She shared the link to open.nasa.gov that has various projects for the public, including thousands of data sets, open source code repositories and almost 50 APIs to work with. Very cool stuff! She also touched upon managing wearables (our new “Internet of Things”) that astronauts have been wearing for years, and how to manage all the devices on a technological and practical level, to record and store important scientific data collected, all without overburdening those wearing them.

Later in the afternoon I went to a fun Internet of Things workshop where we split into groups and tried to brainstorm an IoT product while paying careful attention to security and privacy around identity and authentication mechanisms for these devices. Our team invented a smart pillow. I think we were all getting pretty tired from conferencing!

The conference concluded with an inspiring talk from Miral Kotb, the Founder of iLuminate. A brilliant software engineer, she followed her passions for dance and technology to dream up and build her company, and I loved hearing about it. I’d never heard of iLuminate before, but for the other uninitiated: their performances are done in the dark with full body suits that use a whole bunch of lights synced up with their proprietary hardware and software to give the audience a light, music and dance show. Following her talk she brought out the dancers to close the conference with a show, nice!

I met up with some friends and acquaintances for dinner before going over to the closing party, which was held in the Houston Astros ballpark! I had fun, and made it back to the hotel around 10:30 so I could collect my bags and make my move to a hotel closer to the airport so I could just take a quick shuttle in the early AM to catch my flight to Tokyo the next day.

More photos from the conference and after party here: https://www.flickr.com/photos/pleia2/albums/72157659453000380

It was quite a conference, I’m thankful that I was able to participate. The venue in Houston was somewhat disruptively under construction, but it’s otherwise a great space and it was great to learn that they’ll be holding the conference there again next year. I’d encourage women in tech I know to go if they’re feeling isolated or looking for tips on succeeding. If you’re thinking of submitting a talk, I’d also be happy to proof and make recommendations about your proposal, as it’s one of the more complicated submission processes I’ve been through and competition for speaking slots is strong.

by pleia2 at November 11, 2015 02:09 AM

November 10, 2015

kdub

Small Run Fab Services

For quite a while I’ve been just using protoboards, or trying toner transfer to make pcbs, with limited success.

A botched toner transfer attempt

A hackaday article (Why are you still making PCB’s?) turned me on to low-cost prototyping pcb runs. Cutting my own boards via toner transfer had lots of drawbacks:

  • I’d botch my transfer (as seen above), and have to clean the board and start over again. Chemicals are no fun either.
  • Drilling is tedious.
  • I never really got to the point where I’d say it was easy to do a one-sided board.
  • I would always route one-sided boards, as I never got good enough to want to try a 2 layer board.
  • There was no solder mask layer, so you’d get oxidation, and have to be very careful while soldering.
  • Adding silkscreen was just not worth the effort.

I seemed to remember trying to find small run services like this a while ago, but coming up short. I might be coming late to the party of small-run pcb fabs, but I was excited to find services like OSHpark are out there. They’ll cut you three 2-layer pcbs with all the fixins’ for $5/square inch! This is a much nicer board and probably at a cheaper cost than I am able to do myself.

Here’s the same board design (rerouted for 2layer) as the botched one above:

The same board design as above, uploaded into OSHpark

You can upload an Eagle BRD file directly, or submit the normal gerber files. Once uploaded, you can easily share the project on OSHpark. (this project’s download). You have to wait 12 days for the boards, but if I’m being honest with myself, this is a quicker turnaround time than my basement-fab could do! I’m sure I’ll be cutting my own boards way less in the future.

by Kevin at November 10, 2015 02:45 PM

November 09, 2015

Eric Hammond

Creating An Amazon API Gateway With aws-cli For Domain Redirect

Ten commands to launch a minimal, functioning API Gateway

As of this publication date, the Amazon API Gateway is pretty new and the aws-cli interface for it is even newer. The API and aws-cli documentation at the moment is a bit rough, but this article outlines steps to create a functioning API Gateway with the aws-cli. Hopefully, this can help others who are trying to get it to work.

Goals

I regularly have a need to redirect browsers from one domain to another, whether it’s a vanity domain, legacy domain, “www” to base domain, misspelling, or other reasons.

I usually do this with an S3 bucket in website mode with a CloudFront distribution in front to support https. This works, performs well, and costs next to nothing.

Now that the Amazon API Gateway has aws-cli support, I was looking for simple projects to test out so I worked to reproduce the domain redirect. I found I can create an API Gateway that will redirect a hostname to a target URL, without any back end for the API (not even a Lambda function).

I’m not saying the API Gateway method is better than using S3 plus CloudFront for simple hostname redirection. In fact, it costs more (though still cheap), takes more commands to set up, and isn’t quite as flexible in what URL paths get redirected from the source domain. It does, however, work and may be useful as an API Gateway aws-cli example.

Assumptions

The following steps assume that you already own and have set up the source domain (to be redirected). Specifically:

  • You have already created a Route53 Hosted Zone for the source domain in your AWS account.

  • You have the source domain’s SSL key, certificate, and chain certificate in local files.

Now here are the steps for setting up the domain to redirect to another URL using the aws-cli to create an API Gateway.

1. Create an Amazon API Gateway with aws-cli

Set up the parameters for your redirection. Adjust values to suit:

base_domain=erichammond.xyz # Replace with your domain
target_url=https://twitter.com/esh # Replace with your URL

api_name=$base_domain
api_description="Redirect $base_domain to $target_url"
resource_path=/
stage_name=prod
region=us-east-1

certificate_name=$base_domain
certificate_body=$base_domain.crt
certificate_private_key=$base_domain.key
certificate_chain=$base_domain-chain.crt

Create a new API Gateway:

api_id=$(aws apigateway create-rest-api \
  --region "$region" \
  --name "$api_name" \
  --description "$api_description" \
  --output text \
  --query 'id')
echo api_id=$api_id

Get the resource id of the root path (/):

resource_id=$(aws apigateway get-resources \
  --region "$region" \
  --rest-api-id "$api_id" \
  --output text \
  --query 'items[?path==`'$resource_path'`].[id]')
echo resource_id=$resource_id

Create a GET method on the root resource:

aws apigateway put-method \
  --region "$region" \
  --rest-api-id "$api_id" \
  --resource-id "$resource_id" \
  --http-method GET \
  --authorization-type NONE \
  --no-api-key-required \
  --request-parameters '{}'

Add a Method Response for status 301 with a required Location HTTP header:

aws apigateway put-method-response \
  --region "$region" \
  --rest-api-id "$api_id" \
  --resource-id "$resource_id" \
  --http-method "GET" \
  --status-code 301 \
  --response-models '{"application/json":"Empty"}' \
  --response-parameters '{"method.response.header.Location":true}'

Set the GET method integration to MOCK with a default 301 status code. By using a mock integration, we don’t need a back end.

aws apigateway put-integration \
  --region "$region" \
  --rest-api-id "$api_id" \
  --resource-id "$resource_id" \
  --http-method GET \
  --type MOCK \
  --request-templates '{"application/json":"{\"statusCode\": 301}"}'

Add an Integration Response for GET method status 301. Set the Location header to the redirect target URL.

aws apigateway put-integration-response \
  --region "$region" \
  --rest-api-id "$api_id" \
  --resource-id "$resource_id" \
  --http-method GET \
  --status-code 301 \
  --response-templates '{"application/json":"redirect"}' \
  --response-parameters \
    '{"method.response.header.Location":"'"'$target_url'"'"}'

2. Create API Gateway Deployment and Stage using aws-cli

The deployment and its first stage are created with one command:

deployment_id=$(aws apigateway create-deployment \
  --region "$region" \
  --rest-api-id "$api_id" \
  --description "$api_name deployment" \
  --stage-name "$stage_name" \
  --stage-description "$api_name $stage_name" \
  --no-cache-cluster-enabled \
  --output text \
  --query 'id')
echo deployment_id=$deployment_id

If you want to add more stages for the deployment, you can do it with the create-stage sub-command.
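
For example, a hypothetical second stage pointing at the same deployment might look like this (the stage name and description here are my own placeholders, not from the original steps):

aws apigateway create-stage \
  --region "$region" \
  --rest-api-id "$api_id" \
  --deployment-id "$deployment_id" \
  --stage-name test \
  --description "$api_name test stage"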

At this point, we can actually test the redirect using the endpoint URL that is printed by this command:

echo "https://$api_id.execute-api.$region.amazonaws.com/$stage_name$resource_path"
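
As a quick sanity check (my addition), you can request that endpoint with curl and confirm the response is a 301 with a Location header pointing at the target URL:

# Dump only the response headers; expect "HTTP/1.1 301" and "Location: $target_url"
curl -s -D - -o /dev/null \
  "https://$api_id.execute-api.$region.amazonaws.com/$stage_name$resource_path"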

3. Create API Gateway Domain Name using aws-cli

The API Gateway Domain Name seems to be a CloudFront distribution with an SSL Certificate, though it won’t show up in your normal CloudFront queries in the AWS account.

distribution_domain=$(aws apigateway create-domain-name \
  --region "$region" \
  --domain-name "$base_domain" \
  --certificate-name "$certificate_name" \
  --certificate-body "file://$certificate_body" \
  --certificate-private-key "file://$certificate_private_key" \
  --certificate-chain "file://$certificate_chain" \
  --output text \
  --query distributionDomainName)
echo distribution_domain=$distribution_domain

aws apigateway create-base-path-mapping \
  --region "$region" \
  --rest-api-id "$api_id" \
  --domain-name "$base_domain" \
  --stage "$stage_name"

4. Set up DNS

All that’s left is to update Route53 so that we can use our preferred hostname for the CloudFront distribution in front of the API Gateway. You can do this with your own DNS if you aren’t managing the domain’s DNS in Route53.

Get the hosted zone id for the source domain:

hosted_zone_id=$(
  aws route53 list-hosted-zones \
    --region "$region" \
    --output text \
    --query 'HostedZones[?Name==`'$base_domain'.`].Id'
)
hosted_zone_id=${hosted_zone_id#/hostedzone/}
echo hosted_zone_id=$hosted_zone_id

Add an Alias record for the source domain, pointing to the CloudFront distribution associated with the API Gateway Domain Name.

cloudfront_hosted_zone_id=Z2FDTNDATAQYW2
change_id=$(aws route53 change-resource-record-sets \
  --region "$region" \
  --hosted-zone-id $hosted_zone_id \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'$base_domain'",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "'$cloudfront_hosted_zone_id'",
          "DNSName": "'$distribution_domain'",
          "EvaluateTargetHealth": false
  }}}]}' \
  --output text \
  --query 'ChangeInfo.Id')
echo change_id=$change_id

This could be a CNAME if you are setting up a hostname that is not a bare apex domain, but the Alias approach works in all Route53 cases.

Once this is all done, you may still need to wait 10-20 minutes while the CloudFront distribution is deployed to all edge nodes, and for the Route53 updates to complete.
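
If you’d rather poll than guess, one option (again, my addition rather than part of the original steps) is to wait for the Route53 change to report INSYNC; note that the CloudFront distribution itself may still take a while longer to finish deploying:

# Wait for the Route53 record change to propagate (status INSYNC)
while [ "$(aws route53 get-change \
             --id "$change_id" \
             --output text \
             --query 'ChangeInfo.Status')" != "INSYNC" ]; do
  sleep 10
done
echo "Route53 change $change_id is INSYNC"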

Eventually, however, hitting the source domain in your browser should automatically redirect to the target URL. Here is my example in action:

EricHammond.xyz

Using the above as a starting point, we can now expand into more advanced setups with the API Gateway and the aws-cli.

Original article and comments: https://alestic.com/2015/11/amazon-api-gateway-aws-cli-redirect/

November 09, 2015 10:10 AM

November 08, 2015

Elizabeth Krumbach

Preamble to Grace Hopper Celebration of Women in Computing 2015

Prior to the OpenStack Summit last week, I attended the Grace Hopper Celebration of Women in Computing in Houston.

But it’s important to recognize a few things before I write about my experience at the conference in a subsequent post.

I have experienced sexism and even serious threats throughout my work in open source software. This became particularly acute as I worked to increase my network of female peers and boost participation of women in open source with my work in Ubuntu Women and LinuxChix.

This is not to say open source work has been bad. The vast majority of my experiences have been positive and I’ve built life-long friendships with many of the people I’ve volunteered with over the years. My passion for open source software as a movement, a community and a career is very much intact.

I have been exceptionally fortunate in my paid technical (mostly Linux Systems Administration) career. I have been a part of organizations that have not only supported and promoted my work, but have shown a real commitment to diversity in the talent they hire. At my first junior systems administration job in Philadelphia, my boss ran a small business where he constantly defied the technical stereotypes regarding race, age and gender with his hires, allowing me to work with a small but diverse group of people. In my work now at Hewlett Packard Enterprise, I’m delighted to work with many brilliant women, from my management chain to my peers, as well as people from all over the world.

My experience was not just luck. I’ve been very fortunate to have the career flexibility and financial stability, through a working partner, to select jobs that fit my criteria for a satisfying work environment. When I needed to be frugal, living on my own in a small, inexpensive apartment far from the city on a very limited budget, I made it through. Early in my career, when I couldn’t find the permanent work I wanted, I called up a temp agency and did everything from data entry to accounting work. I also spent time working as a technical consultant; at one job I did back-end web development, at another I helped make choices around enterprise open source platforms for a pharmaceutical company. While there certainly were micro-aggressions to deal with (clients regularly asking to speak with a “real developer” or directing design-oriented questions to me rather than my male designer colleague), my passion for technology and the work I was doing kept me above water through these routine frustrations.

When it comes to succeeding in my technical career I’ve also had the benefit of being a pretty hard core nerd. Every summer in high school I worked odd neighborhood jobs to save up money to buy computer parts. Extended family members gave us our first computer in 1991 (I was 10) and the only gaming console I ever owned as a youth (the NES), and when we needed a better computer, my grandparents gave us a 486 for Christmas in 1994 (I was 13). Subsequent computers I bought with my precious summer work savings from classified ads, dragging my poor mother to the doorstep of more than one unusual fellow who was selling some old computer equipment. Both my parents had a love for SciFi, my father making the Lord of the Rings series a more familiar story than those from the Christian Bible, and my mother with her love of terribly amusing giant monster horror movies that I still hold close to this day. One look at my domain name here shows that I also grew up with the Star Wars trilogy. I’ve been playing video games since we got that first NES and I still carry around a Nintendo DS pretty much everywhere I go. I’ve participated in Magic: The Gathering tournaments. I wear geek t-shirts and never learned how to put on make-up. I have a passion for beer. I fit in with the “guys” in tech.

So far, I’m one of the women in tech who has stayed.

In spite of my work trying to get more women involved, like the two mentorship programs I participated in this year for women, I’ve spent a lot of time these past few years actively ignoring some of the bigger issues regarding women in tech. I love technology. I love open source. I’ve built my life and hobbies around my technical work and expertise. When I leave home and volunteer, it’s not spooning soup into bowls at a soup kitchen; it’s using my technical skills to deploy computers to disadvantaged communities. Trying to ignore the issues that most women face has been a survival tactic. It’s depressing and discouraging to learn how far behind we still are with pay, career advancement and both overt and subtle sexism in the workplace. I know that people (not just women!) who aren’t geeky or don’t drink like me are often ostracized or feel like they have to fake it to succeed, but I’ve pushed that aside to succeed and contribute in the way I have found most valuable to my career and my community.

At the Grace Hopper Celebration of Women in Computing there was a lot of focus on all the things I’ve tried to ignore: discrimination in the form of lower pay for women, fewer opportunities for advancement, maternity penalties to the careers of women and a lack of paternity leave for men in the US, praise for “cowboy” computing (jumping in at 3AM to save the day rather than spending time making sure things are stable and 3AM saves aren’t ever required), and more direct discrimination. The conference did an exceptional job of addressing how we can handle these things, whether it be strategies in the workplace or seeking out a new job when things can’t be fixed. But it did depress and exhaust me. I couldn’t ignore the issues anymore during the three days that I attended.

It’s a very valuable conference and I’m really proud that I had the opportunity to speak there. I have the deepest respect and gratitude for those who run the conference and make efforts every day to improve our industry for women and minorities. My next post will be my typical conference summary of what I learned while there and the opportunities that presented themselves. Just keep this post in mind as you make your way through the next one.

by pleia2 at November 08, 2015 03:41 AM

November 06, 2015

Elizabeth Krumbach

A werewolf SVG and a xerus

Ubuntu 15.10, code-named Wily Werewolf, was released last month. With this release came requests for the SVG file used in all the release information. Thanks to a ping from +HEXcube on G+ I was reminded to reach out to Tom Macfarlane of the Canonical Design Team, and he quickly sent it over!

It has been added to the Animal SVGs section of the official artwork page on the Ubuntu wiki.

And following Mark Shuttleworth’s announcement that the next release is code-named Xenial Xerus, I added a xerus to my collection of critters to bring along to Ubuntu Hours and other events.

Xerus

Finally, in case you were wondering how Xerus is pronounced (I was!), dictionary.com says: zeer-uh s.

by pleia2 at November 06, 2015 07:49 PM

Eric Hammond

Pause/Resume AWS Lambda Reading Kinesis Stream

use the aws-cli to suspend an AWS Lambda function processing an Amazon Kinesis stream, then resume it again

At Campus Explorer we are using AWS Lambda extensively, with sources including Kinesis, DynamoDB, S3, SNS, CloudFormation, API Gateway, custom events, and schedules.

This week, Steve Caldwell (CTO and prolific developer) encountered a situation which required pausing an AWS Lambda function with a Kinesis stream source, and later resuming it, preferably from the same point at which it had been reading in each Kinesis shard.

We brainstormed a half dozen different ways to accomplish this with varying levels of difficulty, varying levels of cost, and varying levels of not-quite-what-we-wanted-ness.

A few hours later, Steve shared that he had discovered the answer (and suggested I pass it on to you).

Buried in the AWS Lambda documentation for update-event-source-mapping in the aws-cli (and the UpdateEventSourceMapping in the API) is the mention of --enabled and --no-enabled with this description:

Specifies whether AWS Lambda should actively poll the stream or not. If disabled, AWS Lambda will not poll the stream.

As it turns out, this does exactly what we need. These options can be specified to change whether processing is enabled without changing anything else about the AWS Lambda function or how it reads from the stream.

The big benefit that isn’t documented (but verified by Amazon) is that this saves the place in each Kinesis shard. On resume, AWS Lambda continues reading from the same shard iterators without missing or duplicating records in the stream.

Commands

To pause an AWS Lambda function reading an Amazon Kinesis stream:

region=us-east-1
event_source_mapping_uuid=... # (see below)

aws lambda update-event-source-mapping \
  --region "$region" \
  --uuid "$event_source_mapping_uuid" \
  --no-enabled

And to resume the AWS Lambda function right where it was suspended without losing place in any of the Kinesis shards:

aws lambda update-event-source-mapping \
  --region "$region" \
  --uuid "$event_source_mapping_uuid" \
  --enabled

You can find the current state of the event source mapping (e.g., whether it is enabled/unpaused or disabled/paused) with this command:

aws lambda get-event-source-mapping \
  --region "$region" \
  --uuid "$event_source_mapping_uuid" \
  --output text \
  --query 'State'

Here are the possible states: Creating, Enabling, Enabled, Disabling, Disabled, Updating, Deleting. I’m not sure how long it can spend in the Disabling state before transitioning to the fully Disabled state, but you might want to monitor the state and wait if you want to make sure it is fully paused before taking some other action.
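
Here is one way to do that monitoring, sketched as a simple polling loop (my addition, not from Steve’s notes):

# Poll until the event source mapping reports the Disabled state
while [ "$(aws lambda get-event-source-mapping \
             --region "$region" \
             --uuid "$event_source_mapping_uuid" \
             --output text \
             --query 'State')" != "Disabled" ]; do
  sleep 5
done
echo "event source mapping $event_source_mapping_uuid is Disabled"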

If you’re not sure what $event_source_mapping_uuid should be set to in all the above commands, keep reading.

Bonus

Here’s an aws-cli incantation that will return the event source mapping UUID given a Kinesis stream and connected AWS Lambda function.

source_arn=arn:aws:kinesis:us-east-1:ACCOUNTID:stream/STREAMNAME
function_name=FUNCTIONNAME

event_source_mapping_uuid=$(
  aws lambda list-event-source-mappings \
    --region "$region" \
    --function-name "$function_name" \
    --output text \
    --query 'EventSourceMappings[?EventSourceArn==`'$source_arn'`].UUID')
echo event_source_mapping_uuid=$event_source_mapping_uuid

If your AWS Lambda function has multiple Kinesis event sources, you will need to pause each one of them separately.
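
A rough sketch of that loop (my addition; as written it disables every event source mapping attached to the function, so adjust the query if you only want the Kinesis ones):

# Disable each event source mapping attached to the function
for uuid in $(aws lambda list-event-source-mappings \
                --region "$region" \
                --function-name "$function_name" \
                --output text \
                --query 'EventSourceMappings[].UUID'); do
  aws lambda update-event-source-mapping \
    --region "$region" \
    --uuid "$uuid" \
    --no-enabled
done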

Other Event Sources

The same process described above should be usable to pause/resume an AWS Lambda function reading from a DynamoDB Stream, though I have not tested it.

It is not currently possible to pause and resume other types of AWS Lambda event sources (e.g., S3, SNS) without missing events. However, if pause/resume is something you’d like to make easy for those sources, you could use AWS Lambda, the glue of AWS.

For example, suppose you currently have events flowing like this:

S3 -> SNS -> Lambda

and you want to be able to pause the Lambda function, without losing S3 events.

Insert a trivial new Lambda(pipe) function that reposts the S3/SNS events to a new Kinesis stream like so:

S3 -> SNS -> Lambda(pipe) -> Kinesis -> Lambda

and now you can pause the last Kinesis->Lambda mapping while saving S3/SNS events in the Kinesis stream for up to 7 days, then resume where you left off.
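
Note that a Kinesis stream only retains records for 24 hours by default; to get the full 7-day window mentioned above, you would need to extend the retention period yourself, along these lines (the stream name is a placeholder):

# Extend the Kinesis stream retention from the default 24 hours to 7 days
aws kinesis increase-stream-retention-period \
  --region "$region" \
  --stream-name STREAMNAME \
  --retention-period-hours 168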

I still like my “pause Lambda” brainstorming idea: update the AWS Lambda function code to simply sleep forever, triggering a timeout error after 5 minutes and causing the Kinesis/Lambda framework to retry the function call with the same data over and over until we are ready to resume by uploading the real code again. Steve’s discovery, however, is going to end up being somewhat simpler, safer, and cheaper.

Original article and comments: https://alestic.com/2015/11/aws-lambda-kinesis-pause-resume/

November 06, 2015 12:00 AM

November 02, 2015

Eric Hammond

Alestic Git Sunset

retiring “Git with gitolite by Alestic” on AWS Marketplace

Back in 2011 when the AWS Marketplace launched, Amazon was interested in having some examples of open source software listed in the marketplace, so I created and published Git with gitolite by Alestic.

This was a free AWS Marketplace product that endeavored to simplify the process of launching an EC2 instance running Git for private repositories, with ssh access managed through the open source gitolite software.

Though maintaining releases of this product has not been overly burdensome, I am planning to discontinue this work and spend time on other projects that would likely be more beneficial to the community.

Current Plan

Unless I receive some strong and convincing feedback from users about why this product’s life should be extended, I currently plan to ask Amazon to sunset Git with gitolite by Alestic in the coming months.

When this happens, AWS users will not be able to subscribe and launch new instances of the product, unless they already had an active AWS Marketplace subscription for it.

Alternatives

Folks who want to use private Git repositories have a number of options:

  • Amazon has released CodeCommit, “a fully-managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories”.

  • The AWS Marketplace has other Git related products, some of them free, in the Source Control software section.

  • At the bottom of the original Alestic Git page, I have always listed a number of services that will host private Git repositories for a fee. The obvious and most popular choice is GitHub.

  • The code I use to build the Git with gitolite AMI is open source, and publicly available on GitHub. You are welcome to use and adapt this to build your own updated AMI.

Existing Customers

AWS Marketplace customers who currently have a subscription to Git with gitolite by Alestic may continue running the product and should be able to start new instances of it if needed.

Note, however, that the AMIs will not be updated, and Ubuntu LTS operating systems do eventually reach end of life, after which they no longer receive security updates.

In the 4.5 years this product has been publicly available, I think one person asked for help (a client ssh issue), but I’ll continue to be available if there are issues running the AMI itself.

Transfer of Control

If you already host software on the AWS Marketplace, and you would be willing to assume maintenance of the Git with gitolite product, please get in touch with me to discuss a possible transition.

Original article and comments: https://alestic.com/2015/11/alestic-git-sunset/

November 02, 2015 09:42 AM