Elizabeth Krumbach Joseph's public journal about Linux, sysadmining, beer, travel, pink gadgets and her life in the city where little cable cars climb halfway to the stars.


POSSCON 2015

Mon, 2015-04-20 18:07

This past week I had the pleasure of attending POSSCON in Columbia, the beautiful capital city of South Carolina. The event kicked off with a social at Hickory Tavern, which I managed to attend by tolerating a tight connection in Charlotte. It all worked out, and in spite of generally being really shy at these kinds of socials, I found some folks I knew and had a good time. Late in the evening several of us even had the opportunity to meet the Mayor of Columbia, who had come down to the event, and talk with him about our work and the importance of open source in the economy today. It’s really great to see that kind of support for open source in a city.

The next morning the conference proper kicked off. Organizer Todd Lewis opened the event and quickly handed things off to Lonnie Emard, the President of IT-oLogy. IT-oLogy is a non-profit that promotes initial and continued learning in technology through events targeting everyone from children in grade school to professionals seeking to extend their skill sets; there’s more on their About page. As a partner for POSSCON, they were a huge part of the event, even hosting the second day at their offices.

We then heard from the aforementioned Columbia Mayor, Steve Benjamin. A keynote from the city mayor was a real treat; taking time out of what I’m sure is a busy schedule showed a clear commitment to building technology in Columbia. It was really inspiring to hear him talk about the city, and with political support and work from IT-oLogy it sounds like an interesting place to build or grow a career in tech. There was then a welcome from Amy Love, the South Carolina Department of Commerce Innovation Director. Talk about local support! Go South Carolina!

The next keynote was from Andy Hunt, speaking on “A New Look at Openness.” He began with a history of how we’ve progressed as developers, from paying for licenses and compilers for proprietary development to the free and open source tool sets and licenses we work with today. He talked about how this all progresses into the Internet of Things, where we can now build physical objects and track everything from keys to pets. Today’s world for developers, he argued, is not about inventing but innovating, and he implored the audience to seek out this innovation by using the building blocks of open source as a foundation. In the idea space he proposed five steps for innovative thinking (plus an obligatory sixth):

  1. Gather raw material
  2. Work it
  3. Forget the whole thing
  4. Eureka/My that’s peculiar
  5. Refine and develop
  6. Profit!

Directly following the keynote I gave my talk on Tools for Open Source Systems Administration in the Operations/Back End track. It carried the themes of many of my previous talks on how the OpenStack Infrastructure team does systems administration in an open source way, but I refocused it directly on the tools we use to accomplish this as a geographically distributed team spread across several different companies. The talk went well and I had a great audience; huge thanks to everyone who came out for it. It was a real pleasure to talk with folks throughout the rest of the conference who had questions about specific parts of how we collaborate. Slides from my presentation are here (pdf).

The next talk in the Operations/Back End track was Converged Infrastructure with Sanoid by Jim Salter. With SANOID, he is seeking to bring enterprise-level predictability, minimal downtime and rapid recovery to small-to-medium-sized businesses. Using commodity components, from hardware through software, he’s built a system that virtualizes all services and runs on ZFS on Linux to take hourly (by default) snapshots of running systems. When something goes wrong, from a bad upgrade to a LAN infected with a virus, he has the ability to quickly roll users back to the latest snapshot. It also has a system for easily creating on- and off-site backups, and it uses Nagios for monitoring, which is how I learned about aNag, a Nagios client for Android; I’ll have to check it out! I had the opportunity to spend more time with Jim as the conference went on, including swinging by his booth for a SANOID demo. Slides from his presentation are here.

For lunch they served BBQ. I don’t really care for typical red BBQ sauce, so when I saw a yellow sauce option at the buffet I covered my chicken in that instead. I had discovered South Carolina Mustard BBQ sauce. Amazing stuff. Changed my life. I want more.

After lunch I went to see a talk by Isaac Christofferson on Assembling an Open Source Toolchain to Manage Public, Private and Hybrid Cloud Deployments. With a focus on automation, standardization and repeatability, he walked us through his usage of Packer, Vagrant and Ansible to interface with a variety of different clouds and VMs. I’m also apparently the last systems administrator alive who hadn’t heard of the site he shared, and it’s a great one.

The rooms for the talks were spread around a very walkable area in downtown Columbia. I wasn’t sure how I’d feel about this and worried it would be a problem, but with speakers staying on schedule we were afforded a full 15 minutes between talks to switch tracks. The venue I spoke at was a Hilton, and the next talk I went to was in a bar! The short walks outside between talks were quite enjoyable, and the diversity of venues was a lot of fun.

That next talk was Open Source and the Internet of Things, presented by Erica Stanley. I had the pleasure of being on a panel with Erica back in October during All Things Open (see here for a great panel recap), so it was really great running into her at this conference as well. Her talk was a deluge of information about the Internet of Things (IoT) and how we can all be makers for it! She went into detail about the technology and ideas behind all kinds of devices, and on slides 41 and 42 she gave a quick tour of hardware and software tools that can be used to build for the IoT. She also went through some of the philosophy, guidelines and challenges for IoT development. Slides from her talk are online here; the wealth of knowledge packed into that slide deck is definitely worth spending some time with if you’re interested in the topic.

The last pre-keynote talk I went to was by Tarus Balog with a Guide to the Open Source Desktop. A self-confessed former Apple fanboy, he had quite the sense of humor about his past, where “everything was white and had an apple on it,” and his move to using only open source software. As someone who has been using Linux and friends for almost a decade and a half, I wasn’t at this talk to learn about the tools available, but instead to see how a long-time Mac user could actually make the transition. It’s also interesting to me, as a member of the Ubuntu and Xubuntu projects, to see how newcomers view entrance into the world of Linux and how they evaluate and select tools. He walked the audience through the process he used to select a distro and desktop environment and then all the applications: mail, calendar, office suite and more. Of particular interest, he showed a preference for Banshee (it reminded him of old iTunes), as well as digiKam for managing photos. Accounting-wise he is still tied to QuickBooks, but he either runs it under Wine or over VNC from a Mac.

The day wound down with a keynote from Jason Hibbets. He wrote The foundation for an open source city and is a project manager for an open source community publication. His keynote was all about stories, and why it’s important to tell our open source stories. I’ve really been impressed with the site’s development over the past year (disclaimer: I’ve written for them too); they’ve managed to find hundreds of inspirational and beneficial stories of open source adoption from around the world. In this talk he highlighted a few of these, including the work of my friend Charlie Reisinger at Penn Manor and Stu Keroff with students in the Asian Penguins computer club (check out a video from them here). How exciting! The evening wrapped up with an afterparty (I enjoyed a nice Palmetto Amber Ale) and a great speakers and sponsors dinner. Huge thanks to the conference staff for putting on such a great event and making us feel so welcome.

The second day of the conference took place across the street from the South Carolina State House at the IT-oLogy office. The day consisted of workshops, so the sessions were much longer and more involved. It kicked off with a keynote by Bradley Kuhn, who gave an introductory talk on free software licensing: Software Freedom Licensing: What You Must Know. He did a great job offering a balanced view of the licenses available and the importance of selecting one appropriate to your project and team from the beginning.

After the keynote I headed upstairs to learn about OpenNMS from Tarus Balog. I love monitoring, but as a systems administrator rather than a network administrator, I’ve mostly been using service-based monitoring tooling and hadn’t really looked into OpenNMS. The workshop was an excellent tour of the basics of the project, including a short history and their current work. He walked us through the basic installation and setup, some of the configuration changes needed for SNMP, and XML-based changes made to various other parts of the infrastructure. He also talked about static and auto-discovery mechanisms for a network, how events and alarms work, and details about setting up the notification system effectively. He wrapped up by showing off some interesting graphs and other visualizations that they’re working to bring into the system for individuals in your organization who prefer to see the data presented in a less technical format.

The afternoon workshop I attended was put on by Jim Salter and went over backing up Android using open source technologies. This workshop focused on backing up content rather than the Android OS itself, but happily for me that’s all I wanted to back up, as I otherwise run stock Android from Google (easy to install again from a generic source as needed). Now, Google will happily back up all your data, but what if you want to back it up locally and store it on your own system? Using rsync backup for Android, Jim demonstrated how to configure your phone to send backups to Linux, Windows and Mac machines using ssh+rsync. For Linux, at least, this is a fully open source solution, which I quite like and have started using at home. The next component makes it automatic, which is where we get into a proprietary bit of software, Llama – Location Profiles. Based on various types of criteria (battery level, location, time, and lots more), Llama lets you define when it runs certain actions, like automatically running rsync to do backups. In all, it was a great and informative workshop, and I’m happy to finally have a useful solution for periodically pulling photos and things off my phone without plugging it in and using MTP, which I apparently hate and so never do. Slides from Jim’s talk, which also include specific instructions and tools for Windows and Mac, are online here.
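The ssh+rsync transfer itself is ordinary rsync. As a rough sketch (the paths, username and hostname below are illustrative placeholders, and the real app builds its command from per-profile settings), the push the phone effectively performs can be assembled like this:

```python
def build_rsync_command(src, user, host, dest, ssh_port=22):
    """Assemble an rsync-over-ssh push, e.g. photos to a home server.
    All of the argument values used below are made-up examples."""
    return [
        "rsync", "-av",                      # archive mode, verbose
        "-e", "ssh -p {0}".format(ssh_port), # transport over ssh
        src,                                 # source on the phone
        "{0}@{1}:{2}".format(user, host, dest),  # remote destination
    ]

cmd = build_rsync_command("/sdcard/DCIM/", "backup", "homeserver",
                          "/srv/backups/phone/", ssh_port=2222)
print(" ".join(cmd))
# → rsync -av -e ssh -p 2222 /sdcard/DCIM/ backup@homeserver:/srv/backups/phone/
```

The nice thing about this shape is that the receiving end needs nothing special: any machine running sshd with rsync installed can accept the backups.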

The conference concluded with Todd Lewis sending more thanks all around. By this time in the day rain was coming down in buckets and there were no taxis to be seen, so I grabbed a ride from Aaron Crosman, a local who, I had been happy to learn earlier, had come from Philadelphia; we had great Philly tech and city-versus-country tech stories to swap.

More of my photos from the event are available here:

Categories: LinuxChix bloggers

Spring Trip to Philadelphia and New Jersey

Sun, 2015-04-12 16:26

I didn’t think I’d be getting on a plane at all in March, but plans shifted and we scheduled a trip to Philadelphia and New Jersey that left my beloved San Francisco on Sunday March 29th and returned us home on Monday, April 6th.

Our mission: Deal with our east coast storage. Without getting into the boring and personal details, we had to shut down a storage unit that MJ has had for years and go through some other existing storage to clear out donatable goods and finally catalog what we have so we have a better idea what to bring back to California with us. This required movers, almost an entire day devoted to donations and several days of sorting and repacking. It’s not all done, but we made pretty major progress, and did close out that old unit, so I’m calling the trip a success.

Perhaps what kept me sane through it all was the fact that MJ has piles of really old hardware, which is a delight to share on social media. Geeks from all around got to gush over goodies like the 32-bit SPARC lunchboxes (and commiserate with me as I tried to close them).

Notoriously difficult to close, but it was done!

Now admittedly, I do have some stuff in storage too, including my SPARC Ultra 10 that I wrote about here, back in 2007. I wanted to bring it home on this trip, but I wasn’t willing to put it in checked baggage and the case is a bit too big to put in my carry-on. Perhaps next trip I’ll figure out some way to ship it.

SPARC Ultra 10

More gems were collected in my album from the trip:

We also got to visit friends and family and enjoy some of our favorite foods we can’t find here in California, including east coast sweet & sour chicken, hoagies and chicken cheese steaks.

Family visits began on Monday afternoon as we picked up the plastic storage totes we were using to replace boxes, many of which were hard to go through in their various states of squishedness and age. MJ had them delivered to his sister in Pennsylvania and they were immensely helpful when we did the move on Tuesday. We also got to visit with MJ’s father and mother, and on Saturday met up with his cousins in New Jersey to have my first family Seder for Passover! Previously I’d gone to ones at our synagogue, but this was the first time I’d done one in someone’s home, and it meant a lot to be invited and to participate. Plus, the Passover diet restrictions did nothing to stem the exceptional dessert spread, there was so much delicious food.

We were fortunate to be in town for the first Wednesday of the month, since that allowed us to attend the Philadelphia area Linux Users Group meeting in downtown Philadelphia. I got to see several of my Philadelphia friends there, and I brought along a box of books from Pearson to give away (including several copies of mine). The books went over very well with the crowd gathered to hear Anthony Martin, Keith Perry, and Joe Rosato talk about ways to get started with Linux, and giving them away freed up space in my closet here at home. It was a great night.

Presentation at PLUG

Friend visits included a fantastic dinner with our friend Danita and a quick visit to see Mike and Jessica, who had just welcomed little David into the world, awww!

Staying in New Jersey meant we could find Passover-friendly meals!

Sunday wrapped up with a late night at storage, finalizing some of our sorting and packing up the extra suitcases we brought along. We managed to get a couple hours of sleep at the hotel before our flight home at 6AM on Monday morning.

In all, it was a productive trip, but exhausting and I spent this past week making up for sleep debt and the aches and pains. Still, it felt good to get the work done and visit with friends we’ve missed.

Categories: LinuxChix bloggers

Puppet Camp San Francisco 2015

Tue, 2015-04-07 01:47

On Tuesday, March 24th I woke up early and walked down the street to a regional Puppet Camp, this edition held not only in my home city of San Francisco, but just a few blocks from home. The schedule for the event can be found up on the Eventbrite page.

The event kicked off with a keynote by Ryan Coleman of Puppet Labs, who gave an overview of how configuration management tools like Puppet have shaped our current systems administration landscape. Our work will only continue to grow in scale as we move forward, and based on results of the 2014 DevOps Report more companies will continue to move their infrastructures to the cloud, where automation is key to a well-functioning system. He went on to talk about the work that has been going into Puppet 4 RC and some tips for attendees on how they can learn more about Puppet beyond the event, including free resources like Learn Puppet (which also links to paid training resources) and the Puppet Labs Documentation site, for which they have a dedicated documentation team working to make great docs.

Next up was a great talk by Jason O’Rourke of Salesforce, who talked about his infrastructure of tens of thousands of servers and how automation using Puppet has allowed his team to do fewer of the boring, repetitive tasks and more interesting things. His talk then focused in on “Puppet Adoption in a Mature Environment,” where he quickly reviewed different types of deployments, from fresh new ones, where it’s relatively easy to deploy a new management framework, to old ones, where you may have a lot of technical debt, compliance and regulatory considerations, and an inability to take risks in a production environment. He walked through the strategies they used to make changes in the most mature environments, including the creation of a DevOps team responsible for focusing on the “infrastructure as code” mindset, the use of tools like Vagrant so identical test environments can be deployed by developers without input from IT, and the development of best practices for managing the system (including code review, testing, and more). One of the interesting things they also did was give production access to their DevOps team so they could run limited read/test-only commands against Puppet. This new system was then rolled out slowly, typically when hardware or datacenters were rolled out or when audits or upgrades were being conducted. They also rolled out specific “roles” in their infrastructure separately, from the less risky internal-only services out to partner- and customer-facing ones. The rest of the talk was mostly about how they actually deploy into production on a set schedule and do a massive amount of testing for everything they roll out. Nice to see!

Jason O’Rourke of Salesforce

Tray Torrance of NCC Group rounded out the morning talks with one on MCollective (Marionette Collective). He began by covering some history of the orchestration space that MCollective seeks to cover, and how many of the competing solutions are ssh-based, including Ansible, which we’ve been using in the OpenStack infrastructure. It was certainly interesting to learn how MCollective integrates with Puppet and is extendable with Ruby code.

After lunch I presented a talk on “Puppet in the Open Source OpenStack Infrastructure” where I walked through how and why we have an open source infrastructure, and steps for how other organizations and projects can adopt similar methods for managing their infrastructure code. This is similar to some other “open sourcing all our Puppet” talks I have given, but with this audience I definitely honed in on the DevOps-y value of making the code for infrastructure more broadly accessible, even if it’s just within an organization. Slides here.

The next couple of talks were by Nathan Valentine and David Lutterkort of Puppet Labs. Nathan did several live demos of Puppet Enterprise, mostly working through the dashboard to demonstrate how services can be associated with servers and each other for easy deployment. David’s presentation went into a bit of systems administration history, looking at the world before ever-present configuration management and virtualization, to discuss how containerization software like Docker has really changed the landscape for testing and deployments. He walked through usage of the Puppet module for Docker written by Gareth Rushgrove and his corresponding proof of concept for an ASP.NET service deployment in Docker, available here.

The final talk of the day was by Aaron Stone (nickname “sodabrew”) of BrightRoll on “Dashboard 2 and External Node Classification,” where he walked through the improvements to the Puppet Dashboard with the release of version 2. I had first been exposed to Puppet Dashboard when I joined the OpenStack Infrastructure team a couple of years ago; we were using it to share read-only data with our community so they’d have insight into when Puppet changes merged and whether they were successful. Unfortunately, a period of poor support for the dashboard caused us to go through several ideas for an alternative (documented in this bug) until we finally settled on PuppetBoard, a simple front end for PuppetDB. We’re really happy with its capabilities for our team, since read-only access is what we were looking for, but it was great to hear from Aaron about the work he’s resumed on the Dashboard, should I have a need in the future. The improvements he covered included maintenance fixes, such as broader support for newer versions of Ruby and updates to the libraries (gems) it uses, an improved REST API and some UI tweaks. He said that upgrading should be easy, but in an effort to focus on development he won’t be packaging it for all the distros, though the files (i.e. debian/ for .deb packages) are available if someone is able to take on that work.

In all, this was a great little event, and at the low ticket price of $50 it was quite a cost-effective way to learn about a few new technologies in the Puppet ecosystem and meet fellow local systems administrators and engineers.

A few more photos from the event are here:

Categories: LinuxChix bloggers