
Saturday, October 04, 2008

Open Source Days 2008, Saturday

Today was actually a work day for me, as we carried out a major service window. Fortunately, my duties during the service window were limited, so I managed to sneak over and attend a few sessions at Open Source Days while waiting for some SAN and operating system patches to be applied by co-workers.

I attended the last half of Jan Wieck's talk about Slony-I (not "Slony-l", as written in the conference agenda). Slony-I is an asynchronous master-slave replication program for the PostgreSQL database management system. Unfortunately, I don't work much with PostgreSQL in my current job, but if I did, I'd certainly try out Slony-I. It can be useful for scalability (think: read-only database slaves in a CMS cluster) and continuous backup to an offline location. It can also be used when upgrading PostgreSQL, resulting in close to zero down time, because Slony-I can (to a certain degree) replicate between different PostgreSQL versions. Slony-I has some rather impressive replication routing features, so that you can have master->slave->slave->slave chains.
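
As an illustration of the read-scaling use case, here is a minimal sketch in Python, using the psycopg2 driver, of how an application could direct writes to the Slony-I master and reads to a replicated slave. The host names, database name, and table are hypothetical, and a real setup would add connection pooling and failover handling.

    # Read/write splitting sketch for a master-slave replicated database.
    # Host names, database name, and table below are hypothetical.
    import psycopg2

    # Writes must go to the master; Slony-I slaves are read-only.
    master = psycopg2.connect(host="db-master.example.com", dbname="cms")
    # Reads can be spread over one or more replicated slaves.
    slave = psycopg2.connect(host="db-slave1.example.com", dbname="cms")

    def save_article(title, body):
        with master:  # commits on success, rolls back on exception
            with master.cursor() as cur:
                cur.execute("INSERT INTO articles (title, body) VALUES (%s, %s)",
                            (title, body))

    def list_articles():
        with slave.cursor() as cur:
            cur.execute("SELECT title FROM articles ORDER BY title")
            return [row[0] for row in cur.fetchall()]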

This talk was an example of why I like participating in conferences with an open source focus: Jan was very clear about Slony-I's limitations and weaknesses -- contrary to some corporate guy who might not be lying, but who might be suppressing unfortunate facts. Slony-I has a number of weak points: It's rather complex to install, configure, and manage. And the current version 1 does some dirty tricks with the system schema (these will be cleaned up in version 2).

Jan once had a plan for multi-master replication in Slony-I, but that idea has been dropped for now. Fine with me: Although it sounds cool, I would have a hard time trusting such a feature anyway, thinking about the implementation complexity it would entail.

Next, Magnus Hagander spoke about Hidden gems of PostgreSQL. Magnus works at Redpill, which provides 24x7 PostgreSQL support (among a number of other services). As far as I know, Redpill has recently opened an office in Denmark -- which means that it's now possible to sign up for local PostgreSQL support in our little pond.

Magnus went through a few selected PostgreSQL features, meaning that he had the time to explain them properly:
  • DDL operations (such as dropping a table) are transactional in PostgreSQL. Magnus presented this as a rather exclusive feature which few other DBMSes have. Actually, DB2 has the feature, and it's a mixed blessing: Transactions are a tremendously handy and time-saving feature, including transactional DDL. But if DDLs are transactional, it also means that a user with very low privileges can lock the system catalog by performing a DDL and not committing -- meaning that other users (potentially equipped with loads of high permissions) are blocked from completing DDL operations. I assume that PostgreSQL's transactional DDL suffers from the same drawback(?) By the way, Magnus pointed out a serious drawback with performing DDLs in some other DBMSes that don't have transactional DDL: They may carry out an implicit commit when a DDL statement is executed; this leaves potential for rather uncomfortable situations. (The first sketch after this list demonstrates transactional DDL.)
    Update, Monday, Oct 6: PostgreSQL doesn't suffer from the problem described above for DB2.
  • PostgreSQL now has built-in full text indexing (FTI), based on a somewhat cleaned up version of "Tsearch2", which used to be an add-on to PostgreSQL. The FTI can be used in a simple way, but you can also configure it in very specific and powerful ways, using language specific dictionaries and/or specialized parsers and stemmers. (The second sketch after this list shows basic usage.)
  • Finally, Magnus went through a few of the packages in PostgreSQL's "contrib" add-on. The crypto add-on is something I'd much like to have in DB2. (The third sketch after this list shows it in action.)
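
To make the transactional DDL point concrete, here is a minimal sketch in Python with the psycopg2 driver (connection details hypothetical): a CREATE TABLE is undone by a rollback, just like an ordinary data change.

    # Transactional DDL sketch: a CREATE TABLE disappears again on rollback.
    import psycopg2

    conn = psycopg2.connect(dbname="test")  # hypothetical connection details
    cur = conn.cursor()

    cur.execute("CREATE TABLE ddl_demo (id integer PRIMARY KEY)")
    cur.execute("INSERT INTO ddl_demo VALUES (1)")  # table exists here

    conn.rollback()  # the rollback undoes the DDL along with the INSERT

    cur.execute("SELECT count(*) FROM information_schema.tables"
                " WHERE table_name = %s", ("ddl_demo",))
    print(cur.fetchone()[0])  # prints 0: the table was never committed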
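
And a small sketch of the built-in full text indexing, again Python/psycopg2 with a made-up table and data:

    # Full text search sketch using the built-in tsearch functionality.
    import psycopg2

    conn = psycopg2.connect(dbname="test")  # hypothetical connection details
    cur = conn.cursor()

    cur.execute("CREATE TABLE docs (id serial PRIMARY KEY, body text)")
    # A GIN index over the tsvector keeps searches fast on large tables.
    cur.execute("CREATE INDEX docs_fts ON docs"
                " USING gin (to_tsvector('english', body))")
    cur.execute("INSERT INTO docs (body) VALUES (%s)",
                ("PostgreSQL ships with built-in full text indexing",))

    # The @@ operator matches a document against a query; the 'english'
    # configuration stems words, so 'indexing' matches the query 'index'.
    cur.execute("SELECT id FROM docs WHERE to_tsvector('english', body)"
                " @@ to_tsquery('english', %s)", ("index & text",))
    print(cur.fetchall())  # [(1,)]
    conn.commit()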
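
Finally, a sketch of the kind of thing the crypto module (pgcrypto) offers, here salted password hashing. On current PostgreSQL versions it is loaded with CREATE EXTENSION; back then, you ran the contrib SQL script instead.

    # pgcrypto sketch: salted password hashing with crypt() and gen_salt().
    import psycopg2

    conn = psycopg2.connect(dbname="test")  # hypothetical connection details
    cur = conn.cursor()
    cur.execute("CREATE EXTENSION IF NOT EXISTS pgcrypto")  # PostgreSQL 9.1+

    # Store a Blowfish-based salted hash instead of the cleartext password.
    cur.execute("SELECT crypt(%s, gen_salt('bf'))", ("s3cret",))
    stored_hash = cur.fetchone()[0]

    # Verify a login attempt: hashing the candidate with the stored hash
    # as salt must reproduce the stored hash exactly.
    cur.execute("SELECT crypt(%s, %s) = %s",
                ("s3cret", stored_hash, stored_hash))
    print(cur.fetchone()[0])  # True
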
After the talks, I went to the SSLUG booth to have a look at the extremely small PC which was on display there. Fascinating stuff. I really like the trend towards down-scaled and cheaper PCs, exemplified also by the Eee PC (which was everywhere at the conference). At the booth, I had a chat with Phillip S. Bøgh, who told me that for a typical desktop PC, 81% of its energy consumption actually happens during production, long before it's sold to the customer. The corollary is that there is value in keeping old hardware alive, instead of buying new equipment whenever some large software company decides to try to force us to buy new products featuring new heights of bloat.

Open Source Days 2008, Friday

For several years, there has been an annual two-day open source conference in Denmark. It has had different names in the past ("Linux 98", "Open Networks 99", "Linuxforum 200{0,2,3,4,5,6,7}"), but nowadays, it's called "Open Source Days".

I've attended the conference almost every year. This year is no exception, although I may miss out on most of the Saturday talks.

Here are my notes from Friday.

OpenID, by Simon Josefsson
Users of authentication-requiring web applications normally face an unfortunate choice: Reuse one or two passwords across all web sites, or store passwords in the local browser or a local password "wallet". The first option is clearly not attractive, because a rogue web site administrator could use your credentials to log in as you on other web sites. The second option is troublesome if you use several PCs, or if your PC is stolen (workstations are often not backed up regularly). OpenID brings a good solution to this dilemma: Create an account at an OpenID provider which you choose to trust (I currently use myOpenId). Then, you can use that account at all sites supporting OpenID logins (several weblog sites, Plaxo, Stack Overflow, etc). OpenID can also make life easier for web site developers.

Simon Josefsson gave a quick walk-through of the OpenID protocol (time was limited). In a comparison with other authentication systems, he noted that OpenID is based on a voluntary trust relationship between website and authenticator, in contrast with, e.g., SAML. OpenID can only be used in a web context. All in all, OpenID is a rather simple and light-weight protocol.
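
To give a flavour of how light-weight the protocol is, here is a rough sketch (Python; all URLs hypothetical) of the first step a relying party performs in OpenID 2.0: redirecting the user's browser to the provider with a checkid_setup request. A real implementation -- e.g. using the python-openid library -- would first perform discovery on the user's identifier and later verify the signed response.

    # Sketch of an OpenID 2.0 authentication request, as built by a
    # relying party. All URLs are hypothetical.
    from urllib.parse import urlencode

    op_endpoint = "https://openid.example.net/auth"  # found via discovery
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": "https://alice.example.org/",
        "openid.identity": "https://alice.example.org/",
        # Where the provider sends the user (and the signed result) back:
        "openid.return_to": "https://www.example.com/openid-return",
        # The realm (site) the user is asked to trust:
        "openid.realm": "https://www.example.com/",
    }
    print(op_endpoint + "?" + urlencode(params))  # redirect the browser here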

The main potential security problem with OpenID is phishing, but Simon noted that this is a problem with other systems as well: Even though the system may use non-web-browser password dialogs, such dialogs can be rather closely mimicked using Flash. The most effective solution to the phishing threat is to avoid relying (exclusively) on passwords, through SMS-based one-time codes, one-time code dongles, etc. Simon's company produces an elegant, small USB device which emulates USB keyboards; when you press a button on the device, a long password is emitted. In combination with an encryption system, this results in very secure authentication.
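
One-time code schemes like these are often built on the HOTP algorithm from RFC 4226: an HMAC over a shared secret and a counter, truncated to a short decimal code. Here is a compact illustration in Python (generic, not tied to any particular vendor's device):

    # HOTP one-time codes (RFC 4226): HMAC-SHA1 over a counter, followed
    # by dynamic truncation to a short decimal code.
    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F        # dynamic truncation offset
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Device and server share the secret and keep their counters in sync.
    print(hotp(b"12345678901234567890", 0))  # '755224', RFC 4226 test vector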

Where I work, we face an identity handling challenge: We need to have the authenticator convey a list of group memberships for an account to the web application. OpenID has deliberately been kept simple, so there is no dedicated solution for that. But Simon noted that OpenID 2 includes an assertion mechanism which can -- in principle -- be used to communicate any kind of attribute about a user to a website.
Unfortunately, we can't really use OpenID for the aforementioned challenge, but I would certainly look at OpenID if I were to implement an authentication system elsewhere.

Using Puppet to manage Linux and Mac platforms in an enterprise environment, by Nigel Kersten
Ever since I heard a recent interview with Luke Kanies, I've wanted to know more about Puppet. Luke has an interesting statement about system administrators: Sysadmins need to move to a higher level, by adopting some of the methodology used in software development. This relates to version control, traceability, abstraction, and code reuse. I very much agree with this.

Without having personally tried Puppet yet, I think it's somewhat fair to characterize it as a modern cfengine, and as the unix world's version of Microsoft's SMS tool (SMS having better reporting facilities, while Puppet probably has better scripting features). Puppet has gained a lot of attention in Linux and Mac sysadmin circles lately. Kersten is part of a team managing more than 10,000 internal Linux and Mac workstations at Google.

Puppet is written in the Ruby programming language, so it was reassuring to hear that Nigel Kersten is "a Python guy": the enthusiasm for Puppet is not just hype around its Ruby implementation.

Random notes from the talk: I learned that Puppet can actually work offline: Many rules will work without network dependencies. And it seems that Puppet can be a good way to implement local adjustments to software packages without having to mess around with local re-packaging. Puppet goes out of its way to avoid adjusting configuration files if there is no need (nice: that way, file mtimes don't mysteriously change without file content changes). Unfortunately, it sounds like there are issues to be worked out regarding Puppet installations on workstations where SELinux is in enforcing mode.
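
The "don't touch files needlessly" behaviour is easy to appreciate if you sketch the principle yourself: compare desired and current content, and only write when they differ. A simplified illustration in Python (my own, not Puppet's actual implementation):

    # Idempotent config file handling, in the spirit of Puppet: only
    # write when the content actually differs, so mtimes stay meaningful.
    import os

    def ensure_file(path: str, desired: str) -> bool:
        """Write desired content to path only if needed; report changes."""
        if os.path.exists(path):
            with open(path) as f:
                if f.read() == desired:
                    return False  # already correct: leave mtime alone
        with open(path, "w") as f:
            f.write(desired)
        return True

    changed = ensure_file("/tmp/demo.conf", "keepalive = 30\n")
    print("changed" if changed else "unchanged")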

Nigel hasn't heard from anyone with personal experience of getting Puppet running on AIX. And as we are (for better or for worse) using AIX for the majority of unix installations where I work, I probably can't justify fiddling with Puppet, currently.

By the way: RedMonk has a podcast where Nigel Kersten is interviewed. (RedMonk's podcasts generally have too much random chit-chat for my taste, but this interview is actually good, as far as I remember).

PostgreSQL
During a lunch break, I had a talk with Magnus Hagander at Redpill's exhibition booth. Magnus Hagander is one of the developers of my favorite database system, PostgreSQL. PostgreSQL's default configuration is generally very conservative/unaggressive, in order to be a good citizen on any server installation. But often, the administrator of a PostgreSQL installation actually wants it to be very aggressive. I asked Magnus for the top three PostgreSQL parameters he generally adjusts in PostgreSQL installations. His answer: shared_buffers, work_mem, and checkpoint_segments (plus effective_cache_size).
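
Good values for these depend entirely on workload and available RAM, but a quick way to inspect the current settings is a sketch like this (Python/psycopg2; connection details hypothetical):

    # Inspect the tuning parameters Magnus mentioned. Note that
    # checkpoint_segments was later replaced by max_wal_size
    # (PostgreSQL 9.5), so SHOW fails for it on newer servers.
    import psycopg2

    conn = psycopg2.connect(dbname="test")  # hypothetical connection details
    cur = conn.cursor()
    for param in ("shared_buffers", "work_mem",
                  "checkpoint_segments", "effective_cache_size"):
        try:
            cur.execute("SHOW " + param)
            print(param, "=", cur.fetchone()[0])
        except psycopg2.Error:
            conn.rollback()
            print(param, "is not available on this server version")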

Open Source Virtualization, an Overview, by Kris Buytaert
I've been using Xen virtualization for a while, both at home and at work. And I regularly touch IBM's Advanced POWER Virtualization as well as VMWare ESX. In other words, I'm interested in virtualization, not just from a theoretical perspective.

So I went to Kris Buytaert's talk for an update on the status of open source virtualization technologies. Kris went through the history of open source virtualization. He listed three virtualization products which he currently recommends: Xen for servers, VirtualBox for desktops, and Linux-VServer for mass hosting of web servers. And he mentioned the openQRM solution, which can be used for setting up a local cloud, as far as I understood. He had some surprising statements: If you have the choice between full, VT-based virtualization and paravirtualization, then go for paravirtualization, for performance reasons. Live migration is of little practical use (contrary to our experience where I work). It sounded like Kris is somewhat skeptical of KVM; on the other hand, he described how Xen has been moving further and further away from the open source world ever since it was bought by Citrix. (Citrix: How can you let this slip away?)

Best practices, by Poul-Henning Kamp
The best practice concept is starting to annoy me. I've often heard obviously stupid solutions being motivated by "but that's best practice!"; the statement is often heard from someone with highly superficial knowledge about the related field. Recently, Mogens Nørgaard had some good comments about the phenomenon in his video column (in Danish only).

In his talk, Poul-Henning was also skeptical about the best practice term. He joked about people asking for operations to be done in a way which is "best practice, or better!". Apart from that, Poul-Henning went through various recommendations for programmers, C programmers in particular: Do use assertions, and don't compile them away. Use code generation when possible. Print out your code, zoomed out to fit in a few pages; surprisingly, that can reveal patterns in your code which you hadn't noticed. Use lint and other code checkers, and try compiling your code with different compilers on different platforms. Certainly good advice, but the talk left me wondering: How about changing to a programming language with better type safety, instead of all the band-aids? (I believe that Poul-Henning once touched upon this in another context, basically stating that C is the only realistic language for a systems programmer, for various reasons.)
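
The assertion advice carries over to other languages, too. In Python, for instance, asserts document invariants but are stripped when the interpreter runs with -O -- the same "compiled away" situation Poul-Henning warns against. A small illustration of my own, not from the talk:

    # Assertions document invariants and catch impossible states early.
    # Beware: 'python -O' strips asserts, much like compiling a C program
    # with NDEBUG -- exactly what the talk warned against.

    def binary_search(sorted_items, needle):
        lo, hi = 0, len(sorted_items)
        while lo < hi:
            mid = (lo + hi) // 2
            assert lo <= mid < hi, "loop invariant violated"
            if sorted_items[mid] < needle:
                lo = mid + 1
            else:
                hi = mid
        return lo  # index where needle is, or would be inserted

    assert binary_search([1, 3, 5, 7], 5) == 2
    print("ok")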

Many people have high regard for Poul-Henning, the coder. At this talk, however, the applause was in the guru-admiration league, which was a bit out of proportion for the talk, in my opinion.