
Wednesday, June 23, 2010

Separation or co-location of database and application

In a classical three-tier architecture (database, application server, client), a choice will always have to be made: Should the database and the application server reside on separate servers, or be co-located on a shared server?

Often, I see recommendations for separation, with vague claims about performance improvements. But I assert that the main argument for separation is bureaucratic (license-related), and that separation may well hurt performance. Sure, if you split the application and the database onto separate servers, you gain an easy scale-out effect, but you also end up with a less efficient communication path.

If a database and an application are running on the same operating system instance, you may use very efficient communication channels. With DB2, for example, you may use shared-memory based inter-process communication (IPC) when the application and the database are co-located. If they are on separate servers, TCP must be used. TCP is a nice protocol, offering reliable delivery, congestion control, etc., but it also entails several protocol layers, each contributing overhead.

But let's try to quantify the difference, focusing as narrowly as possible on the query round-trips.

I wrote a little Java program, LatencyTester, which I used to measure differences between a number of database/application setups. Java was chosen because it's a compiled, statically typed language; that way, the test program should have as little internal execution overhead as possible. This can be important: I have sometimes written database benchmark programs in Python (which is a much nicer programming experience), but as Python can be rather inefficient, I ended up benchmarking Python instead of the database.

The program connects to a DB2 database in one of two ways:
  • If you specify a servername and a username, it will communicate using "DRDA" over TCP.
  • If you leave out servername and username, it will use a local, shared-memory based channel.
After connecting to the database, the program issues 10000 queries which shouldn't result in I/Os, because no data in the database system is referenced. The timer starts after connection setup, just before the first query; it stops immediately after the last query.

The application issues statements like VALUES(...) where ... is a value set by the application. Note the lack of a SELECT and a FROM in the statement. When invoking the program, you must choose between short or long statements. If you choose short, statements like this will be issued:
VALUES(? + 1)
where ? is a randomly chosen host variable.
If you choose long,  statements like
VALUES('lksjflkvjw...pvoiwepwvk')
will be issued, where lksjflkvjw...pvoiwepwvk is a 2000-character pseudo-randomly composed string.
In other words: The program may run in a mode where very short or very long units are sent back and forth between the application and the database. The short queries are effectively measuring latency, while the long queries may be viewed as measuring throughput.

I used the program to benchmark four different setups:
  • Application and database on the same server, using a local connection
  • Application and database on the same server, using a TCP connection
  • Application on a virtual server hosted by the same server as the database, using TCP
  • Application and database on different physical servers, but on the same local area network (LAN), using TCP
The results are displayed in the following graph:

Results from LatencyTester

Clearly, the highest number of queries per second is seen when the database and the application are co-located. This is especially true for the short queries: Here, more than four times as many queries may be executed per second when co-locating on the same server, compared to separating over a LAN. When using TCP on the same server, short-query round-trips run at around half the speed of the local connection.

The results need to be put in perspective, though: While there are clear differences, the absolute numbers may not matter much in the Real World. The average query time for short local queries was 0.1ms, compared to 0.8ms for short queries over the LAN. Let's assume that we are dealing with a web application where each page-view results in ten short queries. In this case, the round-trip overhead for a local connection is 10x0.1ms=1ms, whereas the round-trip overhead for the LAN-connected setup is 10x0.8ms=8ms. Other factors (like query disk-I/O, and browser-to-server roundtrips) will most likely dominate, and the user will hardly notice a difference.

Even though queries over a LAN will normally not be noticeably slower, LANs may sometimes exhibit congestion. And all else being equal, the more servers and equipment involved, the more things can go wrong.

Having established that application/database server separation will not improve query performance (neither latency nor throughput), what other factors are involved in the decision between co-location and separation?
  • Software licensing terms may make it cheaper to put the database on its own, dedicated hardware: The fewer CPUs beneath the database system, the lower the licensing costs. The same goes for the application server: If it is priced per CPU, it may be very expensive to pay for CPUs which are primarily used for other parts of the solution.
  • Organizational aspects may dictate that the DBA and the application server administrator each have their "own boxes". Or conversely, the organization may put an effort into operating as few servers as possible to keep administration work down.
  • The optimal operating system for the database may not be the optimal operating system for the application server.
  • The database server may need to run on hardware with special, high-performance storage system attachments, while the application server (which probably doesn't perform much disk I/O) may be better off running in a virtual server, taking advantage of the flexible administration offered by virtualization.
  • Buying one very powerful piece of server hardware is sometimes more expensive than buying two servers which add up to the same horsepower. But it may also be the other way around, especially if cooling, electricity, service agreements, and rack space are taken into account.
  • Handling authentication and group memberships may be easier when there is only one server. E.g., DB2 and PostgreSQL allow the operating system to handle authentication if an application connects locally, meaning that no authentication configuration needs to be set up in the application. (Don't you just hate it when passwords sneak into the version control system?)
  • A mis-behaving (e.g. memory-leaking) application may disturb the database if the two are running on the same system. Operating systems generally provide mechanisms for resource-constraining processes, but that can often be tricky to set up.
Summarized:
Pro separation:
  • Misbehaving application will not be able to disturb the database as much.
  • May provide for more tailored installations (the database gets its perfect environment, and so does the application).
  • If the application server is split into two servers in combination with a load balancing system, each application server may be patched individually without affecting the database server, and with little or no visible down-time for the users.
  • May save large amounts of money if the database and/or the application is CPU-licensed.
  • Potentially cheaper to buy two modest servers than to buy one top-of-the-market server.
  • Allows for advanced scale-out/application-clustering scenarios.
Con separation:
  • Slightly higher latency.
  • Less predictable connections on shared networks.
  • More hardware/software/systems to maintain, monitor, back up, and document.
  • Potentially more base software licensing fees (operating systems, supporting software).
  • Potentially more expensive to buy two servers if cooling and electricity are taken into account.
  • Prevents easy packaging of a complete database+application product, such as a turn-key solution in a single VMWare or Amazon EC2 image.
Am I overlooking something?
Is the situation markedly different with other DBMSes?

Monday, December 28, 2009

PostgreSQL innovation: Exclusion constraints

It seems like the next generation of PostgreSQL will have a new, rather innovative feature: Exclusion constraints. The feature is explained in a video from a presentation at a recent San Francisco PostgreSQL Users' Group; the presentation couples the new feature with the time period data type.

In a perfect World where databases implement the full ISO SQL standard (including non-core features), exclusion constraints could be nicely expressed as SQL assertions, but the perfect World hasn't happened yet. And I think that I know the reason for this: SQL assertions seem like a strong cocktail of NP-hard problems - they are probably very hard to implement in an efficient way.

In our less than perfect World, it's nice that PostgreSQL will soon offer a way to specify exclusion constraints other than the UNIQUE constraint.
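To illustrate, here is a sketch of what such a constraint could look like, assuming the period data type from the presentation and a hypothetical room reservation table:

CREATE TABLE reservation (
    room   INT,
    during PERIOD,
    EXCLUDE USING gist (room WITH =, during WITH &&)
);
-- (in practice, the btree_gist module is needed for the = operator on integers)

With this in place, no two rows can have the same room and overlapping during values -- a classic double-booking guard which a UNIQUE constraint cannot express.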

The presenter, Jeff Davis, has an interesting blog, by the way. And on the subject of video, Vimeo has a little collection of PostgreSQL video clips that looks interesting.

Sunday, October 04, 2009

SQL comparison update: PostgreSQL 8.4

I finally got around to adjusting my SQL comparison page, so that the improvements in PostgreSQL 8.4 are taken into account. PostgreSQL is now standards-compliant in the sections about limiting result sets; actually, PostgreSQL is the first DBMS that I know of which supports SQL:2008's OFFSET + FETCH FIRST construct for pagination. PostgreSQL's new support for window functions is also very nice.
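For illustration, here is a pagination query using the standard construct (table and column names are made up):

SELECT title, posted_on
FROM posts
ORDER BY posted_on DESC
OFFSET 20 ROWS
FETCH FIRST 10 ROWS ONLY;

This fetches "page 3" with a page size of 10, without any vendor-specific LIMIT syntax.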

My page doesn't cover common table expressions (CTEs) yet, but it's certainly nice to see more and more DBMSes (including PostgreSQL, since version 8.4) supporting them. Even non-recursive CTEs are important, because they can really clean up SQL queries and make them more readable.
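As a small illustration of the readability gain, here is a sketch of a non-recursive CTE over hypothetical tables:

WITH recent_sales AS (
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    WHERE order_date >= CURRENT_DATE - 30
    GROUP BY customer_id
)
SELECT c.name, r.total
FROM customers c
JOIN recent_sales r ON r.customer_id = c.customer_id;

The aggregation gets a name and can be read (and tested) on its own, instead of being buried as a subquery in the join.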

SQL comparison updates: Oracle 11R2, diagnostic logs

I finally found some time to update my page which compares SQL implementations. I performed a general update regarding Oracle, now that Oracle 11R2 has been released. And I added a new (incomplete) section which describes where the different DBMSes store diagnostic logs.

It turned out that very little has changed in Oracle since generation 10. The only remarkable new SQL feature in Oracle 11 is support for CTEs and proper CTE-based recursive SQL (introduced in version 11, release 2) -- but as I don't cover this topic on my page, updating the Oracle items was mostly a question of updating documentation links. Oracle still doesn't support the standard by having a CHARACTER_LENGTH and a SUBSTRING function, for example. This is simple, low-hanging fruit, Oracle(!) Sigh. It seems like Oracle's market position has made them (arrogantly) ignore the SQL standard.
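To make the complaint concrete, here is the standard spelling versus Oracle's, using a hypothetical people table:

-- Standard SQL:
SELECT CHARACTER_LENGTH(name), SUBSTRING(name FROM 1 FOR 3) FROM people;

-- Oracle's spelling of the same thing:
SELECT LENGTH(name), SUBSTR(name, 1, 3) FROM people;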

Monday, August 31, 2009

VLDB2009: Non-cloud storage

Not all of VLDB2009 was related to cloud storage. Luckily, local and SAN storage is still being explored. Here are some notes from selected presentations.

Mustafa Canim: An Object Placement Advisor for DB2 Using Solid State Storage: What data should be put on disks, and what data on solid state disk storage? Solid state drives (SSDs) shine at random read I/O, but in most situations funds are limited, so only part of a database will be eligible for SSD placement. It turns out that if a naïve/simplified strategy like let's place all indexes on SSD is used, only a small overall performance gain is measurable, and it hardly justifies the expensive SSD storage. But if placement of database objects (tables/indexes) is based on measurements from a representative workload, then data which attracts many seeks can be placed on the SSDs. Canim has created a prototype application which uses DB2 profiling information to give advice on the optimal use of x GB of SSD storage, and he demonstrated very convincing price/performance gains from his tool. The principle should be easy to apply to other DBMSes.
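In DB2, acting on such advice boils down to tablespace placement. A sketch, assuming that tablespaces data_ts (on spinning disks) and ssd_ts (SSD-backed) have already been created:

-- Keep the table data on spinning disks, but put its indexes
-- (random-read heavy) in the SSD-backed tablespace:
CREATE TABLE orders (
    order_id    INT NOT NULL PRIMARY KEY,
    customer_id INT NOT NULL
) IN data_ts INDEX IN ssd_ts;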

Devesh Agrawal: Lazy-Adaptive Tree: An Optimized Index Structure for Flash Devices: Since SSDs do not shine at random writes, an SSD-optimized index structure would be highly welcome. The Lazy-Adaptive (LA) Tree index is a (heavily) modified B+ tree which buffers writes in a special way, yielding significant performance improvements.

Nedyalko Borisov and Shivnath Babu had a poster and a demonstration about a prototype application they have created: DIADS. DIADS integrates information from DB2 query access plans with knowledge about the storage infrastructure (all the way from files on disk, through the LVM and SAN layers, to individual disks) to help diagnose performance bottlenecks. This would certainly be useful for DBAs, and it could probably bridge the worlds of DBAs and storage people. They are considering making it an open source project. By the way: I believe that I've heard Hitachi claim to be selling a tool with a similar objective: Hitachi Tuning Manager.

Finally, a little rule of thumb from Patrick O'Neil's The Star Schema Benchmark and Augmented Fact Table Indexing (part of the TPC Workshop): Throughput of magnetic disks has grown much more than seek latency has shrunk, so at the moment, 1MB of scanning can justify 1 seek (at roughly 100MB/s of sequential throughput and 10ms per seek, reading 1MB costs about the same time as one seek).

Friday, August 28, 2009

VLDB2009: Sloppy by choice

One of the recurring themes at the VLDB2009 conference was how to create massively scalable database systems, by decreasing expressiveness, transaction coverage, data integrity guarantees—or any combination of these. The theoretical justification for this is partly explained by the CAP Theorem. Much has already been written about database key-value/object stores, cloud databases, etc. But there were still a few surprises.

Ramakrishnan's keynote
Yahoo!'s Raghu Ramakrishnan (whose textbook many readers of this article will know) gave a keynote on the subject. It was actually new to me that Yahoo! is also entering the cloud business; it's getting crowded in the sky, for sure. As we know, the business idea is this: A large operation like Amazon already has a massive, distributed IT infrastructure; so letting other companies in isn't that much of an extra burden. And the more users of the hardware, the easier it is to build a cost-efficient setup which can still handle spikes in performance demands (with many users it's very unlikely that their systems run at peak performance at the same time). Nice idea, but it may take a long while before users convert to using software which runs in the cloud, and it remains to be seen how many cloud vendors can survive that long.

Anyway, Raghu Ramakrishnan presented a much-needed comparison of cloud database solutions (pages 55-60 in this presentation). The consistency model of Yahoo!'s cloud database system, PNUTS, does not provide ACID, but nor is it 'sloppy' to the degree of BASE. Another nice aspect of Yahoo!'s cloud systems is that much of it is based on open source software. Yay Yahoo!

When to be sloppy
At the conference, some claimed that cloud storage can also be used for important data, but no one gave a plausible example. In cloudy times, we should not forget that not all data is about tweets, status updates, weblog postings, etc. Some data genuinely matters. That's why I liked the Consistency Rationing in the Cloud: Pay only when it matters presentation: It provided a framework for categorizing data in degrees of acceptable sloppiness, based on the cost associated with potential inconsistencies, versus the savings gained from the lower transaction overhead.

Panel on cloud storage
On Wednesday, there was a panel discussion on How Best to Build Web-Scale Data Managers, moderated by Philip Bernstein. Random notes from the discussion:
  • A surprising, and somewhat strange, viewpoint from Bernstein: We should not ditch ACID (not surprising coming from Mr Transaction himself), but we should give hierarchical DBMSes a new chance. According to Bernstein, the reason why it will not fail this time is that we have become so good at handling materialized views, which allow us to make sure that fast queries are not restricted to restricting/scanning along a single dimension. Bernstein failed to alleviate my fear of the return of another major drawback of hierarchical databases: navigational queries.
  • While there's much not-so-important data out there, the phenomenon of moderate amounts of important data hasn't gone away (not every business is a twitter.com). So although the non-ACID, non-relational database systems may be getting a lot of attention, it doesn't matter much to the makers of "traditional" DBMSes, because RDBMS business is doing great.
  • Sub-question: Why do web start-ups seem to make use of key-value stores, and not use Oracle's DBMS (for example) when they need to scale beyond a single data server? There wasn't much opposition to the view that—in addition to being administration labor intensive—the cost of an Oracle cluster is way out of budget in many businesses. So: If database researchers want to help prevent relational+transactional research from becoming increasingly irrelevant, it's time to help the open source database projects. While I agree that researchers can make a difference in the open source world, I'm skeptical of the perception that RDBMSes are being abandoned; whatever numbers I've seen actually indicate the opposite. And while Facebook—for example—has a key-value store, their non-clickstream data is actually in sharded MySQL databases, as far as I've heard; sharded MySQLs will never win relational database beauty contests, but at least it's tabular data, accessible with SQL, and with node-local transactions.
  • Interesting point: SAP, an application with undisputed business significance, is well known for using nothing but the most basic RDBMS features. With that in mind, one should be cautious to denounce cloud databases for lack of expressiveness.


Finally, a couple of pictures from the nearby Tête d'Or Park:

VLDB2009: TPC Workshop

Monday, I attended the Transaction Processing Performance Council (TPC) Workshop about the latest developments in database benchmarks. The workshop was kicked off with a keynote from a true database hero, Michael Stonebraker. Mr. Stonebraker has not only published significant research papers—he has also initiated a number of projects and startups: Postgres (the precursor to the great PostgreSQL DBMS), Vertica (one of the pioneers within the "column-oriented"/"column-store" database realm), StreamBase, and others. Stonebraker held that benchmarks have been instrumental in increasing competition among vendors, but there are aspects to be aware of: As the benchmarks allow the vendors to tune the DBMS (and sometimes create setups not resembling typical installations, like using five-figure disk counts), they don't improve the out-of-box experience—an experience which all too often needs significant tweaking. Stonebraker also criticized the TPC for being too vendor-focused, instead of focusing on users and (simple) applications. And he urged the TPC to speed up the development of new benchmark types (I'm thinking: geospatial, recovery, ...), partly by cutting down on organizational politics.

Personally, I'm astonished that some (most?) of the big DBMS vendors prohibit their users from publishing performance findings. This curbs discussion among practitioners, and it decreases reproducibility and detail of research papers ("product A performed like this, product B performed like that"). I doubt that this actually holds in a court of law, but it would certainly take guts (and a big wallet) to challenge it. I'm also annoyed that the vendors don't really support the TPC much: The TPC-E benchmark (an OLTP benchmark, a sort-of modernized version of TPC-C) is two years old, yet only one vendor (Microsoft) has published any results.

Nevertheless, references to TPC benchmarks were prevalent at the conference, being referred to in several papers.

I'm planning to try running TPC-H on our most important database systems, to see if it is feasible to use it for regular performance measurements—in order to become aware of performance regressions (and improvements). By the way: It would be great if IBM (and others) published a table of reference DB2 TPC findings for a variety of normal hardware configurations. That way, a DBA like me could get an indication of whether I've set up our systems properly.

Other speakers had various ideas for new benchmarks, including benchmarks which measure performance per kWh, and benchmarks which expose how well databases handle error situations.

A researcher from VMWare made a plea for benchmarks of databases running in virtual environments. He presented some numbers from a TPC-C-like workload running on ESX guests, showing that a DBMS (MSSQL, I believe) can be set up to run at 85%-92% of the native speed. Certainly acceptable. And much in line with what I'm experiencing. I hope that figures like this can gradually kill the myth that DBMSes shouldn't run in virtual hosts—a myth which results in many organizations not realizing the full potential of virtualization (increased overall hardware utilization, lower need for manpower, fewer late-night service windows, as workloads can be switched to other hosts when a server needs hardware service).

I forgot to take a picture from the workshop, so here's a picture of me being sceptical about traditional DBMS vendors—next to the Saône river.

At VLDB2009

I'm attending the 35th VLDB conference in Lyon: VLDB2009. The conference portrays itself as the premier international forum for database researchers, vendors, practitioners, application developers, and users. Of the 700 people (from 44 countries), I'm one of the few practitioners at the conference; and though there's a risk that the conference will be too research oriented, I've signed up, especially hoping to get the latest updates and thoughts on
  • probabilistic databases
  • column-oriented databases
  • performance quantification
  • cloud databases
During the next couple of days, I'll share my experiences here.

I—and others—have tweeted a bit from the conference, as well.

Thursday, October 09, 2008

DB2 lets you drop a parent table

DB2 is usually a rather strict database system: It doesn't allow you to drop a procedure which is being used by a function. It uses pessimistic locking. It typically forces you to back up a tablespace after you aggressively load data into a table, unless the database is using circular logging. Etc.

So I found it surprising that it allows you to drop a table which is being referred to in a foreign key. DB2 doesn't even warn you about the fact that the child table(s) have lost a potentially important constraint. That's evil.

I know of no other DBMS which lets you drop a parent table: PostgreSQL refuses it (unless you add a CASCADE option to the DROP statement), MSSQL refuses it. Not even MySQL lets you do it.
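A minimal reproduction sketch (table names are made up):

CREATE TABLE parent (
    parent_id INT NOT NULL PRIMARY KEY
);
CREATE TABLE child (
    parent_id INT NOT NULL REFERENCES parent (parent_id)
);

DROP TABLE parent;
-- DB2: succeeds, silently leaving child without its foreign key
-- PostgreSQL: ERROR: cannot drop table parent because other objects depend on it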

As always, MySQL has little surprises:
CREATE TABLE child (
    child_id INT NOT NULL,
    parent_id INT NOT NULL,
    whatever VARCHAR(50) NOT NULL,
    PRIMARY KEY(child_id, parent_id),
    CONSTRAINT child_fk FOREIGN KEY (parent_id) REFERENCES parent
);
ERROR 1005 (HY000): Can't create table './test/child.frm' (errno: 150)

So what is the reason behind that confusing error message (which also qualifies as evil)? -- "REFERENCES parent" must be explicit: "REFERENCES parent(parent_id)"...
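With the referenced column spelled out, the constraint is accepted:

CONSTRAINT child_fk FOREIGN KEY (parent_id) REFERENCES parent (parent_id)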

Monday, October 06, 2008

Recursive SQL, soon also in PostgreSQL

SQL:1999-style recursive SQL has been added to the development version of PostgreSQL. Consequently, three DBMSes (DB2, MSSQL, PostgreSQL) will soon support the "WITH [RECURSIVE]" construct. Recursive SQL is becoming mainstream. (Oracle also has recursive SQL, but implemented in a limited and non-standard way.)
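As a sketch of the construct, here is a standard recursive query over a hypothetical employees table with a self-referencing manager_id column (PostgreSQL requires the RECURSIVE keyword, while DB2 and MSSQL spell it plain WITH):

WITH RECURSIVE subordinates (emp_id, name) AS (
    SELECT emp_id, name
    FROM employees
    WHERE emp_id = 1            -- the root of the hierarchy
  UNION ALL
    SELECT e.emp_id, e.name
    FROM employees e
    JOIN subordinates s ON e.manager_id = s.emp_id
)
SELECT * FROM subordinates;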

Saturday, October 04, 2008

Open Source Days 2008, Saturday

Today was actually a work day for me, as we carried out a major service window. Fortunately, my duties during the service window were limited, so I managed to sneak over and attend a few sessions at Open Source Days while waiting for some SAN and operating system patches to be applied by co-workers.

I attended the last half of Jan Wieck's talk about Slony-I (not "Slony-l", as written in the conference agenda). Slony-I is an asynchronous master-slave replication program for the PostgreSQL database management system. Unfortunately, I don't work much with PostgreSQL in my current job, but if I did, I'd certainly try out Slony-I. It can be useful for scalability (think: read-only database slaves in a CMS cluster) and continuous backup to an offline location. It can also be used when upgrading PostgreSQL, resulting in close to zero down time, because Slony-I can (to a certain degree) replicate between different PostgreSQL versions. Slony-I has some rather impressive replication routing features, so that you can have master->slave->slave->slave.

This talk was an example of why I like participating in conferences with open source focus: Jan was very clear about Slony-I's limitations and weaknesses -- contrary to some corporate guy who might not be lying, but who might be suppressing unfortunate facts. Slony-I has a number of weak points: It's rather complex to install, configure, and manage. And the current version 1 does some dirty tricks with the system schema (will be cleaned up in version 2).

Jan once had a plan for multi-master replication in Slony-I, but that idea has been dropped for now. Fine with me: Although it sounds cool, I would have a hard time trusting such a feature anyway, thinking about the implementation complexity it would entail.

Next, Magnus Hagander spoke about Hidden gems of PostgreSQL. Magnus works at Redpill, which provides 24x7 PostgreSQL support (among a number of other services). As far as I know, Redpill has recently opened an office in Denmark -- which means that it's now possible to sign up for local PostgreSQL support in our little pond.

Magnus went through a few selected PostgreSQL features, meaning that he had the time to explain them properly:
  • DDL operations (such as dropping a table) are transactional in PostgreSQL (a small demonstration follows after this list). Magnus presented this as a rather exclusive feature which few other DBMSes have. Actually, DB2 has the feature, and it's a mixed blessing: Transactions are a tremendously handy and time-saving feature, including transactional DDL. But if DDLs are transactional, it also means that a user with very low privileges can lock the system catalog by performing a DDL and not committing -- meaning that other users (potentially equipped with loads of high permissions) are blocked from completing DDL operations. I assume that PostgreSQL's transactional DDL suffers from the same drawback(?) By the way, Magnus pointed out a serious drawback with performing DDLs in some other DBMSes that don't have transactional DDL: They may carry out an implicit commit when a DDL statement is executed; this leaves potential for rather uncomfortable situations.
    Update, Monday, Oct 6: PostgreSQL doesn't suffer from the problem described above for DB2.
  • PostgreSQL now has built-in full text indexing (FTI), based on a somewhat cleaned-up version of "Tsearch2", which used to be an add-on to PostgreSQL. The FTI can be used in a simple way, but you can also configure it in very specific and powerful ways, using language-specific dictionaries and/or specialized parsers and stemmers.
  • Finally, Magnus went through a few of the packages in PostgreSQL's "contrib" add-on. The crypto add-on is something I'd much like to have in DB2.
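Regarding the first point, here is a tiny demonstration of transactional DDL, using a hypothetical table:

BEGIN;
DROP TABLE important_stuff;
CREATE TABLE important_stuff (id INT PRIMARY KEY, payload VARCHAR(100));
ROLLBACK;
-- Both the DROP and the CREATE are undone; the original table is intact.

In a DBMS with implicit commit on DDL, the ROLLBACK would come too late -- the original table would already be gone.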
After the talks, I went to the SSLUG booth to have a look at the extremely small PC which was on display there. Fascinating stuff. I really like the trend towards down-scaled and cheaper PCs, exemplified also by the Eee PC (which was everywhere at the conference). At the booth, I had a chat with Phillip S. Bøgh who told me that for a typical desktop PC, 81% of its energy consumption actually happens during production, long before it's sold to the customer. The corollary is that there is value in keeping old hardware alive, instead of buying new equipment whenever some large software company decides to try to force us to buy new products featuring new heights of bloat.

Open Source Days 2008, Friday

For several years, there has been an annual two-day open source conference in Denmark. It has had different names in the past ("Linux 98", "Open Networks 99", "Linuxforum 200{0,2,3,4,5,6,7}"), but nowadays, it's called "Open Source Days".

I've attended the conference almost every year. This year is no exception, although I may miss out on most of the Saturday talks.

Here are my notes from Friday.

OpenID, by Simon Josefsson
Users of authentication-requiring web applications normally have an unfortunate choice: Use one or two passwords at all web sites, or store passwords in the local browser or a local password "wallet". The first option is clearly not attractive, because a rogue web site administrator could be using your credentials to log in as you on other web sites. The second option is troublesome if you use several PCs, or if your PC is stolen (workstations are often not regularly backed up). OpenID brings a good solution to this dilemma: Create an account at an OpenID provider which you choose to trust (I use myOpenId, currently). Then, you can use that account at all sites supporting OpenID logins (several weblog sites, Plaxo, Stack Overflow, etc). OpenID can also make life easier for web site developers.

Simon Josefsson went through the OpenID protocol, superficially (time was limited). In a comparison with other authentication systems, he noted that OpenID is based on a voluntary trust relationship between website and authenticator, in contrast with--e.g.--SAML. OpenID can only be used in a web context. All in all, OpenID is a rather simple and light-weight protocol.

The main potential security problem with OpenID is phishing, but Simon noted that this is a problem with other systems as well: Even though the system may use non-web-browser password dialogs, such dialogs can be rather closely mimicked using Flash. The most effective solution to the phishing threat is to avoid relying (exclusively) on passwords, through SMS-based one-time codes, one-time code dongles, etc. Simon's company produces an elegant, small USB device which emulates USB keyboards; when you press a button on the device, a long password is emitted. In combination with an encryption system, this results in very secure authentication.

Where I work, we face an identity handling challenge: We need to have the authenticator convey a list of group memberships for an account to the web application. OpenID has deliberately been kept simple, so there is no dedicated solution for that. But Simon noted that OpenID 2 includes an assertion mechanism which can--in principle--be used to communicate any kind of attribute about a user to a web-site.
Unfortunately, we can't really use OpenID for the aforementioned challenge, but I would certainly look at OpenID if I were to implement an authentication system elsewhere.

Using Puppet to manage Linux and Mac platforms in an enterprise environment, by Nigel Kersten
Ever since I heard a recent interview with Luke Kanies, I've wanted to know more about Puppet. Luke has an interesting statement about system administrators: Sysadmins need to move to a higher level, by adopting some of the methodology used in software development. This relates to version control, traceability, abstraction, and code reuse. I very much agree on this.

Without having personally tried Puppet yet, I think it's somewhat fair to characterize it as a modern cfengine, and as the unix world's version of the Microsoft world's SMS tool (SMS having better reporting facilities, while Puppet probably has better scripting features). Puppet has gained a lot of attention in Linux and Mac sysadm circles lately. Kersten is part of a team managing more than 10000 Linux and Mac internal workstations at Google.

Puppet is written in the Ruby programming language. So it was reassuring to hear that Nigel Kersten is "a Python guy": Puppet is not just being hyped as an example of a Ruby implementation.

Random notes from the talk: I learned that Puppet can actually work offline: Many rules will work without network dependencies. And it seems that Puppet can be a good way to implement local adjustments to software packages without having to mess around with local re-packaging. Puppet goes out of its way to avoid adjusting configuration files if there is no need (nice: that way, file mtimes don't mysteriously change without file content changes). Unfortunately, it sounds like there are issues to be worked out regarding Puppet installations on workstations where SELinux is in enforcing mode.

Nigel has heard from no one with personal experience of getting Puppet running on AIX. And as we are (for better or for worse) using AIX for the majority of unix installations where I work, I probably can't justify fiddling with Puppet, currently.

By the way: RedMonk has a podcast where Nigel Kersten is interviewed. (RedMonk's podcasts generally have too much random chit-chat for my taste, but this interview is actually good, as far as I remember).

PostgreSQL
During a lunch break, I had a talk with Magnus Hagander at Redpill's exhibition booth. Magnus Hagander is one of the developers of my favorite database system, PostgreSQL. PostgreSQL is generally very conservative/unaggressive by default, in order to be a good citizen on any server installation. But often, the administrator of a PostgreSQL installation actually wants it to be very aggressive. I asked Magnus Hagander for the top three PostgreSQL parameters he generally adjusts in PostgreSQL installations. His answer: shared_buffers, work_mem, checkpoint_segments (and effective_cache_size).
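Translated into postgresql.conf lines, that could look like the following -- the values are purely illustrative and must be sized to the machine's RAM and workload (parameter names as of the 8.3 era):

shared_buffers = 2GB             # the default is far too small for a dedicated server
work_mem = 64MB                  # memory per sort/hash operation
checkpoint_segments = 32         # allow more WAL between checkpoints
effective_cache_size = 6GB       # hint about the OS file system cache size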

Open Source Virtualization, an Overview, by Kris Buytaert
I've been using Xen virtualization for a while, both at home and at work. And I regularly touch IBM's Advanced POWER Virtualization as well as VMWare ESX. In other words, I'm interested in virtualization, not just from a theoretical perspective.

So I went to Kris Buytaert for an update on the status of open source virtualization technologies. Kris Buytaert went through the history of open source virtualization. He listed three virtualization products which he currently recommends: Xen for servers, VirtualBox for desktops, and Linux-VServer for mass hosting of web-servers. And he mentioned the openQRM solution which can be used for setting up a local cloud, as far as I understood. He had some surprising statements: If you have the choice between full, VT-based virtualization and paravirtualization, then go for paravirtualization, for performance reasons. Live migration is of little practical use (contrary to experiences where I work). It sounded like Kris is somewhat skeptical with regard to KVM; on the other hand, Kris described how Xen has been moving further and further away from the open source world, ever since it was bought by Citrix (Citrix: How can you let this slip away?)

Best practices, by Poul-Henning Kamp
The best practice concept is starting to annoy me. I've often heard obviously stupid solutions being motivated by "but that's best practice!"; the statement is often heard from someone with highly superficial knowledge about the related field. Recently, Mogens Nørgaard had some good comments about the phenomenon in his video column (in Danish only).

In his talk, Poul-Henning was also skeptical about the best practice term. He joked about people asking for operations to be done in a way which is "best practice, or better!". Apart from that, Poul-Henning went through various recommendations for programmers, C programmers in particular: Do use assertions, and don't compile them away. Use code generation when possible. Print out your code, zoomed out to fit in a few pages; surprisingly, that can reveal patterns in your code which you didn't realize were there. Use lints and other code checkers, and try compiling your code with different compilers on different platforms. Certainly good advice, but the talk left me wondering: How about changing to a programming language with better type safety, instead of all the band-aids? (I believe that Poul-Henning once touched upon this in another context, basically stating that C is the only realistic language for a systems programmer, for various reasons.)

Many people have high regard for Poul-Henning, the coder. At this talk, however, the loudness of the applause was in the guru-admiration league -- which was a bit out of proportion to the talk, in my opinion.

Tuesday, September 30, 2008

JAOO 2008, Tuesday

Today, at JAOO, I attended sessions related to architecture, REST, and network databases.

For the record, I should state that I have a strong distaste for the software architecture / architect terms. In my opinion, software architecture is a muddy buzzword, just like the architect title seems to cover a suspiciously broad category of profiles (Buzzword manager? Very experienced developer? Software development project manager?) I remember a recruiting conference where a company was hiring graduates for architect positions; if the architect title can include people without any experience at all, then it's void of meaning. Anyway, one should be open-minded, so I decided to attend selected talks on the architecture track.

Frank Buschmann held a talk about architecture reviews. Buschmann had several good points, although they would probably hold for just about any review situation. One point was that an architecture reviewer should keep a neutral approach towards the development team; this certainly sounds like a good idea. <rant>However, one of Frank's war stories included a tale of how he had once used a special strategy to convince a team that they should stop being critical of a development framework which had been forced upon them. Is that being neutral?</rant> Performing an architecture review sounds like a good way to ensure documentation which would probably never be written otherwise; for this reason alone, it's probably recommendable.

Another architecture talk was Top Ten Software Architecture Mistakes by Eoin Woods. Woods seems to subscribe to a definition of a software architect which is much akin to that of a project manager who knows about systems development. The talk was a very well performed live version of an article which Woods has written. Too bad there wasn't time for discussion (a general problem with JAOO's presentations); but it would probably have been hard to handle anyway -- the large room was packed with people.

Eberhard Wolff held a two-part talk about Java History and Outlook. I attended the second part, which was a rewarding experience. Wolff gave several examples of how community initiatives have outlived certain over-designed "official" J2EE components. He also made an interesting point of how the Java world's web application trends seem to move from MVC towards component-based design -- while, conversely, .Net is currently sliding from component-based design towards MVC. Wolff's company is involved in the Spring framework; if I'm not mistaken, it sounds like the Java world may finally be cured of its "frameworkitis", converging on Spring for modern Java-based web applications. Although Wolff obviously has a strong interest in Java, the talk wasn't an evangelism event: He pretty much declared Java dead on the desktop. And he noted that while the JVM is doing fine, the Java language has trouble incorporating new, needed features; instead, JVM-based languages like Scala may take over in the long run. Until then, Wolff strongly suggested adopting aspect-oriented Java add-ons such as AspectJ and Spring AOP to gain more expressiveness.

I had the pleasure of attending two good talks about REpresentational State Transfer (REST): Ian Robinson's RESTful Enterprise Development and Stefan Tilkov's Successfully applying REST - Integration, Web-style.

Tilkov presented a rigorous definition of REST, but in a nutshell, REST is web services, done right, IMHO. Unlike over-designed and obese SOAP, REST doesn't abuse the HTTP protocol; instead, REST takes advantage of URLs, MIME-types and HTTP's basic operations/verbs (GET/HEAD, POST, PUT, DELETE) to offer surprisingly powerful, yet simple, solutions. Other HTTP features, like content negotiation and caching, make REST even stronger. (On a side note, Tilkov pointed out it's a shame that HTML doesn't allow PUT as a form method; HTML 5 should fix this, however.)

After a good, basic explanation of REST, Tilkov went through a number of patterns and anti-patterns which one should be aware of. One interesting suggestion was to always provide an HTML representation, in addition to the main MIME type of a URL. Sort of like Java's toString() method which can be performed on any object type. That way, it's easier to perform tests.

Robinson's talk described a case where a legacy system had been integrated with new solutions, using REST. Specifically, Atom and AtomPub were used to enable the new solutions to efficiently pull messages and data from the legacy system.

In the late afternoon, a number of BoF sessions were held.
I'm always skeptical when people propose something as "post-relational", keeping in mind how the relational databases actually took over from the network databases way back then. But one needs to have one's beliefs challenged sometimes, and I really like BoFs as a supplement to traditional talks. So I attended a BoF about graph databases, led by Emil Eifrem. Emil's company has created an embedded database for Java which is good at storing nodes and edges and traversing them. The database -- called Neo4j -- is allegedly the fastest of its kind in the Java world. Neo4j is GPLed and offers transactions; nice. I can certainly think of use cases for navigational databases, but I didn't leave the meeting with an urgent need to take a closer look at the product. It seems that there are no standards for network database queries, and most (all?) of them tie you to one particular development platform, leaving you to expose web services if other platforms must have access. At the least, I would see if recursive SQL could be used before I went for a network database.