USENIX 2005 Report

by Phil Hollenback

This article originally appeared in OSNews.

This is the 30th anniversary of USENIX, the Advanced Computing Systems Association. USENIX was started in 1975 as 'The Unix Users Group' and has been holding regular conferences ever since (along with many other activities, of course). USENIX focuses on the Unix world, including Unix-like OSes such as Linux. The USENIX conference is the place to go if you want to find out about topics such as advanced system administration or the latest filesystem research. USENIX is a blend of academic presentations and socializing. If you want to ask Andy Tanenbaum what he thinks of Linux, you can do it at USENIX.

At the same time, there are problems with USENIX. Is it relevant to modern computing? When I look around at the presentations, I see a lot of gray hair and wrinkles. Where are the fresh faces and new ideas? Is membership in USENIX important for your professional development? I can't answer all these questions, but hopefully this small glimpse into this year's conference will give you an idea of what goes on at USENIX and what you might be missing because you aren't here.

This year the conference was moved up to the middle of April instead of its usual mid-summer slot. The justification is that USENIX now conflicts with fewer other conferences. I don't know whether that's true, but from what I've seen attendance is down from last year (that's just from looking around, not counting the attendance list). I wonder if the early date caught a lot of people by surprise.

I've been interested in USENIX since I started my career at SCO in the mid-nineties (the old SCO, not the lawsuit-happy one of today). At the time, the open-source movement was just gathering steam: Linux was popping up everywhere, and big software projects like GNOME and KDE were starting up.

I was idealistic and fresh out of college back then. I had become involved in Skunkware, an SCO project to port open-source software to its OSes. A number of other Skunkware contributors were going to the '97 USENIX in New Orleans, so I convinced my boss to send me too. What an experience. Between the party atmosphere of New Orleans and the technical discussions, I was hooked. I still remember Richard Stallman discussing free software and "GNU/Linux".

After that I attended the '98 USENIX and was planning on attending '99 until the company I was working for collapsed, leaving me without corporate sponsorship. I finally found someone else to send me to USENIX 2004 in Boston. Some things were different: the conference was smaller than in the late 90s, and there was not such a huge emphasis on the open-source world. Still, I enjoyed it enough to attend this year's USENIX in Anaheim, CA. I'm here for the three days of the technical sessions. Every day I will try to take some notes and describe the experience for OSNews.com. Now that all the background is out of the way, on to the conference.

I'm going to start with the night before the conference. I arrived in Ontario, CA at 11:20pm on a direct JetBlue flight. I expected a prearranged airport shuttle to pick me up. However, it never appeared and I was forced to hire a taxi at the last minute. I believe the $80 taxi ride was the most expensive I have ever taken. I knew I was in California and not New York because there wasn't a bulletproof divider between the driver and passengers. Still, I can't complain because I made it to the Anaheim Marriott before the hotel bar closed.

Day One

The next morning (this morning) was the start of the conference proper. There were tutorial sessions on Monday and Tuesday, but those are separate from the main conference. The keynote speaker was George Dyson, a historian who is writing a book about John von Neumann and the birth of the digital computer at Princeton. A few interesting tidbits that I learned:

  • Kurt Gödel's office was directly above von Neumann's in Fuld Hall at Princeton.
  • The real purpose of the computer work at Princeton in the 1950s was the development of the hydrogen bomb.
  • Hardware used to be very unreliable and programs very reliable; now the reverse is true.

Also, I looked George Dyson up on Google: he is Freeman Dyson's son (and Esther Dyson's brother). Sometimes I think the Dysons are the Wayans of the computer world: they keep popping up everywhere!

All in all, this was a very informative and enjoyable talk. Of particular note were the original documents George had unearthed at Princeton, including computer operator logs with humorous scribbles and memos protesting the computer folks' excessive use of sugar (during WWII sugar rationing).

After the keynote was a coffee break, where I did a little geek stargazing and spotted the following individuals:

  • Andrew Tanenbaum (creator of MINIX)
  • Bill Cheswick (co-author of the computer security classic "Firewalls and Internet Security: Repelling the Wily Hacker")
  • Longtime Linux developer Theodore Ts'o.
  • Security guru Marcus Ranum, showing off his usual sartorial flair.

USENIX is split into multiple 'tracks'. This year's tracks are:

  • General
  • Invited Talks
  • Freenix
  • Guru Sessions

Day Two

Day two started off a bit slowly. Oh wait, actually I started off a bit slowly. That could have been from the margaritas last night at La Casa Garcia, a Mexican restaurant a few blocks down the street from the Anaheim Marriott. Good Mexican food is one of the things I miss the most since I moved from California to New York two years ago. Luckily, the food at La Casa Garcia was excellent.

I started off day two with an invited talk about NFSv4. This is an example of how USENIX excels at bringing together the best of industry and academia. The presentation was by Spencer Shepler who works on NFSv4 at Sun and is the co-chair of the IETF NFSv4 working group. You can't get much closer to the source when it comes to NFS expertise.

I was a bit drowsy for the first part of the talk, so my notes were pretty sketchy. Blame it on a lack of coffee. I did manage to pick up a few interesting tidbits later on in the speech, though.

The fourth version of NFS provides many advantages for distributed file sharing. Some people dismiss NFS as an obsolete mechanism or something "only used by Unix geeks". That is far from the case. The NFS developers have analyzed the shortcomings of NFS and made some excellent improvements. One I am very excited about is the folding of all the legacy NFS protocols into one protocol running on TCP. Historically, NFS was extremely difficult to pass through firewalls because it depended on several protocols, some of which did not have well-defined port numbers. That has been completely fixed: as Spencer said in response to a question at the end of his talk, all you need to do to support NFSv4 across a firewall is allow TCP on port 2049. Certainly a welcome change for any network administrator.
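
To make that concrete, here is a minimal reachability check, my own sketch rather than anything from the talk (the hostname is made up): since NFSv4 multiplexes everything over one TCP connection, a plain connect to port 2049 tells you whether the firewall will let NFSv4 through.

    import socket

    def nfsv4_reachable(host, port=2049, timeout=5):
        # NFSv4 runs everything over a single TCP connection to port 2049,
        # with no portmapper, mountd, or lockd side channels to worry about,
        # so a simple TCP connect is a reasonable first check.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(nfsv4_reachable("nfs-server.example.com"))  # hypothetical hostname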

Another new NFS improvement which caught my attention is file delegations. Traditionally, every file operation by the client requires a call to the NFS server. Delegations can greatly reduce this network overhead. The NFS developers recognized that in many cases, files on the server are accessed by only one user. A classic example is the home directory: it's quite likely that only the owner of that directory will be accessing those files.

Delegations allow the client to take control of a file. After acquiring the delegation, the client can read, write, close, reopen, or perform other operations on the file locally. It only needs to communicate with the server when it is done with the file. Obviously this can provide greatly improved performance when files are not shared between multiple clients. One caveat: delegations currently don't work well across firewalls, because granting a delegation requires a callback from the server to the client.
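
Here's a toy sketch of the delegation idea, my own illustration and emphatically not the NFSv4 wire protocol: the client pays one round trip to acquire the file, performs any number of operations locally, and pays one more round trip to write the result back.

    # Toy model of a file delegation (illustration only, not real NFS).
    class ToyServer:
        def __init__(self):
            self.files = {"/home/phil/notes": "draft"}
            self.round_trips = 0

        def fetch(self, name):
            self.round_trips += 1
            return self.files[name]

        def store(self, name, data):
            self.round_trips += 1
            self.files[name] = data

    class DelegatedFile:
        """Client-side handle: all I/O is local while the delegation is held."""
        def __init__(self, server, name):
            self.server, self.name = server, name
            self.data = server.fetch(name)        # one round trip to acquire
            self.dirty = False

        def write(self, data):
            self.data, self.dirty = data, True    # no server call

        def return_delegation(self):
            if self.dirty:
                self.server.store(self.name, self.data)  # single write-back

    server = ToyServer()
    f = DelegatedFile(server, "/home/phil/notes")
    for i in range(100):                          # 100 writes, zero network traffic
        f.write("revision %d" % i)
    f.return_delegation()
    print(server.round_trips)                     # 2, instead of a call per operation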

Other highlights: Spencer's opinion on traditional NFS file locking is, "it sucks". Thus there are a number of improvements to locking in NFSv4. One of the biggest, of course, is that file locking is now part of the one unified protocol and doesn't require a separate lock manager. Another is that clients (not servers) are now responsible for lock recovery when a server restarts. Owner attributes in NFSv4 are of the form user@domain instead of the traditional 32-bit UIDs/GIDs. This will provide more flexibility, but will cause headaches for some existing installations.

One small change that will have a big impact on how NFS evolves in the future is the new 'minor version' support. Previously, each release of NFS (v2, v3, v4) was a completely new protocol, a model that did not allow for incremental changes. Going forward, new operations and attributes can be added to NFS without forcing a move to a whole new major version.

Basically, the talk was a wealth of information about NFS. One of those 'USENIX moments' occurred when the presenter had a question about Linux NFS support: he just asked the audience, because he knew the Linux NFS developers were in the room. I'm going to end my discussion of NFS here, otherwise it will take over my whole update for today. I will note that you can check out Spencer Shepler's weblog for the current status of the NFSv4 work.

A few notes on the morning coffee break: first of all, does anyone actually think 'strawberry-flavored' cream cheese tastes like strawberries?

I overheard a few interesting comments from one of the USENIX attendees while enjoying my bagel (I wasn't eavesdropping, I swear!). He asserted that there has been a fundamental shift in USENIX since the mid-nineties, when a decision was made to make USENIX a more academic conference. The result has been a substantial reduction in papers from people in industry. That's unfortunate, because many of the best papers in USENIX history came from industry (one example: the original mid-eighties paper on NFS). The academic thrust has therefore reduced the general quality of USENIX by shutting out the people "in the trenches" making important contributions to our field.

I can't really comment on that since I wasn't attending USENIX back then, but it is food for thought. There is a balance to strike between academia and industry, and USENIX may have drifted too far to the academic side. Still, there's no denying that some excellent papers are being presented at this year's conference.

For the 11:00-12:30 slot I chose the Freenix track and its three presentations on network security and monitoring. They were all high quality, but I'm going to concentrate on the one I found most intriguing.

The Ourmon network monitor illustrates how open-source tools can be combined to produce very sophisticated network analysis. In this case, the administrators at Portland State University use the Berkeley Packet Filter to filter the massive amount of data going through their DMZ and then store the results with RRDtool. They wrap it all up in a web interface so you can see at a glance:

  • Does the number of TCP SYNs match the number of FINs? A large number of SYNs and few FINs indicates an attack (lots of connections opened but never cleanly closed).
  • Is the number of TCP RSTs higher than usual? This could indicate lots of port scanning activity (which makes connections and immediately aborts them).

I was quite impressed with the statistics they are gathering, and I encourage you to check out the realtime data available on the Ourmon web site. I've only scratched the surface here.
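
Out of curiosity, here's a rough sketch of the flag-counting idea at the core of such a monitor. This is my own guess at the technique, not Ourmon's actual code; it assumes the third-party scapy package and root privileges to sniff, and a real deployment would push each interval's totals into RRDtool.

    # Count TCP SYN/FIN/RST packets over a sampling interval -- a rough
    # sketch of the flag-ratio idea, not Ourmon's implementation.
    from collections import Counter
    from scapy.all import sniff, TCP

    counts = Counter()

    def tally(pkt):
        if TCP in pkt:
            flags = int(pkt[TCP].flags)
            if flags & 0x02:
                counts["SYN"] += 1
            if flags & 0x01:
                counts["FIN"] += 1
            if flags & 0x04:
                counts["RST"] += 1

    # The BPF filter ("tcp") runs in the kernel, so only TCP packets are
    # copied up to user space -- essential for keeping up with a busy DMZ.
    sniff(filter="tcp", prn=tally, timeout=30)

    print(dict(counts))
    if counts["SYN"] > 10 * max(counts["FIN"], 1):
        print("many more SYNs than FINs -- possible scan or SYN flood")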

The big presentation of the day (for me, anyway) was on the upcoming Mac OS X 10.4 release, Tiger. It was presented by Dave Zarzycki, Senior Engineer with the BSD group at Apple (there was an additional Apple presenter but I neglected to get his name). Dave knew his audience: the whole presentation was about the changes in Mac OS X from a Unix perspective. It's clear that Apple is listening to the Unix world and is genuinely interested in improving the Unix experience.

Day Three

Day three of USENIX 2005 has come and is now almost gone. When you get to a conference, it always seems like the days will stretch on forever. Then, before you know it, everything is over and you are on your way home. It's kind of like summer camp, I guess.

One change from last year is that the technical sessions have been squeezed back into three days. In Boston last year, the technical sessions and tutorials both ran over five days. You pay for the conference by the day, and many attendees (including me) only come for three days, so you had to pick which three days to attend, knowing you would miss some of the technical sessions. They also ended the conference at noon on Friday.

Happily, this year they are back to the three-day technical session format. That also means the conference ends at 5:30 today instead of noon. However, I can already feel things winding down: the registration desk is closed and hotel employees have begun rearranging the furniture. That, combined with a windy, gray day, is a bit depressing.

Anyway, here's my recap of today's activities. First up for me was the System Administration Guru Session at 9am. I recognized a number of faces from the same session last year in Boston. We discussed topics centering on system and configuration management, the standard discussions whenever you get a bunch of sysadmins in the same room. David Parter from the University of Wisconsin led the discussion very competently.

One topic I brought up was the notion of passive versus active system management. Several of the presentations I attended yesterday (including the ones on Ourmon and NetState) focused on the idea of passively monitoring network packets to determine what OSes and applications are running on a network. It seems to me that the busier your network becomes, the more useful passive detection is. I compare it to the world of submarines: active monitoring (active sonar) is very good for determining what is out there, but it has side effects, like announcing your existence to everyone else. Passive sonar, on the other hand, is less effective but much more stealthy. Okay, maybe this isn't a great analogy, but it's still fun.

The general consensus was that passive monitoring is valuable, but active monitoring is more reliable (except for catching systems that are rarely on the network, of course). Passive monitoring also has to be supplemented with a statistical approach, since passive tools will occasionally guess wrong. For example, if your fingerprinting tool says a system is Windows XP 90 percent of the time and Mac OS X 10 percent of the time, which answer is right? Probably Windows XP.
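
In other words, treat repeated observations as votes and take the majority. A trivial sketch of that idea (my own, with made-up numbers):

    # Pick the most likely OS from repeated, noisy fingerprint guesses.
    from collections import Counter

    # Hypothetical fingerprint readings collected for one host over time.
    guesses = ["Windows XP"] * 90 + ["Mac OS X"] * 10
    winner, votes = Counter(guesses).most_common(1)[0]
    print("best guess: %s (%d of %d observations)" % (winner, votes, len(guesses)))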

We also discussed disaster planning. I found that the attendees from the academic world thought about this quite a lot. One reason is that they have to deal with state auditors who require this sort of planning.

One weakness of the session, I felt, was that it was attended primarily by sysadmins from the academic world. As I discussed in yesterday's report, this can be a problem at USENIX in general. To be fair, there were several session attendees from the commercial world, and there may have been more who just didn't say anything.

I also learned a couple of things about the tools people are using. The consensus is that Request Tracker (RT) is the most common ticketing system in use, by a wide margin. Also, everyone is using RRDtool to collect system data, but the MRTG front end is not used much anymore. Instead, people are using tools like Cricket.

I learned one thing at the 10:30 coffee break: you can have anything you like to eat, as long as it is a miniature chocolate chip muffin. I think there's a life lesson in there somewhere.

copyright 2005 Philip J. Hollenback

