
A history of VisionFS


This work is licensed under a Creative Commons Attribution 3.0 Unported License. For attribution use a link to this page.

Please send comments, questions and corrections to Roger Binns

Reason for existence

In 1994 SCO acquired Visionware, a company based in Leeds in the UK. They were combined with an existing group based in Cambridge formed from the 1993 acquisition of IXI. The unified group was called IXI Visionware, then became the Client Integration Division and finally formed Tarantella, which spun out as a separate company in 2001 and was acquired by Sun in 2005.

The division's products were primarily (over 90%) sold on RISC platforms such as SunOS, Solaris, HP-UX, AIX etc. The remainder were on Intel systems but even then some were on non-SCO operating systems such as Dynix or DG/UX, as well as on Windows. Consequently we had our own sales force, since the main SCO sales force concentrated on selling SCO operating systems. The division also operated fairly independently as a result.

IXI specialised in software that ran on Unix and made Unix easier to use. Examples were X.desktop (a desktop and file manager environment), IXI Panorama (a window manager) and IXI Mosaic (the first commercial web browser, with the base licensed from NCSA). Visionware specialised in software that ran on Windows and made Unix easier to use. Examples were XVision (a PC X server letting you display Unix programs) and TermVision (a terminal emulator).

The intention was to sell a single suite of software — the Vision Family — that incorporated all the various products. But there was a problem — the Unix components could easily be distributed amongst Unix machines as they all had TCP/IP as standard with file copying (eg ftp) and networked file system (eg nfs). However the Windows components had been distributed on CD-ROM and visiting each client machine with the disc was not friendly.

There was a thriving industry providing Windows TCP/IP networking (at least 5 vendors provided their own TCP/IP stacks) and file access (eg Novell Netware and PC-NFS). These all required that you install extra software on each client machine as well as costing yet more money (list price typically was $400 for a TCP/IP stack plus some utilities like ftp and telnet, and another $100 for NFS, per machine!).

With Windows for Workgroups (Windows 3.11) Microsoft introduced their own TCP/IP stack as well as a standard networked file access — SMB. SMB was also used in the fledgling Windows NT, OS/2 and some other systems. Windows 95 was in beta and included TCP/IP and SMB as standard. Most importantly, using TCP/IP and SMB did not require installing extra software beyond what Microsoft supplied with the operating systems, nor did it have additional license fees.

The obvious solution to the distribution problem was to have a component running on the Unix server, providing all the Windows components through SMB over TCP/IP.

Samba

Around that time, a piece of open source software named Samba was becoming increasingly popular. It did just what we needed, although it wasn't perfect. For example it needed a configuration file to be created before it could be run, and there was a lot of support volume in the mailing lists. But it would be an excellent start.

SCO's lawyers immediately took fright at the GNU General Public License (GPL) that Samba was licensed under. They eventually said no. We explored hiring the primary author Andrew Tridgell to rewrite a proprietary implementation, but he wasn't interested.

That left us with no alternative but to write our own SMB server. In keeping with the "Vision" naming scheme, it was ultimately called VisionFS (FS stands for file server). Internally its project codename was Bass, following the fish-based project naming scheme we used at the time and also a workable acronym for Build An SMB Server. Marketing nearly called it SharedVision before settling on VisionFS.

Values

The first task was setting the scope of the product. The fewer goals and functionality your product has, the better you can meet those goals and the sooner you can reach them. We had no intention of supplanting existing Windows servers. That meant that our product would never be a domain controller or later an Active Directory Server, Kerberos server etc. We would happily serve up (read-only) the components of the Vision Family and provide read-write access to other files on the Unix server as well as printers.

Our goal was zero support calls. The Samba mailing list was closely monitored to see what caused issues and to ensure they didn't happen with VisionFS. This was usually achieved through limiting possible configuration settings and by default not requiring configuration. For example VisionFS joined all workgroups at startup so that no matter which workgroup you looked at in Network Neighbourhood, you would always see your server. (You could then configure it to only be in specific workgroups).

Samba's configuration was via a text editor such as vi, the smb.conf file and a manual page describing the many options. This is not user friendly! We decided to include a graphical configuration tool (termed the Profile Editor) which would ensure the configuration was always correct, as well as providing guidance when setting options.

Specs

When implementing a network protocol, you need the specifications. When information is missing or incorrect then you have to reverse engineer it. Reverse engineering is time consuming, expensive and consists of a combination of educated guesswork and measuring what effect tweaks have to further refine your understanding. You have to keep iterating until you feel you have completed the information. For example you may see a field have a value of 7 and have to work out what that means. You would also ponder if 8 could be sent. You would have to work out when to return an error, when the field's value doesn't matter and when it is important.

The specifications available consisted of some RFCs that covered how NetBIOS over TCP/IP was framed (1001, 1002) and some documents from X/Open that covered earlier versions of the protocol such as that used by Xenix and OS/2. Also very helpfully there was the Microsoft Network Monitor which captured and decoded network traffic (like Wireshark does today). It decoded SMB so you could tell what Microsoft programmers named the various fields and thought the values meant. Finally the Samba source code told you what their developers thought various fields meant and how to handle them.

All these sources covered a substantial part of the protocol, in some cases contradicting each other, and there were still several pieces missing which had to be worked out via reverse engineering.

Language, Tools and Libraries

There were no existing libraries that implemented even parts of a SMB server so as developers we had the luxury and workload of starting from scratch. Historically all projects in Cambridge (IXI) had used C and had always used vendor compilers. At one point I counted over 20 different variants of Unix and versions thereof in the office - we even had a Sony workstation! We had this because many Unix workstation vendors sent us their machines to port our graphical desktop and file manager environment to. C was a lowest common denominator and using the vendor compilers meant that there shouldn't be any binary compatibility problems. Unfortunately vendors had started charging for their compilers, using draconian and crashtastic licensing daemons and in some cases outputting buggy executables. We had increasingly started resorting to the GNU Compiler (gcc) which at least was consistent across platforms, generally worked well and didn't need licensing daemons.

The decision was made to go ahead using gcc and the C++ language. C++ has many features but we only used a few of them. At a high level it allows you to group data structures and the methods that operate on them together in a class. It also provided a facility to make data structures that store items (collections) easier through the use of templates.

That meant a detour through what every C++ project of the time did — implementing your own string and collection classes. (Mercifully modern projects can just use the STL.) C provides a very rudimentary way of managing textual data, which you could enhance in C++ by ensuring length information was carried around with the data and avoiding redundant copies through reference counting. Collections made it easy to have lists, hashtables and arrays of data items.
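
As a rough illustration (not the actual VisionFS code), a reference-counted string along these lines keeps the length alongside the character data and makes copies cheap by sharing one buffer:

// Minimal sketch of a reference-counted string: copies just bump a counter,
// and the length always travels with the data.
#include <cstring>
#include <cstddef>

class String {
public:
    String(const char *s, size_t len) : rep(new Rep(s, len)) {}
    explicit String(const char *s) : rep(new Rep(s, std::strlen(s))) {}
    String(const String &other) : rep(other.rep) { ++rep->refs; }
    String &operator=(const String &other) {
        ++other.rep->refs;            // order matters for self-assignment
        release();
        rep = other.rep;
        return *this;
    }
    ~String() { release(); }

    size_t length() const { return rep->len; }
    const char *data() const { return rep->chars; }

private:
    struct Rep {
        Rep(const char *s, size_t n) : refs(1), len(n), chars(new char[n]) {
            std::memcpy(chars, s, n);
        }
        ~Rep() { delete[] chars; }
        unsigned refs;
        size_t   len;
        char    *chars;
    };
    void release() { if (--rep->refs == 0) delete rep; }
    Rep *rep;
};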

We also had to add additional library code to do networking, configuration file parsing, UNIX API access (eg accessing user information, extracting printer details, files etc). That library also lived on into Tarantella. Much of it also worked under 16 bit Windows (needed for the Profile Editor) and all of it under 32 bit Unix (most common at the time) and 64 bit Unix (we had some DEC Alpha machines). It also worked with either endianness.

Implementation

Windows does its SMB client and server implementations as part of the kernel. In theory this gives better performance as there are no context switches between processes and the kernel but the plumbing inside a kernel is convoluted and has many constraints. Samba has shown that a user space daemon on Linux could outperform the Windows kernel implementation on the same hardware. Given how many different Unix variants we had to support, a kernel implementation couldn't even be considered so a Unix daemon approach was used.

The underlying implementation is pretty standard for Unix daemons. The core of the daemon calls select() looking for any activity on network connections, with a timeout for periodic work to be done. VisionFS would start up with the initial process splitting itself into 3 processes with different functions (forking in Unix terminology).
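
The shape of that core loop is familiar; here is a minimal sketch assuming one listening socket and a list of client sockets, with placeholder helper functions rather than anything VisionFS actually named:

#include <sys/select.h>
#include <sys/time.h>
#include <vector>

// Placeholders for the real work; names are illustrative only.
void do_periodic_work();
void accept_new_client(int listen_fd, std::vector<int> &clients);
void handle_request(int client_fd);

void serve_forever(int listen_fd, std::vector<int> &clients)
{
    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(listen_fd, &readable);
        int maxfd = listen_fd;
        for (size_t i = 0; i < clients.size(); ++i) {
            FD_SET(clients[i], &readable);
            if (clients[i] > maxfd) maxfd = clients[i];
        }
        struct timeval timeout = {30, 0};        // wake up for periodic work
        int n = select(maxfd + 1, &readable, 0, 0, &timeout);
        if (n == 0) { do_periodic_work(); continue; }
        if (n < 0) continue;                     // EINTR and friends
        if (FD_ISSET(listen_fd, &readable))
            accept_new_client(listen_fd, clients);
        for (size_t i = 0; i < clients.size(); ++i)
            if (FD_ISSET(clients[i], &readable))
                handle_request(clients[i]);
    }
}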

The first did all the naming activity. Samba uses a separate daemon for this — nmbd. The naming process has to listen for and respond to name queries. Additionally there are elections in the subnet for which machine will maintain the list of all machines on that subnet so the naming process has to participate in the election and if it wins also keep track of all the names.

The second was a controller. It would handle starting additional copies of the third process, as well as managing shutdown. The third process did the actual handling of the SMB protocol. As with all the other processes it was single threaded (greatest portability across all the Unix versions). It could be configured to handle as many concurrent clients as you wanted which reduced memory consumption but could affect performance if your clients were very chatty at the same time. (Samba can only do one connection per process and that was also the VisionFS default.)
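
In outline, and with invented names and much simplification, the startup split looked something like this:

// Illustrative only: a daemon splitting itself into a naming process, a
// controller and SMB worker processes with fork().
#include <unistd.h>

void run_naming_process();   // placeholder entry points for each role
void run_smb_worker();

void start_daemon(int workers)
{
    if (fork() == 0) {           // first child: NetBIOS naming duties
        run_naming_process();
        _exit(0);
    }
    // The remaining process acts as the controller: it starts the workers
    // and later restarts or stops them as configuration demands.
    for (int i = 0; i < workers; ++i) {
        if (fork() == 0) {       // each worker handles some SMB clients
            run_smb_worker();
            _exit(0);
        }
    }
    // ... controller then waits for children and manages shutdown ...
}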

A SMB request or response is fairly complex. There are initial headers giving the overall size and then other fields within giving the size of yet other fields. Some requests have info-level fields which dictate the presence or absence of other fields. The request may have been generated maliciously trying to get you to read or act on the wrong or missing data. For example a request could have an overall header claiming to be 100 bytes long, have an internal header saying that 120 bytes remained and then another field saying that the subsequent filename is 200 bytes long.

Samba (before version 4) had hand written code to parse the requests and generate the responses. It required many lines of careful coding to ensure that everything happened correctly, with the occasional mistake.

Our approach was to use a high level custom description language which was then parsed by a tool to generate (the tedious and lengthy) C++ code. You described the size of each field in bits or bytes, what type it was (eg integer, string, block of data) and a field name. Most importantly it allowed conditional (if) statements. This is a contrived example:

 1 INT unicode
 1 INT muxwrite
 1 INT extended_errors
29 PADDING
16 INT infolevel
IF infolevel<2
  {
    16 SMALLDATE creation_time
  }
ELSE
  {
    32 NTDATE creation_time
     1 INT delete_on_close
  }
16 INT filename_length
filename_length STRING filename

This would be translated into a C++ class with the appropriately named fields. Parsing the request and generating the response would never use invalid data. It also caught programming errors. For example in the above description the delete_on_close field is not present if infolevel is less than two so attempting to use it would generate an internal error.
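
The generated code itself is long gone, but the idea can be sketched. Every read is checked against the remaining buffer and conditional fields remember whether they were actually present; the class, field names and reader helper here are invented for illustration, and the real tool worked at bit rather than byte granularity:

#include <cstddef>
#include <stdexcept>
#include <stdint.h>
#include <string>

class Reader {                         // assumed helper; the real tool worked in bits
public:
    Reader(const uint8_t *b, size_t n) : buf(b), len(n), pos(0) {}
    uint32_t u16() { need(2); uint32_t v = buf[pos] | (uint32_t(buf[pos + 1]) << 8); pos += 2; return v; }
    uint32_t u32() { uint32_t lo = u16(); return lo | (u16() << 16); }
    std::string str(size_t n) { need(n); std::string s((const char *)buf + pos, n); pos += n; return s; }
private:
    void need(size_t n) { if (len - pos < n) throw std::runtime_error("request truncated"); }
    const uint8_t *buf;
    size_t len, pos;
};

class SetInfoRequest {                 // class and field names invented for illustration
public:
    SetInfoRequest() : infolevel(0), creation_time(0),
                       delete_on_close(false), has_delete_on_close(false) {}
    void parse(const uint8_t *buf, size_t len) {
        Reader r(buf, len);
        infolevel = r.u16();
        if (infolevel < 2) {
            creation_time = r.u16();   // SMALLDATE
        } else {
            creation_time = r.u32();   // NTDATE
            delete_on_close = r.u16() != 0;
            has_delete_on_close = true;
        }
        filename = r.str(r.u16());     // length checked before anything is copied
    }
    bool deleteOnClose() const {       // using an absent field is an internal error
        if (!has_delete_on_close)
            throw std::logic_error("field not present at this infolevel");
        return delete_on_close;
    }
private:
    uint32_t infolevel, creation_time;
    bool delete_on_close, has_delete_on_close;
    std::string filename;
};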

Actually writing the SMB server was fairly simple. We started out with a server that returned a "not implemented" error to every request. It would also print out to the console what the fields making up the request were. We started by trying to map a drive, saw what errored, wrote the missing request and tried again. Pretty soon we had File Manager working. After that we started working with Word, Excel and PowerPoint documents which filled in a lot more functionality. It quickly became good enough for other people in the office to try out.
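
That bring-up strategy amounts to a dispatch table with a default of "not implemented"; a toy version, with placeholder types, might look like:

#include <cstdio>
#include <map>

class Request;                              // invented placeholder types
class Response;
typedef int (*Handler)(const Request &, Response &);
int send_not_implemented(Response &resp);   // placeholder

std::map<int, Handler> handlers;            // filled in as commands get written

int dispatch(int command, const Request &req, Response &resp)
{
    std::printf("SMB command 0x%02x\n", command);   // dump what the client sent
    std::map<int, Handler>::const_iterator it = handlers.find(command);
    if (it == handlers.end())
        return send_not_implemented(resp);  // the default until a handler exists
    return it->second(req, resp);
}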

Following that we had to look through the specifications and Samba to see what requests had been missed, try other software, write custom programs to use unexercised bits of the Windows file related APIs etc. We had the MSDN full CD set which had copies of pretty much every program Microsoft had ever produced — both client and server sides. I got so adept at installing Windows NT that I was able to install the Chinese version significantly faster than a native speaker just through recognising the shape of dialog boxes and which button the answer was on!

At one point we even hired a temp and sat her in front of a computer with every piece of software we could find. She had to install the software to a VisionFS server, create a document using it on the server, create an embedded document (using OLE, which for example allowed you to embed an Excel chart in a Word document in a PowerPoint presentation), and then finally uninstall the whole mess. It uncovered a few glitches that our internal checking code caught and told us we were in a good position in terms of product completeness and quality.

To my knowledge there was only ever one bug ever reported about data consistency issues, which centred around Excel's abuse of locking.

The rest of the server was structured just as you would expect. There were C++ classes that corresponded to shares (trees in SMB terminology), with examples being regular file shares, printing and the secretive IPC$ share for getting meta-information. File objects encapsulated the different types of files and pseudo-files within, such as a printer queue or a named pipe. On our todo list but never implemented was a share that talked back out again to an FTP server, so you could make the FTP server available to Windows users without them having to know anything about ftp.
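
A rough sketch of that layout, with all names invented, would be:

#include <cstddef>
#include <string>

class File {                           // one open file or pseudo-file on a tree
public:
    virtual ~File() {}
    virtual int read(void *buf, size_t n, long long offset) = 0;
    virtual int write(const void *buf, size_t n, long long offset) = 0;
};

class Share {                          // a "tree" in SMB terminology
public:
    virtual ~Share() {}
    virtual File *open(const std::string &name, int mode) = 0;
    virtual bool readOnly() const { return false; }
};

class DiskShare    : public Share { /* regular files on the Unix filesystem */ };
class PrinterShare : public Share { /* open() would yield a spool job       */ };
class IpcShare     : public Share { /* IPC$: named pipes and meta-info      */ };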

Sadly Windows clients aren't actually completely compatible with each other, the servers or the specification. It takes a lot of testing to make sure you get everything right. As an example I rewrote the code that returns a directory listing 4 times. This was because directory listings were returned in chunks, with differing mechanisms to specify the next chunk and clients not behaving as they should (eg not closing the listing when they should) plus numerous other details that break implementations such as exactly what wildcards mean. And don't forget things like short vs long filenames. Andrew Tridgell of Samba eventually resorted to a program that would send the same request to multiple different Microsoft servers and compare their answers!

Profile editor

The underlying server configuration was contained in a single file. A graphical configuration tool had to run on Windows, since by definition the customer had Windows machines. We also didn't want to have it installed on the client machines since that would mean version mismatches on future releases. The solution was to make the Profile Editor available and directly runnable from a share on the server. In order to keep development time low, we wrote it using 16 bit Visual Basic. If the necessary DLLs were in the same directory as the program then it just worked on every available version of Windows. The configuration file was made available to administrators running the Profile Editor via a hidden CONFIG$ share.

However this was not sufficiently user friendly. We wanted to ensure that if you had to fill in a pathname on the server (eg when defining a share) you could use an actual file browser, and that it would tell you if the text you typed didn't exist. Similarly you could browse the list of users where a username was expected, with the same principle applying to other data types.

To contrast installation with Samba: you installed VisionFS and it would start running automatically, joining all workgroups. From your Windows machine you browsed into any workgroup and could see the server. Double click and navigate to the configuration tool and run it. On the pane where it asked for a workgroup, you could leave it as all, or click a dropdown which showed a list of all the existing workgroups for you to pick one. With Samba you would still be wondering where your configuration file was, how to invoke the newbie-unfriendly vi text editor, and desperately browsing all over the manual page trying to work out how to change the workgroup, then wondering which ones your network had and if it was a dash or an underscore in the one you wanted.

Behind the scenes there was a named pipe the Profile Editor used to talk to the VisionFS server to get lists of information, file trees etc. The pipe and protocol were named MFLA which was intended as a backronym, but we never really settled on a good one. One colleague provided the original acronym after I pleaded for a Meaningless Four Letter Acronym.

We took our usability and user friendliness very seriously. In particular Alan Cooper (author of About Face) provided much inspiration. He also has a fantastic article from September 1996 in Dr. Dobb's Journal about goal-oriented design which is still very pertinent today. Make sure you read it.

The bottom line was that functionality had to be presentable and clear to understand in the Profile Editor. If it couldn't be, it wasn't included in the product, or was rejigged until it was acceptable. It was very refreshing having a team member absolutely insistent and participating at a technical level like that.

Of course being a Windows program it needed Windows help, again made available through the CONFIG$ share. At the time the only way to build a Windows .hlp file used Microsoft's Help Compiler, running on Windows. The source for this help was RTF format, maintained by the Documentation team and stored under version control along with the rest of the software. It was built into a distributable file by the build system in as automated a way as we could manage, which involved running the just built VisionFS and accessing the files from Windows through it. That also formed a nice sanity test that the build was correct.

There's an Easter Egg in the Profile Editor, but no one can remember how to trigger it. At one point it included photos of the VisionFS team members. It sadly became unmaintained in later versions.

Documentation

The Profile Editor help had a goal of getting you an appropriate answer as quickly as possible; the administrator should need to read as little as possible to get to that answer. As a complement to the Profile Editor help, the Documentation team created a printed manual, Introducing SCO VisionFS, which was intended to be read from beginning to end, giving the bigger picture and context. It was a significantly easier read than the Samba man page, written in an informal and approachable style, and well received.

The manual was mastered in Microsoft Word from Office 95, using its "book" feature — one file per chapter. All indexing was performed in Word too. Once the content was frozen or at least sufficiently chilled, the book was laid out using Pagemaker 6. The resulting file was sent to the printers and was also used to generate the PDF included on the product CD and web site. The PDF took a while to generate on a 90 MHz Pentium.

The manual includes at least one Easter Egg.

Installation

All the team members had previously experienced the pain and anguish of dealing with software installation on Unix systems. For earlier IXI software like X.desktop, the installation instructions were long and complicated — we resorted to using flowcharts at one point. The problem was that Unix vendors at the time disagreed violently on pretty fundamental aspects of the system, such as how to ensure that a program started automatically on reboot, or even how to package up software for installation. That doesn't even consider issues like upgrades or supporting multiple languages.

The team members agreed that the previous solution — ship a tar file and make it the Documentation team's problem — was not acceptable. In particular, Windows software was relatively easy to install: stick in the CD, click OK when the setup window appeared, watch the CD activity light flicker, done. Our goal was to make it as easy as this on Unix.

Today even Linux uses two different packaging solutions — RPM and deb — and there are others in use on less mainstream distributions. Between other Unix operating systems and versions, it was rare for any two to use the same format.

VisionFS was distributed as a single file, either from the website or on the CD. You ran the file as root. Our guiding principle was: the administrator probably has no clue how best to answer any interesting question you might ask them during installation, so only ask questions you really, really need to know the answers to before the software's installed. An example would be which runlevel to run at. Wherever possible, install with default values and make it configurable later. This is not because we thought administrators were dumb; we viewed the typical person installing the software as harassed, busy and not necessarily knowledgeable about SMB servers. We could ask them, "hey, you want oplocks turned on by default?" But to most people that's just gibberish, especially when they've just bought the software. Get it installed, get it running, then they can play with it and tweak stuff when they need to.

The principle was, more or less, adhered to. The lawyers required that we ask installers to agree to the EULA, and some questions crept in at various people's insistence — installation directory, whether to start on reboot, that sort of thing.

Even so, for some time you could install VisionFS just by running setup, agreeing to the EULA and pressing Enter once more to agree to the default installation options. Then it would install, insert itself into the various system locations it needed to be in on that flavour of Unix, and be ready to use straight away.

The installation code was written as shell script, the lowest common denominator on all the platforms. We couldn't use all the features of the shell since they weren't consistently available across all platforms, or in some cases would crash the shells. In order to keep developer productivity high, we wrote a library of shell script macros. A program would then parse the code, expanding macros, and produce the resulting (and larger) script that was actually run to do installation. An example macro would be finding out how much free disk space there was on a path. The path may not fully exist when it is the proposed location for installation. The df command would tell you free space, but each Unix variant had its quirks in output, flags and the directory name passed in. This macro would expand to about 100 lines of shell code, but the developer only had to write one line of code to use it. Other example macros were starting programs on reboot, or asking the administrator questions. The latter sounds easy but is made more difficult by echo differing between platforms and spoken languages (eg an English speaker uses Yes or No while a French speaker would use Oui or Non). Much of this library was also used for the Tarantella installation script.

In order to make the installation be one file, we appended the actual product files to the end of the script. The installation script had its own size embedded internally and so extracted the files using the dd command telling it to skip the shell script header.

VisionFS installation or upgrade instructions:
  • Run sh ./setup

Typical product of the day instructions:
  • Uncompress the installation file
  • Perform various manual measures if this was an upgrade
  • Move it to where there is free space
  • Untar the installation
  • Change directory inside
  • If you already have the product installed, stop it from running
  • Run the setup script
  • Edit /etc/rc.local or copy files to /etc/rc?.d to make it run on reboot
  • Edit /opt/product/product.conf to set options
  • Run the product from the bin directory

We also made it easy to uninstall — just run visionfs uninstall. Easy now, but almost non-existent back then. Before package managers uninstalling Unix software was usually very difficult, requiring the administrator to manually stop processes and remove all the files that may have been installed in varying locations.

Features

In addition to the usual SMB server functionality, VisionFS offered several features that were unique, almost always in the name of interoperability or user friendliness. For example Windows uses a different one-way password encryption than Unix, so if encrypted passwords are to be used then the SMB server has to build up a new database of the passwords encrypted the Windows way. On installation VisionFS let you set all passwords to the same thing (the users would then login and change it), make some or all of them random or blank, or use various other schemes. It could also email all affected users with appropriate instructions on what to do next. (Other authentication schemes were also supported, such as pass through to an existing SMB server.)

The Windows naming system had several issues (domains vs workgroups, primary vs backups, your subnet vs their subnet). VisionFS let you advertise names that corresponded to any IP address. For example that let you easily make a server on another continent appear locally. It was termed the CIFS Bridge, which as I recall was a term imposed by the marketing department. We also had Internet Workgroups which let VisionFS servers on different subnets share all or part of their naming databases.

VisionFS also allowed multiple NetBIOS applications on the same machine. This had always been the intent of the NetBIOS specification but there was no NetBIOS for Unix. The implementation was very simple and effective — VisionFS would listen on the NetBIOS port (139 TCP) and when a client connected it would give the name of the NetBIOS application it wanted to speak to. VisionFS would then return a NetBIOS redirect to the application which would be on a different port on the same machine. This allowed multiple versions of VisionFS and Samba to co-exist on the same machine. (The mechanism is somewhat analogous to how virtual hosting works.)
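
If memory and RFC 1002 serve, the redirect itself is just the session service's retarget response; here is a sketch of building one (treat the exact bytes as my assumption rather than gospel):

#include <cstddef>
#include <cstring>
#include <stdint.h>

// Build a Retarget Session Response (type 0x84): a 4 byte header followed by
// the IP address and port the client should reconnect to, per RFC 1002.
size_t build_retarget(uint8_t out[10], uint32_t ip_net_order, uint16_t port_net_order)
{
    out[0] = 0x84;                     // session retarget response
    out[1] = 0x00;                     // flags
    out[2] = 0x00;
    out[3] = 0x06;                     // 6 bytes of trailer follow
    std::memcpy(out + 4, &ip_net_order, 4);     // already in network byte order
    std::memcpy(out + 8, &port_net_order, 2);
    return 10;
}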

We also offered an SMB client. Samba has an SMB client but it is a user space program similar to ftp clients. Unaltered programs running on the Unix server can't take advantage of it. The VisionFS client could not be implemented in the kernel since there were far too many versions to even consider. Instead we took the standard approach of the day, which was to implement it as an NFS server in user space. By default it was available at /smb and you would access files as /smb/yourusername/server/share/directory/file. Each user could run the command line visionfs tool to preconfigure authentication information which the VisionFS client would then use when you accessed SMB servers on demand.

The most important feature was that VisionFS was easy to install, easy to administer and just worked. A few years after the first release I even discovered some internal company servers running alpha releases without their users even knowing or caring!

Locking

Perhaps the hardest part of implementing an SMB server is dealing with file locking. Functionality like name serving and collation can be easily handled in one place. A sensible algorithm for converting long filenames to short filenames will ensure that different clients see the same name conversion, and after Windows 95 was released 32 bit programs using long filenames became the norm. The clients are independent of each other so the connections are self contained. For example the current working directory of connection A has no bearing on that for connection B.

But locking affects all clients and all connections. Unix uses advisory locking. If your program cares about file locks it has to ask which ones are present and act accordingly. Programs that don't care about locking will just ignore them. Windows locks are mandatory. For example if any client locks part of a file then any other client accessing that part of the file should have an error returned. Windows programs use locks a lot. They can be on the whole file (giving compatible access to other programs such as allowing them to open for reading but not writing) or on parts of a file. They can even be beyond the end of a file. OLE used locks way beyond the end of the file as a primitive form of inter-machine communication. The standard benchmarking tool of the era – Netbench – also extensively used locking.

Unix offers little help to the SMB server implementor. You could use the advisory locks on parts of files, but semantics are very different (eg Unix locks combine, Windows locks stack). There is no equivalent of whole file locks.
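
For contrast, this is roughly all the help fcntl() gives you; nothing stops a program that never asks about locks from stomping on a locked range, which is why the server has to impose the Windows semantics itself:

#include <fcntl.h>
#include <unistd.h>

// Try to take an exclusive advisory lock on [offset, offset + len) of fd.
// Returns false if another process holds a conflicting lock.
bool try_byte_range_lock(int fd, off_t offset, off_t len)
{
    struct flock fl;
    fl.l_type   = F_WRLCK;             // exclusive
    fl.l_whence = SEEK_SET;
    fl.l_start  = offset;
    fl.l_len    = len;
    return fcntl(fd, F_SETLK, &fl) != -1;   // EACCES/EAGAIN means a conflict
}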

An additional twist is something called opportunistic locking (oplock). When a client is the only one with a file open, the server can give it an oplock. The client can then optimise its usage of that file since it knows no one else has it open. For example it could combine writes before sending, cache read information and not even bother to tell the server about locks on part of the file since there is no one else to clash with. That allows the access to be significantly faster since many requests aren't even sent to the server so there is no waiting for responses. The moment someone else tries to open the file, the server has to pause them and contact the first client asking it to relinquish the oplock. The first client will flush outstanding writes, acquire locks, invalidate its cache etc and then give the oplock up. The second client then gets the file open response and both clients have to share the file using locks as appropriate, mediated by communication with the SMB server.

An SMB server locking implementation has to do 3 things:

  • Maintain a database of all locks — both locks on the whole file and on portions of the file
  • Consult the database frequently (eg locks on portions of a file for read and write operations and whole file locks for open operations)
  • If someone else has an oplock on a file and you want to open it, then ask the SMB server process responsible for the first party to get their client to break the oplock, and wait till they do before continuing

All this has to function in the case of adversity, such as dealing with connections going away that didn't clean up after themselves, clients not responding to oplock breaks in a reasonable amount of time, the lock database getting large, running out of memory etc.
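
As an invented but representative example, the per-file lookup that every read and write has to make is along these lines:

#include <stdint.h>
#include <list>

struct ByteRangeLock {
    uint64_t owner;                    // which connection/handle took the lock
    uint64_t start, length;
};

struct FileLockState {
    std::list<ByteRangeLock> ranges;   // byte-range locks on this file
    bool deny_write;                   // crude stand-in for whole-file share modes
    bool deny_read;
};

// Would a write of [start, start + length) by 'owner' conflict with anything?
bool write_conflicts(const FileLockState &f, uint64_t owner,
                     uint64_t start, uint64_t length)
{
    if (f.deny_write)
        return true;
    for (std::list<ByteRangeLock>::const_iterator it = f.ranges.begin();
         it != f.ranges.end(); ++it) {
        bool overlaps = start < it->start + it->length &&
                        it->start < start + length;
        if (overlaps && it->owner != owner)
            return true;
    }
    return false;
}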

Every few days the team would go to a meeting room with a whiteboard and explore the various approaches available. The most desirable was having each process looking after itself — ie if it created and used locks then it would have to do the database maintenance and lookup. Those that did the most locking would consume their own cpu allotment, memory quotas etc proportionate to their level of locking activity.

The most promising scheme was to use a journal, which required journal entries to be appended atomically. The journal entry would note lock information. Some journal entries would cancel out earlier ones (eg when a lock is released). Finally the journal would have to be periodically scavenged and compacted (eg removing entries that cancel each other out, or clients that had closed their connections). Locking all processes would only be required during the scavenge and could be mitigated through the use of multiple journals. After extensively exploring this we had to give up because appending to a file is not atomic with NFS as well as concerns with how consistent some operating systems were with memory mapped files that would be used for the journals. We also wanted to make the locking available to other programs such as NFS or Netware servers on the same machine.

Finally we settled on using a lock daemon. The SMB server processes talked to it over UDP "connections" to keep file descriptor usage low in the lock daemon. The good news was that we didn't need a way of locking all processes; the downside was that we incurred a time penalty for sending a request to the lock daemon and another for getting the response. It did however lead to simple, testable code. Real world and benchmark performance was good.
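
A sketch of that round trip, with a completely invented wire format, gives the flavour:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdint.h>

struct LockRequest {                   // invented layout; ignores byte order and padding
    uint32_t op;                       // eg 1 = lock, 2 = unlock
    uint32_t connection;               // which SMB connection is asking
    uint64_t file_id, start, length;
};

struct LockReply { uint32_t status; }; // 0 = granted, anything else = conflict

// Blocking round trip to the lock daemon; returns true if the lock was granted.
bool ask_lock_daemon(int sock, const sockaddr_in &daemon, const LockRequest &req)
{
    sendto(sock, &req, sizeof(req), 0,
           (const sockaddr *)&daemon, sizeof(daemon));
    LockReply reply;
    ssize_t n = recvfrom(sock, &reply, sizeof(reply), 0, 0, 0);
    return n == (ssize_t)sizeof(reply) && reply.status == 0;
}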

Samba took an alternate approach, using a file as a database. They don't support it being on NFS so those issues didn't arise. Since existing file-as-a-database systems like dbm don't support concurrent writers, they came up with their own, named tdb. It has since been extended to have a clustered version, although that requires a clustered filesystem and appears to use daemons. You can find that as ctdb.

Where is it now?

Sun owns the rights to the VisionFS source having got it as part of their Tarantella acquisition. Tarantella stopped selling VisionFS and the Vision Family of products in the early 2000s. VisionFS still lives on under the hood of the Tarantella product now named Oracle/Sun Secure Global Desktop. It provides the plumbing for application servers to access files on clients via the SSGD server sitting between them.

Looking back

All of the people who worked on VisionFS are very proud of what we did. It was easy to use and just worked. There were people trying to buy it after it was no longer sold, even though they knew they could get Samba for free. We ably met our goal of a low support load.

It is an interesting what-if to consider what would have happened if we had used Samba instead. In some ways Samba has been slowly duplicating the internals of VisionFS over the years (eg using code generation, adding Unicode support, virtualising shares). Samba is of course way further ahead in completely replacing Windows servers (eg dealing with Active Directory and Kerberos). As developers our code would be useful to you today, instead of locked in a cupboard at Sun. No one else has made an SMB server so easy to configure — the closest Samba has come is SWAT, which is a virtual text file with an integrated manual page. I think we would have brought a lot of fit and finish, and just maybe would have reduced the number of posts on the Samba support mailing lists :-) Blame the lawyers.

Credits

The following people did substantial work on VisionFS. The team peaked at 4 people and troughed at 2.

  • Barrie Cooper: Server engineering
  • Chris Walsh: Documentation
  • David Smith: User interface, usability, documentation, guru
  • Duncan Stansfield: Profile editor, server engineering
  • Roger Binns: Project lead, server engineering
  • Steve Taylor: Server engineering
  • Toby Darling: Testing, server engineering
