
Something old, something new: the Amiga Fast File System, past, present and future

When you are using your Amiga computer you do not necessarily realize that there is something like a file system sitting below all the operating system services and application software. For all practical purposes it is invisible (until, of course, something goes wrong). The following article will try to make the file system a bit more "visible" by describing its strange and twisted history, its limitations and the current state of the file system in the context of the AmigaOS 4.0 project.

Data structures and detailed implementation issues are not discussed in excruciating detail for fear of boring the reader to death (don't laugh; it's true). Prepare yourself for a primarily technical discussion of the subject.

1. Historic background

1.1 Origins

What we know as the Amiga file system is part of a disk operating system that was ported to the original AmigaOS way back in 1985 when the plans for implementing an in-house solution (CAOS: Commodore Amiga Operating System) fell through. That disk operating system was called Tripos, a commercial version of an experimental operating system developed at the University of Cambridge Computer Laboratory (that would be Cambridge, England, and not Cambridge, MA). Surprisingly, Tripos shares some traits with Unix: it was devised to be a portable operating system with preemptive multitasking capabilities and it was implemented in a portable programming language (BCPL). And in fact, that portable programming language later became a precursor of 'C', the language in which Unix was implemented (by simplifying BCPL, the language 'B' was created, and from 'B' development eventually progressed to 'C'; no, I'm not making this up).

There is little information left today on what made Tripos what it was, at least on the Internet. An article by Dr. Mike Richardson published in 1979 in Software Practice and Experience is all I could find referenced.

A few more details on AmigaDOS. Some people still believe that at the core, the Amiga operating system uses Tripos code. This is not true. All that was "borrowed" from Tripos was the file system layer and the functionality on top of it, such as the API, the shell and its standard commands (which are actually programs). The portable Tripos kernel was adapted for the Amiga by Dr. Tim King, who brought the rights to turn the operating system into a commercial product with him to Metacomco, Ltd., a now defunct software company based in Bristol, England.

1.2 The implementation

The original Amiga file system only ran on floppy disk media at the time the first Amigas shipped to customers in about 1986. It could handle larger media, which was to follow 1-2 years later in the form of hard disk drives, but more on that later. Until then Amiga users were getting used to the strange noises their floppy disks made, which somehow sounded as if the disks were sawn in half. This noise, affectionately called "gronking", was in fact caused by the design and implementation of the file system data structures.

The "gronking" resulted from drive head movements, the reason being that the file system design fragmented data heavily. This points to the many drawbacks of the design: it was not designed for speed. But on the other hand, one could claim that it was designed for data integrity. In fact, all data and file system data structures were protected by checksums and the redundancy in the file system data structures made it possible to recover data from damaged disks. The "decentralized" file system data structure layout of the disk also made it far less likely for one error to destroy all the data on the disk (of course, this didn't actually rule out that it could happen).

Another name for "decentralized file system data structure layout" is "fragmentation", which degraded the performance of the file system. What further degraded the performance was the fact that every block used by the file system was protected by a checksum, which meant that each block had to be read individually, its checksum verified and its payload extracted. The result was a relatively safe, but also rather slow file system implementation. In fact, flaws in the file system's block caching code made the implementation even slower the more data blocks it had to manage in the cache.
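
To illustrate the checksum scheme mentioned above: for most block types, as commonly documented, the checksum longword is chosen so that the 32-bit sum of all longwords in the block (overflow ignored) comes out as zero, so verifying a block amounts to summing it up. The following 'C' sketch shows the idea only; it ignores on-disk byte order and the fact that the position of the checksum field varies by block type, and the boot block uses a different algorithm entirely.

    #include <stdint.h>
    #include <stddef.h>

    /* Compute the value for the checksum field so that the sum of all
     * longwords in the block becomes zero. 'longwords' is 128 for a
     * 512-byte block; 'checksum_index' is the position of the checksum
     * field, which depends on the block type. */
    static uint32_t block_checksum(const uint32_t *block, size_t longwords,
                                   size_t checksum_index)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < longwords; i++)
            if (i != checksum_index)     /* skip the checksum field itself */
                sum += block[i];
        return (uint32_t)-sum;           /* value that makes the total zero */
    }

    /* Verify a block: a non-zero sum indicates corruption. */
    static int block_is_valid(const uint32_t *block, size_t longwords)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < longwords; i++)
            sum += block[i];
        return sum == 0;
    }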

1.3 Integration into the operating system

Unlike what is typically the case with traditional operating system designs, a file system in the Amiga operating system is not part of the operating system kernel itself. A file system is a Process with which user software communicates by exchanging messages. The messages are sent implicitly by the application programming interface (API) known as dos.library, which would interface the Tripos file system kernel layer to the file systems and the Amiga application software.

This message passing design has its benefits and drawbacks. One of the many benefits is that because the format and layout of messages could be enhanced over the years, the API for file systems could be transparently extended. Functionality that was not part of the original design could be added later, which gave the system an enormous degree of flexibility. The design also made it possible to implement asynchronous I/O rather easily and efficiently, thus allowing the multitasking functionality to be used to great effect.

The major drawbacks are in the areas of fairness and scalability. In an operating system with preemptive multitasking the task scheduler in the kernel can guarantee fairness, but in a file system that has to process incoming messages, the file system implementation itself has to make sure that every client is treated fairly. That's quite a challenge because some file system tasks can take longer than others, and some can take almost complete control of the file system for an extended period.

This file system design is somewhat unique. Under Unix, for example, application software performing file operations makes a kernel call which in turn ends up invoking file system code. That approach makes the file system operation subject to normal task scheduling policies. The Unix file system model that comes closest to how an Amiga file system works is Sun Microsystems' Network File System (NFS) design. An NFS server has to solve the same problems an Amiga file system has: client requests arrive as messages that need fair treatment. The typical solution to this problem is to divide the work among threads, each tending to a different client request. This is in fact also how the original file system worked: it used a peculiarity of the Tripos runtime system called co-routine threading which worked by switching control between different routines, each executed in turn with its own stack and local variables (this seems to be an artifact of Algol 68 from which BCPL appears to be descended; you'll find the same functionality in Modula-2 for example). It was hard to make the scheduling fair with this concept, but it worked well to a certain degree.
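
To give a rough feel for what co-routine threading looks like in practice, here is a minimal 'C' illustration using the POSIX <ucontext.h> interface: two routines, each with its own stack, explicitly hand control to each other. This is only an analogy for the Tripos mechanism, not a reconstruction of it.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, worker_ctx;
    static char worker_stack[64 * 1024];   /* private stack for the co-routine */

    static void worker(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("worker: step %d, yielding\n", i);
            swapcontext(&worker_ctx, &main_ctx);   /* hand control back */
        }
    }

    int main(void)
    {
        getcontext(&worker_ctx);
        worker_ctx.uc_stack.ss_sp   = worker_stack;
        worker_ctx.uc_stack.ss_size = sizeof worker_stack;
        worker_ctx.uc_link          = &main_ctx;   /* return here when done */
        makecontext(&worker_ctx, worker, 0);

        for (int i = 0; i < 3; i++) {
            printf("main: resuming worker\n");
            swapcontext(&main_ctx, &worker_ctx);   /* switch to the worker */
        }
        return 0;
    }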

1.4 Making the file system work faster: FFS

In around 1986/1987 the drawbacks of the existing file system design became apparent. The overall speed left something to be desired and the first few hard disk drives that became available for the Amiga exposed a partition size limit of about 50 Mbytes; the file system could not safely handle partitions larger than that due to its design. There was a limit imposed on the size of the data structure that kept track of the storage space claimed by directories and files.

These and more limitations led to the development of what became known as the Fast File System, an enhanced version of the original file system. Commodore engineer Steve Beats was sent to Metacomco to study the existing design and to come up with improvements. What he eventually did was to modify the data structures and to re-implement the entire file system in pure MC68000 assembly language. Andy Finkel, then head of operating system software development at Commodore, told me that they were really expecting Steve Beats to come up with a 'C' language version, which would have been more portable. Ever since then the Amiga has been blessed or cursed with a non-portable, hard-to-modify default file system, which nevertheless ran quite a lot faster than the original design.

The changes to data structures were rather minor, but had profound effects on performance. The checksums in data blocks were dropped, which allowed the file system to cluster many consecutive blocks which the underlying device driver could then read/write in one single step without requiring any further assistance or overhead. This was used to great effect by DMA (direct memory access) driven hard disk controllers, which allowed application software to load data into memory with no great effort or increase of CPU load. The file system itself tried to pack data into consecutive blocks, reducing the heavy fragmentation introduced by the original file system design. To overcome the 50 Mbytes limit, a new data structure was added which allowed for more storage space to be maintained. Last but not least, directory data structures were now kept in sorted order, thus causing the disk drive heads to move across the media in a single, consistent direction during directory scanning rather than jumping back and forth (which caused floppy disk drives to "gronk").

All these changes did not turn the Amiga default file system into a high performance design, but the sometimes dramatically reduced overhead and the compact, efficient implementation both added up quite nicely.

1.5 Enhancements for usability, networked operation and internationalization

In the two years that followed, Commodore worked on an improved version of the Amiga operating system. Part of this work was a revised version of the assembly language file system, which finally supported more operations than the original design. This introduced concepts such as record locking and change notification. These enhancements were made possible by the flexible message passing implementation of the file system design.

The underlying data structures were modified again to introduce group and owner IDs for all files and directories on a disk. This feature was not used by the file system itself but only by the networked file system layer that was part of Commodore's Envoy package.

Further enhancements went into the algorithms and data structures employed for finding named files and directories. When looking for a named object on a file system, names were always compared in a case-insensitive fashion. However, the original file system only knew how to compare plain ASCII characters that way and failed to take characters from the Amiga ISO 8859 Latin 1 character set into account. For example, this means that for the original file system the names FACADE and facade were the same, but FAÇADE and façade were not.
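
A sketch of what Latin 1 aware, case-insensitive name matching involves is shown below. It assumes the commonly documented rule used by the "international" file system variants: besides 'a'-'z', the ISO 8859-1 characters 0xE0-0xFE (with the exception of 0xF7, the division sign) are folded to upper case by subtracting 0x20. The function names are illustrative only.

    #include <stddef.h>

    /* Fold a single ISO 8859-1 character to upper case. */
    static unsigned char intl_to_upper(unsigned char c)
    {
        if ((c >= 'a' && c <= 'z') ||
            (c >= 0xE0 && c <= 0xFE && c != 0xF7))
            return (unsigned char)(c - 0x20);
        return c;
    }

    /* Case-insensitive comparison of two names, Latin 1 aware.
     * Returns non-zero if the names are considered equal. */
    static int intl_names_equal(const unsigned char *a, const unsigned char *b)
    {
        size_t i = 0;
        while (a[i] != '\0' && b[i] != '\0' &&
               intl_to_upper(a[i]) == intl_to_upper(b[i]))
            i++;
        return intl_to_upper(a[i]) == intl_to_upper(b[i]);
    }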

Last but not least, hard and soft link support was added to the file system design. This was a somewhat risky undertaking as the original design and its associated API did not cater for this kind of functionality. A soft link is what is known as a symbolic link on the Unix operating system. It points to the name of a file, which contains the data referenced by the link. A hard link directly points to the data referenced by it.

1.6 Enhancements for better performance

While further enhancements to the API followed in the next few years, the only major change to file system data structures occurred with the introduction of DCFS, the directory caching file system devised by Commodore engineer Randall Jessup. These enhancements again added to the existing file system structures, which were flexible enough to be transparently extended over the years.

What DCFS tried to improve upon was directory-scanning speed. To read the contents of a directory, the file system has to visit a number of linked blocks, each one identifying a single directory entry. And those blocks to be visited may be spread across the entire file system. The FFS design attempted to address that problem by allocating directory entries close to the file system's root block, which was located around the middle of the partition, thus minimizing head movements. Still, fragmentation could occur and send the drive heads skittering from one end of the media to the other in trying to reach the next directory entry. DCFS improved upon this by storing the contents of a directory in a second list, which would contain the individual directory entry names and metadata. By keeping this data close together, directories could be read and their contents reported by looking only into 2-3 blocks, compared to 20-30 blocks, which would have to be visited in order to compile the same information. This second list, called the "directory list" (inspired name, eh?), would in fact contain redundant information. And with file systems, redundancy often spells trouble, as the original and the redundant information have to be kept in sync.

The redundancy introduced by DCFS was not at all a small price to pay for the benefits it brought. It made modifications to the file system more complex and error prone. Where previously one write access was sufficient to update metadata and to guarantee file system integrity, now up to three additional write accesses were required. If one of the steps were omitted due to a system crash or reset, the entire file system would have to be made consistent again. With DCFS this did not just involve taking note of which blocks were in use by file system data and metadata, but also rebuilding the directory lists, which could double or triple the time it took to make the file system structure consistent again. To add insult to injury, not only did DCFS make file system checks take much longer to complete, it also made them more likely to occur.

To help file system performance, Randall Jessup implemented an option to increase the default block size used by the file system. There used to be only a single disk sector per block, and that sector had to be 512 bytes in size. With the changes, which were intended to increase data throughput, multiple sectors (2, 4, 8, 16) could be combined into a single block.

The fact that the assembly language file system implementation was hard to maintain was demonstrated by the DCFS and block size enhancements, which literally took years to stabilize. This was a task the original author did not manage to complete, as Commodore went bankrupt; almost six years later consultant Heinz Wrobel reviewed and repaired the implementation.

1.7 Recent changes and the current state of affairs

The most recent changes to the existing file system code were made by Heinz Wrobel in order to allow partitions to exist beyond what is known as the 4 Gbyte barrier. The file system implementation does not know about the size and position of the partition in terms of how many bytes are involved. It just knows the block numbers and relies upon the lowest layers to translate between block numbers and whatever the storage hardware expects. In this case, the storage hardware's APIs expected byte offsets. And these limited the size and position of partitions to a maximum of 2^32 bytes (about 4.3 billion bytes), or exactly 4 GBytes. Heinz Wrobel adapted the file system's block access layer to use 64 bit wide device access commands which now allow media to be used that is larger than 4 GBytes. And today, what isn't?

This is just about what led to the current state of things, give or take a dozen bug fixes which the previous maintainer of the code was unable to apply.

1.8 Limitations

The choice of the data structures defines the limits of the implementation. The simple things first:

  • The name of a volume, file or directory cannot be longer than 30 characters.
  • For soft links, the name of the object linked to is limited by the number of bytes in a block; for a 512-byte block, this means that the link target name cannot be longer than 288 characters (the practical limit, however, is far lower: 255 characters).
  • The number of files and directories that may be stored on a disk is limited only by the available storage space; there is no preset limit on the number of entries that may go into a directory, as is the case with other file system designs.
  • The maximum size of the media the file system can handle is determined by the underlying media's sector size; 2^32 sectors can be accessed, with each sector at least 512 bytes in size.
  • The maximum size of a file that may be used safely with the file system is 2^31 bytes, or about 2 GBytes. This limitation is due to the fact that all quantities used by the file system are signed 32 bit numbers, and a file whose size would be larger than 2 GBytes would come out as having a negative size (see the short demonstration below). The FFS re-implementation specifically limits file sizes to 2 GBytes and will not allow for larger files to be created.
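
The 2 GByte limit is easy to demonstrate: once a file size reaches 2^31 bytes it no longer fits into a signed 32-bit quantity and shows up as a negative number. The short 'C' program below illustrates this; the narrowing conversion is implementation-defined, but on common platforms it wraps as shown in the comments.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t real_size = INT64_C(1) << 31;    /* 2 GBytes                  */
        int32_t stored    = (int32_t)real_size;  /* what a signed 32-bit file
                                                    size field would hold     */
        printf("real size:   %lld bytes\n", (long long)real_size); /* 2147483648  */
        printf("stored size: %ld bytes\n", (long)stored);          /* typically
                                                                      -2147483648 */
        return 0;
    }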

2. Data structures in brief

You may want to skip this section if you are not that interested in the technical details of the Amiga file system implementation.

The data structures used by the Amiga file system are very well documented, with a few exceptions. The first exception is that they are not documented at all well in the original Commodore documentation, but only in a very well researched 3rd party reference manual published by Amiga software developer Ralph Babel. The second exception concerns the DCFS data structures, which to the best of my knowledge were never disclosed to the general public. A closed group of developers had access to them, including the designers of 3rd party data recovery software. My own information on DCFS is based upon data collected by Amiga software developer Holger Kruse.

2.1 The BCPL legacy

A little known programming language today, BCPL had a quite peculiar approach to addressing data. BCPL abstracted from the underlying machine's view of the smallest addressable memory unit by defining it to be the word. For the MC68000 CPU, the size of the word was four bytes, or 32 bits. This led to a few gross data structures to be used by the file system, such as a time stamp that allowed one to specify the current time and day with 96 bits of precision. Compound data structures were a bit more primitive than in, say, the 'C' programming language. In BCPL a compound data structure is a fancy name for an array with named array index values. Speaking of fancy names, this data structure access method allowed for easy data polymorphism on the grounds that no data type checking could or would be performed. This in fact allowed and perhaps encouraged the design and layout of the file system data structures, which are very, very similar for all the metadata used.
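
That 96-bit time stamp survives to this day as the dos.library DateStamp structure: three 32-bit words holding days, minutes and ticks. A portable 'C' rendering of the layout might look like this; the field names follow the AmigaOS headers, while fixed-width types are used here for clarity.

    #include <stdint.h>

    /* Sketch of the AmigaOS DateStamp layout: three 32-bit words, i.e. the
     * 96 bits of "precision" mentioned above. */
    struct DateStamp {
        int32_t ds_Days;   /* days since January 1, 1978           */
        int32_t ds_Minute; /* minutes past midnight                */
        int32_t ds_Tick;   /* ticks (1/50 second) past the minute  */
    };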

2.2 Directories and metadata

Every directory entry and every directory uses virtually the same data structure: there is the name of the object in question, its modification date, attributes (readable, writeable, executable, etc.), its size (for a file or hard link to a file) and a list of block numbers. That list would either point to the first few blocks a file would consist of, or make up the directory's hash table. Directory entries are stored in linked lists whose first entries are registered in the directory's hash table. Which slot a directory entry goes into is determined by the hash value of the entry's name. The number of entries to go into the block list depends upon the size of the block. For a 512-byte block, 72 entries are used. The maximum length of a file name is fixed to 30 characters.
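
For reference, the hash function that selects the hash table slot is commonly documented as follows: start with the length of the name, then for every character multiply by 13, add the upper-cased character and mask with 0x7FF; finally take the result modulo the table size (72 for a 512-byte block). A 'C' sketch, using plain ASCII upper-casing rather than the file system's own case mapping:

    #include <ctype.h>
    #include <string.h>

    /* Return the hash table slot for a directory entry name.
     * 'table_size' is 72 for a 512-byte block. */
    static unsigned int dir_hash(const char *name, unsigned int table_size)
    {
        unsigned int hash = (unsigned int)strlen(name);
        for (const char *p = name; *p != '\0'; p++)
            hash = (hash * 13 + (unsigned char)toupper((unsigned char)*p)) & 0x7FF;
        return hash % table_size;
    }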

Unlike in a Unix file system, directories and directory entries are not two separate entities. Where on a Unix file system a directory file associates each file name it contains with an inode number, and the inode then contains metadata and a reference to the data, the Amiga file system keeps name and metadata together in directory entry blocks. This made implementing links in the Amiga file system much more difficult than with Unix, where a link is just another directory entry referring to an inode.

2.3 Data storage

Each file contains a list of the data blocks it consists of. Unlike on a Unix file system, where a hierarchy of data structures references the data blocks the file consists of, a linked list of data block references is used with the Amiga file system. The Amiga file system does not know the concept of extents; the table references every single block.

For the original file system design, data blocks contain more than just data. They also contain a checksum and information that links the individual blocks together.
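
For the curious, the commonly documented layout of such an original file system data block (assuming 512-byte blocks) looks roughly like this in 'C'; 24 bytes of header leave 488 bytes of payload per block, which also explains part of the speed difference between the original design and FFS.

    #include <stdint.h>

    /* Sketch of an original file system (OFS) data block for a 512-byte
     * block size. All fields are stored as big-endian 32-bit words on disk;
     * byte order handling is omitted here. */
    struct ofs_data_block {
        uint32_t type;        /* block type identifier (T_DATA = 8)           */
        uint32_t header_key;  /* block number of the owning file header block */
        uint32_t seq_num;     /* position of this block within the file (1..) */
        uint32_t data_size;   /* number of valid payload bytes (<= 488)       */
        uint32_t next_data;   /* block number of the next data block, 0 = end */
        uint32_t checksum;    /* chosen so all 32-bit words sum to zero       */
        uint8_t  data[488];   /* file payload                                 */
    };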

2.4 Links

Hard links are just like any other directory entry, except that they do not have any information in the block table that is part of all directory entry blocks. They merely refer to the block number of the object linked to. In turn, a link goes back from the object linked to the first hard link block. Several hard links referring to the same original object are also kept on a linked list.

Soft links are similar to hard links in that the block table is not used for the original purpose. Rather, it contains the name of the object linked to.

2.5 Directory lists

Introduced with DCFS, directory lists contain virtually the same information that is stored in directory entry blocks, collected in a list that packs metadata and file names tightly and space-efficiently. The information is again kept in a linked list of directory blocks.

2.6 Storage management

On the Amiga file system, information on which blocks are allocated for file system data and metadata is kept in a linked list of blocks in which every bit in every byte stands for a block on the partition. Before any write access can be made to the file system, the bitmap must be set up properly, so that no data is accidentally overwritten. The process of reconstructing the bitmap is called validation. Every time the consistency is invalidated, such as by an incomplete write to the media due to a system crash or reset, the file system will have to be revalidated when next powered up. This is a very time consuming process.
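
In 'C' terms, the bitmap bookkeeping boils down to simple bit operations. The sketch below assumes the commonly documented convention that a set bit marks a free block and a cleared bit an allocated one; the real file system keeps the bitmap in a chain of checksummed bitmap blocks and has its own on-disk bit numbering, whereas a flat array is used here for simplicity.

    #include <stdint.h>

    /* Return non-zero if the given block is marked free. */
    static int block_is_free(const uint32_t *bitmap, uint32_t block)
    {
        return (bitmap[block / 32] >> (block % 32)) & 1;
    }

    /* Clear the bit: the block is now in use by data or metadata. */
    static void mark_block_allocated(uint32_t *bitmap, uint32_t block)
    {
        bitmap[block / 32] &= ~((uint32_t)1 << (block % 32));
    }

    /* Set the bit: the block is available again. */
    static void mark_block_free(uint32_t *bitmap, uint32_t block)
    {
        bitmap[block / 32] |= (uint32_t)1 << (block % 32);
    }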

2.7 Conclusion

If you haven't noticed yet, the Amiga file system is an old design that values data integrity much, much higher than performance. Other file system designs that came from a different legacy (e.g. the Berkeley Fast File System that was introduced with 4.2BSD Unix in 1983) have seen major enhancements over the years, especially in terms of speed. The Amiga file system design did not see as much change or improvement over the same period. There are so many drawbacks in the design that it is not necessarily a good idea to build upon it in the next operating system version, and changing the design is close to impossible.

3. Re-implementing the Amiga Fast File System and other strange ideas

3.1 Why re-implement the file system in the first place?

From all the information presented above it's hard to draw any conclusion other than that it would be a bad idea to continue using or even supporting the Amiga file system, which should be replaced with something better, preferably yesterday.

Unfortunately, the people who drew these conclusions failed to come up with a viable replacement for it. The typical efforts produced single threaded file system designs that ran fast for simple applications, but did not scale well. The file systems also lost data. This was sometimes due to the assumption that the supposedly safer design of the file system could not possibly lose or corrupt data -- very much in the same line of reasoning as the claim that because the Titanic was unsinkable it didn't need that many lifeboats.

Slow and limited as it is, the Amiga file system is well supported by data recovery software and even defragmentation and cleanup software. These are rare for the other 3rd party file systems. The way the Amiga file system works is well understood and by now users should be familiar with the behavior and the flaws. These are moderately good reasons for supporting the Amiga file system but they don't invalidate the need for a better file system. However, implementing a better file system takes time and research, which would have to be done in addition to the necessary implementation work and the testing. Comparing these tasks, a re-implementation is a 'simpler' undertaking. And there is one single good reason for tackling this challenge: the next generation Amiga hardware would have to run the original MC68000 assembly language file system in CPU emulation mode, degrading system performance even further. A portable file system, compiled for the next Amiga hardware's CPU would do better. There is also a good chance that code written for the re-implemented Amiga file system will be reusable for the next and improved file system design. Last but not least, a file system that can read data from 'legacy' Amiga format media will always be welcome in an Amiga operating system.

The last question you may ask is "Why not port an existing file system to the Amiga, such as ReiserFS from the Linux world?" The answer lies in the different system architectures of Linux/Unix and AmigaOS. Unix file systems are typically single threaded because the kernel task scheduling takes care of the file system operations. An Amiga file system must be multithreaded to work properly and efficiently. These are fundamentally different characteristics, which you cannot easily reconcile. A straight port of an existing file system that was not designed for AmigaOS will produce a substandard quality product.

3.2 That's another fine mess you got us into

It's one thing to argue in favor of re-implementing the Amiga file system in a more portable programming language, such as 'C'; it's quite another to go about making it happen. I know now what I didn't know then, but I suppose somebody had to do this job. Here is a short list of good reasons why you probably shouldn't try to re-implement the Amiga file system at home, kids:

  • Amiga file systems have been notoriously difficult to implement. The BCPL legacy cast a giant shadow: some of the operating system interfaces required to make a file system work were and are particularly obscure.
  • As said in the introduction to this article, you do not normally take notice of the file system's operations. This is because the file system is supposed to do its job well, and it normally does: it keeps your data accessible and protects it from damage. Naturally, this calls for a high standard of quality in the implementation, and it takes time and effort to arrive at this stage. Most file systems in use today have a long history of research, development and quality assurance behind them. And that meant testing, retesting and, above all, a robust implementation to start with. The original Amiga FFS implementation went through numerous revisions in its twelve-year history. The result is undeniably rather robust.
  • If you try to come up with a file system which is intended to completely replace the original implementation, you have to observe rules that the dos.library and the operating system components it builds upon define. Not all of these rules are obvious, and not all of them can be learned by looking at the respective source code. Also, because it is intended as a drop-in replacement, the file system must be 100% compatible with the data structures and operations of the original Amiga FFS. This compatibility does not just cover the fundamental functionality, such as how on-disk data is managed, but extends into the plain weird. For example, the Amiga file system is supposed to set the current system time (!) if it has not yet been configured.
  • Typically, several disks use the same file system code. For this to be possible the file system has to be completely reentrant. That is, while all disks are using the same code, none may interfere with the operation of the others (a short sketch of what this implies for the code follows after this list).
  • File system operations must execute seemingly in parallel (multithreading), but neither the Amiga operating system nor the 'C' programming language, in which the re-implementation was to be written, offers so-called threads. Also, if you are going to implement the file system in a portable fashion, you cannot rely upon machine specific features to help you along. For example, the original Amiga Workbench would use a set of assembly language subroutines to implement the same kind of co-routine threading as used by the file system.
  • There is really no complete and instructive example of how an Amiga file system may be implemented. There are isolated examples, some of which are written in a mix of different languages, and there are the various revisions of the Amiga file system. Not all are sufficiently well documented, and few are even readable.
  • You not only have to implement the Amiga file system's features, you also have to implement its bugs and side effects. For example, application software that interacts with the file system may pass parameters to it which are not exactly correct. Yet the original Amiga file system tolerated these errors surprisingly well.
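
As for the reentrancy requirement mentioned in the list above, one common way to satisfy it in 'C' is to avoid global and static mutable state altogether and to keep everything that belongs to one mounted volume in a context structure that is passed explicitly to every function. The sketch below illustrates the idea; the structure and function names are hypothetical and not those of the actual re-implementation.

    #include <stdlib.h>

    /* Hypothetical per-volume state; because no globals are involved,
     * any number of volumes can run through the same code without
     * interfering with one another. */
    struct volume_context {
        void    *device_handle;   /* per-volume device driver state      */
        void    *block_cache;     /* per-volume block cache              */
        unsigned block_size;      /* bytes per file system block         */
    };

    /* Allocate the state for one mounted volume. */
    static struct volume_context *volume_create(unsigned block_size)
    {
        struct volume_context *vc = calloc(1, sizeof *vc);
        if (vc != NULL)
            vc->block_size = block_size;
        return vc;
    }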

This sounds a bit discouraging, doesn't it? And that is but the tip of the iceberg: these points barely cover the basic functionality the file system has to provide. They do not touch important issues such as support for large storage media.

Work on the file system re-implementation started in early 2001. The goal was to create a 100% compatible replacement for the Amiga file system using the 'C' programming language. It took almost two years to arrive at an implementation that was sufficiently robust to build upon it. The code went through almost 200 revisions during that time, and more than 20 testers helped to find implementation errors. I'm happy to say that no serious data loss occurred during the development of the file system. This may be due to the fact that all testers were careful not to trust the implementation until it had become sufficiently mature. But there are also a few features built into the new file system implementation that help to improve its robustness.

3.3 So what else is new?

Can you teach an old dog new tricks? Heinz Wrobel did just that when he revised the Amiga file system between 1997 and 1999. But these were transparent extensions to an existing implementation. What about the 'new dog', the Amiga file system re-implementation?

There are limits to what you can do within the framework defined by the Amiga file system. The data structures on disk dictate what you can and what you cannot do. But there are some notable exceptions where you can make changes that are actually rather useful.

The new Amiga file system implementation differs notably from the original implementation with regard to the following features:

  • The order in which data blocks are written to disk is chosen to minimize the risk of data corruption.

    Whenever the file system structure is modified, such as when you create a directory or write to a file, these changes affect the contents of several data blocks on the storage medium. If you have to change the contents of more than one block in the course of these actions, it becomes important which block you write first. For example, if you create a new directory, you have to set up the directory storage block and you have to link the directory block to its parent directory. What happens if the operating system crashes or resets itself before you have written both modified blocks to disk? You are bound to have a problem, as the file system structure may no longer be consistent. And worse, because the file system validator will try to walk down the structure, the inconsistency will be detected and the volume will no longer be writable. Where is that disk recovery software when you need it?

    The new Amiga file system tries to protect itself against such accidents by choosing carefully which disk block is written first and which blocks are to follow it. The original Amiga file system did not care much about this detail, which increased the risk of corruption. (A small sketch of this write ordering idea follows at the end of this list.)

  • Enhanced file/directory notification provides more information

    Application software can request that the file system notifies it of changes that are made to files or, generally, to the contents of directories. Previously, changes to directories lacked detailed information as to what exactly had changed. To find out about that, application software had to reread the contents of the directory in question and then figure out the differences. This was a time consuming process, as directory scanning itself is not one of the strengths of the Amiga file system design.

    The enhanced file/directory notification system allows for the names of the directory entries being changed to be determined directly, without having to reread the entire directory. Application software can use that information to optimize its responses to the changes.

  • Changes to directory contents are propagated up the entire file system hierarchy.

    Let's say an application makes a change to a directory entry deep down the file system hierarchy and you want to find out later where that change has happened. How do you do that? Today your only chance is to check each and every file and directory on the volume until you find the one which was most recently modified. This is because the file system only adjusts the information that tells of the last time a modification was made in the directory whose contents were changed. This information is not propagated up the file system hierarchy. This is handled differently by the new Amiga file system, in which file and directory modification times are updated along the entire access path up to the root directory. The archive flag, which identifies files/directories in need of archiving, is updated as well.

  • The data caching system is more efficient.

    The file system uses a configurable number of so-called buffers for caching data found in the disk blocks that contain file system information (the number of buffers is controlled with the AddBuffers shell command). So far, however, the number of buffers you assigned to a file system often mattered little in terms of performance gained. There are reports that with the original Amiga file system, as part of Kickstart 1.0-1.3, it actually became slower the more buffers it had to juggle. The Amiga Fast File System certainly fared better, but it could be improved upon.

    The new Amiga file system uses a different approach for maintaining its data block cache that actually makes a difference. The more buffers you allocate for it, the more efficient it becomes at accessing its on-disk data structures. You can actually allocate 500 buffers for a disk and it will have a positive impact.

  • File and directory names can be longer than 31 characters.

    Way back in 1985 when MS-DOS file names were even shorter than the Commodore 1541 floppy disk drive allowed for, the Amiga file system would support up to 31 characters per file/directory name. That used to be a lot, but nowadays a bit more is called for.

    The new Amiga file system supports a new disk structure layout that allows for a file or directory name to be up to 107 characters long. While more would be better, the 107 characters are the maximum the dos.library API permits: the directory scanning functions only allow for that many letters.

  • Transparent extension of the file system functionality is possible.

    While this concept is still in its early stages, it promises a degree of flexibility in file system operations that has not been seen before on the Amiga. The idea is to use the basic file system framework and to extend it with external software, so-called file system plugins.

    The functionality currently implemented as plugins works on the disk block level. There are plugin solutions for transparent data encryption and for data caching. The cache plugin is particularly useful because it extends the file system's own caching mechanism in a manner which previously required helper programs which had to cleverly patch the block storage device driver in order to perform their own caching. The encryption plugin can also replace helper programs, which previously had to struggle with the same problems as the cache helper programs.

    The way the plugin system works, you can cascade encryption plugins, combining different techniques, if desired. It is also possible to combine caching with encryption, in which case the cache holds the encrypted data.
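
To make the write ordering point from the beginning of this list more concrete, consider creating a new directory entry: if the block describing the new entry is written before the parent directory block that references it, a crash in between leaves only a harmless orphan block; the reverse order would leave the parent pointing at a block whose contents were never written. The following 'C' sketch spells out that ordering; the function names are hypothetical stand-ins for the real block I/O routines.

    #include <stdint.h>

    /* Stub standing in for the real checksummed block write through the
     * device driver; it pretends the write always succeeds. */
    static int write_block(uint32_t block_number, const void *data)
    {
        (void)block_number; (void)data;
        return 0;
    }

    /* 'parent_block_data' is assumed to have been updated in memory so
     * that its hash table already references 'new_block'. */
    static int create_directory_entry(uint32_t new_block,
                                      const void *new_block_data,
                                      uint32_t parent_block,
                                      const void *parent_block_data)
    {
        /* Step 1: write the block describing the new entry first. If a
         * crash happens after this write, the block is merely orphaned
         * and the existing structure stays consistent. */
        if (write_block(new_block, new_block_data) != 0)
            return -1;

        /* Step 2: only now write the parent directory block that links
         * the new entry into the hierarchy. */
        return write_block(parent_block, parent_block_data);
    }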

3.4 The next step

All things considered, the Amiga file system solution we have today is not really the sort of thing you would want to carry over into the next operating system version. That is, if you had a choice. But then again, every journey starts with the first step: the new Amiga file system is the first complete implementation in a portable high-level language. It is a platform to build upon, especially when the next generation file system is created. This will have to be noticeably different from what we know today. It would have to support journaling and the file system plugin concept would have to be expanded upon. If there are particular features you would like to see implemented, why not drop me a line?

© Copyright 2003 by Olaf Barthel

