/* Copyright (C) 2005  David Decotigny
   Copyright (C) 2000-2005  The KOS Team (Thomas Petazzoni, David
   Decotigny, Julien Munier)

   This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License as
   published by the Free Software Foundation; either version 2 of the
   License, or (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
   02111-1307, USA.
*/
#ifndef _SOS_FS_H_
#define _SOS_FS_H_

/**
 * @file fs.h
 *
 * (Virtual) Filesystem management.
 *
 * SOS provides a complete Unix-like file system service. Supported
 * features of this service are:
 *  - mountpoints
 *  - generic file system support (FS) through object-oriented
 *    specialization (so-called VFS)
 *  - hard & symbolic links
 *  - regular files and directories
 *  - block and character device special files (from article 9 onward)
 *  - file mapping
 *  - basic permission management ("rwx" only, no notion of owner)
 *  - chroot
 *  - separation of the disk-node and namespace notions, allowing hard
 *    links and allowing files or directories that are in use to be
 *    moved/renamed/removed
 *  - basic FS interface (open/read/seek/creat/mkdir/rename/link
 *    /symlink/chmod/mount/fcntl/ioctl...)
 *  - deferred writes (i.e. file caching). @see sync(3)
 *
 * Among the unsupported features:
 *  - no user-based permissions (uid/gid, ACLs)
 *  - no modification / access time accounting
 *  - no FIFO/socket special files (yet)
 *  - no generic support library for common fcntl commands
 *    (F_SETOWN/GETLEASE/NOTIFY, etc.)
 *  - no flock-style functions
 *
 * Rationale
 * =========
 * The VFS layer is based on 3 central concepts:
 *
 * - The meta-information for each file stored on disk (struct
 *   sos_fs_node for SOS, inode for Unix)
 *
 *   It is sufficient to know where this meta-information is located
 *   on disk (a simple sector number most of the time) to build the
 *   corresponding struct sos_fs_node in memory and to retrieve the
 *   data of the file from the disk.
 *
 *   For example, suppose we know that the meta-information (i.e.
 *   size, permissions) of a file is located at sector 64589 on disk.
 *   By retrieving this meta-information directly from disk, we can
 *   build the struct sos_fs_node, which would (for example) tell us
 *   that the corresponding file spans sectors 4345, 5645, 4539 and
 *   6575, and is 1.7kB long.
 *
 *   Everything is stored this way on the disk, even the
 *   directories. From the disk contents' point of view, a directory
 *   is simply a file whose contents represent a list of mappings
 *   "name" -> meta-information location.
 *
 * - One or many nodes in the file hierarchy pointing to this data
 *   (struct sos_fs_nscache_node for SOS, struct dentry for Linux).
 *   This tells us that the entry "toto" in directory "/home/zorglub"
 *   corresponds to the given struct sos_fs_node.
 *
 *   Actually, with the struct sos_fs_node above, we can reach any
 *   file in the system. However, dealing with mountpoints requires
 *   an intermediary data structure, because a directory on a disk
 *   cannot make reference to children struct sos_fs_node on another
 *   disk. This is one of the reasons why there is this struct
 *   sos_fs_nscache_node. Another reason is that we kind-of "cache"
 *   the most used struct sos_fs_node: those that lead from the global
 *   root ("/") to the files and directories currently being used
 *   (hence the name "nscache" for "namespace cache").
 *   This speeds up
 *   the path-resolving process (aka "lookup"), as the most-used paths
 *   are already in memory and their struct sos_fs_node are already
 *   in memory too.
 *
 *   A struct sos_fs_nscache_node can have at most 1 parent (the ".."
 *   entry). It can also have no parent, in case the node is being
 *   used by a process (file is opened or mapped) but the file has
 *   actually been "removed", i.e. is unreachable from any directory.
 *
 *   Many such structures can reference the same underlying struct
 *   sos_fs_node, which enables the support of "hard links".
 *
 * - The "opened file" structures. They store the information
 *   pertaining to a particular usage of a file. The most important
 *   thing they store is the "file pointer", which holds the location
 *   in the file where the next read/write operation should start.
 *
 *   Each process has at least 2 such opened files: its "current
 *   working directory" (RTFM chdir) and its "process root" (RTFM
 *   chroot). Those are inherited across fork() and can be changed by
 *   the appropriate syscalls (resp. chdir/chroot). The main "root" of
 *   the system is the process root of the "init" process. The usual
 *   opened files (think of open() and opendir()) are stored in the
 *   file descriptor array (fds[]). The index into this array is what
 *   is commonly called a "file descriptor".
 *
 *
 * The whole VFS layer comprises a series of objects that can be
 * specialized to implement support for different FSes (fat, ext2,
 * ffs, ...):
 *
 * - The notion of "file system manager", which basically is a
 *   container for a FS name (e.g. "FAT", "EXT2", etc.) and a series
 *   of functions responsible for initializing a particular "mounting"
 *   of a FS (the "mount" method). This is SOS's struct
 *   sos_fs_manager_type.
 *
 * - The notion of "file system instance", which contains the data
 *   proper to a particular mounting of an FS. Its most important job
 *   is to allocate new struct sos_fs_node on disk, or to retrieve the
 *   meta-information (i.e. struct sos_fs_node) located at a given
 *   location on disk.
 *   This is roughly THE primary physical interface
 *   between the VFS and the disks. This is SOS's struct
 *   sos_fs_manager_instance, aka Linux's superblock.
 *
 *   For each struct sos_fs_node that it allocates, or that it loads
 *   from disk into memory, this "instance manager" is responsible
 *   for indicating the functions that implement the FS-dedicated
 *   routines such as read/write/mmap/ioctl/... for this precise node.
 *
 *   The nodes (struct sos_fs_node) of a struct
 *   sos_fs_manager_instance that are currently loaded in memory are
 *   stored in a hash table. The key of this map is the location of
 *   the meta-information on disk. That way, it is very fast to look
 *   up a given meta-information whose location on disk is known: if
 *   it has already been loaded into memory, its memory address is
 *   quickly resolved thanks to this hash table.
 */

#include <sos/types.h>
#include <sos/errno.h>
#include <sos/hash.h>
#include <sos/umem_vmm.h>

/* Forward declarations (structures defined in this file) */
struct sos_fs_manager_type;
struct sos_fs_manager_instance;
struct sos_fs_statfs;
struct sos_fs_node;
struct sos_fs_opened_file;
struct sos_fs_stat;

#include "fs_nscache.h"

/**
 * The type of filesystem object.
 *
 * Each struct sos_fs_node has a type. Here are the supported types.
 */
typedef enum {
  SOS_FS_NODE_REGULAR_FILE = 0x42,
  SOS_FS_NODE_DIRECTORY    = 0x24,
  SOS_FS_NODE_SYMLINK      = 0x84,
} sos_fs_node_type_t;

#define SOS_FS_MANAGER_NAME_MAXLEN 32

/**
 * Description of a supported filesystem type.
 *
 * These descriptions are listed in an internal list (see
 * fs.c:fs_list), and each time we want to mount a FS, we specify a
 * name (e.g. "FAT", "EXT2", ...). The VFS will look for this name in
 * the list of supported filesystem types and, when found, call its
 * sos_fs_manager_type::mount() method.
 *
 * New filesystem types are registered using sos_fs_register_fs_type().
 */
struct sos_fs_manager_type
{
  char name[SOS_FS_MANAGER_NAME_MAXLEN];

  /**
   * Responsible for making sure the underlying device (if any) really
   * stores the correct filesystem format, for creating the hash of fs
   * nodes and for calling sos_fs_register_fs_instance()
   *
   * @param device May be NULL
   *
   * @note mandatory, may block
   */
  sos_ret_t (*mount)(struct sos_fs_manager_type * this,
                     struct sos_fs_node * device,
                     const char * args,
                     struct sos_fs_manager_instance ** mounted_fs);

  /**
   * Responsible for de-allocating the hash of fs nodes and for
   * calling sos_fs_unregister_fs_instance()
   *
   * @note mandatory, may block
   */
  sos_ret_t (*umount)(struct sos_fs_manager_type * this,
                      struct sos_fs_manager_instance * mounted_fs);

  /** Free of use */
  void * custom_data;

  /** List of filesystem instances of this type currently mounted
      somewhere in the system */
  struct sos_fs_manager_instance * instances;

  /** Linkage for the list of filesystem types registered in the
      system */
  struct sos_fs_manager_type *prev, *next;
};

/**
 * Data related to a particular "mounting" of a file system: a
 * so-called "superblock" under Linux.
 *
 * This holds the FUNDAMENTAL functions responsible for loading struct
 * sos_fs_node from disk, or for allocating them on disk. It also
 * holds the hash table of struct sos_fs_node already loaded into
 * memory.
 */
struct sos_fs_manager_instance
{
  /**
   * @note Publicly readable. Written only by sos_fs_manager_type::mount()
   */
  struct sos_fs_manager_type * fs_type;

  /**
   * Usually, a filesystem relies on a device (disk, network, ram,
   * ...) to fetch its data. This is the location of the device.
   *
   * @note Publicly readable. Written only by fs.c
   */
  struct sos_fs_node * device;

#define SOS_FS_MOUNT_SYNC     (1 << 0)
#define SOS_FS_MOUNT_READONLY (1 << 1)
#define SOS_FS_MOUNT_NOEXEC   (1 << 2)
  /**
   * Is this FS read-only, without EXEC file permission, write-through?
   * OR'ed combination of the SOS_FS_MOUNT_ flags
   *
   * @note Publicly readable. Written only by fs.c
   */
  sos_ui32_t flags;

  /**
   * The namespace node that is the root of THIS file system mounting
   *
   * @note Publicly readable. Written only by fs.c
   */
  struct sos_fs_nscache_node * root;

  /**
   * List of dirty nodes. These are the nodes that need to be written
   * back to disk. For FSes supporting deferred writes, the
   * sos_fs_sync() function will use this list to flush the dirty
   * nodes back to disk.
   *
   * @note Reserved to fs.c
   */
  struct sos_fs_node * dirty_nodes;

  /**
   * Build a fresh new FS node at the given location. This implies
   * the allocation of a new sos_fs_node structure in memory
   *
   * @note Mandatory, may block. Appropriate locking MUST be implemented
   */
  sos_ret_t (*fetch_node_from_disk)(struct sos_fs_manager_instance * this,
                                    sos_ui64_t storage_location,
                                    struct sos_fs_node ** result);

  /**
   * Build a fresh new FS node ON THE DISK of the given type (dir,
   * plain file, symlink, ...), completely empty; return a newly
   * allocated IN-MEMORY node structure representing it
   *
   * @param open_creat_flags is the open_flags parameter passed to
   * sos_fs_open() when O_CREAT is set. 0 when allocated through
   * creat/mkdir/mknod/symlink
   *
   * @note Mandatory, may block. Appropriate locking MUST be implemented
   */
  sos_ret_t (*allocate_new_node)(struct sos_fs_manager_instance * this,
                                 sos_fs_node_type_t type,
                                 const struct sos_process * creator,
                                 sos_ui32_t access_rights,
                                 sos_ui32_t open_creat_flags,
                                 struct sos_fs_node ** result);

  /**
   * Return filesystem status (RTFM df)
   *
   * @note Optional, may block. Appropriate locking MUST be implemented
   */
  sos_ret_t (*statfs)(struct sos_fs_manager_instance * this,
                      struct sos_fs_statfs * result);

  /**
   * Comparison callback called when looking for files/dirs in the
   * namespace cache. Normally, the usual lexicographical comparison
   * is done (when this function points to NULL).
   * But for some FSes, it
   * might be useful to use another comparison function (e.g. for
   * case-insensitive FSes)
   *
   * @note Optional (may be NULL), must NOT block
   */
  sos_bool_t (*nsnode_same_name)(const char * name1,
                                 sos_ui16_t namelen1,
                                 const char * name2,
                                 sos_ui16_t namelen2);

  /**
   * Hash table of the struct sos_fs_node of this filesystem instance
   * loaded in memory: key=storage_location, element=sos_fs_node
   */
  struct sos_hash_table * nodecache;

  /**
   * Unique identifier of this FS (used in the sync method, updated by
   * fs.c). This enables sync_all_fs to be resilient to mount/umount
   * and (un)register_fs_type/instance
   */
  sos_ui64_t uid;

  void * custom_data;

  /** Linkage for the list of instances of the underlying fs type */
  struct sos_fs_manager_instance * prev, * next;
};