vldb_1995_elementary.txt

In addition, because loading gigabytes and terabytes of data can take hours, we describe how to checkpoint the partitioned-list algorithm and resume a long-running load after a system crash or other interruption.</abstract></paper><paper><title>Efficient Incremental Garbage Collection for Client-Server Object Database Systems.</title><author><AuthorName>Laurent Amsaleg</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Michael J. Franklin</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Olivier Gruber</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Very Large Data Bases</conference><citation><name>List Processing in Real Time on a Serial Computer.</name><name>Enhancing Performance in a Persistent Object Store: Clustering Strategies in O2.</name><name>Building an Object-Oriented Database System, The Story of O2.</name><name>The Gemstone Object Database Management System.</name><name>Storage Reclamation in Object Oriented Database Systems.</name><name>The Object Database Standard: ODMG-93 (Release 1.1).</name><name>The oo7 Benchmark.</name><name>Object and File Management in the EXODUS Extensible Database System.</name><name>Data Caching Tradeoffs in Client-Server DBMS Architectures.</name><name>Fine-Grained Sharing in a Page Server OODBMS.</name><name>Partition Selection Policies in Object Database Garbage Collection.</name><name>A Study of Three Alternative Workstation-Server Architectures for Object Oriented Database Systems.</name><name>On-the-Fly Garbage Collection: An Exercise in Cooperation.</name><name>Local Disk Caching for Client-Server Database Systems.</name><name>Crash Recovery in Client-Server EXODUS.</name><name>Object Grouping in Eos.</name><name>Transaction Processing: Concepts and Techniques.</name><name>Atomic Incremental Garbage Collection and Recovery for a Large Stable Heap.</name><name>ARIES: A Transaction Recovery Method Supporting Fine-Granularity Locking and Partial Rollbacks Using Write-Ahead Logging.</name><name>Fault-Tolerant Distributed Garbage Collection in a Client-Server Object-Oriented Database.</name><name>ARIES/CSA: A Method for Database Recovery in Client-Server Architectures.</name><name>Fault-Tolerant Reference Counting for Garbage Collection in Distributed Systems.</name><name>Concurrent Compacting Garbage Collection of a Persistent Heap.</name><name>Generation Scavenging: A Non-Disruptive High Performance Storage Reclamation Algorithm.</name><name>Uniprocessor Garbage Collection Techniques.</name><name>Storage Reclamation and Reorganization in Client-Server Persistent Object Stores.</name><name>Readings in Object-Oriented Database Systems.</name></citation><abstract>We describe an efficient server-based algorithm for garbage collecting object-oriented databases in a client/server environment.
The algorithm is incremental and runs concurrently with client transactions.
Unlike previous algorithms, it does not hold any locks on data and does not require callbacks to clients.
It is fault tolerant, but performs very little logging.
The algorithm has been designed to be integrated into existing OODB systems, and therefore it works with standard implementation techniques such as two-phase locking and write-ahead logging.
In addition, it supports client-server performance optimizations such as client caching and flexible management of client buffers.
We describe an implementation of the algorithm in the EXODUS storage manager and present results from an initial performance study.</abstract></paper><paper><title>W3QS: A Query System for the World-Wide Web.</title><author><AuthorName>David Konopnicki</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Oded Shmueli</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Very Large Data Bases</conference><citation><name>Querying and Updating the File.</name><name>A Logical Query Language for Hypertext Systems.</name><name>Expressing Structural Hypertext Queries in GraphLog.</name><name>WebMap: A Graphical Hypertext Navigation Tool.</name><name>Reflections on NoteCards: Seven Issues for the Next Generation of Hypermedia Systems.</name><name>Customized Information Extraction as a Basis for Resource Discovery.</name><name>Queries on Structures in Hypertext.</name><name>WSQ/DSQ: A Practical Approach for Combined Querying of Databases and the Web.</name></citation><abstract>The World-Wide Web (WWW) is an ever growing, distributed, non-administered, global information resource.
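
As a rough illustration of the partitioned, incremental collection idea described in the garbage-collection abstract above, the sketch below reclaims unreachable objects one partition at a time over a toy in-memory object graph. It is only a sketch under simplifying assumptions: the paper's server-based algorithm additionally copes with concurrent client transactions, write-ahead logging, and client caching, and the class and method names here (ObjectStore, collect_partition) are invented for this example, not taken from the paper.

# Toy partition-at-a-time collector over an in-memory object graph (Python).
class ObjectStore:
    def __init__(self):
        self.objects = {}      # oid -> (partition id, list of referenced oids)
        self.roots = set()     # oids reachable from outside the store

    def add(self, oid, partition, refs=()):
        self.objects[oid] = (partition, list(refs))

    def collect_partition(self, partition):
        # Mark: trace every object reachable from the roots. (A real
        # partitioned collector avoids this global trace by remembering
        # incoming references per partition; the toy keeps it simple.)
        reachable = set()
        stack = [oid for oid in self.roots if oid in self.objects]
        while stack:
            oid = stack.pop()
            if oid in reachable:
                continue
            reachable.add(oid)
            stack.extend(r for r in self.objects[oid][1] if r in self.objects)

        # Sweep: reclaim unreachable objects, but only in this partition,
        # so each call does a bounded amount of reclamation work.
        dead = [oid for oid, (part, _) in self.objects.items()
                if part == partition and oid not in reachable]
        for oid in dead:
            del self.objects[oid]
        return dead

store = ObjectStore()
store.add("A", partition=1, refs=["B"])
store.add("B", partition=2)
store.add("C", partition=2)        # nothing refers to C
store.roots.add("A")
print(store.collect_partition(2))  # ['C']
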
It resides on the world-wide computer network and allows access to heterogeneous information: text, image, video, sound and graphic data.
Currently, this wealth of information is difficult to mine.
One can either navigate the WWW manually, slowly and tediously, or use the indexes and libraries built by automatic search engines (called knowbots or robots).
We have designed and are now implementing a high-level SQL-like language to support effective and flexible query processing, which addresses the structure and content of WWW nodes and their varied sorts of data.
Query results are intuitively presented and continuously maintained when desired.
The language itself integrates new utilities and existing Unix tools (e.g., grep, awk).
The implementation strategy is to employ existing WWW browsers and Unix tools to the extent possible.</abstract></paper><paper><title>Duplicate Removal in Information System Dissemination.</title><author><AuthorName>Tak W. Yan</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Hector Garcia-Molina</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Very Large Data Bases</conference><citation><name>Copy Detection Mechanisms for Digital Documents.</name><name>World-Wide Web: The Information Universe.</name><name>Duplicate record identification in bibliographic databases.</name><name>Information Filtering - Preface to the Special Section.</name><name>SCAM: A Copy Detection Mechanism for Digital Documents.</name><name>Index Structures for Information Filtering Under the Vector Space Model.</name><name>Index Structures for Selective Dissemination of Information Under the Boolean Model.</name><name>SIFT - a Tool for Wide-Area Information Dissemination.</name></citation><abstract>Our experience with the SIFT [YGM95] information dissemination system (in use by over 7,000 users daily) has identified an important and generic dissemination problem: duplicate information.
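
The W3QS abstract above describes querying both the structure and the content of WWW nodes. The sketch below shows, in Python rather than the paper's SQL-like language, the flavor of such a query over a tiny hard-coded page set: follow links from a starting node up to a bounded depth (a structure condition) and keep pages whose text matches a regular expression (a content condition, roughly what grep would check). The page data and the search function are invented for this example and say nothing about W3QS's actual syntax or evaluation strategy.

import re
from collections import deque

# A tiny in-memory "web": url -> (outgoing links, page text).
WEB = {
    "http://a.example/": (["http://b.example/", "http://c.example/"],
                          "a survey of database systems"),
    "http://b.example/": (["http://c.example/"],
                          "notes on garbage collection"),
    "http://c.example/": ([], "query languages for hypertext databases"),
}

def search(start, pattern, max_depth):
    # Structure condition: reachable from `start` within `max_depth` hops.
    # Content condition: page text matches the regular expression `pattern`.
    matcher = re.compile(pattern)
    seen = {start}
    queue = deque([(start, 0)])
    hits = []
    while queue:
        url, depth = queue.popleft()
        links, text = WEB.get(url, ([], ""))
        if matcher.search(text):
            hits.append(url)
        if depth < max_depth:
            for nxt in links:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return hits

print(search("http://a.example/", r"database", max_depth=2))
# ['http://a.example/', 'http://c.example/']
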
In this paper we explain why duplicates arise, we quantify the problem, and we discuss why it impairs information dissemination.
We then propose a Duplicate Removal Module (DRM) for an information dissemination system.
The removal of duplicates operates on a per-user, per-document basis: each document read by a user generates a request, or a duplicate restraint.
In wide-area environments, the number of restraints handled is very large.
We consider the implementation of a DRM, examining alternative algorithms and data structures that may be used.
We present a performance evaluation of the alternatives and answer important design questions such as: Which implementation is the best?
With "best" scheme, how expensive will duplicate removal be?
How much memory is required? How fast can restraints be processed?</abstract></paper><paper><title>Generalizing GlOSS to Vector-Space Databases and Broker Hierarchies.</title><author><AuthorName>Luis Gravano</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Hector Garcia-Molina</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Very Large Data Bases</conference><citation><name>The Effectiveness of GlOSS for the Text Database Discovery Problem.</name><name>A Comparison of Internet Resource Discovery Approaches.</name><name>Internet Resource Discovery Services.</name><name>Precision and Recall of GlOSS Estimators for Database Discovery.</name><name>Introduction to Modern Information Retrieval.
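
The duplicate-removal abstract above turns each document delivered to a user into a per-user "restraint" that suppresses later duplicates. A minimal sketch of that behaviour, assuming a simple hash of the normalized document text as the duplicate test, follows; the fingerprinting scheme and the DuplicateRemovalModule class are assumptions made for this example, not the algorithms or data structures evaluated in the paper.

import hashlib
from collections import defaultdict

def fingerprint(text):
    # Hash of the lower-cased, whitespace-normalized document text.
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

class DuplicateRemovalModule:
    def __init__(self):
        # user id -> fingerprints already delivered (the "restraints")
        self.restraints = defaultdict(set)

    def deliver(self, user, text):
        # True (and a new restraint is recorded) if the document is new to
        # this user; False if it duplicates something the user already saw.
        fp = fingerprint(text)
        if fp in self.restraints[user]:
            return False
        self.restraints[user].add(fp)
        return True

drm = DuplicateRemovalModule()
print(drm.deliver("alice", "Breaking: new RAID5 results"))    # True
print(drm.deliver("alice", "Breaking:  new RAID5  results"))  # False, same text respaced
print(drm.deliver("bob", "Breaking: new RAID5 results"))      # True, restraints are per user
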
 McGraw-Hill Book Company 1984, ISBN 0-07-054484-0</name><name>Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer.
 Addison-Wesley 1989, ISBN 0-201-12227-8</name><name>The Prospero File System: A Global File System Based on the Virtual System Model.</name><name>World-Wide Web: The Information Universe.</name><name>Searching Distributed Collections with Inference Networks.</name><name>Content Routing for Distributed Information Servers.</name><name>Content Routing in a Network of WAIS Servers.</name><name>Data Structures for Efficient Broker Implementation.</name><name>SIFT - a Tool for Wide-Area Information Dissemination.</name><name>Precision and Recall of GlOSS Estimators for Database Discovery.</name><name>The Collection Fusion Problem.</name><name>Information Retrieval Systems for Large Document Collections.</name></citation><abstract>As large numbers of text databases have become available on the Internet, it is harder to locate the right sources for given queries.
In this paper we present gGlOSS, a generalized Glossary-Of-Servers Server, that keeps statistics on the available databases to estimate which databases are the potentially most useful for a given query.
gGlOSS extends our previous work [1], which focused on databases using the Boolean model of document retrieval, to cover databases using the more sophisticated vector-space retrieval model.
We evaluate our new techniques using real-user queries and 53 databases.
Finally, we further generalize our approach by showing how to build a hierarchy of gGlOSS brokers.
The top level of the hierarchy is so small it could be widely replicated, even at end-user workstations.</abstract></paper><paper><title>Hot Block Clustering for Disk Arrays with Dynamic Striping.</title><author><AuthorName>Kazuhiko Mogi</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Masaru Kitsuregawa</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Very Large Data Bases</conference><citation><name>Parity Striping of Disk Arrays: Low-Cost Reliable Storage with Acceptable Throughput.</name><name>Parity Declustering for Continuous Operation in Redundant Disk Arrays.</name><name>The Architecture of a Fault-Tolerant Cached RAID Controller.</name><name>Dynamic Parity Stripe Reorganizations for RAID5 Disk Arrays.</name><name>A Case for Redundant Arrays of Inexpensive Disks (RAID).</name><name>The Design and Implementation of a Log-Structured File System.</name><name>Parity Logging Overcoming the Small Write Problem in Redundant Disk Arrays.</name><name>Dynamic File Allocation in Disk Arrays.</name><name>Data Partitioning and Load Balancing in Parallel Disk Systems.</name></citation><abstract>RAID5 disk arrays provide high performance and high reliability for reasonable cost.
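
The gGlOSS abstract above rests on one idea: a broker keeps only compact per-database statistics and uses them to estimate which databases are worth sending a query to. The sketch below shows that idea with a deliberately simplified statistic (one summed term weight per term per database) and an additive score; the actual gGlOSS estimators for the vector-space model are more refined, and the Broker class and its methods are invented for this example.

class Broker:
    def __init__(self):
        # database name -> {term: summed weight of that term in the database}
        self.stats = {}

    def register(self, db, term_weights):
        # The broker stores only this small summary, never the documents.
        self.stats[db] = dict(term_weights)

    def rank(self, query_terms):
        # Score each database by adding the stored weights of the query
        # terms it contains, then return databases from best to worst.
        scored = []
        for db, weights in self.stats.items():
            score = sum(weights.get(term, 0.0) for term in query_terms)
            scored.append((db, score))
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

broker = Broker()
broker.register("cs-papers", {"database": 35.0, "raid": 1.2})
broker.register("news-wire", {"database": 2.5, "raid": 11.0})
print(broker.rank(["raid", "striping"]))
# [('news-wire', 11.0), ('cs-papers', 1.2)]
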
