📄 vldb_1997_elementary.txt
the modularity of data types and the extensibility of the type system.
Fundamental architectural changes are required to build such a database
system; these have been explored through the implementation of E-ADTs
in PREDATOR. Initial performance results demonstrate an order of
magnitude improvement in performance.</abstract></paper><paper><title>Integrating Reliable Memory in Databases.</title><author><AuthorName>Wee Teck Ng</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Peter M. Chen</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Very Large Data Bases</conference><citation><name>Recovery Algorithms for Database Machines with Nonvolatile Main Memory.</name><name>Management of Partially Safe Buffers.</name><name>A NonStop Kernel.</name><name>Fault Injection Experiments Using FIAT.</name><name>A Case for Fault-Tolerant Memory for Transaction Processing.</name><name>Benchmarking Database Systems A Systematic Approach.</name><name>The Architecture of the Dal&iacute; Main-Memory Storage Manager.</name><name>RAID: High-Performance, Reliable Secondary Storage.</name><name>The Rio File Cache: Surviving Operating System Crashes.</name><name>The Case For Safe RAM.</name><name>Implementation Techniques for Main Memory Database Systems.</name><name>A Database Cache for High Performance and Fast Restart in Database Systems.</name><name>Main Memory Database Systems: An Overview.</name><name>Notes on Data Base Operating Systems.</name><name>The Recovery Manager of the System R Database Manager.</name><name>Principles of Transaction-Oriented Database Recovery.</name><name>Crash Recovery Scheme for a Memory-Resident Database System.</name><name>Application-Controlled Physical Memory using External Page-Cache Management.</name><name>FERRARI: A Flexible Software-Based Fault and Error Injection System.</name><name>FINE: A Fault Injection and Monitoring Environment for Tracing the UNIX System Behavior under Faults.</name><name>Faults, Symptoms, and Software Fault Tolerance in the Tandem GUARDIAN90 Operating System.</name><name>Replication in the Harp File System.</name><name>Free 
Transactions With Rio Vista.</name><name>Lessons from FTM: An Experiment in Design and Implementation of a Low-Cost Fault-Tolerant System.</name><name>Informed Prefetching and Caching.</name><name>Performance Evaluation of Extended Storage Architectures for Transaction Processing.</name><name>The Impact of Architectural Trends on Operating System Performance.</name><name>Lightweight Recoverable Virtual Memory.</name><name>Dealing with Disaster: Surviving Misbehaved Kernel Extensions.</name><name>ATOM - A System for Building Customized Program Analysis Tools.</name><name>Operating System Support for Database Management.</name><name>The Design of the POSTGRES Storage System.</name><name>Using Write Protected Data Structures To Improve Software Fault Tolerance in Highly Available Database Management Systems.</name><name>Software Defects and their Impact on System Availability: A Study of Field Failures in Operating Systems.</name><name>A Comparison of Software Defects in Database Management Systems and Operating Systems.</name><name>System Support for Software Fault Tolerance in Highly Available Database Management Systems.
Ph.D. thesis, University of California at Berkeley 1993</name><name>Efficient Software-Based Fault Isolation.</name><name>eNVy: A Non-Volatile, Main Memory Storage System.</name></citation><abstract>Recent results in the Rio project at the University of Michigan show that it is
possible to create an area of main memory that is as safe as disk from
operating system crashes. This paper explores how to integrate the reliable
memory provided by the Rio file cache into a database system. We propose three
designs for integrating reliable memory into databases: non-persistent database
buffer cache, persistent database buffer cache, and persistent database buffer
cache with protection. Non-persistent buffer caches use an I/O interface to
reliable memory and require the fewest modifications to existing databases.
However, they waste memory capacity and bandwidth due to double buffering.
Persistent buffer caches use a memory interface to reliable memory by mapping
it into the database address space. This places reliable memory under complete
database control and eliminates double buffering, but it may expose the buffer
cache to database errors. Our third design reduces this exposure by write
protecting the buffer pages. Extensive fault tests show that mapping reliable
memory into the database address space does not significantly hurt reliability.
This is because wild stores rarely touch dirty, committed pages written by
previous transactions. As a result, we believe that databases should use a
memory interface to reliable memory.</abstract></paper><paper><title>Logical and Physical Versioning in Main Memory Databases.</title><author><AuthorName>Rajeev Rastogi</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>S. Seshadri</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Philip Bohannon</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Dennis W. Leinbaugh</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Abraham Silberschatz</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>S. Sudarshan</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Very Large Data Bases</conference><citation><name>The Design and Analysis of Computer Algorithms.
Addison-Wesley 1974, ISBN 0-201-00029-6</name><name>On Mixing Queries and Transactions via Multiversion Locking.</name><name>Concurrency of Operations on B-Trees.</name><name>The Implementation of an Integrated Concurrency Control and Recovery Scheme.</name><name>Implementation Techniques for Main Memory Database Systems.</name><name>Locking and Latching in a Memory-Resident Database System.</name><name>Dal&iacute;: A High Performance Main Memory Storage Manager.</name><name>Concurrent Manipulation of Binary Search Trees.</name><name>A Study of Index Structures for Main Memory Database Management Systems.</name><name>Concurrency Control in a Dynamic Search Structure.</name><name>ARIES/IM: An Efficient and High Concurrency Index Management Method Using Write-Ahead Logging.</name><name>ARIES/KVL: A Key-Value Locking Method for Concurrency Control of Multiaction Transactions Operating on B-Tree Indexes.</name><name>Efficient and Flexible Methods for Transient Versioning of Records to Avoid Locking by Read-Only Transactions.</name><name>Concurrent Search Structure Algorithms.</name><name>System M: A Transaction Processing Testbed for Memory Resident Data.</name></citation><abstract>We present a design for multi-version concurrency control and recovery
in a main memory database, and describe logical and physical versioning
schemes that allow read-only transactions to execute without obtaining
data item locks or system latches. These schemes enable a system to
guarantee that updaters will never interfere with read-only transactions,
and that read-only transactions will not be delayed, provided the
operating system gives them sufficient CPU cycles. Our contributions
include several space saving techniques for the main memory implementation.
We extend the T-tree index structure (designed for main-memory databases)
to support concurrent access and latch-free traversals, and demonstrate
the performance benefits of our extensions. Some of these schemes have
been implemented on a widely used software platform within Bell Labs,
and the full scheme is implemented in the Dali main memory storage manager.</abstract></paper><paper><title>Using Versions in Update Transactions: Application to Integrity Checking.</title><author><AuthorName>Fran\c{c}ois Llirbat</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Eric Simon</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Dimitri Tombroff</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Very Large Data Bases</conference><citation><name>Using Multiversion Data for Non-interfering Execution of Write-only Transactions.</name><name>A Critique of ANSI SQL Isolation Levels.</name><name>On Mixing Queries and Transactions via Multiversion Locking.</name><name>Concurrency Control and Recovery in Database Systems.
Addison-Wesley 1987, ISBN 0-201-10715-5</name><name>Cost and Performance Analysis of Semantic Integrity Validation Methods.</name><name>The Implementation of an Integrated Concurrency Control and Recovery Scheme.</name><name>Load Control for Locking: The 'Half-and-Half' Approach.</name><name>Commit_LSN: A Novel and Simple Method for Reducing Locking and Latching in Transaction Processing Systems.</name><name>Efficient and Flexible Methods for Transient Versioning of Records to Avoid Locking by Read-Only Transactions.</name><name>Implementing Atomic Actions on Decentralized Data.</name><name>Transaction Chopping: Algorithms and Performance Studies.</name><name>Performance Limits of Two-Phase Locking.</name></citation><abstract>This paper proposes an extension of the multiversion two phase locking
protocol, called EMV2PL, which enables update transactions to use versions
while guaranteeing the serializability of all transactions.
The use of the protocol is restricted to transactions, called write-then-read
transactions, that consist of two consecutive parts: a write part
containing both read and write operations in some arbitrary order,
and a (somewhat loosely named) read part, containing read operations
or write operations on data items already locked in the write part of the
transaction. With EMV2PL, read operations in the read part use versions, and
read locks acquired in the write part can be released just before entering
the read part. We prove the correctness of our protocol, and show that its
implementation requires very few changes to classical implementations
of MV2PL. After presenting various methods used by application developers to
implement integrity checking, we show how EMV2PL can be effectively
used to optimize the processing of update transactions that perform integrity
checks. Finally, performance studies show the benefits of our protocol
compared to a (strict) two phase locking protocol.</abstract></paper><paper><title>A Foundation for Multi-dimensional Databases.</title><author><AuthorName>Marc Gyssens</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Laks V. S. Lakshmanan</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Very Large Data Bases</conference><citation><name>Modeling Multidimensional Databases.</name><name>On the Computation of Multidimensional Aggregates.</name><name>Real World Requirements for Decision Support - Implications for RDBMS.</name><name>OLAP, Relational, and Multidimensional Database Systems.</name><name>Metafinite Model Theory.</name><name>Data Cube: A Relational Aggregation Operator Generalizing Group-By, Cross-Tab, and Sub-Total.</name><name>Tables as a Paradigm for Querying and Restructuring.</name><name>Implementing Data Cubes Efficiently.</name><name>Equivalence of Relational Algebra and Relational Calculus Query Languages Having Aggregate Functions.</name><name>A Data Model for Supporting On-Line Analytical Processing.</name><name>Statistical Databases: Characteristics, Problems, and some Solutions.</name><name>OLAP and Statistical Databases: Similarities and Differences.</name></citation><abstract>We present a multi-dimensional database model, which we believe can
serve as a conceptual model for On-Line Analytical Processing
(OLAP)-based applications. Apart from providing the functionalities
necessary for OLAP-based applications, the main feature of the model
we propose is a clear separation between structural aspects and the
contents. This separation of concerns allows us to define data
manipulation languages in a reasonably simple, transparent way. In
particular, we show that the data cube operator can be expressed easily.
Concretely, we define an algebra and a calculus and show them to be
equivalent. We conclude by comparing our approach to related work.
The conceptual multi-dimensional database model developed here is
orthogonal to its implementation, which is not a subject of the
present paper.</abstract></paper><paper><title>Fast Computation of Sparse Datacubes.</title><author><AuthorName>Kenneth A. Ross</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Divesh Srivastava</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Very Large Data Bases</conference><citation><name>On the Computation of Multidimensional Aggregates.</name><name>Data Cube: A Relational Aggregation Operator Generalizing Group-By, Cross-Tab, and Sub-Total.</name><name>Quicksort.</name><name>Implementing Data Cubes Efficiently.</name><name>An Array-Based Algorithm for Simultaneous Multidimensional Aggregates.</name></citation><abstract>Datacube queries compute aggregates over database relations at a
variety of granularities, and they constitute an important class of
decision support queries. Real-world data is frequently sparse, and
hence efficiently computing datacubes over large sparse relations is
important. We show that current techniques for computing datacubes
over sparse relations do not scale well with the number of CUBE BY
attributes, especially when the relation is much larger than main memory.
We propose a novel algorithm for the fast computation of datacubes
over sparse relations, and demonstrate the efficiency of our algorithm
using synthetic, benchmark and real-world data sets. When the
relation fits in memory, our technique performs multiple in-memory
sorts, and does not incur any I/O beyond the input of the relation and
the output of the datacube itself. When the relation does not fit in
memory, a divide-and-conquer strategy divides the problem of computing
the datacube into several simpler computations of sub-datacubes.
Often, all but one of the sub-datacubes can be computed in memory and
our in-memory solution applies. In that case, the total I/O overhead
is linear in the number of CUBE BY attributes. We demonstrate with an
implementation that the CPU cost of our algorithm is dominated by the
I/O cost for sparse relations.</abstract></paper><paper><title>Data Warehouse Configuration.</title><author><AuthorName>Dimitri Theodoratos</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Timos K. Sellis</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Very Large Data Bases</conference><citation><name>Updating Derived Relations: Detecting Irrelevant and Autonomously Computable Updates.</name><name>Efficiently Updating Materialized Views.</name><name>Deriving Production Rules for Constraint Maintainance.</name><name>Optimizing Queries with Materialized Views.</name><name>Answering Queries with Aggregation Using Views.</name><name>Incremental Maintenance of Views with Duplicates.</name><name>Aggregate-Query Processing in Data Warehousing Environments.</name><name>Data Integration using Self-Maintainable Views.</name><name>Maintaining Views Incrementally.</name><name>Maintenance of Materialized Views: Problems, Techniques, and Applications.</name><name>Selection of Views to Materialize in a Data Warehouse.</name><name>Index Selection for OLAP.</name><name>Implementing Data Cubes Efficiently.</name><name>Computing Queries from Derived Relations.</name><name>Answering Queries Using Views.</name><name>Queries Independent of Updates.</name><name>Incremental Recomputation of Active Relational Expressions.</name><name>Making Views Self-Maintainable for Data Warehousing.</name><name>Efficient Incremental Evaluation of Queries with Aggregation.</name><name>Materialized View Maintenance and Integrity Constraint Checking: Trading Space for Time.</name><name>An Incremental Access Method for ViewCache: Concept, Algorithms, and Cost Analysis.</name><name>WATCHMAN : A Data Warehouse Intelligent Cache Manager.</name><name>Currency-Based Updates to Distributed Materialized Views.</name><name>Updating Distributed 
Materialized Views.</name><name>Intelligent caching and indexing techniques for relational database systems.</name><name>Multiple-Query Optimization.</name><name>Improvements on a Heuristic Algorithm for Multiple-Query Optimization.</name><name>Incremental Maintenance of Externally Materialized Views.</name><name>The GMAP: A Versatile Tool for Physical Data Independence.</name><name>Principles of Database and Knowledge-Base Systems, Volume II.
Computer Science Press 1989, ISBN 0-7167-8162-X</name><name>Research Problems in Data Warehousing.</name><name>Query Transformation for PSJ-Queries.</name><name>View Maintenance in a Warehousing Environment.</name><name>The Strobe Algorithms for Multi-Source Warehouse Consistency.</name></citation><abstract>In the data warehousing approach to the integration of data from multiple information sources, selected information is extracted in advance and stored in a repository. A data warehouse (DW) can therefore be seen as a set of materialized views defined over the sources. When a query is posed, it is evaluated locally, using the materialized views, without accessing the original information sources. The applications using DWs require high query performance. This requirement is in conflict with the need to keep the information in the DW up to date. The DW configuration problem is the problem of selecting a set of views to materialize in the DW that answers all the queries of interest while minimizing the total query evaluation and view maintenance cost.
In this paper we provide a theoretical framework for this problem in terms of the relational model.
We develop a method for dealing with it by formulating it as a state space optimization problem and then solving it using an exhaustive incremental algorithm as well as a heuristic one.
We extend this method by considering the case where auxiliary views are stored in the DW solely for reducing the view maintenance cost.</abstract></paper><paper><title>Algorithms for Materialized View Design in Data Warehousing Environment.</title><author><AuthorName>Jian Yang</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Kamalakar Karlapalem</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Qing Li</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Very Large Data Bases</conference><citation><name>Implementing Deductive Databases by Mixed Integer Programming.</name><name>Including Group-By in Query Optimization.</name><name>Of Nests and Trees: A Unified Approach to Processing Queries That Contain Nested Subqueries, Aggregates, and Quantifiers.</name><name>A Query Processing Algorithm for a System of Heterogeneous Distributed Databases.</name><name>Selection of Views to Materialize in a Data Warehouse.</name><name>Common Subexpression Identification in General Algebraic Systems.
Technical Report UKSC 0060, IBM United Kingdom Scientific Centre (1974)</name><name>Optimization of a Single Relation Expression in a Relational Data Base System.</name><name>Implementing Data Cubes Efficiently.</name><name>Common Subexpression Isolation in Multiple Query Optimization.</name><name>HODFA: An Architectural Framework for Homogenizing Heterogeneous Legacy Database.</name><name>Performing Group-By before Join.</name><name>A Framework for Designing Materialized Views in Data Warehousing Environment.</name><name>Tackling the Challenges of Materialized View Design in Data Warehousing Environment.</name></citation><abstract>Selecting views to materialize is one of
the most important decisions in designing a data warehouse.
In this paper, we present a framework
for analyzing the
issues in selecting views to materialize so as to achieve
the best combination of good query performance and low view maintenance cost.
We first develop a heuristic algorithm which can provide a