<proceedings>
<paper><title>Program Chairs' Message</title><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract></abstract></paper>
<paper><title>Organizing Committee</title><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract></abstract></paper>
<paper><title>Mining sequential patterns</title><author><AuthorName>R. Agrawal</AuthorName><institute><InstituteName>IBM Almaden Res. Center, San Jose, CA, US</InstituteName><country></country></institute></author><author><AuthorName>R. Srikant</AuthorName><institute><InstituteName>IBM Almaden Res. Center, San Jose, CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We are given a large database of customer transactions, where each transaction consists of customer-id, transaction time, and the items bought in the transaction. We introduce the problem of mining sequential patterns over such databases. We present three algorithms to solve this problem, and empirically evaluate their performance using synthetic data. Two of the proposed algorithms, AprioriSome and AprioriAll, have comparable performance, although AprioriSome performs a little better when the minimum number of customers that must support a sequential pattern is low. Scale-up experiments show that both AprioriSome and AprioriAll scale linearly with the number of customer transactions. They also have excellent scale-up properties with respect to the number of transactions per customer and the number of items in a transaction.</abstract></paper>
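<!--
The sequential-pattern problem defined in the Agrawal-Srikant abstract above can be
made concrete with a small sketch. This only illustrates the support-counting
definition, not their AprioriAll/AprioriSome algorithms; the data and names are
hypothetical.

def supports(customer_seq, pattern):
    # True if `pattern` (a list of itemsets) is contained in the customer's
    # time-ordered transaction sequence: each pattern itemset must be a
    # subset of some transaction, with order preserved.
    i = 0
    for transaction in customer_seq:
        if i < len(pattern) and pattern[i] <= transaction:
            i += 1
    return i == len(pattern)

def support_count(db, pattern):
    # db maps customer-id to that customer's transactions sorted by time
    return sum(supports(seq, pattern) for seq in db.values())

db = {1: [{"bread"}, {"bread", "milk"}, {"beer"}],
      2: [{"milk"}, {"beer", "diapers"}]}
print(support_count(db, [{"bread"}, {"beer"}]))  # 1 (customer 1 only)
-->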
<paper><title>The design and experimental evaluation of an information discovery mechanism for networks of autonomous database systems</title><author><AuthorName>D. McLeod</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Univ. of Southern California, Los Angeles, CA, US</InstituteName><country></country></institute></author><author><AuthorName>A. Si</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Univ. of Southern California, Los Angeles, CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>An approach and mechanism to support the dynamic discovery of information units within a collection of autonomous and heterogeneous database systems is described. The mechanism is based upon a core set of database constructs that characterizes object database systems, along with a set of self-adaptive heuristics employing techniques from machine learning. The approach provides a uniform framework for organizing, indexing, searching, and browsing database information units within an environment of multiple, autonomous, interconnected databases. The feasibility of the approach and mechanism is illustrated using a protein/genetics application environment. Metrics for measuring the performance of the discovery system are presented and the effectiveness of the system is thereby evaluated. Performance tradeoffs are examined and analyzed through experiments employing a simulation model.</abstract></paper>
<paper><title>Set-oriented mining for association rules in relational databases</title><author><AuthorName>M. Houtsma</AuthorName><institute><InstituteName>Twente Univ., Enschede, Netherlands</InstituteName><country></country></institute></author><author><AuthorName>A. Swami</AuthorName><institute><InstituteName>Twente Univ., Enschede, Netherlands</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Describes set-oriented algorithms for mining association rules. Such algorithms imply performing multiple joins and may appear to be inherently less efficient than special-purpose algorithms. We develop new algorithms that can be expressed as SQL queries, and discuss the optimization of these algorithms. After analytical evaluation, an algorithm named SETM emerges as the algorithm of choice. SETM uses only simple database primitives, viz. sorting and merge-scan join. SETM is simple, fast and stable over the range of parameter values. The major contribution of this paper is that it shows that at least some aspects of data mining can be carried out by using general query languages such as SQL, rather than by developing specialized black-box algorithms. The set-oriented nature of SETM facilitates the development of extensions.</abstract></paper>
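<!--
The SETM abstract above argues that association-rule mining can be expressed in
plain SQL. A minimal sketch of that idea follows, counting supported 2-itemsets
with an ordinary self-join and GROUP BY (sqlite3 here; the table layout and
threshold are hypothetical, and this is not the paper's exact SETM formulation).

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (trans_id INTEGER, item TEXT);
INSERT INTO sales VALUES (1,'a'),(1,'b'),(1,'c'),(2,'a'),(2,'b'),(3,'a'),(3,'c');
""")
min_support = 2  # hypothetical threshold
pairs = con.execute("""
    SELECT s1.item, s2.item, COUNT(*) AS support
    FROM sales s1 JOIN sales s2
      ON s1.trans_id = s2.trans_id AND s1.item < s2.item
    GROUP BY s1.item, s2.item
    HAVING COUNT(*) >= ?
    ORDER BY 1, 2
""", (min_support,)).fetchall()
print(pairs)  # [('a', 'b', 2), ('a', 'c', 2)]
-->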
<paper><title>A high performance configurable storage manager</title><author><AuthorName>A. Biliris</AuthorName><institute><InstituteName>AT&T Bell Labs., Murray Hill, NJ, US</InstituteName><country></country></institute></author><author><AuthorName>E. Panagos</AuthorName><institute><InstituteName>AT&T Bell Labs., Murray Hill, NJ, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Presents the architecture of BeSS (Bell Laboratories Storage System), a high-performance configurable database storage manager providing key facilities for the fast development of object-oriented, relational or home-grown database management systems. BeSS is based on a multi-client multi-server architecture offering distributed transaction management facilities and extensible support for persistence. We present some novel aspects of the BeSS architecture, including a fast object reference technique that allows re-organization of databases without affecting existing references, and two operation modes that an application running on a client or server machine can use to interact with the storage system: (i) copy on access and (ii) shared memory.</abstract></paper>
<paper><title>A performance evaluation of load balancing techniques for join operations on multicomputer database systems</title><author><AuthorName>K.A. Hua</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Central Florida Univ., Orlando, FL, US</InstituteName><country></country></institute></author><author><AuthorName>W. Tavanapong</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Central Florida Univ., Orlando, FL, US</InstituteName><country></country></institute></author><author><AuthorName>H.C. Young</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Central Florida Univ., Orlando, FL, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>There has been a wealth of research in the area of parallel join algorithms. Among them, hash-based algorithms are particularly suitable for shared-nothing database systems. The effectiveness of these techniques depends on the uniformity in the distribution of the join attribute values. When this condition is not met, a severe fluctuation may occur among the bucket sizes, causing uneven workload for the processing nodes. Many parallel join algorithms with load balancing capability have been proposed to address this problem. Among them, the sampling and incremental approaches have been shown to provide an improvement over the more conventional methods. The comparison between these two approaches, however, has not been investigated. In this paper, we improve these techniques and implement them on an nCUBE/2 parallel computer to compare their performance. Our study indicates that the sampling technique is the better approach.</abstract></paper>
<paper><title>A trace-based simulation of pointer swizzling techniques</title><author><AuthorName>M.L. McAuliffe</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Wisconsin Univ., Madison, WI, US</InstituteName><country></country></institute></author><author><AuthorName>M.H. Solomon</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Wisconsin Univ., Madison, WI, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Persistent object-oriented applications that traverse large object graphs can improve their performance by caching objects in main memory while they are being used. While caching offers large performance benefits, the techniques used to locate these cached objects in memory can still impede the application's performance. We present the results of a trace-based simulation study of pointer swizzling techniques (techniques for reducing the cost of access to cached objects). We used traces derived from actual persistent programs to find a class of swizzling techniques that performs well, yet permits changes to the contents of in-memory object caches over the lifetime of an application. Our study demonstrates the superiority of a class of techniques known as "indirect swizzling" for a variety of workloads and system configurations.</abstract></paper>
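<!--
The trace-based study above favors "indirect swizzling". A toy sketch of the
underlying idea: swizzled pointers reference a small descriptor rather than the
object itself, so cached objects can be evicted without invalidating references.
The classes and API here are invented for illustration.

class Descriptor:
    def __init__(self, oid, cache):
        self.oid, self.cache, self.obj = oid, cache, None
    def deref(self):
        if self.obj is None:                 # object faulted out, or never loaded
            self.obj = self.cache.fetch(self.oid)
        return self.obj

class Cache:
    def __init__(self, store):
        self.store, self.descriptors = store, {}
    def swizzle(self, oid):
        # all references to `oid` share one descriptor: one indirection each
        return self.descriptors.setdefault(oid, Descriptor(oid, self))
    def fetch(self, oid):
        return dict(self.store[oid])         # load a copy into memory
    def evict(self, oid):
        if oid in self.descriptors:
            self.descriptors[oid].obj = None # references stay valid

store = {42: {"name": "part42"}}
cache = Cache(store)
ref = cache.swizzle(42)
print(ref.deref()["name"])   # loads the object: part42
cache.evict(42)              # cache contents change over the app's lifetime
print(ref.deref()["name"])   # re-faults transparently: part42
-->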
<paper><title>Enterprise workflow architecture</title><author><AuthorName>Weimin Du</AuthorName><institute><InstituteName>Hewlett-Packard Co., Palo Alto, CA, US</InstituteName><country></country></institute></author><author><AuthorName>S. Peterson</AuthorName><institute><InstituteName>Hewlett-Packard Co., Palo Alto, CA, US</InstituteName><country></country></institute></author><author><AuthorName>Ming-Chien Shan</AuthorName><institute><InstituteName>Hewlett-Packard Co., Palo Alto, CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Workflow builders are designed to facilitate development of automated processes and support flexible applications that can be updated, enhanced or completely revamped. The Hewlett-Packard WorkManager is an open product data management solution with workflow management capabilities. WorkManager supports the entire product lifecycle by providing a single, logical repository for all data, and it manages and tracks enterprise-wide processes. With a strong information management platform at its core, WorkManager provides central administration capabilities, including supervision and intervention, where necessary. Because enterprise data is usually fragmented and stored in a variety of legacy systems, and different organizations have different amounts of control over their data, an enterprise workflow system needs to support processes accessing data from different sites and applications. This paper describes the architecture of distributed workflow, Hewlett-Packard's solution to the enterprise workflow problem. The architecture is an extension of the existing WorkManager architecture. Its development is based on user requirements and four high-level user models. The user models and the architecture are described.</abstract></paper>
<paper><title>Toward scalability and interoperability of heterogeneous information sources</title><author><AuthorName>S. Dao</AuthorName><institute><InstituteName>Lab. of Inf. Sci., Hughes Res. Labs., Malibu, CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Future large and complex information systems create new challenges and opportunities for research and advanced development in data management. Hughes' research and prototype efforts to meet these challenges are briefly summarized.</abstract></paper>
<paper><title>The design and implementation of a full-fledged multiple DBMS</title><author><AuthorName>Shu-Chin Su Chen</AuthorName><institute><InstituteName>Distributed Comput. Syst. Dept., Ind. Technol. Res. Inst., Hsinchu, Taiwan</InstituteName><country></country></institute></author><author><AuthorName>Chih-Shing Yu</AuthorName><institute><InstituteName>Distributed Comput. Syst. Dept., Ind. Technol. Res. Inst., Hsinchu, Taiwan</InstituteName><country></country></institute></author><author><AuthorName>Yen-Yao Yao</AuthorName><institute><InstituteName>Distributed Comput. Syst. Dept., Ind. Technol. Res. Inst., Hsinchu, Taiwan</InstituteName><country></country></institute></author><author><AuthorName>San-Yih Hwang</AuthorName><institute><InstituteName>Distributed Comput. Syst. Dept., Ind. Technol. Res. Inst., Hsinchu, Taiwan</InstituteName><country></country></institute></author><author><AuthorName>B.P. Lin</AuthorName><institute><InstituteName>Distributed Comput. Syst. Dept., Ind. Technol. Res. Inst., Hsinchu, Taiwan</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We have described our design of the multiple DBMS (MDBMS). This MDBMS enables users to access data controlled by different DBMSs as if data were managed by a single DBMS. It supports facilities for SQL queries and transactions, and considers security functions. In addition, an ODBC driver at the client site has been realized to ease the development of MDBMS applications. Several popular commercial DBMSs, including Oracle, Informix and Sybase, have been successfully integrated. The MDBMS is in operation now. However, we found the performance to be unsatisfactory. It took several seconds to process an SQL query with a single join on two relations of hundreds of tuples. We have identified the performance bottleneck to be the retrieval of meta data. The current MDBMS Server employs a commercial DBMS to store meta data, which is necessary for processing a global query. The processing of a query is slow because it needs to retrieve the schema information via an external DBMS several times. We are currently designing a core storage manager and an access manager specifically for maintaining the meta data and the intermediate results of a global query. We expect this design to significantly improve the performance.</abstract></paper>
<paper><title>Semantic query optimization for methods in object-oriented database systems</title><author><AuthorName>K. Aberer</AuthorName><institute><InstituteName>GMD-IPSI, Darmstadt, Germany</InstituteName><country></country></institute></author><author><AuthorName>G. Fischer</AuthorName><institute><InstituteName>GMD-IPSI, Darmstadt, Germany</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Although the main difference between the relational and the object-oriented data model is the possibility to define object behavior, query optimization techniques in object-oriented database systems are mainly based on the structural part of objects. We claim that the optimization potential emerging from methods has been strongly underestimated so far. In this paper we concentrate on the question of how semantic knowledge about methods can be considered in query optimization. We rely on the algebraic and rule-based approach for query optimization and present a framework that allows schema-specific knowledge to be integrated by tailoring the query optimizer to the particular application's needs. We sketch an implementation of our concepts within the OODBMS VODAK using the Volcano optimizer generator.</abstract></paper>
<paper><title>The AQUA approach to querying lists and trees in object-oriented databases</title><author><AuthorName>B. Subramanian</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Brown Univ., Providence, RI, US</InstituteName><country></country></institute></author><author><AuthorName>T.W. Leung</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Brown Univ., Providence, RI, US</InstituteName><country></country></institute></author><author><AuthorName>S.L. Vandenberg</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Brown Univ., Providence, RI, US</InstituteName><country></country></institute></author><author><AuthorName>S.B. Zdonik</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Brown Univ., Providence, RI, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Relational database systems and most object-oriented database systems provide support for queries. Usually these queries represent retrievals over sets or multisets. Many new applications for databases, such as multimedia systems and digital libraries, need support for queries on complex bulk types such as lists and trees. In this paper we describe an object-oriented query algebra called AQUA (A Query Algebra) for lists and trees. The operators in the algebra preserve the ordering between the elements of a list or tree, even when the result list or tree contains an arbitrary set of nodes from the original tree. We also present predicate languages for lists and trees which allow order-sensitive queries because they use pattern matching to examine groups of list or tree nodes rather than individual nodes. The ability to decompose predicate patterns enables optimizations that make use of indices.</abstract></paper>
<paper><title>Translation of object-oriented queries to relational queries</title><author><AuthorName>C. Yu</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>Yi Zhang</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>Weiyi Meng</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>Won Kim</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>Gaoming Wang</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>T. Pham</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>Son Dao</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Proposes a formal approach for translating OODB queries to equivalent relational queries. The translation is accomplished through the use of relational predicate graphs and OODB predicate graphs. One advantage of using such a graph-based approach is that we can achieve bidirectional translation between relational queries and OODB queries.</abstract></paper>
<paper><title>Active database management of global data integrity constraints in heterogeneous database environments</title><author><AuthorName>Lyman Do</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Hong Kong Univ. of Sci. & Technol., Hong Kong</InstituteName><country></country></institute></author><author><AuthorName>P. Drew</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Hong Kong Univ. of Sci. & Technol., Hong Kong</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Today, enterprises maintain many disparate information sources over which complex business applications are expected to operate. The informal and ad hoc characteristics of these environments make the information very prone to inconsistency. Yet, the flexibility of application execution given to different parts of an organization is desirable. This paper introduces a new mechanism in which the execution of asynchronous, pre-existing, yet related, applications can be harnessed. A multidatabase framework that supports the concurrent execution of these heterogeneous, distributed applications is presented. Using this framework, we introduce an intuitive conceptual model and algorithm for the enforcement of interdatabase constraints based on active database technology.</abstract></paper>
<paper><title>A transaction transformation approach to active rule processing</title><author><AuthorName>D. Montesi</AuthorName><institute><InstituteName>Dept. of Inf., Rutherford Appleton Lab., Chilton, UK</InstituteName><country></country></institute></author><author><AuthorName>R. Torlone</AuthorName><institute><InstituteName>Dept. of Inf., Rutherford Appleton Lab., Chilton, UK</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Describes operational aspects of a novel approach to active rule processing based on a transaction transformation technique. A user-defined transaction, which is viewed as a sequence of atomic database updates forming a semantic unit, is translated by means of active rules into a new transaction that explicitly includes the additional updates due to active rule processing. It follows that the execution of the new transaction in a passive environment corresponds to the execution of the original transaction within the active environment defined by the given rules. Both immediate and deferred execution models are considered. The approach presents two main features. First, it relies on a well-known formal basis that allows us to derive solid results on equivalence, confluence and optimization issues. Second, it is easy to implement as it does not require any specific run-time support.</abstract></paper>
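<!--
The transaction-transformation abstract above can be illustrated with a toy
immediate-model expansion: each active rule maps a triggering update to extra
updates that are spliced into the transaction, so the result can run on a passive
system. The rule encoding is invented for illustration, and a cyclic rule set
would make this sketch loop; termination and confluence are exactly what the
paper's formal basis addresses.

def transform(transaction, rules):
    out = []
    pending = list(transaction)
    while pending:
        update = pending.pop(0)
        out.append(update)
        # immediate coupling: triggered updates run right after their trigger
        pending = rules.get(update, []) + pending
    return out

rules = {
    "insert(order)": ["insert(audit_order)"],
    "insert(audit_order)": ["update(audit_count)"],
}
print(transform(["insert(order)", "delete(cart)"], rules))
# ['insert(order)', 'insert(audit_order)', 'update(audit_count)', 'delete(cart)']
-->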
<paper><title>Building an integrated active OODBMS: requirements, architecture, and design decisions</title><author><AuthorName>A.P. Buchmann</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Tech. Hochschule Darmstadt, Germany</InstituteName><country></country></institute></author><author><AuthorName>J. Zimmermann</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Tech. Hochschule Darmstadt, Germany</InstituteName><country></country></institute></author><author><AuthorName>J.A. Blakeley</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Tech. Hochschule Darmstadt, Germany</InstituteName><country></country></institute></author><author><AuthorName>D.L. Wells</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Tech. Hochschule Darmstadt, Germany</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Active OODBMSs must provide efficient support for event detection, composition and rule execution. Previous experience of building active capabilities on top of existing closed OODBMSs has proven to be ineffective. We propose instead an active OODBMS architecture where event detection and rule support are tightly integrated with the rest of the core OODBMS functionality. After presenting an analysis of the requirements of active OODBMSs, we discuss the event set, rule execution modes and lifespan of the events supported in our architecture. We also discuss event composition coupling relative to transaction boundaries. Since building an active OODBMS ex nihilo is extremely expensive, we are building the REACH (REal-time ACtive Heterogeneous) OODBMS by extending Texas Instruments' Open OODB toolkit. Open OODB is particularly well-suited for our purposes because it is the first DBMS whose architecture closely resembles the active database paradigm. It provides low-level event detection and invokes appropriate DBMS functionality as actions. We describe the architecture of the event detection and composition mechanisms, and the rule-firing process of the REACH active OODBMS, and show how these mechanisms interplay with the Open OODB core mechanisms.</abstract></paper>
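<!--
One ingredient named in the REACH abstract above is composite event detection
relative to transaction boundaries. A minimal sketch of detecting a sequence
event "E1 ; E2" whose lifespan is limited to one transaction; the API is invented
for illustration and the real mechanisms are far richer (coupling modes,
parameter contexts, and so on).

class SequenceDetector:
    def __init__(self, first, second, action):
        self.first, self.second, self.action = first, second, action
        self.armed = False
    def on_event(self, name):
        if name == "begin_txn":
            self.armed = False          # event lifespan ends at txn boundary
        elif name == self.first:
            self.armed = True
        elif name == self.second and self.armed:
            self.armed = False
            self.action()               # fire the rule's action

d = SequenceDetector("deposit", "withdraw", lambda: print("rule fired"))
for e in ["begin_txn", "deposit", "withdraw", "begin_txn", "withdraw"]:
    d.on_event(e)                       # fires once, in the first txn only
-->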
<paper><title>Efficient processing of nested fuzzy SQL queries</title><author><AuthorName>Qi Yang</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>Chengwen Liu</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>Jing Wu</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>C. Yu</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>Son Dao</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>H. Nakajima</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Fuzzy databases have been introduced to deal with uncertain or incomplete information in many applications. The efficiency of processing fuzzy queries in fuzzy databases is a major concern. We provide techniques to unnest nested fuzzy queries of two blocks in fuzzy databases. We show both theoretically and experimentally that unnesting improves the performance of nested queries significantly. The results obtained in the paper form the basis for unnesting fuzzy queries of arbitrary blocks in fuzzy databases.</abstract></paper>
<paper><title>Context-dependent interpretations of linguistic terms in fuzzy relational databases</title><author><AuthorName>Weining Zhang</AuthorName><institute><InstituteName>Dept. of Math. & Comput. Sci., Lethbridge Univ., Alta., Canada</InstituteName><country></country></institute></author><author><AuthorName>C. Yu</AuthorName><institute><InstituteName>Dept. of Math. & Comput. Sci., Lethbridge Univ., Alta., Canada</InstituteName><country></country></institute></author><author><AuthorName>B. Reagan</AuthorName><institute><InstituteName>Dept. of Math. & Comput. Sci., Lethbridge Univ., Alta., Canada</InstituteName><country></country></institute></author><author><AuthorName>H. Nakajima</AuthorName><institute><InstituteName>Dept. of Math. & Comput. Sci., Lethbridge Univ., Alta., Canada</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Approaches are proposed to allow fuzzy terms to be interpreted according to the context within which they are used. Such an interpretation is natural and useful. A query-dependent interpretation is proposed to allow a fuzzy term to be interpreted relative to a partial answer of a query. A scaling process is used to transform a pre-defined meaning of a fuzzy term into an appropriate meaning in the given context. Sufficient conditions are given for a nested fuzzy query with RELATIVE quantifiers to be unnested for an efficient evaluation. An attribute-dependent interpretation is proposed to model the applications in which the meaning of a fuzzy term in an attribute must be interpreted with respect to values in other related attributes. Two necessary and sufficient conditions for a tuple to have a unique attribute-dependent interpretation are provided. We describe an interpretation system that allows queries to be processed based on the attribute-dependent interpretation of the data. Two techniques, grouping and shifting, are proposed to improve the implementation.</abstract></paper>
<paper><title>Efficient processing of proximity queries for large databases</title><author><AuthorName>W.G. Aref</AuthorName><institute><InstituteName>Matsushita Inf. Technol. Lab., Princeton, NJ, US</InstituteName><country></country></institute></author><author><AuthorName>D. Barbara</AuthorName><institute><InstituteName>Matsushita Inf. Technol. Lab., Princeton, NJ, US</InstituteName><country></country></institute></author><author><AuthorName>S. Johnson</AuthorName><institute><InstituteName>Matsushita Inf. Technol. Lab., Princeton, NJ, US</InstituteName><country></country></institute></author><author><AuthorName>S. Mehrotra</AuthorName><institute><InstituteName>Matsushita Inf. Technol. Lab., Princeton, NJ, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Emerging multimedia applications require database systems to provide support for new types of objects and to process queries that may have no parallel in traditional database applications. One such important class of queries is proximity queries, which aim to retrieve objects in the database that are related by a distance metric in a way that is specified by the query. The importance of proximity queries has earlier been realized in developing constructs for visual languages. In this paper, we present algorithms for answering a class of proximity queries: fixed-radius nearest-neighbor queries over point objects. Processing proximity queries using existing query processing techniques results in high CPU and I/O costs. We develop new algorithms to answer proximity queries over objects that lie in one-dimensional space (e.g., words in a document). The algorithms exploit query semantics to reduce the CPU and I/O costs, and hence improve performance. We also show how our algorithms can be generalized to handle d-dimensional objects.</abstract></paper>
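<!--
The fixed-radius nearest-neighbor queries described above are easy to picture in
one dimension (word offsets in a document). A plain sorted-array sketch with two
binary searches follows; this is an illustration of the query class, not the
authors' optimized algorithms.

from bisect import bisect_left, bisect_right

def within_radius(sorted_positions, query, radius):
    # all points p with |p - query| <= radius, via two binary searches
    lo = bisect_left(sorted_positions, query - radius)
    hi = bisect_right(sorted_positions, query + radius)
    return sorted_positions[lo:hi]

positions = [3, 17, 19, 25, 60, 61, 90]   # hypothetical word offsets
print(within_radius(positions, 20, 6))    # [17, 19, 25]
-->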
<paper><title>Axiomatization of dynamic schema evolution in object bases</title><author><AuthorName>R.J. Peters</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Alberta Univ., Edmonton, Alta., Canada</InstituteName><country></country></institute></author><author><AuthorName>M. Tamer Ozsu</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Alberta Univ., Edmonton, Alta., Canada</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>The schema of a system consists of the constructs that model its entities. Schema evolution is the timely change and management of the schema. Dynamic schema evolution is the management of schema changes while the system is in operation. We propose a sound and complete axiomatic model for dynamic schema evolution in object-base management systems (OBMSs) that support subtyping and property inheritance. The model is formal, which distinguishes it from the traditional approach of informally defining a number of invariants and rules to enforce them. By reducing systems to the axiomatic model, their functionality with respect to dynamic schema evolution can be compared within a common framework.</abstract></paper>
<paper><title>A transparent object-oriented schema change approach using view evolution</title><author><AuthorName>Young-Gook Ra</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Michigan Univ., Ann Arbor, MI, US</InstituteName><country></country></institute></author><author><AuthorName>E.A. Rundensteiner</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Michigan Univ., Ann Arbor, MI, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>When a database is shared by many users, updates to the database schema are almost always prohibited because there is a risk of making existing application programs obsolete when they run against the modified schema. This paper addresses the problem by integrating schema evolution with view facilities. When new requirements necessitate schema updates for a particular user, the user specifies schema changes to the personal view rather than to the shared base schema. Our view evolution approach then computes a new view schema that reflects the semantics of the desired schema change, and replaces the old view with the new one. We present algorithms that implement the set of schema evolution operations typically supported by OODB systems as view definitions. This approach provides the means for schema change without affecting other views (and thus without affecting existing application programs). The persistent data is shared by different views of the schema, i.e., both old as well as newly developed applications can continue to interoperate. In this paper, we present examples that demonstrate our approach.</abstract></paper>
<paper><title>A common framework for classifying and specifying deductive database updating problems</title><author><AuthorName>E. Teniente</AuthorName><institute><InstituteName>Univ. Politecnica de Catalunya, Barcelona, Spain</InstituteName><country></country></institute></author><author><AuthorName>T. Urpi</AuthorName><institute><InstituteName>Univ. Politecnica de Catalunya, Barcelona, Spain</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We propose two interpretations of the event rules which provide a common framework for classifying and specifying deductive database updating problems such as view updating, materialized view maintenance, integrity constraints checking, integrity constraints maintenance, repairing inconsistent databases, integrity constraints satisfiability or condition monitoring. Moreover, these interpretations allow us to identify and to specify some problems that have received little attention up to now, such as enforcing or preventing condition activation. By considering a single set of rules for specifying all these problems, we want to show that it is possible to provide general methods able to deal with all these problems as a whole.</abstract></paper>
<paper><title>Navigation Server: a highly parallel DBMS on open systems</title><author><AuthorName>Ron-Chung Hu</AuthorName><institute><InstituteName>Sybase Inc., Emeryville, CA, US</InstituteName><country></country></institute></author><author><AuthorName>R. Stellwagen</AuthorName><institute><InstituteName>Sybase Inc., Emeryville, CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Navigation Server was jointly developed to provide a highly scalable, high-performance parallel database server. By combining AT&T's experience in massively parallel systems, such as the Teradata system, with Sybase's industry-leading open, client/server DBMS, Navigation Server was developed with some specific design objectives. Scalability: by minimizing interference through minimal resource sharing among concurrent processes, the shared-nothing architecture has, as of today, emerged as the architecture of choice for highly scalable parallel systems. Navigation Server adopts the shared-nothing parallel architecture to allow parallelized queries, updates, load, backup, and other utilities on a partitioned database. Portability: built on top of Sybase's open system products, Navigation Server is portable to Unix-based parallel machines. Further, the shared-nothing software architecture demands minimal changes when porting Navigation Server to various parallel platforms ranging from symmetric multi-processing and clustered to massively parallel processing systems. Availability: in a parallel system with many nodes, hardware component failures are to be expected. To achieve high availability, Navigation Server implements a hierarchical monitoring scheme to monitor all the running processes. With the monitoring frequency configurable by users, a process will be restarted automatically on an alternate node once a failure is detected. Usability: Navigation Server appears as a single Sybase SQL Server to end users. In addition, it provides two management tools: the Configurator and the Navigation Server Manager. The Configurator analyzes customers' workload, monitors system performance, and recommends configurations for optimal performance and resource utilization. The Navigation Server Manager provides graphical utilities to administer the system simply and efficiently.</abstract></paper>
<paper><title>Scalable parallel query server for decision support applications</title><author><AuthorName>Jen-Yao Chung</AuthorName><institute><InstituteName>IBM Thomas J. Watson Res. Center, Yorktown Heights, NY, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Decision-support applications require the ability to query against large amounts of detailed historical data. We are exploiting parallel technology to improve query response time through query decomposition, CPU and I/O parallelism, and a client/server approach. IBM System/390 Parallel Query Server is built on advanced and low-cost CMOS microprocessors for decision-support applications. We discuss the design, implementation and performance of a scalable parallel query server.</abstract></paper>
<paper><title>Optimizing queries with materialized views</title><author><AuthorName>S. Chaudhuri</AuthorName><institute><InstituteName>Hewlett-Packard Co., Palo Alto, CA, US</InstituteName><country></country></institute></author><author><AuthorName>R. Krishnamurthy</AuthorName><institute><InstituteName>Hewlett-Packard Co., Palo Alto, CA, US</InstituteName><country></country></institute></author><author><AuthorName>S. Potamianos</AuthorName><institute><InstituteName>Hewlett-Packard Co., Palo Alto, CA, US</InstituteName><country></country></institute></author><author><AuthorName>K. Shim</AuthorName><institute><InstituteName>Hewlett-Packard Co., Palo Alto, CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>While much work has addressed the problem of maintaining materialized views, the important question of optimizing queries in the presence of materialized views has not been resolved. In this paper, we analyze the optimization question and provide a comprehensive and efficient solution. Our solution has the desirable property that it is a simple generalization of the traditional query optimization algorithm.</abstract></paper>
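<!--
The optimization question above asks when a query can be answered from a
materialized view instead of the base table. A deliberately tiny sketch in which
a view is usable whenever its predicate subsumes the query's (sqlite3, with a
plain table standing in for the materialized view; the schema is hypothetical,
and a real optimizer performs this test inside plan enumeration).

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER, amount REAL);
INSERT INTO orders VALUES (1, 50), (2, 150), (3, 900);
CREATE TABLE big_orders AS SELECT * FROM orders WHERE amount > 100;
""")

def rewrite(query_min):
    view_min = 100                      # predicate the view materializes
    if query_min >= view_min:           # view subsumes the query: use it
        return "SELECT id FROM big_orders WHERE amount > ?"
    return "SELECT id FROM orders WHERE amount > ?"

print(con.execute(rewrite(500), (500,)).fetchall())  # [(3,)] via the view
-->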
<paper><title>Prairie: A rule specification framework for query optimizers</title><author><AuthorName>D. Das</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Texas Univ., Austin, TX, US</InstituteName><country></country></institute></author><author><AuthorName>D. Batory</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Texas Univ., Austin, TX, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>From our experience, current rule-based query optimizers do not provide a very intuitive and well-defined framework to define rules and actions. To remedy this situation, we propose an extensible and structured algebraic framework called Prairie for specifying rules. Prairie facilitates rule-writing by enabling a user to write rules and actions more quickly and correctly, in an easy-to-understand and easy-to-debug manner. Query optimizers consist of three major parts: a search space, a cost model and a search strategy. The approach we take is only to develop the algebra which defines the search space and the cost model and use the Volcano optimizer-generator as our search engine. Using Prairie as a front-end, we translate Prairie rules to Volcano to validate our claim that Prairie makes it easier to write rules. We describe our algebra and present experimental results which show that using a high-level framework like Prairie to design large-scale optimizers does not sacrifice efficiency.</abstract></paper>
<paper><title>Pushing semantics inside recursion: A general framework for semantic optimization of recursive queries</title><author><AuthorName>L.V.S. Lakshmanan</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Concordia Univ., Montreal, Que., Canada</InstituteName><country></country></institute></author><author><AuthorName>R. Missaoui</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Concordia Univ., Montreal, Que., Canada</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We consider a class of linear query programs and integrity constraints and develop methods for (i) computing the residues and (ii) pushing them inside the recursive programs, minimizing redundant computation and run-time overhead. We also discuss applications of our strategy to intelligent query answering.</abstract></paper>
<paper><title>Computing temporal aggregates</title><author><AuthorName>N. Kline</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Arizona Univ., Tucson, AZ, US</InstituteName><country></country></institute></author><author><AuthorName>R.T. Snodgrass</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Arizona Univ., Tucson, AZ, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Aggregate computation, such as selecting the minimum attribute value of a relation, is expensive, especially in a temporal database. We describe the basic techniques behind computing aggregates in conventional databases and show that these techniques are not efficient when applied to temporal databases. We examine the problem of computing constant intervals (intervals of time for which the aggregate value is constant) used for temporal grouping. We introduce two new algorithms for computing temporal aggregates: the aggregation tree and the k-ordered aggregation tree. An empirical comparison demonstrates that the choice of algorithm depends in part on the amount of memory available, the number of tuples in the underlying relation, and the degree to which the tuples are ordered. This study shows that the simplest strategy is to first sort the underlying relation, then apply the k-ordered aggregation tree algorithm with k=1.</abstract></paper>
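<!--
The "constant intervals" notion in the temporal-aggregation abstract above, made
concrete for COUNT: split time at tuple endpoints and report the intervals over
which the aggregate value stays fixed. An event-sweep sketch only, not the
paper's aggregation-tree algorithms.

def temporal_count(tuples):
    # tuples: (start, end) periods, end exclusive;
    # returns [(start, end, count)] constant intervals
    events = sorted({t for s, e in tuples for t in (s, e)})
    out = []
    for lo, hi in zip(events, events[1:]):
        count = sum(1 for s, e in tuples if s <= lo and hi <= e)
        if count:
            out.append((lo, hi, count))
    return out

print(temporal_count([(1, 5), (3, 8)]))  # [(1, 3, 1), (3, 5, 2), (5, 8, 1)]
-->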
<paper><title>SEQ: A model for sequence databases</title><author><AuthorName>P. Seshadri</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Wisconsin Univ., Madison, WI, US</InstituteName><country></country></institute></author><author><AuthorName>M. Livny</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Wisconsin Univ., Madison, WI, US</InstituteName><country></country></institute></author><author><AuthorName>R. Ramakrishnan</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Wisconsin Univ., Madison, WI, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>This paper presents the SEQ model which is the basis for a system to manage various kinds of sequence data. The model separates the data from the ordering information, and includes operators based on two distinct abstractions of a sequence. The main contributions of the SEQ model are: (a) it can deal with different types of sequence data, (b) it supports an expressive range of sequence queries, (c) it draws from many of the diverse existing approaches to modeling sequence data.</abstract></paper>
<paper><title>A version numbering scheme with a useful lexicographical order</title><author><AuthorName>A.M. Keller</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>J.D. Ullman</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We describe a numbering scheme for versions with alternatives that has a useful lexicographical ordering. The version hierarchy is a tree. By inspection of the version numbers, we can easily determine whether one version is an ancestor of another. If so, we can determine the version sequence between these two versions. If not, we can determine the most recent common ancestor to these two versions (i.e., the least upper bound, lub). Sorting the version numbers lexicographically results in a version being followed by all descendants and preceded by all its ancestors. We use a representation of nonnegative integers that is self-delimiting and whose lexicographical ordering matches the ordering by value.</abstract></paper>
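<!--
The version-numbering abstract above can be illustrated with one concrete
self-delimiting, order-preserving component code (a length digit followed by the
decimal digits, good to ten-digit components). The paper's exact representation
may differ; the properties shown (prefix test for ancestry, sort order placing
every version before its descendants) are the ones claimed.

def key(version):                        # version: a tuple like (1, 2, 1)
    return "".join(str(len(str(c)) - 1) + str(c) for c in version)

def is_ancestor(a, b):                   # ancestor-or-self, by prefix test
    return key(b).startswith(key(a))

versions = [(1,), (1, 1), (1, 2), (1, 1, 1), (2,), (1, 10)]
for v in sorted(versions, key=key):
    print(v)        # each version precedes all of its descendants
print(is_ancestor((1,), (1, 10)))        # True
print(is_ancestor((1, 1), (1, 2)))       # False
-->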
<paper><title>Object exchange across heterogeneous information sources</title><author><AuthorName>Y. Papakonstantinou</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>H. Garcia-Molina</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>J. Widom</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We address the problem of providing integrated access to diverse and dynamic information sources. We explain how this problem differs from the traditional database integration problem and we focus on one aspect of the information integration problem, namely information exchange. We define an object-based information exchange model and a corresponding query language that we believe are well suited for integration of diverse information sources. We describe how the model and language have been used to integrate heterogeneous bibliographic information sources. We also describe two general-purpose libraries we have implemented for object exchange between clients and servers.</abstract></paper>
<paper><title>A universal relation approach to federated database management</title><author><AuthorName>J.L. Zhao</AuthorName><institute><InstituteName>Sch. of Bus. Adm., Coll. of William & Mary, Williamsburg, VA, US</InstituteName><country></country></institute></author><author><AuthorName>A. Segev</AuthorName><institute><InstituteName>Sch. of Bus. Adm., Coll. of William & Mary, Williamsburg, VA, US</InstituteName><country></country></institute></author><author><AuthorName>A. Chatterjee</AuthorName><institute><InstituteName>Sch. of Bus. Adm., Coll. of William & Mary, Williamsburg, VA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We describe a manufacturing environment where, driven by market forces, organizations cooperate as well as compete with one another. We argue that a federated database system (FDBS) is appropriate for such an environment. Contrary to conventional wisdom, complete transparency, assumed desirable and mandatory in distributed database systems, is neither desirable nor feasible in this environment. We propose a new approach that is based on schema coordination rather than integration under which each component database is free to change its data structure, attribute naming, and data semantics. A federated metadata model based on the notion of universal relation is introduced for the FDBS. We also develop the query processing paradigm, and present procedures for query transformation and heterogeneity resolution.</abstract></paper>
<paper><title>Query interoperation among object-oriented and relational databases</title><author><AuthorName>Xiaolei Qian</AuthorName><institute><InstituteName>Comput. Sci. Lab., SRI Int., Menlo Park, CA, US</InstituteName><country></country></institute></author><author><AuthorName>L. Raschid</AuthorName><institute><InstituteName>Comput. Sci. Lab., SRI Int., Menlo Park, CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We develop an efficient algorithm for the query interoperation among existing heterogeneous object-oriented and relational databases. Our algorithm utilizes a canonical deductive database as a uniform representation of object-oriented schema and data. High-order object queries are transformed to the canonical deductive database in which they are partially evaluated and optimized, before being translated to relational queries. Our algorithm can be incorporated into object-oriented interfaces to relational databases or object-oriented federated databases to support object queries to heterogeneous relational databases.</abstract></paper>
<paper><title>Design, implementation and evaluation of SCORE (a system for content-based retrieval of pictures)</title><author><AuthorName>Y.A. Aslandogan</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>C. Thier</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>C.T. Yu</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>Chengwen Liu</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><author><AuthorName>K.R. Nair</AuthorName><institute><InstituteName>Dept. of Electr. Eng. & Comput. Sci., Illinois Univ., Chicago, IL, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We make use of a refined E-R model to represent the contents of pictures. We propose remedies to handle mismatches which may arise due to differences in perception of picture contents. An iconic user interface for visual query construction is presented. A naive user can specify his/her intention without learning a query language. A function which computes the similarity between a picture and a user's description is provided. Pictures which are sufficiently close to the user description, as measured by the similarity function, are retrieved. We present the results of a user-friendliness experiment to evaluate the user interface as well as retrieval effectiveness. Encouraging retrieval results and valuable lessons are obtained.</abstract></paper>
<paper><title>RBE: Rendering by example</title><author><AuthorName>R. Krishnamurthy</AuthorName><institute><InstituteName>Hewlett-Packard Co., Palo Alto, CA, US</InstituteName><country></country></institute></author><author><AuthorName>M. Zloof</AuthorName><institute><InstituteName>Hewlett-Packard Co., Palo Alto, CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Rendering is defined to be a customized presentation of data that allows users to subsequently interact with the presented data. Traditionally such a user interface would be a custom application written using conventional programming languages; in contrast we propose an application-independent, declarative (i.e., what-you-want) language that we call Rendering By Example, RBE, with the capability to specify a wide variety of renderings. RBE is a domain calculus language over user interface widgets. Most previous domain calculus database languages (e.g., QBE, LDL, Datalog) mainly addressed the data processing problem. The main contribution in developing RBE is to model semantics of user interactions in a declarative way. This declarative specification not only allows quick and ad-hoc specification of renderings (i.e., user interfaces) but also provides a framework to understand renderings as an abstract concept, independent of the application. Further, such a linguistic abstraction provides the basis for user-interface research. RBE is part of the ICBE language that is being prototyped in the Picture Programming project at HP Labs.</abstract></paper>
<paper><title>Improving SQL with generalized quantifiers</title><author><AuthorName>Ping-Yu Hsu</AuthorName><institute><InstituteName>Dept. of Comput. Sci., California Univ., Los Angeles, CA, US</InstituteName><country></country></institute></author><author><AuthorName>D.S. Parker</AuthorName><institute><InstituteName>Dept. of Comput. Sci., California Univ., Los Angeles, CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>A generalized quantifier is a particular kind of operator on sets. Generalized quantifiers have recently come under increasing attention from linguists and logicians; they correspond to many useful natural language phrases, including phrases like: three, Chamberlin's three, more than three, fewer than three, at most three, all but three, no more than three, not more than half the, at least two and not more than three, no student's, most male and all female, etc. Reasoning about quantifiers is a source of recurring problems for most SQL users, and leads to both confusion and incorrect expression of queries. By adopting a more modern and natural model of quantification these problems can be alleviated. We show how generalized quantifiers can be used to improve the SQL interface.</abstract></paper>
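<!--
A taste of what the generalized quantifiers above would replace: a phrase like
"at least two and not more than three" becomes one declarative condition rather
than nested EXISTS/COUNT subqueries. Hypothetical schema and data (sqlite3).

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE enrolled (student TEXT, course TEXT);
INSERT INTO enrolled VALUES
  ('ann','db'),('ann','os'),('bob','db'),
  ('cal','db'),('cal','os'),('cal','ai'),('cal','pl');
""")
# students taking "at least two and not more than three" courses
rows = con.execute("""
  SELECT student FROM enrolled
  GROUP BY student
  HAVING COUNT(*) BETWEEN 2 AND 3
""").fetchall()
print(rows)  # [('ann',)] since bob has 1 course and cal has 4
-->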
Use of the traditional design and implementation of a lock manager results in a high CPU overhead: in-cache traversals of the OO7 benchmark run, at best, 4.5 times slower than the same traversal achieved in virtual memory by a nonpersistent programming language. We propose a new design and implementation of a lock manager which cuts that factor down to 1.8. This lock manager supports nested transactions with both sibling and parent/child parallelism, and provides object locking at a cost comparable to page locking. Object locking is therefore a better alternative due to its higher functionality.</abstract></paper><paper><title>Disk read-write optimizations and data integrity in transaction systems using write-ahead logging</title><author><AuthorName>C. Mohan</AuthorName><institute><InstituteName>IBM Almaden Res. Center, San Jose, CA, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We discuss several disk read-write optimizations that are implemented in different transaction systems and disk hardware to improve performance. These include: (1) allowing the sectors of a multi-sector write to reach the disk out of sequence (SCSI disk interfaces do this); (2) avoiding initializing pages on disk when a file is extended; (3) not accessing individual pages during a mass delete operation (e.g., dropping an index from a file which contains multiple indexes); (4) permitting a previously deallocated page to be reallocated without the need to read the deallocated version of the page from disk during its reallocation; (5) purging file pages from the buffer pool during a file erase operation (e.g., a table drop); and (6) avoiding logging for bulk operations like index create. We consider a system which implements the above optimizations and in which a page consists of multiple disk sectors and recovery is based on write-ahead logging using a log sequence number on every page. For such a system, we present a simple method for guaranteeing the detection of the partial disk write of a page. Detecting partial writes is very important not only to ensure data integrity from the users' viewpoint but also to make the transaction system software work correctly. Once a partial write is detected, it is easy to recover such a page using media recovery techniques. Our method imposes minimal CPU and space overheads. It has been implemented in DB2/6000 and ADSM.</abstract></paper><paper><title>Deputy mechanisms for object-oriented databases</title><author><AuthorName>Zhiyong Peng</AuthorName><institute><InstituteName>Fac. of Eng., Kyoto Univ., Japan</InstituteName><country></country></institute></author><author><AuthorName>Y. Kambayashi</AuthorName><institute><InstituteName>Fac. of Eng., Kyoto Univ., Japan</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Concepts of deputy objects and deputy classes for object-oriented databases (OODBs) are introduced. They can be used for unified realization of object views, roles and migration. Previous research on these concepts was carried out separately, although they are very closely related. Objects appearing in a view can be regarded as playing roles in that view. Object migration is caused by change of roles of an object. Deputy objects can be used for unified treatment of them and generalization of these concepts.
The schemata of deputy objects are defined by deputy classes. A set of algebraic operations is developed for deputy class derivation. In addition, three procedures for update propagation between deputy objects and source objects have been designed, which can support dynamic classification. The unified realization of object views, roles and migration by deputy mechanisms achieves the following advantages: (1) treating view objects as roles of an object allows them to have additional attributes and methods, so that autonomous views suitable for OODBs can be realized; (2) handling object roles in the same way as object views enables object migration to be easily realized by the dynamic classification functions of object views; (3) generalizing object views, roles and migration makes it possible for various semantic constraints on them to be defined and enforced uniformly.</abstract></paper><paper><title>ECA rule integration into an OODBMS: architecture and implementation</title><author><AuthorName>S. Chakravarthy</AuthorName><institute><InstituteName>Dept. of Comput. & Inf. Sci., Florida Univ., Gainesville, FL, US</InstituteName><country></country></institute></author><author><AuthorName>V. Krishnaprasad</AuthorName><institute><InstituteName>Dept. of Comput. & Inf. Sci., Florida Univ., Gainesville, FL, US</InstituteName><country></country></institute></author><author><AuthorName>Z. Tamizuddin</AuthorName><institute><InstituteName>Dept. of Comput. & Inf. Sci., Florida Univ., Gainesville, FL, US</InstituteName><country></country></institute></author><author><AuthorName>R.H. Badani</AuthorName><institute><InstituteName>Dept. of Comput. & Inf. Sci., Florida Univ., Gainesville, FL, US</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Making a database system active entails not only the specification of expressive ECA (event-condition-action) rules, algorithms for the detection of composite events, and rule management, but also a viable architecture for rule execution that extends a passive DBMS, and its implementation. We propose an integrated active DBMS architecture for incorporating ECA rules using the Open OODB Toolkit (from Texas Instruments). We then describe the implementation of the composite event detector and the rule execution model for an object-oriented active DBMS. Finally, the functionality supported by this architecture and its extensibility are analyzed, along with the experiences gained.</abstract></paper><paper><title>Infobusiness issues in ROC</title><author><AuthorName>Lung-Lung Liu</AuthorName><institute><InstituteName>Inst. for Inf. Ind., Taiwan</InstituteName><country></country></institute></author><year>1995</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>The infobusiness operation has been popular for many years in the ROC. Management information systems in the government, military, and enterprises were the original applications; the information service requirement from various kinds of users came later. Computer networks, database systems, and information providers together shaped the initial infobusiness environment. Major infobusiness operations still provide closed systems to their customers. Issues in infobusiness development include: (1) closed systems have limited the infobusiness opportunity;
(2) Chinese character handling and the inconvenient localized environment have blocked both users and vendors in information service applications.</abstract></paper></proceedings>