
📄 icde_1997_elementary.txt

📁 Written with lwp::get
💻 TXT
📖 Page 1 of 5
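The listing below is the raw XML of the crawled ICDE 1997 program data. As a convenience, here is a minimal parsing sketch (Python, standard library only; not the uploader's lwp::get crawler). The tag names (proceedings, paper, title, author, AuthorName, year, conference, abstract) are taken from the dump itself; the load_papers helper and the assumption of a complete, well-formed file are illustrative, since this page shows only part 1 of 5.

    # Minimal sketch: parse the proceedings dump into per-paper records.
    # Assumes the full, well-formed icde_1997_elementary.txt file; tag names
    # come from the dump below, the helper name is illustrative.
    import xml.etree.ElementTree as ET

    def load_papers(path="icde_1997_elementary.txt"):
        """Return a list of dicts, one per <paper> element."""
        tree = ET.parse(path)
        papers = []
        for paper in tree.getroot().iter("paper"):
            papers.append({
                "title":      paper.findtext("title", default=""),
                "year":       paper.findtext("year", default=""),
                "conference": paper.findtext("conference", default=""),
                "abstract":   paper.findtext("abstract", default=""),
                "authors":    [a.findtext("AuthorName", default="")
                               for a in paper.findall("author")],
            })
        return papers

    if __name__ == "__main__":
        for p in load_papers():
            print(p["year"], "-", p["title"], "-", "; ".join(p["authors"]))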
<proceedings><paper><title>Program Chairs Message</title><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract></abstract></paper><paper><title>Program Area Co-Chairs</title><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract></abstract></paper><paper><title>External Reviewers</title><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract></abstract></paper><paper><title>SEOF: An Adaptable Object Prefetch Policy for Object-Oriented Database Systems</title><author><AuthorName>Jung-Ho Ahn</AuthorName><institute><InstituteName>Seoul National Universit</InstituteName><country></country></institute></author><author><AuthorName>Hyoung-Joo Kim</AuthorName><institute><InstituteName>Seoul National Universit</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>The performance of object access can be drastically improved by efficient object prefetch. In this paper we present a new object prefetch policy, Selective Eager Object Fetch(SEOF) which prefetches objects only from selected candidate pages without using any high level object semantics. Our policy considers both the correlations and the frequencies of fetching objects. Unlike existing prefetch policies, this policy utilizes the memory and the swap space of clients efficiently without resource exhaustion. Furthermore, the proposed policy has good adaptability to both the effectiveness of clustering and database size. We show the performance of the proposed policy through experiments over various multi-client system configurations.</abstract></paper><paper><title>Indexing OODB instances based on access proximity</title><author><AuthorName>Chee Yong Chan</AuthorName><institute><InstituteName>Dept. of Inf. Syst. &amp; Comput. Sci., Nat. Univ. of Singapore, Singapor</InstituteName><country></country></institute></author><author><AuthorName>Cheng Hian Goh</AuthorName><institute><InstituteName>Dept. of Inf. Syst. &amp; Comput. Sci., Nat. Univ. of Singapore, Singapor</InstituteName><country></country></institute></author><author><AuthorName>Beng Chin Ooi</AuthorName><institute><InstituteName>Dept. of Inf. Syst. &amp; Comput. Sci., Nat. Univ. of Singapore, Singapor</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Queries in object-oriented databases (OODBs) may be asked with respect to different class scopes: a query may either request for object-instances which belong exclusively to a given class c, or those which belong to any class in the hierarchy rooted at c. To facilitate retrieval of objects both from a single class as well as from multiple classes in a class hierarchy, we propose a multi-dimensional class-hierarchy index called the /spl chi/-tree. The /spl chi/-tree dynamically partitions the data space using both the class and indexed attribute dimensions by taking into account the semantics of the class dimension as well as access patterns of queries. Experimental results show that it is an efficient index.</abstract></paper><paper><title>The multikey type index for persistent object sets</title><author><AuthorName>T.A. 
Mueck</AuthorName><institute><InstituteName>Abteilung Data Eng., Wien Univ., Austri</InstituteName><country></country></institute></author><author><AuthorName>M.L. Polaschek</AuthorName><institute><InstituteName>Abteilung Data Eng., Wien Univ., Austri</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Multikey index structures for type hierarchies are a recently discussed alternative to traditional B/sup +/-tree indexing schemes. We describe an efficient implementation of this alternative called the multikey type index (MT-index). A prerequisite for our approach is an optimal linearization of the type hierarchy that allows us to map queries in object type hierarchies to minimal-volume range queries in multi-attribute search structures. This provides access to an already-existing large and versatile tool-box. The outline of an index implementation by means of a multi-attribute search structure (e.g. the hB-tree or any other structure with comparable performance) is followed by an analytical performance evaluation. Selected performance figures are compared to previous approaches, in particular to the H-tree and the class hierarchy tree. The comparison results allow for practically relevant conclusions with respect to index selection based on query profiles.</abstract></paper><paper><title>Distributing semantic constraints between heterogeneous databases</title><author><AuthorName>S. Grufman</AuthorName><institute><InstituteName>Dept. of Comput. &amp; Inf. Sci., Linkoping Univ., Swede</InstituteName><country></country></institute></author><author><AuthorName>F. Samson</AuthorName><institute><InstituteName>Dept. of Comput. &amp; Inf. Sci., Linkoping Univ., Swede</InstituteName><country></country></institute></author><author><AuthorName>M. Embury</AuthorName><institute><InstituteName>Dept. of Comput. &amp; Inf. Sci., Linkoping Univ., Swede</InstituteName><country></country></institute></author><author><AuthorName>P.M.D. Gray</AuthorName><institute><InstituteName>Dept. of Comput. &amp; Inf. Sci., Linkoping Univ., Swede</InstituteName><country></country></institute></author><author><AuthorName>T. Risch</AuthorName><institute><InstituteName>Dept. of Comput. &amp; Inf. Sci., Linkoping Univ., Swede</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>In recent years, research on distributing databases over networks has become increasingly important. In this paper, we concentrate on the issues of the interoperability of heterogeneous DBMSs and enforcing integrity across a multi-database made in this fashion. This has been done through a cooperative project between Aberdeen and Linko/spl uml/ping universities, with database modules distributed between the sites. In the process, we have shown the advantage of using DBMSs based on variants of the functional data model (FDM), which has made it remarkably straightforward to interoperate queries and schema definitions. Further, we have used the constraint transformation facilities of P/FDM (Prolog implementation of FDM) to compile global constraints into active rules installed locally on one or more AMOS (Active Mediators Object System) servers. 
We present the theory behind this, and the conditions for it to improve performance.</abstract></paper><paper><title>Semantic dictionary design for database interoperability</title><author><AuthorName>S. Castano</AuthorName><institute><InstituteName>Milano Univ., ital</InstituteName><country></country></institute></author><author><AuthorName>V. De Antonellis</AuthorName><institute><InstituteName>Milano Univ., ital</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Criteria and techniques to support the establishment of a semantic dictionary for database interoperability are described. The techniques allow the analysis of conceptual schemas of databases in a federation and the definition and maintenance of concept hierarchies. Similarity-based criteria are used to evaluate concept closeness and, consequently, to generate concept hierarchies. Experimentation of the techniques in the public administration domain is discussed.</abstract></paper><paper><title>WOL: a language for database transformations and constraints</title><author><AuthorName>S.B. Davidson</AuthorName><institute><InstituteName>Dept. of Comput. &amp; Inf. Sci., Pennsylvania Univ., Philadelphia, PA, US</InstituteName><country></country></institute></author><author><AuthorName>A.S. Kosky</AuthorName><institute><InstituteName>Dept. of Comput. &amp; Inf. Sci., Pennsylvania Univ., Philadelphia, PA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>The need to transform data between heterogeneous databases arises from a number of critical tasks in data management. These tasks are complicated by schema evolution in the underlying databases and by the presence of non-standard database constraints. We describe a declarative language called WOL (Well-founded Object Logic) for specifying such transformations, and its implementation in a system called Morphase (an &amp;quot;enzyme&amp;quot; for morphing data). WOL is designed to allow transformations between the complex data structures which arise in object-oriented databases as well as in complex relational databases, and to allow for reasoning about the interactions between database transformations and constraints.</abstract></paper><paper><title>A propagation mechanism for populated schema versions</title><author><AuthorName>S.-E. Lautemann</AuthorName><institute><InstituteName>Fachbereich Inf., Frankfurt Univ., German</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Object-oriented database systems (OODBMS) offer powerful modeling concepts as required by advanced application domains like CAD/CAM/CAE or office automation. Typical applications have to handle large and complex structured objects which frequently change their value and their structure. As the structure is described in the schema of the database, support for schema evolution is a highly required feature. Therefore, a set of schema update primitives must be provided which can be used to perform the required changes, even in the presence of populated databases and running applications. In this paper, we use the versioning approach to schema evolution to support schema updates as a complex design task. 
The presented propagation mechanism is based on conversion functions that map objects between different types and can be used to support schema evolution and schema integration.</abstract></paper><paper><title>Representative objects: concise representations of semistructured, hierarchical data</title><author><AuthorName>S. Nestorov</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>J. Ullman</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>J. Wiener</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>S. Chawathe</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Introduces the concept of representative objects, which uncover the inherent schema(s) in semi-structured, hierarchical data sources and provide a concise description of the structure of the data. Semi-structured data, unlike data stored in typical relational or object-oriented databases, does not have a fixed schema that is known in advance and stored separately from the data. With the rapid growth of the World Wide Web, semi-structured hierarchical data sources are becoming widely available to the casual user. The lack of external schema information currently makes browsing and querying these data sources inefficient at best, and impossible at worst. We show how representative objects make schema discovery efficient and facilitate the generation of meaningful queries over the data.</abstract></paper><paper><title>Supporting fine-grained data lineage in a database visualization environment</title><author><AuthorName>A. Woodruff</AuthorName><institute><InstituteName>Dept. of Electr. Eng. &amp; Comput. Sci., California Univ., Berkeley, CA, US</InstituteName><country></country></institute></author><author><AuthorName>M. Stonebraker</AuthorName><institute><InstituteName>Dept. of Electr. Eng. &amp; Comput. Sci., California Univ., Berkeley, CA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>The lineage of a datum records its processing history. Because such information can be used to trace the source of anomalies and errors in processed data sets, it is valuable to users for a variety of applications, including the investigation of anomalies and debugging. Traditional data lineage approaches rely on metadata. However, metadata does not scale well to fine-grained lineage, especially in large data sets. For example, it is not feasible to store all of the information that is necessary to trace from a specific floating-point value in a processed data set to a particular satellite image pixel in a source data set. In this paper, we propose a novel method to support fine-grained data lineage. Rather than relying on metadata, our approach lazily computes the lineage using a limited amount of information about the processing operators and the base data. We introduce the notions of weak inversion and verification. 
While our system does not perfectly invert the data, it uses weak inversion and verification to provide a number of guarantees about the lineage it generates. We propose a design for the implementation of weak inversion and verification in an object-relational database management system.</abstract></paper><paper><title>Quantifying complexity and performance gains of distributed caching in a wireless network environment</title><author><AuthorName>C.C.F. Fong</AuthorName><institute><InstituteName>Dept. of Comput. Sci. &amp; Eng., Chinese Univ. of Hong Kong, Shatin, Hong Kon</InstituteName><country></country></institute></author><author><AuthorName>J.C.S. Lui</AuthorName><institute><InstituteName>Dept. of Comput. Sci. &amp; Eng., Chinese Univ. of Hong Kong, Shatin, Hong Kon</InstituteName><country></country></institute></author><author><AuthorName>Man Hon Wong</AuthorName><institute><InstituteName>Dept. of Comput. Sci. &amp; Eng., Chinese Univ. of Hong Kong, Shatin, Hong Kon</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>In a mobile computing system, the wireless communication bandwidth is a scarce resource that needs to be managed carefully. In this paper, we investigate the use of distributed caching as an approach to reduce the wireless bandwidth consumption for data access. We find that conventional caching techniques cannot fully utilize the dissemination feature of the wireless channel. We thus propose a novel distributed caching protocol that can minimize the overall system bandwidth consumption at the cost of CPU processing time at the server side. This protocol allows the server to select data items into a broadcast set, based on a performance gain parameter called the bandwidth gain, and then send the broadcast set to all the mobile computers within the server's cell. We show that in general, this selection process is NP-hard, and therefore we propose a heuristic algorithm that can attain a near-optimal performance. We also propose an analytical model for the protocol and derive closed-form performance measures, such as the bandwidth utilization and the expected response time of data access by mobile computers. Experiments show that our distributed caching protocol can greatly reduce the bandwidth consumption so that the wireless network environment can accommodate more users and, at the same time, vastly improve the expected response time for data access by mobile computers.</abstract></paper><paper><title>On incremental cache coherency schemes in mobile computing environments</title><author><AuthorName>Jun Cai</AuthorName><institute><InstituteName>Dept. of Inf. Syst. &amp; Comput. Sci., Nat. Univ. of Singapore, Singapor</InstituteName><country></country></institute></author><author><AuthorName>Kian-Lee Tan</AuthorName><institute><InstituteName>Dept. of Inf. Syst. &amp; Comput. Sci., Nat. Univ. of Singapore, Singapor</InstituteName><country></country></institute></author><author><AuthorName>Beng Chin Ooi</AuthorName><institute><InstituteName>Dept. of Inf. Syst. &amp; Comput. Sci., Nat. Univ. of Singapore, Singapor</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Re-examines the cache coherency problem in a mobile computing environment in the context of relational operations (i.e. selection, projection and join). 
We propose a taxonomy of cache coherency schemes, and as case studies, we pick several schemes for further study. These schemes are novel in several ways. First, they are incremental. Second, they are an integration of (and built on) techniques in view maintenance in centralized systems and cache invalidation in client-server computing environments. We conducted extensive studies based on a simulation model. Our study shows the effectiveness of these algorithms in reducing uplink transmission and average access times. Moreover, the class of algorithms that exploit collaboration between the client and server performs best in most cases. We also study extended versions of this class of algorithms to further cut down on the work performed by the server.</abstract></paper><paper><title>Adaptive broadcast protocols to support power conservant retrieval by mobile users</title><author><AuthorName>A. Datta</AuthorName><institute><InstituteName>Arizona Univ., Tucson, AZ, US</InstituteName><country></country></institute></author><author><AuthorName>A. Celik</AuthorName><institute><InstituteName>Arizona Univ., Tucson, AZ, US</InstituteName><country></country></institute></author><author><AuthorName>J. Kim</AuthorName><institute><InstituteName>Arizona Univ., Tucson, AZ, US</InstituteName><country></country></institute></author><author><AuthorName>D.E. VanderMeer</AuthorName><institute><InstituteName>Arizona Univ., Tucson, AZ, US</InstituteName><country></country></institute></author><author><AuthorName>V. Kumar</AuthorName><institute><InstituteName>Arizona Univ., Tucson, AZ, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Mobile computing has the potential for managing information globally. Data management issues in mobile computing have received some attention in recent times, and the design of adaptive broadcast protocols has been posed as an important problem. Such protocols are employed by database servers to decide on the content of broadcasts dynamically, in response to client mobility and demand patterns. In this paper we design such protocols and also propose efficient retrieval strategies that may be employed by clients to download information from broadcasts. The goal is to design cooperative strategies between server and client to provide access to information in such a way as to minimize energy expenditure by clients. We evaluate the performance of our protocols analytically.</abstract></paper><paper><title>System design for digital media asset management</title><author><AuthorName>P.D. Fisher</AuthorName><institute><InstituteName>Commun. Arts, Egham, U</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Client-server computing is only now able to deliver genuine gains to broadcasters, publishers, creative agencies and production facilities. These new users are entering the distributed computing domain just as media object technologies and commercial broadband services emerge from infancy. This paper discusses the complex process of decision-making and system design for digital media asset management. The Cinebase Digital Media Management System is described and used to illustrate critical points. The digital media server architecture must accommodate extremely large datasets, scaleable content retrieval and rapid network query response. 
Cinebase installations are currently in place containing 100s of Terabytes of media content, and 100s of local and remote users. Cinebase has recently been ported to ObjectStore by Object Design Inc. (ODI). The presentation discusses how, in combination, the Cinebase application and ODI extensions can be used to deliver a complete object management environment for production-quality content. Examples of the new workflows, and the technical issues complex networks raise for database architecture, are also discussed.</abstract></paper><paper><title>Media asset management: managing complex data as a re-engineering exercise</title><author><AuthorName>P. de Vries</AuthorName><institute><InstituteName>Bulldog Group Inc., US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Building a media asset management application involves the storing, searching, and retrieving of complex data. How this data is managed can be viewed from two perspectives-in terms of the internal representation required to allow for high-speed searching and transferring of these items between systems, but also from the end-user perspective. This paper focuses on the perspective that media asset management is a re-engineering exercise whose fundamental goal is to eliminate the file system and its underlying classification model. The paper discusses the following topics: folder/director classification schemes; file system security; cross platform file transfers; traditional searching techniques and constraints; the data characteristics of a media asset management solution; an architectural mapping of elements required to manage complex data, including source, proxy, and metadata; a media asset management approach to classification including business semantics; content based search algorithms; and the feasibility of a database replacing the file system.</abstract></paper><paper><title>Content is king (If you can find it): a new model for knowledge storage and retrieval</title><author><AuthorName>F.L. Wurden</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>The technology for acquiring and storing vast amounts of complex data is accelerating at a much faster rate than the technology for retrieving and analyzing that data. While progress has been made with OODB, OLAP, and knowledge discovery (KD) systems, users of these systems are still required to know and supply missing semantic information. When dealing with complex real-world representations, this is often nearly impossible to do. We discuss a new model that provides significant improvements in storing, correlating, and navigating information. We first provide a brief background looking at other relevant knowledge representation approaches, then describe our patented Contiguous Connection Model. 
Finally we discuss the impact this technology has had on a large, high-value, digital media knowledge base.</abstract></paper><paper><title>Relational Joins for Data on Tertiary Storage</title><author><AuthorName>Jussi Myllymaki</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Miron Livny</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Despite the steady decrease in secondary storage prices, the data storage requirements of many organizations cannot be met economically using secondary storage alone. Tertiary storage offers a lower-cost alternative but is viewed as a second-class citizen in many systems. For instance, the typical solution in bringing tertiary-resident data under the control of a DBMS is to use operating system facilities to copy the data to secondary storage, and then to perform query optimization and execution as if the data had been in secondary storage all along. This approach fails to recognize the opportunities for saving execution time and storage space if the data were accessed directly on tertiary devices and in parallel with other I/Os. In this paper we explore how to join two DBMS relations stored on magnetic tapes. Both relations are assumed to be larger than available disk space. We show how Grace Hash Join can be modified to handle a range of tape relation sizes. The modified algorithms access data directly on tapes and exploit parallelism between disk and tape I/Os. We also provide performance results of an experimental implementation of the algorithms.</abstract></paper><paper><title>Selectivity Estimation in the Presence of Alphanumeric Correlations</title><author><AuthorName>Min Wang</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Jeffrey Scott Vitter</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Query optimization is an integral part of relational database management systems. One important task in query optimization is selectivity estimation, that is, given a query P, we need to estimate the fraction of records in the database that satisfy P. Almost all previous work dealt with the estimation of numeric selectivity, i.e., the query contains only numeric variables. The general problem of estimating alphanumeric selectivity is much more difficult and has attracted attention only very recently, and the focus has been on the special case when only one column is involved. In this paper, we consider the more general case when there are two correlated alphanumeric columns. We develop efficient algorithms to build storage structures that can fit in a database catalog. Results from our extensive experiments to test our algorithms, on the basis of error analysis and space requirements, are given to guide DBMS implementors.</abstract></paper><paper><title>Similarity Based Retrieval of Videos</title><author><AuthorName>A. 
Prasad Sistla</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Clement Yu</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Raghu Venkatasubrahmanian</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract></abstract></paper><paper><title>Teaching an OLTP database kernel advanced datawarehousing techniques</title><author><AuthorName>C.D. French</AuthorName><institute><InstituteName>Sybase Inc., Burlington, MA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Most, if not all, of the major commercial database products available today were written more than 10 years ago. Their internal designs have always been heavily optimized for OLTP applications. Over the last couple of years as DSS and data warehousing have become more important, database companies have attempted to increase their performance with DSS-type applications. Most of their attempts have been in the form of added features like parallel table scans and simple bitmap indexing techniques. These were chosen because they could be quickly implemented (1-2 years), giving some level of increased query performance. The paper contends that the real performance gains for the DSS application have not yet been realized. The performance gains for DSS will not come from parallel table scans, but from major changes to the low level database storage management used by OLTP systems. One Sybase product, Sybase-IQ has pioneered some of these new techniques. The paper discusses a few of these techniques and how they could be integrated into an existing OLTP database kernel.</abstract></paper><paper><title>Data Warehousing: Dealing with the Growing Pains</title><author><AuthorName>Rob Armstrong</AuthorName><institute><InstituteName>NCR Corporatio</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>A data warehouse provides a customer with information to run and plan their business. It is true that if the data warehouse can not quickly adapt to changes in the environment then the company will lose the advantage that information provides. A warehouse must be built with a solid foundation that is flexible and responsive to business changes. The purpose of this paper is to share experiences in the area of managing the growth within the data warehouse. There are many technical issues that need to be addressed as the data warehouse grows in multiple dimensions. The ideas in this paper should enable you to provide the correct foundation for a long term warehouse. Very few companies are discussing these issues and the lack of discussion leads to a lack of knowledge that will further lead to poor architectural choices. This paper will articulate not only the benefits that are derived from data warehousing today but how to prepare to reap benefits for many tomorrow s. It will also explore the questions to ask, the points to make, and the issues to be addressed to have a long term successful data warehouse project.</abstract></paper><paper><title>Index selection for OLAP</title><author><AuthorName>H. 
Gupta</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>V. Harinarayan</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>A. Rajaraman</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>J.D. Ullman</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>On-line analytical processing (OLAP) is a recent and important application of database systems. Typically, OLAP data is presented as a multidimensional &amp;quot;data cube.&amp;quot; OLAP queries are complex and can take many hours or even days to run, if executed directly on the raw data. The most common method of reducing execution time is to precompute some of the queries into summary tables (subcubes of the data cube) and then to build indexes on these summary tables. In most commercial OLAP systems today, the summary tables that are to be precomputed are picked first, followed by the selection of the appropriate indexes on them. A trial-and-error approach is used to divide the space available between the summary tables and the indexes. This two-step process can perform very poorly. Since both summary tables and indexes consume the same resource-space-their selection should be done together for the most efficient use of space. The authors give algorithms that automate the selection of summary tables and indexes. In particular, they present a family of algorithms of increasing time complexities, and prove strong performance bounds for them. The algorithms with higher complexities have better performance bounds. However, the increase in the performance bound is diminishing, and they show that an algorithm of moderate complexity can perform fairly close to the optimal.</abstract></paper><paper><title>Clustering association rules</title><author><AuthorName>B. Lent</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>A. Swami</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><author><AuthorName>J. Widom</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Stanford Univ., CA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>The authors consider the problem of clustering two-dimensional association rules in large databases. They present a geometric-based algorithm, BitOp, for performing the clustering, embedded within an association rule clustering system, ARCS. Association rule clustering is useful when the user desires to segment the data. They measure the quality of the segmentation generated by ARCS using the minimum description length (MDL) principle of encoding the clusters on several databases including noise and errors. 
Scale-up experiments show that ARCS, using the BitOp algorithm, scales linearly with the amount of data.</abstract></paper><paper><title>Modeling Multidimensional Databases</title><author><AuthorName>Rakesh Agrawal</AuthorName><institute><InstituteName>IBM Almaden Research Cente</InstituteName><country></country></institute></author><author><AuthorName>Ashish Gupta</AuthorName><institute><InstituteName>IBM Almaden Research Cente</InstituteName><country></country></institute></author><author><AuthorName>Sunita Sarawagi</AuthorName><institute><InstituteName>IBM Almaden Research Cente</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We propose a data model and a few algebraic operations that provide semantic foundation to multidimensional databases. The distinguishing feature of the proposed model is the symmetric treatment not only of all dimensions but also measures. The model provides support for multiple hierarchies along each dimension and support for adhoc aggregates. The proposed operators are composable, reorderable, and closed in application. These operators are also minimal in the sense that none can be expressed in terms of others nor can any one be dropped without sacrificing functionality. They make possible the declarative specification and optimization of multidimensional database queries that are currently specified operationally. The operators have been designed to be translated to SQL and can be implemented either on top of a relational database system or within a special purpose multidimensional database engine. In effect, they provide an algebraic application programming interface (API) that allows the separation of the frontend from the backend. Finally, the proposed model provides a framework in which to study multidimensional databases and opens several new research problems.</abstract></paper><paper><title>Failure handling for transaction hierarchies</title><author><AuthorName>Qiming Chen</AuthorName><institute><InstituteName>HP Labs., Palo Alto, CA, US</InstituteName><country></country></institute></author><author><AuthorName>U. Dayal</AuthorName><institute><InstituteName>HP Labs., Palo Alto, CA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Previously, failure recovery mechanisms have been developed separately for nested transactions and for transactional workflows specified as &amp;quot;flat&amp;quot; flow graphs. The paper develops unified techniques for complex business processes modeled as cooperative transaction hierarchies. Multiple cooperative transaction hierarchies often have operational dependencies, thus a failure occurring in one transaction hierarchy may need to be transferred to another. The existing transaction models do not support failure handling across transaction hierarchies. The authors introduce the notion of transaction execution history tree which allows one to develop a unified hierarchical failure recovery mechanism applicable to both nested and flat transaction structures. They also develop a cross-hierarchy undo mechanism for determining failure scopes and supporting backward and forward failure recovery over multiple transaction hierarchies. 
These mechanisms form a structured and unified approach for handling failures in flat transactional workflows, along a transaction hierarchy, and across transaction hierarchies.</abstract></paper><paper><title>An Argument in Favor of the Presumed Commit Protocol</title><author><AuthorName>Yousef J. Al-Houmaily</AuthorName><institute><InstituteName>University of Pittsburg</InstituteName><country></country></institute></author><author><AuthorName>Panos K. Chrysanthis</AuthorName><institute><InstituteName>University of Pittsburg</InstituteName><country></country></institute></author><author><AuthorName>Steven P. Levitan</AuthorName><institute><InstituteName>University of Pittsburg</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>We argue in favor of the presumed commit protocol by proposing two new presumed commit variants that significantly reduce the cost of logging activities associated with the original presumed commit protocol. Furthermore, for read-only transactions, we apply our unsolicited update-vote optimization and show that the cost associated with this type of transactions is the same in both presumed commit and presumed abort protocols, thus, nullifying the basis for the argument that favors the presumed abort protocol. This is especially important for modern distributed environments which are characterized by high reliability and high probability of transactions being committed rather than aborted.</abstract></paper><paper><title>Delegation: efficiently rewriting history</title><author><AuthorName>C.P. Martin</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Massachusetts Univ., Amherst, MA, US</InstituteName><country></country></institute></author><author><AuthorName>K. Ramamritham</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Massachusetts Univ., Amherst, MA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Transaction delegation, as introduced in ACTA, allows a transaction to transfer responsibility for the operations that it has performed on an object to another transaction. Delegation can be used to broaden the visibility of the delegatee, and to tailor the recovery properties of a transaction model. Delegation has been shown to be useful in synthesizing advanced transaction models. With an efficient implementation of delegation it becomes practicable to realize various advanced transaction models whose requirements are specified at a high level language instead of the current expensive practice of building them from scratch. The authors identify the issues in efficiently supporting delegation and hence advanced transaction models, and illustrate this with our solution in ARIES, an industrial-quality system that uses UNDO/REDO recovery. Since delegation is tantamount to rewriting history, a naive implementation can entail frequent, costly log accesses, and can result in complicated recovery protocols. The algorithm achieves the effect of rewriting history without rewriting the log, resulting in an implementation that realizes the semantics of delegation at minimal additional overhead and incurs no overhead when delegation is not used. The work indicates that it is feasible to build efficient and robust, general-purpose machinery for advanced transaction models. 
It is also a step towards making recovery a first-class concept within advanced transaction models.</abstract></paper><paper><title>Physical Database Design for Data Warehouses</title><author><AuthorName>Wilburt Juan Labio</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Dallan Quass</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Brad Adelberg</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Data warehouses collect copies of information from remote sources into a single database. Since the remote data is cached at the warehouse, it appears as local relations to the users of the warehouse. To improve query response time, the warehouse administrator will often materialize views defined on the local relations to support common or complicated queries. Unfortunately, the requirement to keep the views consistent with the local relations creates additional overhead when the remote sources change. The warehouse is often kept only loosely consistent with the sources: it is periodically refreshed with changes sent from the source. When this happens, the warehouse is taken off-line until the local relations and materialized views can be updated. Clearly, the users would prefer as little down time as possible. Often the down time can be reduced by adding carefully selected materialized views or indexes to the physical schema. This paper studies how to select the sets of supporting views and of indexes to materialize to minimize the down time. We call this the view index selection (VIS) problem. We present an A* search based solution to the problem as well as rules of thumb. We also perform additional experiments to understand the space-time tradeoff as it applies to data warehouses.</abstract></paper><paper><title>Multiple View Consistency for Data Warehousing</title><author><AuthorName>Yue Zhuge</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Janet L. Wiener</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Hector Garcia-Molina</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>A data warehouse stores integrated information from multiple distributed data sources. In effect, the warehouse stores materialized views over the source data. The problem of ensuring data consistency at the warehouse can be divided into two components: ensuring that each view reflects a consistent state of the base data, and ensuring that multiple views are mutually consistent. In this paper we study the latter problem, that of guaranteeing multiple view consistency (MVC). We identify and define formally three layers of consistency for materialized views in a distributed environment. We present a scalable architecture for consistently handling multiple views in a data warehouse, which we have implemented in the WHIPS(WareHousing Information Project at Stanford) prototype. 
Finally, we develop simple, scalable, algorithms for achieving MVC at a warehouse.</abstract></paper><paper><title>High-dimensional Similarity Joins</title><author><AuthorName>Kyuseok Shim</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Ramakrishnan Srikant</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Rakesh Agrawal</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Many emerging data mining applications require a similarity join between points in a high-dimensional domain. We present a new algorithm that utilizes a new index structure, called the epsilon-kdB tree, for fast spatial similarity joins on high-dimensional points. This index structure reduces the number of neighboring leaf nodes that are considered for the join test, as well as the traversal cost of finding appropriate branches in the internal nodes. The storage cost for internal nodes is independent of the number of dimensions. Hence the proposed index structure scales to high-dimensional data. Empirical evaluation, using synthetic and real-life datasets, shows that similarity join using the epsilon-kdB tree is 2 to an order of magnitude faster than the R+ tree, with the performance gap increasing with the number of dimensions.</abstract></paper><paper><title>Oracle parallel warehouse server</title><author><AuthorName>G. Hallmark</AuthorName><institute><InstituteName>Oracle Corp., Redwood Shores, CA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Oracle is the leading supplier of data warehouse servers, yet little has been published about Oracle's parallel warehouse architecture. After a brief review of Oracle's market, performance, and platform strengths, we present two novel features of the Oracle parallel database architecture. First, the data flow model achieves scalability while using a fixed number of threads that is independent of the complexity of the query plan. Second, a new &amp;quot;load shipping&amp;quot; architecture combines the best aspects of data shipping and function shipping, and runs on shared everything, shared disk, and shared nothing hardware.</abstract></paper><paper><title>Partial Video Sequence Caching Scheme for VOD Systems with Heterogeneous Clients</title><author><AuthorName>Y. M. Chiu</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>K. H. Yeung</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Video on Demand is one of the key application in the information era. An hinge factor to its wide booming is the huge bandwidth required to transmit digitized video to a large group of clients with widely varying requirements. This paper addresses issues due to heterogeneous clients by proposing a program caching scheme called Partial Video Sequence (PVS) Caching Scheme. PVS Caching Scheme decomposes video sequences into a number of parts by using a scalable video compression algorithm. 
Video parts are selected to be cached in local video servers based on the amount of bandwidth it would be demanded from the distribution network and central video server if it is only kept in central video server. In this paper, we also show that PVS Caching Scheme is suitable for handling vastly varying client requirements.</abstract></paper><paper><title>Periodic Retrieval of Videos from Disk Arrays</title><author><AuthorName>B. Ozden</AuthorName><institute><InstituteName>Bell Lab</InstituteName><country></country></institute></author><author><AuthorName>R. Rastogi</AuthorName><institute><InstituteName>Bell Lab</InstituteName><country></country></institute></author><author><AuthorName>A. Silberschatz</AuthorName><institute><InstituteName>Bell Lab</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>A growing number of applications need access to video data stored in digital form on secondary storage devices (e.g., video-on-demand, multimedia messaging). As a result, video servers that are responsible for the storage and retrieval, at fixed rates, of hundreds of videos from disks are becoming increasingly important. Since video data tends to be voluminous, several disks are usually used in order to store the videos. A challenge is to devise schemes for the storage and retrieval of videos that distribute the workload evenly across disks, reduce the cost of the server and at the same time, provide good response times to client requests for video data. In this paper, we present schemes that retrieve videos periodically from disks in order to provide better response times to client requests. We present two schemes that stripe videos across multiple disks in order to distribute the workload uniformly among them. For the two striping schemes, we show that the problem of retrieving videos periodically is equivalent to that of scheduling periodic tasks on a multiprocessor. For the multiprocessor scheduling problems, we present and compare schemes for computing start times for the tasks, if it is determined that they are scheduleable.</abstract></paper><paper><title>Buffer and I/O Resource Pre-allocation for Implementing Batching and Buffering Techniques for Video-on-Demand Systems</title><author><AuthorName>Mary Y.Y. Leung</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>John C.S. Lui</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Leana Golubchik</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>To design a cost effective VOD server, it is important to carefully manage the system resources so that the number of concurrent viewers can be maximized. Previous research results use data sharing techniques, such as batching, buffering and piggybacking, to reduce the demand for I/O resources in a VOD system. However, these techniques still suffer from the problem that additional I/O resources are needed in the system for providing VCR functionality --- without careful resource management, the benefits of these data sharing techniques can be lost. 
In this paper, we first introduce a model for determining the amount of resources required for supporting both normal playback and VCR functionality to satisfy predefined performance characteristics. Consequently, this model allows us to maximize the benefits of data sharing techniques. Furthermore, one important application of this model is its use in making system sizing decisions. Proper system sizing will result in a more cost-effective VOD system.</abstract></paper><paper><title>Interfacing parallel applications and parallel databases</title><author><AuthorName>V. Gottemukkala</AuthorName><institute><InstituteName>IBM Thomas J. Watson Res. Center, Yorktown Heights, NY, US</InstituteName><country></country></institute></author><author><AuthorName>A. Jhingran</AuthorName><institute><InstituteName>IBM Thomas J. Watson Res. Center, Yorktown Heights, NY, US</InstituteName><country></country></institute></author><author><AuthorName>S. Padmanabhan</AuthorName><institute><InstituteName>IBM Thomas J. Watson Res. Center, Yorktown Heights, NY, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>The use of parallel database systems to deliver high performance has become quite common. Although queries submitted to these database systems are executed in parallel, the interaction between applications and current parallel database systems is serial. As the complexity of the applications and the amount of data they access increases, the need to parallelize applications also increases. In this parallel application environment, a serial interface to the database could become the bottleneck in the performance of the application. Hence, parallel database systems should support interfaces that allow the applications to interact with the database system in parallel. We present a taxonomy of such parallel interfaces, namely the Single Coordinator, Multiple Coordinator, Hybrid Parallel, and Pure Parallel interfaces. Furthermore, we discuss how each of these interfaces can be realized and in the process introduce new constructs that enable the implementation of the interfaces. We also qualitatively evaluate each of the interfaces with respect to their restrictiveness and performance impact.</abstract></paper><paper><title>Performance Evaluation of Rule Execution Semantics in Active Databases</title><author><AuthorName>Elena Baralis</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Andrea Bianco</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Different rule execution semantics may be available in the same active database system. We performe several simulation experiments to evaluate the performance trade-offs yielded by different execution semantics in various operating conditions. In particular, we evaluate the effect of executing transaction and rule statements that affect a varying number of data instances, and applications with different rule triggering breadth and depth. 
Since references to data changed by the database operation triggering the rules are commonly used in active rule programming, we also analyze the impact of its management on overall performance.</abstract></paper><paper><title>Titan: a high-performance remote-sensing database</title><author><AuthorName>Chialin Chang</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Maryland Univ., College Park, MD, US</InstituteName><country></country></institute></author><author><AuthorName>Bongki Moon</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Maryland Univ., College Park, MD, US</InstituteName><country></country></institute></author><author><AuthorName>Anurag Acharya</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Maryland Univ., College Park, MD, US</InstituteName><country></country></institute></author><author><AuthorName>C. Shock</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Maryland Univ., College Park, MD, US</InstituteName><country></country></institute></author><author><AuthorName>A. Sussman</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Maryland Univ., College Park, MD, US</InstituteName><country></country></institute></author><author><AuthorName>J. Saltz</AuthorName><institute><InstituteName>Dept. of Comput. Sci., Maryland Univ., College Park, MD, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>There are two major challenges for a high performance remote sensing database. First, it must provide low latency retrieval of very large volumes of spatio temporal data. This requires effective declustering and placement of a multidimensional dataset onto a large disk farm. Second, the order of magnitude reduction in data size due to post processing makes it imperative, from a performance perspective, that the post processing be done on the machine that holds the data. This requires careful coordination of computation and data retrieval. The paper describes the design, implementation and evaluation of Titan, a parallel shared nothing database designed for handling remote sensing data. The computational platform for Titan is a 16 processor IBM SP-2 with four fast disks attached to each processor. Titan is currently operational and contains about 24 GB of AVHRR data from the NOAA-7 satellite. The experimental results show that Titan provides good performance for global queries and interactive response times for local queries.</abstract></paper><paper><title>Adding full text indexing to the operating system</title><author><AuthorName>K. Peltonen</AuthorName><institute><InstituteName>Microsoft Corp., Redmond, WA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Many challenges must be faced when incorporating full text retrieval into the operating system. The search engine must be a nearly invisible, natural extension to the operating system, just like the file system and the network. The search engine must meet user expectations of an operating system, specifically in areas such as performance, fault tolerance, and security. It must handle a very heterogeneous collection of documents, in many formats, many languages and many styles. The search engine must scale with the operating system, from small laptop computers to large multiprocessor servers. 
The paper is an overview of the challenges faced when incorporating full text indexing into the Microsoft Windows NT(TM) operating system. Specific solutions used by the Microsoft 'Tripoli' search engine are offered.</abstract></paper><paper><title>A rule engine for query transformation in Starburst and IBM DB2 C/S DBMS</title><author><AuthorName>H. Pirahesh</AuthorName><institute><InstituteName>IBM Almaden Res. Center, San Jose, CA, US</InstituteName><country></country></institute></author><author><AuthorName>T.Y.C. Leung</AuthorName><institute><InstituteName>IBM Almaden Res. Center, San Jose, CA, US</InstituteName><country></country></institute></author><author><AuthorName>W. Hasan</AuthorName><institute><InstituteName>IBM Almaden Res. Center, San Jose, CA, US</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>The complexity of queries in relational DBMSs is increasing, particularly in the decision support area and in interactive client/server environments. This calls for a more powerful and flexible optimization of complex queries. H. Pirahesh et al. (1992) introduced query rewrite as a distinct query optimization phase mainly targeted at responding to this requirement. This approach has enabled us to extensively enrich the optimization rules in our system. Further, it has made it easier to incrementally enrich and adapt the system as the need arises. Examples of such query optimizations are predicate pushdown, subquery and magic sets transformations, and subquery decorrelation. We describe the design and implementation of a rule engine for query rewrite optimization. Each transformation is implemented as a rule which consists of a condition-action pair. Rules can be grouped into rule classes for higher efficiency, better understandability and more extensibility. The rule engine has a number of novelties in that it supports a full spectrum of control, from totally data-driven to totally procedural. Furthermore, it incorporates a budget control scheme for controlling the resources taken for query optimization as well as guaranteeing the termination of rule execution. The rule engine and a suite of query rewrite rules have been implemented in the Starburst relational DBMS prototype, and a significant portion of this technology has been integrated into the IBM DB2 Common Server relational DBMS.</abstract></paper><paper><title>A Data Model and Semantics of Objects with Dynamic Roles</title><author><AuthorName>Raymond K. Wong</AuthorName><institute><InstituteName>Hong Kong University of Science and Technolog</InstituteName><country></country></institute></author><author><AuthorName>H. Lewis Chau</AuthorName><institute><InstituteName>Hong Kong University of Science and Technolog</InstituteName><country></country></institute></author><author><AuthorName>Frederick H. Lochovsky</AuthorName><institute><InstituteName>Hong Kong University of Science and Technolog</InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Although the concept of roles is becoming a popular research issue in object-oriented databases and has been proven to be useful for dynamic and evolving applications, it has only been described conceptually in most of the previous work. Moreover, important issues such as the semantics of roles (e.g., message passing) are seldom discussed.
Furthermore, none of the previous work has investigated the idea of role player qualification, which models the fact that not every object is qualified to play a particular role. In this paper, we present a data model and the semantics of roles. We discuss each of the above issues and illustrate the ideas with examples. From these examples, we can easily see that the problems we discussed are fundamental and indeed exist in many complex applications.</abstract></paper><paper><title>Object Relater Plus: A Practical Tool for Developing Enhanced Object Databases</title><author><AuthorName>Bryon K. Ehlmann</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Gregory A. Riccardi</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>Object Relater Plus is a practical tool currently being used for research and development of enhanced object databases (ODBs). The tool, which is a prototype Object Database Management System (ODBMS), provides two languages that are compatible with the ODMG-93 ODBMS standard yet enhance it in some significant ways. The Object Database Definition Language (ODDL) allows object relationships to be better defined and supported; provides for the specification and separation of external, conceptual, and internal views; and facilitates the implementation of domain specific ODB extensions. The Object Database Manipulation Language (ODML) augments ODDL by providing a C++ interface for database creation, access, and manipulation based on an ODDL specification. In this paper we give an overview of Object Relater Plus, emphasizing its salient features. We also briefly discuss its architecture and implementation and its use in developing scientific databases.</abstract></paper><paper><title>Modeling and Querying Moving Objects</title><author><AuthorName>A. Prasad Sistla</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Ouri Wolfson</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Sam Chamberlain</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Son Dao</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>In this paper we propose a data model for representing moving objects in database systems. It is called the Moving Objects Spatio-Temporal (MOST) data model. 
We also propose Future Temporal Logic (FTL) as the query language for the MOST model, and devise an algorithm for processing FTL queries in MOST.</abstract></paper><paper><title>A Generic Query-Translation Framework for a Mediator Architecture</title><author><AuthorName>Jacques Calmet</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Sebastian Jekutsch</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><author><AuthorName>Joachim Schue</AuthorName><institute><InstituteName></InstituteName><country></country></institute></author><year>1997</year><conference>International Conference on Data Engineering</conference><citation></citation><abstract>A mediator is a domain-specific tool to support uniform access to multiple heterogeneous information sources and to abstract and combine data from different but related databases to gain new information. This middleware product is urgently needed for these frequently occurring tasks in a decision support environment. In order to provide a front end, a mediator usually defines a new language. If an application or a user submits a question to the mediator, it has to be decomposed into 
