dbffile.java - TinySQL is a lightweight pure-Java database engine

/*
 *
 * dbfFile - an extension of tinySQL for dbf file access
 *
 * Copyright 1996 John Wiley & Sons, Inc.
 * See the COPYING file for redistribution details.
 *
 * $Author: davis $
 * $Date: 2004/12/18 21:27:51 $
 * $Revision: 1.1 $
 *
 */
package com.sqlmagic.tinysql;

import java.util.*;
import java.io.*;
import java.sql.Types;

/**
dBase read/write access <br>
@author Brian Jepson <bjepson@home.com>
@author Marcel Ruff <ruff@swand.lake.de> Added write access to dBase and JDK 2 support
@author Thomas Morgner <mgs@sherito.org> Changed ColumnName to 11 bytes and strip the name
 after the first occurrence of 0x00.
 Types are now handled as java.sql.Types, not as character flags
*/
public class dbfFile extends tinySQL {

  public static String dataDir;
  public static boolean debug=false;
  private Vector tableList=new Vector();
  static {

    try {
      dataDir = System.getProperty("user.home") + File.separator + ".tinySQL";
    } catch (Exception e) {
      System.err.println("tinySQL: unable to get user.home property, "+
                           "reverting to current working directory.");
      dataDir = "." + File.separator + ".tinySQL";
    }

  }

  /**
   *
   * Constructs a new dbfFile object
   *
   */
  public dbfFile() {

    super();
    if ( tinySQLGlobals.DEBUG ) System.out.println("Set datadir=" + dataDir);

  }

  /**
   *
   * Constructs a new dbfFile object
   *
   * @param d directory with which to override the default data directory
   *
   */
  public dbfFile( String d ) {

    super();
    dataDir = d; // d is usually extracted from the connection URL
    if ( tinySQLGlobals.DEBUG ) System.out.println("Set datadir=" + dataDir);

  }


  /**
   *
   * Sets the data directory.  This is a crude way to allow support
   * for multiple tinySQL connections.
   *
   * @param d the new data directory
   *
   */
  void setDataDir( String d )
  {
     dataDir = d;
  }

  /**
   *
   * Creates a table given the name and a vector of
   * column definition (tsColumn) arrays.
   *
   * @param tableName the name of the table
   * @param v a Vector containing arrays of column definitions.
   * @see tinySQL#CreateTable
   *
   */
  void CreateTable ( String tableName, Vector v )
    throws IOException, tinySQLException {

    //---------------------------------------------------
    // determine the metadata ...
    int numCols = v.size();
    int recordLength = 1;        // 1 byte for the flag field
    for (int i = 0; i < numCols; i++) {
        tsColumn coldef = ((tsColumn) v.elementAt(i));
        recordLength += coldef.size;
    }

    //---------------------------------------------------
    // create the new dBase file ...
    DBFHeader dbfHeader = new DBFHeader(numCols, recordLength);
    RandomAccessFile ftbl = dbfHeader.create(dataDir, tableName);

    //---------------------------------------------------
    // write out the rest of the columns' definition.
    for (int i = 0; i < v.size(); i++) {
       tsColumn coldef = ((tsColumn) v.elementAt(i));
       Utils.log("CREATING COL=" + coldef.name);
       writeColdef(ftbl, coldef);
    }

    ftbl.write((byte)0x0d); // header section ends with CR (carriage return)

    ftbl.close();
  }


  /**
   * Creates new Columns in tableName, given a vector of
   * column definition (tsColumn) arrays.<br>
   * It is necessary to copy the whole file to do this task.
   *
   * ALTER TABLE table [ * ] ADD [ COLUMN ] column type
   *
   * @param tableName the name of the table
   * @param v a Vector containing arrays of column definitions.
   * @see tinySQL#AlterTableAddCol
   */
  void AlterTableAddCol ( String tableName, Vector v )
    throws IOException, tinySQLException {

    // rename the file ...
    String fullpath = dataDir + File.separator + tableName + dbfFileTable.dbfExtension;
    String tmppath = dataDir + File.separator + tableName + "_tmp_tmp" + dbfFileTable.dbfExtension;
    if (!Utils.renameFile(fullpath, tmppath))
      throw new tinySQLException("ALTER TABLE ADD COL error in renaming " + fullpath);

    try {
      // open the old file ...
      RandomAccessFile ftbl_tmp = new RandomAccessFile(tmppath, "r");

      // read the first 32 bytes ...
      DBFHeader dbfHeader_tmp = new DBFHeader(ftbl_tmp);

      // read the column info ...
      Vector coldef_list = new Vector(dbfHeader_tmp.numFields + v.size());
      int locn = 0; // offset of the current column
      for (int i = 1; i <= dbfHeader_tmp.numFields; i++) {
        tsColumn coldef = readColdef(ftbl_tmp, tableName, i, locn);
        locn += coldef.size; // increment locn by the length of this field.
        coldef_list.addElement(coldef);
      }

      // add the new column definitions to the existing ...
      for (int jj = 0; jj < v.size(); jj++)
        coldef_list.addElement(v.elementAt(jj));

      // create the new table ...
      CreateTable(tableName, coldef_list);

      // copy the data from old to new

      // open the newly created dBase file ...
      RandomAccessFile ftbl = new RandomAccessFile(fullpath, "rw");
      ftbl.seek(ftbl.length()); // go to end of file

      int numRec = 0;
      for (int iRec=1; iRec<=dbfHeader_tmp.numRecords; iRec++) {

        String str = GetRecord(ftbl_tmp, dbfHeader_tmp, iRec);

        // Utils.log("Copy of record#" + iRec + " str='" + str + "' ...");

        if (str == null) continue; // record was marked as deleted, ignore it

        ftbl.write(str.getBytes(Utils.encode));     // write original record
        numRec++;

        for (int iCol = 0; iCol < v.size(); iCol++) // write added columns
        {
          tsColumn coldef = (tsColumn)v.elementAt(iCol);

          // enforce the correct column length
          String value = Utils.forceToSize(coldef.defaultVal, coldef.size, " ");

          // transform to byte and write to file
          byte[] b = value.getBytes(Utils.encode);
          ftbl.write(b);
        }
      }

      ftbl_tmp.close();

      DBFHeader.writeNumRecords(ftbl, numRec);
      ftbl.close();

      Utils.delFile(tmppath);

    } catch (Exception e) {
      throw new tinySQLException(e.getMessage());
    }
  }



  /**
   * Retrieve a record (=row)
   * @param dbfHeader dBase meta info
   * @param recordNumber starts with 1
   * @return the String with the complete record
   *         or null if the record is marked as deleted
   * @see tinySQLTable#GetCol
   */
  public String GetRecord(RandomAccessFile ff, DBFHeader dbfHeader, int recordNumber) throws tinySQLException
  {
    if (recordNumber < 1)
      throw new tinySQLException("Internal error - current record number < 1");

    try {
      // seek the starting offset of the current record,
      // as indicated by recordNumber
      ff.seek(dbfHeader.headerLength + (recordNumber - 1) * dbfHeader.recordLength);

      // fully read a byte array out to the length of
      // the record.
      byte[] b = new byte[dbfHeader.recordLength];
      ff.readFully(b);

      // make it into a String
      String record = new String(b, Utils.encode);

      // skip records that are marked as deleted
      if (dbfFileTable.isDeleted(record))
        return null;

      return record;

    } catch (Exception e) {
      throw new tinySQLException(e.getMessage());
    }
  }


  /**
   *
   * Deletes Columns from tableName, given a vector of
   * column definition (tsColumn) arrays.<br>
   *
   * ALTER TABLE table DROP [ COLUMN ] column { RESTRICT | CASCADE }
   *
   * @param tableName the name of the table
   * @param v a Vector containing arrays of column definitions.
   * @see tinySQL#AlterTableDropCol
   *
   */
  void AlterTableDropCol ( String tableName, Vector v )
    throws IOException, tinySQLException {

    // rename the file ...
    String fullpath = dataDir + File.separator + tableName + dbfFileTable.dbfExtension;
    String tmppath = dataDir + File.separator + tableName + "-tmp" + dbfFileTable.dbfExtension;
    if (!Utils.renameFile(fullpath, tmppath))
      throw new tinySQLException("ALTER TABLE DROP COL error in renaming " + fullpath);

    try {
      // open the old file ...
      RandomAccessFile ftbl_tmp = new RandomAccessFile(tmppath, "r");

      // read the first 32 bytes ...
      DBFHeader dbfHeader_tmp = new DBFHeader(ftbl_tmp);

      // read the column info ...
      Vector coldef_list = new Vector(dbfHeader_tmp.numFields - v.size());
      int locn = 0; // offset of the current column

      nextCol: for (int i = 1; i <= dbfHeader_tmp.numFields; i++) {

        tsColumn coldef = readColdef(ftbl_tmp, tableName, i, locn);

        // remove the DROP columns from the existing cols ...
        for (int jj = 0; jj < v.size(); jj++) {
          String colName = (String)v.elementAt(jj);
          if (coldef.name.equals(colName)) {
            Utils.log("Dropping " + colName);
            continue nextCol;
          }
        }

        locn += coldef.size; // increment locn by the length of this field.
        // Utils.log("Recycling " + coldef.name);
        coldef_list.addElement(coldef);
      }

      // create the new table ...
      CreateTable(tableName, coldef_list);

      // copy the data from old to new

      // open the newly created dBase file ...
      RandomAccessFile ftbl = new RandomAccessFile(fullpath, "rw");
      ftbl.seek(ftbl.length()); // go to end of file

      int numRec = 0;
      for (int iRec=1; iRec<=dbfHeader_tmp.numRecords; iRec++) {

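GetRecord above seeks to `dbfHeader.headerLength + (recordNumber - 1) * dbfHeader.recordLength`: a dBase III file stores a fixed-size header followed by fixed-size records, so each record's byte offset can be computed directly. A minimal standalone sketch of that arithmetic follows; the class name and the sample header/record lengths are hypothetical illustrations, not values from the original source.

```java
/*
 * Sketch (not part of the original source) of the record-offset arithmetic
 * used by dbfFile.GetRecord: record N (1-based) starts at
 * headerLength + (N - 1) * recordLength.
 */
public class DbfOffsetDemo {
    // Hypothetical sample values; real values are read from the 32-byte DBF header.
    static final int HEADER_LENGTH = 97;   // e.g. 32-byte header + 2 * 32-byte field descriptors + 1 terminator byte
    static final int RECORD_LENGTH = 21;   // 1 deletion-flag byte + the field sizes

    static long recordOffset(int recordNumber) {
        // Record numbers are 1-based, mirroring the check in GetRecord.
        if (recordNumber < 1)
            throw new IllegalArgumentException("record number must be >= 1");
        return HEADER_LENGTH + (long) (recordNumber - 1) * RECORD_LENGTH;
    }

    public static void main(String[] args) {
        System.out.println(recordOffset(1)); // first record starts right after the header: 97
        System.out.println(recordOffset(3)); // 97 + 2 * 21 = 139
    }
}
```

The cast to `long` avoids integer overflow for large files, a hazard the original expression (`int` arithmetic passed to `RandomAccessFile.seek`) is exposed to in principle.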