on these three key PC LAN operating systems.
<H3><A NAME="Heading6"></A><FONT COLOR="#000077">Server Alternatives</FONT></H3>
<P>Although Windows dominates the corporate desktop, UNIX is still widely used as
a server platform due to its strong performance and robust features. Business-critical
servers must be able to deliver high-end features and run the company's transaction-based
applications. They also must be scalable enough to become part of a distributed network,
which replaces mainframe and minicomputer-based networks. Additionally, as a mainframe
replacement, a business-critical server needs to support security and systems management,
and must be able to interoperate with other dissimilar resources throughout the enterprise.
Specifically, it must be able to integrate with Windows PCs to make these critical
resources available to PC users.</P>
<P>The <I>Common Desktop Environment (CDE)</I> is part of the Common Operating System
Environment (COSE, pronounced "cozy") agreement, one of many attempts at
unifying the UNIX market. Although COSE itself never took off, CDE has achieved some
success--most notably, all the major UNIX vendors agreeing on the Motif interface
as the basis for the Common Desktop Environment as well as establishing several other
commonalities. Long overdue, this simple agreement will help make UNIX easier to
run in a multivendor environment.</P>
<BLOCKQUOTE>
<P>
<HR>
<FONT COLOR="#000077"><B>64-bit API</B></FONT><BR>
Another unification attempt involves an initiative to develop a common 64-bit UNIX
API. Several vendors, including Intel, HP, SGI, DEC, Compaq, IBM, Novell, Oracle,
and Sun, hope that the common specification will reduce the problems developers
encounter in having to write for different implementations of UNIX. The alliance
will build the 64-bit specification from existing 32-bit APIs, and will comply with
existing standards, including CDE.
<HR>
</BLOCKQUOTE>
<P>Despite its fractured nature, UNIX has a number of strengths. Many tools are available
for free, and there are plenty of UNIX experts out there looking for something to
do. UNIX is a strong platform for use as an application server. UNIX also offers
the advantage of easy remote access--nearly any PC or Macintosh running any operating
system can be made to work as an X Window terminal.</P>
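<P>The mechanism behind this is simple enough to sketch: an X client decides where to
draw its windows by consulting the <TT>DISPLAY</TT> environment variable. The host name
below is purely illustrative.</P>

```python
import os

# The PC acting as the X terminal; "pclan-ws" is an invented name.
os.environ["DISPLAY"] = "pclan-ws:0"

# Any X program started from this environment (xterm, a database
# front end, and so on) would now open its windows on pclan-ws.
print(os.environ["DISPLAY"])
```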
<P>Finally, note that if you're on a tight budget (or have no budget at all), a UNIX-like
32-bit operating system called <I>Linux</I> is freely available. (Despite a common
misconception, Linux is not derived from BSD or any other proprietary UNIX code base;
it is an independently written, UNIX-like kernel.) Linux is a noncommercial operating
system, although implementations of Linux are available from commercial vendors complete
with technical support. It might require some long hours and customization, because
like most freeware products, it has a few rough edges. However, Linux has a number
of fans, and a large informal support network offers technical help and advice, freeware
products, and other services.</P>
<H3><A NAME="Heading7"></A><FONT COLOR="#000077">PC/Minicomputer/Mainframe LAN Integration</FONT></H3>
<P>Although practicality might dictate that PC LANs and minicomputer/mainframe LANs
share a common topology and discipline, no law dictates that they must communicate
with one another on that same LAN. In fact, multiple sets of computers can implement
different networking services, and each set might lead its life of quiet desperation
independently of the other sets.</P>
<P>But when cross-communications are mandated by some foolish need or frantic demand,
the broad scope of choices normally available in the data processing market narrows
rapidly. In looking to integrate PC functions with the larger systems capabilities,
you have to make some basic choices. Which architecture will be promoted over the
others? Will the larger systems become servers for the PCs? Will the PCs become terminals
to the larger systems? Or will a third LAN structure be implemented in which both
the PCs and large systems conform to a common standard?</P>
<P>Most approaches provide similar results (shared disk space and shared printers),
but each approach has its pros and cons:
<UL>
<LI><I>System Servers.</I> If the larger systems become servers for the PC LANs,
the PC file structure is imposed upon the larger system. In most cases, an area of
the system disk is then unavailable to native system users. On the plus side, the
PC LAN server information can be backed up along with all of the other system information
(one procedure can address both needs). Manufacturers that support this approach
include Digital Equipment and Novell. Digital markets a product that enables VMS
systems to store MS-DOS files, and Novell has released NetWare for SAA, NetWare for
DEC Access, and NetWare Connect. NetWare Connect provides a number of connectivity
options: It permits remote Windows and Macintosh computers to access any resource
available to the NetWare network, including files, databases, applications, and mainframe
services; and it permits users on the network to connect to remote control computers,
bulletin boards, X.25, and ISDN services.<BR>
<BR>
<LI><I>Terminal Emulation.</I> When PCs emulate native devices to the larger system,
they lose some of their intelligence by emulating unintelligent terminals. File sharing
in this environment is normally supported via file transfers between PCs and the
larger system. This architecture is very centralized and favors the larger system
by keeping PC access to a minimum. Digital, HP, IBM, and Sun all have sets of products
that provide this type of centralized integration.<BR>
<BR>
<LI>A number of third-party products are on the market to connect TCP/IP-based networks
to mainframe and midrange hosts, potentially giving PC users access to CICS applications
and facilitating file transfer between the LAN and the mainframe system.<BR>
<BR>
<LI><I>Peer Connections.</I> Introducing a new set of services to accommodate both
small and large systems establishes an environment in which all computing nodes are
peers. This approach, however, consumes additional resources (memory, CPU, disk)
on each computer that participates in the shared environment. In such a system,
TCP/IP might be implemented to allow file transfer between any two systems, to provide
electronic mail services to all systems, and to enable the PCs to access the larger
systems as if they were terminals.
</UL>
<P>All of these approaches share one fundamental concept: The application the user
needs must be accessed at its native location. Therefore, to run a minicomputer program,
you must log onto the minicomputer and have proper authority to run it. Similarly,
to run a microcomputer program, you must mount and access the physical or logical
disk where it resides. In both cases, the user must find a path to the remote application.
This concept is changing, however, with the advent of distributed client/server computing
architectures.</P>
<H3><A NAME="Heading8"></A><FONT COLOR="#000077">The Client/Server Model</FONT></H3>
<P>To create more meaningful integration between PC LANs and LANs built on larger
systems, a few manufacturers have developed some new approaches. They noted that
the intelligence of the desktop device was steadily rising, while the reign of
dedicated <I>(dumb)</I> terminals was slowly crumbling. Emerging was a new breed
of intelligent, low-cost, general-purpose computers that were quite capable of handling
some of the applications-processing load. In short, the PCs had arrived.</P>
<P>To take these microcomputers and dedicate them to the task of terminal emulation
was an obvious step in the evolution of the PC explosion, but it was also, in many
respects, a mismatch of power to purpose. Making a PC emulate a terminal sacrificed
its ability to interact with data: A PC can clearly perform data entry, do mathematical
calculations, and store information for subsequent retrieval, but when it is emulating
a terminal, it performs none of those functions. Instead, it uses all of its own
intelligence and resources to emulate a dumb device. Therefore, it would seem reasonable
to let the PC take a more meaningful role in the processing of the data. But how?</P>
<P>Certainly the idea of distributed processing was nothing new. In fact, most operating
systems and networks have basic task-to-task communication facilities. But in this
case, the communications would not necessarily occur between similar computers. A
PC might need to initiate a conversation with a midrange, or a mainframe might need
to communicate with a PC. This was the interesting twist--how to implement a distributed
processing environment that took advantage of computing power wherever it was in
the network, without requiring all of the computers to use the same operating system
or even the same primary networking services.</P>
<P>What formed as a possible solution to this puzzle was the concept of client/server
computing. In the client/server scenario, the local computer (PC or a user's session
on a larger computer) acts as the processing client. Associated with the client is
software that provides a universal appearance to the user (be it a graphical, icon-oriented
display, or a character, menu-oriented display). From that display, you can select
the applications you want to use.</P>
<P>When a user selects an application, the client initiates a conversation with the
server for that application (see Figure 8.3). This might involve communications across
LANs and WANs or simply a call to a local program. Regardless of where the server
resides, the client acts as the front end for the server and handles the user interface.
Thus, the user is not aware of where the application actually resides.</P>
<P><A HREF="http://docs.rinet.ru:8080/MuNet/ch08/08fig03.gif"><B>FIG. 8.3</B></A> <I>Client/Server Processing</I></P>
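<P>The exchange in Figure 8.3 can be sketched with ordinary TCP sockets. This is a
minimal illustration, not any vendor's actual protocol; the port number and the toy
request/reply format are invented for the example.</P>

```python
import socket
import threading

ready = threading.Event()

def server(port):
    """The application server: receives a request, does the work, replies."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", port))
        s.listen(1)
        ready.set()                      # tell the client we are accepting
        conn, _ = s.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(("result of " + request).encode())

def client(port, request):
    """The client front end: forwards the user's selection, shows the reply."""
    ready.wait()                         # in real life: a name-service lookup
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("127.0.0.1", port))
        s.sendall(request.encode())
        return s.recv(1024).decode()

t = threading.Thread(target=server, args=(5050,))
t.start()
reply = client(5050, "sales-report")
t.join()
print(reply)                             # the user never sees where it ran
```

<P>The point of the sketch is the division of labor: the server does the processing,
and the client only presents the result, so the user cannot tell whether the server is
local or across a WAN.</P>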
<P>Furthermore, the client/server approach is dramatically enhanced when used with
a windowing client platform. If, for example, the user is at a PC that is running
a multitasking system, multiple windows can be used to initiate multiple client sessions,
thereby enabling the user to hot-key between applications, with each application
potentially running on a different computer system. This is a vastly superior method
to having multiple terminals, each with a separate terminal emulation session logged
into a specific host and running a specific application with specific keyboard demands.
The client/server approach offers one consistent user interface for all screen and
keyboard activities.
<BLOCKQUOTE>
<P>
<HR>
<B><font color=#000077>CAUTION:</font> </B>Hot-keying between applications can be a tremendous boon
to end users, but if the network hardware is inadequate, it can cause performance
problems. Some administrators might choose to limit the number of active applications
a user can have running at one time. Many network management systems give administrators
the ability to enforce a "clean desk" approach by establishing a maximum
number of simultaneous sessions.
<HR>
</BLOCKQUOTE>
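<P>The multiple-session arrangement described above can be sketched as a handful of
concurrent client sessions, each nominally attached to a different host. The host and
application names are invented, and the network conversation is simulated by a plain
function call.</P>

```python
import concurrent.futures

def session(host, app):
    # A real session would open a connection to `host` and log in;
    # here the conversation is simulated by formatting a string.
    return app + " running on " + host

# One window per session, each pointed at a different (invented) host.
sessions = [("vax1", "mail"), ("hp9000", "orders"), ("mvs", "payroll")]
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda pair: session(*pair), sessions))

for line in results:
    print(line)
```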
<P>When it was first described, client/server computing was intended as an enterprise
solution, where users at all levels could work cooperatively across platforms. Client/server
technology has gone a long way in enhancing departmental productivity, although further
advances must be made before it can live up to its expectations on an enterprise
level. One of the biggest challenges of implementing a client/server environment
is establishing bridges between all of the various heterogeneous elements. Typically,
client/server solutions offer only limited access to critical data. Also, because
the environment is by its very nature decentralized, managing the environment is
extremely difficult. In order for client/server to be useful as an enterprise solution,
it must be able to access large amounts of data distributed over a heterogeneous
environment and integrate it into a common report. This service is in fact being
provided by executive information system (EIS) software and data warehouse technology.</P>
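<P>The "common report" idea behind EIS and data warehouse products can be reduced to
a small sketch: pull figures from heterogeneous sources and merge them into one
summary. The sources here are simulated as in-memory tables.</P>

```python
# Simulated extracts from two dissimilar systems; in practice these
# would arrive via gateways to a UNIX database and a mainframe file.
unix_db   = [("east", 120), ("west", 95)]
mainframe = [("east", 300), ("west", 410)]

# Merge both sources into one consolidated report, keyed by region.
totals = {}
for source in (unix_db, mainframe):
    for region, amount in source:
        totals[region] = totals.get(region, 0) + amount

for region in sorted(totals):
    print(region, totals[region])   # east 420, west 505
```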
<P>There are many factors involved in designing a server system in a distributed
computing environment. Application partitioning can follow one of three different
paradigms:
<UL>
<LI><I>Client-centric model.</I> Also called a "fat client" system, this
model places all of the application logic on the client side. It requires higher-powered
desktop machines, and requires substantially more administrative chores. Multiple