
Preface

In a sense, distributed computing has been with us since the beginning of computer technology. A conventional computer can be thought of as "internally distributed," in the sense that separate, distinct devices within the computer are responsible for certain well-defined tasks (arithmetic/logic operations, operand stack storage, short/long-term data storage). These "distributed" devices are interconnected by communication pathways that carry information (register values, data) and messages (assembler instructions, microcode instructions). The sources and destinations of these various pathways are literally hardwired, and the "protocols" they use to carry information and messages are rigidly defined and highly specific. Reorganizing the distribution scheme of these devices involves chip fabrication, soldering, and the recoding of microprograms, or the construction of a completely new computer. This inflexibility offers benefits, however, in terms of processing speed and information-transfer latencies. The functions expected of a computing device at this level are very well defined and bounded (perform arithmetic and logic operations on data, and store the results); therefore, the architecture of the device can and should be highly optimized for these tasks.

The history of truly distributed computing begins with the first day that someone tapped a mainframe operator on the shoulder and asked, "Hey, is there any way we can both use that?" Display terminals, with no computing capabilities of their own, were developed to be attached to monolithic central computing devices and to communicate with them using very rigid, limited protocols. This allowed multiple users to access and submit jobs to the mainframe. Other I/O devices also needed to be attached to the mainframe, generally to store and retrieve data in non-volatile forms (paper storage such as punched tape and punch cards, and later magnetic storage devices). For the most part, the physical links and communications protocols used to hook in these devices were custom-designed for each case, and not reusable for other devices.

Meanwhile, people began to desire personal, dedicated computing resources that were available on demand, not when the mainframe schedule said so. The personal computer fit the bill, and its popularity has been growing ever since. Personal computers and workstations, despite their much larger numbers, followed an evolutionary path similar to that of mainframes with respect to peripheral devices. Many and varied hardware and communications protocols were born to allow printers, disk drives, pointing devices, and the like to be connected to people's desktop computers. At first, peripheral vendors saw these custom-fit solutions as a source of competitive advantage in the market: make your hardware or software faster or more laden with features than the next product, and become the preferred vendor of whatever it is you make. Gradually, both users and makers of personal computers grew weary of this game. Users became frustrated with the lack of consistency in the installation, use, and maintenance of these devices, and computer makers were faced with an array of hardware and software interfaces, from which they had to choose the most advantageous ones to support directly in their hardware and operating systems. Consistency became one of the major attractions for buyers of Apple's computer line, second only to its user-friendly operating system: Apple defined and strictly controlled the hardware and software interfaces to its systems, thereby guaranteeing that any third-party devices or software that followed its specifications and standards would operate correctly with its computers.

The concept of standards for hardware and software interfaces caught on at many levels. Standards for every level of distributed computing, from hardware interfaces, network cabling, and physical-level communications protocols all the way up to application-level protocols, were developed, refined, and promoted in the marketplace. Some standards achieved a critical mass of usage, for various reasons, and persist today, such as Ethernet, TCP/IP, and RPC. Others were less popular, and faded with time.

Today, both computing devices and network bandwidth have begun to achieve commodity status. A set of standard protocols, some of which make up the World Wide Web, is beginning to evolve into a worldwide network operating system. Specifics about the type of hardware, operating system, and network being used are becoming more and more irrelevant, making information, and the tools to process it, more easily deployable and available. Security protocols have been defined to help owners of information and services restrict access to them. Researchers and developers are looking forward to the next evolutionary steps in information technology, such as autonomous agents and enterprise-wide distributed object systems. In the midst of this revolutionary period, Java™ can be viewed as both a product and a catalyst of all of these trends. Java offers an environment in which the network truly is the computer, and specifics such as operating system features and transport protocols become even more blurred and less important, though not yet completely irrelevant. The Java language and environment promise to play a prominent part in the next generation of distributed computing.

0.1. What Does This Book Cover?

This book is an overview of the tools and techniques at your disposal for building distributed computing systems in Java. In most cases, these tools are provided in the Java API itself, such as the Java Remote Method Invocation (RMI) API, the Java Security API, and the Java™ Database Connectivity (JDBC) package. Other tools are standards and protocols that exist independently of Java and its environment, but are supported within Java, either through its core APIs or by add-on APIs offered by third-party vendors. Some examples include the Common Object Request Broker Architecture (CORBA) standards, the multicast IP protocol, and the Secure Sockets Layer (SSL) standard.
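To give an early flavor of one of these tools, here is a minimal sketch of an RMI remote interface and a client that calls it. The WeatherService interface, its method, and the "somehost"/"weather" names are hypothetical, invented purely for illustration; RMI itself is covered in detail later in the book.

    import java.rmi.Naming;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // A minimal RMI remote interface. Every remotely callable method must
    // declare RemoteException, since any invocation can fail over the network.
    interface WeatherService extends Remote {
        float getTemperature(String city) throws RemoteException;
    }

    // A client looks up a stub for the remote object in the RMI registry,
    // then invokes its methods as if the object were local.
    public class WeatherClient {
        public static void main(String[] args) throws Exception {
            // The host and registered name are assumptions for this example.
            WeatherService ws =
                (WeatherService) Naming.lookup("rmi://somehost/weather");
            System.out.println("Temperature: " + ws.getTemperature("Boston"));
        }
    }

Notice that the remote nature of WeatherService shows up only in its throws clauses; the network transport itself is handled entirely by the RMI runtime.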

I intend this book to serve as both explanatory and reference material for you, the professional developer. Most of the book is made up of detailed explanations of concepts, tools, and techniques that come into play in most distributed computing situations. At the same time, for readers who are more familiar with the subject matter, the text and code examples are broken up into subject areas that should make it fairly easy to reference important bits.

0.1.1. Organization

The first four chapters of the book (after the Introduction) cover some fundamental tools that come into play in most distributed applications: basic networking tools, distributed objects, multithreading, and security measures. The last four chapters go into detail about some common types of distributed applications, and discuss the special issues that arise in each: message-passing systems, multitier systems involving databases, bandwidth-limited systems, and systems that allow multiple distributed users or user agents to collaborate dynamically over a network.

The figure below shows the dependencies among the various chapters, to give you a sense of the order (random or otherwise) in which you can journey through the book. Since the Introduction covers some concepts and terminology that persist throughout the book, I'd suggest reading it before any of the others. The next four chapters can be read in just about any order, depending on your level of experience with each topic. Since the later chapters use concepts from all of the chapters in the first part of the book, you should have a basic understanding of their topics, either from personal experience or from reading the earlier chapters, before delving into the second part of the book.

[Figure: chapter dependency diagram]

