Java’s Magic: The Bytecode
The key that allows Java to solve both the security and the portability problems just described
is that the output of a Java compiler is not executable code. Rather, it is bytecode. Bytecode is
a highly optimized set of instructions designed to be executed by the Java run-time system,
which is called the Java Virtual Machine (JVM). In essence, the original JVM was designed as
an interpreter for bytecode. This may come as a bit of a surprise since many modern languages
are designed to be compiled into executable code because of performance concerns.
However, the fact that a Java program is executed by the JVM helps solve the major
problems associated with web-based programs. Here is why.
Translating a Java program into bytecode makes it much easier to run a program in
a wide variety of environments because only the JVM needs to be implemented for each
platform. Once the run-time package exists for a given system, any Java program can run
on it. Remember, although the details of the JVM will differ from platform to platform, all
understand the same Java bytecode. If a Java program were compiled to native code, then
different versions of the same program would have to exist for each type of CPU connected
to the Internet. This is, of course, not a feasible solution. Thus, the execution of bytecode
by the JVM is the easiest way to create truly portable programs.
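To make this concrete, consider the following minimal program (the class name HelloJvm is ours, chosen for illustration). Compiling it with javac produces a HelloJvm.class file of bytecode rather than native machine code; that same file can then be executed by the JVM on any platform, and the JDK's javap tool can disassemble it so you can inspect the bytecode instructions yourself.

// HelloJvm.java
// Compile:     javac HelloJvm.java   (produces HelloJvm.class -- bytecode)
// Run:         java HelloJvm         (the JVM executes the bytecode)
// Disassemble: javap -c HelloJvm     (lists the bytecode instructions)
public class HelloJvm {
    public static void main(String[] args) {
        // The identical .class file runs unchanged on any system
        // for which a JVM implementation exists.
        System.out.println("Running on: " + System.getProperty("os.name"));
    }
}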
The fact that a Java program is executed by the JVM also helps to make it secure.
Because the JVM is in control, it can contain the program and prevent it from generating side effects outside of the system. As you will see, safety is also enhanced by certain
restrictions that exist in the Java language.
In general, when a program is compiled to an intermediate form and then interpreted
by a virtual machine, it runs slower than it would run if compiled to executable code.
However, with Java, the differential between the two is not so great. Because bytecode has
been highly optimized, the use of bytecode enables the JVM to execute programs much
faster than you might expect.
Although Java was designed as an interpreted language, there is nothing about Java that
prevents on-the-fly compilation of bytecode into native code in order to boost performance.
For this reason, the HotSpot technology was introduced not long after Java’s initial release.
HotSpot provides a Just-In-Time (JIT) compiler for bytecode. When a JIT compiler is part
of the JVM, selected portions of bytecode are compiled into executable code in real time,
on a piece-by-piece, demand basis. It is important to understand that it is not practical to
compile an entire Java program into executable code all at once, because Java performs
various run-time checks that can be done only at run time. Instead, a JIT compiler compiles
code as it is needed, during execution. Furthermore, not all sequences of bytecode are
compiled—only those that will benefit from compilation. The remaining code is simply
interpreted. However, the just-in-time approach still yields a significant performance boost.
Even when dynamic compilation is applied to bytecode, the portability and safety features
still apply, because the JVM is still in charge of the execution environment.
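If you would like to observe this process, the following sketch gives HotSpot something worth compiling (the class name JitDemo and the loop bounds are ours, chosen for illustration). Running it on a HotSpot JVM with the -XX:+PrintCompilation option logs each method as the JIT compiler translates it into native code.

// JitDemo.java
// Run with:  java -XX:+PrintCompilation JitDemo
// On a HotSpot JVM, this logs methods as they are JIT-compiled.
public class JitDemo {
    // Repeated, inexpensive work: exactly the kind of "hot" bytecode
    // sequence a JIT compiler selects for compilation.
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // Calling sum() many times makes it hot enough that the JVM
        // judges compilation worthwhile; rarely executed code is
        // simply interpreted.
        for (int i = 0; i < 20_000; i++) {
            result += sum(1_000);
        }
        System.out.println(result);
    }
}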
The History and Evolution of Java - How Java Changed the Internet - Lesson 4
How Java Changed the Internet
The Internet helped catapult Java to the forefront of programming, and Java, in turn, had
a profound effect on the Internet. In addition to simplifying web programming in general,
Java introduced a new type of networked program called the applet that changed the way
the online world thought about content. Java also addressed some of the thorniest issues
associated with the Internet: portability and security. Let’s look more closely at each of these.
Java Applets
An applet is a special kind of Java program that is designed to be transmitted over the Internet
and automatically executed by a Java-compatible web browser. Furthermore, an applet is
downloaded on demand, without further interaction with the user. If the user clicks a link
that contains an applet, the applet will be automatically downloaded and run in the browser.
Applets are intended to be small programs. They are typically used to display data provided
by the server, handle user input, or provide simple functions, such as a loan calculator, that
execute locally, rather than on the server. In essence, the applet allows some functionality to
be moved from the server to the client.
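For historical context, here is what a minimal applet looked like (the class name HelloApplet is ours, chosen for illustration). Keep in mind that the applet API has long been deprecated for removal and modern browsers no longer execute applets; the sketch below simply illustrates the model just described.

// HelloApplet.java -- the classic applet style (historical; the applet
// API is deprecated for removal in current versions of Java).
import java.applet.Applet;
import java.awt.Graphics;

public class HelloApplet extends Applet {
    // The browser's JVM calls paint() whenever the applet's
    // on-screen area needs to be drawn.
    public void paint(Graphics g) {
        g.drawString("Hello from an applet!", 20, 20);
    }
}

A web page embedded such a program with the HTML applet tag; a Java-compatible browser then downloaded the bytecode and ran it automatically.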
The creation of the applet changed Internet programming because it expanded the
universe of objects that can move about freely in cyberspace. In general, there are two very
broad categories of objects that are transmitted between the server and the client: passive
information and dynamic, active programs. For example, when you read your e-mail, you
are viewing passive data. Even when you download a program, the program’s code is still
only passive data until you execute it. By contrast, the applet is a dynamic, self-executing
program. Such a program is an active agent on the client computer, yet it is initiated by
the server.
As desirable as dynamic, networked programs are, they also present serious problems
in the areas of security and portability. Obviously, a program that downloads and executes
automatically on the client computer must be prevented from doing harm. It must also be
able to run in a variety of different environments and under different operating systems.
As you will see, Java solved these problems in an effective and elegant way. Let’s look a bit
more closely at each.
Security
As you are likely aware, every time you download a “normal” program, you are taking a risk,
because the code you are downloading might contain a virus, Trojan horse, or other harmful
code. At the core of the problem is the fact that malicious code can cause its damage because
it has gained unauthorized access to system resources. For example, a virus program might
gather private information, such as credit card numbers, bank account balances, and
passwords, by searching the contents of your computer’s local file system. In order for Java to
enable applets to be downloaded and executed on the client computer safely, it was necessary
to prevent an applet from launching such an attack.
Java achieved this protection by confining an applet to the Java execution environment
and not allowing it access to other parts of the computer. (You will see how this is
accomplished shortly.) The ability to download applets with confidence that no harm will
be done and that no security will be breached may have been the single most innovative
aspect of Java.
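Historically, the mechanism behind this confinement was the security manager, which intercepted sensitive operations attempted by untrusted code. The sketch below (the class name SandboxDemo is ours) shows the shape of such a check; note that the security manager dates from Java 1.0 and was deprecated for removal in Java 17, so this is historical illustration rather than a pattern for new code.

// SandboxDemo.java -- illustrates the kind of check the Java execution
// environment historically performed on behalf of untrusted code such
// as applets. (The security manager was deprecated for removal in
// Java 17; this is a historical sketch.)
public class SandboxDemo {
    public static void main(String[] args) {
        SecurityManager sm = System.getSecurityManager();
        if (sm == null) {
            System.out.println("No security manager installed; code runs unconfined.");
            return;
        }
        try {
            // Under a restrictive policy, reading an arbitrary local file
            // fails this check with a SecurityException.
            sm.checkRead("/etc/passwd");
            System.out.println("Read permitted by the current policy.");
        } catch (SecurityException e) {
            System.out.println("Access denied by the sandbox: " + e.getMessage());
        }
    }
}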
Portability
Portability is a major aspect of the Internet because there are many different types of
computers and operating systems connected to it. For a Java program to run on
virtually any computer connected to the Internet, there needed to be some way to
enable that program to execute on different systems. For example, in the case of an applet, the
same applet must be able to be downloaded and executed by the wide variety of CPUs,
operating systems, and browsers connected to the Internet. It is not practical to have
different versions of the applet for different computers. The same code must work on all
computers. Therefore, some means of generating portable executable code was needed. As
you will soon see, the same mechanism that helps ensure security also helps create portability.
The History and Evolution of Java - The Creation of Java - Lesson 3
The Creation of Java
Java was conceived by James Gosling, Patrick Naughton, Chris Warth, Ed Frank, and Mike
Sheridan at Sun Microsystems, Inc. in 1991. It took 18 months to develop the first working
version. This language was initially called “Oak,” but was renamed “Java” in 1995. Between
the initial implementation of Oak in the fall of 1992 and the public announcement of Java
in the spring of 1995, many more people contributed to the design and evolution of the
language. Bill Joy, Arthur van Hoff, Jonathan Payne, Frank Yellin, and Tim Lindholm were
key contributors to the maturing of the original prototype.
Somewhat surprisingly, the original impetus for Java was not the Internet! Instead, the
primary motivation was the need for a platform-independent (that is, architecture-neutral)
language that could be used to create software to be embedded in various consumer
electronic devices, such as microwave ovens and remote controls. As you can probably
guess, many different types of CPUs are used as controllers. The trouble with C and C++
(and most other languages) is that they are designed to be compiled for a specific target.
Although it is possible to compile a C++ program for just about any type of CPU, to do so
requires a full C++ compiler targeted for that CPU. The problem is that compilers are
expensive and time-consuming to create. An easier—and more cost-efficient—solution
was needed. In an attempt to find such a solution, Gosling and others began work on a
portable, platform-independent language that could be used to produce code that would
run on a variety of CPUs under differing environments. This effort ultimately led to the
creation of Java.
About the time that the details of Java were being worked out, a second, and ultimately
more important, factor was emerging that would play a crucial role in the future of Java.
This second force was, of course, the World Wide Web. Had the Web not taken shape at
about the same time that Java was being implemented, Java might have remained a useful
but obscure language for programming consumer electronics. However, with the emergence of the World Wide Web, Java was propelled to the forefront of computer language design,
because the Web, too, demanded portable programs.
Most programmers learn early in their careers that portable programs are as elusive as they
are desirable. While the quest for a way to create efficient, portable (platform-independent)
programs is nearly as old as the discipline of programming itself, it had taken a back seat
to other, more pressing problems. Further, because (at that time) much of the computer
world had divided itself into the three competing camps of Intel, Macintosh, and UNIX,
most programmers stayed within their fortified boundaries, and the urgent need for
portable code was reduced. However, with the advent of the Internet and the Web, the
old problem of portability returned with a vengeance. After all, the Internet consists of a
diverse, distributed universe populated with various types of computers, operating systems,
and CPUs. Even though many kinds of platforms are attached to the Internet, users would
like them all to be able to run the same program. What was once an irritating but
low-priority problem had become a high-profile necessity.
By 1993, it became obvious to members of the Java design team that the problems of
portability frequently encountered when creating code for embedded controllers are also
found when attempting to create code for the Internet. In fact, the same problem that Java
was initially designed to solve on a small scale could also be applied to the Internet on a
large scale. This realization caused the focus of Java to switch from consumer electronics
to Internet programming. So, while the desire for an architecture-neutral programming
language provided the initial spark, the Internet ultimately led to Java’s large-scale success.
As mentioned earlier, Java derives much of its character from C and C++. This is by intent.
The Java designers knew that using the familiar syntax of C and echoing the object-oriented
features of C++ would make their language appealing to the legions of experienced C/C++
programmers. In addition to the surface similarities, Java shares some of the other attributes
that helped make C and C++ successful. First, Java was designed, tested, and refined by real,
working programmers. It is a language grounded in the needs and experiences of the
people who devised it. Thus, Java is a programmer’s language. Second, Java is cohesive and
logically consistent. Third, except for those constraints imposed by the Internet environment,
Java gives you, the programmer, full control. If you program well, your programs reflect it.
If you program poorly, your programs reflect that, too. Put differently, Java is not a language
with training wheels. It is a language for professional programmers.
Because of the similarities between Java and C++, it is tempting to think of Java as
simply the “Internet version of C++.” However, to do so would be a large mistake. Java has
significant practical and philosophical differences. While it is true that Java was influenced
by C++, it is not an enhanced version of C++. For example, Java is neither upwardly nor
downwardly compatible with C++. Of course, the similarities with C++ are significant, and if
you are a C++ programmer, then you will feel right at home with Java. One other point: Java
was not designed to replace C++. Java was designed to solve a certain set of problems. C++
was designed to solve a different set of problems. Both will coexist for many years to come.
As mentioned at the start of this chapter, computer languages evolve for two reasons:
to adapt to changes in environment and to implement advances in the art of programming.
The environmental change that prompted Java was the need for platform-independent
programs destined for distribution on the Internet. However, Java also embodies changes
in the way that people approach the writing of programs. For example, Java enhanced
and refined the object-oriented paradigm used by C++, added integrated support for
multithreading, and provided a library that simplified Internet access. In the final analysis,
though, it was not the individual features of Java that made it so remarkable. Rather, it was
the language as a whole. Java was the perfect response to the demands of the then newly
emerging, highly distributed computing universe. Java was to Internet programming what
C was to system programming: a revolutionary force that changed the world.
The C# Connection
The reach and power of Java continues to be felt in the world of computer language
development. Many of its innovative features, constructs, and concepts have become part
of the baseline for any new language. The success of Java is simply too important to ignore.
Perhaps the most important example of Java’s influence is C#. Created by Microsoft to
support the .NET Framework, C# is closely related to Java. For example, both share the
same general syntax, support distributed programming, and utilize the same object model.
There are, of course, differences between Java and C#, but the overall “look and feel” of
these languages is very similar. This “cross-pollination” from Java to C# is the strongest
testimonial to date that Java redefined the way we think about and use a computer language.
The History and Evolution of Java - C++: The Next Step - Lesson 2
C++: The Next Step
During the late 1970s and early 1980s, C became the dominant computer programming
language, and it is still widely used today. Since C is a successful and useful language, you
might ask why a need for something else existed. The answer is complexity. Throughout the
history of programming, the increasing complexity of programs has driven the need for
better ways to manage that complexity. C++ is a response to that need. To better understand
why managing program complexity is fundamental to the creation of C++, consider the
following.
Approaches to programming have changed dramatically since the invention of the
computer. For example, when computers were first invented, programming was done by
manually toggling in the binary machine instructions by use of the front panel. As long as
programs were just a few hundred instructions long, this approach worked. As programs grew,
assembly language was invented so that a programmer could deal with larger, increasingly
complex programs by using symbolic representations of the machine instructions. As
programs continued to grow, high-level languages were introduced that gave the programmer
more tools with which to handle complexity.
The first widespread language was, of course, FORTRAN. While FORTRAN was an
impressive first step, it is hardly a language that encourages clear and easy-to-understand
programs. The 1960s gave birth to structured programming. This is the method of programming
championed by languages such as C. The use of structured languages enabled programmers
to write, for the first time, moderately complex programs fairly easily. However, even with
structured programming methods, once a project reaches a certain size, its complexity
exceeds what a programmer can manage. By the early 1980s, many projects were pushing
the structured approach past its limits. To solve this problem, a new way to program was
invented, called object-oriented programming (OOP). Object-oriented programming is discussed
in detail later in this book, but here is a brief definition: OOP is a programming methodology
that helps organize complex programs through the use of inheritance, encapsulation, and
polymorphism.
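Although the languages under discussion here are C++ and its predecessors, the three ideas just named can be sketched compactly in Java, the language of this book (all class names below are illustrative):

// OopSketch.java -- a compact illustration of the three OOP pillars
// named above (class names are illustrative).
public class OopSketch {
    // Encapsulation: state is private; access goes through methods.
    static abstract class Shape {
        private final String name;
        Shape(String name) { this.name = name; }
        String name() { return name; }
        abstract double area();          // subclasses must define this
    }

    // Inheritance: Circle and Square reuse and extend Shape.
    static class Circle extends Shape {
        private final double r;
        Circle(double r) { super("circle"); this.r = r; }
        double area() { return Math.PI * r * r; }
    }

    static class Square extends Shape {
        private final double side;
        Square(double side) { super("square"); this.side = side; }
        double area() { return side * side; }
    }

    public static void main(String[] args) {
        // Polymorphism: the same call, shape.area(), selects the
        // correct implementation at run time.
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape shape : shapes) {
            System.out.println(shape.name() + ": " + shape.area());
        }
    }
}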
In the final analysis, although C is one of the world’s great programming languages,
there is a limit to its ability to handle complexity. Once the size of a program exceeds a
certain point, it becomes so complex that it is difficult to grasp as a totality. While the
precise size at which this occurs differs, depending upon both the nature of the program
and the programmer, there is always a threshold at which a program becomes unmanageable.
C++ added features that enabled this threshold to be broken, allowing programmers to
comprehend and manage larger programs.
C++ was invented by Bjarne Stroustrup in 1979, while he was working at Bell Laboratories
in Murray Hill, New Jersey. Stroustrup initially called the new language “C with Classes.”
However, in 1983, the name was changed to C++. C++ extends C by adding object-oriented
features. Because C++ is built on the foundation of C, it includes all of C’s features, attributes,
and benefits. This is a crucial reason for the success of C++ as a language. The invention of
C++ was not an attempt to create a completely new programming language. Instead, it was
an enhancement to an already highly successful one.
The Stage Is Set for Java
By the end of the 1980s and the early 1990s, object-oriented programming using C++ took
hold. Indeed, for a brief moment it seemed as if programmers had finally found the perfect
language. Because C++ blended the high efficiency and stylistic elements of C with the
object-oriented paradigm, it was a language that could be used to create a wide range of
programs. However, just as in the past, forces were brewing that would, once again, drive
computer language evolution forward. Within a few years, the World Wide Web and the
Internet would reach critical mass. This event would precipitate another revolution in programming.
The History and Evolution of Java - Java’s Lineage - Lesson 1
Java’s Lineage
Java is related to C++, which is a direct descendant of C. Much of the character of Java is
inherited from these two languages. From C, Java derives its syntax. Many of Java’s object-oriented
features were influenced by C++. In fact, several of Java’s defining characteristics
come from—or are responses to—its predecessors. Moreover, the creation of Java was
deeply rooted in the process of refinement and adaptation that has been occurring in
computer programming languages for the past several decades. For these reasons, this
section reviews the sequence of events and forces that led to Java. As you will see, each
innovation in language design was driven by the need to solve a fundamental problem
that the preceding languages could not solve. Java is no exception.
The Birth of Modern Programming: C
The C language shook the computer world. Its impact should not be underestimated, because
it fundamentally changed the way programming was approached and thought about. The
creation of C was a direct result of the need for a structured, efficient, high-level language
that could replace assembly code when creating systems programs. As you probably know,
when a computer language is designed, trade-offs are often made, such as the following:
- Ease-of-use versus power
- Safety versus efficiency
- Rigidity versus extensibility
Prior to C, programmers usually had to choose between languages that optimized one
set of traits or the other. For example, although FORTRAN could be used to write fairly
efficient programs for scientific applications, it was not very good for system code. And
while BASIC was easy to learn, it wasn’t very powerful, and its lack of structure made its
usefulness questionable for large programs. Assembly language can be used to produce
highly efficient programs, but it is not easy to learn or use effectively. Further, debugging
assembly code can be quite difficult.
Another compounding problem was that early computer languages such as BASIC,
COBOL, and FORTRAN were not designed around structured principles. Instead, they
relied upon the GOTO as a primary means of program control. As a result, programs
written using these languages tended to produce “spaghetti code”—a mass of tangled
jumps and conditional branches that make a program virtually impossible to understand.
While languages like Pascal are structured, they were not designed for efficiency, and failed
to include certain features necessary to make them applicable to a wide range of programs.
(Specifically, given the standard dialects of Pascal available at the time, it was not practical
to consider using Pascal for systems-level code.)
So, just prior to the invention of C, no one language had reconciled the conflicting
attributes that had dogged earlier efforts. Yet the need for such a language was pressing. By
the early 1970s, the computer revolution was beginning to take hold, and the demand for
software was rapidly outpacing programmers’ ability to produce it. A great deal of effort was
being expended in academic circles in an attempt to create a better computer language.
But, and perhaps most importantly, a secondary force was beginning to be felt. Computer
hardware was finally becoming common enough that a critical mass was being reached. No
longer were computers kept behind locked doors. For the first time, programmers were
gaining virtually unlimited access to their machines. This allowed the freedom to experiment.
It also allowed programmers to begin to create their own tools. On the eve of C’s creation,
the stage was set for a quantum leap forward in computer languages.
Invented and first implemented by Dennis Ritchie on a DEC PDP-11 running the UNIX
operating system, C was the result of a development process that started with an older
language called BCPL, developed by Martin Richards. BCPL influenced a language called
B, invented by Ken Thompson, which led to the development of C in the 1970s. For many
years, the de facto standard for C was the one supplied with the UNIX operating system and
described in The C Programming Language by Brian Kernighan and Dennis Ritchie (Prentice-
Hall, 1978). C was formally standardized in December 1989, when the American National
Standards Institute (ANSI) standard for C was adopted.
The creation of C is considered by many to have marked the beginning of the modern
age of computer languages. It successfully synthesized the conflicting attributes that had so
troubled earlier languages. The result was a powerful, efficient, structured language that
was relatively easy to learn. It also included one other, nearly intangible aspect: it was a
programmer’s language. Prior to the invention of C, computer languages were generally
designed either as academic exercises or by bureaucratic committees. C is different. It was
designed, implemented, and developed by real, working programmers, reflecting the way
that they approached the job of programming. Its features were honed, tested, thought
about, and rethought by the people who actually used the language. The result was a
language that programmers liked to use. Indeed, C quickly attracted many followers
who had a near-religious zeal for it. As such, it found wide and rapid acceptance in the
programmer community. In short, C is a language designed by and for programmers.
As you will see, Java inherited this legacy.