The Communications Decency Act
The Communications Decency Act, signed into law by President Clinton over a year ago, is clearly in need of serious revision, not only because of its vagueness, but mostly because the government is infringing on our freedom of speech, be it indecent or not. The Communications Decency Act, also known to Internet users as the CDA, aims to remove indecent or dangerous text, lewd images, and other material deemed inappropriate from public areas of the Net. The CDA is mainly out to protect children.
In the beginning, the anonymity of the Internet caused it to become a haven for the free trading of pornography. This is mainly what gives the Internet a bad name. There is also information on the Net that could be harmful to children. Instructions for making homemade explosives, found in texts such as The Jolly Roger and The Anarchist Cookbook, are easily obtained on the Net. Pedophiles (adults sexually attracted to children) also have a place to hide on the Internet, where nobody has to know their real name. As the average age of the Internet user has started to drop, it has become apparent that something has to be done about the pornography and other inappropriate material on the Net.
On February 1, 1995, Senator Exon, a Democrat from Nebraska, and Senator Gorton, a Republican from Washington, introduced the first bill towards regulating online porn. This was the first incarnation of the Telecommunications Reform Bill.
On April 7, 1995, Senator Leahy, a Democrat from Vermont, introduced bill S.714 as an alternative to the Exon/Gorton bill. It would have commissioned the Department of Justice to study the problem and determine whether additional legislation (such as the CDA) was even necessary.
The Senate passed the CDA, attached to the Telecommunications Reform Bill, on June 14, 1995, by a vote of 84-16. The Leahy bill did not pass, but it was supported by sixteen Senators who actually understood what the Internet is. Seven days later, several prominent House members, including Newt Gingrich, Chris Cox, and Ron Wyden, publicly announced their opposition to the CDA. On September 26, 1995, Senator Russ Feingold urged committee members to drop the CDA from the Telecommunications Reform Bill.
On Thursday, February 1, 1996, Congress passed the Telecommunications Reform Bill (House 414-9, Senate 91-5) with the Communications Decency Act attached. This day became known as "Black Thursday" in the Internet community. One week later, on Thursday, February 8, 1996, also known as the "Day of Protest," President Clinton signed it into law. Breaking any of the bill's provisions is punishable by up to two years in prison, a $250,000 fine, or both.
On the "Day of Protest," thousands of home pages went black as Internet citizens expressed their disapproval of the Communications Decency Act. Numerous organizations have since formed in protest of the Act, including the American Civil Liberties Union, the Voters Telecommunications Watch, the Citizens Internet Empowerment Coalition, the Center for Democracy & Technology, the Electronic Privacy Information Center, the Internet Action Group, and the Electronic Frontier Foundation. The ACLU is not just involved with Internet issues; it fights to protect the rights of many different groups (for example, gay and lesbian rights, death penalty issues, and women's rights). The ACLU is currently a party to the lawsuit Reno v. ACLU, in which it is seeking to overturn the CDA.
In addition to turning their home-page backgrounds black, Internet users adopted the Blue Ribbon to symbolize their disapproval of the CDA. The Blue Ribbons are similar to the Red Ribbons worn by AIDS supporters. The ribbon spawned "The Blue Ribbon Campaign," whose home page is the fourth most linked-to site on the Internet; only Netscape, Yahoo, and WebCrawler are linked to more often. (To be "linked to" means a site can be reached from another site.) It is hard to surf the Net without seeing a Blue Ribbon on someone's site.
On the day that President Clinton signed the CDA into law, a group of nineteen organizations, from the American Civil Liberties Union to the National Writers Union, filed suit in federal court, arguing that it restricted free speech. At the forefront of the battle against the CDA is Mike Godwin. Mike Godwin is regarded as one of the most important online-rights activists today. He is the staff counsel for the Electronic Frontier Foundation, and has "won fans and infuriated rivals with his media savvy, obsessive knowledge of the law, and knack for arguing opponents into exhaustion." Since 1990 he has written on legal issues for magazines like Wired and Internet World and spoken endlessly at universities, at public rallies, and to the national media. Although this all helped the cause, Godwin didn't become a genuine cyberspace superhero until what he calls the "great Internet sex panic of 1995." During this time, Godwin submitted testimony to the Senate Judiciary Committee, debated Christian Coalition executive director Ralph Reed on Nightline, and headed the attack on the study of online pornography.
The study of online porn became the foundation of Time magazine's controversial July 3 cover story, "On a Screen Near You: Cyberporn." Time said the study proved that pornography was "popular, pervasive, and surprisingly perverse" on the Net, but Godwin put up such a fight against the article that three weeks later the magazine ran a follow-up story admitting that the study had serious flaws.
The CDA is a bad solution, but it is a bad solution to a very real problem. As Gina Smith, a writer for Popular Science, has written, "It is absolutely true that the CDA is out of bounds in its scope and wording. As the act is phrased, for example, consenting adults cannot be sure their online conversations won't land them in jail." Even something as newsstand-friendly as the infamous Vanity Fair cover featuring a pregnant and nude (but strategically covered) Demi Moore might be considered indecent under the act, and George Carlin's famous "seven dirty words" are definitely out. CDA supporters are right when they say the Internet and online services are fertile playgrounds for pedophiles and other wackos bent on exploiting children.
Parents could simply watch over their children's shoulders the whole time they are online, but that is both an unfair and an impractical answer. There are two realistic approaches: install a software program that blocks certain sites, or raise kids so that they know better than to look at pornography. The latter would appear to be the better alternative, but it just isn't reliable: kids who are told not to do something often become only more curious. On the other hand, many parents are less technologically informed than their kids, and many would not know how to find, install, and configure programs such as CyberPatrol or NetNanny.
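At their simplest, blocking programs of this kind work by checking each requested address against a blocklist before letting the browser fetch it. The sketch below is a toy illustration of that idea; the keywords and site names are hypothetical, and real products such as CyberPatrol or NetNanny are far more sophisticated than this.

```python
# Toy sketch of the blocklist idea behind Internet filtering software.
# The blocked words and hostnames below are hypothetical examples only.
BLOCKED_KEYWORDS = {"porn", "xxx"}
BLOCKED_SITES = {"badsite.example.com"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host or text matches the blocklist."""
    # crude hostname extraction: text after '//' up to the next '/'
    host = url.split("//")[-1].split("/")[0].lower()
    if host in BLOCKED_SITES:
        return True
    return any(word in url.lower() for word in BLOCKED_KEYWORDS)

print(is_blocked("http://badsite.example.com/index.html"))  # True
print(is_blocked("http://www.loc.gov/"))                    # False
```

A filter this naive both over-blocks and under-blocks, which is exactly why the essay's point stands: the software helps only if parents can install and tune it.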
The future of the CDA seems to be fairly evident. It doesn't look like the CDA is going to be successful. In addition to the Act being too far reaching in its powers, it is virtually unenforceable. As with anything in print, much of the material on the Internet is intelligent and worthy of our attention, but on the other hand, some of it is very vulgar. The difficulty in separating the two rests in the fact that much of the Internet's value lies in its freedom from regulation. As Father Robert A. Sirico puts it, "To allow the federal government to censor means granting it the power to determine what information we can and cannot have access to."
Temptations to sin will always be with us and around us so long as we live in this world.
Thursday, 27 September 2012
Software Piracy
Software piracy is the failure of a licensed user to adhere to the conditions of a software license, or the unauthorized use or reproduction of copyrighted software by a person or entity that has not been licensed to use it. Software piracy has become a household word and a household crime, and it has had a great effect on the software industry. It is a problem that can only be solved by the choices of each individual.
The computer software industry is one of the great business success stories of recent history, with healthy increases in both hardware and software sales around the world. However, software piracy threatens the industry's economic future. According to estimates by the U.S. Software Publishers Association, as much as $7.5 billion of American software may be illegally copied and distributed annually worldwide. These copies work as well as the originals and sell for significantly less money. Piracy is relatively easy, and only the largest rings of distributors are usually caught. In addition, software pirates know that they are unlikely to serve hard jail time when prisons are overcrowded with people convicted of more serious crimes. All told, the software industry loses more than $15.2 billion annually worldwide to software piracy.
Software piracy costs the industry:
$482 every second
$28,900 every minute
$1.7 million every hour
$41.6 million every day
$291.5 million every week
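These per-period figures are internally consistent: the $482-per-second rate compounds to the $15.2 billion annual loss quoted above. The dollar amounts are the article's own; only the arithmetic check below is added.

```python
# Sanity-check the quoted piracy-cost rates against one another.
per_second = 482

per_minute = per_second * 60                 # 28,920 (rounded to $28,900)
per_hour = per_minute * 60                   # 1,735,200 (~$1.7 million)
per_day = per_hour * 24                      # 41,644,800 (~$41.6 million)
per_week = per_day * 7                       # 291,513,600 (~$291.5 million)
per_year = per_second * 60 * 60 * 24 * 365   # 15,200,352,000 (~$15.2 billion)

print(f"${per_year:,} per year")  # prints $15,200,352,000 per year
```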
To understand software piracy, one must get inside the mind of the pirate. People who wouldn't think of sneaking merchandise out of a store or robbing a house regularly obtain copies of computer programs they haven't paid for. The pirate has a set of excuses for his actions: prices are too high; the company doesn't provide decent support; he's only going to use the program once in a while. What really makes software piracy seem less bad than other kinds of theft, though, is that nothing is physically taken. There is no immediate effect on the inventory or productive capacity of the creator of a piece of software if someone 500 miles away copies a disk and starts using it.
People tend to think of property as a material thing, and thus have a hard time regarding a computer program as property. However, property is not a concept pertaining to matter alone. Ownership is a concept which comes out of the fact that people live by creating things of value for their own use or for trade with others. Creation does not mean making matter, but rather changing the form of matter along with an idea and a purpose. Most often, the cost of creating goods lies in the production of individual items. With software, the reverse is true: the cost of producing copies is negligible compared with the cost of constructing the form of the product.
In both cases, though, the only way a producer can benefit from offering his product in trade is for others to respect his right to it and to obtain it only on his terms. If people are going to make the production of software a full-time occupation, they should expect a return for their efforts. If they receive no benefit, they will have to switch to a different sort of activity if they want to keep working.
The thief, though, will seldom be caught and punished; his particular act of copying isn't likely to push a software publisher over the edge. In most cases, people can openly talk about their acts of piracy without suffering criticism. However, there is a more basic deterrent to theft than the risk of getting caught. A person can fake what he is to others, but not to himself. He knows that he is depending on other people's ignorance or willingness to pretend they haven't noticed. He may not feel guilty because of this, but he will always feel helpless and out of control. If he attempts to rationalize his actions, he becomes dependent on his own self-ignorance as well.
Thieves who abandon honesty often fall back on the idea of being smart. They think it's stupid to buy something when they can just take it. They know that their own cleverness works only because of the stupidity of others who pay for what they buy. The thieves are counting on the failure of the very people whose successful efforts they use.
The best defense against software piracy lies neither in physical barriers to copying nor in stiffer penalties. The main deterrent to theft in stores is not the presence of guards and magnetic detectors, but the fact that most people have no desire to steal. The best way to stop piracy is to instill a similar frame of mind among software users. This means breaking down the web of excuses by which pirates justify their actions, leaving them to recognize those actions for what they are. Ultimately, this is the most important defense against any violation of people's rights; without an honest majority, no amount of effort by the police will be effective.
In almost all countries of the world, there are statutes, criminal and civil, which provide for enforcement of copyright in software programs. The criminal penalties range from fines to jail terms, or both. Civil penalties may reach as high as $100,000 per infringement. In many countries, companies as well as individuals may face civil and criminal sanctions.
There are several different types of software piracy. Networking is a major cause of software piracy. Most software licenses are written so that the program can be installed on only one machine and used on only one machine at a time; with some network configurations, however, the program can be loaded on several machines at once, which violates the agreement. On some networks, transporting the software back and forth is slow enough that copying the program onto each machine is much faster, and this too can violate the license agreement.
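The "one machine at a time" clause is usually enforced by a license server that hands out a fixed number of concurrent seats. The class below is a hypothetical sketch of that idea, not a real license manager; commercial systems are far more elaborate.

```python
# Hypothetical license server illustrating concurrent-seat enforcement.
class LicenseServer:
    def __init__(self, seats: int):
        self.seats = seats    # concurrent uses the license allows
        self.in_use = set()   # machines currently running the program

    def checkout(self, machine: str) -> bool:
        """Grant a seat if one is free; refuse otherwise."""
        if machine in self.in_use:
            return True
        if len(self.in_use) < self.seats:
            self.in_use.add(machine)
            return True
        return False  # running the program anyway violates the license

    def release(self, machine: str) -> None:
        self.in_use.discard(machine)

server = LicenseServer(seats=1)
print(server.checkout("pc-1"))  # True: the single seat is free
print(server.checkout("pc-2"))  # False: only one machine at a time
```

A site that loads the program on several machines without going through such a check is in exactly the violation the paragraph describes.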
End-user copying is a form of piracy in which individuals within organisations copy software programs from co-workers, friends, and relatives. This is the most prevalent form of software theft. Some refer to end-user copying as 'disk swapping'.
Hard disk loading happens when unlicensed software is preloaded onto computers that you buy. Generally you, as the customer, will have an original program on your hard drive that you may or may not have paid for. However, you will not receive the accompanying disks or documentation, and you will therefore not be entitled to technical support or upgrades. This practice is often used by the dealer as a sales feature or an added incentive to entice the sale.
Software rental is a form of piracy that takes place when an individual rents a computer with software loaded on it, or the software itself, from a rental shop or computer retailer. The license agreement clearly states that the purchaser is prohibited from renting out the software. This often takes the form of a rental followed by a re-stocking charge when the software is returned to the retailer.
Counterfeit software involves both low-quality disks and high-quality fakes that are extremely close in appearance to the original software.
Stealing via bulletin boards is one of the fastest growing means of software theft. It involves downloading programs onto computers via a modem.
OEM unbundling can occur at either the Original Equipment Manufacturer (OEM) level or at the retailer. Unbundling involves separating OEM software from the hardware it is licensed to be sold with. The product is clearly marked 'For Distribution With New PC Hardware Only' and is designed so that it cannot be sold on the retail shelf. The customer can run into support issues, as it is the OEM that is required to provide support for this type of software. When you buy unbundled software, you take a bigger risk of purchasing a counterfeit product.
In conclusion, software piracy has had a major impact on the software industry. Economically it has cost the industry billions of dollars each year and there is no sign that this will change in the near future. No amount of penalties or policing will stop the trend of software piracy. Each individual must develop their own moral standards so that they do not add to the problem.
Why you should purchase a PC
Computers are capable of doing more things every year. There are many advantages to
knowing how to use a computer, and it is important that everyone learn to use one
properly. Using the information I have gathered, along with my own knowledge from my
twelve years of computer experience, I will explain the many advantages of owning a
computer and knowing how to use a PC, and I will attempt to explain why you should
purchase one and learn to use it properly.
Webster's New World Compact Dictionary defines a computer as "an electronic machine that
performs rapid, complex calculations or compiles and correlates data" ("Computer."). While this
definition gives one a very narrow view of what a computer is capable of doing, it does describe
the basic ideas of what I will expand upon. We have been living through an age of computers for a
short while now and there are already many people world wide that are computer literate.
According to Using Computers: A Gateway to Information World Wide Web Edition, over 250
million personal computers (PCs) were in use by 1995, and one out of every three homes had a
PC (Shelly, Cashman, & Waggoner, 138).
Computers are easy to use when you know how they work and what the parts are. All
computers perform the four basic operations of the information processing cycle: input, process,
output, and storage. Data, any kind of raw facts, is required for the processing cycle to occur.
Data is processed into useful information by the computer hardware. Most computer systems
consist of a monitor, a system unit which contains the Central Processing Unit (CPU), a
floppy-disk drive, a CD-ROM drive, speakers, a keyboard, a mouse, and a printer. Each
component takes a part in one of the four operations.
The keyboard and mouse are input devices that a person uses to enter data into the computer.
From there the data goes to the system unit where it is processed into useful information the
computer can understand and work with. Next the processed data can be sent to storage devices
or to output devices. Normally output is sent to the monitor and stored on the hard-disk or to a
floppy-disk located internal of the system unit. Output can also be printed out through the printer,
or can be played through the speakers as sound depending on the form it takes after it is
processed.
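The four-stage cycle just described can be sketched in miniature. The functions and example data below are illustrative stand-ins, not a real system; the point is only to show data flowing from input through processing to output and storage.

```python
# Minimal sketch of the information processing cycle:
# input -> process -> output -> storage.
storage = {}  # stands in for the hard disk or a floppy disk

def input_stage() -> str:
    # e.g., characters typed at the keyboard or clicks from the mouse
    return "raw facts"

def process_stage(data: str) -> str:
    # the system unit (CPU) transforms raw data into useful information
    return data.upper()

def output_stage(info: str) -> None:
    # e.g., shown on the monitor or sent to the printer
    print(info)

def storage_stage(info: str, name: str = "report") -> None:
    # e.g., saved for later retrieval
    storage[name] = info

info = process_stage(input_stage())
output_stage(info)   # prints RAW FACTS
storage_stage(info)
```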
Once you have a basic understanding of the parts and operations of a computer, you can
soon discover what computers can do to make life easier and more enjoyable. Being
computer literate allows you to use many powerful software applications and utilities to
do work for school, business, or pleasure. Microsoft is the current leading producer of
many of these applications and utilities.
Microsoft produces software called operating systems that manage and regulate the
information processing cycle. The oldest of these is MS-DOS, a single-user system that
uses typed commands to initiate tasks. Currently Microsoft offers operating systems that
use visual cues such as icons to help enter data and run programs. These operating
systems run under an environment called a Graphical User Interface (GUI). They include
Windows 3.x, Windows 95, and Windows NT Workstation. Windows 95 is geared more
toward home use for productivity and game playing, whereas Windows NT is more
business-oriented.
The article entitled "Mine, All Mine" in the June 5, 1995 issue of Time stated that 8 out of 10
PCs worldwide would not be able to start or run if it were not for Microsoft's operating systems
like MS-DOS, Windows 95, and Windows NT (Elmer-Dewitt, 1995, p. 50).
By no means has Microsoft limited itself to operating systems alone. Microsoft has also
produced a software package called Microsoft Office that is very useful in creating reports,
databases, spreadsheets, presentations, and other documents for school and work. Microsoft Office:
Introductory Concepts and Techniques provides a detailed, step-by-step approach to the four
programs included in Microsoft Office.
Included in this package are Microsoft Word, Microsoft Excel, Microsoft Access, and
Microsoft PowerPoint. Microsoft Word is a word processing program that makes creating
professional looking documents such as announcements, resumes, letters, address books, and
reports easy to do. Microsoft Excel, a spreadsheet program, has features for data organization,
calculations, decision making, and graphing. It is very useful in making professional looking
reports. Microsoft Access, a powerful database management system, is useful in creating and
processing data in a database. Microsoft PowerPoint is ". . . a complete presentation graphics
program that allows you to produce professional looking presentations" (Shelly, Cashman, &
Vermaat, 2). PowerPoint is flexible enough so that you can create electronic presentations,
overhead transparencies, or even 35mm slides.
Microsoft also produces entertainment and reference programs. "Microsoft's Flight Simulator
is one of the best selling PC games of all time" (Elmer-Dewitt, 50). Microsoft's Encarta is an
electronic CD-ROM encyclopedia that makes for a fantastic alternative to 20 plus volume book
encyclopedias. In fact, it is so popular that it outsells the Encyclopedia Britannica. These powerful
business, productivity, and entertainment applications are just the beginning of what you can do
with a PC.
Knowing how to use the Internet will allow you access to a vast resource of facts, knowledge,
information, and entertainment that can help you do work and have fun. According to Netscape
Navigator 2 running under Windows 3.1, "the Internet is a collection of networks, each of which
is composed of a collection of smaller networks" (Shelly, Cashman, & Jordan, N2). Information
can be sent over the Internet through communication lines in the form of graphics, sound, video,
animation, and text. These forms of computer media are known as hypermedia. Hypermedia is
accessed through hypertext links, which are pointers to the computer where the hypermedia is
stored. The World Wide Web (WWW) is the collection of these hypertext links throughout the
Internet. Each computer that contains hypermedia on the WWW is known as a Web site and has
Web pages set up for users to access the hypermedia. Browsers such as Netscape allow people to
"surf the net" and search for the hypermedia of their choice.
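A hypertext link, then, is just a pointer embedded in a page, and following such pointers is what a browser automates. As a rough illustration, the short program below uses Python's standard-library HTML parser to pull the link targets out of a small, hypothetical Web page.

```python
# Extract hypertext link targets (href values) from a tiny sample page.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # anchor tags <a href="..."> are the hypertext links
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# a hypothetical Web page containing one link to a piece of hypermedia
page = '<html><body><a href="http://www.example.com/photo.gif">A photo</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['http://www.example.com/photo.gif']
```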
There are millions of examples of hypermedia on the Internet. You can find art, photos,
information on business, the government, and colleges, television schedules, movie reviews, music
lyrics, online news and magazines, sports sites of all kinds, games, books, and thousands of other
hypermedia on the WWW. You can send electronic mail (E-Mail), chat with other users around
the world, buy airline, sports, and music tickets, and shop for a house or a car. All of this, and
more, provides one with a limitless supply of information for research, business, entertainment, or
other personal use. Online services such as America Online, Prodigy, or CompuServe make it
even easier to access the power of the Internet. The Internet alone is almost reason enough to
become computer literate, but there is still much more that computers can do.
Knowing how to use a computer allows you to do a variety of things in several different ways.
One of the most popular uses for computers today is playing video games. With a PC you can
play card games, simulation games, sport games, strategy games, fighting games, and adventure
games. Today's technology provides the ultimate experiences in color, graphics, sound, music,
full motion video, animation, and 3D effects. Computers have also become increasingly useful in
the music, film, and television industry. Computers can be used to compose music, create sound
effects, create special effects, create lifelike 3D animation, and incorporate pre-existing movie and
TV footage into new programs, as seen in the movie Forrest Gump. All this and more can be
done with computers.
There is truly no time like the present to become computer literate. Computers will be doing
even more things in the future and will become unavoidable. Purchasing and learning about a new
PC now will help put PC's into the other two-thirds of the homes worldwide and make the
transition into a computer age easier.
Works Cited
"Computer." Webster's New World Compact School and Office Dictionary. 1995.
Elmer-Dewitt, P. "Mine, All Mine." Time Jun. 1995: 46-54.
Shelly, G., T. Cashman, and K. Jordan. Netscape Navigator 2 Running Under Windows 3.1.
Danvers: Boyd & Fraser Publishing Co., 1996.
Shelly, G., T. Cashman, and M. Vermaat. Microsoft Office Introductory Concepts and
Techniques. Danvers: Boyd & Fraser Publishing Co., 1995.
Shelly, G., T. Cashman, G. Waggoner, and W. Waggoner. Using Computers: A Gateway to
Information World Wide Web Edition. Danvers: Boyd & Fraser Publishing Co., 1996.
There are many advantages to knowing how to use a computer, and it is important that everyone knows how to use one properly. Using the information I have gathered, along with my own knowledge from my 12 years of computer experience, I will explain the advantages of owning a computer and knowing how to use a PC, and I will attempt to explain why you should purchase a computer and learn how to use one properly.
Webster's New World Compact Dictionary defines a computer as "an electronic machine that performs rapid, complex calculations or compiles and correlates data" ("Computer"). While this definition gives one a very narrow view of what a computer is capable of doing, it does describe the basic ideas I will expand upon. We have been living through an age of computers for a short while now, and there are already many people worldwide who are computer literate. According to Using Computers: A Gateway to Information World Wide Web Edition, over 250 million personal computers (PCs) were in use by 1995, and one out of every three homes had a PC (Shelly, Cashman, & Waggoner 138).
Computers are easy to use when you know how they work and what the parts are. All
computers perform the four basic operations of the information processing cycle: input, process,
output, and storage. Data, any kind of raw facts, is required for the processing cycle to occur.
Data is processed into useful information by the computer hardware. Most computer systems
consist of a monitor, a system unit which contains the Central Processing Unit (CPU), a
floppy-disk drive, a CD-ROM drive, speakers, a keyboard, a mouse, and a printer. Each
component takes a part in one of the four operations.
The keyboard and mouse are input devices that a person uses to enter data into the computer. From there the data goes to the system unit, where it is processed into useful information the computer can understand and work with. The processed data can then be sent to storage devices or to output devices. Normally output is sent to the monitor and stored on the hard disk or on a floppy disk inside the system unit. Output can also be printed through the printer, or played through the speakers as sound, depending on the form it takes after processing.
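The four-step cycle just described can be pictured as a toy program; the function and the device stand-ins below are illustrative inventions, not any real system's API.

```python
# Toy model of the information processing cycle: input -> process ->
# output and storage. Everything here is a stand-in for illustration.

def process_cycle(raw_data):
    # Input: raw facts enter the system, e.g. typed at the keyboard.
    data = raw_data
    # Process: the system unit turns raw data into useful information.
    information = data.strip().title()
    # Storage: keep the result (a list stands in for the hard disk).
    storage = [information]
    # Output: hand the result to an output device (here, the caller).
    return information, storage

info, disk = process_cycle("  hello world  ")
print(info)   # Hello World
```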
Once you have grasped a basic understanding of the parts and operations of a computer, you can soon discover what you can do with computers to make life easier and more enjoyable.
Being computer literate allows you to use many powerful software applications and utilities to do
work for school, business, or pleasure. Microsoft is the current leading producer of many of these
applications and utilities.
Microsoft produces software called operating systems that manage and regulate the information processing cycle. The oldest of these is MS-DOS, a single-user system that uses typed commands to initiate tasks. Currently Microsoft offers operating systems that use visual cues such as icons to help users enter data and run programs. These operating systems run under an environment called a Graphical User Interface (GUI). They include Windows 3.x, Windows 95, and Windows NT Workstation. Windows 95 is geared more toward home productivity and game playing, whereas Windows NT is more business-oriented. The article "Mine, All Mine" in the June 5, 1995 issue of Time stated that 8 out of 10 PCs worldwide would not be able to start or run were it not for Microsoft's operating systems like MS-DOS, Windows 95, and Windows NT (Elmer-Dewitt 50).
By no means has Microsoft limited itself to operating systems alone. Microsoft has also produced a software package called Microsoft Office that is very useful in creating reports, databases, spreadsheets, presentations, and other documents for school and work. Microsoft Office: Introductory Concepts and Techniques provides a detailed, step-by-step approach to the four programs included in Microsoft Office.
Included in this package are Microsoft Word, Microsoft Excel, Microsoft Access, and
Microsoft PowerPoint. Microsoft Word is a word processing program that makes creating
professional looking documents such as announcements, resumes, letters, address books, and
reports easy to do. Microsoft Excel, a spreadsheet program, has features for data organization,
calculations, decision making, and graphing. It is very useful in making professional looking
reports. Microsoft Access, a powerful database management system, is useful in creating and
processing data in a database. Microsoft PowerPoint is "... a complete presentation graphics program that allows you to produce professional looking presentations" (Shelly, Cashman, & Vermaat 2). PowerPoint is flexible enough that you can create electronic presentations, overhead transparencies, or even 35mm slides.
Microsoft also produces entertainment and reference programs. "Microsoft's Flight Simulator is one of the best selling PC games of all time" (Elmer-Dewitt 50). Microsoft's Encarta is an electronic CD-ROM encyclopedia that makes a fantastic alternative to 20-plus-volume print encyclopedias. In fact, it is so popular that it outsells the Encyclopedia Britannica. These powerful business, productivity, and entertainment applications are just the beginning of what you can do with a PC.
Knowing how to use the Internet will allow you access to a vast resource of facts, knowledge,
information, and entertainment that can help you do work and have fun. According to Netscape Navigator 2 Running Under Windows 3.1, "the Internet is a collection of networks, each of which is composed of a collection of smaller networks" (Shelly, Cashman, & Jordan N2). Information
can be sent over the Internet through communication lines in the form of graphics, sound, video,
animation, and text. These forms of computer media are known as hypermedia. Hypermedia is
accessed through hypertext links, which are pointers to the computer where the hypermedia is
stored. The World Wide Web (WWW) is the collection of these hypertext links throughout the
Internet. Each computer that contains hypermedia on the WWW is known as a Web site and has
Web pages set up for users to access the hypermedia. Browsers such as Netscape allow people to
"surf the net" and search for the hypermedia of their choice.
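The "pointer" idea behind a hypertext link can be unpacked with Python's standard library: the host part names the computer (the Web site) and the path names the hypermedia stored on it. The URL below is a made-up example.

```python
from urllib.parse import urlparse

# A hypothetical link; urlparse splits it into its pointer parts.
link = "http://www.example.edu/pages/art/photo1.html"
parts = urlparse(link)

print(parts.netloc)  # www.example.edu - the computer (Web site)
print(parts.path)    # /pages/art/photo1.html - the hypermedia on it
```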
There are millions of examples of hypermedia on the Internet. You can find art, photos,
information on business, the government, and colleges, television schedules, movie reviews, music
lyrics, online news and magazines, sports sites of all kinds, games, books, and thousands of other
hypermedia on the WWW. You can send electronic mail (E-Mail), chat with other users around
the world, buy airline, sports, and music tickets, and shop for a house or a car. All of this, and
more, provides one with a limitless supply of information for research, business, entertainment, or
other personal use. Online services such as America Online, Prodigy, or CompuServe make it
even easier to access the power of the Internet. The Internet alone is almost reason enough to
become computer literate, but there is still much more that computers can do.
Knowing how to use a computer allows you to do a variety of things in several different ways.
One of the most popular uses for computers today is playing video games. With a PC you can play card games, simulation games, sports games, strategy games, fighting games, and adventure games. Today's technology provides the ultimate experiences in color, graphics, sound, music,
full motion video, animation, and 3D effects. Computers have also become increasingly useful in the music, film, and television industries. Computers can be used to compose music, create sound effects, create special effects, create lifelike 3D animation, and insert previously existing movie and TV footage into new productions, as seen in the movie Forrest Gump. All this and more can be done with computers.
There is truly no time like the present to become computer literate. Computers will be doing even more in the future and will become unavoidable. Purchasing and learning about a new PC now will help put PCs into the other two-thirds of homes worldwide and make the transition into the computer age easier.
Works Cited
"Computer." Webster's New World Compact School and Office Dictionary. 1995.
Elmer-Dewitt, P. "Mine, All Mine." Time 5 Jun. 1995: 46-54.
Shelly, G., T. Cashman, and K. Jordan. Netscape Navigator 2 Running Under Windows 3.1.
Danvers: Boyd & Fraser Publishing Co., 1996.
Shelly, G., T. Cashman, and M. Vermaat. Microsoft Office Introductory Concepts and
Techniques. Danvers: Boyd & Fraser Publishing Co., 1995.
Shelly, G., T. Cashman, G. Waggoner, and W. Waggoner. Using Computers: A Gateway to
Information World Wide Web Edition. Danvers: Boyd & Fraser Publishing Co., 1996.
Why ARJ
Computer studies.
WHY_ARJ.DOC Jan 1997
This document describes the benefits of ARJ. ARJ is now a trendsetter
among archivers, with other archivers following suit.
You can find reviews of ARJ in the following magazine articles:
Computer Persönlich, June 12, 1991, Leader of the Pack, Bernd
Wiebelt and Matthias Fichtner. In this German magazine, ARJ 2.0 was
named Test Sieger (Test Winner) over six other archivers including
PKZIP and LHA. Compression, speed, documentation, and features were
compared.
PC Sources, July 1991, Forum, Barry Brenesal, "A new challenger, ARJ
2.0, not only offers the speed of PKZIP, but also has the best
compression rate of the bunch."
Computer Shopper, September 1991, Shells, Bells, and Files:
Compressors for All Cases, Craig Menefee. "ARJ ... is extremely fast
and produces excellent compression; it ... has a rich set of options.
... This is a mature technology, and any of these programs will do a
fine and reliable job."
PC Magazine, October 15, 1991, Squeeze Play, Barry Simon. "Jung has
combined that foundation with academic research to produce an
impressive product. ... If your main criterion is compressed size,
ARJ will be one of your two main contenders, along with LHA."
SHAREWARE Magazine, Nov-Dec 1991, Fall Releases, Joseph Speaks. "Don't
tell the creators of ARJ that PKZIP is the standard for data
compression. They probably already know. But that hasn't stopped
them from creating a data compression utility that makes everyone -
even the folks at PKWare - sit up and take notice. ... but compression
statistics don't tell the whole story. The case for using ARJ is
strengthened by new features it debuts."
BOARDWATCH Magazine, December 1991, ARCHIVE/COMPRESSION UTILITIES.
"This year's analysis rendered a surprise winner. Robert K. Jung's
ARJ Version 2.22 is a relatively new compression utility that offers
surprising performance. The program emerged on the scene within the
past year and the 2.22 version was released in October 1991. It rated
number one on .EXE and database files and number two behind LHarc
Version 2.13 in our directory of 221 short text files."
INFO'PC, October 1992, Compression de données: 6 utilitaires du
domaine public, Thierry Platon. In this article, the French magazine
awarded ARJ 2.20 the Certificat de Qualification Labo-tests InfoPC.
PC Magazine, March 16, 1993, PKZIP Now Faster, More Efficient,
Barry Simon. "One of the more interesting features is the ability to
have a .ZIP file span multiple floppy disks, but this feature is not
nearly as well implemented as in ARJ."
ARJ FEATURES:
1) Registered users receive technical support from a full-time
software author with over FIFTEEN years of experience in
technical support and software programming. And YES, ARJ is a
full-time endeavor for our software company. ARJ and REARJ have
proven to be two of the most reliable archiver products. We
test our BETA releases with the help of thousands of users.
2) ARJ provides excellent size compression and practical speed
compared to the other products currently available on the PC.
ARJ is particularly strong compressing databases, uncompressed
graphics files, and large documents. One user reported that in
compressing a 25 megabyte MUMPS medical database, ARJ produced
a compressed file of size 0.17 megabytes while LHA 2.13 and
PKZIP 1.10 produced a compressed file of 17 plus megabytes.
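The claim that databases compress especially well rests on their repetitive structure. A minimal sketch using Python's zlib (not ARJ's own algorithm) shows repetitive, record-like data shrinking far more than random bytes:

```python
import os
import zlib

# A repetitive, database-like stream versus incompressible random bytes.
record = b"PATIENT;SMITH,JOHN;WARD 3;ADMITTED 1991-06-12;\n"
database = record * 2000                  # ~94 KB of repeated records
random_data = os.urandom(len(database))   # same size, no structure

db_packed = zlib.compress(database, 9)
rnd_packed = zlib.compress(random_data, 9)

# Repetitive data shrinks drastically; random data barely changes size.
print(len(database), len(db_packed))
print(len(random_data), len(rnd_packed))
```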
3) Of the leading archivers, only ARJ provides the capability of
archiving files to multiple volume archives no matter what the
destination media. ARJ can archive files directly to diskettes
no matter how large or how numerous the input files are and
without requiring EXTRA disk space.
This feature makes ARJ (DEARJ) especially suitable for
distributing large software packages without the concerns about
fitting entire files on one diskette. ARJ will automatically
split files when necessary and will reassemble them upon
extraction without using any EXTRA disk space.
This multiple volume feature of ARJ makes it suitable as a "cheap"
backup utility. ARJ saves pathname information, file date-time
stamps, and file attributes in the archive volumes. ARJ can also
create an index file with information about the contents of each
volume. For systems with multiple drives, ARJ can be configured
to save the DRIVE letter information, too. Files contained
entirely within one volume are easily extracted using just the one
volume. There is no need to always insert the last diskette of
the set. In addition, the ARJ data verification facility unique
to ARJ among archivers helps ensure reliable backups.
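The multiple-volume idea can be sketched in a few lines: cut the archive stream into diskette-sized pieces, then concatenate them back on extraction. Real ARJ volumes also carry headers, CRCs, and file metadata, which this toy version omits.

```python
def split_volumes(data, volume_size):
    # Cut the archive into fixed-size "diskette" volumes.
    return [data[i:i + volume_size]
            for i in range(0, len(data), volume_size)]

def join_volumes(volumes):
    # Reassemble the volumes in order during extraction.
    return b"".join(volumes)

archive = b"x" * 3_500_000                    # pretend 3.5 MB archive
volumes = split_volumes(archive, 1_440_000)   # 1.44 MB "diskettes"
print(len(volumes))                           # 3
assert join_volumes(volumes) == archive
```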
4) ARJ's many commands and options give the user outstanding
flexibility in archiver usage. No other leading PC
archiver gives you that flexibility.
Here are some examples of ARJ's flexibility.
a) Search archives for text data without extracting the
archives to disk.
b) Save drive letter and pathname information.
c) Re-order the files within an ARJ archive.
d) Merge two or more ARJ archives without re-compressing files.
e) Extract files directly to DOS devices.
f) Synchronize an archive and a directory of files with just a
few commands.
g) Compare the contents of an archive and a directory of files
byte for byte without extracting the archive to disk.
h) Allow duplicates of a file to be archived producing
generations (versions) of a file within an archive.
i) Display archive creation and modification date and time.
j) And much more.
5) ARJ provides ARJ archive compatibility from revision 1.00 to now.
In other words, ARJ version 1.00 can extract the files from an
archive created by the current version of ARJ and vice-versa.
6) ARJ provides the facility to store EMPTY directories within its
archives. This makes it easier to do FULL backups and also to
distribute software products that come with EMPTY directories.
7) Both ARJ self-extracting modules provide default pathname support.
That means that you can build self-extracting archives of software
directories containing sub-directories. The end user of the
self-extracting archive does not have to type any command line
options to restore the full directory structure of the software.
This greatly simplifies software distribution.
8) The ARJ archive data structure, with its header structure and 32
bit CRC, provides excellent archive stability and recovery
capabilities. In addition, ARJ is the only archiver that allows
you to test an archive during the archiving process. With other
archivers, you may have already deleted the input files with a
"move" command before you could test the built archive. The test
feature also allows one to select an actual byte for byte file
compare with the original input files. This is especially useful
for verifying multi-megabyte files, where a 32 bit CRC compare
would not provide sufficient reliability.
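The two verification levels described above can be sketched with Python's standard zlib CRC-32; ARJ's own header format is not reproduced here.

```python
import zlib

original = b"some multi-megabyte file contents" * 1000
stored_crc = zlib.crc32(original)   # 32-bit CRC kept in the archive

def crc_check(data, expected_crc):
    # Fast check: recompute the 32-bit CRC and compare.
    return zlib.crc32(data) == expected_crc

def byte_compare(data, reference):
    # Thorough check: compare every byte with the original input file.
    return data == reference

extracted = original                 # what extraction produced
print(crc_check(extracted, stored_crc))    # True
print(byte_compare(extracted, original))   # True
```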
9) ARJ provides an optional security envelope facility to "lock" ARJ
archives with a unique envelope signature. A "locked" ARJ
archive cannot be modified by ARJ or other programs without
destroying the envelope signature. This provides some level of
assurance to the user receiving a "locked" ARJ archive that the
contents of the archive are intact as the "signer" intended.
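The "locked" archive amounts to a tamper-evident seal. ARJ's actual envelope scheme is its own; the sketch below substitutes an HMAC purely to illustrate the principle, and the key name is invented.

```python
import hashlib
import hmac

signer_key = b"registered-user-key"   # hypothetical signing key

def lock(archive_bytes, key):
    # Seal the archive by appending a signature over its contents.
    sig = hmac.new(key, archive_bytes, hashlib.sha256).digest()
    return archive_bytes + sig

def is_intact(locked_bytes, key):
    # Recompute the signature; any modification breaks the match.
    body, sig = locked_bytes[:-32], locked_bytes[-32:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

locked = lock(b"archive contents", signer_key)
print(is_intact(locked, signer_key))                 # True
print(is_intact(b"X" + locked[1:], signer_key))      # False
```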
10) ARJ has MS-DOS 3.x international language support. This makes ARJ
more convenient to use with international alphabets.
11) ARJ has many satisfied users in countries all over the world. ARJ
customers include the US government and many leading companies
including Lotus Development Corp.
VR 2
An Insight Into Virtual Reality
Virtual Reality is the creation of a highly interactive, computer-based multimedia environment in which the user becomes a participant with the computer in a "virtually real" world.
We are living in an era characterized by 3D virtual systems created by computer graphics. In the concept called Virtual Reality (VR), the virtual reality engineer combines computer, video, image-processing, and sensor technologies so that a human can enter and interact with spaces generated by computer graphics.
In 1969-70, an MIT scientist went to the University of Utah, where he began to work with vector generated graphics. He built a see-through helmet that used television screens and half-silvered mirrors, so that the environment was visible through the TV displays. It was not yet designed to provide a surrounding environment. It was not until the mid '80s that virtual reality systems became more defined. The AMES contract, started in 1985, produced the first glove in February 1986. The glove is made of thin Lycra and is fitted with 15 sensors that monitor finger flexion, extension, hand position, and orientation. It is connected to a computer through fiber optic cables. Sensor inputs enable the computer to generate an on-screen image of the hand that follows the operator's hand movements. The glove also has miniature vibrators in the fingertips to provide feedback to the operator from grasped virtual objects. Therefore, driven by the proper software, the system allows the operator to interact by grabbing and moving a virtual object within a simulated room, while experiencing the "feel" of the object.
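The sensor-to-image step can be pictured with a small sketch; the value range and angle scaling below are invented for illustration and are not the real glove's specification.

```python
SENSORS_PER_HAND = 15   # matches the 15 flexion sensors described above

def sensors_to_angles(readings):
    # Map each normalized flexion reading in [0.0, 1.0] to a joint
    # angle between 0 and 90 degrees for the on-screen hand model.
    if len(readings) != SENSORS_PER_HAND:
        raise ValueError("expected one reading per sensor")
    return [round(r * 90.0, 1) for r in readings]

# A relaxed hand: every sensor reports one-third flexion.
angles = sensors_to_angles([1 / 3] * SENSORS_PER_HAND)
print(angles[0])   # 30.0
```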
The virtual reality line includes the Datasuit and the Eyephone. The Datasuit is an instrumented full-body garment that enables full-body interaction with a computer-constructed virtual world. In one use, this product is worn by film actors to give realistic movement to animated characters in computer-generated special effects. The Eyephone is a head-mounted stereo display that shows a computer-made virtual world in full color and 3D.
The Eyephone technology is based on an experimental Virtual Interface Environment Workstation (VIEW) design. VIEW is a head-mounted stereoscopic display system with two 3.9 inch television screens, one for each eye. The display can be a computer generated scene or a real environment sent by remote video cameras. Sound effects delivered to the headset increase the realism.
The glove and its software were intended for such applications as surgical simulation, or "3D virtual surgery," for medical students. In the summer of 1991, US trainee surgeons were able to practice leg operations without having to cut anything solid. NASA scientists have developed a three-dimensional computer simulation of a human leg which surgeons can operate on by entering the computer world of virtual reality. Surgeons use the glove and Eyephone technology to create the illusion that they are operating on a leg.
Other virtual reality systems, such as the Autodesk and the CAVE, have also come up with techniques to penetrate a virtual world. The Autodesk uses a simple monitor and is the most basic visual example of virtual reality. One place it could be used is in exercising: Autodesk may be connected to an exercise bike, letting you look around a graphic world as you pedal through it. If you pedal fast enough, your bike takes off and flies.
The CAVE is a new virtual reality interface that engulfs the individual into a room whose walls, ceiling, and floor surround the viewer with virtual space. The illusion is so powerful you won't be able to tell what's real and what's not.
Computer engineers seem fascinated by virtual reality because you can not only program a world, but in a sense, inhabit it.
Mythic space surrounds the cyborg, embracing him/her with images that seem real but are not. The sole purpose of cyberspace virtual reality technology is to trick the human senses, to help people believe and uphold an illusion.
Virtual reality engineers are space makers, to a certain degree they create space for people to play around in. A space maker sets up a world for an audience to act directly within, and not just so the audience can imagine they are experiencing a reality, but so they can experience it directly. "The film maker says, 'Look, I'll show you.' The space maker says, 'Here, I'll help you discover.' However, what will the space maker help us discover?"
"Are virtual reality systems going to serve as supplements to our lives, or will individuals so miserable in their daily existence find an obsessive refuge in a preferred cyberspace? What is going to be included, deleted, reformed, and revised? Will virtual reality systems be used as a means of breaking down cultural, racial, and gender barriers between individuals and thus nurture human values?"
During this century, responsive technologies are moving ever closer to us, becoming the standard interface through which we gain much of our experience. The ultimate result of living in a cybernetic world may be an artificial global city. Instead of a global village, virtual reality may create a global city, the distinction being that the city contains enough people for groups to form affiliations, in which individuals from different cultures meet together in the same space of virtual reality. The city might be laid out as a three-dimensional environment that shapes the way people living in different countries come to communicate and understand other cultures. A special camera, possibly consisting of many video cameras, would capture and transmit every view of the remote locations. Viewers would receive instant feedback as they turn their heads. Any number of people could be looking through the same camera system. Although the example described here will probably take many years to develop, its early evolution has been under way for some time, with the steady march of technology moving from accessing information toward providing experience. Even so, it is probably still premature to imagine the adoption of virtual reality systems on a massive scale, because the starting price to own one is about $300,000.
Virtual reality is now available in games and movies. An example of a virtual reality game is Escape From Castle Wolfenstein. In it, you look through the eyes of an escaped POW in a Nazi death camp. You must walk around in a maze of dungeons where you will eventually fight Hitler. One example of a virtual reality movie is Stephen King's The Lawnmower Man. It is about a mentally disabled man who uses virtual reality as a means of overcoming his handicap and becoming smarter. His quest for power eventually drives him mad, and he enters a computer. From there he is able to control most of the world's computers. The movie ends with us wondering whether he will succeed in world domination.
From all of this we have learned that virtual reality already plays an important part in our world. Eventually, it will let us date, live in other parts of the world without leaving the comfort of our own living rooms, and more. Even though we are quickly becoming a product of the world of virtual reality, we must not lose touch with the world of reality. For reality is the most important part of our lives.
Virtual Reality is a creation of a highly interactive computer based multimedia environment in which the user becomes a participant with the computer in a "virtually real" world
We are living in an era characterized by 3D virtual systems created by computer graphics. In the concept called Virtual Reality (VR), the virtual reality engineer combines computer, video, image-processing, and sensor technologies so that a human can enter into and interact with spaces generated by computer graphics.
In 1969-70, an MIT scientist went to the University of Utah, where he began to work with vector-generated graphics. He built a see-through helmet that used television screens and half-silvered mirrors, so that the environment remained visible through the TV displays. It was not yet designed to provide a surrounding environment. It was not until the mid-'80s that virtual reality systems became more defined. The AMES contract, started in 1985, produced the first glove in February 1986. The glove is made of thin Lycra and is fitted with 15 sensors that monitor finger flexion, extension, hand position, and orientation, and it is connected to a computer through fiber-optic cables. The sensor inputs enable the computer to generate an on-screen image of the hand that follows the operator's hand movements. The glove also has miniature vibrators in the fingertips to provide feedback to the operator from grasped virtual objects. Driven by the proper software, the system thus allows the operator to interact with a simulated room by grabbing and moving a virtual object within it, while experiencing the "feel" of the object.
The virtual reality product line includes the Datasuit and the Eyephone. The Datasuit is an instrumented full-body garment that enables full-body interaction with a computer-constructed virtual world. In one use, this product is worn by film actors to give realistic movement to animated characters in computer-generated special effects. The Eyephone is a head-mounted stereo display that shows a computer-generated virtual world in full color and 3D.
The Eyephone technology is based on an experimental Virtual Interface Environment Workstation (VIEW) design. VIEW is a head-mounted stereoscopic display system with two 3.9-inch television screens, one for each eye. The display can show a computer-generated scene or a real environment relayed by remote video cameras. Sound effects delivered to the headset increase the realism.
The glove and its software were intended for applications such as surgical simulation, or "3D virtual surgery," for medical students. In the summer of 1991, US trainee surgeons were able to practice leg operations without having to cut anything solid. NASA scientists developed a three-dimensional computer simulation of a human leg on which surgeons can operate by entering the computer world of virtual reality. Surgeons use the glove and Eyephone technology to create the illusion that they are operating on a leg.
Other virtual reality systems, such as the Autodesk and the CAVE, have also come up with techniques for entering a virtual world. The Autodesk uses a simple monitor and is the most basic visual example of virtual reality. One place it could be used is while exercising: the Autodesk may be connected to an exercise bike, letting you look around a graphic world as you pedal through it. If you pedal fast enough, your bike takes off and flies.
The CAVE is a new virtual reality interface that immerses the individual in a room whose walls, ceiling, and floor surround the viewer with virtual space. The illusion is so powerful that you won't be able to tell what's real and what's not.
Computer engineers seem fascinated by virtual reality because you can not only program a world, but in a sense, inhabit it.
Mythic space surrounds the cyborg, embracing him/her with images that seem real but are not. The sole purpose of cyberspace virtual reality technology is to trick the human senses, to help people believe and uphold an illusion.
Virtual reality engineers are space makers, to a certain degree they create space for people to play around in. A space maker sets up a world for an audience to act directly within, and not just so the audience can imagine they are experiencing a reality, but so they can experience it directly. "The film maker says, 'Look, I'll show you.' The space maker says, 'Here, I'll help you discover.' However, what will the space maker help us discover?"
"Are virtual reality systems going to serve as supplements to our lives, or will individuals so miserable in their daily existence find an obsessive refuge in a preferred cyberspace? What is going to be included, deleted, reformed, and revised? Will virtual reality systems be used as a means of breaking down cultural, racial, and gender barriers between individuals and thus nurture human values?"
Virtual Reality
Joe Blige
Virtual reality, while still extremely new, has recently become the topic of many opposing viewpoints. It has caught the eye of the general public for several reasons, perhaps mainly because of all the possibilities that virtual reality creates. Note that the possibilities are not predetermined as either good or bad, mainly because there are many different opinions about the future of this developing technology. However, despite the controversy this new technology has aroused, society should not remain skeptical. Virtual reality has the potential, if used correctly, to become a great technological advancement that will aid society in many ways.
In the past, virtual reality has been nothing more than a small step beyond video games. However, it is now apparent that this technology can be used for more practical purposes, including national defense, surgical procedures, and various other applications. Society has not yet fully acknowledged the benefits of virtual reality because it is still under development. The reason virtual reality has remained in development for so long is mainly its complexity: the hardware developed so far is unable to perform the large calculations required by a virtual reality machine. However, as has been apparent in recent years, technology is advancing at an extreme rate. This is another reason why society's hopes for virtual reality should remain, and have remained, unwavering.
In Orenstein's story, she gives the perspective of the average citizen who is obviously uncertain about the uses and effects that virtual reality will have upon society. The show she attended was quick to point out the practicality of virtual reality; however, it still left much to be desired. It seems that Orenstein was disgruntled when she came to an exhibit and the topic of cyber-sex was raised. Perhaps it wasn't just that it came up but how it came up. The idea of a man and a woman being in a virtual world and the man fondling the woman's breasts, although very much possible, probably did not make a great first impression. It gave Orenstein the opportunity to explore the evils that virtual reality makes possible.
After a while, Orenstein realizes that just as the computing age has hackers, the virtual age will have its own high-tech delinquents.
You can't prevent technology from being abused. There will be those who
use VR rudely, stupidly, dangerously--just as they do the telephone or
computer. Like the telephone and the modem, its popular rise will also
eliminate the need for certain fundamental kinds of human contact, even
as it enhances our ability to communicate. (Orenstein 258)
Here she is quick to point out that because virtual reality is such a new technology, it is extremely possible for hackers to have their way with it. Perhaps she is also pointing out that in order for society to accept this new technology, it will have to accept its risks as well.
From the government's perspective, it is easy to see how this technology could prove useful. Supposing the United States got into a war, using virtual reality pilots instead of real pilots would obviously reduce the number of casualties. Pilots would fly their aircraft from a remote location via video and audio equipment in the form of virtual reality. As technology advances over the next several years, it will become easier and easier for pilots to fly planes from a remote location.
However, despite all the lives this may save, there is a downside: it may make the government readier to react violently. Without any loss of lives, the only thing the government has to lose by attacking is the cost of the planes. Keeping this in mind, it is very likely that the US would spend less time negotiating and more time fighting. This is most definitely a negative side effect of virtual reality, because it would weaken the relationships the US has with other countries.
Integrating virtual reality with society is where the majority of problems occur. It is clearly apparent that because this technology is so new, society is unsure how it will fit in. This is also a good example of why people's opinions are so varied. Some people see virtual reality as just another tool that will aid society in several ways. Others see it as dominating society altogether and affecting everyone's lives every day. It obviously has the potential to be both, and it is easy to see why people are so hesitant to decide.
Perhaps another reason for society's lack of optimism is the fear of somehow being removed from actual reality. Ironically, society has long feared that technology will someday take control of its life, perhaps by becoming so advanced that people will no longer be able to tell whether they are in virtual or actual reality. It is clear that technology has definitely affected society in recent years. However, it is quite difficult to predict the role of technology in the future. The potential for technology is certainly there; it just needs to be focused in the right direction.
Technology most definitely has the ability to run out of control. The idea of man creating technology and having it run out of control is something society has been fascinated with for many years; books and movies depicting technology overwhelming society have been created with much of this idea in mind. Perhaps virtual reality will be the technology that man is unable to control and that takes over all of society. If so, society and the people within it would become uncertain whether they were in virtual or actual reality. It must be pointed out, however, that due to the nature and caution of society in general, it is very unlikely that anything like this will ever actually occur. If society is intelligent enough to invent such a technology, it should be able to determine and control its consequences.
Orenstein brings up a good point when she says, "This time, we have the chance to enter the debate about the direction of a revolutionary technology, before that debate has been decided for us" (258). Often in the past, society as a whole has been subject to decisions made by the creators of new technology. In this quote, however, Orenstein points out that with this technology, people should not only try to get involved but make it a priority. She, like many others, sees this technology as having a huge amount of potential. Without the direction and influence of society upon virtual reality, it could go to waste or, even worse, turn into society's enemy of sorts.
Towards the end of the story she tries to depict how virtual reality will have an impact upon society whether it likes it or not.
As I rode down the freeway, I found myself going a little faster than usual,
edging my curves a little sharper, coming a little closer than was really
comfortable to the truck merging in the lane ahead of me. Maybe I was just
tired. It had been a long night. But maybe it just doesn't take the mind that
long to grab onto the new and make it real. Even when you don't want it to.
She depicts that no matter how aware of virtual reality society is, the human brain still has instincts that cannot be controlled. That is one of the drawbacks of virtual reality: no one is sure what to expect. Just as with any other technology, the only way to find out the results of virtual reality is to test the limits.
Knowing that virtual reality has the ability to affect so many people in such a large number of ways, there needs to be some kind of limitation. This brings up another key controversy: who should be in control of limiting this virtual world? If the government is in control, it could likely be abused and mishandled. However, if society as a whole is left to contemplate its uses, the effects could be either good or bad.
Although society knows a lot about virtual reality, there is still so much that it doesn't know. Perhaps in the coming years, new technology will come out and people will learn more about this virtual world. Until that time, however, the questions will remain numerous and in doubt, yet the possibilities are unlimited.
The Power On Self Test
When the system is powered on, the BIOS performs diagnostics and initializes system components, including the video system.
(This is evident when the screen flicks briefly before the video card header is displayed.)
This is commonly referred to as the POST (Power-On Self Test).
Afterwards, the computer proceeds to its final boot-up stage by calling the operating system.
Just before that, the user may interrupt the process to access SETUP.
To allow the user to alter the CMOS settings, the BIOS provides a little program, SETUP.
Usually, setup can be entered by pressing a special key combination (DEL, ESC, CTRL-ESC, or CTRL-ALT-ESC)
at boot time (Some BIOSes allow you to enter setup at any time by pressing CTRL-ALT-ESC).
The AMI BIOS setup is usually entered by pressing the DEL key after resetting (CTRL-ALT-DEL) or powering up the computer.
You can bypass the extended CMOS settings by holding the <INS> key down during boot-up. This is really helpful,
especially if you bend the CMOS settings right out of shape and the computer won't boot properly
anymore. This is also a handy tip for people who play with the older AMI BIOSes with the XCMOS setup,
which allows changes directly to the chip registers with very little technical explanation.
A Typical BIOS POST Sequence
Most BIOS POST sequences proceed through four stages:
1. Display some basic information about the video card, such as its brand, video BIOS version, and available video memory.
2. Display the BIOS version and copyright notice in the upper middle of the screen. You will see a long sequence of numbers at the bottom of the screen; this sequence is the BIOS identification string.
3. Display memory count. You will also hear tick sounds if you have enabled it (see Memory Test Tick Sound section).
4. Once the POST has succeeded and the BIOS is ready to call the operating system (DOS, OS/2, NT, WIN95, etc.), you will see a basic table of the system's configuration:
· Main Processor: The type of CPU identified by the BIOS. Usually Cx386DX, Cx486DX, etc..
· Numeric Processor: Present if you have an FPU, or None otherwise. If you have an FPU and the BIOS does not recognize it, see the Numeric Processor Test section in Advanced CMOS Setup.
· Floppy Drive A: The drive A type. See section Floppy drive A in Standard CMOS Setup to alter this setting.
· Floppy Drive B: The drive B type; same as above.
· Display Type: See section Primary display in Standard CMOS Setup.
· AMI or Award BIOS Date: The revision date of your BIOS. Useful to mention when you have compatibility problems with adaptor cards (notably fancy ones).
· Base Memory Size: The number of KB of base memory. Usually 640.
· Ext. Memory Size: The number of KB of extended memory.
In the majority of cases, the sum of base memory and extended memory does not equal the total system memory.
For instance in a 4096 KB (4MB) system, you will have 640KB of base memory and 3072KB of extended memory, a total of 3712KB.
The missing 384KB is reserved by the BIOS, mainly as shadow memory (see Advanced CMOS Setup).
· Hard Disk C: Type: The master HDD number. See Hard disk C: type section in Standard CMOS Setup.
· Hard Disk D: Type: The slave HDD number. See Hard disk D: type section in Standard CMOS Setup.
· Serial Port(s): The hex numbers of your COM ports. 3F8 and 2F8 for COM1 and COM2.
· Parallel Port(s): The hex numbers of your LPT ports. 378 for LPT1.
· Other information: Right under the table, BIOS usually displays the size of cache memory.
Common sizes are 64KB, 128KB or 256KB. See External Cache Memory section in Advanced CMOS Setup.
AMI BIOS POST Errors
During the POST routines, which are performed each time the system is powered on, errors may occur.
Non-fatal errors are those which, in most cases, allow the system to continue the boot up process.
The error messages normally appear on the screen.
Fatal errors are those which will not allow the system to continue the boot-up procedure.
If a fatal error occurs, you should consult with your system manufacturer or dealer for possible repairs.
These errors are usually communicated through a series of audible beeps.
The numbers on the fatal error list correspond to the number of beeps for the corresponding error.
All errors listed, with the exception of #8, are fatal errors. All errors found by the BIOS will be forwarded to the I/O port 80h.
· 1 beep: DRAM refresh failure. The memory refresh circuitry on the motherboard is faulty.
· 2 beeps: Parity Circuit failure. A parity error was detected in the base memory (first 64k Block) of the system.
· 3 beeps: Base 64K RAM failure. A memory failure occurred within the first 64k of memory.
· 4 beeps: System Timer failure. Timer #1 on the system board has failed to function properly.
· 5 beeps: Processor failure. The CPU on the system board has generated an error.
· 6 beeps: Keyboard Controller 8042-Gate A20 error. The keyboard controller (8042) contains the gate A20 switch which allows the computer to operate in virtual mode.
This error message means that the BIOS is not able to switch the CPU into protected mode.
· 7 beeps: Virtual Mode (processor) Exception error. The CPU on the motherboard has generated an Interrupt Failure exception interrupt.
· 8 beeps: Display Memory Read/Write Test failure. The system video adapter is either missing or its memory is faulty. This is not a fatal error.
· 9 beeps: ROM-BIOS Checksum failure. The ROM checksum value does not match the value encoded in the BIOS. This is a good indication that the BIOS ROMs have gone bad.
· 10 beeps: CMOS Shutdown Register Read/Write Error. The shutdown register for the CMOS memory has failed.
· 11 beeps: Cache Error / External Cache Bad. The external cache is faulty.
Other AMI BIOS POST Codes
· 2 short beeps: POST failed. This is caused by a failure of one of the hardware testing procedures.
· 1 long & 2 short beeps: Video failure. This is caused by one of two possible hardware faults. 1) Video BIOS ROM failure, checksum error encountered. 2) The video adapter installed has a horizontal retrace failure.
· 1 long & 3 short beeps: Video failure. This is caused by one of three possible hardware problems. 1) The video DAC has failed. 2) the monitor detection process has failed. 3) The video RAM has failed.
· 1 long beep: POST successful. This indicates that all hardware tests were completed without encountering errors.
If you have access to a POST Card reader, (Jameco, etc.) you can watch the system perform each test by the value that's displayed.
If/when the system hangs (if there's a problem), the last value displayed will give you a good idea of where and what went wrong, or what's bad on the system board. Of course, having a description of those codes would be helpful,
and different BIOSes assign different meanings to the codes. (Could someone point out FTP sites where we could find a complete list of error codes for different versions of AMI and Award BIOSes?)
BIOS Error Messages
This is a short list of the most frequent on-screen BIOS error messages. Your system may show them in a different manner. When you see any of these, you are in trouble - Doh! (Does anyone have any additions or corrections?)
· "8042 Gate - A20 Error": Gate A20 on the keyboard controller (8042) is not working.
· "Address Line Short!": Error in the address decoding circuitry.
· "Cache Memory Bad, Do Not Enable Cache!": Cache memory is defective.
· "CH-2 Timer Error": There is an error in timer 2. Several systems have two timers.
· "CMOS Battery State Low" : The battery power is getting low. It would be a good idea to replace the battery.
· "CMOS Checksum Failure" : After CMOS RAM values are saved, a checksum value is generated for error checking. The previous value is different from the current value.
· "CMOS System Options Not Set": The values stored in CMOS RAM are either corrupt or nonexistent.
· "CMOS Display Type Mismatch": The video type in CMOS RAM is not the one detected by the BIOS.
· "CMOS Memory Size Mismatch": The physical amount of memory on the motherboard is different than the amount in CMOS RAM.
· "CMOS Time and Date Not Set": Self evident.
· "Diskette Boot Failure": The boot disk in floppy drive A: is corrupted (virus?). Is an operating system present?
· "Display Switch Not Proper": A video switch on the motherboard must be set to either color or monochrome.
· "DMA Error": Error in the DMA (Direct Memory Access) controller.
· "DMA #1 Error": Error in the first DMA channel.
· "DMA #2 Error": Error in the second DMA channel.
· "FDD Controller Failure": The BIOS cannot communicate with the floppy disk drive controller.
· "HDD Controller Failure": The BIOS cannot communicate with the hard disk drive controller.
· "INTR #1 Error": Interrupt channel 1 failed POST.
· "INTR #2 Error": Interrupt channel 2 failed POST.
· "Keyboard Error": There is a timing problem with the keyboard.
· "KB/Interface Error": There is an error in the keyboard connector.
· "Parity Error ????": Parity error in system memory at an unknown address.
· "Memory Parity Error at xxxxx": Memory failed at the xxxxx address.
· "I/O Card Parity Error at xxxxx": An expansion card failed at the xxxxx address.
· "DMA Bus Time-out": A device has used the bus signal for more than allocated time (around 8 microseconds).
If you encounter any POST error, there is a good chance that it is an HARDWARE related problem.
You should at least verify if adaptor cards or other removable components (simms, drams etc...) are properly inserted before calling for help. One common attribute in human nature is to rely on others before investigating the problem yourself.
When the system is powered on, the BIOS will perform diagnostics and initialize system components, including the video system.
(This is self-evident when the screen flickers briefly before the video card header is displayed.)
This is commonly referred to as the POST (Power-On Self Test).
Afterwards, the computer proceeds to its final boot-up stage by loading the operating system.
Just before that, the user may interrupt the process to enter SETUP.
To allow the user to alter the CMOS settings, the BIOS provides a small program, SETUP.
Usually, SETUP can be entered by pressing a special key combination (DEL, ESC, CTRL-ESC, or CTRL-ALT-ESC)
at boot time (some BIOSes allow you to enter SETUP at any time by pressing CTRL-ALT-ESC).
AMI BIOS setup is usually entered by pressing the DEL key after resetting (CTRL-ALT-DEL) or powering up the computer.
You can bypass the extended CMOS settings by holding the <INS> key down during boot-up. This is really helpful,
especially if you bend the CMOS settings right out of shape and the computer won't boot properly
anymore. This is also a handy tip for people who play with the older AMI BIOSes with the XCMOS setup.
It allows changes directly to the chip registers with very little technical explanation.
A Typical BIOS POST Sequence
Most BIOS POST sequences occur in four stages:
1. Display some basic information about the video card like its brand, video BIOS version and video memory available.
2. Display the BIOS version and copyright notice in the upper middle of the screen. You will see a long sequence of numbers at the bottom of the screen; this sequence is the BIOS identification string.
3. Display the memory count. You will also hear tick sounds if you have enabled them (see Memory Test Tick Sound section).
4. Once the POST has succeeded and the BIOS is ready to call the operating system (DOS, OS/2, NT, WIN95, etc.), you will see a basic table of the system's configuration:
· Main Processor: The type of CPU identified by the BIOS. Usually Cx386DX, Cx486DX, etc..
· Numeric Processor: Shows the FPU type if one is present, or None otherwise. If you have an FPU and the BIOS does not recognize it, see the Numeric Processor Test section in Advanced CMOS Setup.
· Floppy Drive A: The drive A type. See section Floppy drive A in Standard CMOS Setup to alter this setting.
· Floppy Drive B: Idem.
· Display Type: See section Primary display in Standard CMOS Setup.
· AMI or Award BIOS Date: The revision date of your BIOS. Useful to mention when you have compatibility problems with adaptor cards (notably fancy ones).
· Base Memory Size: The number of KB of base memory. Usually 640.
· Ext. Memory Size: The number of KB of extended memory.
In most cases, the sum of base memory and extended memory does not equal the total system memory.
For instance in a 4096 KB (4MB) system, you will have 640KB of base memory and 3072KB of extended memory, a total of 3712KB.
The missing 384KB is reserved by the BIOS, mainly as shadow memory (see Advanced CMOS Setup).
· Hard Disk C: Type: The master HDD number. See Hard disk C: type section in Standard CMOS Setup.
· Hard Disk D: Type: The slave HDD number. See Hard disk D: type section in Standard CMOS Setup.
· Serial Port(s): The hex numbers of your COM ports. 3F8 and 2F8 for COM1 and COM2.
· Parallel Port(s): The hex numbers of your LPT ports. 378 for LPT1.
· Other information: Right under the table, BIOS usually displays the size of cache memory.
Common sizes are 64KB, 128KB or 256KB. See External Cache Memory section in Advanced CMOS Setup.
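The base/extended memory arithmetic described above can be sketched in a few lines of Python; the helper name and the fixed 384 KB reserved region are assumptions for illustration:

```python
# Sketch of the BIOS memory summary described above: 640 KB of base
# memory, a 384 KB region reserved by the BIOS (mainly shadow memory),
# and the remainder reported as extended memory.

BASE_KB = 640
RESERVED_KB = 384  # region between 640 KB and 1 MB kept by the BIOS

def memory_summary(total_kb):
    """Split total installed RAM the way the BIOS summary table reports it."""
    extended_kb = total_kb - BASE_KB - RESERVED_KB
    return {"base": BASE_KB, "extended": extended_kb,
            "reported": BASE_KB + extended_kb}

# A 4096 KB (4 MB) system: 640 KB base + 3072 KB extended = 3712 KB reported.
print(memory_summary(4096))
```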
AMI BIOS POST Errors
During the POST routines, which are performed each time the system is powered on, errors may occur.
Non-fatal errors are those which, in most cases, allow the system to continue the boot up process.
The error messages normally appear on the screen.
Fatal errors are those which will not allow the system to continue the boot-up procedure.
If a fatal error occurs, you should consult with your system manufacturer or dealer for possible repairs.
These errors are usually communicated through a series of audible beeps.
The numbers on the fatal error list correspond to the number of beeps for the corresponding error.
All errors listed, with the exception of #8, are fatal errors. All errors found by the BIOS will be forwarded to the I/O port 80h.
· 1 beep: DRAM refresh failure. The memory refresh circuitry on the motherboard is faulty.
· 2 beeps: Parity Circuit failure. A parity error was detected in the base memory (first 64k Block) of the system.
· 3 beeps: Base 64K RAM failure. A memory failure occurred within the first 64k of memory.
· 4 beeps: System Timer failure. Timer #1 on the system board has failed to function properly.
· 5 beeps: Processor failure. The CPU on the system board has generated an error.
· 6 beeps: Keyboard Controller 8042 - Gate A20 error. The keyboard controller (8042) contains the Gate A20 switch, which allows the CPU to address memory above 1 MB.
This error message means that the BIOS is not able to switch the CPU into protected mode.
· 7 beeps: Virtual Mode (processor) Exception error. The CPU on the motherboard has generated an Interrupt Failure exception interrupt.
· 8 beeps: Display Memory Read/Write Test failure. The system video adapter is either missing or its memory is faulty. This is not a fatal error.
· 9 beeps: ROM-BIOS Checksum failure. The ROM checksum value does not match the value encoded in the BIOS. This is a good indication that the BIOS ROM has gone bad.
· 10 beeps: CMOS Shutdown Register Read/Write error. The shutdown register for the CMOS memory has failed.
· 11 beeps: Cache Error / External Cache Bad. The external cache is faulty.
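The fatal beep codes above lend themselves to a simple lookup table. A minimal sketch (the `diagnose` helper is hypothetical, not part of any BIOS tool):

```python
# Lookup table for the AMI beep codes listed above (#8 is the one
# non-fatal exception).
AMI_BEEP_CODES = {
    1: "DRAM refresh failure",
    2: "Parity circuit failure",
    3: "Base 64K RAM failure",
    4: "System timer failure",
    5: "Processor failure",
    6: "Keyboard controller 8042 / Gate A20 error",
    7: "Virtual mode (processor) exception error",
    8: "Display memory R/W test failure (non-fatal)",
    9: "ROM-BIOS checksum failure",
    10: "CMOS shutdown register R/W error",
    11: "Cache error / external cache bad",
}

def diagnose(beeps):
    """Map a beep count heard at power-on to the matching AMI error."""
    return AMI_BEEP_CODES.get(beeps, "Unknown beep pattern")

print(diagnose(3))  # -> Base 64K RAM failure
```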
Other AMI BIOS POST Codes
· 2 short beeps: POST failed. This is caused by a failure of one of the hardware testing procedures.
· 1 long & 2 short beeps: Video failure. This is caused by one of two possible hardware faults. 1) Video BIOS ROM failure, checksum error encountered. 2) The video adapter installed has a horizontal retrace failure.
· 1 long & 3 short beeps: Video failure. This is caused by one of three possible hardware problems. 1) The video DAC has failed. 2) the monitor detection process has failed. 3) The video RAM has failed.
· 1 long beep: POST successful. This indicates that all hardware tests were completed without encountering errors.
If you have access to a POST card reader (from Jameco, etc.), you can watch the system perform each test by the value that is displayed.
If the system hangs, the last value displayed gives you a good idea of where things went wrong, or what is bad on the system board. Of course, having a description of those codes would be helpful,
and different BIOSes assign different meanings to the codes. (Could someone point out FTP sites where we could find a complete list of error codes for the different versions of AMI and Award BIOSes?)
BIOS Error Messages
This is a short list of the most frequent on-screen BIOS error messages. Your system may show them in a slightly different form. When you see any of these, you are in trouble - Doh! (Does anyone have any additions or corrections?)
· "8042 Gate - A20 Error": Gate A20 on the keyboard controller (8042) is not working.
· "Address Line Short!": Error in the address decoding circuitry.
· "Cache Memory Bad, Do Not Enable Cache!": Cache memory is defective.
· "CH-2 Timer Error": There is an error in timer 2. Several systems have two timers.
· "CMOS Battery State Low": The battery power is getting low. It would be a good idea to replace the battery.
· "CMOS Checksum Failure": After CMOS RAM values are saved, a checksum value is generated for error checking. This message means the previously stored checksum differs from the one just computed.
· "CMOS System Options Not Set": The values stored in CMOS RAM are either corrupt or nonexistent.
· "CMOS Display Type Mismatch": The video type in CMOS RAM is not the one detected by the BIOS.
· "CMOS Memory Size Mismatch": The physical amount of memory on the motherboard is different than the amount in CMOS RAM.
· "CMOS Time and Date Not Set": Self evident.
· "Diskette Boot Failure": The boot disk in floppy drive A: is corrupted (virus?). Is an operating system present?
· "Display Switch Not Proper": A video switch on the motherboard must be set to either color or monochrome.
· "DMA Error": Error in the DMA (Direct Memory Access) controller.
· "DMA #1 Error": Error in the first DMA channel.
· "DMA #2 Error": Error in the second DMA channel.
· "FDD Controller Failure": The BIOS cannot communicate with the floppy disk drive controller.
· "HDD Controller Failure": The BIOS cannot communicate with the hard disk drive controller.
· "INTR #1 Error": Interrupt channel 1 failed POST.
· "INTR #2 Error": Interrupt channel 2 failed POST.
· "Keyboard Error": There is a timing problem with the keyboard.
· "KB/Interface Error": There is an error in the keyboard connector.
· "Parity Error ????": Parity error in system memory at an unknown address.
· "Memory Parity Error at xxxxx": Memory failed at the xxxxx address.
· "I/O Card Parity Error at xxxxx": An expansion card failed at the xxxxx address.
· "DMA Bus Time-out": A device has used the bus signal for more than allocated time (around 8 microseconds).
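The "CMOS Checksum Failure" message above can be illustrated with a short sketch. The simple 16-bit additive checksum here is an assumption for illustration only; real BIOSes use their own schemes:

```python
# Toy illustration of the "CMOS Checksum Failure" message: a checksum is
# saved along with the CMOS values, and at boot the BIOS recomputes it
# and compares. A 16-bit additive checksum is assumed for illustration.

def cmos_checksum(values):
    return sum(values) & 0xFFFF

stored = [0x12, 0x34, 0x56, 0x78]   # hypothetical CMOS RAM bytes
saved = cmos_checksum(stored)       # checksum saved when settings were written

stored[1] = 0x00                    # a weak battery corrupts a byte
if cmos_checksum(stored) != saved:
    print("CMOS Checksum Failure")  # what the BIOS would report at boot
```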
If you encounter any POST error, there is a good chance that it is a hardware-related problem.
You should at least verify that adaptor cards and other removable components (SIMMs, DRAMs, etc.) are properly seated before calling for help. One common attribute of human nature is to rely on others before investigating the problem yourself.
The Necessity of Computer Security
When the first electronic computers emerged from university and military laboratories in
the late 1940s and early 1950s, visionaries proclaimed them the harbingers of a second industrial
revolution that would transform business, government and industry. But few laymen, even if they
were aware of the machines, could see the connection. Experts, too, were sceptical. Not only
were computers huge, expensive, one-of-a-kind devices designed for performing abstruse
scientific and military calculations, such as cracking codes and calculating missile trajectories,
they were also extremely difficult to handle.
Now, it is clear that computers are not only here to stay, but they have a profound effect
on society as well. As John McCarthy, Professor of Computer Science at Stanford University,
speculated in 1966: "The computer gives signs of becoming the contemporary counterpart of the
steam engine that brought on the industrial revolution - one that is still gathering momentum and
whose true nature has yet to be seen."
Today's applications of computers are vast. They range from running ordinary household
appliances such as televisions and microwaves, to serving as tools in the workplace through word
processing, spreadsheets, and graphics software, to handling monumental tasks such as being the
heart and soul of the nation's tax processing department and managing the project timetables of
the Space Shuttle. It is obvious that the computer is now and always will be inexorably linked to
our lives, and we have no choice but to accept this technology and learn how to harness its total
potential.
With any progressing technology, an unauthorized application can almost always be found for it.
A computer can be, and has been, used for theft and fraud - for example, as a database and manager
of illegal activities such as drug trafficking and pornography. However, we must not consider only
the harmful applications of the computer, but also take into account the good that it has
done.
As society embraces computer technology, we have to treat it as an extension of
what we already have at hand. This means that some problems we faced before the computer
era may arise again now, in forms where the computer is an accessory to a crime.
One of the problems that society has faced ever since the dawn of civilization is privacy.
The issue of privacy on the Internet has raised many arguments for and against it. The issue
has gotten to the point where the government of the United States has proposed a bill
promoting a single chip to encrypt all private material on the Internet.
Why is privacy so important? Hiding confidential material from intruders does not
necessarily mean that what we keep secret is illegal. Since ancient times, people have trusted
couriers to carry their messages. We seal our messages in an envelope when sending mail through
the postal service. Using computers and encryption programs to transfer electronic messages
securely is no different from sending a letter the old-fashioned way. This paper will examine the
modern methods of encrypting messages and analyse why Phil Zimmerman created an extremely
powerful civilian encipherment program, called PGP, for "Pretty Good Privacy." In
particular, by focusing on cryptography, which was originally intended for military use, this paper
will examine why some conclude that giving civilians a military-grade encryption
program such as the PGP may be dangerous to national security. It will then argue that,
as with any new technology, the application of cryptography for civilian purposes is not
just a right, but also a necessity.
Increasingly in today's era of computer technology, not only banks but also businesses and
government agencies are turning to encryption. Computer security experts consider it the best and
most practical way to protect computer data from unauthorized disclosure, both when transmitted and
when stored on a disk, a tape, or the magnetic strip of a credit card.
Two encryption systems have led the way in the modern era. One is the single-key
system, in which data is both encrypted and decrypted with the same key, a sequence of eight
numbers, each between 0 and 127. The other is a 2-key system; in this approach to cryptography,
a pair of mathematically complementary keys, each containing as many as 200 digits, is used for
encryption and decryption. In contrast with ciphers of earlier generations, where security
depended in part on concealing the algorithm, confidentiality of a computer encrypted message
hinges solely on the secrecy of the keys. Each system is thought to encrypt a message so
inscrutably that the step-by-step mathematical algorithms can be made public without
compromising security.
The single key system, named the Data Encryption Standard - DES for short - was
designed in 1977 as the official method for protecting unclassified computer data in agencies of
the American Federal government. Its evolution began in 1973 when the US National Bureau of
Standards, responding to public concern about the confidentiality of computerized information
outside military and diplomatic channels, invited the submission of data-encryption techniques as
the first step towards an encryption scheme intended for public use.
The method selected by the bureau as the DES was developed by IBM researchers.
During encryption, the DES algorithm divides a message into blocks of eight characters, then
enciphers them one after another. Under control of the key, the letters and numbers of each block
are scrambled no fewer than 16 times, resulting in eight characters of ciphertext.
As good as the DES is, obsolescence will almost certainly overtake it. The life span of
encryption systems tends to be short; the older and more widely used a cipher is, the higher the
potential payoff if it is cracked, and the greater the likelihood that someone has succeeded.
An entirely different approach to encryption, called the 2-key or public-key system,
simplifies the problem of key distribution and management. The approach to cryptography
eliminates the need for subscribers to share keys that must be kept confidential. In a public-key
system, each subscriber has a pair of keys. One of them is the so-called public key, which is freely
available to anyone who wishes to communicate with its owner. The other is a secret key, known
only to its owner. Though either key can be used to encipher or to decipher data encrypted with
its mate, in most instances, the public key is employed for encoding, and the private key for
decoding. Thus, anyone can send a secret message to anyone else by using the addressee's public
key to encrypt its contents. But only the recipient of the message can make sense of it, since only
that person has the private key.
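The public-key exchange described above can be made concrete with a toy RSA example. The primes here are tiny and hopelessly insecure (real keys run to hundreds of digits); they are chosen only so the arithmetic is visible:

```python
# Toy RSA illustration of the public-key scheme described above. The
# primes are far too small for real use; they only make the math visible.

p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent: (e, n) is the public key
d = pow(e, -1, phi)        # private exponent: (d, n) is the private key

message = 65                       # a message encoded as a number below n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)  # only the private-key holder can decrypt
assert recovered == message
```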
One public-key cryptosystem is the PGP, for Pretty Good Privacy. Designed by Phil
Zimmerman, this program is freely distributed so that members of the public can be confident
that whatever communications they exchange are practically unbreakable.
PGP generates a public and private key pair for the user using the RSA technique. The data is
then encrypted and decrypted with the IDEA algorithm - which is similar to the DES, but the
work factor to decode the encrypted message by brute force is much higher than what the DES
could provide. RSA is used only on the keys because RSA
takes a very long time to encrypt an entire document, whereas using RSA on the keys alone takes a
mere fraction of the time.
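This hybrid work-split can be sketched in a few lines: the slow public-key math touches only a short session key, while a fast symmetric cipher handles the bulk of the data. A keyed XOR stream stands in for IDEA here, and the RSA numbers are toy values; both are assumptions for illustration only.

```python
import hashlib

# Hybrid-encryption sketch of the PGP work-split described above: RSA
# wraps only the short session key; a symmetric cipher encrypts the bulk
# data. The XOR keystream below is a toy stand-in for IDEA.

p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def xor_stream(data, key_bytes):
    """Toy symmetric cipher: XOR against a SHA-256-derived keystream."""
    stream = hashlib.sha256(key_bytes).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

document = b"a long document ..." * 200          # the bulk data
session_key = 42                                 # random per message in practice
wrapped_key = pow(session_key, e, n)             # RSA encrypts ONLY the key
body = xor_stream(document, session_key.to_bytes(2, "big"))

# Recipient: unwrap the session key with the private exponent, then decrypt.
unwrapped = pow(wrapped_key, d, n)
assert xor_stream(body, unwrapped.to_bytes(2, "big")) == document
```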
At this time, Zimmerman is being charged by the US government for his effort in
developing the PGP. The government considers encryption a weapon, and it has
established regulations controlling or prohibiting the export of munitions. Since the PGP is a
powerful encryption program, it is considered a powerful weapon and may be
a threat to national security.
On the Internet, it is clear that many people all over the world are against the US
government's effort to limit the PGP's encryption capabilities, and their reason is that the ban
infringes on the people's right to privacy.
The PGP must not be treated only as a weapon, for it has uses far removed
from wartime. One of them is authentication. The two-key cryptosystem is designed with
authentication in mind: using someone's public key to encrypt enables only the owner of the
private key to decrypt the same message. In the real world, we use our own signature to prove
our identity when signing cheques or contracts. There exist retina scanners that check the blood
vessels in our eyes, as well as fingerprint analysis devices. These use our physical characteristics
to prove our identity. A digital signature generated by a public-key cryptosystem is much harder
to counterfeit because of the mathematics of factoring - which is an advantage over conventional
methods of proving our identity.
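The authentication idea above can be sketched as a toy digital signature: the signer applies the private key to a hash of the message, and anyone can check the result with the public key. The RSA numbers are insecure toy values, assumed only for illustration.

```python
import hashlib

# Toy digital-signature sketch of the authentication idea described above:
# sign with the PRIVATE key, verify with the PUBLIC key.

p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def digest(msg):
    """Hash the message and reduce it below the toy modulus."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

message = b"I agree to the contract."
signature = pow(digest(message), d, n)  # only the private key can produce this

# Verification uses only public information: the public key and the message.
assert pow(signature, e, n) == digest(message)
print("signature verified")
```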
Another analogy the PGP has with the real world is the need for security. Banks and
corporations employ a trusted courier - in the form of an armoured truck or a guard - to transfer
sensitive documents or valuables. However, this is expensive for civilian purposes, and the PGP
provides the same or better security when securing civilian information.
While many argue that limiting the PGP's abilities is against the people's right to privacy,
the PGP must also be seen as a necessity as we enter the Information Age. There is currently
little or no practical and inexpensive way to secure digital information for civilians, and the PGP is
an answer to this problem.
Computer privacy must not be treated differently from any other method of keeping
documents private. Rather, we must consider the computer a tool and use it as an extension of
society's evolution. Clearly, the techniques we employ for computer privacy, such as encryption,
secure transfers, and authentication, closely mirror past, non-criminal efforts at privacy.
The government is putting more pressure against the distribution of the PGP outside of the
United States. One of its main reasons is that, since the program is freely distributed, it can be
modified in such a way that even the vast computational resources of the US government cannot
break a PGP-secured message. The government also reasons that the PGP can provide
criminal organizations a means of securely communicating and storing records of their activities, and thus
make law enforcement's job much harder in tracking criminals down and proving them guilty.
Also, we must never forget one of our basic human rights - one that many have laid down their lives
for: freedom. We have the freedom to do anything we wish within the law. The
government is now attempting to pass a bill promoting a single algorithm to encrypt and decrypt
all data that belongs to its citizens. A multitude of people around the world are opposed to this
concept, arguing that it is against their freedom and their privacy.
When the first electronic computers emerged from university and military laboratories in
the late 1940s and early 1950s, visionaries proclaimed them the harbingers of a second industrial
revolution that would transform business, government and industry. But few laymen, even if they
were aware of the machines, could see the connection. Experts too, were sceptical. Not only
were computers huge, expensive, one-of-a-kind devices designed for performing abstruse
scientific and military calculations, such as cracking codes and calculations missile trajectories,
they were also extremely difficult to handle.
Now, it is clear that computers are not only here to stay, but they have a profound effect
on society as well. As John McCarthy, Professor of Computer Science at Stanford University,
speculated in 1966: "The computer gives signs of becoming the contemporary counterpart of the
steam engine that brought on the industrial revolution - one that is still gathering momentum and
whose true nature had yet to be seen."
Today's applications of computers are vast. They are used to run ordinary household
appliances such as televisions and microwaves, to being tools in the workplaces through word
processing, spreadsheets, and graphics software, to running monumental tasks such as being the
heart and soul of the nations tax processing department, and managing the project timetables of
the Space Shuttle. It is obvious that the computer is now and always will be inexorably linked to
our lives, and we have no choice but to accept this technology and learn how to harness its total
potential.
With any progressing technology, an unauthorized application can almost be found for it.
A computer could and has been used for theft and fraud - for example, as a database and manager
of illegal activities such as drug trafficking and pornography. However, we must not just consider
the harmful applications of the computer, but also take into account the good that they have
caused.
When society embraced the computer technology, we have to treat this as an extension of
what we already have at hand. This means that some problems that we had before the computer
era may also arise now, in the form where computers are an accessory to a crime.
One of the problems that society has faced ever since the dawn of civilization is privacy.
The issue of privacy on the Internet has risen many arguments for and against having it. The issue
of privacy has gotten to the point where the government of the United States has placed a bill
promoting a single chip to encrypt all private material on the Internet.
Why is privacy so important? Hiding confidential material from intruders does not
necessarily mean that what we keep secret it illegal. Since ancient times, people have trusted
couriers to carry their messages. We seal out messages in a envelope when sending mail through
the postal service. Using computer and encrypting programs to transfer electronic messages
securely is not different from sending a letter the old-fashioned way. This paper will examine the
modern methods of encrypting messages and analyse why Phil Zimmerman created an extremely
powerful civilian encipherment program, called the PGP, for "Pretty Good Privacy." In
particular, by focusing on cryptography, which was originally intended for military use, this paper
will examine just how easy it is to conclude why giving civilians a military-grade encrypting
program such as the PGP may be dangerous to national security. Therefore, with any type of new
technology, this paper will argue that the application of cryptography for civilian purposes is not
just a right, but is also a necessity.
Increasingly in today's era of computer technology, not only banks but also businesses and
government agencies are turning to encryption. Computer security experts consider it best and
most practical way to protect computer data from unauthorized disclosure when transmitted and
even when stored on a disk, tape, of the magnetic strip of a credit card.
Two encryption systems have led the way in the modern era. One is the single-key
system, in which data is both encrypted and decrypted with the same key, a sequence of eight
numbers, each between 0 and 127. The other is a 2-key system; in this approach to cryptography,
a pair of mathematically complementary keys, each containing as many as 200 digits, are used for
encryptions and decryption. In contrast with ciphers of earlier generations, where security
depended in part on concealing the algorithm, confidentiality of a computer encrypted message
hinges solely on the secrecy of the keys. Each system is thought to encrypt a message so
inscrutably that the step-by-step mathematical algorithms can be made public without
compromising security.
The single key system, named the Data Encryption Standard - DES for short - was
designed in 1977 as the official method for protecting unclassified computer data in agencies of
the American Federal government. Its evolution began in 1973 when the US National Bureau of
Standards, responding to public concern about the confidentiality of computerized information
outside military and diplomatic channels, invited the submission of data-encryption techniques as
the first step towards an encryption scheme intended for public use.
The method selected by the bureau as the DES was developed by IBM researchers.
During encryption, the DES algorithm divides a message into blocks of eight characters, then
enciphers them one after another. Under control of the key, the letters and numbers of each block
are scrambled no fewer than 16 times, resulting in eight characters of ciphertext.
As good as the DES is, obsolescence will almost certainly overtake it. The life span of
encryption systems tends to be short; the older and more widely used a cipher is, the higher the
potential payoff if it is cracked, and the greater the likelihood that someone has succeeded.
An entirely different approach to encryption, called the 2-key or public-key system,
simplifies the problem of key distribution and management. The approach to cryptography
eliminates the need for subscribers to share keys that must be kept confidential. In a public-key
system, each subscriber has a pair of keys. One of them is the so-called public key, which is freely
available to anyone who wishes to communicate with its owner. The other is a secret key, known
only to its owner. Though either key can be used to encipher or to decipher data encrypted with
its mate, in most instances, the public key is employed for encoding, and the private key for
decoding. Thus, anyone can send a secret message to anyone else by using the addressee's public
key to encrypt its contents. But only the recipient of the message can make sense of it, since only
that person has the private key.
A public key cryptosystem is called the PGP, for Pretty Good Privacy. Designed by Phil
Zimmerman, this program is freely distributed for the purpose of giving the public the knowledge
that whatever communications they pass, they can be sure that it is practically unbreakable.
PGP generates a public and private key for the user using the RSA technique. The data is
then encrypted and decrypted with the IDEA algorithm - which is similar to the DES, but the
work factor to decode the encrypted message by brute force is much higher than what the DES
could provide. The reason why the RSA is used only when generating the keys is that the RSA
takes a very long time to encrypt an entire document, where using the RSA on the keys takes a
mere fraction of the time.
At this time, Zimmerman is bing charged by the US government for his effort in
developing the PGP. The government considers encryption as a weapon, and they have
established regulations controlling or prohibiting the export of munitions. Since the PGP is a
powerful encryption program, it is considered and can be used as a powerful weapon and may be
a threat to national security.
On the Internet, it is clear that many people all over the world are against the US
government's effort on limiting the PGP's encryption capabilities, and their reason is that the ban
infringes on the people's right to privacy.
The PGP must not be treated only as a weapon, for it contains analogies that are not used
in wartime. One of them is authentication. The two-key cryptosystem is designed with
authentication in mind: Using someone's public key to encrypt enables only the owner of the
private key to decrypt the same message. In the real world, we use our own signature to prove
out identity in signing cheques or contracts. There exists retina scanners that check the blood
vessels in out eyes, as well as fingerprint analysis devices. These use our physical characteristics
to prove our identity. A digital signature generated by a public key cryptosystem is much harder
to counterfeit because of the mathematics of factoring - which is an advantage over conventional
methods of tests for out identity.
Another analogy the PGP has with the real world is the need for security. Banks and
corporations employ a trusted courier - in the form of an armoured truck or a guard - to transfer
sensitive documents or valuables. However, this is expensive for civilian purposes, and the PGP
provides the same or better security when securing civilian information.
While many argue that limiting the PGP's abilities are against the people's right to privacy,
the PGP must also be seen as a necessity as we enter the Information Age. There is currently
little or no practical and inexpensive way to secure digital information for civilians, and the PGP is
an answer to this problem.
Computer privacy must not be treated differently than any other method to make private
any documents. Rather, we must consider the computer as a tool and use it as an extension of
society's evolution. Clearly the techniques we employ for computer privacy such as encryption,
secure transfers and authentication closely mirrors past efforts at privacy and non-criminal efforts.
The government is putting more pressure on the distribution of PGP outside of the
United States. One of its main reasons is that, since PGP is freely distributed, it can be
modified in such a way that even the vast computational resources of the US government cannot
break a PGP-secured message. The government reasons that PGP can provide criminal
organizations a means of secure communication and storage of records of their activities, making
law enforcement's job of tracking criminals down and proving them guilty much harder.
Also, we must never forget one of our basic human rights, one that many laid down their
lives for: freedom. We have the freedom to do anything we wish within the law. The
government is now attempting to pass a bill promoting a single algorithm to encrypt and decrypt
all data belonging to its citizens. A multitude of people around the world oppose this
concept, arguing that it is against their freedom and their privacy.
The Internet
The Internet is a worldwide connection of thousands of computer networks. All of them speak the same language, TCP/IP, the standard protocol. The Internet allows people with access to these networks to share information and knowledge. Resources available on the Internet are chat groups, e-mail, newsgroups, file transfers, and the World Wide Web. The Internet has no centralized authority and it is uncensored. The Internet belongs to
everyone and to no one.
The Internet is structured in a hierarchy. At the top, each country has at least one
public backbone network. Backbone networks are made of high speed lines that connect to other backbones. There are thousands of service providers and networks that connect
home or college users to the backbone networks. Today, there are more than
fifty-thousand networks in more than one-hundred countries worldwide. However, it all
started with one network.
In the early 1960's the Cold War was escalating and the United States Government was faced with a problem. How could the country communicate after a nuclear war? The Pentagon's Advanced Research Projects Agency, ARPA, had a solution. They would create a non-centralized network that linked from city to city, and base to base. The network was designed to function when parts of it were destroyed. The network could not have a center because it would be a primary target for enemies. In 1969, ARPANET was created, named after its original Pentagon sponsor. There were four supercomputer stations, called nodes, on this high speed network.
ARPANET grew during the 1970's as more and more supercomputer stations were
added. The users of ARPANET had changed the high-speed network into an electronic post
office. Scientists and researchers used ARPANET to collaborate on projects and to trade
notes. Eventually, people used ARPANET for leisure activities such as chatting. Soon
after, the mailing list was developed. Mailing lists were discussion groups of people who
would send their messages via e-mail to a group address, and also receive messages. This
could be done twenty-four hours a day.
As ARPANET became larger, a more sophisticated and standard protocol was needed. The protocol would have to link users from other small networks to ARPANET, the main network. The standard protocol invented in 1977 was called TCP/IP. Because of TCP/IP, connecting to ARPANET by any other network was made possible. In 1983, the military portion of ARPANET broke off and formed MILNET. The same year, TCP/IP was made a standard and it was being used by everyone. It linked all parts of the branching complex networks, which soon came to be called the Internet.
In 1985, the National Science Foundation (NSF) began a program to establish Internet access centered on its six powerful supercomputer stations across the United States. They created a backbone called NSFNET to connect college campuses via regional networks to its supercomputer centers. ARPANET officially expired in 1989. Most of the networks were gained by NSFNET. The others became parts of smaller networks. The Defense Communications Agency shut down ARPANET because its functions had been taken over by NSFNET. Amazingly, when ARPANET was turned off in June of 1990, no one except the network staff noticed.
In the early 1990's the Internet experienced explosive growth. It was estimated that the
number of computers connected to the Internet was doubling every year. It was also
estimated that at this rapid rate of growth, everyone would have an e-mail address by the
year 2020. The main cause of this growth was the creation of the World Wide Web. The World Wide Web was created at CERN, a physics laboratory in Geneva,
Switzerland. The Web's development was based on a protocol for transmitting web pages over the
Internet, called the Hypertext Transfer Protocol, or HTTP. The Web is an interactive system for
the dissemination and retrieval of information through web pages. The pages may consist
of text, pictures, sound, music, voice, animations, and video. Web pages can link to other
web pages by hypertext links. When there is hypertext on a page, the user can simply click
on the link and be taken to the new page. Previously, the Internet was black and white,
text, and files. The web added color. Web pages can provide entertainment, information, or commercial advertisement. The World Wide Web is the fastest growing Internet resource.
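The exchange HTTP defines when a page is requested can be sketched as plain text. This Python fragment only builds the request a browser would send; example.com is a stand-in host, and no connection is actually made:

```python
# What a browser sends when you follow a link (HTTP/1.0 style; hypothetical host).
host = "example.com"
path = "/index.html"
request = (
    f"GET {path} HTTP/1.0\r\n"   # method, file wanted, protocol version
    f"Host: {host}\r\n"          # which server the request is for
    "\r\n"                       # a blank line ends the request headers
)
print(request)
# The server replies with a status line, headers, then the page itself, e.g.:
#   HTTP/1.0 200 OK
#   Content-Type: text/html
#   ...
```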
The Internet has dramatically changed from its original purpose. It was
formed by the United States government for exclusive use of government officials and the
military to communicate after a nuclear war. Today, the Internet is used globally for a
variety of purposes. People can send their friends an electronic "hello." They can
download a recipe for a new type of lasagna. They can argue about politics on-line, and
even shop and bank electronically in their homes. The number of people signing on-line is
still increasing and the end is not in sight. As we approach the 21st century, we are
experiencing a great transformation due to the Internet and the World Wide Web. We are
breaking through the restrictions of the printed page and the boundaries of nations and
cultures.
You may not be aware of it, but the World Wide Web is currently transforming the world as we know it. You've probably heard a lot about the Internet and the World Wide Web, but you may not know what these terms mean and may be intimidated by this rapidly advancing field of science. If there is one aspect of this field that is advancing faster than any other, it is the ease with which this technology can be learned.
The Internet, by definition, is a "network of networks." That is, it is a world-wide network that links many smaller networks. The World Wide Web is a new subdivision of the Internet. The World Wide Web consists of computers (servers) all over the world that store information in a textual as well as a multimedia format. Each of these servers has a specific Internet address which allows users to easily locate information. Files stored on a server can be accessed in two ways. The first is simply by clicking on a link in a Web document (better known as a Web page) that points to the address of another document. The second way to locate a particular Web page is by typing the Uniform Resource Locator (URL) of the page in your browser (the software interface used to navigate the World Wide Web). The URL of a page is the string of characters that appears in the Location: box at the top of your screen. Every Web page has a unique URL which begins with the letters "http://" that identify it as a Web page. This is the equivalent of the Internet address and tells the computer where to find the particular page you are looking for.
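The pieces of a URL just described can be pulled apart programmatically. A small Python sketch using the standard urllib module (the address shown is a made-up example):

```python
from urllib.parse import urlparse

# Splitting a hypothetical URL into the parts described above.
url = "http://www.example.edu/admissions/apply.html"
parts = urlparse(url)

print(parts.scheme)   # → http  (identifies it as a Web page)
print(parts.netloc)   # → www.example.edu  (which server to contact)
print(parts.path)     # → /admissions/apply.html  (which file on that server)
```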
The greatest advantage of producing information in HTML format, is that files may be linked to one another via hyperlinks (or links) within the documents. Links usually appear in a different color than the rest of the text on a Web page and are often underlined. Navigating the Web is as simple as clicking a mouse button. Clicking the mouse on a link tells the computer to go to another Internet location and display a specific file. Also, most Web browsers allow easy navigation of the Web by utilizing "Back" and "Forward" buttons that can trace your path around the Web. Links within Web pages aren't limited to just other Web pages. They can include any type of file at all. Some of the more common types of files found on the Web are graphics files, sound files, and files containing movie clips. These files can be run by different helper applications that the Web browser associates with files of that type.
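How a browser discovers the links in a page can be sketched in a few lines of Python with the standard html.parser module (the page snippet and address below are hypothetical):

```python
from html.parser import HTMLParser

# A minimal link finder: scan a page for <a href="..."> tags, as a browser does.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                      # an anchor tag marks a hyperlink
            for name, value in attrs:
                if name == "href":          # href holds the destination address
                    self.links.append(value)

page = '<p>See the <a href="http://example.org/next.html">next page</a>.</p>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)   # → ['http://example.org/next.html']
```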
As a student, the Web can provide you with an enormous source of information pertaining to any area of academic interest. This can be especially useful when information is needed to write a term paper. Students can use one of the many Search Engines on the Web to locate information on virtually any topic, just by typing the topic that they wish to find information on.
Another application many students find the World Wide Web useful for is career planning. There are hundreds of Web sites that contain information about job openings in every field, all over the United States as well as abroad. Job openings can be found listed either by profession or by geographical location, so students don't have to waste time looking through job listings that don't pertain to their area of interest or location of preference. And if students fail to find job openings they are interested in, they can post their resumes to employment service Web sites, which try to match employers with those seeking employment.
The Web can also be a useful place for high school students applying to college or college graduates who wish to delay their job hunt by going to graduate school. Many colleges and universities around the world are getting on the Internet to provide their students with access to the enormous amount of information available on it. This allows students the opportunity to browse Web servers at different colleges where they can find information useful in selecting the institution most appropriate to their academic needs.
While the World Wide Web can provide information crucial to your academic and professional career, the information contained on it is not limited to such serious matters. The Web can also provide some entertaining diversions from academics. You can spend hours on the Internet and it only feels like a couple of minutes. A recent topic I have personally been looking into is three-dimensional chat rooms. In this type of chat room you virtually walk around, approach other people, and attempt to have a conversation with them. Unfortunately, not everyone is as responsive as you would like them to be. As an avid user of the Internet, I highly recommend that everyone look into "Worlds Chat".
As the 21st century approaches, it seems inevitable that computer and telecommunications technology will radically transform our world in the years to come. The Internet and the World Wide Web, in particular, appear to be the technologies that will lead us into the Information Age. The social and political implications of this new technology are astounding. Never before has such an enormous amount of information been available to a limitless number of people. Already, issues of censorship and free speech have come to take center stage, as the world scrambles to deal with the power of modern technology.
The World Wide Web has already affected our educational, political, and commercial sectors, and it now seems poised to affect every other aspect of human life. The days where every home will have a computer are not far from the present. In order to keep up with the technology of the future, you need to catch up with the technology of the present. The easiest way to do this is to simply wander around the World Wide Web. It's as easy as clicking a mouse. So sit back and explore the World Wide Web at your own pace, and don't let yourself get left behind when the next technological breakthrough comes along.
The Evolution of the Microprocessor
The microprocessor has changed a lot over the years, says Michael W. Davidson
(http://micro.magnet.fsu.edu/chipshot.html). Microprocessor technology is progressing so
rapidly that even experts in the field are having trouble keeping up with current advances.
As more competition develops in this $150 billion a year business, the power and speed of
the microprocessor are expanding at an almost explosive rate. The changes have been most
evident over the last decade. The microprocessor has changed the way computers work by
making them faster. The microprocessor is often called the brain of the computer; it is the
central processing unit (CPU), and without it the computer is more or less useless.
Motorola and Intel have invented most of the microprocessors of the last decade. Over the
years there has been a constant battle over cutting-edge technology. In the 80's Motorola
won the battle, but now in the 90's it looks as if Intel has won the war.
The 68000 is the original microprocessor in this line (Encarta 95). It was invented by
Motorola in the early 80's. The 68000 had two very distinctive qualities: 24-bit physical
addressing and a 16-bit data bus. The original Apple Macintosh, released in 1984, had an
8-MHz 68000 at its core. It was also found in the Macintosh Plus, the original Macintosh
SE, the Apple LaserWriter IISC, and Hewlett-Packard's LaserJet printer family. The 68000
was very efficient for its time; for example, it could address 16 megabytes of memory,
sixteen times the memory of the Intel 8088 found in the IBM PC. The 68000 also has a
linear addressing architecture, which was better than the 8088's segmented memory
architecture because it made writing large applications more straightforward.
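The difference between the two addressing styles can be seen with a little arithmetic. This Python sketch shows how an 8086/8088 segment:offset pair maps to a physical address, so the same byte can be named many ways, where linear addressing gives each location a single address:

```python
# On the 8086/8088, a physical address is segment * 16 + offset.
def physical(segment, offset):
    return (segment << 4) + offset   # shifting left 4 bits multiplies by 16

print(hex(physical(0x1234, 0x0010)))  # → 0x12350
print(hex(physical(0x1235, 0x0000)))  # → 0x12350  (same byte, different segment:offset)
# The 68000's linear scheme names each of its 16 MB of locations exactly once.
```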
The 68020 was invented by Motorola in the mid-80's (Encarta 95). The 68020 is about
twice as powerful as the 68000. It has 32-bit addressing and a 32-bit data bus, and it is
available at various speeds: 16 MHz, 20 MHz, 25 MHz, and 33 MHz. The 68020 is found
in the original Macintosh II and in the LaserWriter IINT, both from Apple.
The 68030 microprocessor was invented by Motorola about a year after the 68020 was
released (Encarta 95). The 68030 has 32-bit addressing and a 32-bit data bus just like its
predecessor, but it has paged memory management built in, removing the need for
additional chips to provide that function. A 16-MHz version was used in the Macintosh IIx,
IIcx, and SE/30. A 25-MHz model was used in the Mac IIci and the NeXT computer. The
68030 is produced in various versions: 20 MHz, 33 MHz, 40 MHz, and 50 MHz.
The 68040 was also invented by Motorola (Encarta 95). The 68040 has 32-bit addressing
and a 32-bit data bus just like the previous two microprocessors. Unlike them, however, it
runs at 25 MHz and includes a built-in floating-point unit and memory management units,
which include 4-KB instruction and data caches; this eliminates the need for additional
chips to provide these functions. The 68040 is also capable of parallel instruction execution
by means of multiple independent instruction pipelines, multiple internal buses, and
separate caches for data and instructions.
The 68881 was invented by Motorola for use with both the 68000 and the 68020
(Encarta 95). Math coprocessors, if supported by the application software, speed up any
function that is math-based. The 68881 does this by providing an additional set of
instructions for high-performance floating-point arithmetic, a set of floating-point data
registers, and 22 built-in constants, including pi and powers of 10. The 68881 conforms to
the ANSI/IEEE 754-1985 standard for binary floating-point arithmetic. When making the
Macintosh II, Apple noticed that adding a 68881 improved the apparent performance of the
interface so dramatically that it decided to include the chip as standard equipment.
The 80286, also called the 286, was invented by Intel in 1982 (Encarta 95). The 286 was
included in the IBM PC/AT and compatible computers in 1984. The 286 has 16-bit
registers, transfers information over the data bus 16 bits at a time, and uses 24 bits to
address memory locations. The 286 is able to operate in two modes: real mode (which is
compatible with MS-DOS and mimics the 8086 and 8088 chips) and protected mode
(which increases the microprocessor's functionality). Real mode limits the amount of
memory the microprocessor can address to one megabyte; in protected mode, however, the
addressing range is increased, and the chip is capable of accessing up to 16 megabytes of
memory directly. Also, a 286 in protected mode protects the operating system from
misbehaved applications that could normally halt (or "crash") a system running on a
non-protected microprocessor, such as the 80286 in real mode or the plain old 8088.
The 80386DX, also called the 386 or the 386DX, was invented by Intel in 1985
(Encarta 95). The 386 was used in IBM and compatible microcomputers such as the
PS/2 Model 80. The 386 is a full 32-bit microprocessor, meaning that it has 32-bit
registers, can transfer information over its data bus 32 bits at a time, and can use 32 bits
in addressing memory. Like the earlier 80286, the 386 operates in two modes: real mode
(which is compatible with MS-DOS and mimics the 8086 and 8088 chips) and protected
mode (which increases the microprocessor's functionality and protects the operating
system from halting because of an inadvertent application error). Real mode limits the
amount of memory the microprocessor can address to one megabyte; in protected mode,
however, the total amount of memory the 386 can address directly is 4 gigabytes, roughly
4 billion bytes. The 80386DX also has a virtual mode, which allows the operating system
to effectively divide the 80386DX into several 8086 microprocessors, each having its own
1-megabyte space, allowing each "8086" to run its own program.
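The memory limits quoted for these chips follow directly from the number of address bits, as a quick Python check shows:

```python
# Bytes reachable with a given number of address lines: 2 ** bits.
def addressable(bits):
    return 2 ** bits

print(addressable(20) // 2**20, "MB")  # → 1 MB   (real mode, 20-bit addresses)
print(addressable(24) // 2**20, "MB")  # → 16 MB  (80286 protected mode, 24-bit)
print(addressable(32) // 2**30, "GB")  # → 4 GB   (80386 protected mode, 32-bit)
```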
The 80386SX, also called the 386SX, was invented by Intel in 1988 as a low-cost
alternative to the 80386DX (Encarta 95). The 80386SX is in essence an 80386DX
processor limited by a 16-bit data bus. The 16-bit design allows 80386SX systems to be
configured from less expensive AT-class parts, ensuring a much lower complete system
price. The 80386SX offers enhanced performance over the 80286 and access to software
designed for the 80386DX. It also offers 80386DX comforts such as multitasking and
virtual 8086 mode.
The 80387SX, also called the 387SX, was invented by Intel (Encarta 95). It is a math, or
floating-point, coprocessor for use with the 80386SX family of microprocessors. Available
in a 16-MHz version only, the 80387SX, if supported by the application software, can
dramatically improve system performance by offering arithmetic, trigonometric,
exponential, and logarithmic instructions for the application to use, instructions not offered
in the 80386SX instruction set. The 80387SX offers built-in operations for sine, cosine,
tangent, arctangent, and logarithm calculations. If used, these additional instructions are
carried out by the 80387SX, freeing the 80386SX to perform other tasks. The 80387SX is
capable of working with 32- and 64-bit integers; 32-, 64-, and 80-bit floating-point
numbers; and 18-digit BCD (binary-coded decimal) operands; it conforms to the
ANSI/IEEE 754-1985 standard for binary floating-point arithmetic. The 80387SX operates
independently of the 80386SX's mode, and it performs as expected regardless of whether
the 80386SX is running in real, protected, or virtual 8086 mode.
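The 32- and 64-bit floating-point formats the 80387SX works with are the same IEEE 754 single and double precision encodings that Python's standard struct module can produce, as this small sketch shows:

```python
import struct

# Pack 1.5 into IEEE 754 single (32-bit) and double (64-bit) precision, big-endian.
single = struct.pack(">f", 1.5)   # 4 bytes
double = struct.pack(">d", 1.5)   # 8 bytes

print(len(single), len(double))   # → 4 8
# 1.5 in single precision: sign 0, exponent 127, fraction 0.5 -> bytes 3f c0 00 00
print(single.hex())               # → 3fc00000
```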
The i486, also called the 80486 or the 486, was invented in 1989 by Intel (Encarta 95).
Like its 80386 predecessor, the 486 is a full 32-bit processor with 32-bit registers, a 32-bit
data bus, and 32-bit addressing. It includes several enhancements, however, including a
built-in cache controller, the built-in equivalent of an 80387 floating-point coprocessor,
and provisions for multiprocessing. In addition, the 486 uses a "pipeline" execution
scheme that breaks instructions into multiple stages, resulting in much higher performance
for many common data and integer math operations.
In conclusion, it is evident from the foregoing that microprocessors are developing by
leaps and bounds, and it would not be surprising if, by the time this paper hits the teacher's
desk or by the time you read this, the next superchip has been developed (Encarta 95).
The microprocessor has changed a lot over the years, says Michael W. Davidson (http://micro.magnet.fsu.edu/chipshot.html). Microprocessor technology is progressing so rapidly that even experts in the field have trouble keeping up with current advances. As more competition develops in this $150-billion-a-year business, the power and speed of the microprocessor are expanding at an almost explosive rate. The changes have been most evident over the last decade. The microprocessor has changed the way computers work by making them faster. Often called the brain of the computer, the microprocessor is the central processing unit (CPU), and without it the computer is more or less useless. Motorola and Intel have designed most of the microprocessors of the last decade, and over the years there has been a constant battle over cutting-edge technology. In the 80's Motorola won the battles, but now in the 90's it looks as if Intel has won the war.
The 68000 is the original microprocessor of Motorola's 68000 family, introduced in 1979 (Encarta 95). The 68000 had two distinguishing features: 24-bit physical addressing and a 16-bit data bus. The original Apple Macintosh, released in 1984, had an 8-MHz 68000 at its core, and the chip was also found in the Macintosh Plus, the original Macintosh SE, the Apple LaserWriter IISC, and Hewlett-Packard's LaserJet printer family. The 68000 was very efficient for its time; for example, it could address 16 megabytes of memory, 16 times the memory addressable by the Intel 8088 found in the IBM PC. The 68000 also had a linear addressing architecture, which was better than the 8088's segmented memory architecture because it made building large applications more straightforward.
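The memory figures above follow directly from the width of the address bus: each extra address line doubles the addressable range. A quick sketch of that arithmetic (the chips and widths are the ones named in the text):

```python
def addressable_bytes(address_bits):
    # Each address line doubles the range: 2**bits distinct byte addresses.
    return 2 ** address_bits

MB = 2 ** 20  # one megabyte

print(addressable_bytes(20) // MB)  # Intel 8088: 20 address bits -> 1 MB
print(addressable_bytes(24) // MB)  # Motorola 68000: 24 bits -> 16 MB
```

This is why the 68000's 24 address bits give exactly 16 times the reach of the 8088's 20.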
The 68020 was introduced by Motorola in the mid-80's (Encarta 95). The 68020 is roughly twice as powerful as the 68000; it has 32-bit addressing and a 32-bit data bus and is available in several speeds: 16 MHz, 20 MHz, 25 MHz, and 33 MHz. The 68020 is found in the original Macintosh II and in the LaserWriter IINT, both of which are from Apple.
The 68030 was introduced by Motorola about a year after the 68020 was released (Encarta 95). The 68030 has 32-bit addressing and a 32-bit data bus just like its predecessor, but it has paged memory management built in, eliminating the need for additional chips to provide that function. A 16-MHz version was used in the Macintosh IIx, IIcx, and SE/30, and a 25-MHz model was used in the Mac IIci and the NeXT computer. The 68030 was also produced in 20-, 33-, 40-, and 50-MHz versions.
The 68040 was also introduced by Motorola (Encarta 95). It has 32-bit addressing and a 32-bit data bus just like the previous two microprocessors, but unlike them it runs at 25 MHz and includes a built-in floating-point unit and memory management units with 4-KB instruction and data caches, eliminating the need for additional chips to provide these functions. The 68040 is also capable of parallel instruction execution by means of multiple independent instruction pipelines, multiple internal buses, and separate caches for data and instructions.
The 68881 was developed by Motorola for use with both the 68000 and the 68020 (Encarta 95). A math coprocessor, if supported by the application software, speeds up any function that is math-based. The 68881 does this by providing an additional set of instructions for high-performance floating-point arithmetic, a set of floating-point data registers, and 22 built-in constants, including π and powers of 10. The 68881 conforms to the ANSI/IEEE 754-1985 standard for binary floating-point arithmetic. While designing the Macintosh II, Apple noticed that adding a 68881 dramatically improved the apparent performance of the interface, and so decided to include it as standard equipment.
The 80286, also called the 286, was introduced by Intel in 1982 (Encarta 95) and was included in the IBM PC/AT and compatible computers in 1984. The 286 has 16-bit registers, transfers information over the data bus 16 bits at a time, and uses 24 bits to address memory locations. The 286 can operate in two modes: real mode, which is compatible with MS-DOS and behaves like the 8086 and 8088 chips, and protected mode, which increases the microprocessor's functionality. Real mode limits the amount of memory the microprocessor can address to one megabyte; in protected mode, however, the addressing range is increased, and the chip can access up to 16 megabytes of memory directly. Also, a 286 in protected mode protects the operating system from misbehaved applications that could normally halt (or "crash") a system running on a non-protected microprocessor, such as the 80286 in real mode or the plain old 8088.
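Real mode's one-megabyte ceiling comes from the 8086-style segmented scheme it inherits: a 16-bit segment value is shifted left four bits and added to a 16-bit offset, producing a 20-bit physical address. A minimal sketch of that calculation:

```python
def real_mode_address(segment, offset):
    # physical address = segment * 16 + offset, truncated to 20 bits
    return ((segment << 4) + offset) & 0xFFFFF

# 0xFFFF:0x000F reaches the very top of the 20-bit space,
# which is why real mode tops out at one megabyte.
print(hex(real_mode_address(0xFFFF, 0x000F)))  # -> 0xfffff
print(hex(real_mode_address(0x1234, 0x0010)))  # -> 0x12350
```

Protected mode replaces this shift-and-add scheme with descriptor-based addressing, which is where the larger 16-megabyte range comes from.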
The 80386DX, also called the 386 or the 386DX, was introduced by Intel in 1985 (Encarta 95) and was used in IBM and compatible microcomputers such as the PS/2 Model 80. The 386 is a full 32-bit microprocessor, meaning that it has 32-bit registers, can transfer information over its data bus 32 bits at a time, and can use 32 bits to address memory. Like the earlier 80286, the 386 operates in two modes: real mode, which is compatible with MS-DOS and behaves like the 8086 and 8088 chips, and protected mode, which increases the microprocessor's functionality and protects the operating system from halting because of an inadvertent application error. Real mode limits the amount of memory the microprocessor can address to one megabyte; in protected mode, however, the total amount of memory that the 386 can address directly is 4 gigabytes, roughly 4 billion bytes. The 80386DX also has a virtual mode, which allows the operating system to effectively divide the 80386DX into several 8086 microprocessors, each with its own 1-megabyte address space, allowing each "8086" to run its own program.
The 80386SX, also called the 386SX, was introduced by Intel in 1988 as a low-cost alternative to the 80386DX (Encarta 95). The 80386SX is in essence an 80386DX processor limited by a 16-bit data bus. The 16-bit design allows 80386SX systems to be configured from less expensive AT-class parts, ensuring a much lower complete system price. The 80386SX offers better performance than the 80286 and access to software designed for the 80386DX, along with 80386DX comforts such as multitasking and virtual 8086 mode.
The 80387SX, also called the 387SX, is a math (floating-point) coprocessor from Intel for use with the 80386SX family of microprocessors (Encarta 95). The 387SX is available in a 16-MHz version only. If supported by the application software, the 80387SX can dramatically improve system performance by offering arithmetic, trigonometric, exponential, and logarithmic instructions for the application to use, instructions not offered in the 80386SX instruction set. The 80387SX performs sine, cosine, tangent, arctangent, and logarithm calculations directly. When used, these additional instructions are carried out by the 80387SX, freeing the 80386SX to perform other tasks. The 80387SX is capable of working with 32- and 64-bit integers; 32-, 64-, and 80-bit floating-point numbers; and 18-digit BCD (binary coded decimal) operands, and it conforms to the ANSI/IEEE 754-1985 standard for binary floating-point arithmetic. The 80387SX operates independently of the 80386SX's mode, and it performs as expected whether the 80386SX is running in real, protected, or virtual 8086 mode.
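The 32- and 64-bit floating-point formats mentioned here are the IEEE 754 single and double formats, and the precision gap between them is easy to demonstrate by round-tripping a value through each width (Python's struct module can pack both; Python's own float is a 64-bit double):

```python
import struct

value = 1 / 3

# Round-trip through a 32-bit single: some precision is lost.
single = struct.unpack("f", struct.pack("f", value))[0]
# Round-trip through a 64-bit double: lossless for a Python float.
double = struct.unpack("d", struct.pack("d", value))[0]

print(abs(single - value) > abs(double - value))  # True: single loses more
```

The 80-bit extended format the 387SX also supports widens the exponent and significand further still, which is why intermediate x87 results are more accurate than either storage format.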
The i486, also called the 80486 or the 486, was introduced by Intel in 1989 (Encarta 95). Like its 80386 predecessor, the 486 is a full 32-bit processor with 32-bit registers, a 32-bit data bus, and 32-bit addressing. It includes several enhancements, however, including a built-in cache controller, the built-in equivalent of an 80387 floating-point coprocessor, and provisions for multiprocessing. In addition, the 486 uses a "pipeline" execution scheme that breaks instructions into multiple stages, resulting in much higher performance for many common data and integer math operations.
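The payoff of pipelining can be shown with a back-of-the-envelope timing model (the stage count and instruction count below are made-up illustration values, not 486 specifics): without a pipeline, each instruction occupies the processor for all of its stages; with one, a new instruction finishes every cycle once the pipeline is full.

```python
def cycles_unpipelined(instructions, stages):
    # Each instruction runs all stages before the next one starts.
    return instructions * stages

def cycles_pipelined(instructions, stages):
    # The first instruction takes `stages` cycles to fill the pipe;
    # after that, one instruction completes per cycle.
    return stages + (instructions - 1)

n, s = 1000, 5  # hypothetical workload and pipeline depth
print(cycles_unpipelined(n, s))  # 5000
print(cycles_pipelined(n, s))    # 1004
```

For long instruction streams the speedup approaches the number of stages, which is why overlapped execution was such a large win for the 486's common integer operations.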
In conclusion, it is evident from the foregoing that microprocessors are developing by leaps and bounds, and it would not be surprising if, by the time this paper hits the teacher's desk or by the time you read this, the next superchip has already been developed (Encarta 95).
The Dependability of the Web
by
Nathan Redman
A new age is upon us - the computer age. More specifically, the internet. There's no doubt that the internet has changed how fast we communicate and the way we get information. The popularity of the net is rising at a considerable rate: traffic, the number of people on the web, is four times what it was a year ago, and every thirty seconds someone new logs on to the net to experience for themselves what everyone is talking about. Even Bill Gates, founder of Microsoft, a company that really looks to the future in order to stay ahead of the competition, said the internet is the one part of the computer business that he underestimated. As one of the richest men in the world, he doesn't make too many mistakes. Nobody could have predicted that the internet would expand at this rate, and that's probably the reason why doubts are arising about the dependability of the web.
As usage soars (some estimate there will be over eighty million users by the end of 1997), could the internet overload? Even though no one predicted the popularity of the net, some are quick to foresee the downfall or doomsday of this fad - if you can call it a fad. The demand continues to rise and is now so great that technological improvements are continually needed to handle the burden created by all of the people using the net.
There are many things that could lighten the load that has been slowing down the internet. First, the net needs much better organization, because with over seventy-five million web pages, and the number rising fast, pinpointing the information you are trying to find is very difficult - like finding a needle in a haystack. When you use a search engine to find information on a certain topic, the search can come back with thousands upon thousands of pages or sites, sometimes over fifty thousand. Then you have the never-ending task of looking through each site to find what you want. Plus, with links on most pages, people can get lost fast and not be able to find their way back. Search engines should develop what I call the filter-down effect: a broad topic is chosen, and then sub-topics are chosen repeatedly until the list of sites has narrowed the search to respectable sites and the information needed. Better organization would keep some of the worthless sites around cyberspace from ending up first in a search engine's results.
A second way to lighten the load on the internet is to improve the time it takes to load a page. The speed of loading a page depends greatly on the type of equipment, the software, how much memory you have, your connection speed, plus a lot of other factors, but web page designers should use more text and fewer graphics, animations, and videos, because text takes far less time to load. According to an October 1996 survey by AT&T, sixty-seven percent of sites were "off the net" for at least one hour per day because of overload. The web wasn't designed to handle graphics, animation, audio, and video; it was first developed for e-mail (electronic mail) and transferring text files. But web page designers want their pages to look their best so that people - or, in the case of business, potential customers - visit their site, are impressed by it, come back in the future, and tell others about it. After all, the best way to get people to visit your web site is word of mouth, because it is very hard for people to find a site unless they happen to know the exact address. Sometimes, though, popularity can kill a site. For instance, when the weather got bad in Minnesota one weekend, the National Weather Service web site got overloaded with people wanting to read and see the current forecasts. The result was the server going down from the overload, leaving nobody able to visit the site. With more businesses seeing the dollar signs that the web could produce, they compete for advertising on the web because it is much cheaper than advertising on television or in the newspaper, and it can reach people all the way around the world. Designers of these pages can't forget that surfers aren't willing to wait very long for pages to load, so simplifying pages can make the web a lot faster.
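The text-versus-graphics point above comes down to simple transfer-time arithmetic. A rough sketch (the page sizes and the 28.8 kbps modem speed are illustrative assumptions, not figures from the AT&T survey):

```python
def seconds_to_download(size_bytes, modem_bits_per_second=28_800):
    # 8 bits per byte, divided by the line's bit rate.
    return size_bytes * 8 / modem_bits_per_second

text_page = 10 * 1024     # ~10 KB of plain HTML text
image_heavy = 300 * 1024  # ~300 KB once graphics are added

print(round(seconds_to_download(text_page), 1))    # 2.8 seconds
print(round(seconds_to_download(image_heavy), 1))  # 85.3 seconds
```

Over a dial-up line, every extra hundred kilobytes of graphics costs the visitor roughly half a minute of waiting, which is why trimming images speeds up a page far more than any server-side tweak.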
Another way to make things faster is to make sure that servers can handle the load applied to them. Internet providers want to make money just like all other businesses, so they try fitting as many customers onto one server as they can. Putting fewer people on each server would create faster service. Also, popular businesses and sites should have enough capacity to handle the number of people that visit; slow servers will lose a lot of business. Internet providers and businesses should plan for future capacity, not just current loads.
Beyond the doomsday scenario, more and more fears about logging into cyberspace are beginning to receive attention. As mentioned above, speed is a major concern, and besides the remedies recommended here, technological improvements need to be developed. For example, bigger pipelines (lines carrying computer data) like fiber optics and satellite transmission are receiving high ratings from people, but like all good things in life, putting bigger pipelines in the ground takes a lot of time and money. If the government or private industry is willing to lay the foundation for faster lines, it will change the world just like the railroad tracks did in the 1800's.
Another major fear of people on the information superhighway is security. Hackers (people that get into data on computers they aren't supposed to) can break into a lot of private information on the information superhighway. In reality it isn't any different than credit cards and valuables being stolen in the real world. There are currently cybercops surfing the web looking for illegal happenings, but patrolling the web is only one way to help put a stop to hackers. Encryption and better security software need to be developed, along with other computer technology, to help control hackers and cybercriminals.
There's no denying the fact that the internet is very powerful in today's world. It combines text, audio, animation, video, and graphics all in one, and at the click of a button you can receive entertainment and news, communicate with people around the world, do your banking, reserve tickets, buy, sell, and trade merchandise or collectibles, or even order dinner for the night. These are just a few things that can be accomplished on the net. People aren't only attracted to what the internet has to offer now but to what will be available in the near future. Some day computers will replace our television, radio, answering machine, and telephone. Technology is developing so rapidly that things not even imaginable today will be developed, making our lives easier but more confusing. After all, no one predicted where the net is today or how fast it would develop.