

A dedicated server is a single computer on a web-hosting network that is leased or rented, and dedicated to just one customer. A service provider monitors the computer’s hardware, network connectivity, and routing equipment, while the customer generally controls and maintains the server software. Dedicated servers are most often used by those who’ve outgrown typical hosting accounts and now require massive amounts of data space and bandwidth, those with mission critical web sites, web hosting companies, or those who have special needs. Dedicated servers are housed in data centers, where service providers can monitor them close-up and have hands-on access to them.

The primary advantage of using a dedicated server over a typical shared hosting account is the sheer amount of resources and control available to you, the customer. In many cases, the client is at liberty to install whatever software they desire, giving them greater flexibility and administrative options. Dedicated server clients do not share resources as those on shared hosting plans do; rather, they are free to use all the resources available to them.

Managed Servers vs. Unmanaged Servers

There are two types of dedicated servers available today: Managed Dedicated Servers and Unmanaged Dedicated Servers.

An Unmanaged Dedicated Server leaves nearly all the management duties of running a server in the purchaser’s control. The customer, in this case, updates software on their own, applies necessary patches, performs kernel compiles and operating system restores, installs software, and monitors security. With this type of dedicated server, the consumer is solely responsible for day-to-day operations and maintenance. The service provider, in turn, monitors the network, repairs hardware problems, and troubleshoots connectivity issues. Additionally, some service providers offer partial management of services, such as network monitoring, software upgrades and other services, but leave the general upkeep of the server in the hands of the client. An unmanaged dedicated server is best for someone with server management experience.

A Managed Dedicated Server is generally more proactively monitored and maintained by the service provider. When renting or leasing a managed server, the service provider or host carries out the responsibility of software updates and patches, putting security measures in place, performing hardware replacements, and also monitoring the network and its connection for trouble. In other words, when utilizing a managed dedicated server, the host provider will perform both hardware and software operations. A managed dedicated server solution works well for the customer with limited server management experience or limited time to perform the duties necessary to keep a server running and online.

Technical Aspects In Choosing A Server

When choosing a dedicated server, there are several things to consider: operating system, hardware options, and space and bandwidth.

The operating system of a server is similar to that on your own personal computer; once installed, the operating system enables one to perform tasks more simply. There is a bevy of server operating systems available today, including Linux-based and Windows-based software. The operating system you choose should relate directly to what operations your server will be performing, which types of software you’ll need to install, and also what you’re more comfortable with.

Hardware options are also something to consider when choosing a dedicated server. You’ll need to pick a processor that’s up to the task, decide how much memory you want installed, choose firewall options, and settle on the size of the hard drive.

A certain amount of bandwidth is generally included when renting or leasing a dedicated server. Once you have ascertained how much bandwidth you will require, you can adjust that limit with your service provider. The space you’ll be given is generally directly related to the size of your hard drive. Some hosts also give clients the choice of uplink port speed (usually 10Mbps/100Mbps).
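For a rough sense of what those port speeds mean, here is a small back-of-the-envelope sketch (assuming, unrealistically, that the port could be saturated around the clock; the 30-day month and the two speeds are just example inputs):

```python
# Rough, hypothetical estimate of the theoretical maximum monthly transfer
# for a given uplink port speed. Real-world usage is far lower because
# traffic is bursty and the port is never saturated around the clock.

def max_monthly_transfer_gb(port_mbps: float, days: int = 30) -> float:
    """Return the theoretical monthly ceiling in gigabytes."""
    seconds = days * 24 * 60 * 60
    megabits = port_mbps * seconds
    return megabits / 8 / 1000      # 8 bits per byte, 1000 MB per GB

for speed in (10, 100):             # the usual 10Mbps/100Mbps choices
    print(f"{speed} Mbps port: up to {max_monthly_transfer_gb(speed):,.0f} GB/month")
```

Real usage will sit far below these ceilings, but the ratio between the two port options is the point of the comparison.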

You really enjoy those DVD movies and games, and the last thing you need or want is to experience problems with your DVD drive.

To prepare for the possibility of your DVD drive leaving you out in the cold one morning, we will discuss problems that may cause DVD failure as well as the procedures you should take to correct them.

As with all drives, be sure to double check the failure. If the DVD drive will not read the DVD, try running another DVD in the drive. Make sure the DVD has no scratches and is clean.

Visually inspect the drive if it is external; if the drive is internal, check inside the computer. Check that the computer has good ventilation to help keep it cool. Here are the common DVD problems with their solutions.

DVD DRIVE HAS NO POWER

First... For external drives that have no power, check to see if anything or anyone has caused the power cord to become unplugged. Rule out the wall outlet by plugging in another device, such as a radio, and see if it works.

Second... If you've proven the wall outlet to be good but you still don't have power, check the surge protector for any signs of damage. If the surge protector is good, check the cord.

Third... If you're certain the surge protector or wall outlet is providing power, double check the cord by plugging it in a few times. If no power is present, you will have to replace the cord or the drive itself.

Internal DVD drives receive their power from a connector on the power supply. Try another connector to the drive. If the internal DVD drive still does not receive power after using another connector, the drive is faulty.

DRIVE HAS POWER BUT TRAY WON'T OPEN

You may experience the tray failing to open. Should this happen, press the button a couple of times to see if it will open. If the tray fails to open, reboot your computer and try to open the tray again.

When rebooting the system, watch the monitor to see if the drive is recognized by the computer. Some systems will not display installed hardware during bootup. If this is the case, you will have to access your BIOS to check whether the DVD drive is being registered.

You can also try the manual eject button on the drive to get it to open. Use something very small but firm to press into the pinhole on the front of the drive to open the tray.

Shut the computer off and unplug it. Use something like a long paperclip inserted into the pinhole to open the tray. The tray may open a couple of inches, and you can then grab it with your fingers to open it completely.

DRIVE IS NOT RECOGNIZED BY WINDOWS

Be sure the operating system is recognizing the drive by clicking on My Computer. Windows XP will show it under "Devices with Removable Storage". If your drive is present, highlight the drive, right-click, and select Properties. You should see "This device is working properly".

If you see another message, such as "This device is not working properly", you may be able to update the device driver. If the drive is not present in My Computer, reboot the computer and access the CMOS setup.

In the CMOS setup, the DVD drive should be present. If the drive is missing, it may not be properly installed, or one of the cables may have become disconnected.

If you check the drive cables and are certain they are connected correctly, the data cable may be faulty or the drive controller may be at fault. And we can't overlook the fact that the drive itself may be bad.

DRIVE HAS POWER BUT WILL NOT READ DVD

First... try another DVD, since a dirty or scratched DVD may not play. If the new DVD fails to play as well, check to see whether the operating system is recognizing the drive.

Click on My Computer and highlight the DVD drive. Right-click and select Properties. The statement "This device is working properly" should be present. If not, or if you see another message, try to update the device driver.

In the My Computer window, highlight the DVD drive, select Properties, open the Driver tab, and then select Update Driver.

To make a backup of your registry with Windows 98, just go to Start, select Run, enter scanregw and click OK. This will run Scanregw.exe.

Here's how to format a hard drive (Legal Stuff: We are not responsible for any damages, lost data, or anything of the sort)...

If you have a computer, you surely know what a hard drive is. If you don't have one, or simply don't know what a hard drive is, then this article will begin with a short description of the hard drive. Then we will cover formatting a hard drive...

Step 1: What Is A Hard Disk Drive?

A hard disk drive in computing is a type of storage device made up of hard disk platters, a spindle, read and write heads, read and write arms, electrical motors, and integrated electronics contained inside an airtight enclosure.

Now that you know what the hard drive is, let's get to the point of this article: how to format a hard disk drive...

Step 2...

First of all, you should have a reason if you really want to learn how to format a hard drive. But don't forget that formatting a hard drive does NOT permanently delete your data!

Of course, when you format your hard drive you think that the data is really deleted, but that is not the case.

The fact is that the data you have "deleted" can be restored. Nonetheless, you should not experiment with formatting a hard drive, because you never know what may happen. Of course, it also depends on the software you use; for example, there are products that will permanently delete the data you want gone, after which you can continue the process of formatting the hard drive.

Step 3...

In fact there is nothing so difficult about it. You first need to decide what operating system you intend to load after formatting the hard drive.

It is best and easiest to use a boot disk for that operating system, such as MS-DOS 6.2, Windows 95B, or Windows 98 SE. You will need the proper Windows 95/98 boot disk in order to load these operating systems on the computer; otherwise it will refuse to load because the wrong operating system is on the computer.

Step 4...

Then you will have to insert your boot disk in the floppy drive and start the computer.

Once the system has completed booting and an A: prompt appears, type format C: /s and then press Enter. This command tells the system to format your "C" drive and, when it is finished, to copy the system files to the drive.

The "/s" switch stands for "System". You can format a different drive this way by using a different drive letter.

Step 5...

After that you will see on the screen the following text: "WARNING, ALL DATA ON NON-REMOVABLE DISK DRIVE C: WILL BE LOST! Proceed with Format (Y/N)?" and if you really want to continue, type [Y] and then press Enter.

Your screen should display the size of your drive and a countdown in percentage of formatting completed. Depending on your computer's speed and the size of the drive it can take from a few minutes to over 15 minutes.

When it reaches 100% complete, you will see a new message: FORMAT COMPLETE. SYSTEM TRANSFERRED. This message is to indicate that the files required to boot your computer from the hard drive have been copied from the floppy to the hard drive.

The computer can now boot from the hard drive without a boot disk in the floppy drive.

The last message that will appear on your screen is the following: "Volume label (11 characters, ENTER for none)?" You can either type a volume label of up to 11 characters or simply press Enter for none. And now, you can finally begin to load your Operating System.

Keep in mind that you may receive an error message that says "insufficient memory to load system files". If you do receive such a message, do not worry. It is caused by the lack of a memory manager loaded at boot, which means your PC can only access the first 1MB of RAM.

You can handle this situation in two ways. The first is to omit the /s switch when formatting: type FORMAT C: and then press Enter. When the format is complete, manually add the system files to your hard drive by using the command SYS C: and press Enter again.

The second solution is to load a memory manager in order to overcome this issue. If you don't have any you can easily download one from one of the million sites on the Internet.

Step 6...

You see, we have finally reached the end of How To Format A Hard Drive and, consequently, the end of this article. Now you surely know how to format a hard drive. But, once again, don't play with the commands if you are not serious about formatting a hard drive.

Even if the data is restorable, you may do something wrong to your computer. That is why you should be careful! And now, good luck!

Forget about emptying your wallets every time you see the blinking light. Quit worrying and start doing it yourself! It’s an easy process that won’t take you more than five minutes.

The following is included in a typical ink refill kit: ink bottles, syringes, and detailed instructions. Some kits include an air balance clip for balancing the air inside the cartridge to ensure proper ink flow. Some kits also include a hand-drill tool to make a hole in the top of your empty cartridge.

Refilling Process

1. To start the refilling process, fill the syringe with one of the ink colors over the sink or several sheets of scrap paper to prevent any mess. Different printers hold different amounts of ink. In most Epson printers, the black cartridge holds approximately 17 ml and the color cartridges hold approximately 8 ml. See the instructions with your refill kit to see how much ink your cartridges can hold.

2. Before inserting the needle, make a small hole in the top of the cartridge (one for each color chamber). The hole is at the top of the cartridge near the label. Simply push the needle through the hole and press it down to the bottom of the cartridge, towards the outlet hole. It’s important to fill the cartridge slowly to keep the ink from foaming and introducing air into the chamber.

3. You do not need to seal the refill holes since there are already breather holes on the top of the cartridge.

4. Any unused ink can be put back in the bottle. You should clean the syringe with water and dry it properly to do the same process for the other cartridges or for future use. You can also label each syringe for the different colors so that each syringe is only used with one color.

5. Once you place the cartridge back in the printer, run the cleaning cycle 1 to 3 times. If there are any gaps in the printing, run the cleaning cycle again.

Don’t Forget

There are a few things to remember when refilling your cartridge. It should be refilled before the cartridge is completely empty to keep the chamber from drying out and clogging. Also, it is a good idea to let the printer cartridge sit for a few hours (or overnight) so that the pressure in the cartridge can stabilize.

Some printers, like newer Epson models, have a green chip on their ink cartridges which is visible by looking at your cartridge. They are often referred to as “Intellidge” cartridges. The chip keeps track of how often the cartridge is used and lets the computer know when the cartridge may be low or empty. As long as you reset the chip, refilling the cartridge with ink from a refill kit will not be a problem. A resetting tool can be used to reset the memory on the chip. This allows the printer to recognize the cartridge as being full which makes printing with a refilled cartridge possible.

Refilling your own ink cartridge is easy, good for the environment, and very good for your pocket.

DVD-ROM drives have become the most commonly used optical drives in desktop and notebook computers. They are very reliable and now come as standard in most computers. If you are looking for a laptop, make sure it has a DVD-ROM drive; this will give you extra speed for normal CDs, and you will be able to watch your favourite DVDs while you travel.

I often sit up late watching DVDs on my laptop after a hard working day.

If you are interested in desktop computers, then the DVD drive will enable you to watch your favourite DVDs on your monitor. I currently have a 21-inch monitor and a 5.1 computer surround kit. This brings DVDs to life and acts just like a home cinema system; however, the quality is even better.

Most DVD-ROM drives come in speeds ranging from 4x to 10x. This is more than adequate to watch the latest DVDs, play the latest games, and use and install the latest software. Normal CDs can only be read at a certain speed, and it is a lot lower than what a DVD-ROM offers. I hope you now have a better understanding of why DVD-ROMs are used more than normal CD drives nowadays. It is basically all part of bringing a high-quality home entertainment system closer to everyone.
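To put "a lot lower" into perspective, the sketch below compares raw read rates using the commonly cited base rates of roughly 150 KB/s for 1x CD and roughly 1,385 KB/s for 1x DVD; the 52x and 10x multipliers are example figures, not claims about any particular drive:

```python
# Compare raw read rates of CD and DVD drives using commonly cited base rates.
# 1x CD ~ 150 KB/s, 1x DVD ~ 1385 KB/s (approximate figures).

CD_1X_KBPS = 150
DVD_1X_KBPS = 1385

def read_rate_mb_per_s(multiplier: float, base_kbps: float) -> float:
    return multiplier * base_kbps / 1000

print(f"52x CD drive : {read_rate_mb_per_s(52, CD_1X_KBPS):.1f} MB/s")
print(f"10x DVD drive: {read_rate_mb_per_s(10, DVD_1X_KBPS):.1f} MB/s")
```

Even a modest 10x DVD drive comfortably outpaces a fast CD drive on raw throughput, which is why the extra speed for normal CDs comes along for the ride.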

Blu-ray is an optical disc format which is set to rival HD-DVD in the race to be the de-facto standard storage medium for HDTV. The HD-DVD vs Blu-ray battle resembles those between Betamax and VHS, and between DVD+RW and DVD-RW.

Currently, the major Hollywood film studios are split evenly in their support for Blu-
ray and HD-DVD, but most of the electronics industry is currently in the blue corner.

The key difference between these new players and recorders and current optical disc
technology is that Blu-ray, as its name suggests, uses a blue-violet laser to read
and write data rather than a red one. Blue light has a shorter wavelength than red
light, and according to the Blu-ray Disc Association (BDA), which is made up of,
amongst others, Sony, Philips, Panasonic, and Pioneer, this means that the laser
spot can be focussed with greater precision.

Blu-ray discs have a maximum capacity of 25GB and dual-layer discs can hold up to
50GB - enough for four hours of HDTV. Like HD-DVD, Blue laser discs don’t require
a caddy and the players and recorders will be able to play current DVD discs. Codecs
supported by Blu-ray include the H.264 MPEG-4 codec which will form part of
Apple’s QuickTime 7, and the Windows Media 9 based VC-1.
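As a rough sanity check of the "four hours of HDTV" figure, dividing a dual-layer disc's 50GB by four hours gives the average bitrate such a recording could sustain (a back-of-the-envelope sketch assuming decimal gigabytes, not an official specification):

```python
# Average sustainable bitrate of a 50GB dual-layer Blu-ray disc over 4 hours.
# Assumes decimal gigabytes (10**9 bytes), as disc capacities are usually quoted.

capacity_bytes = 50 * 10**9
hours = 4
bits = capacity_bytes * 8
avg_mbps = bits / (hours * 3600) / 10**6
print(f"Average bitrate: {avg_mbps:.1f} Mbit/s")   # roughly 28 Mbit/s
```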

High definition DVD, also known as HD-DVD (which actually stands for High Density DVD), is one of two competing high definition storage formats - the other being Blu-ray.

The need for a new, high capacity storage format has been brought about primarily by the rapid rise in popularity of HDTV in Japan and the US. HDTV has much higher bandwidth than either NTSC or regular DVD, so in order to record HDTV programs, higher capacity discs of at least 30GB are required.

High definition video is also being used increasingly to make Hollywood movies as it
offers comparable quality to film at much less cost. Therefore, the studios plan to
release future movies on one or both high definition formats.

HD-DVD was developed by Toshiba and NEC and has the support of the DVD Forum, along with a number of Hollywood studios. Currently the studios which have announced support for HD-DVD are Universal Studios, Paramount Studios, Warner Bros., and New Line Cinema. It has a capacity of 15GB for single-sided discs and 30GB for double-sided. It doesn’t need a caddy or cartridge, and the cover layer is the same thickness as current DVD discs, 0.6mm. The numerical aperture of the optical pick-up head, 0.65, is also close to that of DVD.

Because of its similarities to current DVD, high definition DVD is cheaper to manufacture than Blu-ray, because it doesn’t need big changes in the production line set-up. Both HD-DVD and Blu-ray have backward compatibility with existing DVD discs. That is, current DVDs will play in an HD-DVD player, although new high definition DVDs won’t play in older DVD players.

High definition DVD currently supports a number of compression formats, including
MPEG-2, VC1 (based on Microsoft’s Windows Media 9), and H.264 which is based on
MPEG-4 and will be supported by the next version of Apple’s QuickTime software,
which will be included with Mac OS X Tiger.

ROUTING PROTOCOLS

A routing protocol is a formula, or protocol, used by a router to determine the appropriate path over which data is transmitted. The routing protocol also specifies how routers in a network share information with each other and report changes. The routing protocol enables a network to make dynamic adjustments to its conditions, so routing decisions do not have to be predetermined and static.

Routing, Routed and Non-Routable Protocols


ROUTING PROTOCOLS

ROUTING PROTOCOLS are the software that allow routers to dynamically advertise and learn routes, determine which routes are available and which are the most efficient routes to a destination. Routing protocols used by the Internet Protocol suite include:

· Routing Information Protocol (RIP and RIP II).

· Open Shortest Path First (OSPF).

· Intermediate System to Intermediate System (IS-IS).

· Interior Gateway Routing Protocol (IGRP).

· Cisco's Enhanced Interior Gateway Routing Protocol (EIGRP).

· Border Gateway Protocol (BGP).

Routing is the process of moving data across two or more networks. Within a network, all hosts are directly accessible because they are on the same network, so no routing is required.

ROUTED PROTOCOLS

ROUTED PROTOCOLS are nothing more than data being transported across the networks. Routed protocols include:

· Internet Protocol

o Telnet

o Remote Procedure Call (RPC)

o SNMP

o SMTP

· Novell IPX

· Open Systems Interconnection (OSI) networking protocols

· DECnet

· AppleTalk

· Banyan Vines

· Xerox Network System (XNS)

Outside a network, specialized devices called ROUTERS are used to perform the routing process of forwarding packets between networks. Routers are connected to the edges of two or more networks to provide connectivity between them. These devices are usually dedicated machines with specialized hardware and software to speed up the routing process. These devices send and receive routing information to each other about networks that they can and cannot reach. Routers examine all routes to a destination, determine which routes have the best metric, and insert one or more routes into the IP routing table on the router. By maintaining a current list of known routes, routers can quickly and efficiently send your information on its way when it is received.
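As a toy illustration of "determine which routes have the best metric", here is a minimal sketch that keeps only the lowest-metric route per destination; the network prefixes, next hops, and metrics are hypothetical, and real routers track far more state (interfaces, timers, administrative distance, and so on):

```python
# Minimal sketch of best-metric route selection. The destinations and metrics
# are hypothetical; real routers also track next hops, interfaces, timers, etc.

advertised_routes = [
    ("10.1.0.0/16", "192.168.1.1", 3),
    ("10.1.0.0/16", "192.168.2.1", 1),    # lower metric wins for this prefix
    ("172.16.0.0/16", "192.168.1.1", 2),
]

routing_table = {}
for destination, next_hop, metric in advertised_routes:
    best = routing_table.get(destination)
    if best is None or metric < best[1]:
        routing_table[destination] = (next_hop, metric)

for destination, (next_hop, metric) in routing_table.items():
    print(f"{destination} via {next_hop} metric {metric}")
```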

There are many companies that produce routers: Cisco, Juniper, Bay, Nortel, 3Com, Cabletron, etc. Each company's product is different in how it is configured, but most will interoperate so long as they share common physical and data link layer protocols (Cisco HDLC or PPP over Serial, Ethernet etc.). Before purchasing a router for your business, always check with your Internet provider to see what equipment they use, and choose a router that will interoperate with your Internet provider's equipment.

NON-ROUTABLE PROTOCOLS

NON-ROUTABLE PROTOCOLS cannot survive being routed. Non-routable protocols presume that all computers they will ever communicate with are on the same network (to get them working in a routed environment, you must bridge the networks). Today's modern networks are not very tolerant of protocols that do not understand the concept of a multi-segment network, and most of these protocols are dying or falling out of use.

· NetBEUI

· DLC

· LAT

· DRP

· MOP

RIP (Routing Information Protocol)

RIP is a dynamic internetwork routing protocol primarily used in interior routing environments. A dynamic routing protocol, as opposed to a static routing protocol, automatically discovers routes and builds routing tables. Interior environments are typically private networks (autonomous systems). In contrast, exterior routing protocols such as BGP are used to exchange route summaries between autonomous systems. BGP is used among autonomous systems on the Internet.

RIP uses the distance-vector algorithm developed by Bellman and Ford (Bellman-Ford algorithm).

Routing Information Protocol

Background

The Routing Information Protocol, or RIP, as it is more commonly called, is one of the most enduring of all routing protocols. RIP is also one of the more easily confused protocols because a variety of RIP-like routing protocols proliferated, some of which even used
the same name! RIP and the myriad RIP-like protocols were based on the same set of algorithms that use distance vectors to mathematically compare routes to identify the best path to any given destination address. These algorithms emerged from academic research that dates back to 1957.

Today's open standard version of RIP, sometimes referred to as IP RIP, is formally defined in two documents: Request For Comments (RFC) 1058 and Internet Standard (STD) 56. As IP-based networks became both more numerous and greater in size, it became apparent to the Internet Engineering Task Force (IETF) that RIP needed to be updated. Consequently, the IETF released RFC 1388 in January 1993, which was then superseded in November 1994 by RFC 1723, which describes RIP 2 (the second version of RIP). These RFCs described an extension of RIP's capabilities but did not attempt to obsolete the previous version of RIP. RIP 2 enabled RIP messages to carry more information, which permitted the use of a simple authentication mechanism to secure table updates. More importantly, RIP 2 supported subnet masks, a critical feature that was not available in RIP.

This section summarizes the basic capabilities and features associated with RIP. Topics include the routing update process, RIP routing metrics, routing stability, and routing timers.

Routing Updates

RIP sends routing-update messages at regular intervals and when the network topology changes. When a router receives a routing update that includes changes to an entry, it updates its routing table to reflect the new route. The metric value for the path is increased by 1, and the sender is indicated as the next hop. RIP routers maintain only the best route (the route with the lowest metric value) to a destination. After updating its routing table, the router immediately begins transmitting routing updates to inform other network routers of the change. These updates are sent independently of the regularly scheduled updates that RIP routers send.

RIP Routing Metric

RIP uses a single routing metric (hop count) to measure the distance between the source and a destination network. Each hop in a path from source to destination is assigned a hop count value, which is typically 1. When a router receives a routing update that contains a new or changed destination network entry, the router adds 1 to the metric value indicated in the update and enters the network in the routing table. The IP address of the sender is used as the next hop.

RIP Stability Features

RIP prevents routing loops from continuing indefinitely by implementing a limit on the number of hops allowed in a path from the source to a destination. The maximum number of hops in a path is 15. If a router receives a routing update that contains a new or changed entry, and if increasing the metric value by 1 causes the metric to be infinity (that is, 16), the network destination is considered unreachable. The downside of this stability feature is that it limits the maximum diameter of a RIP network to less than 16 hops.
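The hop-count rule and the 16-means-unreachable limit can be condensed into a short sketch. This is a simplification for illustration only; it ignores timers, split horizon, and holddown, and the addresses and metrics used are made up:

```python
# Simplified RIP-style update rule: add 1 to the advertised metric, treat 16
# as infinity (unreachable), and keep a route only if it beats the current one.

RIP_INFINITY = 16

def process_update(routing_table, neighbor_ip, advertised):
    """advertised maps destination -> metric as received from the neighbor."""
    for destination, metric in advertised.items():
        new_metric = min(metric + 1, RIP_INFINITY)
        current = routing_table.get(destination)
        if new_metric >= RIP_INFINITY:
            # Destination is unreachable through this neighbor.
            if current and current[0] == neighbor_ip:
                routing_table[destination] = (neighbor_ip, RIP_INFINITY)
            continue
        if current is None or new_metric < current[1]:
            routing_table[destination] = (neighbor_ip, new_metric)

table = {}
process_update(table, "10.0.0.2", {"192.168.5.0/24": 2, "192.168.9.0/24": 15})
print(table)   # the 15-hop route becomes 16 and is treated as unreachable
```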

RIP includes a number of other stability features that are common to many routing protocols. These features are designed to provide stability despite potentially rapid changes in a network's topology. For example, RIP implements the split horizon and holddown mechanisms to prevent incorrect routing information from being propagated.

RIP Timers

RIP uses numerous timers to regulate its performance. These include a routing-update timer, a route-timeout timer, and a route-flush timer. The routing-update timer clocks the interval between periodic routing updates. Generally, it is set to 30 seconds, with a small random amount of time added whenever the timer is reset. This is done to help prevent congestion, which could result from all routers simultaneously attempting to update their neighbors. Each routing table entry has a route-timeout timer associated with it. When the route-timeout timer expires, the route is marked invalid but is retained in the table until the route-flush timer expires.
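A minimal sketch of how these timers might be modelled is shown below. Only the 30-second update interval comes from the text above; the 180-second timeout and 240-second flush values are assumed typical defaults and vary between implementations:

```python
# Toy model of RIP's route-timeout and route-flush timers. Only the 30-second
# update interval is taken from the text; the other values are assumptions.
import time

UPDATE_INTERVAL = 30      # seconds between periodic updates (plus jitter)
ROUTE_TIMEOUT = 180       # assumed: mark a route invalid if not refreshed
ROUTE_FLUSH = 240         # assumed: remove the route from the table entirely

def route_state(last_refreshed: float, now: float) -> str:
    age = now - last_refreshed
    if age >= ROUTE_FLUSH:
        return "flushed"
    if age >= ROUTE_TIMEOUT:
        return "invalid (kept in the table until flushed)"
    return "valid"

now = time.time()
print(route_state(now - 200, now))   # invalid: past timeout, before flush
```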

Packet Formats

The following sections describe the IP RIP and IP RIP 2 packet formats field by field.
RIP Packet Format

· Command—Indicates whether the packet is a request or a response. The request asks that a router send all or part of its routing table. The response can be an unsolicited regular routing update or a reply to a request. Responses contain routing table entries. Multiple RIP packets are used to convey information from large routing tables.

· Version number—Specifies the RIP version used. This field can signal different potentially incompatible versions.

· Zero—This field is not actually used by RFC 1058 RIP; it was added solely to provide backward compatibility with prestandard varieties of RIP. Its name comes from its default value: zero.

· Address-family identifier (AFI)—Specifies the address family used. RIP is designed to carry routing information for several different protocols. Each entry has an address-family identifier to indicate the type of address being specified. The AFI for IP is 2.

· Address—Specifies the IP address for the entry.

· Metric—Indicates how many internetwork hops (routers) have been traversed in the trip to the destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.

Note: Up to 25 occurrences of the AFI, Address, and Metric fields are permitted in a single IP RIP packet. (Up to 25 destinations can be listed in a single RIP packet.)
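For the curious, the layout described above can be made concrete by packing and unpacking the bytes of a RIP version 1 packet, using the 4-byte header and 20-byte route entries defined in RFC 1058. The address and metric below are hypothetical example values, and this sketch builds the packet in memory only; it does not send anything on the network:

```python
# Build and parse a minimal RIP version 1 response packet (RFC 1058 layout):
# a 4-byte header (command, version, must-be-zero) followed by 20-byte route
# entries (AFI, zero, IP address, zero, zero, metric).
import socket
import struct

AFI_IP = 2  # address-family identifier for IP

def build_entry(ip: str, metric: int) -> bytes:
    return struct.pack("!HH4sIII", AFI_IP, 0, socket.inet_aton(ip), 0, 0, metric)

header = struct.pack("!BBH", 2, 1, 0)              # command=2 (response), version=1
packet = header + build_entry("192.168.10.0", 3)   # hypothetical route entry

command, version, _zero = struct.unpack("!BBH", packet[:4])
afi, _, addr, _, _, metric = struct.unpack("!HH4sIII", packet[4:24])
print(f"command={command} version={version} afi={afi} "
      f"address={socket.inet_ntoa(addr)} metric={metric}")
```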

RIP 2 Packet Format

· Command—Indicates whether the packet is a request or a response. The request asks that a router send all or a part of its routing table. The response can be an unsolicited regular routing update or a reply to a request. Responses contain routing table entries. Multiple RIP packets are used to convey information from large routing tables.

· Version—Specifies the RIP version used. In a RIP packet implementing any of the RIP 2 fields or using authentication, this value is set to 2.

· Unused—Has a value set to zero.

· Address-family identifier (AFI)—Specifies the address family used. RIPv2's AFI field functions identically to RFC 1058 RIP's AFI field, with one exception: If the AFI for the first entry in the message is 0xFFFF, the remainder of the entry contains authentication information. Currently, the only authentication type is simple password.

· Route tag—Provides a method for distinguishing between internal routes (learned by RIP) and external routes (learned from other protocols).

· IP address—Specifies the IP address for the entry.

· Subnet mask—Contains the subnet mask for the entry. If this field is zero, no subnet mask has been specified for the entry.

· Next hop—Indicates the IP address of the next hop to which packets for the entry should be forwarded.

· Metric—Indicates how many internetwork hops (routers) have been traversed in the trip to the destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.

Note: Up to 25 occurrences of the AFI, Address, and Metric fields are permitted in a single IP RIP packet. That is, up to 25 routing table entries can be listed in a single RIP packet. If the AFI specifies an authenticated message, only 24 routing table entries can be specified. Given that individual table entries aren't fragmented into multiple packets, RIP does not need a mechanism to resequence datagrams bearing routing table updates from neighboring routers.

Summary

Despite RIP's age and the emergence of more sophisticated routing protocols, it is far from obsolete. RIP is mature, stable, widely supported, and easy to configure. Its simplicity is well suited for use in stub networks and in small autonomous systems that do not have enough redundant paths to warrant the overheads of a more sophisticated protocol.

Review Questions

Q—Name RIP's various stability features.

A—RIP has numerous stability features, the most obvious of which is RIP's maximum hop count. By placing a finite limit on the number of hops that a route can take, routing loops are discouraged, if not completely eliminated. Other stability features include its various timing mechanisms that help ensure that the routing table contains only valid routes, as well as split horizon and holddown mechanisms that prevent incorrect routing information from being disseminated throughout the network.

Q—What is the purpose of the timeout timer?

A—The timeout timer is used to help purge invalid routes from a RIP node. Routes that aren't refreshed for a given period of time are likely invalid because of some change in the network. Thus, RIP maintains a timeout timer for each known route. When a route's timeout timer expires, the route is marked invalid but is retained in the table until the route-flush timer expires.

Q—What two capabilities are supported by RIP 2 but not RIP?

A—RIP 2 enables the use of a simple authentication mechanism to secure table updates. More importantly, RIP 2 supports subnet masks, a critical feature that is not available in RIP.

Q—What is the maximum network diameter of a RIP network?

A—A RIP network's maximum diameter is 15 hops. RIP can count to 16, but that value is considered an error condition rather than a valid hop count.

Computer network installation has become an essential prerequisite for any efficient modern-day business as it allows employees to truly work as a team by sharing information, accessing the same database and staying in touch constantly. For a computer network to give the best results, a lot of detailed planning and foresight is required before installation.

Firstly, an organisation needs to clearly define its requirements – how many people would use the network, how many would use it locally (within the office) and how many might require remote access (from a different location), how many computers and other devices (servers, printers, scanners) would be connected to the network, what are the needs of the various departments and who would be in charge of running/managing the network. It also helps if one can anticipate the direction the company would take in the near future so potential growth can be factored in during computer network installation.

The technology issues should also be ironed out in advance – hardware, software, servers, switches, back-up devices, cables and network operating systems. Make sure you have the required licenses to run the software on all your machines before installing a computer network. Alongside computer network installation should proceed the building of a dedicated technical support staff, either within your own organisation or outside consultants. Delegate responsibility clearly for network management. Before installing the network, you also need to choose the security mechanism to protect corporate data and keep viruses at bay.

The transition to a new or upgraded computer network can bring some teething problems. To minimise chances of confusion, the company might need to train its staff to make them familiar with the new system. Careful planning will to a large extent prevent crises like system downtime and network crashes.

Bluetooth Basics

Bluetooth technology is nothing new, but in many respects it still seems to be more of a buzz word rather than a well understood, commonly accepted technology. You see advertisements for Bluetooth enabled cell phones, PDAs, and laptops, and a search of the Geeks.com website shows all sorts of different devices taking advantage of this wireless standard. But, what is it?

History

Before getting into the technology, the word Bluetooth is intriguing all on its own, and deserves a look. The term is far less high tech than you might imagine, and finds its roots in European history. The King of Denmark from 940 to 981 was renowned for his ability to help people communicate, his name (in English)... Harald Bluetooth. Perhaps a bit obscure, but the reference is appropriate for a wireless communications standard.

Another item worth investigating is the Bluetooth logo. Based on characters from the runic alphabet (used in ancient Denmark), it was chosen as it appears to be the combination of the English letter B and an asterisk.

Capabilities

The FAQ on the Bluetooth.org (https://www.bluetooth.org/) website offers a basic definition: "Bluetooth wireless technology is a worldwide specification for a small-form factor, low-cost radio solution that provides links between mobile computers, mobile phones, other portable handheld devices, and connectivity to the Internet."

Just like 802.11 b/g wireless networking systems and many cordless telephones, Bluetooth devices operate on 2.4 GHz radio signals. That band seems to be getting a bit crowded, and interference between devices may be difficult to avoid. Telephones are now being offered on the 5.8 GHz band to help remedy this, and Bluetooth has taken its own steps to reduce interference and improve transmission quality. Version 1.1 of the Bluetooth standard greatly reduces interference issues, but requires completely different hardware from the original 1.0C standard, thus eliminating any chance of backwards compatibility.

The typical specifications of Bluetooth indicate a maximum transfer rate of 723 kbps and a range of 20-100 meters (65 to 328 feet - depending on the class of the device). This speed is a fraction of that offered by 802.11 b or g wireless standards, so it is obvious that Bluetooth doesn’t pose a threat to replace your wireless network. Although it is very similar to 802.11 in many ways, Bluetooth was never intended to be a networking standard, but does have many practical applications.
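To see what 723 kbps means in practice, here is a quick, idealised comparison of the time to move a 5 MB file over Bluetooth versus a nominal 11 Mbps 802.11b link. The file size is an arbitrary example and protocol overhead is ignored, so real transfers will be slower:

```python
# Idealised transfer-time comparison for a 5 MB file (no protocol overhead).
# 723 kbps is Bluetooth's quoted maximum; 11 Mbps is 802.11b's nominal rate.

FILE_MB = 5
bits = FILE_MB * 8 * 10**6

for name, kbps in (("Bluetooth 1.x", 723), ("802.11b", 11000)):
    seconds = bits / (kbps * 1000)
    print(f"{name:>13}: about {seconds:.0f} seconds")
```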

Practical Applications

There are a variety of products that take advantage of Bluetooth’s capabilities, from laptops and PDAs, to headphones and input devices, and even wireless printer adapters.

Many Laptops include an onboard Bluetooth adaptor to allow the system to connect to any Bluetooth device right out of the box. For laptop or desktop systems that do not have an adaptor built in, there are many USB Bluetooth adaptors available.

Bluetooth enabled PDAs allow for convenient wireless synchronization and data transfer.

Headphones can take advantage of Bluetooth for two purposes… audio playback and mobile phone communications. Using something like a mobile headset with a Bluetooth enabled mobile phone allows anyone to go hands free, as well as wire free.

Logitech, and other manufacturers, also produce input devices that eliminate wires thanks to Bluetooth. You can add a Bluetooth mouse to your system, or both a mouse and keyboard. One advantage that Bluetooth wireless keyboard/mouse combinations have over standard RF wireless keyboard/mouse combinations is range. Where most standard RF keyboard/mouse combinations have a range of up to 6 feet, a Bluetooth keyboard/mouse combination will usually have a range of up to 30 feet.

Bluetooth printer adaptors make sharing a printer extremely convenient by eliminating the need for any wires or special configurations on a typical network. Printing to any compatible HP printer from a PC, PDA or mobile phone can now be done easily from anywhere in the office.

What is Video Encryption?

Video encryption is an extremely useful method for stopping the unwanted interception and viewing of any transmitted video or other information, for example from a law enforcement video surveillance camera being relayed back to a central viewing centre.

The scrambling is the easy part; it is the decryption that's hard, although several techniques are available. However, the human eye is very good at spotting distortions in pictures caused by poor video decoding or a poor choice of video scrambling hardware. Therefore, it is very important to choose the right hardware, or else your video transmissions may be insecure or your decoded video may not be watchable.

Some of the more popular techniques are detailed below:

Line Inversion:

Method: Whole or parts of the signal scan lines are inverted.

Advantages: Simple, cheap video encryption.

Disadvantages: Poor video decrypting quality, low obscurity, low security.

Sync Suppression:

Method: Hide/remove the horizontal/vertical line syncs.

Advantages: Provides a low cost solution to Encryption and provides good quality video decoding.

Disadvantages: This method is incompatible with some distribution equipment. Obscurity (i.e. how easy it is to visually decipher the image) is dependent on video content.

Line Shuffle:

Method: Each signal line is re-ordered on the screen.

Advantages: Provides a compatible video signal, a reasonable amount of obscurity, good decode quality.

Disadvantages: Requires a lot of digital storage space. There are potential issues with video stability. Less secure than the cut and rotate encryption method (see below).

Cut & Rotate:

Scrambling Method: Each scan line is cut into pieces and re-assembled in a different order.

Advantages: Provides a compatible video signal, gives an excellent amount of obscurity, as well as good decode quality and stability.

Disadvantages: Can have complex timing control and requires specialized scrambling equipment.

The cut and rotate video encryption method is probably the best way of achieving reliable and good quality video encryption; an example of a good implementation of this system is the Viewlock II.
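As a toy illustration of the cut-and-rotate idea (not a real scrambler, which operates on the analogue waveform with precise timing and dedicated hardware), the sketch below cuts each "scan line" of pixel values at a position derived from a shared key and swaps the pieces, with a matching descramble step:

```python
# Toy cut-and-rotate scrambler: each "scan line" (a list of pixel values) is
# cut at a key-derived position and the two pieces are swapped. Real systems
# do this to the analogue waveform with dedicated hardware.
import random

def cut_points(key: int, num_lines: int, line_length: int):
    rng = random.Random(key)                 # both ends derive the same cuts
    return [rng.randrange(1, line_length) for _ in range(num_lines)]

def scramble(frame, key):
    cuts = cut_points(key, len(frame), len(frame[0]))
    return [line[cut:] + line[:cut] for line, cut in zip(frame, cuts)]

def descramble(frame, key):
    cuts = cut_points(key, len(frame), len(frame[0]))
    return [line[-cut:] + line[:-cut] for line, cut in zip(frame, cuts)]

frame = [[1, 2, 3, 4, 5, 6, 7, 8], [9, 10, 11, 12, 13, 14, 15, 16]]
assert descramble(scramble(frame, key=1234), key=1234) == frame
print(scramble(frame, key=1234))
```

Because both ends derive the same cut positions from the key, the receiver can reassemble each line exactly; anyone without the key sees only shuffled lines.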

Implementing video scrambling

The video scrambling hardware, in particular the decoder, should function correctly even if the signal is noisy (for example showing what is commonly known as 'snow', the flecks on a TV screen often seen in poor reception areas) or unstable. If the link to the encrypted signal stops working, this should not be a problem: the link between the video encoder and video decoder should be regained and the decryption quickly continued.

The very nature of security camera systems is that they are often outdoors and so must be able to withstand the rigours of the weather. The video encryption hardware should be stable under, or protected from, the effects of rain, sunlight, extreme heat and cold. It should not be damaged if there is a power spike in the supply. In these systems the video encoder emits a wireless signal to the video decoder unit before it is viewed; it obviously must be the case that the very act of broadcasting the signal does not affect the video encoding hardware, and likewise the video encoding hardware should not affect the radio transmitter.

The most important point is that the video scrambling system should be secure, or else why bother? It is amazing how easily some encryption methods can be cracked. For example, certain cable television stations 'encrypt' their channel broadcasts via a relatively uncomplicated method, which can easily be decoded using a few cheap bits of electronics from Radio Shack. That would obviously be illegal! The cable TV's method of encryption is very crude: they usually just dynamically alter the vertical sync signal so that your TV cannot get a proper lock on it and the picture scrolls randomly.

The other extreme is to scramble the transmitted video signal so much that it is costly, in both equipment and time, to decode the video at the receiver. Remember that this is a 'live' video scrambling broadcast followed by a 'live' video decryption display. ANY electronics can be copied, given enough money and time, but making this process as hard as possible is of benefit, as it at least delays the time when illegal copies will be available.

IMO, these should work 'like a VCR' as far as recording and playback. There are models w/ hard drives, VHS players, etc. built in, but to me that's overboard.

Bells and Whistles

The VHS option is not bad, but you most likely already have one you can plug into the inputs of the DVD recorder.

I have a DVD recorder for archiving TiVo shows as opposed to accessing my TiVo from my PC. This is nice because it means I can also archive VHS tapes, camcorder tapes, etc. w/no extra work.

I do have a TV card in my PC so I can do this, but using the DVD recorder is easier.

My motto is: buy what you WILL use and not what you CAN use.

I've bought lots of things that CAN do a lot, but in reality I don't use all the extra features. Not in all cases, but in this case, I say pass on the bells and whistles.

Again, there are models w/ all types of features, but if you buy one that is a DVR, DVD recorder, VCR, TV tuner all in one and one part breaks, it's all broke.

Realize Something About Technology

Remember - this is new technology and will only get better and cheaper. If you buy the top of the line today, it's going to be out of date and/or cheap tomorrow. Test the waters w/ a 'good' model and upgrade when the time is right.

Editing Your Recordings

Chances are - you won't. It's a pain for the most part and usually requires DVD-RAM or DVD-RW discs to do it and they're more expensive. If you have a lot of free time for this, you're a rare person.

I was looking for this type of solution in getting ready for having a baby and I knew I wasn't going to be sifting through and editing hours of video.

If you're really interested in editing, look in to PC options. Pinnacle, ArcSoft, Adobe, etc. - they have good solutions for that.

DVD+R, DVD-R, DVD-RAM, DVD-RW

DVD+R and DVD-R are like VHS and Beta: they're both ok right now, but eventually we'll probably land on one or the other. It seems to be leaning towards DVD-R which tend to be less expensive also.

Many recorders and players do both, but cost more. I say save some money, pick one (probably DVD-R) and move on. If you pick the wrong one, chances are in a couple years you'll be buying a new one anyway. Moreover, you'll probably be able to get a cheap one w/ a built in converter or two trays to duplicate one to the other.

DVD-RAM and DVD-RW are the rewritable types. They're more expensive and for my purposes aren't worth worrying about.

My Recommendation

I got the Panasonic DMR-E55K:

It records to DVD-R like a VCR. I don't use it to record live TV so I don't use VCR+, but it has it. Also, it has TimeSlip which lets you watch something while it's recording (start recording "24" at 8pm and start watching it from the beginning at 8:20 to speed thru commercials like a TiVo). Again, I don't use this, but it has it.

Plain and simple, it records my TiVo, camcorder, digital camera (RCA cable output), VCR, etc. to DVD - that's what I want it to do and that's what it does. It's easy, creates a good menu w/ thumbnails and my chosen titles, it's a name brand w/ good reviews and was fairly cheap (there was a rebate at the time).

Also, it plays CDs and mp3 CDs w/ a good interface so not only does it replace a CD player, but since you can put so many songs on one CD, it replaces a CD changer.

An interesting trick: If you have a digital camera w/ RCA cable output, you can hook it directly into the dvd recorder and create a quick slide-show dvd. Many cameras even have a slide show function built in! You can use the sound from a music channel, CD, etc.

Thinking about a mini DVD camcorder? You're not alone, it's a rapidly growing
sector of the camcorder market, with Hitachi, Sony and Panasonic all making more
than one mini dvd camcorder.

These camcorders differ from regular digital video cameras in one important way - they record video onto mini DVD discs, rather than DV tape. This has a number of advantages. DVD discs are more robust than tape and won't get chewed up in the camera. Although this is thankfully a rare occurrence, it scares me every time I hear a strange noise coming from my camcorder, so it's worth bearing in mind.

The second advantage is that DVD discs are random access, compared to tape on
which everything is recorded sequentially. This means that there's no need to
rewind and fast forward to find the clip you're after, just select it from the menu.
Some cameras even allow you to perform basic editing functions on-camera. An
additional side-benefit is that a mini DVD camcorder doesn't have tape heads to get
worn or dirty, as happens in regular mini DV cameras.

And thirdly, you can easily watch your home movies by removing the DVD from the
camera and playing it in practically any DVD player.

However, there are negative factors too. The most significant one is that video is encoded as MPEG-2 on a mini DVD camcorder, as opposed to the DV format. This means that it needs specialist software to edit - you can't just use your regular video editing program (unless it specifically supports MPEG-2). And if you're a Mac user you're out of luck, as there are no MPEG-2 editing applications for the Mac.

Also, mini DVD camcorders tend to cost more than similarly specified mini DV
cameras. And the media is also more expensive. However, if you don't intend
editing your movies and don't mind the extra cost, a mini dvd camcorder does offer
extraordinary convenience.

Picking your way through the ton of information available on recordable DVD
formats can be a nightmare. To help you out, we’ve done our best to distill it into
this summary.

There are five recordable versions of DVD: DVD-R for General, DVD-R for Authoring, DVD-RAM, DVD-RW, and DVD+RW. None of the formats is fully compatible with the others, although there are drives which will read, and in some cases write to, more than one format.

DVD-R for General and DVD-R for Authoring are essentially DVD versions of CD-R, and DVD-RW is a DVD version of CD-RW. All three formats can be read in standard DVD-ROM drives and in most DVD video players. The difference between DVD-R for General and DVD-R for Authoring is that DVD-R for General is a format intended for widespread consumer use and doesn’t support ‘professional’ features such as piracy protection or duplication in mass duplicators. The Pioneer DVD-RW drive, which is the most popular PC device for writing to DVD, uses the DVD-R for General format. And as is the case with CD, DVD-RW is essentially the same as DVD-R except that it can be erased and written to again and again.

DVD-RAM is slightly different as it is a sector based disc which mounts on the
desktop of a PC when inserted into a drive. Files can then be copied to it in the same
way as any other mounted media. Some single-sided DVD-RAM discs can be
removed from their caddy and inserted in a DVD-ROM drive which will then be able
to read the content of the disc.

There are DVD video recorders which use the DVD-RAM format. This enables them
to pull off clever tricks like timeshifting – where you can watch the beginning of a
programme you have recorded while you are still recording the end on the same
disc.

DVD+RW is the newest format and not supported by the DVD Forum, the body
which sets the standards for DVD. However, it is supported by some of the biggest
electronics and computer manufacturers, and is therefore likely to stick around.

It is also the format used by Philips in its DVD video recorders. Despite not being authorised by the DVD Forum, DVD+RW is claimed by its supporters to be compatible with more DVD video players than DVD-R, and DVD+RW writers are found in PCs from quite a few manufacturers.

Hard Drives: ATA versus SATA

The performance of computer systems has been steadily increasing as faster processors, memory, and video cards are continuously being developed. The one key component that is often neglected when looking at improving the performance of a computer system is the hard drive. Hard drive manufacturers have been constantly evolving the basic hard drive used in modern computer systems for the last 25 years, and the last few years have seen some exciting developments from faster spindle speeds, larger caches, better reliability, and increased data transmission speeds.

The drive type used most in consumer grade computers is the hearty ATA type drive (commonly called an IDE drive). The ATA standard dates back to 1986 and is based on a 16-bit parallel interface that has undergone many evolutions since its introduction to increase the speed and size of the drives it can support. The latest standard is ATA-7 (first introduced in 2001 by the T13 Technical Committee, the group responsible for the ATA standard), which supports data transfer rates up to 133MB/sec. This is expected to be the last update to the parallel ATA standard.

As long ago as 2000 it was seen that the parallel ATA standard was maxing out its limitations as to what it could handle. With data rates hitting the 133MB/sec mark on a parallel cable, you are inviting all sorts of problems because of signal timing, EMI (electromagnetic interference) and other data integrity issues; thus industry leaders got together and came up with a new standard known as Serial ATA (SATA). SATA has only been around a few years, but is destined to become “the standard” due to several benefits to be addressed in this Tech Tip.

The two technologies that we will be looking at are:
ATA (Advanced Technology Attachment) – a 16-bit parallel interface used for controlling computer drives. Introduced in 1986, it has undergone many evolutions in the last 18+ years, with the latest version being called ATA-7. Wherever an item is referred to as being an ATA device, it is commonly a Parallel ATA device. ATA devices are also commonly called IDE, EIDE, Ultra-ATA, Ultra-DMA, ATAPI, PATA, etc. (each of these acronyms actually does refer to a very specific item, but they are commonly interchanged).
SATA (Serial Advanced Technology Attachment) – a 1-bit serial evolution of the Parallel ATA physical storage interface.

Basic Features & Connections

SATA drives are easy to distinguish from their ATA cousins by the different data and power connections found on the back of the drives. A side-by-side comparison of the two interfaces can be seen in this PDF from Maxtor, and the following covers many of the differences…

Standard ATA drives, such as this 200GB Western Digital model, have a somewhat bulky, two-inch-wide ribbon cable with a 40-pin data connection and receive the 5V necessary to power them from the familiar 4-pin connection. The basic data cables for these drives have looked the same for years. A change was made with the introduction of the ATA-5 standard to improve signal quality by using an 80-wire cable with the 40-pin connector (these are commonly called 40-pin/80-wire cables). To improve airflow within the computer system, some manufacturers resorted to literally folding over the ribbon cable and taping it into that position. Another recent physical change came with the advent of rounded cables. The performance of the rounded cables is equal to that of the flat ribbon, but many prefer the improved system airflow, ease of wire management, and cooler appearance that come with them.

SATA drives, such as this 120GB Western Digital model, have a half-inch-wide, 7-pin “blade and beam” data connection, which results in a much thinner and easier to manage data cable. These cables take the convenience of the ATA rounded cables to the next level by being even narrower, more flexible, and capable of being longer without fear of data loss. SATA cables have a maximum length of 1 meter (39.37 inches), which is much greater than the recommended 18-inch cable for ATA drives. The reduced footprint of SATA data connections frees up space on motherboards, potentially allowing for more convenient layouts and room for more onboard features!

A 15-pin power connection delivers the necessary power to SATA drives. Fifteen pins for a SATA device sounds like it would require a much larger power cable than a 4-pin ATA device, but in reality the two power connectors are just about the same height. For the time being, many SATA drives also come with a legacy 4-pin power connector for convenience.

Many modern motherboards, such as this Chaintech motherboard, come with SATA drive connections onboard (many also including the ATA connectors as well for legacy drive compatibility), and new power supplies, such as this Ultra X-Connect, generally feature a few of the necessary 15-pin power connections, making it easy to use these drives on new systems. Older systems can easily be upgraded to support SATA drives by use of adapters, such as this PCI slot SATA controller and this 4-pin to 15-pin SATA power adapter.

Optical drives are also becoming more readily available with SATA connections. Drives such as the Plextor PX-712SA take advantage of the new interface, although the performance will not be any greater than a comparable optical drive with an ATA connection.

Performance

In addition to being more convenient to install and drawing less power, SATA drives have performance benefits that really set them apart from ATA drives.

The most interesting performance feature of SATA is the maximum bandwidth possible. As we have noted, the evolution of ATA drives has seen the data transfer rate reach its maximum at 133 MB/second, where the current SATA standard provides data transfers of up to 150 MB/second. The overall performance increase of SATA over ATA can currently be expected to be up to 5% (according to Seagate), but improvements in SATA technology will surely improve on that.

The future of SATA holds great things for those wanting even more speed, as drives with 300 MB/second transfer rates (SATA II) will be readily available in 2005, and by 2008 speeds of up to 600 MB/second can be expected. Those speeds are incredible, and are hard to imagine at this point.
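
To put those interface ceilings in perspective, here is a small illustrative Python sketch that works out how long a hypothetical 10GB transfer would take at each quoted maximum; in practice the drive mechanics, not the interface, are usually the bottleneck, so real transfers are slower:

    # Theoretical interface ceilings quoted above, in MB/sec.
    INTERFACES = {
        "ATA-7 (Ultra ATA/133)": 133,
        "SATA (1.5Gb/s)": 150,
        "SATA II (3Gb/s)": 300,
        "Future SATA (600MB/s)": 600,
    }

    def transfer_time(size_mb, rate_mb_per_sec):
        """Return seconds needed to move size_mb at the given rate."""
        return size_mb / rate_mb_per_sec

    size_mb = 10 * 1024  # a hypothetical 10GB transfer
    for name, rate in INTERFACES.items():
        print(f"{name:24s} {transfer_time(size_mb, rate):6.1f} s")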

Another performance benefit found on SATA drives is their built-in hot-swap capability. SATA drives can be brought online and offline without shutting down the computer system, providing a serious benefit to those who can’t afford downtime, or who want to move drives in and out of operation quickly. The higher number of wires in the power connection is partially explained by this, as six of the fifteen wires are dedicated to supporting the hot-swap feature.

Price

Comparing ATA drives to SATA drives can be tricky given all of the variables, but in general it is the case that SATA drives will still cost just a bit more than a comparable ATA drive. The gap is closing rapidly though, and as SATA drives gain in popularity and availability a distinct shift in prices can be expected. Considering the benefits of SATA over ATA, the potential difference of a few dollars can easily be justified when considering an upgrade. Computer Geeks currently has a limited selection of SATA drives, but several technical sites, such as The Tech Zone and The Tech Lounge, offer real time price guides to see how comparable drives stack up.

To Wire or Not to Wire

Wireless networks are in vogue, but your installation won’t be successful unless you choose the right type of network and set it up properly. Wired networks require that each computer be connected via a wire to a central location, called a switch or hub. This often involves running cables through walls and ceilings and can present a challenge for anyone.

If the computers in your home or office are all within 500 feet of each other, a wireless network might be for you. A wireless network has no cables. It can connect computers on different floors of a building or even across the street. Aside from the obvious benefit of not having wires, wireless networks are more convenient since the setup, configuration, and reconfiguration can often be done within minutes, without extensive planning.

Wireless networks, however, are not as fast as wired networks. If you play computer games or want to view streaming video or other high-speed multimedia, a wireless network might not have enough capacity. But, if you just want to check e-mail and view web pages, a wireless network is a good choice. To install a wireless network, you need a Wireless Access Point and a wireless network card for each computer. You will need to buy a wireless network card for each desktop computer, although most newer laptops come equipped with one.

Security is not a large concern in a wired network, since someone would have to physically connect to it to break in. With a wireless network, someone in a car parked outside with a laptop could easily connect to your network if you don’t have proper security in place. To prevent this from happening, encrypt your wireless network connections, set a password to access the network, or do both.
Do It Yourself or Call a Professional?

If you decide to use a wired network, consider whether you will install it yourself or hire a professional. If you have a small number of computers that are all situated very close to one another, you may be able to buy pre-assembled network cables and connect them yourself. If you need to wire multiple floors and lay wire through ceilings and walls, you need a professional installation. If you go this route, it is best to begin with a floor plan of your office or home, determine what your current needs are, and consider how the network design can be adapted to future needs. A professional installer should be familiar with EIA/TIA standards, local wiring and electrical codes, and making custom cables. Network cabling professionals are often judged by the neatness of their work, because sloppy cabling is more apt to deteriorate over time, is harder to manage, and poses more of a fire risk.

Wireless and wired networks are not mutually exclusive. Many small offices have a wired network in addition to one or more wireless networks, depending on their needs. Wireless networks are continuing to get faster, more secure, and less expensive. Wired networks will continue to coexist with wireless networks, often in the same homes and offices.

If you are in need of a good wireless router, it isn't necessary to pay an arm and a leg. The truth is, there are cheap wireless routers that do just as good a job as some of their more expensive brethren. It is a misconception that only the most expensive routers will produce the highest-speed connection. Just ask the many people who have used both expensive and cheap wireless routers: they will tell you that they either noticed no difference, or that the cheaper router was actually the better choice.

Choosing cheap wireless routers

When scoping out cheap wireless routers, you have to make sure that the router is compatible with your particular computer. Most routers are compatible with most computers. However, there may be a few restrictions here and there. What is most important is that you determine the type of user you are so you know which one of the many cheap wireless routers is going to serve you the way you need it to.

Determining what kind of user you are includes asking yourself how often you use your computer and what kinds of tasks you perform on it. Some individuals only use their computer to check e-mail, while others use the internet regularly. Heavy users are the ones who are on their computer and using the internet all of the time. These are also the individuals who tend to download a lot of material.

The features you need

Once you have determined the type of user you are, it is time to decide which of the cheap wireless routers is for you. You also need to look at how many computers will be using a single router. When looking at cheap routers, you may find that a starter kit is the way to go if you're a beginner and you don't use the internet often. But if you're a heavy user, or you like to download a lot of material, you may need to look into cheap wireless routers that are designed for heavy usage. It is essential to have a router that can handle the load; these are usually the individuals who need a very reliable, high-speed connection.

When it's all said and done

By exploring the various cheap wireless routers to see what options exist for you, you will end up with the type of wireless connection that you really need. There is no need to overdo it, but you don't want the connection to be inadequate either. This is also the most cost-effective way to shop among the various cheap wireless routers available.


Here we have it folks, an exclusive on the iPhone's availability in India. When, how much, what features, software -- everything you ever wanted to know about the iPhone 3G in India! We know you want to get to the meat of the matter, so without further ado, here's what you need to know:

> iPhone 3G Availability: Vodafone to have a 15 day launch advantage (available August '08 through Vodafone, and late Aug/September '08 through Airtel)
> Model: Initially, the 8GB version only

> Price: Rs. 11,500 to Rs. 12,000 (note: U.S. price of the 8GB iPhone 3G is $199)

> Applications: App Store will be available

> App prices: Application prices may see a revision to suit Indian market

> 3G: No fixed date for 3G availability; expected sometime later this year

> GPS: Present on the Indian version as well

> Grey-market availability: Almost nowhere to be seen. One store expects to sell it next week at Rs. 50,000

Scroll down for the rest of the story. While we all await the release of the iPhone 3G in India, some questions are pressing. In many ways, the iPhone is a platform - a means and a tool to unlock far greater features through new and sundry software. Until recently, the only means of installing software not produced by Apple was through illegal avenues. Now though, with the launch of the App Store, Apple has brought third-party software under a legal umbrella. Thus, the first question is this: will we, in India, get access to the same App Store that the people in the U.S. and other western countries enjoy? Further, would pricing be in Rupees or would we need to pay in Dollars? What about 3G - a feature so central to the iPhone that it's part of its name, its identity - will the iPhone be released along with 3G coverage?

To put it simply, will the iPhone offer the same value proposition in India, as it does abroad? We try and answer some of these burning questions here in this TechTree exclusive.

We contacted the necessary parties at Apple, Vodafone and Airtel - the latter two being the iPhone's official carriers in India - to get the required answers. While Apple and Vodafone were tight-lipped about release details, we struck gold with Airtel. Our source at Airtel, who wishes to remain anonymous, gave us valuable insight. Most of it, good news.

According to our source, the iPhone 3G will bring along most of the software goodies that are available in the West. In fact, the source went on to add, some of the features will be customized to suit Indian tariffs. The source confirmed that we would also be able to purchase software from the App Store, with the possibility of a revised price for the Indian consumer, quickly adding that the price revision is not set in stone. Let's move on to the meat of the matter - when will the iPhone be available in India?

When I last spoke to our source from Airtel, I was told that the iPhone 3G launch was scheduled for the 25th of August, with the caveat that the date may be pushed to early September for supply reasons. Apparently, the number of pre-orders for the iPhone has exceeded the initial product allotment by Apple.

Interestingly, our source from Airtel confirmed that Vodafone will be the first to introduce the iPhone 3G in India, with a 15-day launch advantage over Airtel, thanks to Vodafone's global tie-ups. If we are to believe Airtel, Vodafone should introduce the iPhone 3G by end of August at least. That's only a month from now!

So does this mean Vodafone will get to gorge on the subscriptions of the maximum number of iPhone-hungry Indians due to the company's first-in-line advantage? Is the iPhone 3G likely to become a "Vodafone product"?

One would obviously like to think so, but Airtel differs on this aspect. Our source assertively stated that Vodafone hasn't been able to create the kind of impact on prospective iPhone buyers that Airtel has with its aggressive marketing, adding, "We will make sure we don't downplay them (Vodafone), but we will create an impact of our own."

There has been a lot of speculation over the price of the iPhone. Initially, only the 8GB version of the iPhone 3G will be officially launched in India. The handset is expected to cost between Rs. 11,500 and Rs. 12,000, says our source. Compare this to the $199 price tag of the 8GB version in the U.S., and it's obvious that India is once again going to feel the import-duty pinch. Moreover, this price does not include the data plan costs that a user will have to bear every month, the details of which will only be out by the 15th of August, our source indicated. There are still deliberations being made on the specifics of the plan, we were told.

If you want the iPhone 3G right away, you need to look to the grey market. The scene there isn't all rosy though, with rumors flying of Airtel and Vodafone clamping down on parallel imports. The market reality seemed to lend credence to these rumors, as most vendors we contacted told us that the iPhone (old model or new) was unavailable for purchase. One Mr. Munshi Adnan, owner of a shop called "Gadgets" at Heera Panna, in Mumbai, promises to offer the iPhone 3G next week. He claimed he would have it around July 28th, and quoted a princely sum of Rs. 40,000 to Rs. 50,000! While all of the above is largely good news, the bad news comes in the garb of a key feature: 3G. Nobody seemed willing to place a date on 3G availability in India, the arrival of which is still a rather nebulous 'sometime late this year'. We have a bad feeling about this. Our source did confirm, though, that GPS (a feature through which the iPhone 3G can track your location) would be present in the Indian avatar of the Apple iPhone 3G.

So there you have it! Mostly good news for those of us who have waited for the release date of this product.

What are your thoughts on the price, the model, the software availability, the lack of clarity on 3G? Let us know by typing out your comment below.

The workhorse of the computer is the hard disk, so it is important to know how it works. The files that we save are magnetically recorded onto platters housed inside the hard disk. In most hard drives the platters are mounted on a spindle, which spins them at up to 15,000 revolutions per minute. Each platter is two-sided, and the read/write head is mounted on an arm that carries a slider.

Because each platter is two-sided, there is a head for each surface, and the heads move across the platter to read and write data. Data density on the platter is measured in tracks per inch. A track is one concentric ring around the disk, and each track is further divided into sectors of approximately 512 bytes. Data saved on the disk is referenced by its track and sector.
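
As a worked example of how tracks and 512-byte sectors add up to capacity, here is a short Python sketch using the classic cylinder/head/sector arithmetic (the geometry figures are illustrative; modern drives report a logical geometry that no longer matches the physical platters):

    BYTES_PER_SECTOR = 512  # the approximate sector size mentioned above

    def disk_capacity(cylinders, heads, sectors_per_track):
        """Classic cylinder/head/sector capacity calculation."""
        return cylinders * heads * sectors_per_track * BYTES_PER_SECTOR

    # Hypothetical geometry, purely for illustration.
    capacity = disk_capacity(cylinders=16383, heads=16, sectors_per_track=63)
    print(f"{capacity:,} bytes")   # 8,455,200,768 bytes, roughly 8.4GB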

The process of reading data can be made faster by defragmenting the hard disk periodically. This can be done with the Windows Disk Defragmenter, which reorganizes scattered data on the hard drive. Once defragmentation is done, files on the hard disk load faster and more efficiently. The defragmentation process also shifts the most frequently used files toward the beginning of the disk, so the drive can load them immediately when they are accessed.
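
Windows also ships a command-line defragmenter (defrag.exe) that can be scripted instead of using the graphical tool; a minimal Python sketch (the drive letter is just an example, and the command needs to be run with administrator rights):

    import subprocess

    # Invoke the built-in Windows defragmenter on drive C:.
    # Requires administrator rights; the drive letter is only an example.
    subprocess.run(["defrag", "C:"], check=True)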

SATA (Serial ATA) hard drives are the next generation of ATA (Advanced Technology Attachment) hard drives. HP's SATA hard drives, for example, are designed to offer competitive prices for entry-level servers and affordable external storage for low-workload environments.

The Disk Cleanup utility

The Windows Disk Cleanup application sorts through the disk and deletes unused and temporary files. In this way space can be made available on the disk which speeds up the operations.

The ScanDisk utility

In normal daily use, files are regularly read from and written to the disk, often in small chunks. The ScanDisk utility runs when the computer restarts after a crash, or after being turned off without shutting down properly.

ScanDisk scans the disk to detect errors on the hard drive. If it finds any, it marks the cluster of sectors containing the error as unusable, so that no data can be written to or read from that portion of the disk.
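
Conceptually, that "mark as unusable" step behaves like a bad-cluster list that the allocator consults before writing new data; here is a toy Python sketch of the idea (purely illustrative, not how any real file system stores its map):

    # Toy model of a bad-cluster map: clusters flagged here are never
    # handed out again, so no data is written to (or read from) them.
    bad_clusters = set()

    def mark_bad(cluster):
        bad_clusters.add(cluster)

    def allocate(free_clusters):
        """Return the first free cluster that is not marked bad."""
        for cluster in free_clusters:
            if cluster not in bad_clusters:
                return cluster
        raise RuntimeError("no usable clusters left")

    mark_bad(17)                   # ScanDisk found an error in cluster 17
    print(allocate([17, 18, 19]))  # -> 18, cluster 17 is skipped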

ScanDisk should be run every two or three months. It can automatically repair some of the errors it detects; more severe errors can sometimes be dealt with by reformatting the drive. Bumps or scratches are known as hard errors: physical damage to the disk that cannot be repaired. If the number of hard errors keeps increasing, the hard drive needs to be replaced. With simple maintenance, a hard drive can keep running smoothly well past the point at which it becomes obsolete.

If you follow the instructions listed in this article, you can install a printer without a CD. Many people have gone to install a printer only to find that they could not locate the CD. This used to be a difficult and frustrating task, but it is no longer that way.

1. To begin, place the computer and the printer next to each other and make sure that the cables are connected to the printer and the computer but not to each other. Make sure that neither the printer nor the computer is on.

2. Next, turn on the computer. Do not do anything else until the computer has fully booted.

3. Now, plug the printer into the wall and then connect the USB cord that is attached to the printer into the correct port on your computer. Now turn the printer on.

4. Click the "Start" button, typically located in the lower left corner of your computer, and click on the "Control Panel" button.

5. Once you are in the control panel you need to find the Printer button, which is either going to be found by clicking on the "Printers and Hardware" button or clicking the "Printer" button under the "Hardware and Sound" category.

6. Once you are in the Printer menu, click on the "Add printer" button. A message saying "Welcome to the add printer wizard" will appear. When that message appears, hit the "Next" button.

7. When the next page appears you will need to click the "Automatically detect and install plug and play printer" button on the add printer wizard page. Continue by clicking the "Next" button.

8. If the computer can automatically detect the printer, then it will begin to install that printer. However, if the printer is not detected, a message saying it was unable to detect the printer will pop up, and you will have to install the printer manually. Again, click the "Next" button.

9. Now you will need to select the printer port. Choose the recommended port because it will give you the best set up options. Click the "Next" button.

10. You will now need to choose the manufacturer of the printer and the exact printer. Click the "Next" button.

11. You will have the option to either confirm the name that appears in the box or type a new one, and then set the printer as the default printer if you choose to. Click the "Next" button.

12. You will need to print a test page to ensure that the printer is set up and working. Once the test page is printed, click the "Next" button.

13. You will now receive a successful completion of installation of printer message and all you have to do is click "Finish" and you have installed a printer without a CD.

It used to be extremely difficult to install a printer without a CD, but with the steps listed above you should be able to do it with no problems at all.

The hard disk drive, commonly known as the hard drive, is an integral device of a computer system. It is used to store encoded data on rotating magnetic platters; other storage devices, such as floppy disk drives and tape drives, serve a similar purpose but work quite differently. Computer users often notice hard drive problems but, due to incomplete technical information, misjudge them and end up creating more computer-related problems. So it is very important to understand whether a slight change in the functioning of the hard disk drive is actually a problem, and if it is, what the straightforward solution is.

Early hard drives sometimes used removable media, but these days all drives come as fixed, sealed units. They are used to store data on a large scale, and their storage is considered reliable and self-contained. They are sealed so that dust cannot get inside and affect the stored data, and designed so that only a fine passage of filtered air circulates through them. They operate at high speed and transfer data much faster than a floppy disk drive.

Hard disk drives do develop problems sometimes, but it is important to judge the severity of the problem before starting a repair. Failure to boot is one of the main hard drive problems; in such cases, the suspected faulty parts of the drive must be replaced with high-quality parts. Sometimes the fault lies elsewhere in the system and is mistaken for a drive problem, so a user should first connect the drive to another system to isolate the problematic area.

Some of the symptoms of hard drive problems are discussed next. These include poor or very slow performance, long reboot times, and frequent lock-ups, all of which suggest that the drive is not functioning properly.

These symptoms can also be caused by a virus on the drive. So, before replacing your drive for the symptoms mentioned above, it is wise to scan it with the latest virus definitions and remove any infection found. When you install new software on your system, your hard drive may display some problems or changes in behaviour. This is normal and should not be taken as a problem if it corrects itself within two or three days; storage devices can take time to settle in with certain new software. Hard drive failures can also stem from electrical, mechanical, or logical faults. The best way to minimize them is to use a high-quality drive and parts in your computer.

In the entire computer system, every little part has a major role to play. You probably have not given much importance to the CPU fan retainer clip before, but after reading this article, you will. We will discuss the importance of quality clips in the computer.

Many people complain about their system overheating, as this affects the way the whole system operates. That is why the CPU, the Central Processing Unit of the computer, has a fan and heatsink held in place by retainer clips, which help minimize the effect of heat on the entire computer. But what if the heat is so intense that it melts the CPU fan retainer clips? That will worsen the situation and damage your CPU as well. This is why clips are considered very important for the safety of your system, and why they are available from a number of internationally known computer companies.

Earlier, most CPU fan retainer clips were made of plastic, but these provide protection only up to a point. Plastic is breakable and, in certain conditions such as intense heat, it can be damaged. Now most companies, including Dell and Apple, manufacture metal or alloy CPU fan retainer clips. These clips are far less prone to heat damage, so they last much longer than the earlier plastic ones. CPU fans are made in a number of designs, which in turn influence the design of the clips, as does the socket they mount onto. These clips are very cheap and can be purchased by the bunch in many places.

Many of you may not know the exact design of clip you need, so it is better to shop for them online. Start by finding a reliable company; all leading manufacturers carry this product and offer good discounts on the clips required for their own models. You can easily contact your computer's manufacturer and ask them to provide the clips for your machine. Give them the model and design number of the system so that they can supply the right clips.

The single most important piece of hardware that you will buy is the motherboard - the very core of your PC. The processor plugs into it, drives connect to it with cables, expansion cards live in special slots and everything else, from the mouse to the printer, is ultimately connected to and controlled by the motherboard. If you buy a PC from a shop, chances are you'll never think about or even see the motherboard; but when you build a system from scratch, it must be your primary consideration. Everything else follows from here.

Form factor: This is the way of describing the motherboard's size and shape, important because it involves industry-wide standards and ties in with the computer case and power supply. Form factors have evolved through the years, culminating since 1995 in a popular and flexible standard known as ATX. Not just one ATX standard, of course: there are MiniATX, MicroATX and FlexATX motherboards out there, all progressively slimmed-down versions of full-size ATX. The upside of a smaller motherboard is that you can use a smaller case and reduce the overall dimensions of your computer; the downside is a corresponding reduction in expandability. A full-sized ATX motherboard can have up to seven expansion slots while a MicroATX motherboard is limited to four.

One technical benefit of ATX over the earlier BabyAT form factor from which it directly evolved is that full-length expansion cards can now be fitted in all slots; previously, the location of the processor and memory on the motherboard meant that some slots could only take stumpy (not a technical term) cards. Another is the use of a double-height input/output panel that lets motherboard manufacturers build in more integrated features. All in all, it's a definite improvement.

But from your point of view, the main attraction has to be the guarantee that any ATX motherboard, including the smaller versions, will fit inside any ATX computer case. That's the beauty of standards.

Chipset: The real meat of a motherboard resides in its chipset: a collection of microchips that together control all the major functions. Without a chipset, a motherboard would be lifeless; with a duff chipset, it may be inadequate for your needs. Indeed, as one motherboard manufacturer explained it to us, the chipset is the motherboard: don't ask what this or that motherboard can do - ask instead what chipset it uses and there you'll find your answer.

So what does a chipset do, precisely? Well, at one level it controls the flow of data between motherboard components through a series of interfaces. Each interface, or channel, is called a bus. The most important buses are:

FSB (Front Side Bus) The interface between the Northbridge component of the chipset and the processor.

Memory bus The interface between the chipset and RAM.

AGP (Accelerated Graphics Port) The interface between the chipset and the AGP port. This is gradually disappearing from motherboards as more and more video cards are designed for the PCI Express slot.

PCI (Peripheral Component Interconnect) bus The interface between the chipset and PCI expansion slots. Pretty much any expansion card can be installed here, including sound cards, network cards and TV tuners. The exception is a video card, as these are, or were, designed for the higher bandwidth AGP interface. Like AGP, PCI is gradually giving way to PCI Express.

PCI Express bus The interface between the chipset and PCI Express expansion slots. There may be two separate buses determined by the bandwidth of the slots. For instance, the motherboard may have a 16-lane (x16) PCI Express slot for the video card and one or more slower slots for standard expansion cards.

IDE (Integrated Drive Electronics) bus The interface between the chipset and hard & CD/DVD drives.

SATA (Serial Advanced Technology Attachment) bus An alternative interface between the chipset and hard & CD/DVD drives which will eventually completely replace the IDE bus.

And then there are buses controlling the floppy disk drive, parallel and serial ports, USB and FireWire, integrated audio and more.

Bus bandwidths

Not all buses are equal. Far from it, in fact: they operate at different speeds and have different 'widths'. For example, the basic single-speed (1x) AGP specification has a clock speed of 66.6MHz (usually expressed as 66MHz). This means that over 66 million units of data can pass between the video card and the chipset through the bus per second. However, the AGP bus transfers 32 bits of data (that's 32 individual 1s and 0s) with every clock cycle, so the true measure of the bus is not its speed alone but rather the overall rate at which data is transferred. This is known as the bandwidth of a bus. In this case, 32 bits pass through the bus 66 million times per second. This equates to a bandwidth of 266MB/sec.

Just to be clear, using round figures, here's the sum:

66,600,000 clock cycles x 32 bits = 2,131,200,000 bits/sec
There are 8 bits in a byte (B), so this equals 266,400,000 B/sec
There are 1,000 bytes in a kilobyte (KB), so this equals 266,400 KB/sec
There are 1,000 kilobytes in a megabyte (MB), so this equals 266 MB/sec

Looked at another way, the AGP bus transfers sufficient data to fill a recordable CD every three seconds.

It's also possible to run the AGP bus up to eight times faster, which boosts the bandwidth to over 2 gigabytes per sec. This is the kind of speed you need for playing games. By contrast, the PCI bus runs at only 133MB/sec. This is fine for many purposes but not for three-dimensional video.
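
If you want to double-check the arithmetic, the same bandwidth sums can be reproduced in a few lines of Python; the clock speeds and widths below are the nominal figures behind the numbers quoted above (AGP at 66.6MHz x 32 bits, PCI at roughly 33MHz x 32 bits):

    def bandwidth_mb_per_sec(clock_hz, width_bits, transfers_per_clock=1):
        """Bus bandwidth = clock rate x bus width x transfers per clock cycle."""
        return clock_hz * width_bits * transfers_per_clock / 8 / 1_000_000

    print(bandwidth_mb_per_sec(66_600_000, 32))     # AGP 1x -> ~266 MB/sec
    print(bandwidth_mb_per_sec(66_600_000, 32, 8))  # AGP 8x -> ~2,131 MB/sec (over 2GB/sec)
    print(bandwidth_mb_per_sec(33_300_000, 32))     # PCI    -> ~133 MB/sec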

Chipset architecture

We needn't linger on the physical design of chipsets except to comment briefly on the terminology you are likely to encounter:

Northbridge The primary chip in a chipset, it typically controls the processor, memory and video buses.

Southbridge A second chip that typically incorporates the PCI, IDE/SATA and USB buses.

Processor and memory support

The two most important questions with any motherboard, and hence computer, are: which processor family and what kind of memory does it work with?

For instance, if you decide that you want to build a Pentium 4-based system, you'll need a motherboard with either a Socket 478 or a Socket 775 to house it; and if you want an Athlon-based system, you'll be looking for a Socket 754, 939 or 940. So far, so confusing. It gets all the more so when you factor in the many possible permutations of memory support, including bus speed, number of slots on the motherboard and whether it offers single-channel or dual-channel performance. We'll cover all of this in due course.

Need a further complication? Intel makes its own chipsets, which means it's easy to compare like for like, but AMD largely relies on third-party manufacturers to come up with compatible chipsets for its processors. There's nothing wrong with AMD's stance on this - and indeed it opens the market to chipset manufacturers which would otherwise be squeezed out by Intel's dominance - but it does make motherboard comparisons slightly trickier.

Have you ever wondered why your battery stops working? All batteries fail at one point or another and, more importantly, they fail for different reasons. Two identical batteries from the same manufacturing batch, with the same voltage, capacity, and chemistry, will stop working at different times. Why? To understand why batteries fail, I will walk through the steps of a battery failure mode and effects analysis to discover the modes of battery failure and their effects.

A battery failure mode and effects analysis is a procedure for identifying and understanding potential failure modes in a battery system. It contains four main steps or phases:

* Battery Mode Pre-work
* Battery Failure Severity
* Battery Failure Occurrence
* Battery Failure Detection

Battery Mode Pre-work

The Battery Mode Pre-work is an essential preliminary component of a battery failure mode and effects analysis, and oftentimes the one component that gets the least attention. It is a way of "starting smart" in the identification of battery failures. As an example, battery failures are often caused by shared interfaces. If an engineer, focused on a single facet of the battery's micro or macro system, glosses over the effectiveness and efficiency of interfacing components when designing, compiling and assembling a battery's system, then the failure rate and severity could increase dramatically regardless of how "correct" the engineer's portion of the system is. A good case study on shared interface failures is the battery's interface with the device's operating system. Inefficient operating system software can under- or over-utilize the battery's capacity and voltage and thus degrade the battery faster than normal. At the consumer level, people would just say the battery is bad or "sucks" when in fact the device's software is the culprit behind the faster-than-normal degradation.

Thus careful attention to a battery's mode pre-work is well advised. Battery mode pre-work includes a complete and detailed description of the battery's system, the battery's function, the battery's intended uses, and the probable unintended uses.

In part 2 of Battery Failure Mode and Effects Analysis I will address Battery Failure Severity and Battery Failure Occurrence.
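
For readers who want to jump ahead: in conventional failure mode and effects analysis, the severity, occurrence and detection phases each produce a rating, typically on a 1-10 scale, and the three ratings are multiplied into a risk priority number used to rank failure modes. A minimal illustrative Python sketch, assuming those conventional scales (the failure mode and ratings below are made up):

    from dataclasses import dataclass

    @dataclass
    class BatteryFailureMode:
        description: str
        severity: int     # 1 (negligible) .. 10 (catastrophic)
        occurrence: int   # 1 (rare) .. 10 (near certain)
        detection: int    # 1 (easily caught) .. 10 (undetectable)

        def risk_priority_number(self):
            """Conventional FMEA risk priority number: S x O x D."""
            return self.severity * self.occurrence * self.detection

    # Hypothetical example: device software over-discharges the pack.
    mode = BatteryFailureMode("over-discharge via device software", 7, 4, 6)
    print(mode.risk_priority_number())  # -> 168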

Printers have no doubt made our lives very simple. Almost all the printed materials you see around you are the gift of printers. Technological advancements in the field have made printouts not only more professional but also more pocket-friendly. One of the most commonly heard names in the printer industry is the inkjet printer. Inkjet printers are versatile and require little maintenance, and their ability to stand up to shared, heavy use is what makes them an ideal candidate for many offices and organizations. If you are using inkjet printers, or planning to install them in your office or at home, the one thing you should know about is inkjet cartridges. Some people prefer to refill their cartridges to cut down on cost; others prefer using branded cartridges. Both have reasons to support their positions, and here we will discuss some of them.

The first and most obvious reason that most printer users cite for refilling their old cartridges is cost. Refilling an old cartridge can be a lot cheaper than buying a brand new cartridge for your printer.

The other group says that the use of refilled cartridges affects the performance of their inkjet printers, so they prefer new, original cartridges to refilled ones. These users usually go for an inkjet cartridge of the same brand as their printer. For example, if they are using an HP inkjet printer, they will buy a new HP inkjet cartridge when their old cartridge runs out.

Besides being more economical, refilling your old printer cartridges is also a friendlier option for the environment, because the plastic in a cartridge is thrown away when you buy a new one. This is the reason some people give for preferring refilled cartridges.

Professionals generally don't have a choice, for they cannot compromise on print quality. The print quality you get from a refilled cartridge is often lower than that obtained with a new inkjet cartridge, and you cannot present a low-quality printed document to a client. In such cases, it is advisable to replace your old cartridge with a new one, preferably from the same manufacturer as your printer. If you want to use a cartridge from a different manufacturer, opt for a compatible HP inkjet cartridge or a cartridge from some other renowned brand. They can give you the quality you need as a professional.

If you have a photo scanner, you can easily convert images on paper or photographs into digital form. This type of scanner is a common find among people who scan traditional film and slides. The advantages of digitized images are many: you can modify and print them, send them to your friends via e-mail, and upload them to a website. However, there are some measures you should take if you want your photo scanner to deliver hassle-free service. Here are some tips you can benefit from:

* Every time you use the scanner, allow it to warm up for a few minutes. It is also advisable to wipe the scanner glass clean of dust and lint with a piece of soft cloth before every use.
* As with most sensitive electronic devices, you should place the scanner on a level surface. This improves its performance and increases its life.
* Don't expose your scanner to heat or moisture. They tend to damage the sensitive components inside the device.
* When you operate the device, be careful not to cause scratches on the surfaces. Also, you should be gentle on the buttons and avoid using pens or other such objects to press them. Buttons are meant to be pressed by fingers.
* The glass in a scanner should be kept clean and scratch-free. Do not touch it with hands or any such thing from which it can take up grease or oil. The presence of any smear on the glass of the scanner can lead to unwanted cloudiness in the scanned images. If you find any visible smear on the glass, you should use a soft cotton cloth to wipe it off. If a dry cloth doesn't work, try one dipped in a good-quality cleaning solution.
* Keep the photo scanner in a clean dust-free place.
* Whether you have an HP photo scanner or any other scanner, you can find a list of instructions about operating and maintaining it in the manual provided by the manufacturer. Read and understand it well.
* It is best to be the only person using your scanner. However, if you must share it with other people, as with a scanner at your office, you should tell them about the proper handling of the scanning device.

In a nutshell, computer networking is basically a cluster of computers linked together so that they can transmit data and share resources. These connections do not require that the computers run the same operating system (OS), or even that similar types of gadgets be used. A perfect example is a personal data assistant (PDA): one may connect a PDA to a laptop over a network. Even kitchen appliances like internet-enabled refrigerators use networking for their surfing functions.

How Is Networking Done?

There are various methods of linking computers and other gadgets into a network. Among the plethora of ways and means, the most common method is the use of cables. The market provides an assortment of cables, from copper wire to fiber optics, each with its advantages and disadvantages.

Copper Wire: Unshielded Twisted Pair Cable (UTP)

UTP is one of the most frequently used cables for a local area network (LAN) connection, which essentially links a few computers within a small geographical area (thus the name, LAN). The cable is composed of pairs of unshielded, insulated copper wires twisted together to diminish electrical interference. This type of cable is often chosen for its flexibility, easy maintenance and low cost. The downside: it can, and likely will, suffer from electrical interference.

Fiber Optics

In copper cable, data is exchanged by sending voltages along the wire; in fiber optics, data is transmitted as pulses of light. Where UTP has copper, fiber optics has threadlike strands of glass, or silica. The process goes like this: a laser translates digital signals into pulses of light and conveys them down the glass strands. Fiber optics offers rapid data transfer, though this speed comes at a price, a pocket-burning price. With that said, this kind of connection is typically used by large internet service providers (ISPs) and data centers, not in office or home networks.

Network Topology: Bus Network

Simply put, network topology is the physical layout of the network, and the bus network is the most straightforward of the various topologies. Let us begin with the bus itself: all the machines link up to a single linear transmission channel, the bus.

In operation, when a computer sends data over the bus, all connected machines can see that data, which travels as packets. Each packet carries a piece of information called a packet header, which identifies the recipient, that is, the computer or machine the data is intended for. Each machine inspects the header: if the packet is addressed to that computer, the whole packet is recognized and received; if not, it is simply ignored.
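
Here is a minimal sketch of that header check in Python, using a made-up packet format with just a destination address and a payload (real network frames carry considerably more):

    from dataclasses import dataclass

    @dataclass
    class Packet:
        destination: str   # which machine the data is intended for
        payload: bytes

    class Machine:
        def __init__(self, address):
            self.address = address

        def receive(self, packet):
            """Accept the packet only if the header names this machine."""
            if packet.destination == self.address:
                print(f"{self.address}: accepted {packet.payload!r}")
            # otherwise the packet is for someone else and is silently ignored

    bus = [Machine("A"), Machine("B"), Machine("C")]
    packet = Packet(destination="B", payload=b"hello")
    for machine in bus:        # every machine on the bus sees every packet
        machine.receive(packet)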

The Downside: One at a Time

Unfortunately, the bus can only handle one transmission at any given time; here, two is a crowd. Imagine what happens if two or three computers on the network transmit data simultaneously: you will certainly have network problems if this collision occurs. When two computers do send data concurrently, the first machine to notice the collision transmits a blocking signal onto the bus. This brings the linked computers to a standstill, preventing further data exchange until the collision is resolved.

Network Interface Cards (NIC)

NICs control the 'to and fro', that is, the transmission of packets across the wires bridging the computers in a single network. They also provide each computer's means of communicating with the others.

Hub

A hub is intended for small-scale computer networking. The problem is that it does not sort packets: incoming data is repeated to every port, so it can end up at machines it was never meant for and is visible to everyone on the network. Security-wise, this is not the way to go.

Router

To keep things simple, a router is a network device that forwards data packets between two networks, even ones using different protocols. Yes, it is that blinking box that allows you to connect to the web.
