
Slider phones are hot, in more ways than one. They have also become associated (sometimes unfairly) with higher prices than other phones, thanks to flagship devices like Nokia’s N95.

The 6210 Navigator is part of Nokia’s new series of phones in which GPS is the selling feature. Plenty of other phones have GPS; this one is just blatant about it with the Navigator name. It’s a good-looking phone that looks better in black than in mocha red, the two colors available.

Nokia 6210 Navigator


The build and action of the slider are excellent: very smooth and positive. The back of the phone feels cheap and tacky, a pity since the rest of the phone is quite well built. The front has a shiny plastic coating that we feel is overdone; it makes the Nokia 6210 look a little too loud. The Navigator monogram, a blue four-cornered star on the front of the phone, looks overdone too. The Call Accept, Reject and Menu buttons are large, well laid out and very easy to use. The number keypad, although flat, is large and well spaced, a boon for SMS junkies. All the buttons on the body feel solid and work well.

The screen is crisp and can display up to 16 million colors. The 6210 Navigator has a 3.15-megapixel camera that is good for the odd snapshot, but falls short of some of the other 3.2-megapixel cellphone cameras we have already seen. Music quality is decent, but not as good as some of the other Nokias. A 3.5 mm jack means you can use your own headphones.

The phone interface is fast, thanks to an ARM 11 processor running at 369 MHz, and signal quality is good. We feel a larger battery should have been included.

At $459, the 6210 Navigator is expensive for what it offers.

Specifications at a Glance:
1. CPU: ARM 11, 369 MHz
2. Memory: 64 MB SDRAM
3. Screen size: 2.4″ with 16 million colors
4. Resolution: 240×320 pixels
5. Camera: 3.15-megapixel sensor
6. GPS: built-in GPS navigation

Advanced Micro Devices Inc. yesterday reported a narrower-than-expected loss for its third quarter on increased sales of its microprocessors and graphics chips.

It was AMD's eighth consecutive quarterly loss but a much smaller one than the year before. The loss was $67 million, or 11 cents per share, compared with a loss of $396 million, or 71 cents per share, in the third quarter of 2007. Revenue climbed 14% to $1.78 billion, from $1.56 billion a year earlier.
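The year-over-year figures above can be cross-checked with simple arithmetic. The inputs below come straight from the article; the implied share count is our own derivation, not a number AMD reported:

```python
# Cross-checking the reported figures (inputs are from the article;
# the implied share count is derived, not reported).
net_loss = 67e6              # Q3 2008 net loss, dollars
loss_per_share = 0.11        # reported loss per share
revenue_2008 = 1.78e9
revenue_2007 = 1.56e9

# Revenue growth should match the reported 14%.
growth = (revenue_2008 - revenue_2007) / revenue_2007
print(f"year-over-year revenue growth: {growth:.1%}")  # ~14.1%

# Rough share count implied by the per-share figure.
implied_shares = net_loss / loss_per_share
print(f"implied shares outstanding: {implied_shares / 1e6:.0f} million")
```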

The net loss is based on generally accepted accounting principles. On a non-GAAP basis, excluding a loss of $108 million from discontinued operations, as well as other charges, AMD said it would have reported a profit for the quarter of $80 million.

The revenue and profit figures both came in ahead of analyst forecasts, according to Thomson Reuters.

"AMD had a well-executed third quarter in the context of a challenging environment," Chief Financial Officer Bob Rivet said on a conference call. "We reached our goal of achieving operating profitability."

Revenue from AMD's microprocessor unit climbed 8% year over year to $1.39 billion, while revenue from its graphics business climbed 40% to $385 million. The growth came from AMD's quad-core Barcelona server processor, which had its first full quarter of shipments following delays, and from new Radeon 4000 graphics chips that shipped during the quarter.

The graphics business, which AMD acquired when it bought ATI Technologies Inc. two years ago, turned an operating profit for the first time, Rivet said.

AMD expects servers based on its new Shanghai processor, which uses a more advanced 45-nanometer manufacturing process, to be available in a few weeks, said President and CEO Dirk Meyer. Desktop PCs based on 45-nanometer processors will be available early in the first quarter of next year, he said. The number refers to the dimension of circuits etched on the chips, and the more advanced process should mean faster, less power-hungry products.

AMD announced a plan last week to stem its losses by spinning off its chip-manufacturing business into a separate company. Analysts said the move could help AMD return to profitability by freeing it of the costly burden of building and maintaining its own manufacturing plants. AMD would continue to design and sell its chips but have them manufactured by a third party.

AMD's shares were up 5% ahead of the financial report, closing at $4.12 per share. The stock moved 9% higher after the report was issued, climbing to $4.50 per share in after-hours trading.

Financial results are being closely watched this quarter as the industry tries to weigh the impact of the emerging financial crisis in the U.S. on customers' IT spending. The news so far has been mixed.

T-Mobile USA will become the first company in the world to announce a mobile phone based on Google's Android OS at a New York press conference Sept. 23, the New York Times reports, citing T-Mobile.

The handset was manufactured by Taiwan's High Tech Computer (HTC), the Times said. HTC representatives in Taipei declined to comment on the report.

Several other Web sites are also reporting the Sept. 23 event, including Gizmodo, which is displaying what appears to be an announcement from T-Mobile.

HTC has already said it is developing a mobile phone built around Android and plans to call the handset "Dream."

The handset maker may end up being first in the world to put out an Android-based mobile phone, but other companies are also developing handsets around Android, including Samsung Electronics.

HTC's Google handset is just over 5 inches long and 3 inches wide, with a keypad underneath the screen that either slides or swivels out. The keypad is aimed at easy e-mail, note-taking and typing Web addresses.

Internet navigational controls are situated below the screen on the handset.

Android is an open source software platform that includes an OS and is designed to take advantage of Internet services for mobility. The software could become a potent new rival to Windows Mobile and other handset operating systems. At the launch ceremony early this year, Google announced that over 30 companies had joined the Open Handset Alliance.

Data recovery is the process of reviving data that has been lost to physical damage such as a hard drive head crash; scratched, water-damaged or broken disks and tapes; or defective mechanisms. Free data recovery refers to freeware: free software applications or utilities that anybody can download from the Internet for personal use.

Aside from physical causes, data can be lost when the file system is formatted, when the operating system is reinstalled, when files are deleted accidentally, when there is a virus attack, or when the system becomes corrupted. These mishaps can happen to anyone, on any computer, at any time.

Most specialists would say that the best free data recovery system is to be proactive about file management. There should always be a backup system, especially for very important data. A backup will definitely make things easier and less frustrating when the unfortunate happens.

It's a good thing that there are ways to recover your lost or damaged files. If the data loss has a large impact on your business or it's a matter of national security or something, the wisest move is to go straight to the data recovery companies who have experts that can work on your problem.

But, if you think you know what you're doing and you have a technical background in computers and storage devices, then there are two options for you: purchase a data recovery software package or download a free data recovery utility online.

Obviously, there is a huge difference in cost between these two. Software packages can be expensive, but they can also offer the user a lot. The freeware downloadable on the Internet costs nothing, although some authors ask for donations to keep their efforts going.

Now, let's say you accidentally formatted your drive and you have no backup. The freeware can help you recover your files, but you have to bear in mind a few things:

Step away from the computer that uses the drive needing recovery. You can't afford any disk activity that could damage the data even more.

Never install or download the free data recovery software on the same machine you want to recover data from. Go to another computer, save the program on a flash drive and run it directly from there.

Never save the recovered files on the same hard drive you recovered them from.

If the machine is making strange noises or is returning error messages, stop the installation. This could be a sign of hardware malfunction.
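The "different drive" rule above can even be enforced programmatically. This is an illustrative sketch, not part of any particular recovery tool: on most systems, `os.stat().st_dev` differs between separate drives (and separate partitions), so comparing device IDs is a cheap guard before writing recovered files anywhere.

```python
import os
import tempfile

def same_device(path_a: str, path_b: str) -> bool:
    """True if both paths live on the same filesystem/device (st_dev match)."""
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev

def safe_recovery_target(source: str, destination: str) -> bool:
    """Guard: only write recovered files if the destination is a different drive."""
    return not same_device(source, destination)

# Demo: two temporary directories almost certainly share one device,
# so the guard correctly refuses to treat one as a recovery target.
with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    print(safe_recovery_target(src, dst))  # False: same drive, do not write here
```

Note that `st_dev` distinguishes partitions as well as physical drives, so this check is conservative, which is exactly what you want during recovery.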

Follow these tips if you're planning to download free recovery software to address your data loss predicament. Remember to be cautious when getting freeware so you don't create any more problems.

Be warned also about websites that claim to give away free data recovery programs; some can be deceiving. They say the program is free, but you soon find out it is just a demo version and that you will have to purchase the real thing to get the benefits or more features. Genuinely free utilities are the ones you can download and really pay nothing for.

Performing data backups is a lot like buying life insurance: You hope that you never have to use it, but if disaster strikes you are so much better off because you followed through on it.

While many people find reasons not to perform backups, data backups are the centerpiece of a computer defense strategy, whether the defense is against viruses, Trojan horse programs, hackers, malicious software ("malware") or hardware failures.

A virus or Trojan Horse program trashed your machine? Restore your system from a backup and you are back in business once again.

A hacker penetrates your computer defenses and goofs with your system in such a way that you do not know what they did or what they left behind? No problem! Pull out your backup and restore your system.

Malicious software launching your web browser at random intervals and pointing it to rated X sites? If the malware proves to be resistant to your attempts to remove it using ad-ware removal software, you can always restore your system from a backup that you made.

Hard disk failure keeping you from booting your computer or accessing your data? Once again, it is the handy data backup to the rescue (once you have replaced your hard disk).

So with all of these benefits, why do people fail to make backups? Most likely because they think that backups are a real pain to make and maintain.

While keeping a recent backup of your data DOES take some effort, the fact is that there are many backup strategies available, and there is undoubtedly one that will work for you.

For example, if you want to be up and running as quickly as possible after a data loss, you can make an entire image of your hard drive and store it in a safe place. If disaster strikes, you simply restore the entire hard disk image and your computer is returned to the state it was in when the backup was made. This method is nice in that it does not take much time to restore your entire system (relatively speaking), but you do have to store the ENTIRE contents of your hard drive as part of the backup. This takes a non-trivial amount of time and also chews up a lot of storage space, especially as you make multiple backups. However, the arrival of affordable external hard drives with sizes as large as one terabyte (that is one thousand gigabytes!) is making this option more attractive all the time.

But for those who wish to keep backups that use less space and take less time to create, you can back up ONLY your data, rather than the entire drive. The downside, of course, is that in the event of a data loss you have to reinstall your operating system and all of your applications manually before you can restore your data and resume working. You have to decide for yourself whether this option works for you.

There are many other backup strategies that you can employ. Each one has its own strengths and weaknesses with respect to the amount of time required to create the backup, the amount of time required to restore the backup in the event of a loss, the amount of space required to hold the backup, and the amount of human intervention required to make or restore a backup. The trick then is to assess your needs and come up with a backup strategy that balances these strengths and weaknesses in a manner that makes sense for you.
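As an illustration of the lighter-weight "data only" strategy described above, here is a minimal Python sketch. The directory names are hypothetical, and a real setup would point the backup destination at a separate physical drive:

```python
import os
import shutil
import tempfile
import time

def backup_data_only(source_dirs, backup_root):
    """Copy selected data directories into a timestamped backup folder.

    This is the 'data only' strategy: quick and small compared with a
    full disk image, but after a disaster you must reinstall the OS and
    applications yourself before restoring these files.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(backup_root, f"backup-{stamp}")
    for src in source_dirs:
        # copytree creates dest (and intermediate directories) for us.
        shutil.copytree(src, os.path.join(dest, os.path.basename(src)))
    return dest

# Demo on throwaway directories (a real setup would use a separate drive):
work = tempfile.mkdtemp()
docs = os.path.join(work, "documents")
os.makedirs(docs)
with open(os.path.join(docs, "notes.txt"), "w") as f:
    f.write("important data")

dest = backup_data_only([docs], os.path.join(work, "backups"))
print("backup created:", dest)
```

Because each run lands in its own timestamped folder, you keep multiple restore points, which is one of the tradeoffs (space for safety) discussed above.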

It seems that in today's business environment, everything of any importance is stored and managed on the hard drive of your computer: customer records, sales records, marketing information, receipts, bank records, and the software to run all of those records. The list goes on and on; in fact, without those records you would be out of business. There is no way to overemphasize the need for good computer housekeeping. Secure, safe and timely data backup is a necessity.

In my years in business, I have used various systems to back up my valuable files. I have used floppy discs and CDs. That made sense at the time, but what happens when someone accidentally spills the morning coffee on the desk and warps the discs, or someone misplaces the media, or, even worse, walks off with it?

Then there is the tape backup that is supposed to be done in a timely manner every day. That, for some reason, seems to be one of the easiest things to forget or neglect. We all have the best of intentions, but I have done it myself; once in a while, it gets put off until later. I am sure I am the only person who ever did that, right?

Then there is the employee who hit the wrong key, or misplaced a file, and cannot seem to find where it was filed. Too bad we had been working on that particular file for, oh, say, months. Mistakes happen, but some mistakes can ruin a business; some, it seems, just cannot be recovered from. What about natural disasters like fire, flood or an earthquake? It simply does not make sense to have your valuable data stored at or near the location where it is generated.

What I have found, for my own peace of mind, is that a remote data backup site is imperative for my sanity. I have experienced various ways of losing data and crashing hard drives. The saying that there are those who have lost data and those who will is an absolute truth. This type of disaster can be averted with forward thinking and proper planning.

So take the initiative and seek a secure, reliable remote-site backup before the worst takes place and you suffer a loss of data that you cannot recover from.

The decision to use data recovery software versus the professional service of a data recovery company is one that should be weighed carefully. In extremely simple cases, the usage of data recovery software can be valuable. However, the use of recovery software often results in further damage to your hard-drive and permanent data loss. The software should only be used in extremely simple data recovery cases. If your drive is making any noise such as clicking or small vibrations then the case would not qualify as simple.

Most of the time a malfunctioning hard drive is the result of mechanical or electrical problems to the drive. In this case, data recovery software will do nothing but continue to spin the drive and result in further damage and possible permanent loss of data. Each hard drive contains spinning magnetic discs that hold all of your data. When software is used, the drive continues to spin, and if there is any sort of misalignment or electrical problem then the drive will simply be exposed to more forms of damage.

The best course of action to fix a damaged or malfunctioning drive is to discontinue use of the drive immediately. If the data on your computer is more valuable than the fees for a data recovery service, then it would be wise to seek expert advice. Unfortunately, many consumers are convinced to buy data recovery software simply because it is the cheapest method of attempted recovery. The software companies often do not disclose that their product can result in further damage and permanent loss of your valuable pictures or files. Most of the unrecoverable data problems that are sent to data recovery companies are the result of data recovery software or other utilities that have damaged the drive after a crash or malfunction.

Since a large majority of hard drive crashes are due to electrical and mechanical problems, the drive cannot be fixed with software. A data recovery company with expertly trained technicians must be used to give you the highest chance of recovering your files. Data recovery professionals charge fees that are much higher than data recovery software, but the service is also much more effective. If the data that you have lost is extremely important, it would be wise to invest the money required to give yourself the best chance at a successful recovery.

The analogy is similar to a medical problem. The use of WebMD.com or any other medical advice website can sometimes be helpful for mild problems or symptoms. However, when you have experienced a serious medical condition you will obviously need to consult a practicing medical doctor. It is more expensive, but the service customization and quality are at a vastly higher level. Data recovery service versus recovery software involves a very similar quality-to-cost ratio.

There seems to be some misunderstanding or misinformation about various theories that supposedly recover data from a broken hard drive. One of the most popular rumors is that freezing a failed hard drive will make it more stable and allow you to run it again. The basic theory is that an extremely cold drive will run long enough for you to access your files and retrieve them. The Internet is littered with stories from people who have tried it and say it works, and from others who have tried it with no success.

The truth of the matter is that it can possibly work, but only on very old drives, and in very specific situations. Freezing a drive, in the majority of data recovery cases, will actually result in further data loss and damage to the drive. The specific situation where freezing a drive can possibly coax some more life out of it is a head crash. Unfortunately, most people have no way to tell with any certainty whether their drive has experienced a head crash. Moreover, freezing a drive after a head crash merely causes a microscopic shrinking of the mechanical parts, which may unstick the platters long enough to run the drive temporarily.

The main point is that freezing your hard drive will most likely result in even worse damage and further data loss. If the data that you are trying to recover is extremely valuable, then any action taken without the guidance of a professional is a step in the wrong direction. Even if the drive works for 30 seconds or 10 minutes, you can be further damaging the hard drive in the process and permanently losing precious data. A data recovery case that is completely recoverable can quickly turn into disaster when methods such as freezing the drive are used.

The most common problem that data recovery companies run into when trying to recover valuable data is that the user has tried the use of home methods or data recovery software. This is not to say that these methods never work, but when they don't work the drive will often be damaged beyond the point of recoverability. Every extra second that the drive runs results in a higher and higher chance of complete failure. If the drive in question contains information that is extremely valuable it is not advisable to take serious risks with the data because one wrong step can result in permanent data loss.

The issue eventually comes down to how valuable the data you are trying to recover from a hard drive is. If the specific situation involves data that is not very valuable, there is justification to try a risky method such as freezing, even if the success rate is less than ideal. However, when the data is extremely valuable, or you are using a newer hard drive, these methods do not work reliably, and professional assistance from a data recovery company should be sought.

With new regulations in play, more and more companies are considering backup HIPAA data offsite solutions. In some cases it's the law, and in others, our health records are simply something most of us want to keep private. The considerations involve the HIPAA legislation, its effects on the health care industry, what those effects mean for data storage, and what medical offices need to weigh. By reviewing this information on backup HIPAA data offsite storage, you may be better prepared to make choices about your storage solutions.

To understand the issue, you may need background on the HIPAA legislation. HIPAA, or the Health Insurance Portability and Accountability Act, became law in 1996. The purpose of the law was to ensure people would maintain health care coverage if they changed jobs. However, Title II of the law dealt with Administrative Simplification, specifically how to handle electronic health care data. Obviously, the passage of the law had a drastic effect on the health care industry, and, probably more importantly, it paved the way for an increase in backup HIPAA data offsite providers.

Because of the Administrative Simplification portion of the law, medical facilities today must take great care when dealing with a patient's electronic files. For example, HIPAA required a hierarchical approach to data access: physicians might be able to access patient information that would not be available to a nurse. Protecting the data from unauthorized access became crucial. As a result, the law has affected the way data storage backups are handled in the medical field, and backup HIPAA data offsite services have had to adjust their service offerings accordingly.

Today, data storage for these facilities must be carefully controlled. HIPAA requires that all of the data be backed up and that data must be secured with 100% reliability. The government wants to make sure no one gets into your personal medical records. That means, however, that backup HIPAA data offsite storage facilities must take special precautions to ensure the service they provide meets these requirements.

If you're looking for offsite backup for HIPAA data storage, you should look for a few factors. Make sure to find out whether the storage service specializes in this type of data storage; many do. You should also ask about the backup process, security, and storage to make sure you are comfortable with how the data will be handled.

Exabyte is the original brand in high capacity 8mm magnetic tape storage technology. Exabyte Corporation introduced helical scan recording technology in the late 1980s, and it was widely recognised at the time as a reliable, cost-effective format offering high speed read/write capabilities and a wide range of native capacities and formats. The Exabyte 8mm data storage tape marked the first time helical scan technology had been used for data storage, and it was mechanically identical to the widely used 8mm video format technology found in the professional media and broadcasting market. Due to its heritage with Sony, Exabyte 8mm technology shares similar mechanical components with home video systems, but on a slightly smaller scale.

Helical scan is an older style method of recording data onto a slow moving magnetic tape which uses a rapidly rotating read/write head. The data is recorded onto diagonal tracks, which are at an angle to the edge of the tape. After loading into a drive, the tape is pulled from the cartridge and wrapped around the read/write head, which rotates at around 30 metres per second. While the Sony AIT family also uses the same 8mm technology as the Exabyte range, they are not compatible with one another.

In its heyday, helical scan 8mm technology was for a time at the forefront of this sector of the market, thanks to data transfer rates of around 240KB per second and an initial storage capacity of 2.4GB. It has, however, since been surpassed by linear recording technology, which allows faster reads and writes and lower search latency, and which has been widely adopted across all sectors of data storage.
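To put those early specifications in perspective, a quick back-of-the-envelope calculation using only the figures quoted above shows how long a full tape write took:

```python
# Back-of-the-envelope arithmetic using the figures quoted above:
# 2.4 GB native capacity at roughly 240 KB per second.
capacity_bytes = 2.4e9
transfer_rate = 240e3  # bytes per second

fill_time_hours = capacity_bytes / transfer_rate / 3600
print(f"time to fill one tape: {fill_time_hours:.1f} hours")  # ~2.8 hours
```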

What is Linear Recording?
Linear (or longitudinal) recording is a method of recording data whereby tape is passed by a non-moving recording head. The tracks are recorded parallel to the tape edge.

Why migrate from Exabyte 8mm Technology

Exabyte 8mm data storage technology has now reached EOL (end of life). Although designed primarily for the domestic camcorder market rather than commercial data storage, the Exabyte format became incredibly popular in the commercial sector during the 1980s and 1990s because of its relatively low-cost drives and tape media. As a result of this popularity, the technology was adopted more broadly and comprehensively than perhaps strategically suited it. Only in recent years, as the technology neared retirement, have many clients discovered that the economic advantages heralded in its early days now come with signs of decreased reliability, and therefore with data loss.

This type of aging 8mm media is now becoming the new "9 track" legacy migration requirement. Unfortunately, it also comes with new issues and problems that can make data very difficult or impossible to recover. Traditionally, 9 track tapes have presented problems for data migration, but with binder hardening processes, specialist software and multiple reads of exercised tapes, the data can be recovered in 99% of cases.

Data stored on 8mm technology may not be so lucky. Why? From our experience we believe there are many factors which place data stored on this technology at risk. These are as follows:

Tape Width
With the exception of 4mm DAT tapes, Exabyte 8mm tape is the narrowest commercial data storage tape introduced to the market. In comparison, the tape in most of today's cartridges is half an inch (12.7mm) wide. The narrowness of the tape makes it weaker and less robust, so it suffers stress and damage more readily, particularly over time and with the number of physical tape loads.

Dual Reel Architecture
The dual-reel cartridge architecture of 8mm Exabyte technology is not the most ideal form. The complexity of the configuration makes it much more susceptible to physical defects and problems such as looping within the tape spools. This in turn causes inconsistencies in media tension, resulting in creased, nicked, folded, crushed, stretched or snapped media. Obviously not ideal! Closed cartridges in this format create an "if you can't see it, you can't fix it" scenario. The now more commonly implemented open reel architecture used with LTO and 3590 media overcomes the difficulties encountered with formats such as Exabyte.

Limited Head Adjustments
The tape drives are limited in the number of head adjustments that can be made while reading data. If there was a skew of any kind on the original drive that wrote the data, and that skew cannot be replicated by a head adjustment on the new drive, then the data is unlikely to be recoverable. In addition, reading the tape multiple times at different head settings, with constant stopping and starting, can stress the tape and result in snapping or damage.

Internal Drive Settings
Certain types of 8mm Exabyte drives allowed the creation of different partitions within a tape cartridge through internal drive settings. Often these settings were specific to particular organisations or departments, and this information is not easily obtained or second-guessed during a recovery or migration process.

No Longer Supported By Manufacturer
As a data storage technology that has reached EOL, it is no longer supported by the manufacturer. In fact, the original drive and media manufacturer, Exabyte, was bought by Tandberg in 2006. This means there are very few new drives available for purchase, and spares are not readily available for maintenance and repair of existing drives. From our own experience, even older working drives are becoming increasingly difficult to purchase: once a common item on eBay Australia, they are now rarely seen or available.

Type of Data Held On Exabyte Technology
Using the oil and gas industry as a prime example, the data stored on Exabyte 8mm technology was largely processed data. This means substantial value has been added to the raw data, so even small losses can be significant.

Density
In comparison to storage technology readily available today, Exabyte tapes hold very little data. An average holding of 1000 Exabyte cartridges would now quite easily fit onto five or six LTO tapes, which, in addition to the main advantages of data security and longevity, also represents savings in ongoing storage and retrieval costs.
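To make that consolidation claim concrete, here is a rough calculation. The per-cartridge capacities below are our assumptions for illustration (Exabyte cartridges held a few gigabytes native; 800 GB native is the LTO-4 figure), not numbers from the text:

```python
import math

# Assumed capacities, for illustration only:
avg_exabyte_gb = 4      # assumed average per 8mm cartridge
lto4_native_gb = 800    # LTO-4 native capacity

cartridges = 1000
total_gb = cartridges * avg_exabyte_gb
lto_tapes = math.ceil(total_gb / lto4_native_gb)
print(f"{cartridges} cartridges is about {total_gb} GB, "
      f"fitting on {lto_tapes} LTO-4 tapes")
```

Under these assumptions the whole library collapses to a handful of modern tapes, which is where the storage and retrieval savings come from.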

In addition, because one media cartridge may have been written on many different drives, there is a higher chance of compatibility issues for users with regard to data density.

Brand Differences in Media
As always in a competitive market, there were quality products and inferior ones; by this we refer to the media brands that were compatible with this technology. Largely due to the manufacturing process of 8mm tape and the way end products were distributed for rebadging and resale, there are significant differences in reliability between brands. The brand of media storing your data should be an important consideration when planning its migration to newer technology, with the inferior cartridges taking priority where possible.

How To Take Corrective Action

An audit and data migration plan, if required, should take into account the following factors:

- How much data do you have stored on 8mm technology?
- How many cartridges are there?
- How/where are they stored?
- What brands of media were used to record the data? (Some will have priority, some may last a little longer)
- What type of data is residing on these tapes? (Is it field data, processed data or other data)
- How often are the tapes tested for restoration of data? (Have you already encountered problems with restoring data?)
- Do you even still have the tape drive to read the tapes on?
- Do you have problems with your drives?

Conclusion

Exabyte technology is not considered an ideal choice for archival and backup data, even for small and medium-sized organisations, which generally do not have the budget for expensive, state-of-the-art high-capacity storage technology. The retired drives and tape media can be unreliable, are prone to physical damage and stress, and as a storage choice are now considered a very low capacity legacy technology.

The recommended migration path for data stored on 8mm technology is either 3592 media, which is now the industry standard within the oil and gas sector, or LTO technology for other industries. With the current rise in popularity of the LTO format, this enterprise-level media solution is now affordable enough for SMB/SME users to exploit its large volume and reliability, making it an ideal replacement for the aging Exabyte format. Both types of media are widely used and readily available.

Once you have selected the right company to entrust your data with, it is time to scrutinize them with the requirements below to ensure that you are getting a good deal.

When making the major decision to engage data recovery services, know your rights. Ask what their data recovery success rate is. You do not want to go through evaluation and shipping for nothing, even when the recovery company pays for it.

A free, no-obligation evaluation. Your data recovery company should be able to tell you how likely your data is to be recovered, based on the damage to the hard disk.

Then, if you decide to give the data recovery provider the go-ahead, they will clone the drive and begin attempting to retrieve the data on it. This is a safety measure: another recovery attempt can be made if the first one fails.

Complimentary shipping to and from their facilities. Specialists do not require your entire computer to recover your data. All you need to do is ship your hard drive to them and they will retrieve the data from it.

Extra precaution should be taken when shipping your hard drive. For example, if it has been damaged by water, it should be shipped wet, covered with a damp cloth. You should also put it in a shock-proof, snug container so that the hard disk does not get damaged further if handled roughly during shipping. Avoid styrofoam containers, however, as they can create static electricity that can damage the data.

If your computer is still under warranty, a data recovery attempt may void the warranty you have with the manufacturer, and honoring a drive manufacturer's warranty may cause delays in recovering the data. If they do not provide you with a clear warranty statement, you may have to purchase a new hard drive after your data is recovered.

Does your data recovery company offer 24/7 telephone-based personal support? Better still, this number should be toll-free. Apart from that, 24/7 online case status reporting ensures that you can track the progress of your case, and a dedicated case manager can answer all the questions and ease all the concerns that you may have.

Laptops and notebooks can also have data recovered from them, but the process is different because the parts are much smaller. The main difference is that these computers require smaller tools to recover the data. Ask your data recovery company if they can handle this.

Can your data remain confidential? Ask this question when selecting a trustworthy company. Going to an established company with good credibility will help in this situation.

How will your data be returned to you? Depending on the volume, your data can be returned in the form of a CD, DVD, loan or replacement hard drive. A free disc is usually the norm. As a basic requirement, your data should be returned on media that you can easily access and integrate into your existing system.

Make sure that your newly restored data is covered by a warranty. If you later find any problem with it, you can always refer back to the data recovery company.

A clear time frame for getting your data back. Although data recovery can be done in as little as 24 hours to 5 days, the evaluation process can take several days to weeks, and it may or may not be successful in the end.

If they cannot get your data back you should not be charged for the service.

Data recovery at home using software is the cheapest and easiest way around, but does it really work? You may be tempted to DIY instead of paying hundreds or even thousands of dollars for a professional data recovery service, but let's not forget that these professionals remain in demand despite their exorbitant fees and the various related software available on the market.

Two problems can arise from doing a DIY data recovery. One, you could select poor software and the recovery fails, greatly lowering the chances of salvaging the data in a subsequent attempt. Two, you may have a powerful tool in hand, but it is so powerful and complex that it is most probably meant for professionals.

So as you can see, attempting to recover data on your own is no easy feat, even when you are knowledgeable about computers. Even the pros need sophisticated machines and special clean rooms, and they spend a lot of time and effort delicately extracting information from a corrupted hard disk. While recovery software is created with good intentions, many things can go wrong.

You should probably go ahead with software only when you meet all three of these criteria:

First, make sure you are well-versed with computers (and familiar with the type and year of your computer). For example, you may be an expert in today's latest computer parts, but if your model was phased out last year, you could be looking at something totally different altogether.

Second, ensure that your software company is helpful, preferably with live chat so you don't have to submit a trouble ticket and wait hours upon hours for a reply. Also, follow the instructions that come with the software carefully and ask questions when in doubt. If the product website offers step-by-step video instructions, even better. This ensures that the product works as claimed, and you get a look at how complicated it is to use before buying.

Third, you must be prepared for, and able to afford, never seeing that data again. This seems ironic, since the goal is to get it back. Even with professional intervention, the odds of recovery are 80-85%, although part of this is because professionals sometimes work on data that failed to be recovered the first time. If loss of data means financial loss, or not getting that multi-billion dollar contract, or the contents have sentimental value, your best bet is NOT to do it yourself.

It is crucial that, if you are able to complete a scan of the failing hard drive, the recovered files are not saved back to the drive you are trying to recover. If you do that, the saved files could overwrite other files you are still trying to recover.

Before you pay for the software, do a background search on a search engine to read unsolicited feedback (reviews listed on the software's own site don't count). Type the name of the product plus "scam" to read the bad things written about it, and whether or not the problems were rectified in the end. If so, how long did it take? Was the software company helpful? You get the picture.

Also, if you intend to go ahead with software, make sure the vendor offers a money-back guarantee in case it doesn't work. You don't want to pay for nothing. If the site does not mention anything about refunds, then forget about it.

If you are unable to retrieve stored files from your computer, chances are you have had a data crash. This can happen due to a physical or logical problem with your hard drive.

Physical problems refer to damage to hard drive components, such as platter damage, circuit board failure and motor malfunction. Logical problems are internal failures caused by virus attacks, accidental deletion of files and so on.

How can you tell if you are facing a possible data crash? You can usually tell by strange noises produced by your computer, such as grinding, scraping, buzzing and clicking.

The above symptoms indicate the possibility of a head crash. Because hard drives have an operational spinning speed of 3,600 to 15,000 RPM, severe damage can be done in a very short amount of time. This can result in damage to the head, loss of data, damage to the disk surface, or a combination of all three.
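To put those spindle speeds in perspective, a quick back-of-the-envelope calculation shows how fast the platter surface moves under the head (assuming a typical 3.5-inch platter with a usable radius of about 4.6 cm - an illustrative figure, not one from this article):

```python
import math

def edge_speed_m_per_s(rpm: float, radius_m: float = 0.046) -> float:
    """Linear speed of the platter edge: circumference x revolutions per second."""
    return 2 * math.pi * radius_m * (rpm / 60)

for rpm in (3600, 7200, 15000):
    print(f"{rpm:>5} RPM -> {edge_speed_m_per_s(rpm):.1f} m/s")
```

At 7,200 RPM the edge passes under the head at roughly 35 m/s (about 125 km/h), which is why a head touching the platter can destroy data in moments.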

In this event, you will have to act instantly: firstly to avoid further damage to your stored data, and secondly to preserve the damage in its original state. The first thing to do is power off immediately. Do not even use the 'shut down' command; just switch off at the mains. This is because shutting down writes temporary files that could overwrite your precious data.

Even if you were working on a particularly important document at the time of the crash, do not attempt to edit, save or retrieve anything. Remember that your hard drive could become severely corrupted very quickly, and the goal here is to preserve the original state of the damage in order to recover not one, but all of your files.

If your computer tells you that it cannot detect the hard disk, refrain from rebooting repeatedly, especially if you get the same outcome after the initial reboot. If there has been a data crash, you risk damaging the head by having it scratch against the disk, damaging the disk surface at the same time.

The greater the damage, the lower your chances of recovering your data. However, in approximately 80% of cases, data recovery is possible.

A dedicated server is a single computer on a web-hosting network that is leased or rented, and dedicated to just one customer. A service provider monitors the computer’s hardware, network connectivity, and routing equipment, while the customer generally controls and maintains the server software. Dedicated servers are most often used by those who’ve outgrown typical hosting accounts and now require massive amounts of data space and bandwidth, those with mission critical web sites, web hosting companies, or those who have special needs. Dedicated servers are housed in data centers, where service providers can monitor them close-up and have hands-on access to them.

The primary advantage of using a dedicated server over a typical shared hosting account is the sheer amount of resources and control available to you, the customer. In many cases, the client is at liberty to install whatever software they desire, giving them greater flexibility and administrative options. Dedicated server clients do not share resources, as those with shared hosting plans do; but rather, are at liberty to use all the resources available to them.

Managed Servers vs. Unmanaged Servers

There are two types of dedicated servers available today: Managed Dedicated Servers and Unmanaged Dedicated Servers.

An Unmanaged Dedicated Server leaves nearly all the management duties of running a server in the purchaser’s control. The customer in this case, updates software on their own, applies necessary patches, performs kernel compiles and operating system restores, installs software, and monitors security. With this type of dedicated server, the consumer is solely responsible for day-to-day operations and maintenance. The service provider, in turn, monitors the network, repairs hardware problems, and troubleshoots connectivity issues. Additionally, some service providers offer partial management of services, such as network monitoring, software upgrades and other services, but leave the general upkeep of the server in the hands of the client. An unmanaged dedicated server is best for someone with server management experience.

A Managed Dedicated Server is generally more proactively monitored and maintained on the part of the service provider. When renting or leasing a managed server, the service provider or host carries out the responsibility of software updates and patches, putting security measures in place, performing hardware replacements, and also monitoring the network and its connection for trouble. In other words, when utilizing a managed dedicated server, the host provider will perform both hardware and software operations. A managed dedicated server solution works well for the customer with limited server management experience, or limited time to perform the duties necessary to keep a server running and online.

Technical Aspects In Choosing A Server

When choosing a dedicated server, there are several things to consider: Operating System, Hardware options, Space and bandwidth.

The Operating System of a server is similar to that on your own personal computer; once installed, the operating system enables one to perform tasks more simply. There are a bevy of server operating systems available today including Linux-based and Windows-based software. The operating system you choose should be directly relational to what operations your server will be performing, which types of software you’ll need to install and also, what you’re more comfortable with.

Hardware Options are also something to consider when choosing a dedicated server. You’ll need to pick a processor that’s up to the task, the amount of memory you wish installed, firewall options, and the size of the hard drive.

A certain amount of bandwidth is generally included when renting or leasing a dedicated server. Once you have ascertained how much bandwidth you will require, you can adjust that limit with your service provider. The space you'll be given is generally directly related to the size of your hard drive. Some hosts also give clients the choice of uplink port speed (usually 10Mbps/100Mbps).
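The relationship between uplink port speed and monthly transfer is simple arithmetic. As an illustration (the figures are theoretical ceilings, not any provider's actual plan), a port running flat out for a 30-day month transfers:

```python
def max_monthly_transfer_gb(port_mbps: float, days: int = 30) -> float:
    """Theoretical maximum transfer: megabits/s -> bits -> bytes -> GB over a month."""
    seconds = days * 24 * 60 * 60
    total_bits = port_mbps * 1_000_000 * seconds
    return total_bits / 8 / 1_000_000_000  # bits -> bytes -> GB

print(f"10 Mbps  -> {max_monthly_transfer_gb(10):,.0f} GB/month")
print(f"100 Mbps -> {max_monthly_transfer_gb(100):,.0f} GB/month")
```

Real usage sits far below this ceiling, but the calculation shows why a 10 Mbps uplink caps you at roughly 3.2 TB of transfer per month, while a 100 Mbps port raises that limit tenfold.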

You really enjoy those DVD movies and games, and the last thing you need or want is to experience problems with your DVD drive.

To prepare for the possibility of your DVD drive leaving you out in the cold one morning, we will discuss problems that may cause DVD failure, as well as the procedures you should take to correct them.

As with all drives, be sure to double check the failure. If the DVD drive will not read the DVD, try running another DVD in the drive. Make sure the DVD has no scratches and is clean.

Visually inspect the drive if it is external; if the drive is internal, check inside the computer. Check that the computer has good ventilation to help keep it cool. Here are the common DVD problems with their solutions.

DVD DRIVE HAS NO POWER

First: for external drives that have no power, check to see if anything or anyone has caused the power cord to become unplugged. Rule out the wall outlet by plugging in another device, such as a radio, and see if it works.

Second: if you've proven the wall outlet to be good but you still don't have power, check the surge protector for any signs of damage. If the surge protector is good, check the cord.

Third: if you're certain the surge protector or wall outlet is providing power, double check the cord by plugging it in a few times. If no power is present, you will have to replace the cord or the drive itself.

Internal DVD drives receive their power from a connector from the power supply. Try another connector to the drive. If the internal DVD drive still does not receive power after using another connector, the drive is faulty.

DRIVE HAS POWER BUT TRAY WON'T OPEN

You may experience the tray failing to open. Should this happen, press the button a couple of times to see if it will open. If the tray fails to open, reboot your computer and try again.

When rebooting the system, watch the monitor to see if the drive is recognized by the computer. Some systems will not display installed hardware during bootup. If this is the case, you will have to access your BIOS to check whether the DVD drive is being registered.

You can also try the manual eject button on the drive to get it to open. Use something very small but firm to press into the pinhole on the front of the drive to open the tray.

Shut the computer off and unplug it. Use something like a long paperclip to insert into the pinhole to open the tray. The tray may open a couple of inches, and you can then grab it with your fingers to open it completely.

DRIVE IS NOT RECOGNIZED BY WINDOWS

Be sure the operating system is recognizing the drive by clicking on My Computer. Windows XP will show "Devices with removable storage". If your drive is present, highlight the drive, right click and select Properties. You should see "This device is working properly".

If you see another message, such as "This device is not working properly", you may be able to update the device driver. If the drive is not present in My Computer, reboot the computer and access the CMOS setup.

In the CMOS setup, the DVD drive should be present. If the drive is missing, it may not be properly installed, or one of the cables may have become disconnected.

If you check the drive cables and are certain they are connected correctly, it may be that the data cable is faulty, or the drive controller may be at fault. And we can't overlook the fact that the drive itself may be bad.

DRIVE HAS POWER BUT WILL NOT READ DVD

First, try another DVD, since a dirty or scratched DVD may not play. If the new DVD fails to play as well, check whether the operating system is recognizing the drive.

Click on My Computer and highlight the DVD drive. Right click and select Properties. The statement "This device is working properly" should be present. If not, or if you see another message, try to update the device driver.

In the My Computer screen, highlight the DVD drive, open the Properties screen, select the Driver tab, and then select Update Driver.

To make a backup of your registry with Windows 98, just go to Start, select Run, enter scanregw and click OK. This will run Scanregw.exe.

Here's how to format a hard drive (Legal Stuff: We are not responsible for any damages, lost data, or anything of the sort)...

If you have a computer, you surely know what a hard drive is. If you don't have one, or simply don't know what a hard drive is, this article will begin with a short description. Then we will cover formatting a hard drive...

Step 1: What Is A Hard Disk Drive?

A hard disk drive is a type of storage device made up of hard disk platters, a spindle, read/write heads, read/write arms, electrical motors, and integrated electronics contained inside a sealed enclosure.

Now you know what the hard drive is. Let's stick to the point and start with the information on the title of this article. How to format a hard disk drive....

Step 2...

First of all, you should have a good reason for wanting to format a hard drive. But don't forget that formatting a hard drive does NOT permanently delete your data!

Of course, when you format your hard drive you think that the data is really deleted, but that is not the case.

The fact is that the data you have "deleted" can often be restored. Nonetheless, you should not experiment with formatting a hard drive, because you never know what may happen. It also depends on the software you use; for example, there are products that will permanently wipe the data first, after which you can continue with formatting the hard drive.

Step 3...

In fact, there is nothing too difficult about it. You first need to decide which operating system you intend to load after formatting the hard drive.

It is best and easiest to use a boot disk for that operating system, such as MS-DOS 6.2, Windows 95b or Windows 98SE. You will need the proper Windows 95/98 boot disk in order to load these operating systems on the computer; otherwise the installation may refuse to proceed because the wrong operating system is on the computer.

Step 4...

Then you will have to insert your boot disk in the floppy drive and start the computer.

Once the system has completed booting and an A: prompt appears, type format C: /s and then press Enter. This command tells the system to format your C: drive and, when it is finished, to copy the system files to the drive.

The "/s" switch stands for "System". You can format a different drive this way by using a different drive letter.

Step 5...

After that you will see on the screen the following text: "WARNING, ALL DATA ON NON-REMOVABLE DISK DRIVE C: WILL BE LOST! Proceed with Format (Y/N)?" and if you really want to continue, type [Y] and then press Enter.

Your screen should display the size of your drive and a countdown in percentage of formatting completed. Depending on your computer's speed and the size of the drive it can take from a few minutes to over 15 minutes.

When it reaches 100% complete, you will see a new message: FORMAT COMPLETE. SYSTEM TRANSFERRED. This message is to indicate that the files required to boot your computer from the hard drive have been copied from the floppy to the hard drive.

The computer can now boot from the hard drive without a boot disk in the floppy drive.

The last message that will appear on your screen is: "Volume label (11 characters, ENTER for none)?" You can either type a label or simply press Enter to skip it. And now you can finally begin to load your operating system.

Keep in mind that you may receive an error message that says "insufficient memory to load system files". If you do receive such a message, do not worry. It is caused by the lack of a memory manager loaded at boot, which leaves your PC able to access only the first 1MB of RAM.

You can handle this situation in two ways. The first is to omit the /s switch when formatting: type FORMAT C: and press Enter. Then, when the format is complete, manually add the system files to your hard drive with the command SYS C: and press Enter again.

The second solution is to load a memory manager to overcome the issue. If you don't have one, you can easily download one from any number of sites on the Internet.

Step 6...

We have finally reached the end of How To Format A Hard Drive, and consequently the end of this article. Now you surely know how to format a hard drive. But, once again, don't play with these commands unless you are serious about formatting a hard drive.

Even if the data is restorable, you may do something wrong to your computer. That is why you should be careful. Good luck!

Forget about emptying your wallets every time you see the blinking light. Quit worrying and start doing it yourself! It’s an easy process that won’t take you more than five minutes.

A typical ink refill kit includes ink bottles, syringes and detailed instructions. Some kits include an air balance clip for balancing the air inside the cartridge to ensure proper ink flow. Some kits also include a hand-drill tool to make a hole in the top of your empty cartridge.

Refilling Process

1. To start the refilling process, fill the syringe with one of the ink colors over the sink or several sheets of scrap paper to prevent any mess. Different printers hold different amounts of ink. In most Epson printers, the black cartridge holds approximately 17 ml and the color cartridges hold approximately 8 ml. See the instructions with your refill kit to see how much ink your cartridges can hold.

2. Before inserting the needle, make a small hole in the top of the cartridge (one for each color chamber). The hole goes at the top of the cartridge near the label. Simply push the needle through the hole and press to the bottom of the cartridge, towards the outlet hole. It's important to fill the cartridge slowly, to prevent the ink from foaming and introducing air into the chamber.

3. You do not need to seal the refill holes since there are already breather holes on the top of the cartridge.

4. Any unused ink can be put back in the bottle. You should clean the syringe with water and dry it properly to do the same process for the other cartridges or for future use. You can also label each syringe for the different colors so that each syringe is only used with one color.

5. Once you place the cartridge back in the printer, run the cleaning cycle 1 to 3 times. If there are any gaps in the printing, run the cleaning cycle again.

Don’t Forget

There are a few things to remember when refilling your cartridge. It should be refilled before it is completely empty, to prevent the chamber from drying out and clogging. Also, it is a good idea to let the printer cartridge sit for a few hours (or overnight) so that the pressure in the cartridge stabilizes.

Some printers, like newer Epson models, have a green chip on their ink cartridges which is visible by looking at your cartridge. They are often referred to as “Intellidge” cartridges. The chip keeps track of how often the cartridge is used and lets the computer know when the cartridge may be low or empty. As long as you reset the chip, refilling the cartridge with ink from a refill kit will not be a problem. A resetting tool can be used to reset the memory on the chip. This allows the printer to recognize the cartridge as being full which makes printing with a refilled cartridge possible.

Refilling your own ink cartridge is easy, good for the environment, and very good for your pocket.

DVD-ROM drives have become the most widely used optical drives in desktop and notebook computers. They are very reliable and now come as standard in most computers. If you are looking for a laptop, make sure it has a DVD-ROM drive; this will give you extra speed for normal CDs, and you will be able to watch your favourite DVDs while you travel.

I often sit up late watching DVDs on my laptop after a hard working day.

If you are interested in desktop computers, then the DVD drive will enable you to watch your favourite DVDs on your monitor. I currently have a 21-inch monitor and a 5.1 computer surround kit. This brings DVDs to life and acts just like a home cinema system; however, the quality is even better.

Most DVD-ROM drives come in speeds ranging from 4x to 10x. This is more than adequate to watch the latest DVDs, play the latest games, and install and use the latest software. Normal CD drives can only transfer data at a far lower rate than a DVD-ROM offers. I hope you now have a better understanding of why DVD-ROMs are used more than normal CD drives nowadays. It is basically all part of bringing a top quality home entertainment system closer to everyone.

Blu-ray is an optical disc format which is set to rival HD-DVD in the race to be the de-facto standard storage medium for HDTV. The HD-DVD vs Blu-ray battle resembles that between Betamax and VHS, and between DVD+RW and DVD-RW.

Currently, the major Hollywood film studios are split evenly in their support for Blu-ray and HD-DVD, but most of the electronics industry is in the blue corner.

The key difference between these new players and recorders and current optical disc technology is that Blu-ray, as its name suggests, uses a blue-violet laser to read and write data rather than a red one. Blue light has a shorter wavelength than red light, and according to the Blu-ray Disc Association (BDA), which is made up of, amongst others, Sony, Philips, Panasonic, and Pioneer, this means that the laser spot can be focussed with greater precision.
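The precision gain can be estimated with the standard diffraction-limited spot size approximation, spot ≈ λ / (2·NA). The wavelengths and numerical apertures below are the commonly quoted figures for each format, not values taken from this article, so treat the numbers as illustrative:

```python
def spot_size_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Approximate diffraction-limited spot diameter: lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

dvd = spot_size_nm(650, 0.60)     # red laser DVD: ~650 nm, NA 0.60
bluray = spot_size_nm(405, 0.85)  # blue-violet laser Blu-ray: ~405 nm, NA 0.85
print(f"DVD ~{dvd:.0f} nm, Blu-ray ~{bluray:.0f} nm")
```

The roughly halved spot diameter is what lets Blu-ray pack several times the data of a DVD onto the same size disc.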

Blu-ray discs have a maximum capacity of 25GB, and dual-layer discs can hold up to 50GB - enough for four hours of HDTV. Like HD-DVD, blue laser discs don't require a caddy, and the players and recorders will be able to play current DVD discs. Codecs supported by Blu-ray include the H.264 MPEG-4 codec, which will form part of Apple's QuickTime 7, and the Windows Media 9 based VC-1.
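The "four hours of HDTV" figure implies an average video bitrate, which is easy to check (treating GB as 10^9 bytes, the usual convention for optical media capacities):

```python
def avg_bitrate_mbps(capacity_gb: float, hours: float) -> float:
    """Average bitrate that fills the disc exactly: GB -> bits / seconds -> Mbps."""
    bits = capacity_gb * 1_000_000_000 * 8
    return bits / (hours * 3600) / 1_000_000

print(f"{avg_bitrate_mbps(50, 4):.1f} Mbps")  # prints "27.8 Mbps" for a dual-layer disc
```

Roughly 28 Mbps - comfortably above the ~19 Mbps of ATSC broadcast HDTV, so the four-hour claim is plausible.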

High definition DVD, also known as HD-DVD (which actually stands for High Density DVD), is one of two competing high definition storage formats - the other being Blu-ray.

The need for a new, high capacity storage format has been brought about primarily by the rapid rise in popularity of HDTV in Japan and the US. HDTV has a much higher bandwidth than either NTSC or regular DVD, so in order to record HDTV programs, higher capacity discs of at least 30GB are required.

High definition video is also being used increasingly to make Hollywood movies, as it offers comparable quality to film at much lower cost. The studios therefore plan to release future movies on one or both high definition formats.

HD-DVD was developed by Toshiba and NEC and has the support of the DVD Forum, along with a number of Hollywood studios. Currently the studios which have announced support for HD-DVD are Universal Studios, Paramount Studios, Warner Bros., and New Line Cinema. It has a capacity of 15GB for single-layer discs and 30GB for dual-layer. It doesn't need a caddy or cartridge, and the cover layer is the same thickness as current DVD discs, 0.6mm. The numerical aperture of the optical pick-up head is 0.65.

Because of its similarities to current DVD, high definition DVD is cheaper to manufacture than Blu-ray, because it doesn't need big changes to the production line set-up. Both HD-DVD and Blu-ray are backward compatible with existing DVD discs; that is, current DVDs will play in an HD-DVD player, although new high definition discs won't play in older DVD players.

High definition DVD currently supports a number of compression formats, including MPEG-2, VC-1 (based on Microsoft's Windows Media 9), and H.264, which is based on MPEG-4 and will be supported by the next version of Apple's QuickTime software, included with Mac OS X Tiger.

ROUTING PROTOCOLS

A routing protocol is a formula, or set of rules, used by a router to determine the appropriate path over which data is transmitted. The routing protocol also specifies how routers in a network share information with each other and report changes. It enables a network to make dynamic adjustments to changing conditions, so routing decisions do not have to be predetermined and static.

Routing, Routed and Non-Routable Protocols


ROUTING PROTOCOLS

ROUTING PROTOCOLS are the software that allows routers to dynamically advertise and learn routes, determine which routes are available, and determine which are the most efficient routes to a destination. Routing protocols used by the Internet Protocol suite include:

· Routing Information Protocol (RIP and RIP II).

· Open Shortest Path First (OSPF).

· Intermediate System to Intermediate System (IS-IS).

· Interior Gateway Routing Protocol (IGRP).

· Cisco's Enhanced Interior Gateway Routing Protocol (EIGRP).

· Border Gateway Protocol (BGP).

Routing is the process of moving data across two or more networks. Within a network, all hosts are directly accessible because they are on the same network segment; routing is only needed between networks.

ROUTED PROTOCOLS

ROUTED PROTOCOLS are nothing more than data being transported across the networks. Routed protocols include:

· Internet Protocol

o Telnet

o Remote Procedure Call (RPC)

o SNMP

o SMTP

· Novell IPX

· Open Systems Interconnection (OSI) networking protocols

· DECnet

· Appletalk

· Banyan Vines

· Xerox Network System (XNS)

Outside a network, specialized devices called ROUTERS are used to perform the routing process of forwarding packets between networks. Routers are connected to the edges of two or more networks to provide connectivity between them. These devices are usually dedicated machines with specialized hardware and software to speed up the routing process. They send and receive routing information to each other about the networks they can and cannot reach. Routers examine all routes to a destination, determine which routes have the best metric, and insert one or more routes into the router's IP routing table. By maintaining a current list of known routes, routers can quickly and efficiently send your information on its way when it is received.

There are many companies that produce routers: Cisco, Juniper, Bay, Nortel, 3Com, Cabletron, etc. Each company's product is configured differently, but most will interoperate as long as they share common physical and data link layer protocols (Cisco HDLC or PPP over serial, Ethernet, etc.). Before purchasing a router for your business, always check with your Internet provider to see what equipment they use, and choose a router that will interoperate with your provider's equipment.

NON-ROUTABLE PROTOCOLS

NON-ROUTABLE PROTOCOLS cannot survive being routed. They presume that all the computers they will ever communicate with are on the same network (to get them working in a routed environment, you must bridge the networks). Today's modern networks are not very tolerant of protocols that do not understand the concept of a multi-segment network, and most of these protocols are dying out or falling out of use.

· NetBEUI

· DLC

· LAT

· DRP

· MOP

RIP (Routing Information Protocol)

RIP is a dynamic internetwork routing protocol primarily used in interior routing environments. A dynamic routing protocol, as opposed to a static routing protocol, automatically discovers routes and builds routing tables. Interior environments are typically private networks (autonomous systems). In contrast, exterior routing protocols such as BGP are used to exchange route summaries between autonomous systems. BGP is used among autonomous systems on the Internet.

RIP uses the distance-vector algorithm developed by Bellman and Ford (Bellman-Ford algorithm).

Routing Information Protocol

Background

The Routing Information Protocol, or RIP, as it is more commonly called, is one of the most enduring of all routing protocols. RIP is also one of the more easily confused protocols because a variety of RIP-like routing protocols proliferated, some of which even used
the same name! RIP and the myriad RIP-like protocols were based on the same set of algorithms that use distance vectors to mathematically compare routes to identify the best path to any given destination address. These algorithms emerged from academic research that dates back to 1957.

Today's open standard version of RIP, sometimes referred to as IP RIP, is formally defined in two documents: Request For Comments (RFC) 1058 and Internet Standard (STD) 56. As IP-based networks became both more numerous and greater in size, it became apparent to the Internet Engineering Task Force (IETF) that RIP needed to be updated. Consequently, the IETF released RFC 1388 in January 1993, which was then superseded in November 1994 by RFC 1723, which describes RIP 2 (the second version of RIP). These RFCs described an extension of RIP's capabilities but did not attempt to obsolete the previous version of RIP. RIP 2 enabled RIP messages to carry more information, which permitted the use of a simple authentication mechanism to secure table updates. More importantly, RIP 2 supported subnet masks, a critical feature that was not available in RIP.

This chapter summarizes the basic capabilities and features associated with RIP. Topics include the routing update process, RIP routing metrics, routing stability, and routing timers.

Routing Updates

RIP sends routing-update messages at regular intervals and when the network topology changes. When a router receives a routing update that includes changes to an entry, it updates its routing table to reflect the new route. The metric value for the path is increased by 1, and the sender is indicated as the next hop. RIP routers maintain only the best route (the route with the lowest metric value) to a destination. After updating its routing table, the router immediately begins transmitting routing updates to inform other network routers of the change. These updates are sent independently of the regularly scheduled updates that RIP routers send.

RIP Routing Metric

RIP uses a single routing metric (hop count) to measure the distance between the source and a destination network. Each hop in a path from source to destination is assigned a hop count value, which is typically 1. When a router receives a routing update that contains a new or changed destination network entry, the router adds 1 to the metric value indicated in the update and enters the network in the routing table. The IP address of the sender is used as the next hop.

RIP Stability Features

RIP prevents routing loops from continuing indefinitely by implementing a limit on the number of hops allowed in a path from the source to a destination. The maximum number of hops in a path is 15. If a router receives a routing update that contains a new or changed entry, and if increasing the metric value by 1 causes the metric to be infinity (that is, 16), the network destination is considered unreachable. The downside of this stability feature is that it limits the maximum diameter of a RIP network to less than 16 hops.
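The update rule described above (add 1 to the advertised metric, treat 16 as infinity, and keep only the best route per destination) can be sketched in Python. This is an illustrative model only; the table layout and function name are ours, not part of any real RIP implementation:

```python
INFINITY = 16  # RIP treats a metric of 16 as "unreachable"

def process_update(routing_table, sender_ip, advertised):
    """Apply one received RIP update (a sketch, not a full implementation).

    routing_table: dest -> (metric, next_hop)
    advertised:    dest -> metric, as sent by the neighbor
    Returns the destinations that changed; changed entries trigger an
    immediate update, independent of the regular 30-second schedule.
    """
    changed = []
    for dest, adv_metric in advertised.items():
        metric = min(adv_metric + 1, INFINITY)  # count the hop to the sender
        current = routing_table.get(dest)
        if current is None:
            if metric < INFINITY:  # never install a brand-new dead route
                routing_table[dest] = (metric, sender_ip)
                changed.append(dest)
        elif metric < current[0] or current[1] == sender_ip:
            # Better route, or the existing next hop re-advertising (possibly
            # a worsened or withdrawn path): either way, take the new metric.
            if (metric, sender_ip) != current:
                routing_table[dest] = (metric, sender_ip)
                changed.append(dest)
    return changed

table = {"10.0.1.0": (3, "192.168.0.2")}
process_update(table, "192.168.0.3", {"10.0.1.0": 1, "10.0.2.0": 15})
# 10.0.1.0 improves to (2, "192.168.0.3"); 10.0.2.0 would arrive at
# metric 16 (infinity) and so is never installed.
```

Note how the 15-hop limit falls out of the arithmetic: any route advertised at 15 becomes 16 after the hop increment and is treated as unreachable.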

RIP includes a number of other stability features that are common to many routing protocols. These features are designed to provide stability despite potentially rapid changes in a network's topology. For example, RIP implements the split horizon and holddown mechanisms to prevent incorrect routing information from being propagated.

RIP Timers

RIP uses numerous timers to regulate its performance. These include a routing-update timer, a route-timeout timer, and a route-flush timer. The routing-update timer clocks the interval between periodic routing updates. Generally, it is set to 30 seconds, with a small random amount of time added whenever the timer is reset. This is done to help prevent congestion, which could result from all routers simultaneously attempting to update their neighbors. Each routing table entry has a route-timeout timer associated with it. When the route-timeout timer expires, the route is marked invalid but is retained in the table until the route-flush timer expires.
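A rough sketch of how these timers interact follows. The 30-second update interval is from the text; the 180-second timeout and 240-second flush point are commonly cited defaults (not mandated values), and the function names are illustrative:

```python
import random

UPDATE_INTERVAL = 30  # seconds between periodic routing updates
ROUTE_TIMEOUT = 180   # entry not refreshed for this long -> marked invalid
ROUTE_FLUSH = 240     # invalid entry kept until this age, then removed

def next_update_time(now):
    # Small random jitter keeps neighboring routers from updating in
    # lockstep, which is the congestion problem described above.
    return now + UPDATE_INTERVAL + random.uniform(-5, 5)

def route_state(last_refreshed, now):
    """Classify a routing table entry by the age of its last refresh."""
    age = now - last_refreshed
    if age < ROUTE_TIMEOUT:
        return "valid"
    if age < ROUTE_FLUSH:
        return "invalid"  # still in the table, advertised as unreachable
    return "flushed"      # removed from the table
```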

Packet Formats

The following section focuses on the IP RIP and IP RIP 2 packet formats illustrated in Figures 44-1 and 44-2. Each illustration is followed by descriptions of the fields illustrated.
RIP Packet Format

· Command—Indicates whether the packet is a request or a response. The request asks that a router send all or part of its routing table. The response can be an unsolicited regular routing update or a reply to a request. Responses contain routing table entries. Multiple RIP packets are used to convey information from large routing tables.

· Version number—Specifies the RIP version used. This field can signal different potentially incompatible versions.

· Zero—This field is not actually used by RFC 1058 RIP; it was added solely to provide backward compatibility with prestandard varieties of RIP. Its name comes from its defaulted value: zero.

· Address-family identifier (AFI)—Specifies the address family used. RIP is designed to carry routing information for several different protocols. Each entry has an address-family identifier to indicate the type of address being specified. The AFI for IP is 2.

· Address—Specifies the IP address for the entry.

· Metric—Indicates how many internetwork hops (routers) have been traversed in the trip to the destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.

Note: Up to 25 occurrences of the AFI, Address, and Metric fields are permitted in a single IP RIP packet. (Up to 25 destinations can be listed in a single RIP packet.)
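Those fixed-width fields make a RIP packet straightforward to decode: a 4-byte header followed by 20-byte route entries. The sketch below parses an RFC 1058 packet with Python's struct module; error handling and the must-be-zero checks are omitted for brevity:

```python
import struct

def parse_rip_v1(packet: bytes):
    """Parse an RFC 1058 RIP packet: 4-byte header, then 20-byte entries."""
    command, version = packet[0], packet[1]  # the 2-byte zero field follows
    entries = []
    for off in range(4, len(packet), 20):
        afi = struct.unpack_from("!H", packet, off)[0]
        addr = ".".join(str(b) for b in packet[off + 4:off + 8])
        # Skip the two must-be-zero 32-bit words; the metric comes last.
        metric = struct.unpack_from("!I", packet, off + 16)[0]
        entries.append({"afi": afi, "address": addr, "metric": metric})
    return {"command": command, "version": version, "entries": entries}

# A response (command 2, version 1) advertising 10.0.0.0 at metric 1:
pkt = (struct.pack("!BBH", 2, 1, 0)                       # command, version, zero
       + struct.pack("!HH", 2, 0) + bytes([10, 0, 0, 0])  # AFI 2 (IP), address
       + b"\x00" * 8 + struct.pack("!I", 1))              # zero words, metric
```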

RIP 2 Packet Format

· Command—Indicates whether the packet is a request or a response. The request asks that a router send all or a part of its routing table. The response can be an unsolicited regular routing update or a reply to a request. Responses contain routing table entries. Multiple RIP packets are used to convey information from large routing tables.

· Version—Specifies the RIP version used. In a RIP packet implementing any of the RIP 2 fields or using authentication, this value is set to 2.

· Unused—Has a value set to zero.

· Address-family identifier (AFI)—Specifies the address family used. RIPv2's AFI field functions identically to RFC 1058 RIP's AFI field, with one exception: If the AFI for the first entry in the message is 0xFFFF, the remainder of the entry contains authentication information. Currently, the only authentication type is simple password.

· Route tag—Provides a method for distinguishing between internal routes (learned by RIP) and external routes (learned from other protocols).

· IP address—Specifies the IP address for the entry.

· Subnet mask—Contains the subnet mask for the entry. If this field is zero, no subnet mask has been specified for the entry.

· Next hop—Indicates the IP address of the next hop to which packets for the entry should be forwarded.

· Metric—Indicates how many internetwork hops (routers) have been traversed in the trip to the destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.

Note: Up to 25 occurrences of the AFI, Address, and Metric fields are permitted in a single IP RIP packet. That is, up to 25 routing table entries can be listed in a single RIP packet. If the AFI specifies an authenticated message, only 24 routing table entries can be specified. Given that individual table entries aren't fragmented into multiple packets, RIP does not need a mechanism to resequence datagrams bearing routing table updates from neighboring routers.
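Because each RIP 2 entry is still a fixed 20 bytes (AFI, route tag, IP address, subnet mask, next hop, metric), packing one is simple. A minimal sketch with the field order per RFC 1723; the function name is ours:

```python
import socket
import struct

def pack_rip2_entry(ip, mask, next_hop, metric, route_tag=0, afi=2):
    """Pack one 20-byte RIP 2 route entry (RFC 1723 field order)."""
    return struct.pack("!HH4s4s4sI",
                       afi, route_tag,
                       socket.inet_aton(ip),
                       socket.inet_aton(mask),      # all-zero mask = unspecified
                       socket.inet_aton(next_hop),  # 0.0.0.0 = route via sender
                       metric)

entry = pack_rip2_entry("192.168.10.0", "255.255.255.0", "0.0.0.0", 2)
# len(entry) == 20
```

The subnet mask field is what lets RIP 2 carry classless routes, the key capability missing from the version 1 format above.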

Summary

Despite RIP's age and the emergence of more sophisticated routing protocols, it is far from obsolete. RIP is mature, stable, widely supported, and easy to configure. Its simplicity is well suited for use in stub networks and in small autonomous systems that do not have enough redundant paths to warrant the overheads of a more sophisticated protocol.

Review Questions

Q—Name RIP's various stability features.

A—RIP has numerous stability features, the most obvious of which is RIP's maximum hop count. By placing a finite limit on the number of hops that a route can take, routing loops are discouraged, if not completely eliminated. Other stability features include its various timing mechanisms that help ensure that the routing table contains only valid routes, as well as split horizon and holddown mechanisms that prevent incorrect routing information from being disseminated throughout the network.

Q—What is the purpose of the timeout timer?

A—The timeout timer is used to help purge invalid routes from a RIP node. Routes that aren't refreshed for a given period of time are likely invalid because of some change in the network. Thus, RIP maintains a timeout timer for each known route. When a route's timeout timer expires, the route is marked invalid but is retained in the table until the route-flush timer expires.

Q—What two capabilities are supported by RIP 2 but not RIP?

A—RIP 2 enables the use of a simple authentication mechanism to secure table updates. More importantly, RIP 2 supports subnet masks, a critical feature that is not available in RIP.

Q—What is the maximum network diameter of a RIP network?

A—A RIP network's maximum diameter is 15 hops. RIP can count to 16, but that value is considered an error condition rather than a valid hop count.

Computer network installation has become an essential prerequisite for any efficient modern-day business as it allows employees to truly work as a team by sharing information, accessing the same database and staying in touch constantly. For a computer network to give the best results, a lot of detailed planning and foresight is required before installation.

Firstly, an organisation needs to clearly define its requirements – how many people would use the network, how many would use it locally (within the office) and how many might require remote access (from a different location), how many computers and other devices (servers, printers, scanners) would be connected to the network, what are the needs of the various departments and who would be in charge of running/managing the network. It also helps if one can anticipate the direction the company would take in the near future so potential growth can be factored in during computer network installation.

The technology issues should also be ironed out in advance – hardware, software, servers, switches, back-up devices, cables and network operating systems. Make sure you have the required licenses to run the software on all your machines before installing a computer network. Alongside computer network installation, you should also build up a dedicated technical support staff, either within your own organisation or through outside consultants. Delegate responsibility for network management clearly. Before installing the network, you also need to choose the security mechanism that will protect corporate data and keep viruses at bay.

The transition to a new or upgraded computer network can bring some teething problems. To minimise chances of confusion, the company might need to train its staff to make them familiar with the new system. Careful planning will to a large extent prevent crises like system downtime and network crashes.

Bluetooth Basics

Bluetooth technology is nothing new, but in many respects it still seems to be more of a buzz word rather than a well understood, commonly accepted technology. You see advertisements for Bluetooth enabled cell phones, PDAs, and laptops, and a search of the Geeks.com website shows all sorts of different devices taking advantage of this wireless standard. But, what is it?

History

Before getting into the technology, the word Bluetooth is intriguing all on its own, and deserves a look. The term is far less high tech than you might imagine, and finds its roots in European history. The King of Denmark from 940 to 981 was renowned for his ability to help people communicate, his name (in English)... Harald Bluetooth. Perhaps a bit obscure, but the reference is appropriate for a wireless communications standard.

Another item worth investigating is the Bluetooth logo. Based on characters from the runic alphabet (used in ancient Denmark), it combines the runes for Harald Bluetooth's initials, H and B.

Capabilities

The FAQ on the Bluetooth.org (https://www.bluetooth.org/) website offers a basic definition: "Bluetooth wireless technology is a worldwide specification for a small-form factor, low-cost radio solution that provides links between mobile computers, mobile phones, other portable handheld devices, and connectivity to the Internet."

Just like 802.11 b/g wireless networking systems and many cordless telephones, Bluetooth devices operate on 2.4 GHz radio signals. That band seems to be getting a bit crowded, and interference between devices may be difficult to avoid. Telephones are now being offered on the 5.8 GHz band to help remedy this, and Bluetooth has taken its own steps to reduce interference and improve transmission quality. Version 1.1 of the Bluetooth standard greatly reduces interference issues, but requires completely different hardware from the original 1.0C standard, thus eliminating any chance of backwards compatibility.

The typical specifications of Bluetooth indicate a maximum transfer rate of 723 kbps and a range of 20-100 meters (65 to 328 feet - depending on the class of the device). This speed is a fraction of that offered by 802.11 b or g wireless standards, so it is obvious that Bluetooth doesn’t pose a threat to replace your wireless network. Although it is very similar to 802.11 in many ways, Bluetooth was never intended to be a networking standard, but does have many practical applications.

Practical Applications

There are a variety of products that take advantage of Bluetooth’s capabilities, from laptops and PDAs, to headphones and input devices, and even wireless printer adapters.

Many Laptops include an onboard Bluetooth adaptor to allow the system to connect to any Bluetooth device right out of the box. For laptop or desktop systems that do not have an adaptor built in, there are many USB Bluetooth adaptors available.

Bluetooth enabled PDAs allow for convenient wireless synchronization and data transfer.

Headphones can take advantage of Bluetooth for two purposes… audio playback and mobile phone communications. Using something like a mobile headset with a Bluetooth enabled mobile phone allows anyone to go hands free, as well as wire free.

Logitech, and other manufacturers, also produce input devices that eliminate wires thanks to Bluetooth. You can add a Bluetooth mouse to your system, or both a mouse and keyboard. One advantage that Bluetooth wireless keyboard/mouse combinations have over the standard RF wireless keyboard/mouse combinations is range. Where most standard RF keyboard/mouse combinations have a range of up to 6 feet, a Bluetooth keyboard/mouse combination will usually have a range of up to 30 feet.

Bluetooth printer adaptors make sharing a printer extremely convenient by eliminating the need for any wires or special configurations on a typical network. Printing to any compatible HP printer from a PC, PDA or mobile phone can now be done easily from anywhere in the office.

What is Video Encryption?

Video encryption is an extremely useful method for stopping the unwanted interception and viewing of transmitted video or other information, for example a law enforcement video surveillance feed being relayed back to a central viewing centre.

The scrambling is the easy part; it is the decryption that's hard, although several techniques are available. However, the human eye is very good at spotting distortions in pictures caused by poor video decoding or a poor choice of video scrambling hardware. It is therefore very important to choose the right hardware, or else your video transmissions may be insecure or your decoded video may not be watchable.

Some of the more popular techniques are detailed below:

Line Inversion:

Method: Whole or parts of the signal scan lines are inverted.

Advantages: Simple, cheap video encryption.

Disadvantages: Poor video decrypting quality, low obscurity, low security.

Sync Suppression:

Method: Hide/remove the horizontal/vertical line syncs.

Advantages: Provides a low cost solution to Encryption and provides good quality video decoding.

Disadvantages: This method is incompatible with some distribution equipment. Obscurity (i.e. how easy it is to visually decipher the image) is dependent on video content.

Line Shuffle:

Method: Each signal line is re-ordered on the screen.

Advantages: Provides a compatible video signal, a reasonable amount of obscurity, good decode quality.

Disadvantages: Requires a lot of digital storage space. There are potential issues with video stability. Less secure than the cut and rotate encryption method (see below).

Cut & Rotate:

Scrambling Method: Each scan line is cut into pieces and re-assembled in a different order.

Advantages: Provides a compatible video signal, gives an excellent amount of obscurity, as well as good decode quality and stability.

Disadvantages: Can have complex timing control and requires specialized scrambling equipment.

The cut and rotate video encryption method is probably the best way of achieving reliable, good quality video encryption; an example of a good implementation of this system is the Viewlock II.
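In software terms, cut and rotate amounts to rotating each scan line at a key-dependent cut point, so only a receiver deriving the same cut sequence can put the pieces back. This toy Python sketch is purely illustrative (real scramblers work on analog video in dedicated hardware, and Python's random module is not cryptographically secure):

```python
import random

def cut_and_rotate(lines, key, decrypt=False):
    """Toy cut-and-rotate: rotate each scan line by a keyed cut position.

    lines: list of equal-length lists of pixel values (one per scan line)
    key:   shared secret that seeds the per-line cut sequence
    """
    rng = random.Random(key)            # both ends derive identical cut points
    out = []
    for line in lines:
        cut = rng.randrange(len(line))  # where this line is cut and swapped
        if decrypt:
            cut = (len(line) - cut) % len(line)  # invert the rotation
        out.append(line[cut:] + line[:cut])
    return out

frame = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
scrambled = cut_and_rotate(frame, key=1234)
restored = cut_and_rotate(scrambled, key=1234, decrypt=True)  # == frame
```

Because every line carries valid (just rearranged) video, the output remains a compatible signal, which is the property the table above credits to this method.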

Implementing video scrambling

The video scrambling hardware, and in particular the decoder, should function correctly even with a noisy or unstable signal (for example, one with what is commonly known as 'snow': flecks on the TV screen, often seen in poor reception areas). If the link carrying the encrypted signal stops working, this should not be a problem: the link between the video encoder and video decoder should be regained and decryption quickly resumed.

The very nature of security camera systems is that they are often outdoors and so must be able to withstand the rigours of the weather. The video encryption hardware should be stable under, or protected from, the effects of rain, sunlight, and extreme heat and cold. It should not be damaged if there is a power spike in the supply. In these systems the video encoder emits a wireless signal to the video decoder unit before the video is viewed; it obviously must be the case that the very act of broadcasting the signal does not affect the video encoding hardware, and likewise the video encoding hardware should not affect the radio transmitter.

The most important point is that the video scrambling system should be secure; otherwise, why bother? It is amazing how easily some encryption methods can be cracked. For example, certain cable television stations 'encrypt' their channel broadcasts via a relatively uncomplicated method, which can easily be decoded using a few cheap electronic components from Radio Shack. That would obviously be illegal! The cable TV method of encryption is very crude: they usually just dynamically alter the vertical sync signal so that your TV cannot get a proper lock on it, and the picture scrolls randomly.

The other extreme is to scramble the transmitted video signal so heavily that it is costly, both in equipment and time, to decode the video at the receiver. Remember that this is a 'live' video scrambling broadcast followed by a 'live' video decryption display. ANY electronics can be copied, given enough money and time, but making this process as hard as possible is of benefit, as it at least delays the time when illegal copies will be available.

IMO, these should work 'like a VCR' as far as recording and playback go. There are models w/ hard drives, VHS players, etc. built in, but to me that's overboard.

Bells and Whistles

The VHS option is not bad, but you most likely already have one you can plug into the inputs of the DVD recorder.

I have a DVD recorder for archiving TiVo shows as opposed to accessing my TiVo from my PC. This is nice because it means I can also archive VHS tapes, camcorder tapes, etc. w/no extra work.

I do have a TV card in my PC so I can do this, but using the DVD recorder is easier.

My motto is: buy what you WILL use and not what you CAN use.

I've bought lots of things that CAN do a lot, but in reality I don't use all the extra features. Not in all cases, but in this case, I say pass on the bells and whistles.

Again, there are models w/ all types of features, but if you buy one that is a DVR, DVD recorder, VCR, TV tuner all in one and one part breaks, it's all broke.

Realize Something About Technology

Remember - this is new technology and will only get better and cheaper. If you buy the top of the line today, it's going to be out of date and/or cheap tomorrow. Test the waters w/ a 'good' model and upgrade when the time is right.

Editing Your Recordings

Chances are - you won't. It's a pain for the most part and usually requires DVD-RAM or DVD-RW discs to do it and they're more expensive. If you have a lot of free time for this, you're a rare person.

I was looking for this type of solution in getting ready for having a baby and I knew I wasn't going to be sifting through and editing hours of video.

If you're really interested in editing, look in to PC options. Pinnacle, ArcSoft, Adobe, etc. - they have good solutions for that.

DVD+R, DVD-R, DVD-RAM, DVD-RW

DVD+R and DVD-R are like VHS and Beta: they're both ok right now, but eventually we'll probably land on one or the other. It seems to be leaning towards DVD-R, which tends to be less expensive also.

Many recorders and players do both, but cost more. I say save some money, pick one (probably DVD-R) and move on. If you pick the wrong one, chances are in a couple years you'll be buying a new one anyway. Moreover, you'll probably be able to get a cheap one w/ a built in converter or two trays to duplicate one to the other.

DVD-RAM and DVD-RW are the rewritable types. They're more expensive and for my purposes aren't worth worrying about.

My Recommendation

I got the Panasonic DMR-E55K:

It records to DVD-R like a VCR. I don't use it to record live TV so I don't use VCR+, but it has it. Also, it has TimeSlip which lets you watch something while it's recording (start recording "24" at 8pm and start watching it from the beginning at 8:20 to speed thru commercials like a TiVo). Again, I don't use this, but it has it.

Plain and simple, it records my TiVo, camcorder, digital camera (RCA cable output), VCR, etc. to DVD - that's what I want it to do and that's what it does. It's easy, creates a good menu w/ thumbnails and my chosen titles, it's a name brand w/ good reviews and was fairly cheap (there was a rebate at the time).

Also, it plays CDs and mp3 CDs w/ a good interface so not only does it replace a CD player, but since you can put so many songs on one CD, it replaces a CD changer.

An interesting trick: If you have a digital camera w/ RCA cable output, you can hook it directly into the dvd recorder and create a quick slide-show dvd. Many cameras even have a slide show function built in! You can use the sound from a music channel, CD, etc.

Thinking about a mini DVD camcorder? You're not alone, it's a rapidly growing
sector of the camcorder market, with Hitachi, Sony and Panasonic all making more
than one mini dvd camcorder.

These camcorders differ from regular digital video cameras in one important way -
they record video onto mini DVD discs, rather than DV tape. This has a number of
advantages. DVD discs are more robust than tape and won't get chewed up in the
camera. Although this is thankfully a rare occurrence, it scares me every time I hear a
strange noise coming from my camcorder, so it's worth bearing in mind.

The second advantage is that DVD discs are random access, compared to tape on
which everything is recorded sequentially. This means that there's no need to
rewind and fast forward to find the clip you're after, just select it from the menu.
Some cameras even allow you to perform basic editing functions on-camera. An
additional side-benefit is that a mini DVD camcorder doesn't have tape heads to get
worn or dirty as happens in regular mini DV cameras.

And thirdly, you can easily watch your home movies by removing the DVD from the
camera and playing it in practically any DVD player.

However, there are negative factors too. The most significant one is that video is
encoded as MPEG-2 on a mini DVD camcorder, as opposed to DV format. This
means that it needs specialist software to edit - you can't just use your regular
video editing program (unless it specifically supports MPEG-2). And if you're a Mac
user you're out of luck, as there are no MPEG-2 editing applications for the Mac.

Also, mini DVD camcorders tend to cost more than similarly specified mini DV
cameras. And the media is also more expensive. However, if you don't intend
editing your movies and don't mind the extra cost, a mini dvd camcorder does offer
extraordinary convenience.

Picking your way through the ton of information available on recordable DVD
formats can be a nightmare. To help you out, we’ve done our best to distill it into
this summary.

There are five recordable versions of DVD: DVD-R for General, DVD-R for Authoring,
DVD-RAM, DVD-RW, and DVD+RW. None of the formats is fully compatible with the
others, although there are drives which will read, and in some cases write to, more
than one format.

DVD-R for General and DVD-R for Authoring are essentially DVD versions of CD-R.
And DVD-RW is a DVD version of CD-RW. All three formats can be read in standard
DVD-ROM drives and in most DVD video players. The difference between DVD-R for
General and DVD-R for Authoring is that DVD-R for General is a format intended for
widespread consumer use and doesn't support 'professional' features such as
piracy protection or duplication in mass duplicators. The Pioneer DVD-RW drive,
which is the most popular PC device for writing to DVD, uses the DVD-R for General
format. And as is the case with CD, DVD-RW is essentially the same as DVD-R
except that it can be erased and written to again and again.

DVD-RAM is slightly different as it is a sector based disc which mounts on the
desktop of a PC when inserted into a drive. Files can then be copied to it in the same
way as any other mounted media. Some single-sided DVD-RAM discs can be
removed from their caddy and inserted in a DVD-ROM drive which will then be able
to read the content of the disc.

There are DVD video recorders which use the DVD-RAM format. This enables them
to pull off clever tricks like timeshifting – where you can watch the beginning of a
programme you have recorded while you are still recording the end on the same
disc.

DVD+RW is the newest format and not supported by the DVD Forum, the body
which sets the standards for DVD. However, it is supported by some of the biggest
electronics and computer manufacturers, and is therefore likely to stick around.

It is also the format used by Philips in its DVD video recorders. Despite not being
authorised by the DVD Forum, DVD+RW is claimed by its supporters to be
compatible with more DVD video players than DVD-R, and DVD+RW writers are
found in PCs from quite a few manufacturers.

Hard Drives: ATA versus SATA

The performance of computer systems has been steadily increasing as faster processors, memory, and video cards are continuously being developed. The one key component that is often neglected when looking at improving the performance of a computer system is the hard drive. Hard drive manufacturers have been constantly evolving the basic hard drive used in modern computer systems for the last 25 years, and the last few years have seen some exciting developments, from faster spindle speeds and larger caches to better reliability and increased data transmission speeds.

The drive type used most in consumer grade computers is the hearty ATA type drive (commonly called an IDE drive). The ATA standard dates back to 1986 and is based on a 16-bit parallel interface that has undergone many evolutions since its introduction to increase the speed and size of the drives it can support. The latest standard, ATA-7, was first introduced in 2001 by the T13 Technical Committee (the group responsible for the ATA standard) and supports data transfer rates up to 133MB/sec. This is expected to be the last update for the parallel ATA standard.

As long ago as 2000 it was clear that the parallel ATA standard was reaching the limits of what it could handle. With data rates hitting the 133MB/sec mark on a parallel cable, you invite all sorts of problems with signal timing, EMI (electromagnetic interference) and other data integrity issues; thus industry leaders got together and came up with a new standard known as Serial ATA (SATA). SATA has only been around a few years, but is destined to become “the standard” due to several benefits to be addressed in this Tech Tip.

The two technologies that we will be looking at are:
ATA (Advanced Technology Attachment) – a 16-bit parallel interface used for controlling computer drives. Introduced in 1986, it has undergone many evolutions in the last 18+ years, with the latest version being called ATA-7. Wherever an item is referred to as being an ATA device, it is commonly a Parallel ATA device. ATA devices are also commonly called IDE, EIDE, Ultra-ATA, Ultra-DMA, ATAPI, PATA, etc. (each of these acronyms actually refers to a very specific item, but they are commonly interchanged).
SATA (Serial Advanced Technology Attachment) – a 1-bit serial evolution of the Parallel ATA physical storage interface.

Basic Features & Connections

SATA drives are easy to distinguish from their ATA cousins by the different data and power connections found on the back of the drives. A side-by-side comparison of the two interfaces can be seen in this PDF from Maxtor, and the following covers many of the differences…

Standard ATA drives, such as this 200GB Western Digital model, have a somewhat bulky, two inch wide ribbon cable with 40-pin data connections and receive the 5V and 12V necessary to power them from the familiar 4-pin connection. The basic data cables for these drives have looked the same for years. A change was made with the introduction of the ATA-5 standard to improve signal quality: an 80-wire cable used with the 40-pin connector (these are commonly called 40-pin/80-wire cables). To improve airflow within the computer system, some manufacturers resorted to literally folding over the ribbon cable and taping it into that position. Another recent physical change came with the advent of rounded cables. The performance of the rounded cables is equal to that of the flat ribbon, but many prefer the improved system airflow, ease of wire management, and cooler appearance that come with them.

SATA drives, such as this 120GB Western Digital model, have a half inch wide, 7 “blade and beam” data connection, which results in a much thinner and easier to manage data cable. These cables take the convenience of the ATA rounded cables to the next level by being even narrower, more flexible and capable of being longer without fear of data loss. SATA cables have a maximum length of 1 meter (39.37 inches), which is much greater than the recommended 18 inch cable for ATA drives. The reduced footprint of SATA data connections frees up space on motherboards, potentially allowing for more convenient layouts and room for more onboard features!

A 15-pin power connection delivers the necessary power to SATA drives on 3.3V, 5V, and 12V rails. Fifteen pins for a SATA device sounds like it would require a much larger power cable than the 4-pin ATA connector, but in reality the two power connectors are just about the same size. For the time being, many SATA drives also come with a legacy 4-pin power connector for convenience.
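As a rough illustration of how those fifteen pins are spent, the sketch below maps each pin to its rail as commonly documented for the SATA power connector. Treat the exact assignments as an assumption rather than a quote from the specification:

```python
# Approximate SATA 15-pin power connector layout (assumed from common
# references; consult the SATA specification for authoritative details).
SATA_POWER_PINS = {
    1: "3.3V", 2: "3.3V", 3: "3.3V",   # 3.3V rail
    4: "GND", 5: "GND", 6: "GND",      # ground
    7: "5V", 8: "5V", 9: "5V",         # 5V rail
    10: "GND", 11: "GND", 12: "GND",   # ground (pin 11 also signals staggered spin-up)
    13: "12V", 14: "12V", 15: "12V",   # 12V rail
}

def pins_on_rail(rail):
    """Return the pin numbers assigned to a given rail."""
    return sorted(p for p, r in SATA_POWER_PINS.items() if r == rail)

print(pins_on_rail("12V"))  # the three 12V pins
```

Each rail gets several parallel pins because the individual pins are small; together they carry the same current the chunky 4-pin Molex does.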

Many modern motherboards, such as this Chaintech motherboard, come with SATA drive connections onboard (many also include the ATA connectors for legacy drive compatibility), and new power supplies, such as this Ultra X-Connect, generally feature a few of the necessary 15-pin power connections, making it easy to use these drives on new systems. Older systems can easily be upgraded to support SATA drives by use of adapters, such as this PCI slot SATA controller and this 4-pin to 15-pin SATA power adapter.

Optical drives are also becoming more readily available with SATA connections. Drives such as the Plextor PX-712SA take advantage of the new interface, although the performance will not be any greater than a comparable optical drive with an ATA connection.

Performance

In addition to being more convenient to install and drawing less power, SATA drives have performance benefits that really set them apart from ATA drives.

The most interesting performance feature of SATA is the maximum bandwidth possible. As we have noted, the evolution of ATA drives has seen the data transfer rate reach its maximum at 133 MB/second, whereas the current SATA standard provides data transfers of up to 150 MB/second. The overall performance increase of SATA over ATA can currently be expected to be up to 5% (according to Seagate), but improvements in SATA technology will surely improve on that.
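To put those interface ceilings in perspective, here is a back-of-the-envelope calculation of best-case transfer times at the rated speeds. This is not a benchmark; real drives are limited by platter speed and seek times, which is why the practical gain is closer to that 5% figure:

```python
def transfer_seconds(megabytes, interface_mb_per_s):
    """Theoretical best-case time to move data at the interface's rated speed."""
    return megabytes / interface_mb_per_s

file_mb = 700  # roughly one CD image
for name, rate in [("ATA-7 (133 MB/s)", 133), ("SATA (150 MB/s)", 150)]:
    print(f"{name}: {transfer_seconds(file_mb, rate):.1f} s")
```

The gap at the interface level is real but modest; it only becomes decisive when the drive (or a RAID array behind one port) can actually saturate the link.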

The future of SATA holds great things for those wanting even more speed: drives with 300 MB/second transfer rates (SATA II) should be readily available in 2005, and speeds of up to 600 MB/second are expected by 2008. Those speeds are incredible, and are hard to imagine at this point.

Another performance benefit found on SATA drives is their built-in hot-swap capability. SATA drives can be brought on and offline without shutting down the computer system, a serious benefit for those who can’t afford downtime, or who want to move drives in and out of operation quickly. The pin count of the power connection partially serves this feature: the connector uses staggered pin lengths and dedicated precharge pins so that ground and power make contact in a safe order when a drive is plugged in live.
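On a modern Linux system, hot-swapping is typically exercised through sysfs. The sketch below only builds the relevant paths rather than writing to them; the device name and host number are hypothetical examples, and actually writing to these files requires root privileges:

```python
def sata_hotswap_paths(device="sdb", host=1):
    """Build the sysfs paths used to take a SATA drive offline and rescan.

    Writing "1" to the delete path detaches the drive; writing "- - -" to
    the scan path asks the controller to rediscover attached devices.
    The device and host names here are hypothetical examples.
    """
    return {
        "offline": f"/sys/block/{device}/device/delete",
        "rescan": f"/sys/class/scsi_host/host{host}/scan",
    }

paths = sata_hotswap_paths()
print(paths["offline"])
print(paths["rescan"])
```

The drive and its controller must both advertise hot-plug support for this to be safe; on older adapters it is wiser to power down first.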

Price

Comparing ATA drives to SATA drives can be tricky given all of the variables, but in general a SATA drive will still cost a bit more than a comparable ATA drive. The gap is closing rapidly though, and as SATA drives gain in popularity and availability a distinct shift in prices can be expected. Considering the benefits of SATA over ATA, the potential difference of a few dollars can easily be justified when considering an upgrade. Computer Geeks currently has a limited selection of SATA drives, but several technical sites, such as The Tech Zone and The Tech Lounge, offer real-time price guides to see how comparable drives stack up.
