The difference, in short, is:
Load balancing means distributing the workload and traffic across more than one device, server, or machine, so that concentrating the load on a single machine does not cause problems.
As for failover:
It means having two or more machines, so that if one goes down, powers off, or crashes, the work is immediately moved to the other machine with no downtime, i.e. without the service being interrupted. Both concepts are explained in detail below.
Computer networks are complex systems, often routing hundreds, thousands, or even millions of data packets every second. Therefore, in order for networks to handle large amounts of data, it is important that the data is routed efficiently. For example, if there are ten routers within a network and two of them are doing 95% of the work, the network is not running very efficiently. The network would run much faster if each router was handling about 10% of the traffic. Likewise, if a website gets thousands of hits every second, it is more efficient to split the traffic between multiple Web servers than to rely on a single server to handle the full load.
Load balancing helps make networks more efficient. It distributes the processing and traffic evenly across a network, making sure no single device is overwhelmed. Web servers, as in the example above, often use load balancing to evenly split the traffic load among several different servers. This allows them to use the available bandwidth more effectively, and therefore provides faster access to the websites they host.
Whether load balancing is done on a local network or a large Web server, it requires hardware or software that divides incoming traffic among the available servers. Networks that receive high amounts of traffic may even have one or more servers dedicated to balancing the load among the other servers and devices in the network. These servers are often called (not surprisingly) load balancers.
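The idea of a dedicated load balancer can be sketched in a few lines. This is a minimal illustration, not a real product: it uses a hypothetical "least connections" policy, routing each new request to the backend server that currently has the fewest active connections. The server names are placeholders.

```python
# Minimal sketch of a software load balancer using a
# "least connections" strategy. Server names are hypothetical.

class LoadBalancer:
    def __init__(self, servers):
        # Track the number of active connections per backend server.
        self.connections = {server: 0 for server in servers}

    def pick_server(self):
        # Route the new request to the least-loaded server.
        return min(self.connections, key=self.connections.get)

    def open_connection(self):
        server = self.pick_server()
        self.connections[server] += 1
        return server

    def close_connection(self, server):
        self.connections[server] -= 1

lb = LoadBalancer(["web1", "web2", "web3"])
first = lb.open_connection()   # all servers idle, picks the first
second = lb.open_connection()  # picks a different, less-loaded server
```

Real load balancers offer several such policies (round-robin, least connections, source-IP hashing); the common thread is that one component sees all incoming traffic and spreads it across the pool.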
Clusters, or multiple computers that work together, also use load balancing to spread out processing jobs among the available systems.
Another definition of load balancing:
- Load balancing is dividing the amount of work that a computer has to do between two or more computers so that more work gets done in the same amount of time and, in general, all users get served faster. Load balancing can be implemented with hardware, software, or a combination of both. Typically, load balancing is the main reason for computer server clustering.
On the Internet, companies whose Web sites get a great deal of traffic usually use load balancing. For load balancing Web traffic, there are several approaches. For Web serving, one approach is to route each request in turn to a different server host address in a domain name system (DNS) table, in round-robin fashion. Usually, if two servers are used to balance a work load, a third server is needed to determine which server to assign the work to. Since load balancing requires multiple servers, it is usually combined with failover and backup services. In some approaches, the servers are distributed over different geographic locations.
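The DNS round-robin approach described above can be sketched as follows. This is an illustration of the rotation only, not a real resolver; the domain and the IP addresses are placeholder values from a documentation range.

```python
from itertools import cycle

# Sketch of round-robin DNS: each lookup of the same domain returns
# the next server address in turn, so successive clients land on
# different servers. Domain and addresses are illustrative.

dns_table = {
    "www.example.com": cycle(["192.0.2.10", "192.0.2.11", "192.0.2.12"]),
}

def resolve(domain):
    # Hand out the next address in the rotation for this domain.
    return next(dns_table[domain])

addresses = [resolve("www.example.com") for _ in range(4)]
# The fourth lookup wraps around to the first address again.
```

The appeal of this approach is that it needs no extra hardware, but it is blind to server load and health, which is why busier sites add a dedicated load balancer on top.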
Failover is a backup operation that automatically switches to a standby database, server, or network if the primary system fails or is temporarily shut down for servicing. Failover is an important fault-tolerance function of mission-critical systems that rely on constant accessibility. It automatically and transparently redirects requests from the failed or downed system to a backup system that mimics the operations of the primary system.
Failover is a backup operational mode in which the functions of a system component (such as a processor, server, network, or database) are assumed by secondary system components when the primary component becomes unavailable through either failure or scheduled downtime. Used to make systems more fault-tolerant, failover is typically an integral part of mission-critical systems that must be constantly available. The procedure involves automatically offloading tasks to a standby system component so that the switch is as seamless as possible to the end user. Failover can apply to any aspect of a system: within a personal computer, for example, failover might be a mechanism to protect against a failed processor; within a network, failover can apply to any network component or system of components, such as a connection path, storage device, or Web server.
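The primary/standby pattern described above can be sketched in miniature. This is a toy model, not production code: a caller sends requests to the primary, and when the primary raises a connection error the request is transparently retried against a standby that mirrors it. The server names are hypothetical.

```python
# Sketch of failover: requests go to the primary server, and if it
# fails they are transparently redirected to a standby that mimics
# its operations. Servers here are simple stand-ins.

class Server:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def send(request, primary, standby):
    # Try the primary first; on failure, fail over to the standby
    # so the caller never sees the outage.
    try:
        return primary.handle(request)
    except ConnectionError:
        return standby.handle(request)

primary, standby = Server("db-primary"), Server("db-standby")
ok = send("query", primary, standby)           # served by the primary
primary.healthy = False                        # simulate a crash
failed_over = send("query", primary, standby)  # standby takes over
```

The key property is that the caller's code does not change when the primary dies; the redirection happens inside the failover layer, which is what "transparent to the user" means in the definition above.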
Originally, stored data was connected to servers in very basic configurations: either point-to-point or cross-coupled. In such an environment, the failure (or even maintenance) of a single server frequently made data access impossible for a large number of users until the server was back online. More recent developments, such as the storage area network (SAN), make any-to-any connectivity possible among servers and data storage systems. In general, storage networks use many paths - each consisting of a complete set of all the components involved - between the server and the system. A failed path can result from the failure of any individual component of a path. Multiple connection paths, each with redundant components, are used to help ensure that the connection is still viable even if one (or more) paths fail. The capacity for automatic failover means that normal functions can be maintained despite the inevitable interruptions caused by problems with equipment.
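The multipath idea can be reduced to a small sketch: several redundant paths connect the server to the storage system, and an I/O request succeeds as long as at least one path is still up. The path names and the read function are illustrative, not any real SAN API.

```python
# Sketch of multipath failover in a storage network: try each
# redundant path in order until one works. Path names are
# illustrative placeholders.

def read_block(paths, block_id):
    for path, is_up in paths:
        if is_up:
            return f"read block {block_id} via {path}"
    # Only when every redundant path has failed does the I/O fail.
    raise IOError("all paths to storage have failed")

paths = [("path-A", False), ("path-B", True)]  # path-A has failed
result = read_block(paths, 42)  # still succeeds via path-B
```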