Grid computing

This technology has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back-office data processing in support of e-commerce and Web services.

This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors.[5]
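As an illustration, the sketch below splits a numerical integration into sub-intervals that are computed independently and only combined at the end; the integrand and the task split are illustrative assumptions, not taken from any particular grid system.

```python
# A minimal sketch of an "embarrassingly parallel" workload of the kind
# grids handle well: each task is computed independently, and no
# intermediate results are exchanged between workers.
from multiprocessing import Pool

def estimate_slice(args):
    """Integrate f(x) = x * x over one sub-interval with the midpoint rule."""
    start, end, steps = args
    width = (end - start) / steps
    total = 0.0
    for i in range(steps):
        x = start + (i + 0.5) * width
        total += x * x * width
    return total

if __name__ == "__main__":
    # Split [0, 1] into 8 independent sub-intervals; each could just as well
    # run on a different grid node, since no slice depends on another's result.
    tasks = [(i / 8, (i + 1) / 8, 100_000) for i in range(8)]
    with Pool() as pool:
        partial_sums = pool.map(estimate_slice, tasks)
    print(sum(partial_sums))  # ~0.3333, the exact integral of x^2 on [0, 1]
```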

The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet.

This makes it possible to write and debug on a single conventional machine and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.

Some nodes (like laptops or dial-up Internet customers) may also be available for computation but not network communications for unpredictable periods.
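One common way to cope with such unpredictable availability is to treat every assignment as provisional and simply re-issue any work that is not confirmed within a deadline. The sketch below shows that idea under assumed names and timeouts; real grid schedulers persist this state and track nodes explicitly rather than keeping a queue in memory.

```python
# A hedged sketch of a coordinator that tolerates nodes vanishing without
# warning: every checked-out task carries a deadline, and work that is not
# confirmed in time goes back into the queue for another node to pick up.
import time
from collections import deque

class RetryingScheduler:
    def __init__(self, tasks, timeout_s=60.0):
        self.pending = deque(tasks)   # tasks nobody is currently working on
        self.in_flight = {}           # task -> deadline for receiving a result
        self.timeout_s = timeout_s

    def checkout(self):
        """Hand a task to whichever node asked; None if nothing is ready."""
        self._reclaim_expired()
        if not self.pending:
            return None
        task = self.pending.popleft()
        self.in_flight[task] = time.monotonic() + self.timeout_s
        return task

    def complete(self, task):
        """A node reported a result before its deadline."""
        self.in_flight.pop(task, None)

    def _reclaim_expired(self):
        """Re-queue tasks whose node has presumably gone offline."""
        now = time.monotonic()
        for task, deadline in list(self.in_flight.items()):
            if now > deadline:
                del self.in_flight[task]
                self.pending.append(task)
```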

In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes.

With many languages, there is a trade-off between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network).

Cross-platform languages can reduce the need to make this trade-off, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform).

Grid middleware is a specific software product that enables the sharing of heterogeneous resources and the formation of Virtual Organizations.
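The core mechanism such middleware provides is matchmaking: jobs state their requirements, resources advertise their attributes, and the middleware pairs them across organizational boundaries. The sketch below is a minimal illustration of that idea; the attribute names and matching rule are assumptions for this example, not any real middleware's API.

```python
# A minimal sketch of the matchmaking idea at the heart of grid middleware:
# jobs state requirements, heterogeneous resources advertise attributes,
# and the middleware pairs them into one virtual pool.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Resource:
    owner: str      # the organization sharing the machine
    os: str
    cores: int
    mem_gb: int

@dataclass
class Job:
    name: str
    needs: dict = field(default_factory=dict)  # e.g. {"os": "linux", "cores": 4}

def matches(job: Job, res: Resource) -> bool:
    # A requirement that a job does not state is treated as satisfied.
    return (job.needs.get("os", res.os) == res.os
            and job.needs.get("cores", 0) <= res.cores
            and job.needs.get("mem_gb", 0) <= res.mem_gb)

# Resources from two different organizations form one virtual pool.
pool = [Resource("uni-a", "linux", 8, 32), Resource("lab-b", "windows", 4, 16)]
job = Job("render", {"os": "linux", "cores": 4})
print([r.owner for r in pool if matches(job, r)])  # ['uni-a']
```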

For companies on the demand or user side of the grid computing market, the different segments have significant implications for their IT deployment strategy.

CPU-scavenging, cycle-scavenging, or shared computing creates a “grid” from the idle resources in a network of participants (whether worldwide or internal to an organization).

Typically, this technique exploits the “spare” instruction cycles resulting from the intermittent inactivity that occurs at night, during lunch breaks, or even during the (comparatively minuscule, though numerous) moments of idle waiting that modern desktop CPUs experience throughout the day (when the computer is waiting on I/O from the user, network, or storage).
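A minimal sketch of that scavenging loop might look like the following: work is done in small slices, and the client backs off the moment the machine's owner returns. The is_user_active() check is a hypothetical placeholder; real clients read keyboard/mouse idle time from the operating system.

```python
# A hedged sketch of cycle scavenging: chip away at a work unit in small
# slices, but only while the machine appears idle, yielding immediately
# when the user comes back.
import time

def is_user_active() -> bool:
    """Placeholder: a real client would query the OS for input activity."""
    return False

def process(chunk):
    """Stand-in for the real per-chunk computation."""
    return sum(x * x for x in chunk)

def scavenge(work_unit, slice_size=1000):
    results, done = [], 0
    while done < len(work_unit):
        if is_user_active():
            time.sleep(5)        # the machine's owner has priority; back off
            continue
        chunk = work_unit[done:done + slice_size]
        results.append(process(chunk))
        done += len(chunk)
    return results

if __name__ == "__main__":
    print(sum(scavenge(list(range(10_000)))))
```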

In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth, in addition to raw CPU power.

For instance, HTCondor[8] (the open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks) can be configured to use only desktop machines where the keyboard and mouse are idle, effectively harnessing wasted CPU power from otherwise idle desktop workstations.[9][10]
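In rough terms, such a start policy is just a predicate over machine attributes such as keyboard idle time and load average. The sketch below mimics that predicate in Python under assumed helper functions and thresholds; it is not HTCondor's actual configuration syntax, which expresses the same kind of rule in its own configuration language.

```python
# A rough Python rendering of an idle-only start policy: claim a desktop
# only when its keyboard/mouse have been idle for a while and the machine
# is not otherwise busy. The thresholds and helper functions are
# assumptions for illustration.
KEYBOARD_IDLE_THRESHOLD_S = 15 * 60   # e.g. claim only after 15 idle minutes
MAX_LOAD_AVERAGE = 0.3                # and only if the machine is nearly idle

def seconds_since_last_input() -> float:
    # Hypothetical stand-in: a real client would query the OS here.
    return 20 * 60.0

def load_average() -> float:
    # Stand-in; on Unix-like systems os.getloadavg()[0] gives the real value.
    return 0.1

def may_start_grid_job() -> bool:
    return (seconds_since_last_input() > KEYBOARD_IDLE_THRESHOLD_S
            and load_average() < MAX_LOAD_AVERAGE)

print(may_start_grid_job())  # True under the stand-in values above
```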

CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home to harness the power of networked PCs worldwide, in order to solve CPU-intensive research problems.

[16] "For outstanding contributions to the development of software for HPC and Grid computing to enable the collaborative numerical investigation of complex problems in physics; in particular, modeling black hole collisions.

As of October 2016, over 4 million machines running the open-source Berkeley Open Infrastructure for Network Computing (BOINC) platform are members of the World Community Grid.

To extract best practice and common themes from the experimental implementations, two groups of consultants, one technical and one business, are analyzing a series of pilots.

This, along with the Worldwide LHC Computing Grid[28] (WLCG), was developed to support experiments using the CERN Large Hadron Collider.

A list of active sites participating within WLCG can be found online,[29] as can real-time monitoring of the EGEE infrastructure.[31]

There is speculation that dedicated fiber-optic links, such as those installed by CERN to address the WLCG's data-intensive needs, may one day be available to home users, thereby providing internet services at speeds up to 10,000 times faster than a traditional broadband connection.

The NASA Advanced Supercomputing facility (NAS) ran genetic algorithms using the Condor cycle scavenger running on about 350 Sun Microsystems and SGI workstations.
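Genetic algorithms suit a cycle scavenger because each individual's fitness can be evaluated independently, so evaluations can be farmed out to whichever machines happen to be idle. The toy example below illustrates that structure only; its fitness function and parameters are assumptions and reflect nothing of NAS's actual code, with a local process pool standing in for scavenged workstations.

```python
# A hedged sketch of why genetic algorithms map well onto cycle scavenging:
# fitness evaluations are independent, so they parallelize trivially.
import random
from multiprocessing import Pool

def fitness(individual):
    """Toy objective: prefer bit strings with more ones."""
    return sum(individual)

def evolve(generations=20, pop_size=40, length=32):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    with Pool() as workers:                     # stand-in for idle grid nodes
        for _ in range(generations):
            scores = workers.map(fitness, pop)  # independent evaluations
            ranked = [ind for _, ind in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[: pop_size // 2]
            # Uniform crossover plus a little mutation to refill the population.
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = [random.choice(pair) for pair in zip(a, b)]
                if random.random() < 0.1:
                    i = random.randrange(length)
                    child[i] ^= 1               # flip one bit
                children.append(child)
            pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(fitness(evolve()))  # approaches 32 as the population converges
```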