By Brian Proffitt
Managing Editor
Grid computing is one of those hype words that fly across my
desk every couple of days. Grid computing, it seems, will fix all
of a corporation’s problems, if you believe all the hype.
Hype or no, there is a strong push toward grid computing these
days, because it has two big elements that make it very attractive
to IT managers: it runs on open-standard and open-source platforms,
and it is more scalable than any other IT platform you can think
of.
This is what I personally find so appealing about grid
computing: you can take a problem, say some huge computational
algorithm that would normally take a standard server umpteen
billion years to solve, throw it into a grid computing system, and
have the algorithm solved in months, or even weeks. If you need
more processing “oomph,” just add more computers to the grid.
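To make that idea concrete, here is a minimal sketch in Python of
the “just add more workers” principle at toy scale. This is not a
real grid, just an embarrassingly parallel job split across local
processes, and the prime-counting task and every name in it are
illustrative assumptions of mine, not anything from IBM, Oracle, or
the Forum:

    # A toy illustration of the grid idea: split one big job into
    # independent chunks and hand them to however many workers you have.
    # Adding workers (on a real grid, machines) shrinks wall-clock time.
    from multiprocessing import Pool

    def count_primes(bounds):
        """Count primes in [lo, hi) by trial division -- deliberately slow."""
        lo, hi = bounds
        return sum(
            1
            for n in range(max(lo, 2), hi)
            if all(n % d for d in range(2, int(n ** 0.5) + 1))
        )

    if __name__ == "__main__":
        limit, workers = 200_000, 4          # more workers => faster answer
        step = limit // workers
        chunks = [(i, min(i + step, limit)) for i in range(0, limit, step)]
        with Pool(workers) as pool:          # stand-in for a grid of machines
            total = sum(pool.map(count_primes, chunks))
        print(f"primes below {limit}: {total}")

On an actual grid, the Pool would be a scheduler farming those
chunks out to machines across a network, but the shape of the
problem is the same: independent pieces, more hands, less
wall-clock time.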
I am grossly over-simplifying the process, of course, since grid
computing does involve a lot more than hooking some computers
together and letting them have at some poor, defenseless data and
algorithms. Grid application development, hardware configuration,
and data management are all skills that need to be learned in order
to make a truly efficient grid. But more and more companies are
throwing their weight behind these skill sets and are prepared to
deliver powerful grid applications and platforms that bring the
power of a supercomputer to your organization at a fraction of the
usual cost.
IBM is one of those companies leading the push to grid
computing. At next week’s Enterprise Linux Forum, they plan to
demonstrate how these efforts can help companies maximize their
computational power.
IBM has gotten behind open standards-based technologies such as
Linux, XML-based Web services, and the Open Grid Services
Architecture to deliver what they describe as on-demand computing.
The on-demand reference is a strong indicator of the scalability I
mentioned. Their strategy is to apply grid computing on demand, all
based on open standards architectures. This lets their customers
remain responsive to new challenges and opportunities, focus on
core competencies, and employ new variable cost structures.
So, if a business needs major computing power for a limited
amount of time, they get it. When the need has passed, they
can scale down what they have and not be stuck with more computing
platforms (and more costs and more administration and more
headaches…) than they need.
IBM won’t be the only company at the Enterprise Linux Forum
pushing grid technology. Oracle will also be there, demonstrating
their approach to grid computing, which enables enterprise
computing on commodity clusters and embodies the grid concepts
Oracle refers to as “Virtualization and Provisioning.”
In their session, Oracle will show how organizations can use
Oracle9i Grid technology for high-availability clustering
on Linux, which is the other aspect of grid computing beyond the
high-performance side usually touted in the media. Oracle also
plans to show how IT strategists can design their IT infrastructure
to add hardware as needed without disrupting services.
If you’re looking for a more hands-on, technical look at
implementing grid computing, you might want to attend the
pre-conference workshop on June 4, where Ahmar Abbas of Grid
Technology Partners and Chris Smith of Platform Computing plan to
host a two-hour workshop on Grid Computing in Linux
Environments.
Among the topics to be covered in this workshop are an
introduction to grid computing concepts, including an examination
of some existing grid technologies (Platform MultiCluster and the
Globus Toolkit), a discussion of which types of applications are
amenable to execution on the grid, and an example of the phases of
a grid deployment within an enterprise.
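For a taste of what the Globus material covers in practice, here
is a minimal sketch of fanning a command out to a couple of grid
nodes with the Globus Toolkit’s globus-job-run client, the
toolkit’s basic GRAM job-submission tool. The host names are
hypothetical placeholders, and the sketch assumes the toolkit’s
command-line clients are installed and that the nodes already
accept your grid credentials:

    # Minimal sketch: run one command on several grid nodes via the
    # Globus Toolkit's GRAM client, globus-job-run.
    # The host names below are hypothetical placeholders; substitute
    # the resource contacts of your own grid nodes.
    import subprocess

    NODES = ["node1.example.com", "node2.example.com"]  # hypothetical

    def run_on_node(node, executable, *args):
        """Submit one job to `node` and return whatever it prints."""
        cmd = ["globus-job-run", node, executable, *args]
        done = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return done.stdout.strip()

    if __name__ == "__main__":
        for node in NODES:
            # /bin/hostname is the classic "hello, grid" smoke test
            print(node, "->", run_on_node(node, "/bin/hostname"))

The point of the classic /bin/hostname test is simply to prove
that authentication, submission, and output delivery all work
before you trust the grid with anything heavier.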
The Enterprise Linux Forum will be held from June 5-6 at the
Santa Clara (California) Convention Center, with a pre-show
workshop day taking place on June 4. For more information on the
conference and to register online, please visit the Forum Web
site.