Focus on the Vine: producing focused web solutions that bear real fruit

eNewsletter August 2007

Google Earth Enterprise Addresses Display
and Data Integration Challenges

On July 25, Google announced that business and government users of Google Earth Enterprise will be able to view their organization's geospatial data in 2D in a browser, behind the firewall. Additionally, through a special version of the Google Maps API (the programming interface for Google Maps), administrators can embed this 2D view into any web application (much like a Google Map) and create mash-ups with information from external databases, spreadsheets and other data sources.
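
The announcement does not spell out what the special Enterprise version of the Maps API looks like, but the mash-up pattern it describes can be sketched against the public Maps JavaScript API. In the sketch below, the internal tile endpoint, the intranet data feed and the field names are hypothetical placeholders.

    // Sketch only: assumes the Enterprise map behaves like the public Maps
    // JavaScript API. The internal URLs and data fields are hypothetical.
    declare const google: any; // provided by the Maps API <script> tag

    function initMap(): void {
      const map = new google.maps.Map(
        document.getElementById("map") as HTMLElement,
        { center: { lat: 32.4, lng: -86.3 }, zoom: 7 }
      );

      // Overlay imagery served from the organization's own Earth Enterprise
      // server, behind the firewall (tile URL pattern is an assumption).
      const internalImagery = new google.maps.ImageMapType({
        getTileUrl: (tile: { x: number; y: number }, zoom: number) =>
          `https://gee.example.internal/tiles/${zoom}/${tile.x}/${tile.y}.png`,
        tileSize: new google.maps.Size(256, 256),
      });
      map.overlayMapTypes.push(internalImagery);

      // Mash up records from an external source, e.g. a spreadsheet exported
      // to JSON elsewhere on the intranet (hypothetical endpoint).
      fetch("https://intranet.example.internal/facilities.json")
        .then((resp) => resp.json())
        .then((rows: { name: string; lat: number; lng: number }[]) => {
          for (const row of rows) {
            new google.maps.Marker({
              position: { lat: row.lat, lng: row.lng },
              map,
              title: row.name,
            });
          }
        });
    }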

"Not only is it now easier for employees who need to access their organization's geo-data to do so from almost anywhere, but they can integrate additional layers of information within their existing web applications without ever having to leave the browser," said Matthew Glotzbach, Product Management Director, Google Enterprise. "Because businesses and government organizations can leverage the browser in addition to the downloadable Earth Enterprise client, they can share their geo-data more easily across teams and departments.”

Client testimonials from Dell Computers, Alabama's Department of Homeland Security and Norsk Hydro (a Norwegian oil, energy and aluminum company) are included below.

Dell Computers

According to Jamie Wills, vice president of sales and marketing systems, the Dell Enterprise Command Center gains an at-a-glance global view of customer activity, ensuring optimum responsiveness for customers. “Google Earth gives us the intuitive yet powerful interface to critical business information that we need to manage effectively in today's fast-paced business environment."

Norsk Hydro

Deployed at the Research Centre in Bergen, Norway, the Google Earth Enterprise system is used to synthesize large quantities of global data: high-resolution imagery, terrain models, and dozens of vector layers that are relevant to Hydro's interests. “It's an eye-opening experience to see years of accumulated geological and geophysical data appear with just a few clicks,” commented Ole Martinsen, Head of Exploration Research for Norsk Hydro.

State of Alabama

Alabama's Department of Homeland Security is using the latest release of Google Earth Enterprise as a platform for Virtual Alabama, a department program to support information sharing across Alabama's county, municipal and state agencies for emergency response, disaster assistance and the protection of public assets. "Google Earth Enterprise enhances our ability to identify, track and update critical infrastructure throughout Alabama," said Jim Walker, Alabama's Homeland Security Director.

Key features include:

  • Browser view lets anyone in the organization securely access Google Earth Enterprise through a browser. In addition, organizations can embed a map view with proprietary data into any web-based application. (A real estate firm, for example, can now publish 2D images of all properties in a given area and overlay those images with a spreadsheet's pricing data or availability notes -- all on the firm's website.)
  • Enhanced search framework allows integration with multiple search services through Java plug-ins, including the Google Search Appliance. (A manufacturer might use this feature to find a set of customers with certain product preferences using the Google Search Appliance, and view the geographic distribution of those customers in Google Earth.)
  • Regions-based KML imagery data processing tool for creating super-overlays. These overlays enable organizations to easily publish large collections of images; see the sketch after this list. (A government agency would be able to publish local aerial photography to citizens.)
  • Faster data processing and serving performance produces time savings of up to 10x for vector processing (points/lines/polygons) and computational savings of more than 2x for server responses to imagery data requests.
  • Industry standard security methodologies are supported for easier implementation of LDAP and SSL.
  • User interface improvements make the process of ingesting, previewing and publishing data easier and more efficient.
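
The regions-based super-overlay feature listed above builds on a documented KML mechanism: each tile of an image pyramid carries a Region, so the client fetches the tile, and the links to its children, only when that area is in view at sufficient detail. The helper below is an illustrative sketch of that structure, not the output of the Google Earth Enterprise processing tool.

    // Illustrative sketch of a region-based KML "super-overlay" tile; the
    // actual Google Earth Enterprise tooling generates these automatically.
    interface Bounds { north: number; south: number; east: number; west: number }

    const latLonBox = (b: Bounds): string =>
      `<north>${b.north}</north><south>${b.south}</south>` +
      `<east>${b.east}</east><west>${b.west}</west>`;

    // A Region tells the client to load this content only when the box is in
    // view and occupies at least minLodPixels on screen.
    const region = (b: Bounds): string =>
      `<Region><LatLonAltBox>${latLonBox(b)}</LatLonAltBox>` +
      `<Lod><minLodPixels>128</minLodPixels><maxLodPixels>-1</maxLodPixels></Lod></Region>`;

    // One tile of the pyramid: its own image overlay plus NetworkLinks that
    // pull in finer-resolution child tiles only when their regions activate.
    function tileKml(
      bounds: Bounds,
      imageHref: string,
      children: { href: string; bounds: Bounds }[]
    ): string {
      const links = children
        .map(
          (c) =>
            `  <NetworkLink>${region(c.bounds)}<Link><href>${c.href}</href>` +
            `<viewRefreshMode>onRegion</viewRefreshMode></Link></NetworkLink>`
        )
        .join("\n");
      return `<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
    <Document>
      ${region(bounds)}
      <GroundOverlay>
        <Icon><href>${imageHref}</href></Icon>
        <LatLonBox>${latLonBox(bounds)}</LatLonBox>
      </GroundOverlay>
    ${links}
    </Document>
    </kml>`;
    }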

Google Earth Product Manager Noah Doyle notes that the ability to see each organization's own key data in the browser “unlocks the value” of years of GIS investments.

Technical Advances for Digital Earth

Rod Franklin, Reporter, Imaging Notes, Denver, Colo., www.imagingnotes.com

Editor's Note: In light of Google's new announcement of enhancements to the Google Earth Enterprise products (covered in the previous article in this eNewsletter), some of the information in this story may no longer be accurate. See the previous story for clarification.

“It is the dearth of content that could be coursing through today’s underused fiber optic networks, as well as the unrealized potential of knowledge domain specialists who still have not marshaled forces, that stands in the way of more rapid progress.”

The modern approach to informational georeferencing is in a state of significant transformation. Over the past two years, what cartographers have known as a GIS software model strongly oriented toward the desktop has morphed into a configuration wherein geobrowsers and extensible markup technology are used to tap collections of increasingly dynamic and widely disparate hosted data.

But papers and technical demonstrations presented in June at the Fifth International Symposium on Digital Earth (ISDE, Berkeley, Calif.) shed light on a number of technical goals that remain unfulfilled as engineers work to invent the kind of rich, on-demand browsing system that would allow users to query storehouses of properly contextualized and geo-coordinated data from any point on the globe. Some of these goals have to do with the multiplicity of technical approaches that exist with regard to data modeling. Other goals concern the makeup of the geobrowsers, which ideally would be capable of pulling content from all sorts of existing data stores for use as base map overlays. Still others are focused on raising the intelligence quotient of remote sensing hardware by coding into them the capability for autonomous inter-node tasking adjustments within multi-tentacled sensor networks, or “sensor webs.” See Figure 1.

Figure 1. Sensor Web as predicted by Matt Heavner at the University of Alaska Southeast.

In an ISDE presentation entitled Is Google Earth “Digital Earth?”—Defining a Vision, Karl Grossner and Keith Clarke of the University of California, Santa Barbara, Department of Geography posit that the Google Earth client, while fast, extremely popular and in its own right revolutionary, falls short of the functionality outlined by former Vice President Al Gore in the 1998 speech that catalyzed the Digital Earth movement. What Google Earth delivers, they wrote, is “breakthrough technology with terrific potential” and a “multi-resolution three-dimensional representation of the planet.” What it does not deliver is the comprehensively integrated solution which Gore suggested could represent “the full range of data about our planet and our history.”

The authors further assert that the organizationally fragmented Digital Earth movement, which lost some of its momentum after Gore's defeat in the 2000 presidential election, has not resulted in the kind of standardized knowledge management architecture that would suit the requirements of a truly well-calibrated global information repository. One must-have feature of this architecture, they argue, is separation of georeferenced information on three basic tiers:

  • The first tier would be devoted to thematic base data, rendered at a uniform scale or set of scales.
  • A second tier would be dedicated to peer-reviewed data “at any geographic scale, level of detail or coverage extent as are made available according to published standards.”
  • The third tier would hold data contributed in a more casual fashion by the global public.

The benefit of this layered data approach, Grossner and Clarke say, would be to ensure that “information” can be distinguished from “knowledge” in ways that help users preserve the epistemological integrity of any conclusions they may derive from their geobrowsing experience.

But the authors point out that this tiered separation of georeferenced information satisfies only one of the components necessary for a knowledge management system capable of delivering the kind of functionality Gore spoke of. Three additional elements would include (see the illustrative sketch after this list):

  • a data model based on a semantic, ontological approach “to allow feature and event attributes to represent meaning in class rules and relationships,” and that also allows the tracking of attribute changes over time, as well as the integration of object and field data sources;
  • an integrated set of authority lists, including place name gazetteers, time period and biographical directories, and an extensible framework of domain ontologies;
  • granular object-level metadata to further distinguish the provenance of observed data from the quality of knowledge derived from that data.
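
The paper describes these elements in prose rather than code, but a rough sense of how the tier separation, authority-list references and object-level provenance might combine in a single record can be sketched as follows; every type and field name here is invented for illustration.

    // Invented illustration of the authors' ideas, not a published schema.
    type Tier =
      | "base"            // tier 1: thematic base data at standard scales
      | "peer-reviewed"   // tier 2: vetted data published to open standards
      | "public";         // tier 3: casual contributions from the global public

    interface Provenance {
      source: string;       // who observed or published the data
      observedAt?: string;  // when the observation was made (ISO 8601)
      method?: string;      // instrument or derivation method
    }

    interface GeoRecord {
      id: string;
      tier: Tier;              // which tier the record is published on
      ontologyClass?: string;  // class within a domain ontology
      placeNameId?: string;    // key into a gazetteer authority list
      timePeriodId?: string;   // key into a time-period directory
      geometry: { type: "Point"; coordinates: [number, number] };
      provenance: Provenance;  // granular, object-level metadata
    }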

According to Grossner and Clarke's presentation, most georeferenced data on offer today is posted to the Internet in the form of publicly accessible Keyhole Markup Language (KML) files which are “not organized except for bulletin-board type forum folders.” The authors do note that the efforts of various Digital Earth working groups between 1999 and 2004 have resulted in some suggested standards and interoperability reference models. But over this same time period, the geospatial community has witnessed greater progress in front-end application development than it has in efforts to win consensus on a data model at the foundational level.

Thus, we have the phenomenon of Google Earth's wide acceptance, which has served to advance KML as a leading approach to geospatial markup. The speed and user-friendly interface associated with this geobrowser are two factors that eclipse the fact that, unlike ArcGIS Explorer (a client for ArcGIS Server which ESRI has made freely available to the public), Google Earth is not a true GIS application. That is to say, ArcGIS Explorer permits queries and analysis on the underlying data, whereas Google Earth does not. In his plenary speech at the ISDE gathering, ESRI Director of Products Dr. David Maguire offered his perspective on the rapid ascension of Google Earth by comparing it to another of his company's robust products: “Sometimes people say, ‘Why is Google Earth so fast and ArcGIS Desktop so slow?' The answer is that Google took a small part of what ArcGIS Desktop does and produced a fantastically highly optimized solution to do, not quite one thing, but a small number of things extremely well.”

Google Earth's dazzling geotour capabilities may be powerful enough to send some users into spinning fits of vertigo. Nevertheless, Grossner and Clarke maintain that it delivers only a portion of the functionality elucidated by Gore, and they have charted its features against wish-list items culled from his 1998 speech. In the functionality category, the platform works well for its ability to view the Earth at various resolutions and facilitate content sharing through the use of built-in e-mail features. In terms of content, Google Earth often shines by virtue of its sub-meter resolution at selected locations, digital elevation models of cities and mountain ranges, and data layers showing roads, population, land cover, political boundaries, hiking trails and distributions of plant and animal species. The interface also offers its users hyperlink navigation popups and allows them to self-publish files to the Google servers in the KML format.

Challenges exist in other areas. Google Earth does not permit the fusion and integration of data from multiple sources; non-Google software, for example, cannot read the primary Google database. Audio and speech recognition capabilities are not supported. Google Earth cannot present modeled thunderstorms, virtual reality museum tours or oral histories. And it references “vast quantities of georeferenced information” only in the sense that a fair amount of data resides on Google's own servers—not within a distributed network of datasets rendered compatible by dint of standardized protocols, formats and metadata.

Meanwhile, other geobrowsers, including Microsoft's Virtual Earth and NASA's World Wind, continue to develop along their own evolutionary paths. Grossner and Clarke acknowledge that Virtual Earth, which functions as a 3-D plug-in to Internet Explorer, is “by most accounts comparable to Google Earth in terms of functionality.” Virtual Earth was released only in November of 2006 and has a smaller user base than Google Earth. Yet it is reasonable to assume that the ubiquity of Internet Explorer and the size of Microsoft's developer base invest it with significant potential.

Beyond these applications, visualization technologies such as tile walls and inflatable projection domes are bringing spectacularly immersive geospatial resolution to the masses. Indeed, the absence of computing power necessary to render nearly full-sized digital “avatars” that can fly like Superman around a virtual island to illustrate the dynamics of glacial melting, tsunamis, and sea life is not the weak link that blocks full realization of Gore's fantasized ideal. According to John Graham, senior developer at the San Diego State University Visualization Center, it is the dearth of content that could be coursing through today's underused fiber optic networks, as well as the unrealized potential of knowledge domain specialists who still have not marshaled forces, that stands in the way of more rapid progress. “I'd like to minimize these supercomputer assets,” Graham told the ISDE audience at Berkeley. “They are everywhere. Every university has got some pretty powerful machines.”


So, where has the Digital Earth concept landed? We have the basic outline for a distributed but standardized back-end data model, geobrowsers that underscore the need for a front-end solution that is fast and inquisitive, and clock speed aplenty. But what about data acquisition? Are improved remote sensing motifs being applied in the field? Look to the University of Alaska Southeast (UAS) for a lesson in this area. Assistant Professor of Physics Matt Heavner reported in an ISDE breakout session that UAS students are working on a NASA-funded project with hopes of pushing current field sensor methodology to the next level. In the Lemon Creek watershed between the Gulf of Alaska and the Juneau Icefield, they have installed a five-node remote sensing network that will serve as a test bed for the validation of more intelligent power management and sensor tasking schemes. Its acronym, SEAMONSTER, stands for South East Alaska MOnitoring Science Technology Education and Research.

The nodes integrate weather station gear, web cams, GPS hardware, geophones, pressure transducers and stream gauging hardware, water chemistry samplers, and communications linkage. These are placed in proximity to lakes, streams and glaciers within the region so the team can monitor the causal impacts of specific events. SEAMONSTER runs on solar panels and car batteries, Heavner explained, “so power management is one issue. Data storage and data management is another.”

The study group is working to develop inter-node feedback loops and to refine communications schema in ways that allow for the real-time reconfiguration of sensor tasks in response to natural phenomena. Heavner describes it as a sort of “thinking” sensor network: “We want to know what happens when those lakes drain,” he explained. “We don't know when they're going to drain, but that's what we really care about. And if we want to know the water quality every 15 seconds when the drainage occurs, we don't have the power or the (data) storage to do that measurement the entire time. So somehow we need a pressure transducer in a lake to fire out a message saying: ‘The water's dropping: go to a faster sampling rate.' So there's this feedback and adaptation between the nodes, autonomously.”
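
Heavner's description suggests a simple feedback pattern, sketched below; the node names, the threshold and the message format are invented stand-ins rather than SEAMONSTER's actual implementation.

    // Toy sketch of the adaptive-sampling feedback Heavner describes; all
    // names, thresholds and message shapes are invented for illustration.
    interface SensorMessage { from: string; event: "lake-draining" }

    class LakeLevelNode {
      private lastLevel: number | null = null;

      constructor(private broadcast: (msg: SensorMessage) => void) {}

      // Called on the slow, power-saving schedule.
      onReading(levelMeters: number): void {
        if (this.lastLevel !== null && this.lastLevel - levelMeters > 0.05) {
          // Water is dropping: tell the rest of the network to speed up.
          this.broadcast({ from: "pressure-transducer-1", event: "lake-draining" });
        }
        this.lastLevel = levelMeters;
      }
    }

    class WaterQualityNode {
      samplingIntervalSeconds = 900; // default: conserve power and storage

      onMessage(msg: SensorMessage): void {
        if (msg.event === "lake-draining") {
          this.samplingIntervalSeconds = 15; // burst sampling during drainage
        }
      }
    }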

Another instance of intelligent inter-nodal adaptation might occur in a scenario wherein the pressure transducer senses a drop in lake levels and responds by sending a signal to a web cam to pan, tilt or zoom in the direction of the water. In addition to this sensitivity to natural events, SEAMONSTER's ability to forecast its own solar power needs by referencing online weather forecasts means that resources normally allotted to juicing the system can be redirected toward specific sensor tasks during periods of impending sunshine.
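
The power-management side can be reduced to a similarly simple rule of thumb. The sketch below, with an invented forecast feed and threshold, only illustrates the idea of deferring a power-hungry task until the panels are expected to replenish the batteries.

    // Invented illustration of forecast-aware task scheduling on a solar node.
    interface SolarForecast { date: string; expectedSunHours: number }

    // Return the first day with enough predicted sun to cover the extra draw
    // of an optional task (e.g. a camera pan/tilt/zoom sweep), or null to wait.
    function scheduleOptionalTask(
      forecasts: SolarForecast[],
      minSunHours = 4
    ): string | null {
      const candidate = forecasts.find((f) => f.expectedSunHours >= minSunHours);
      return candidate ? candidate.date : null;
    }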

Today's remote sensing networks can be compared to the fragmented islands of local- and wide-area network connectivity that existed 15 years ago, prior to the World Wide Web's coming of age. It's possible that within 20 years a “sensor web” will develop along lines that are analogous to the Internet. Such a system might be characterized by a series of event-responsive, multi-node sensor layouts dedicated to monitoring change factors within the domains of geologic, ecologic, meteorologic and oceanic study.

“Things are going to change. Things are going to break. Things are going to get better,” Heavner predicted. “So being able to just plug in improvements and work around failures is one of the key designs. And the other thing we're doing is running a wiki and just populating the hell out of it. So hopefully people can learn from our duct tape implementation.”


As developers continue to build on platforms established by Google, ESRI and Microsoft, the Digital Earth treasure trove is bound to grow from contributions originating in a number of different development spheres. And it seems clear that issues such as interoperability and metadata will remain on the front burner, at least for the foreseeable future.
