Tuesday, December 30, 2008

Laser Scanning Versus Photogrammetry

At the end of my previous post I asked which is more applicable for 3D cultural heritage projects: photogrammetry or laser scanning?

HDS6000 Laser Scanner

A reader pointed me in the direction of the December issue of Geoinformatics, which features an article on page 50 called "3D Laser Scanning and its 2D partners". I wanted to highlight the article in this post as it offers some interesting thoughts on the subject. In particular the article makes the following points (with my own observations):

  • There are similarities between photogrammetry and laser scanning. For example, both technologies are used to capture point clouds where points have XYZ coordinates. I would add that the differentiator is that in photogrammetry we are usually capturing points to model a surface (e.g. a TIN or grid) as opposed to a true 3D point cloud (e.g. multiple points with the same XY but different Z's).
  • Challenges in the adoption and acceptance of laser scanning, being the (much) more recent technology. For example, the cost and learning curve. I agree with this, however as the authors note this is changing. We face the same challenge in photogrammetry, which still carries a bit of a stigma as a dark art within the broader geospatial community. However times are changing and new technology will continue to flatten out the learning and cost curves...
  • The all-too-common belief that the two technologies compete. The authors argue that this is a misconception and go on to outline why photogrammetry and laser scanning are complementary. I completely agree with this point: both technologies have advantages and should be implemented as needed on a case-by-case basis. I think we're seeing this in the context of the airborne mapping world as well - an increasing number of organizations are opting for combined optical and LIDAR systems for simultaneous collection. The Leica RCD105 Digital Camera is a good example of this, as it is typically sold alongside an ALS LIDAR system. There are a lot of advantages to such a system, but that's a story for another post.
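To make the first bullet's distinction concrete, here's a toy sketch (not any particular product's data model, and the facade coordinates are invented) of why a 2.5D surface can only hold one Z per XY location while a true 3D point cloud keeps every return:

```python
# Toy illustration: a 2.5D grid/TIN-style surface stores one Z per XY,
# while a true 3D point cloud keeps every point.

def build_surface(points):
    """2.5D surface: one Z per (x, y); later points overwrite earlier ones."""
    surface = {}
    for x, y, z in points:
        surface[(x, y)] = z
    return surface

# A building facade scanned from the ground: same XY, many different Z's.
facade = [(10.0, 20.0, 0.0), (10.0, 20.0, 3.5), (10.0, 20.0, 7.0)]

surface = build_surface(facade)  # collapses to a single Z at that XY
cloud = list(facade)             # the point cloud preserves all three

print(len(surface))  # 1 -- the surface loses the vertical structure
print(len(cloud))    # 3 -- the cloud keeps it
```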
The article then proceeds to discuss laser scanning in the context of recording historic monuments and landscapes, along with several projects (including Cyark) and examples. Overall it is a compelling read and I'd recommend picking up a copy or checking out the online version if you are interested in this topic.

So in summary, I suppose the question above needs to be turned around. For recording cultural heritage, choose the tool set that best fits the requirements - which may mean integrating several technologies.

Friday, December 26, 2008

Angkor Wat in 3D Revisited

Several months ago I wrote about a 3D feature extraction project based on aerial photography at the temple complex of Angkor Wat in Cambodia. While photogrammetric reconstruction is one avenue of documenting historical sites, another method is terrestrial laser scanning. While Cyark has been mentioned before, I thought I'd highlight the foundation again here since it is such an excellent resource for terrestrial laser scanning in the context of archeology and documenting cultural heritage.

Aside from the slick presentation, one of the really nice features of the website is a 3D point cloud viewer, which allows you to navigate various scenes in 3D. While the point clouds are pre-cooked for the viewer (which has a 2 million point limitation), the density is still enough to provide a very realistic experience. You can even see a couple of people in the "Outer Cruciform Courtyard at Banteay Kdei". The process for digital heritage preservation is outlined quite well in this paper.


Angkor Wat 3D Model


Point Cloud: Outer Cruciform Courtyard at Banteay Kdei

One question this raises is which technology, photogrammetry or laser scanning, is the most effective (cost, quality, processing time, etcetera) for cultural heritage projects. For a study on that, check out this paper, which compares terrestrial laser scanning with "terrestrial photogrammetry" (photos are taken from the ground with an SLR camera, not aerial photogrammetry although the processing principles are similar). As one might expect, the study indicates several pros and cons of each method, as well as a look into combining methods. The cost of hardware is higher with terrestrial laser scanning, and the processing (automatic and manual) for both methods can be fairly intensive depending on the level of detail and accuracy required.

Thursday, December 18, 2008

Mapping Unexploded WWII Bombs in Germany

While visiting our German business partner, GEOSYSTEMS, in Munich this week we had the opportunity to discuss a very interesting workflow they have been extensively involved with over the last several years.

Dealing with unexploded munitions has remained a challenge for Germany since the end of WWII. Here is a great article outlining the challenge.

However, mapping based on aerial photography has been a success in many regions throughout the country. The workflow involves processing legacy aerial photography taken during or shortly after the war. The imagery is often of poor quality, and may even be lacking the fiducial marks required to establish interior orientation. GEOSYSTEMS tackled this by building an application to reconstruct fiducial locations. After that, they run through the classic photogrammetric workflow and produce stereo images and digital orthos. This enables both 2D and 3D workflows for capturing the locations of bomb craters. After munitions locations have been mapped, the data can be entered into a GIS. Next comes the practical application: in the areas that have implemented this workflow, the database is checked prior to new construction. This helps uncover potential locations of unexploded munitions before ground is broken - a life-saving application.
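The final database-check step could be sketched roughly as follows. The coordinates, buffer distance, and function names are all invented for illustration; a real GIS would use spatial indexing and actual construction footprints rather than a point-and-radius test:

```python
# Hypothetical sketch of the GIS check: flag any mapped crater within a
# safety buffer of a proposed construction site before ground is broken.
import math

def craters_within_buffer(craters, site_xy, buffer_m):
    """Return mapped crater locations within buffer_m of the site."""
    sx, sy = site_xy
    return [(x, y) for x, y in craters
            if math.hypot(x - sx, y - sy) <= buffer_m]

# Invented projected coordinates (meters) for two mapped craters.
craters = [(4601230.0, 5337810.0), (4601550.0, 5337400.0)]

hits = craters_within_buffer(craters, (4601200.0, 5337850.0), buffer_m=100.0)
print(len(hits))  # 1 -- one crater falls inside the 100 m safety buffer
```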

Here is an example of the 2D workflow: vectors of the bomb locations are collected in IMAGINE:


Files with the XY coordinates of the potential bomb locations can also be created:


For the 3D stereo feature extraction process, here's another example depicting stereo extraction in Stereo Analyst for ArcGIS:

Monday, December 15, 2008

KML 3D Buildings in Los Angeles

I thought I would make the buildings I extracted for the 3D City Construction webinar last week available - you can now download them from here.

I'll talk about the tools used to create these another day, but this allows you to take a look at a quick and dirty, photogrammetrically-derived city model. It doesn't have all the buildings and they're not all perfect, but it does give you an idea of what's possible. These took me about 8 hours to extract using anaglyph mode on my laptop (perhaps less - I didn't time myself). It would be more efficient to use a proper stereo viewing environment, but a laptop works fine for smaller jobs.

Note that if you open the file in Google Earth they'll have the floating effect described here on Thursday... So if you want to follow the workflow described in that post you can grab the terrain from here: both the IMG grid and the RRD pyramid layer.

Friday, December 12, 2008

Defining Your Own Raster Basemap in ERDAS TITAN

Something I forgot to mention in yesterday's post is the ability to define your own raster basemap in TITAN. So what does this mean?

Most virtual worlds have pre-cooked imagery that they serve up as their "skin of the earth" basemap. But what if I want to define my own default imagery, just for my own personal use? Some applications allow loading imagery as layers, but one of the useful features of the TITAN Client is that you can use your own imagery as the default skin.

So how do you do this?

First, load your imagery in TITAN's Geospatial Instant Messenger. The screen capture below shows an ECW orthomosaic that I have loaded.
Next, right click on the layer and choose the "Copy WMS URL" option.

With the URL copied, navigate over to My Services, where you'll see a few different service options. For the next step, double-click on the WMS service.
This opens up the "Select Services" interface.
Here you can paste in the URL of your image that you copied earlier. Accept it and you'll see the WMS show up in the list of services, displayed below.
Right click on the newly-added WMS service (displayed above) and choose "Set as Default Basemap". You should see it turn red, indicating it is the default basemap.

Finally, fire up the TITAN Viewer. You can see the results of the example I ran through below. Note that my only layers are a terrain layer (see yesterday's post) and a KML file. I don't have any image layers, as my orthomosaic is being used as the skin for the virtual world. It's a nifty workflow, useful if you operate in the photogrammetric world and create your own orthos, or if you purchase orthos and just want to use them locally as a basemap without having to deal with layer management.

Thursday, December 11, 2008

3D Buildings and Terrain in Virtual Worlds

One of the problems I've grappled with in the past is the challenge of displaying accurate, photogrammetrically extracted 3D buildings in virtual worlds with a less accurate base terrain layer. The folks over at the Bluesky blog have outlined the same challenge in this post. Here's an example of the problem: when I import my KML 3D buildings into Google Earth, they float off the ground due to the accuracy of the base terrain layer. Here's a screen capture depicting the problem:
Even if you modify the terrain exaggeration the buildings still float. There are a few different methods for resolving this. One method suggested in the Bluesky blog post is to model terrain and imagery right into the KML file and then load it up in GE. This certainly works. Another method would be to take a look at a different system for visualizing the data.
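To make the mismatch concrete: the buildings carry accurate absolute elevations, while the viewer's terrain surface sits a few meters away from them. A crude sketch (invented numbers, not a recommended production fix) of reseating a building onto the viewer's terrain would look like this:

```python
# Toy fix for floating buildings: shift each building's Z values so its
# base sits on the viewer's terrain surface, trading absolute accuracy
# for visual registration. Coordinates and heights are invented.

def reseat_building(vertices, viewer_ground_z):
    """vertices: list of (x, y, z) with photogrammetric (accurate) heights."""
    base_z = min(z for _, _, z in vertices)
    dz = viewer_ground_z - base_z          # offset between the two surfaces
    return [(x, y, z + dz) for x, y, z in vertices]

building = [(0, 0, 102.0), (10, 0, 102.0), (10, 10, 102.0), (0, 0, 110.0)]
print(reseat_building(building, 97.5)[0])  # (0, 0, 97.5) -- base now on terrain
```

Of course this sacrifices the absolute elevations you worked hard to measure, which is why supplying your own terrain layer (as described below for TITAN) is the cleaner approach.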

The latest update to ERDAS TITAN has a good technique for sorting out this problem: it allows you to specify your own terrain layer. Once you add your data as layers in the viewer, you'll see a similar problem as depicted above - again due to the accuracy of the default terrain layer. The only difference is that instead of seeing floating buildings the buildings are embedded under the terrain layer, so that some of the smaller buildings are not visible. Here's a screen capture that displays the issue:

You can see that some of the buildings in the foreground appear very "flat", and there are others that are not even visible.

The solution is to add a terrain layer. In this case I produced my own digital orthomosaic along with a terrain layer during the photogrammetric processing part of the project. I added the digital elevation model as a layer, right-clicked on it and chose the "use as terrain" option.
The layer gets shifted to the terrain folder and the TITAN viewer updates to reflect the new base terrain. The result is that the buildings sit perfectly on top of the terrain. Here is the result, from the same perspective as the screen capture above:

In summary, take a look at TITAN if you're interested in this kind of workflow. The ERDAS TITAN Client is free, so you can download it from the ERDAS site and give it a whirl with your data.

Tuesday, November 25, 2008

3D City Modeling

I'll be hosting a webinar on December 9th entitled: "3D City Construction Workflows: From Collection to Presentation". This will be an opportunity for us to explore workflows for planning projects and developing 3D building content data, which can then be fed into various GIS and mapping applications. We'll also take a look at data presentation: methods for visualizing and sharing the data.

3D city modeling is a rather nebulous topic, and there are many ways to create 3D city models. Here are a few different options:

- Automatically extruding 2D building polygons digitized from an ortho, either from a rooftop Z value or a guesstimated height. This is the "quick and dirty" method.
- Manually modeling each building individually and tagging it with an XY location (often guesstimating the spatial dimensions). These models may be based on ground photos, oblique imagery, or a combination.
- Photogrammetrically measuring and extruding buildings.
- Photogrammetrically measuring and extruding buildings, then modeling further detail and adding elements such as photo-texture.
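As a hedged sketch of the first ("quick and dirty") option: wrap a digitized 2D footprint and an estimated height into a KML extruded polygon. The tag names follow the KML format; the footprint coordinates and building name are invented:

```python
# Generate a KML extruded-building Placemark from a 2D footprint and an
# estimated height. relativeToGround + extrude=1 drapes the footprint at
# height_m above the terrain and extrudes its walls down to the ground.

def footprint_to_kml(name, lonlat_ring, height_m):
    coords = " ".join(f"{lon},{lat},{height_m}" for lon, lat in lonlat_ring)
    return f"""<Placemark>
  <name>{name}</name>
  <Polygon>
    <extrude>1</extrude>
    <altitudeMode>relativeToGround</altitudeMode>
    <outerBoundaryIs><LinearRing>
      <coordinates>{coords}</coordinates>
    </LinearRing></outerBoundaryIs>
  </Polygon>
</Placemark>"""

# Invented footprint: a closed ring (first vertex repeated at the end).
ring = [(-118.25, 34.05), (-118.2495, 34.05),
        (-118.2495, 34.0505), (-118.25, 34.0505), (-118.25, 34.05)]
print(footprint_to_kml("Block 12", ring, height_m=25.0)[:11])  # <Placemark>
```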

The webinar will primarily focus on the third option: using photogrammetric tools to measure buildings. In addition to the actual extraction process, we'll also cover discussion points such as data capture, data quality, and accuracy.

If you're a GIS/photogrammetry/mapping professional, please tune in to one of the scheduled webinars. You can sign up here.

Monday, November 24, 2008

LPS 9.3.1 Now Available!

We've just released our first Service Pack for LPS 9.3, called LPS 9.3.1. It is available from our support site and is located in the "Fixes and Enhancements" section. In addition to numerous bug fixes, LPS 9.3.1 (which is a shared Service Pack with ERDAS IMAGINE) also contains the following features and improvements:

LPS Core

• Added required entry of Average Terrain Elevation when creating a block file for Orbital Pushbroom sensors. This entry overrides the generally less accurate method of using the sensor metadata to derive a mean elevation.

Stereo Point Measurement/Classic Point Measurement

• Improved and simplified image loading.
• Added “Create Ground Control Point” feature that calculates 3D ground coordinates from image points which are measured in at least two images. This is particularly useful in transferring control points from more accurately oriented imagery into lower accuracy imagery.
• Improved several icons to maintain consistency with other dialogs in LPS and IMAGINE.
• Improved image tracking to use the mouse wheel to adjust the image positions in relation to each other and remove the x-parallax.

RPC Generation

• Added fields to give the user the ability to control parameters of the RPC fit cube. This can help improve the fit accuracy.
• Added an option to output a report file that shows where higher residuals are falling, check point and fit point statistics, and other information that help to refine the results on subsequent generations.

LPS Core Triangulation

• Improved RPC refinement by automatically detecting points with parallel rays and using these parallel rays as an additional constraint. This eliminates the need to manually delete points with no convergence.


ERDAS MosaicPro

• Improved memory management during Image Dodging and seamline generation.
• Faster seamline refreshing after editing and undo.
• Improved performance when running MosaicPro with a shapefile or an AOI with multiple polygons as a single output mosaic.


Defense Productivity Module

Image Slicer

• Added capability to input active area.
• Named all segment IDs from West to East regardless of how images were captured.
• Improved quality of segment boundary computation.
• Added an option to trim the nonoverlapping portions of the stereo segment image pairs.
• Changed the pyramid generation so it is driven by the IMAGINE preferences rather than always making pyramids after segmentation.

TFRD AMSD Calibrator

• Eliminated requirement that LPS be installed to generate an Image Slicer project file (.isproj).
• Added an option to output the rectangle and stereopair accuracy values from AMSD to a text file.


IMAGINE Radar Mapping Suite

• You can now use the Subset Processing Step to extract portions of the input images in InSAR from which to generate a DEM.
• Multilook factors are now displayed directly on the Reference DEM Processing Step of InSAR.
• Coherence window values are now edited within the Interfere Processing Step of InSAR, and a working Coherence image is generated and available for viewing.
• The Register step in InSAR has been renamed Coregister; in this step the match image is coregistered to the reference image.
• In the Coregister step, there are now two tabs named Coregistration Input Parameters and Coregistration Output Coefficients. The Coregistration Output Coefficients tab contains a CellArray reporting polynomial coefficients which describe the pixel shift of the match image along the x-direction and along the y-direction.
• In the Height step of InSAR, you can now select an interpolation method for the resample process as part of the chosen rectification method.


ERDAS IMAGINE – General

• Chooser icons, such as those for selecting colors or Annotation styles, now have two portions. Click the top portion to open the Chooser dialog directly; or click the bottom portion to open the selection menu as before.

IMAGINE GLT

• GeoPoint Annotation has a new Properties dialog where you can select a template for the text to be displayed. You can also configure options indicating where the label is to be placed and whether or not there is a leader line from the label to the point. The text template can be a dynamic coordinate (as before), an incrementing number, a static string or a combination of coordinate or incrementing number and static string.

IMAGINE Viewer

• In addition to the Annotation Alignment tool, the Annotation menu and tool palette now contain separate options for aligning annotation vertically or horizontally.
• You can now save a Footprint layer in the Viewer/GLT as Annotation or as a shapefile.

IMAGINE Composer

• Leading zeros are now displayed for coordinates in Grid/Ticks.
• You are now able to specify the text to be displayed for the units in an Annotation Scale Bar.
• When generating an Annotation Grid/Tick, you can now specify a rotation angle for the coordinate labels.

Monday, November 17, 2008

ERDAS on Twitter

If you want to follow ERDAS news and updates on Twitter, feel free to check us out at www.twitter.com/erdas.

Monday, November 10, 2008

Chandrayaan-1 Moon Probe

I've seen a lot of mainstream media coverage of India's Chandrayaan-1 moon probe recently. The significance for the mapping community is the mission goal of creating a 3D lunar surface model.

This page on the Indian Space Research Organization's site outlines the specifications of the various payloads:
1) Terrain Mapping Camera (TMC)
2) Hyperspectral Imager (HySI)
3) Lunar Laser Ranging Instrument (LLRI)
4) High Energy X-ray Spectrometer (HEX)
5) Moon Impact Probe (MIP)

Of particular interest are the TMC and the LLRI.

The TMC will have a 5 meter spatial resolution and pushbroom sensors in forward, nadir and reverse directions. This will allow for stereo data collection that can be used for topographic mapping applications. A concise summary document is here, which I would recommend for anyone interested in the topic.

The LLRI is an excellent complementary sensor to the TMC and, as one would expect from the name, is essentially an on-board LIDAR sensor. A summary document is available here.

It is also interesting to note that the Chandrayaan-1 mission is the first in a series of five planned missions. These are all outlined on the ISRO page here.

It'll certainly be interesting to see the data products Chandrayaan-1 will produce. To my knowledge this is the first lunar mapping project since the Clementine mission in 1994, which collected imagery of much of the lunar surface at 7-20 meters GSD with its camera (not in stereo, I believe). LIDAR data was collected during that mission as well.

Wednesday, November 5, 2008

Sensor Spotlight: CARTOSAT

Today's post will highlight the CARTOSAT-1 and CARTOSAT-2 sensors. CARTOSAT-1 was launched in 2005 and features two panchromatic cameras for stereo imagery capture. The cameras cover an approximately 30 kilometer swath at a resolution of about 2.5 meters. Many of the sensor specifications are located on the ISRO site here. For a good white paper on CARTOSAT-1, presented at the ISPRS conference in Beijing, see here. The authors outline the processing workflow for CARTOSAT-1 data in LPS, consisting of setting up the orientation, performing automatic terrain extraction, and finally orthorectifying the images. While the processing was performed on CARTOSAT-1 data, the workflow would also be very similar for CARTOSAT-2.
The paper highlights some software improvements we implemented between LPS 9.0 and 9.2 SP1, as some of the initial processing was performed in LPS 9.0. The improvements specifically pertained to the LPS ATE (Automatic Terrain Extraction) module, where we added quality improvements in the correlator. The authors found the ATE results with LPS 9.2 SP1 to be acceptable, with errors generally under 2 m. It is also important to note that we made Adaptive ATE available for satellite sensor models with the recent release of LPS 9.3, so I suspect the accuracy could be improved even further.

Here is another ISPRS paper from 2007 outlining some advances in CARTOSAT-1 data processing. It also highlights the use of LPS for the terrain processing component of the workflow.

CARTOSAT-2 was launched in early 2007 and also carries panchromatic imaging, with a greatly improved resolution of under one meter. A spec sheet from ISRO is here. The Wikipedia article makes some interesting statements about resolution (80cm) and pricing, but I haven't been able to verify these statements. This paper, also from the ISPRS conference in Beijing, has some great information on both satellites, as well as some further details on the terrain processing workflow. LPS was used as well in this paper, although it does not state the version number.

Note that in addition to using LPS ATE for automatic terrain extraction, the LPS Terrain Editor can be used to view the imagery in stereo and perform interactive terrain editing (from manual compilation to editing an automatically correlated surface).
Note that in addition to the CARTOSAT-1 and 2 sensors, there is also a CARTOSAT-2A sensor that is reserved for military use.

Monday, November 3, 2008

ERDAS Photogrammetry Movies

Something about the new ERDAS website that I would like to highlight is that there are now movies available for most of the products. If you navigate to the product pages, they are typically under the "Demo" tab.

I've embedded one of the LPS Terrain Editor movies below. The movie basically goes through the process of eliminating terrain points located on the tops of buildings to produce a bare earth terrain model suitable for orthorectifying the imagery. It is important to note that the Terrain Editor contains a stereo viewer (for 3D measurement) so the left and right images are, in this case, displayed in anaglyph mode.

As we expand the site we'll continue to add new movie content!



Friday, October 24, 2008

How to Access and Download LPS 9.3 and other ERDAS Software

I've received a lot of questions from people asking how they can access and download the new LPS 9.3 release. Since we've changed the methodology for software access, I've outlined the new procedure below. This example covers the download process for LPS 9.3, but it is essentially the same for all the products on our new website.

The first step is to go to the Product page of the software you would like to download. For LPS, this is here. Note that all the products can be accessed from the product page, and are organized into general functions: Author, Manage, Connect, and Deliver. Clicking on a cube (e.g. Author) at the top will filter the product list.
Next, click the downloads tab on the right-hand side of the tab list.

After clicking the download tab, the content will change and you'll see a download link. After clicking on this you will be directed to log in. Please note that this log in only pertains to software download access, and you will have to register with some basic details.

After registering, an email is sent with your login details. After you login, you can go to the product download tab again and click on the download link once again. For LPS you will see a message saying that due to the large file size (~600MB for LPS) a temporary ftp login has been created for you and that you will be emailed the ftp location and login details. The email is sent immediately and you should receive it within seconds of hitting the download link.

Next, log on to the FTP site and download the software. Without a license LPS will work in "demo mode" for 30 days, but you can also contact your local representative for an evaluation license (or any questions). You may also contact your local representative for the software package on DVD.

Monday, October 20, 2008

Fiducial Marks: An Explanation

I tend to get a lot of visitors finding their way to this blog by searching for information on fiducial marks. So, why not outline what they are and why they are important?

In the context of aerial photography, fiducial marks are small registration marks located along the outside of an aerial photograph. There are typically four or eight numbered marks, which look like this (note: this is number 1):



So what are they for? During the camera calibration process, the positions of the fiducial marks are measured precisely. The principal point of the image can be derived from the intersection of the fiducial marks. See here for an interesting paper on the development of camera calibration methods. The results of the camera calibration are usually stored and reported in a document (typically a USGS camera calibration report in the USA), and some organizations include their camera calibration reports on their websites: here is an example.

Fiducial marks are also important in the early stages of photogrammetric processing, when the system establishes the relationship between "film" coordinate space and "pixel" coordinate space (solving for interior orientation). This process involves either manually or automatically measuring the fiducial marks.
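A simplified sketch of that pixel-to-film mapping, assuming an exact affine fit from just three fiducials (real software measures more marks and solves by least squares; the scan resolution and fiducial coordinates below are invented):

```python
# Interior-orientation sketch: fit the affine transform from pixel
# coordinates of three measured fiducials to their calibrated film
# coordinates (mm), then map any pixel into film space.

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def solve_affine(pixels, film):
    """Exact 6-parameter affine from three correspondences (Cramer's rule)."""
    A = [[1.0, px, py] for px, py in pixels]
    d = det3(A)
    coeffs = []
    for axis in (0, 1):                      # film x, then film y
        row = []
        for col in range(3):
            M = [r[:] for r in A]            # replace one column with targets
            for i in range(3):
                M[i][col] = film[i][axis]
            row.append(det3(M) / d)
        coeffs.append(row)
    return coeffs                            # [[a0, a1, a2], [b0, b1, b2]]

# Invented example: 20-micron scan, fiducials at known film positions (mm).
pixels = [(100.0, 100.0), (11100.0, 100.0), (100.0, 11100.0)]
film   = [(-110.0, 110.0), (110.0, 110.0), (-110.0, -110.0)]
a, b = solve_affine(pixels, film)

px, py = 5600.0, 5600.0                      # pixel at the image center
fx = a[0] + a[1]*px + a[2]*py
fy = b[0] + b[1]*px + b[2]*py
print(round(fx, 3), round(fy, 3))            # 0.0 0.0 -- the principal point
```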

Finally, it is important to note that fiducial marks are only used in film cameras, so you'll only see them on scanned aerial photography. Digital cameras use different camera calibration techniques, and the USGS runs a research lab dedicated to digital camera calibration.

Friday, October 17, 2008

Sensor Spotlight: Leica Geosystems ADS80 Airborne Digital Sensor

I’ve touched on ADS40 sensor technology in a few different posts, but the focus of today is the new ADS80 sensor. The ADS80 is a pushbroom airborne sensor that was formally announced and highlighted at the ISPRS conference this past summer in Beijing.

See here for an interesting discussion on the transition from analogue to digital processing, as well as pushbroom sensors. The new sensor represents a solid advancement, and arguably delivers the best quality imagery of any of the commercial large-format airborne sensors.

But what is the difference between the ADS80 and the previous version, the ADS40? This post will cover the differences and explore some of the specific technical improvements.
Firstly, there are several overall design improvements. There is a new design for the data channel, with overall data throughput increasing from 65 MB/s to 130 MB/s. The fastest cycle time has increased from 800 Hz to 1000 Hz (this allows for faster flying speeds than previously possible), and there are data compression options for 10 bit and 12 bit data, as well as the raw data.

The ADS80 also features a new design for the Control Unit (called the CU80). The new Control Unit is smaller and contains an integrated slot for two Mass Memory units. Here's what the new CU80 looks like:

The new system also introduces a new solid state Mass Memory unit (MM80). It is smaller, weighs only 2.5 kg, and has a few different options for data storage modes: single volume, joined volume, and in-flight backup. The joined volume of the two MM80 units offers the greatest data throughput as well as the largest storage capacity, which is ideal for large-area collection missions.

For direct georeferencing applications, IPAS comes embedded in the control unit as well. This is critical for image collection missions in remote areas where ground control may not be possible: this is important in applications such as disaster mapping, remote area mapping (e.g. certain pipeline mapping applications) as well as surveillance operations.

Overall, the system weight has been reduced by 26 kg! It also contains new periphery equipment, including a new GPS/GLONASS Antenna.

Lastly, what does the imagery look like? In short, it looks fantastic. Here's a sample of imagery collected at 5cm GSD over Lucerne, Switzerland earlier this year (click on the image for a larger view).
More information, including both a product brochure and data sheet, is available from the Leica Geosystems website. Also note that a new software package for ground processing, called XPro, will be released quite soon.

Special thanks to Ruediger Wagner, ADS Product Manager at Leica Geosystems, for providing details on the new sensor.

Tuesday, October 14, 2008

ERDAS and Oracle: Building Geospatial Business Systems

Last Wednesday I had the chance to travel up to Brussels to sit in on our ERDAS & Oracle technology seminar: "Building Geospatial Business Systems". It was a full-house and the day was full of both ERDAS and Oracle product information, as well as customer presentations of business solutions they have created by synthesizing technology from both companies.

Some of the focus areas included:
- An introduction and discussion on ERDAS APOLLO, which we just announced last week.
- The ERDAS commitment to OGC standards, including the importance and priority of interoperability.
- Oracle Spatial in Europe, presented by EMEA Oracle Spatial Product Manager Mike Turnill.
- A discussion on the ERDAS product roadmap and direction.

Here are a few snaps from my camera phone. Overall it was a great event: full turnout, great discussions, and a high level of interaction and information exchange.


Thursday, October 9, 2008

Satellite Archeology with Quickbird

Check out this article from MSNBC for an interesting application of remote sensing. The article highlights the fact that remote sensing can be an excellent tool for locating historical sites that cannot be detected from the ground. The arrows in the image below show the outlines of the pyramids.

If you're interested in this topic, here is an excellent paper by Dr. Armin Gruen from ETH on "New Technologies for Efficient Large Site Modeling."

Monday, October 6, 2008

Now Released: LPS 9.3

We are pleased to announce the release of LPS 9.3 today! This is a major milestone for ERDAS, as we're launching new releases across all product lines today, which is a first. Additionally, we've completely updated the website at www.erdas.com. Each product can be downloaded from the site from the various product sections via the "Downloads" tab. For LPS 9.3, go here and click on the Downloads tab. Note that you'll need to register to access the download. You can also get a new 9.3 license from the link on the www.erdas.com front page.


The main theme of the LPS 9.3 release is 3D feature extraction, with the introduction of "PRO600 Fundamentals for PowerMap XM": PRO600 Fundamentals is a streamlined stereo feature extraction software. Basically we've made PRO600's PROCART module (3D feature extraction) available for Bentley PowerMap XM, which is a GIS-oriented application for map production in a 2D or 3D environment. PRO600 Fundamentals also includes LPS Stereo.

I wrote about a couple of the new benefits in previous posts, but I've also included the entire list of improvements below.

In LPS Core, the following improvements have been added:

  • Export to KML: This new LPS 9.3 feature exports an LPS block file or group of block files to the KML (keyhole markup language) file format. This feature allows for the export of both image footprints as well as point measurements associated with the block file.
  • Improved Automatic Point Measurement (APM) point correlation quality in cases with less than 50% overlap, variable flying height, and in sidelap areas.
  • Added support for NITF NCDRD format in the orbital pushbroom QuickBird/WorldView model.
  • The Triangulation Point Review user interface has been extended to support Satellite Sensor Models.
  • New Support for Image Chipping for NCDRD Sensor Model.
  • Registration-free .NET and COM: The new registration-free LPS allows users to install different versions of LPS on the same machine.
  • Synchronized units of measure for the Average Flying Height (Frame Camera) and Average Elevation (Orbital Pushbroom) defined in the Block Property Setup with the units reported in the block file.
  • The Average Elevation, Minimum Elevation and Maximum Elevation units in RPC Model projects are now displayed in the project vertical units in the Frame Editor.
  • Synchronized units of measure for GCPs and residuals in the Refinement Report.
  • Enhanced Importer for ISAT projects with multiple flight lines.
  • Support for EMSEN Hand Wheels.
  • Added the LHN95 Geoid model (Switzerland).
  • Added Latvian Coordinate System (LKS-92) support, which includes the Latvian Gravimetric Geoid (LGG98).

LPS Automatic Terrain Extraction (ATE)

  • DEM Accuracy: Added an option to enter a tolerance in the vertical units of the terrain source to set the accuracy range for the predicted surface value of the area. The Min and Max Z Search Range will change with respect to the accuracy value entered. Providing a reliable tolerance will result in better matching quality.
  • Added support for all currently supported sensors in Adaptive ATE (not just frame cameras and ADS sensors).
  • Reliability has been improved with better memory handling.

LPS Terrain Editor

  • Drive to Control Point: In 9.3 a new panel in Terrain Editor enables the display of GCPs and tie points associated with the currently loaded block file. An additional new dialog called “Control Point Display Settings” allows users to filter points in the cell array and choose the rendering settings for the Ground Control Points panel. The user can load some or all of the image pairs that a GCP is projected into. This new tool lets users check the quality of the DTM with respect to GCPs, check points, and tie points. This tool can also be used for visual inspection of triangulation results after a bundle block adjustment.
  • Post Editor hotkeys: allow a user to quickly move through points by using keyboard arrow keys and adjusting the Z value for selected points in gridded terrain files.
  • Enhanced JPEG image display.

ERDAS MosaicPro

  • Save to Script Functionality: With the release of LPS 9.2, users were able to batch script the entire MosaicPro process and then execute the script from an MSDOS prompt. In 9.3 it is possible to generate the batch script automatically from the MosaicPro user interface. The script generated from MosaicPro may also be used as a template which can be easily modified. This new feature builds a script file from a combination of the currently open MosaicPro project and/or from previously saved settings from image dodging, color balancing, seam polygons, and exclusion areas. The MosaicPro process can then be run at a scheduled time, in batch mode, from the MSDOS prompt.
  • Improved performance for seam polygon generation with "most nadir", "geometry", and "weighted" options.
  • Various reliability improvements.

STEREO ANALYST for ERDAS IMAGINE

  • Extend Features to Ground: this new feature uses a 3D Polygon Shapefile and extends the segments of each polygon (as faces) to the ground to form solid features (e.g. Buildings).

PRO600

  • Ability in PRODTM for the user to specify the extent within which to load terrain data. This allows very large terrain datasets to be used in PRODTM, in a piece-wise manner.

ORIMA

  • For triangulation projects using ADS40 data, multiple ADS40 sensors flown at the same time are now supported. This required the change of some file formats. This new approach leads to shorter project creation times.
  • CAP-A Release 8.10: New Handling of Orientation Data for ADS40. This new data handling has two primary advantages:
      o The amount of disk space to store the project is drastically reduced.
      o The startup time of CAP-A is much faster as there is no need to read the *.ori files and find the corresponding orientation for each point.

Defense Productivity Module (DPM)

  • Users in classified environments can now process NGA MC&G imagery in LPS photogrammetric workflows if the DPM is installed. This support includes access to AMSD ground and imagery points.
  • A new Image Slicer has been created to facilitate cutting of the original imagery into smaller segments for extraction. After slicing, an RPC model may be generated to provide support in ERDAS products without a local DPM license. If an NITF module is licensed, the RPC segments may be exported to NITF with RPC00B tags for interoperability with a wide variety of software packages.

Saturday, September 27, 2008

LPS 9.3 Preview: Control Point Review Tool

Last month I highlighted the ability to export LPS Block Files to KML. I'll do a full round-up of the new functionality as soon as we release, but for today I will focus on another new feature: Ground Control Point Review in the LPS Terrain Editor module.

The background for this feature came from many requests we received for the ability to review points either by themselves in stereo or as a means of checking terrain accuracy. Thus, we added a new panel in the Terrain Editor "View" drop-down menu: View > Panels > Ground Control Points.

In the screen capture below, I have launched the Terrain Editor and have opened the GCP Review Tool. I haven't loaded any imagery. The GCP Review Tool automatically loads in all points (ground control points and tie points) from the Block File that the Terrain Editor was launched from. You can see from the column settings that it provides some basic information such as ID, point type, coordinates, the number of images the point intersects, and the description.

In the Terrain Editor, the usual method for loading stereo pairs is to drag and drop them from the image pair list (on the left above) into the stereo viewport. One of the nice features of the GCP Review Tool is that you can automatically load images by double clicking on a particular point (double-click anywhere on the row). This is beneficial because it removes the need to know exactly which stereo pair to pick when you would like to review a particular point. In the screen capture below, I loaded a stereo pair associated with Point ID 30 by double-clicking on it. You can see that this is a full GCP, and it is even possible to see the target in the imagery.

Point symbol and label graphics can be turned on or off by using the icons on the bottom left of the panel. Additionally, the "Settings" button can be used to modify the behaviour of the tool. For example, it is possible to filter the points (e.g. only display full ground control points). It is also possible to customize the graphics (size, color, and label font).

Thursday, September 18, 2008

ERDAS UK GeoImaging User Group Conference 2008

If you're in the UK you may be interested in checking out the GeoImaging User Group Conference 2008 hosted by InfoTerra. The event is in Oxford on September 29th and 30th and will cover the entire range of ERDAS desktop and enterprise products.

I will be there on the 29th to deliver a presentation on our photogrammetry product line. The main focus will be on the upcoming LPS 9.3 release, but I'll also cover productivity tips, photogrammetric workflows, and highlight the directions we're going in. Please feel free to sign up and attend if you are in the region!

Wednesday, September 10, 2008

Mapping Standards Directory Update

I've added a few new sites to the mapping standards directory. The new additions include a mapping manual from Minnesota DOT, a presentation on accuracy standards for mapping projects in North Carolina (from the NC ASPRS chapter), and various mapping specs from the Nova Scotia Geomatics Centre. All three are in the "Provincial/State Level Standards" section.

In particular the Surveying and Mapping Manual from Minnesota DOT provides a wealth of information, and it is relatively recent (updated in 2007).

Friday, September 5, 2008

Anaglyph Generation in ERDAS IMAGINE

Last week Adam Estrada over at the GeoPDF blog wrote a great post regarding anaglyph GeoPDFs. He also posted a link on how to create anaglyphs manually and mentioned that they could also be created in ERDAS IMAGINE. I will walk through the steps on how to perform this operation in IMAGINE.

First of all, you need to open up Image Interpreter, choose Topographic Analysis, and then select "Anaglyph". Here is what the Topographic Analysis toolset looks like.
After selecting the "Anaglyph" tool, it is necessary to specify a few parameters. Since the anaglyph effect is created by producing an offset based on relief, a terrain file needs to be specified. There are various options, including the ability to exaggerate the relief. Next the input image needs to be defined, along with the output image and format. One flexible aspect of the tool is the ability to define the color for the left and right "lens" of the anaglyph glasses. Most anaglyph glasses are red and cyan, so the default option is a Red / Green and Blue (Cyan) combination for the left eye and right eye. It is also possible to define the output bands along with defining a subset definition (e.g. if you only want to produce a small anaglyph area from a large image mosaic).
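
The offset-by-relief principle can be sketched in a few lines. This is a hypothetical illustration, not the IMAGINE implementation: the red (left-eye) channel is shifted horizontally in proportion to normalized elevation, while green and blue (which together read as cyan) carry the unshifted right-eye view.

```python
def make_anaglyph(gray, dem, exaggeration=1.0, max_shift=8):
    """Build a red/cyan anaglyph from a grayscale image and a DEM.

    gray and dem are 2D lists of equal size. The left-eye (red) channel
    is shifted horizontally in proportion to terrain height, which is
    what creates the depth effect behind red/cyan glasses.
    """
    rows, cols = len(gray), len(gray[0])
    lo = min(min(row) for row in dem)
    hi = max(max(row) for row in dem)
    rng = hi - lo
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Pixel shift grows with normalized elevation (and exaggeration)
            shift = 0 if rng == 0 else int(
                (dem[r][c] - lo) / rng * max_shift * exaggeration)
            src = min(cols - 1, c + shift)
            # (red, green, blue): red is the shifted left-eye view,
            # green + blue (cyan) carry the unshifted right-eye view
            row.append((gray[r][src], gray[r][c], gray[r][c]))
        out.append(row)
    return out
```

Batch-processing an entire city's imagery, as described below, would amount to looping this kind of routine over image/DEM tile pairs.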

Here's what the dialogue looks like with the settings I used:

And here is what the final anaglyph image looked like:

Note that you may need to click the link to truly see the effect. Also note that the effect can be modified with the "exaggerate" option - this can be used to increase the effect (which may be necessary depending on the scale of imagery being processed). While it is certainly possible to "manually" create anaglyphs in a number of packages, the Anaglyph tool in IMAGINE adds a degree of automation that isn't available in other solutions. For example, if you want to serve up anaglyphs for an entire city on a web application, you can use the IMAGINE batch tool to process several hundred (or thousands) of images - which beats processing them one-by-one.

Monday, September 1, 2008

Upcoming GeoEye-1 Satellite Launch

The GeoEye-1 satellite launch on September 4th has been getting a lot of media attention lately. Instead of regurgitating the specs, I'll focus on a few other details.

First of all, check out the Launch Site here. It has a count-down and informs us that there will be a live video stream of the launch - should be interesting to watch.

There were several reports today about a deal between GeoEye and Google, whereby Google will be the exclusive online mapping site with access to the imagery.

Note that LPS will support both the rigorous and RPC models for GeoEye-1 in both LPS 9.2 (the current version) as well as LPS 9.3 (coming soon, stay tuned).

Friday, August 29, 2008

Historical Aerial Photography at the Serge A. Sauer Map Library

For today's post I'd like to highlight the Serge A. Sauer Map Library. Housed at the University of Western Ontario in Canada, the Map Library has a large repository of maps in both hardcopy and digital form. The library also has a large stock of aerial photography, and recently made an air photo project flown in 1922 available online. The project page allows you to view and download the photographs via a mosaic or through a Google Maps interface. From the drop-down menu on the Google page, it looks like aerial sets from other years are coming soon.

The steps used to produce the mosaic are also available. While the process was very "manual" (rotating and scaling the images until they fit a basemap), the end result is a good looking historical mosaic!

Tuesday, August 19, 2008

What's New in LPS 9.3 Webinar

You should be seeing a press release on www.erdas.com shortly, but I wanted to highlight the fact that we are planning a webinar on the new release next week. While we just released LPS 9.2 earlier this year, the LPS 9.3 release is now in beta and we are looking forward to getting it on the market soon.

I highlighted one of the new features, KML Export from the LPS Block File, in a previous post. The webinar will highlight the new enhancements and solutions, and include a live demonstration of certain capabilities.

Anyways, you can check for more details including registration for the free webinar on the ERDAS site in the next 24 hours.

Friday, August 15, 2008

Sensor Spotlight: Thailand Earth Observation System (THEOS)

You might have noticed that last year we released a PR regarding support for the THEOS sensor model in both LPS and ERDAS IMAGINE; the satellite itself was due to launch last week. Unfortunately the satellite launch has been postponed, but we're looking forward to a successful launch once the new date has been set.

A THEOS engineer, Suwan Vongvivatanakij, put together an excellent overview presentation for the CEOS Working Group on Information Systems and Services (WGISS) for their meeting last February. I haven't seen a lot of detailed information about the satellite on the web and thus far this is the most comprehensive info I've seen.

The satellite has panchromatic and multispectral modes, with a 2 meter resolution at nadir for panchromatic and 15 meters for the multispectral mode. For mapping, stereo applications will be supported since there are three (forward, nadir, reverse) look angles.

If you hail from Thailand (or can read Thai) you can see the THEOS program homepage here: http://theos.gistda.or.th/home.html

Wednesday, August 13, 2008

Mapping Standards: A Directory

A few months ago I wrote a post regarding mapping standards and highlighted a few links. Personally I find this a critical topic in the mapping industry due to the massive growth of available data. While the quantity of available data is growing daily, there are naturally questions regarding data quality and lineage. How many geospatial professionals know the true accuracy of their data?

The importance of accuracy depends on the particular use case geospatial data is developed for. For example, being a few meters off might be ok for a local urban planning department that only needs to know the relative position of buildings and roads but could result in costly errors for, say, a Department of Transportation that requires engineering-grade accuracy.
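
To make "accuracy" concrete, the FGDC's National Standard for Spatial Data Accuracy (NSSDA) reports horizontal accuracy at the 95% confidence level as 1.7308 × RMSE_r, computed from independent check points (assuming the X and Y error components are roughly equal). A minimal sketch:

```python
import math

def nssda_horizontal_accuracy(dx, dy):
    """NSSDA horizontal accuracy at the 95% confidence level.

    dx, dy: lists of coordinate differences (checkpoint minus mapped
    position) in ground units. Assumes RMSE_x ~= RMSE_y, per the
    FGDC NSSDA specification.
    """
    n = len(dx)
    # Radial RMSE over all check points
    rmse_r = math.sqrt(sum(x * x + y * y for x, y in zip(dx, dy)) / n)
    return 1.7308 * rmse_r
```

For example, check-point misfits of ±0.3 m in X and ±0.4 m in Y give a radial RMSE of 0.5 m and a reportable accuracy of about 0.87 m at 95% confidence.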

Since I wrote the last post I've had a hard time finding a single place where I can find information about mapping/accuracy standards. So, I thought it might be helpful to start a simple directory of relevant sites. Please feel free to check it out and let me know if you're aware of any other sites that belong on the list. I know I'm only scratching the surface, so additional state, national, or private specifications and standards additions would be appreciated.

For those interested in production-level mapping, be sure to check out the "GeoBC Crown Registry and Geographic Base" (that is a mouthful) standards and specifications page. It has a wealth of content, from digital camera specifications (e.g. down to the nuts and bolts of camera calibration specs) to aerial triangulation, DEM, orthophoto specs and more. They've done a great job of putting it together all on one cohesive site.

Monday, August 11, 2008

DigitalGlobe on NBC: A Closer Look at 3D Olympics

I've seen a couple of posts regarding DigitalGlobe's Beijing Olympic Games coverage. In my opinion this is a great consumer-grade marriage of geospatial technology and mainstream media. On the DG homepage "DigitalGlobe on NBC" is prominently displayed, which takes you to http://www.digitalglobe-aegistg.com/, their site with EAgis Technologies (a joint effort between DG, EAgis, and NBC).

The posts above comment on the technological underpinnings of the 3D scenes offered on the new site. One common theme is how a combination of different technologies (photogrammetry, 3D modeling, satellite imagery, 3D visualization) can be used together to provide a powerful and immersive viewing experience.

The http://www.digitalglobe-aegistg.com/ site allows visitors to download 3D PDFs, a KMZ, and perspective views of various Olympic sites. 3D PDFs have been around for a while, but this is the first time I've had a chance to examine one: very cool, although not the same experience as a KML file in a virtual world. Compare for yourself below:

KML in Google Earth:

3D PDF:
Aside from the availability of 3D example data, the site also provides some insight into the creation of the dataset. The "How Can This Be Possible?" heading expands to provide a high-level introduction to the technology and workflow. The workflow is divided up into four parts with a graphic associated with each one: the "3D wireframe" generation, imagery capture, feature extraction and extrusion, and fully textured 3D model generation.

"3D Wireframe" Generation

The site mentions that the wireframe represents the earth's terrain, and that it was derived from two DG satellite images. Sounds like classical photogrammetry! Two images associated with a sensor model can be viewed in stereo to extract (measure) 3D positions (e.g. points with an accurate XYZ location). For high-accuracy applications, relying on the satellite sensor model alone may not be enough, and there would be a need to collect and measure ground control points and then run through the triangulation process. However, once that is done there are numerous applications that may be used for terrain extraction.

The screen capture of the wireframe is actually a Triangulated Irregular Network (TIN), which is a more efficient means of modeling terrain than a raster DEM. TINs can be generated automatically using point matching algorithms or compiled by hand - which can be a very time-consuming process.
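
Part of what makes a TIN efficient is that interpolation inside it is trivial: within any triangular facet, the elevation at a point is just the barycentric-weighted average of the three vertex elevations. A minimal sketch of that facet interpolation (a hypothetical helper, not tied to any particular package):

```python
def tin_interpolate(tri_xy, tri_z, x, y):
    """Linearly interpolate Z inside one TIN facet via barycentric weights.

    tri_xy: three (x, y) triangle vertices; tri_z: the three vertex
    elevations; (x, y): the query point, assumed to lie in the triangle.
    """
    (x1, y1), (x2, y2), (x3, y3) = tri_xy
    # Standard barycentric-coordinate formulas for a 2D triangle
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * tri_z[0] + w2 * tri_z[1] + w3 * tri_z[2]
```

At the centroid of a facet with vertex elevations 100, 110, and 105, for example, this returns the simple average, 105.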

Why is the TIN important? The terrain represents a fundamental part of an immersive 3D scene. If it isn't accurate then the scene will not look realistic... An inaccurate terrain model could also cause problems in the image processing (orthorectification) part of the workflow.

It is also important to note that terrain can come from a number of sources: manual compilation, automatic correlation, LIDAR, IFSAR, and other sources.

Imagery Capture

This screen capture shows imagery draped over the terrain. The imagery would have come from QuickBird or WorldView-1 satellites. For a good-looking scene high-resolution satellite imagery or aerial photography is important. Sometimes satellite imagery is useful, but often aerial photography is the best solution. Why? If imagery is captured from a sensor mounted on a plane, the data acquisition organization has full control over the scale/resolution of the photography. Flying low equals higher resolution...
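
The flying-height/resolution relationship follows directly from the frame-camera scale equation: ground sample distance (GSD) is pixel size times flying height over focal length. A quick sketch with illustrative values only:

```python
def ground_sample_distance(pixel_size_um, focal_length_mm, flying_height_m):
    """GSD in meters for a frame camera.

    GSD = pixel size * flying height above ground / focal length,
    with units converted to meters throughout.
    """
    return (pixel_size_um * 1e-6) * flying_height_m / (focal_length_mm * 1e-3)
```

A 9-micron pixel behind a 120 mm lens flown at 2,000 m above ground yields a 15 cm GSD; halve the flying height and the GSD halves too, which is exactly why "flying low equals higher resolution."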

Another important note is that the terrain model discussed in the previous step would likely be used to orthorectify the image. This will result in a geometrically accurate orthophoto with real-world coordinates. The accuracy of the terrain is important: if there are large errors the 3D features discussed in the next step may not appear in the correct position if they were extracted in a stereo feature extraction system (building footprints digitized off the orthos and then extruded would be ok though).
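
The link between a terrain error and a horizontal error in the resulting ortho is the classic relief-displacement relation, d = r * dh / H. A small sketch:

```python
def relief_displacement(radial_dist_m, height_error_m, flying_height_m):
    """Horizontal shift on an ortho caused by a terrain height error.

    d = r * dh / H, where r is the ground distance from nadir, dh the
    elevation error, and H the flying height above ground. All meters.
    """
    return radial_dist_m * height_error_m / flying_height_m
```

For a feature 500 m from nadir under a 10 m terrain error at 2,000 m flying height, the feature lands 2.5 m from its true position - enough to matter for engineering-grade work.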

Feature Extraction and Extrusion

The text for this segment talks about "special tools" being used to determine "footprints" of buildings and then extruding them. This might work for some rectangular buildings with flat roofs, but it is clear that not all the downloadable content on the site was derived from automatic extrusion. There are a couple of ways to generate 3D buildings. The quick and dirty way (extrusion) involves digitizing the building footprints in 2D from a digital orthophoto. Then you need to tag the building polygon with an attribute to represent height. This is a fairly straightforward procedure if a digital surface model (DSM) of the area is available. The drawback of extrusion is that, although quick, it may not be accurate. Extrusion assigns one elevation value for the entire building (roof) area, so buildings with pitched or complex roof structures will not be modeled accurately.
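
The footprint-plus-height idea sketches easily. This hypothetical helper builds a flat roof, a floor, and one wall per footprint edge - which also makes the limitation obvious: every roof vertex gets the same Z:

```python
def extrude_footprint(footprint, base_z, height):
    """Extrude a 2D footprint into a simple 3D solid.

    footprint: list of (x, y) vertices in ring order.
    Returns a dict of faces; each face is a list of (x, y, z) vertices.
    """
    top_z = base_z + height
    floor = [(x, y, base_z) for x, y in footprint]
    roof = [(x, y, top_z) for x, y in footprint]  # single Z for whole roof
    walls = []
    n = len(footprint)
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        # One rectangular wall per footprint edge
        walls.append([(x1, y1, base_z), (x2, y2, base_z),
                      (x2, y2, top_z), (x1, y1, top_z)])
    return {"floor": floor, "roof": roof, "walls": walls}
```

In a real workflow the `height` attribute would come from differencing a DSM against a bare-earth DEM over each footprint.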

Photogrammetric feature extraction can model buildings with greater detail, since specific building detail can be modeled in stereo by viewing and measuring buildings in 3D. However, photogrammetric feature extraction is performed from a "top-down" perspective, so features like balconies may be difficult to model. This is where CAD or CAD-like 3D modeling packages and ground-based photography can help. One workflow for 3D city construction is to photogrammetrically extract the buildings and then import them into a CAD package to add more detail to the models. Ground photos can also provide photo-realistic image texture, as can aerial photography, but capturing all four sides of a building can be difficult without planning the acquisition flight with a very high degree of overlap - which can add to the project cost (more fuel, more data to process). In addition, aerial photography may not be able to capture street-level image texture or areas with dense skyscrapers.

At any rate, there are many ways to go about generating the 3D buildings - it all depends on the level of detail required and the project budget...

Textured 3D Model Generation

As I mentioned above, texture can be applied to buildings from both ground and aerial photography. There's a number of tools that can be used to texture the buildings; here is an example video of how this can be done in SketchUp. There's a number of 3D modeling applications out there to do this sort of work. Again, production costs rise in accordance with the level of detail applied to a building. A "perfect" building cannot be easily automated and can be laborious to produce in sophisticated packages such as Autodesk's 3ds Max.

Looking at the Beijing Institute of Technology model it is clear that a lot of effort went into building it. Not only does the texture look great, but there is a lot of 3D modeling that has been done in a professional 3D modeling system. The rounded rooftop would be very difficult to model in a photogrammetric feature extraction system, and the model contains detail of the roof overhangs - which would likely come from the use of ground photography.

At any rate it is nice to see this technology getting some mainstream media coverage. Photogrammetry and 3D mapping have been around for a long time, but the mass-market popularity of visualization packages such as Google Earth is exposing this technology to a much broader audience.

Wednesday, August 6, 2008

LPS 9.3 Preview: KML Export

We're getting late in the release cycle for the upcoming LPS 9.3 release (the beta testing phase has now started!) so I thought I'd start previewing some of the new functionality we're releasing.

An increasing number of geospatial applications are supporting KML (although the word "support" can mean a lot of things), so mentioning that we'll be able to export the LPS Block File as a KML file isn't earth-shattering news. However, KML in the context of photogrammetric applications is relatively new and there are some interesting implications.

First I'll show how the exporter works and then get into what some of the uses are. Here's a screen capture of a small photogrammetric project in the LPS Project Manager, in the area of Waldkirch, Switzerland. As you can see it is a relatively "complete" project. There are triangulated images, GCPs and tie points, some DTMs, and orthophotos. From the Project Manager, we have a new drop-down entry in the "Tools" section called "Export to KML". Click on this and the following dialog appears. The dialog allows you to choose which elements of the photogrammetric project (Block File) you would like to export. Check the various boxes and then hit the "Export" button to generate the KML file.

For this dataset I've uploaded the output KML file here. Feel free to download it and check it out. Note that the various photogrammetric data elements (e.g. Ground Control Points) can be turned on and off. Here is a screen capture of the file in Google Earth.
So this brings us to the question, why is this relevant? The first thing that comes to mind is project tracking and status reporting. Photogrammetric/mapping projects are increasingly completed in disparate geographic areas. This can make project tracking a challenge. While there's a mixed bag of current approaches to project tracking, a KML file can provide a relatively compact (especially if you leave out the tie points) and visual representation of what parts of the project are complete. For example, an organization with an office in the USA that is working with a partner in another part of the world could request daily status updates for a large digital ortho project. By looking at the "orthos" layer, the project coordinator could not only see how many are complete (like they may currently do with MS Excel or other spreadsheet apps) but also see a visual of the completed project areas. Thus, they could see if the "challenging" parts of the project had been tackled yet (e.g. rugged terrain or urban areas) and manage accordingly.
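
It takes surprisingly little to produce footprint KML. This is a toy sketch, not the LPS exporter: each image footprint becomes one Placemark whose polygon ring closes back on its first vertex, as KML requires.

```python
def footprints_to_kml(footprints):
    """Serialize image footprints as a KML string.

    footprints: dict mapping image name -> list of (lon, lat) ring
    vertices. One Placemark per footprint; altitude defaults to 0.
    """
    placemarks = []
    for name, ring in footprints.items():
        # KML LinearRings must be closed, so repeat the first vertex
        coords = " ".join(f"{lon},{lat},0" for lon, lat in ring + ring[:1])
        placemarks.append(
            f"<Placemark><name>{name}</name><Polygon><outerBoundaryIs>"
            f"<LinearRing><coordinates>{coords}</coordinates></LinearRing>"
            f"</outerBoundaryIs></Polygon></Placemark>")
    return ('<?xml version="1.0"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")
```

A status-reporting script could regenerate a file like this nightly from the production database and email it to the project coordinator.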

I'll talk about this a bit more in future posts, as well as highlight some of the other solutions we've been working on this year. We're certainly looking forward to getting the new release out!

Friday, July 25, 2008

Upcoming ERDAS Webinar: Mosaicking

I'll be doing a webinar next week on mosaicking, which was just announced on the ERDAS website yesterday. The main focus will be planning considerations for frame photograph mosaicking, processing techniques (radiometry, image manipulation, seams, output considerations), and final mosaic product generation. This may be of interest to both LPS and ERDAS IMAGINE users (along with anyone generally interested in mosaicking), since I'll be using MosaicPro for the processing: an add-on module to both LPS and IMAGINE.

The registration page is here. We'll be hosting the webinar at both 3AM and 11AM EST, so please feel free to join us for either session!

Monday, July 21, 2008

Photogrammetry at the Acropolis

After a few weeks offline I'm now back and writing from Liege, Belgium. During my time off I had the opportunity to visit the Acropolis in Athens, Greece. While walking up to the Parthenon I noticed there was a terrestrial laser scanner set up and operational - although unfortunately I didn't get any photos. But that was enough to get me wondering what the project was about. At the top of the Acropolis I found a sign with a short description of the project (photos below). Since it is difficult to read I have reproduced the text below:

DATA ACQUISITION FOR THE PHOTOGRAMMETRIC RECORDING OF THE ACROPOLIS

The Acropolis Restoration Service carries out the project of geometric documentation of the Acropolis hill, the circuit Wall and the Erechtheion, using photogrammetric methods together with 3-dimensional scanning.

All the information to emerge is to be entered in a Geographic Information System (G.I.S) that will be available through the Acropolis Restoration Service's web site (ysma.culture.gr).

Photogrammetry at the Acropolis was also a subject of discussion at the recent ISPRS Conference in Beijing. One of the technical sessions (TS-SS19) was "Recording and Documenting the Acropolis of Athens - From Classical Ancient Greece to Modern Olympics". While I wasn't at the conference, a colleague sent me the paper for "Recording, Modeling, Visualisation and GIS Applications Development for the Acropolis of Athens", by Tsingas et al. The paper discusses the various techniques employed by the project outlined above, which include geodetic field measurements, terrestrial scanning, and photogrammetric data capture and processing. Of the many data products to come out of the project, an interesting one is a top-view orthomosaic with a 10mm resolution. A 22MP camera was used on a balloon system, as motorized vehicles such as helicopters are not permitted to fly above the Acropolis. Also of interest (and news to me) is that Leica Geosystems is a partner in the project. One of the terrestrial scanners is a Leica HDS3000, while ERDAS LPS is used for parts of the photogrammetric processing. This included camera calibration, bundle adjustment, and terrain processing.

The paper describes the methodology in detail, and I will see if it is available online anywhere - it provides an excellent discussion of various techniques used in concert to fully capture a highly detailed digital version of the monument. A few other good papers on photogrammetry/mapping at the Acropolis are here and here.