I integrated LiDAR data with WorldView-2 (WV-2) imagery for the canopy height model. The tree canopy height model (CHM) was computed as the difference between the tree canopy hits (DSM) and the corresponding LiDAR-derived terrain (DTM) elevation values. Both the WV-2 imagery and the CHM were then used in eCognition for tree-level classification: spectral information from WV-2 and tree height from the CHM.
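In case it helps, here is a minimal Python sketch of that CHM step, assuming the DSM and DTM have already been rasterised onto the same grid (file names are placeholders, not from my actual workflow):

```python
# Minimal sketch: CHM = DSM - DTM, with both rasters on the same grid.
import numpy as np
import rasterio

with rasterio.open("lidar_dsm.tif") as dsm_src, rasterio.open("lidar_dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile
    profile.update(dtype="float32", nodata=-9999.0)

chm = dsm - dtm
chm[chm < 0] = 0  # clamp small negative differences caused by noise

with rasterio.open("chm.tif", "w", **profile) as dst:
    dst.write(chm, 1)
```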
From your question it's not clear for which application you are targeting the integration of active (LiDAR) and passive optical data. If you can elaborate, it will be easier to answer.
You can use corresponding feature points/lines/planes to register the LiDAR point cloud with the aerial optical image. In urban areas, linear features work best.
As far as I know, there is no readily available algorithm to integrate LiDAR and aerial optical imagery.
They are completely different data sets. A LiDAR point cloud is generated either in a project coordinate system (PRCS) or a global coordinate system (GLCS), and each point has unique x, y, and z coordinates. Similarly, the photogrammetric coordinate system (PGCS) has its own unique x, y, and z coordinates.
If you want to tie the photogrammetric and LiDAR systems together in space, you need ground control points (GCPs): take GPS readings at the GCPs, identify them in the image data, and then register them with the PRCS or GLCS of the LiDAR point cloud.
However, if you use a TLS (terrestrial laser scanner) equipped with a camera, the camera takes pictures at known orientations after the scanner collects the point cloud, and the scanner's proprietary software merges the point cloud and the photographic images automatically.
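To illustrate the GCP-based registration step above, here is a rough numpy sketch that estimates a 3-D similarity transform (scale, rotation, translation) between the two systems from GCP coordinates measured in both, in the style of the Horn/Umeyama closed-form solution. The GCP arrays and names are hypothetical:

```python
# Sketch: estimate a 3-D similarity transform from GCPs measured both in the
# photogrammetric system and in the LiDAR PRCS/GLCS (Kabsch/Umeyama style).
import numpy as np

def estimate_similarity(src_pts, dst_pts):
    """src_pts, dst_pts: (N, 3) arrays of corresponding GCP coordinates."""
    src_c = src_pts - src_pts.mean(axis=0)
    dst_c = dst_pts - dst_pts.mean(axis=0)

    H = src_c.T @ dst_c                                   # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                    # proper rotation (det = +1)
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()     # isotropic scale
    t = dst_pts.mean(axis=0) - s * R @ src_pts.mean(axis=0)
    return s, R, t

# Usage (hypothetical arrays): map photogrammetric coordinates into the LiDAR frame.
# s, R, t = estimate_similarity(gcps_photo, gcps_lidar)
# lidar_frame_xyz = s * (R @ photo_xyz.T).T + t
```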
I'll assume for part one that you want to pull colour information from images and apply it to a point cloud.
If you have access to the Terrasolid suite, there are tools to first coregister imagery with LiDAR, and then extract colour values for the LIDAR point cloud from coregistered imagery.
For the imagery, you need a way to apply an a priori geolocation - either ground control points or camera positions and orientations. I've used the second approach - no ground control, but camera centre location (plus heading, pitch, roll).
I've used this in an airborne context - it's a very good tool but fairly expensive.
If your imagery and LiDAR are already coregistered, LAStools also offers a method to extract colour data and apply it to the point cloud (LAScolor).
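If you prefer to script it yourself, the sampling idea behind such tools can be sketched in Python with laspy and rasterio, assuming the point cloud and the orthoimage are already in the same CRS (file names are placeholders, and this is not the LAStools implementation):

```python
# Sketch only: colourize a LAS point cloud from an already-coregistered orthoimage.
# Assumes the ortho is an 8-bit RGB raster in the same CRS as the points.
import laspy
import numpy as np
import rasterio

las = laspy.read("points.las")
with rasterio.open("ortho.tif") as img:
    # Sample the raster at each point's x/y; one array of band values per point.
    rgb = np.array(list(img.sample(zip(las.x, las.y))))

# Convert to a point format that stores RGB (e.g. format 3), then scale 8-bit
# values to the 16-bit range the LAS spec expects.
las_rgb = laspy.convert(las, point_format_id=3)
las_rgb.red, las_rgb.green, las_rgb.blue = rgb[:, :3].astype(np.uint16).T * 256
las_rgb.write("points_rgb.las")
```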
For part two, I'll assume you want to drape already-coregistered imagery over a LIDAR cloud to make a pretty terrain model.
Terrasolid also offers this capability (TerraModeler). I am almost certain that FUSION can also do the job, but I'm not a FUSION user - I've just dabbled at the edges.
Right now I'm working on a quick-and-dirty program to coarsely coregister (direct georeferencing) some images and LiDAR over a flat surface - but it is not such an easy task. Preferably I'd use an existing tool, but I have operational constraints (no ground control, which most methods require) and funding constraints.
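For what it's worth, the core of that kind of direct georeferencing is the collinearity projection from the exterior orientation. A very rough sketch (not my actual program; angle conventions, boresight calibration and the heading/pitch/roll to omega/phi/kappa conversion are glossed over, and all parameters are placeholders):

```python
# Rough sketch: project LiDAR points into an image frame using only the camera
# exterior orientation (centre + attitude) and focal length. The omega/phi/kappa
# rotation below is one common photogrammetric convention; yours may differ.
import numpy as np

def rotation_opk(omega, phi, kappa):
    """Object-to-camera rotation from omega, phi, kappa (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    # Rx @ Ry @ Rz maps camera -> object; the transpose maps object -> camera.
    return (Rx @ Ry @ Rz).T

def project(points, cam_xyz, omega, phi, kappa, focal_mm):
    """Collinearity equations: points (N, 3) in the LiDAR CRS -> image coords (mm)."""
    d = (points - cam_xyz) @ rotation_opk(omega, phi, kappa).T
    x = -focal_mm * d[:, 0] / d[:, 2]
    y = -focal_mm * d[:, 1] / d[:, 2]
    return np.column_stack([x, y])
```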
I hope my assumptions are somewhat correct, and you find this useful!