I am a software engineer at Philips Healthcare. My current work focuses on bringing advanced visualization of medical data to hospitals: presenting information to our clients in a way that supports the analysis and interpretation of critical patient data.
During my first and second years at Philips, I worked on the visualization of diffusion tensor data in the form of white matter tracts. The application uses diffusion tensor data with a minimum of 6 diffusion directions to calculate the preferred diffusion direction at each location. This diffusion direction indicates the orientation of the local fibertracts. The FiberTrak package uses all this information to delineate the various fibertract bundles. A team of three of us was assigned to visualize the 3D fibertract bundles, possible (tumor) models obtained from segmentation, and up to two volumes (one for data, one for context).
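To give a feel for where the preferred diffusion direction comes from: the diffusion-weighted measurements (at least 6 directions plus a baseline) are fitted to a symmetric 3x3 diffusion tensor, and its largest eigenvector gives the local fiber orientation. The sketch below illustrates that last step only, with a made-up tensor; it is not the Philips implementation, just the standard textbook computation.

```python
import numpy as np

# Hypothetical 3x3 diffusion tensor (symmetric), as fitted from at
# least 6 diffusion-weighted directions plus one baseline image.
# Magnitudes are typical for white matter, in mm^2/s.
D = np.array([
    [1.7, 0.1, 0.0],
    [0.1, 0.3, 0.0],
    [0.0, 0.0, 0.3],
]) * 1e-3

# The preferred diffusion direction is the eigenvector belonging to
# the largest eigenvalue; fiber tracking follows it from voxel to voxel.
eigvals, eigvecs = np.linalg.eigh(D)
principal = eigvecs[:, np.argmax(eigvals)]

# Fractional anisotropy (FA) quantifies how directional the diffusion
# is (0 = isotropic, 1 = diffusion along a single axis).
mean = eigvals.mean()
fa = np.sqrt(1.5 * np.sum((eigvals - mean) ** 2) / np.sum(eigvals ** 2))
```

For this example tensor the principal direction is almost exactly the x-axis, matching the dominant D[0][0] entry.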
Fiber tracking is an advanced tool for pre-operative surgical planning, post-surgery evaluation, and general evaluation of fiber tracts around tumors and lesions in connection with functional areas.
Presenting everything in 3D is not the best option here, as the two volumes quickly occlude the interesting information: the fibertracts. The FiberTrak package therefore visualizes the volumes on three planes in 3D space. For techies: it is similar to the vtkPlaneWidget, but it supports fusion of two volumes and has additional options to remove (make transparent) non-interesting information, such as the black edges outside the volume data. In addition, the planes are rendered together with the models in one step using an order-independent transparency algorithm. This allows for proper transparency render results for all objects in the view.
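The post does not say which order-independent transparency algorithm was used, but the core idea behind one well-known family (weighted blended OIT) is easy to show: per pixel, only commutative sums and products over the fragments are accumulated, so the composited result does not depend on the order in which transparent surfaces are drawn. A minimal single-channel, single-pixel sketch:

```python
import random

def weighted_blended_oit(fragments, background):
    """Composite transparent fragments for one pixel, in ANY order.

    fragments: list of (color, alpha, weight) tuples. The weight is
    normally derived from depth, but any per-fragment weight works.
    Only sums and a product are used, both commutative, so no depth
    sorting of the geometry is required.
    """
    accum_color = 0.0  # sum of weight * alpha * color
    accum_alpha = 0.0  # sum of weight * alpha (normalization term)
    reveal = 1.0       # product of (1 - alpha): background visibility
    for color, alpha, weight in fragments:
        accum_color += weight * alpha * color
        accum_alpha += weight * alpha
        reveal *= 1.0 - alpha
    avg = accum_color / accum_alpha if accum_alpha > 0 else 0.0
    return avg * (1.0 - reveal) + background * reveal

# The same fragments in two different draw orders give the same pixel.
frags = [(0.9, 0.5, 2.0), (0.2, 0.3, 1.0), (0.6, 0.7, 3.0)]
a = weighted_blended_oit(frags, background=0.1)
random.shuffle(frags)
b = weighted_blended_oit(frags, background=0.1)
```

On the GPU the two accumulators live in render targets and the final division happens in a resolve pass; the Python loop above only demonstrates the math.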
The final requirement was that everything should fit in the existing client/server architecture and render at interactive speeds. This introduced a new problem: processing power. The fibertracts can run to millions of triangles; together with two volumes and possibly other models, this requires a decent amount of rendering power. GPUs are the logical choice, as they are very efficient for these types of scenes. However, the servers in the field did not have a graphics card installed, yet, due to the client/server split, they still had to render the scenes. In the end we were able to emulate our GPU code on the CPU. This has the additional benefit that once a GPU is installed, the whole routine automatically switches to run on the GPU, which significantly improves performance. The project has now been released to the field, and since 3D always brings pretty pictures, it is used in almost every marketing page or PDF online.
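The "emulate on CPU, switch to GPU when one appears" behavior boils down to a backend-selection pattern: probe for a GPU at startup and fall back to a software path running the same kernel code otherwise. The sketch below is purely illustrative; all class and function names are invented for this example, not the actual Philips code.

```python
# Hypothetical sketch of the backend selection described above.

class CpuEmulatedBackend:
    """Runs the same kernel code through a software emulation path."""
    name = "cpu-emulated"

    def render(self, triangle_count):
        # Slow but always available; correctness matches the GPU path.
        return f"rendered {triangle_count} triangles on CPU"

class GpuBackend:
    """Hardware path; construction fails when no card is present."""
    name = "gpu"

    def __init__(self):
        if not gpu_available():
            raise RuntimeError("no graphics card installed")

    def render(self, triangle_count):
        return f"rendered {triangle_count} triangles on GPU"

def gpu_available():
    # Placeholder probe: a real server would query the graphics driver.
    return False

def make_backend():
    # Prefer the GPU; fall back transparently to CPU emulation. A
    # server gains performance automatically once a card is installed,
    # with no change to the rendering code above this layer.
    try:
        return GpuBackend()
    except RuntimeError:
        return CpuEmulatedBackend()

backend = make_backend()
```

The key design point is that the calling code never branches on the device; it just asks the selected backend to render.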
It can be seen in interactive 3D here (opens in new tab), it runs in the background of the movie below (at 4:35), which was recorded at RSNA 2013, and it appears in various other marketing documents. But most importantly, of course, it is now used in the field, by actual MDs :).
Who said computer science was dull?