29 April 2013

X-TRACT - CT and Imaging tools

X-TRACT is a software package for advanced X-ray image analysis and Computed Tomography, currently in use on the MASSIVE cluster at the Australian Synchrotron, at the ANU, and at the Shanghai Synchrotron in China. X-TRACT implements a large number of conventional and advanced algorithms for 2D and 3D X-ray image reconstruction and simulation.

Major X-TRACT functionality is now available as part of the Cloud-Based Image Analysis and Processing Toolbox. The following features are implemented:

Sinogram creation
X-ray projection data must first be converted into sinograms before CT reconstruction can be carried out. Each sinogram contains the data from a single row of detector pixels across all illumination angles. This data is sufficient for the reconstruction of a single axial slice (at least in parallel-beam geometry).
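The rearrangement can be sketched as follows (the array names and memory layout are illustrative, not X-TRACT's actual API; projections are assumed to be stored as projections[angle][row][column]):

```java
// Sketch: regroup parallel-beam projection data into per-row sinograms.
// Layout assumption (illustrative only): projections[angle][row][column].
public class SinogramDemo {

    /** Returns one sinogram per detector row: sinograms[row][angle][column]. */
    public static double[][][] toSinograms(double[][][] projections) {
        int nAngles = projections.length;
        int nRows = projections[0].length;
        double[][][] sinograms = new double[nRows][nAngles][];
        for (int a = 0; a < nAngles; a++) {
            for (int r = 0; r < nRows; r++) {
                // Row r of the projection at angle a becomes row a of sinogram r.
                sinograms[r][a] = projections[a][r].clone();
            }
        }
        return sinograms;
    }
}
```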
Ring artefact removal
Ring artefacts are caused by imperfect detector pixel elements as well as by defects or impurities in the scintillator crystals. They can be reduced by applying various image processing techniques to sinograms or reconstructed images.
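One simple sinogram-space correction, sketched here as an illustration of the general idea (not X-TRACT's actual algorithm): a faulty detector element adds a near-constant offset to one sinogram column, so subtracting each column's deviation from a smoothed column-mean profile suppresses the corresponding ring.

```java
// Sketch of a simple sinogram-space ring filter (illustrative, not X-TRACT's algorithm).
public class RingFilterDemo {

    /** Subtracts each column's deviation from a smoothed column-mean profile. */
    public static double[][] removeRings(double[][] sino, int window) {
        int nAngles = sino.length, nCols = sino[0].length;

        // Mean of each sinogram column over all angles; a faulty detector
        // element shows up as a constant offset in exactly one column.
        double[] mean = new double[nCols];
        for (int x = 0; x < nCols; x++) {
            for (int a = 0; a < nAngles; a++) mean[x] += sino[a][x];
            mean[x] /= nAngles;
        }

        // Smooth the column means; the difference (mean - smooth) is the ring profile.
        double[] smooth = new double[nCols];
        for (int x = 0; x < nCols; x++) {
            int lo = Math.max(0, x - window), hi = Math.min(nCols - 1, x + window);
            double s = 0.0;
            for (int i = lo; i <= hi; i++) s += mean[i];
            smooth[x] = s / (hi - lo + 1);
        }

        double[][] out = new double[nAngles][nCols];
        for (int a = 0; a < nAngles; a++)
            for (int x = 0; x < nCols; x++)
                out[a][x] = sino[a][x] - (mean[x] - smooth[x]);
        return out;
    }
}
```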
Dark current subtraction
Dark current subtraction compensates for the readout noise, ADC offset and dark current in the detector. Dark current images are collected before and/or after the CT measurements with no radiation applied and with the same integration time as used during the measurements. The dark current image is subtracted from each CT projection.
Flat field correction
Flat-field images are obtained under the same conditions as the actual CT projections, but without the sample in the beam. They allow one to correct the CT projections for the unevenness of the X-ray illumination.
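The two corrections are usually applied together; a minimal sketch (variable names are illustrative) of corrected = (projection - dark) / (flat - dark):

```java
// Sketch: combined dark-current subtraction and flat-field correction.
public class FlatFieldDemo {

    /** corrected = (projection - dark) / (flat - dark), guarding against bad pixels. */
    public static double[][] correct(double[][] proj, double[][] dark, double[][] flat) {
        int h = proj.length, w = proj[0].length;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double denom = flat[y][x] - dark[y][x];
                // A non-positive denominator indicates a dead pixel; flag it as 0.
                out[y][x] = denom > 0.0 ? (proj[y][x] - dark[y][x]) / denom : 0.0;
            }
        }
        return out;
    }
}
```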
Positional drift correction
This function corrects transverse drift between related experimental images. The drift is estimated by cross-correlating pairs of images.
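The idea can be sketched for 1-D profiles and integer shifts (real implementations work on 2-D images with sub-pixel precision):

```java
// Sketch: integer drift estimation by brute-force cross-correlation of 1-D profiles.
public class DriftDemo {

    /** Returns the shift s (|s| <= maxShift) that maximises the correlation of a and b. */
    public static int estimateShift(double[] a, double[] b, int maxShift) {
        int best = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int s = -maxShift; s <= maxShift; s++) {
            double score = 0.0;
            for (int i = 0; i < a.length; i++) {
                int j = i + s;
                if (j >= 0 && j < b.length) score += a[i] * b[j];
            }
            if (score > bestScore) {
                bestScore = score;
                best = s;
            }
        }
        return best;
    }
}
```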
Data normalisation
TIE-based phase extraction
The TIE algorithm allows the recovery of the optical phase of an electromagnetic wave (e.g. an X-ray beam) from a single near-field in-line image by solving the Transport of Intensity equation under the assumption that the phase shift and absorption distributions are proportional to each other. This method is usually applied in propagation-based in-line CT imaging (PCI-CT).
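For a single-material object (phase and absorption proportional), the widely used TIE-Hom solution of Paganin et al. recovers the projected thickness $T$ from a single image $I_R$ recorded at propagation distance $R$:

```latex
T(\mathbf{r}) = -\frac{1}{\mu}\,
\ln\!\left(
  \mathcal{F}^{-1}\!\left[
    \frac{\mathcal{F}\!\left[ I_R(\mathbf{r}) / I_0 \right]}
         {1 + \dfrac{R\,\delta}{\mu}\,|\mathbf{k}|^{2}}
  \right]
\right)
```

Here $\delta$ and $\beta$ define the complex refractive index $n = 1 - \delta + i\beta$, $\mu = 4\pi\beta/\lambda$ is the linear attenuation coefficient, $I_0$ is the incident intensity and $\mathbf{k}$ is the transverse spatial frequency.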
FBP CT reconstruction
Filtered back-projection (FBP) parallel-beam CT reconstruction.
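In parallel-beam geometry, FBP reconstructs a slice $f$ from its projections $p_\theta$ by filtering each projection with a ramp filter and smearing the result back across the image:

```latex
f(x, y) = \int_{0}^{\pi} \tilde{p}_{\theta}\big(x\cos\theta + y\sin\theta\big)\, d\theta,
\qquad
\tilde{p}_{\theta} = \mathcal{F}^{-1}\!\big[\, |\omega|\, \mathcal{F}[p_{\theta}] \,\big]
```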
Gridrec CT reconstruction
Gridrec is a high-speed Fourier-based CT reconstruction algorithm.
Centre of rotation
Automated calculation of the centre of sample rotation in a CT scan from experimental X-ray projections, sinograms or reconstructed axial slices.
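One standard approach can be sketched for 1-D projections with integer candidate centres: in parallel-beam geometry the 180° projection is the 0° projection mirrored about the rotation axis, so the centre can be found by scanning candidates for the best match.

```java
// Sketch: centre of rotation from a 0-degree/180-degree projection pair (parallel beam).
public class CentreDemo {

    /** Returns the column index about which p0 best mirrors onto p180. */
    public static int findCentre(double[] p0, double[] p180) {
        int n = p0.length, best = n / 2;
        double bestErr = Double.MAX_VALUE;
        for (int c = 0; c < n; c++) {
            double err = 0.0;
            int count = 0;
            for (int x = 0; x < n; x++) {
                int m = 2 * c - x;                  // mirror of x about candidate centre c
                if (m >= 0 && m < n) {
                    double d = p0[x] - p180[m];
                    err += d * d;
                    count++;
                }
            }
            // Require a reasonable overlap so near-edge candidates cannot win trivially.
            if (count >= n / 2 && err / count < bestErr) {
                bestErr = err / count;
                best = c;
            }
        }
        return best;
    }
}
```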
CT Reconstruction Filters
The choice of available CT reconstruction filters includes at least the Linear Ramp, Shepp-Logan, Cosine, Hamming and Hann filters.
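In one common convention (with $\omega_m$ the Nyquist frequency; exact definitions vary between implementations), these filters are windowed versions of the ramp $|\omega|$:

```latex
\begin{aligned}
H_{\text{ramp}}(\omega) &= |\omega| \\
H_{\text{Shepp-Logan}}(\omega) &= |\omega|\,\frac{\sin\!\big(\pi\omega/2\omega_m\big)}{\pi\omega/2\omega_m} \\
H_{\text{cosine}}(\omega) &= |\omega|\cos\!\big(\pi\omega/2\omega_m\big) \\
H_{\text{Hamming}}(\omega) &= |\omega|\,\big(0.54 + 0.46\cos(\pi\omega/\omega_m)\big) \\
H_{\text{Hann}}(\omega) &= |\omega|\,\tfrac{1}{2}\big(1 + \cos(\pi\omega/\omega_m)\big)
\end{aligned}
```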
ROI reconstruction
This option enables the user to select a subset of axial slices to be reconstructed and/or limit the reconstruction area to a user-defined rectangular subarea of the axial slice. The option reduces the reconstruction time and the size of the output data.

And here's a short video showing the basic usage of X-TRACT in the Galaxy cloud:

28 April 2013

HCA-Vision Components in Cloud-based Image Analysis and Processing Toolbox Ready to Test

HCA-Vision components in Cloud-based Image Analysis and Processing Toolbox are ready to test. Here is a video clip showing an example of how to build a workflow using some of the tools in the toolbox:

Enjoy using the toolbox and look forward to its release in the near future!

23 April 2013

NeCTAR Workshop on Cloud-based Computational Frameworks

Yesterday, on the 22nd of April, together with NeCTAR we organised the NeCTAR Workshop on Cloud-based Computational Frameworks. Around 30 people came to Sydney to share their knowledge and expertise, and to discuss problems common across the NeCTAR projects.

The workshop was run as an unconference, with the agenda prepared interactively. The topics discussed included:
  • Demonstrations, Galaxy + CloudMan
  • Galaxy for image analysis
  • High Throughput Computing
  • Storage
  • AAA
  • Programming Models + Patterns
  • Deployments
  • Orchestration

During the workshop we also ran the Marshmallow Challenge to encourage collaborative thinking.

Photographs from the workshop can be found at the following link.

17 April 2013

MILXView components updated with ITK 4.3.1

The MILXView components of the toolbox (image registration, segmentation, partial volume correction (PVC), atlas normalisation (SUVR), cortical thickness estimation (CTE) and CTE surface) have now all been updated to use the latest Insight Toolkit (ITK 4.3.1).

11 April 2013

Feed Hadoop with a Large Amount of Images for Parallel Processing

We are investigating how to use Hadoop to process a large number of images in parallel with our image Toolbox in the NeCTAR cloud. One technical requirement is to work out how to feed Hadoop a set of binary image files.

Hadoop was originally developed for text mining, so its whole design is based on <key, value> pairs as input and output. Using it for text mining is straightforward thanks to a rich set of software harnesses, examples and so on, but this is not the case for binary data such as images. This post discusses two approaches to this requirement:

1. Using Hadoop SequenceFile

A SequenceFile is a flat file consisting of binary <k, v> pairs that can be used directly as Hadoop input.

Here is an example for generating a SequenceFile from a set of image files in Java:
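A minimal sketch using Hadoop's SequenceFile.Writer (the input directory, output path, key choice and Hadoop-2-style createWriter options are illustrative assumptions, not the only way to do this):

```java
import java.io.File;
import java.nio.file.Files;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Sketch: pack a directory of image files into one SequenceFile,
// keyed by file name, with the raw image bytes as the value.
public class ImagesToSequenceFile {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path output = new Path("images.seq");                 // illustrative output path

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(output),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {

            for (File image : new File("images").listFiles()) {   // illustrative input dir
                byte[] bytes = Files.readAllBytes(image.toPath());
                writer.append(new Text(image.getName()), new BytesWritable(bytes));
            }
        }
    }
}
```

Running this requires a Hadoop installation on the classpath.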

The generated file can then be fed to Hadoop as <k, v> input for parallel processing.

2. Using HDFS (Hadoop Distributed File System)'s URL:

Instead of feeding Hadoop the data contents directly, we can feed it a file list in which each value v of a <k, v> pair is the HDFS URI of one file. Each mapper then uses its assigned <k, v> pair to load and process the corresponding data (e.g. an image).
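Such a mapper might look as follows (a sketch against the Hadoop MapReduce API; the value is assumed to be one line of text containing a single HDFS URI, and the actual image-processing step is left as a placeholder):

```java
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch: each input record is one HDFS URI; the mapper opens that file
// and processes its bytes (the processing itself is application-specific).
public class ImageUriMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        Configuration conf = context.getConfiguration();
        Path imagePath = new Path(value.toString().trim());
        FileSystem fs = imagePath.getFileSystem(conf);

        try (InputStream in = fs.open(imagePath)) {
            byte[] data = in.readAllBytes();            // load the image contents
            // ... run the image-processing step on `data` here ...
            context.write(new Text(imagePath.getName()),
                          new Text("processed " + data.length + " bytes"));
        }
    }
}
```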

A simple comparison of the two approaches: the SequenceFile approach avoids HDFS's small-file overhead and benefits from data locality, at the cost of a conversion step and extra storage; the URI-list approach requires no conversion, but the mappers lose data locality and a large number of small files puts pressure on the HDFS NameNode.

8 April 2013

Clusters and computational frameworks in the NeCTAR cloud

Many NeCTAR Virtual Laboratory and eResearch Tool projects are working on deploying Clusters in the Cloud (such as CloudMan and StarCluster). Some are also investigating computational frameworks (such as Hadoop) in the cloud.
At NeCTAR's Software Projects Collaboration Workshop (Dec 2012) some projects expressed a desire to share knowledge about such cloud-based computational frameworks.


This workshop aims to kick-start a NeCTAR Interest Group for sharing knowledge about deploying and using clusters and computational frameworks in the NeCTAR cloud.
Target audience
Software engineers, software architects and technical project managers deploying or using clusters or computational frameworks on the NeCTAR Research Cloud are encouraged to attend.  NeCTAR funded projects working on clusters in the cloud are particularly encouraged to attend.


We'll run this workshop as an Unconference/Open-space where participants set the agenda and run the sessions on the day. Please come along prepared to share examples of how you are building clusters and computational frameworks in the NeCTAR Cloud. As per BarCamp rules, there will be NO SPECTATORS, ONLY PARTICIPANTS!

Register HERE

3 April 2013

Us at the 9th Annual e-Health Research Colloquium in Brisbane

The Australian e-Health Research Centre (AeHRC) develops and deploys leading-edge ICT innovations in the healthcare domain. It hosted its 9th Annual e-Health Research Colloquium in Brisbane on Wednesday 27th March 2013. We presented our poster on MILXCloud – a faster, smarter way to process imaging data in the cloud – at the event.