HemeLB Sample Clauses
HemeLB. HemeLB, developed by the team of ▇▇▇▇ ▇▇▇▇▇ ▇▇▇▇▇▇▇ at University College London (UK), is a software pipeline that simulates blood flow through a stent (or other flow-diverting device) inserted in a patient's brain. The aim is to discover how different stent designs (surface patterns) affect the stress the blood applies to the blood vessel, particularly in the region of the aneurysm being treated. The pipeline also allows the motion of magnetically steered particles, for example coated with drugs, to be simulated, and estimates to be made of where they are statistically likely to end up. The HemeLB setup tool voxelises the geometry at the given resolution, and HemeLB (a lattice-Boltzmann CFD solver) then simulates the fluid flow within that geometry, using the given velocity-time profiles for each inlet. Once complete, the simulation output is analysed with the hemeXtract utility, which can produce images of cross-sectional flow, or 3D renderings of the wall shear stress distribution in the geometry, using the ParaView visualisation software. HemeLB is installed, optimised, and available for use by any user with a valid account and CPU time on ▇▇▇▇▇▇, Cartesius, SuperMUC, Prometheus and Blue Waters. The UCL team also provides consulting to biomedical companies and clinical users. A study of the model's computational performance found excellent results, with a performance drop of only ~15% (relative to a simulation of the hydrodynamics alone, i.e. in the absence of any particles) in the most extreme case of load imbalance (all particles clustered in one region). This study, both with and without the presence of particles, represents the deployment of an Extreme Scaling compute pattern: a single simulation code deployed across a large number of cores. Here the lattice-Boltzmann simulation should be regarded as the primary model and the particles as the auxiliary model.
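The collide-and-stream structure of the lattice-Boltzmann method described above can be illustrated with a minimal sketch. The snippet below is not HemeLB code: it is a generic two-dimensional (D2Q9) BGK lattice-Boltzmann step on a periodic grid, written purely to show the method's shape (HemeLB itself is a 3D, MPI-parallel solver).

```python
import numpy as np

# D2Q9 lattice: discrete velocity set and quadrature weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Maxwell-Boltzmann equilibrium, truncated to second order in u."""
    cu = np.einsum('qd,xyd->qxy', C, u)            # c_i . u at each site
    usq = np.sum(u**2, axis=-1)                    # |u|^2 at each site
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    """One BGK collision + streaming step on a fully periodic grid."""
    rho = f.sum(axis=0)                            # density: zeroth moment
    u = np.einsum('qxy,qd->xyd', f, C) / rho[..., None]  # momentum / density
    f = f + (equilibrium(rho, u) - f) / tau        # relax toward equilibrium
    for q, (cx, cy) in enumerate(C):               # stream each population
        f[q] = np.roll(np.roll(f[q], cx, axis=0), cy, axis=1)
    return f

# Start from rest with a small density bump and let it relax.
f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))
f[:, 16, 16] *= 1.01
for _ in range(100):
    f = lbm_step(f)
```

Collision and streaming both conserve mass exactly, which is a convenient sanity check on any implementation of this scheme.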
HemeLB.
7.1.2.1 Code description
7.1.2.2 Technical specification and requirements
7.1.2.3 HemeLB on High Performance Computing systems
Figure 4. Strong scaling of HemeLB up to 96,000 cores on EPCC ▇▇▇▇▇▇ (top) and up to 239,615 cores on NCSA Blue Waters (bottom). The plots show both the initialisation phase (red line) and the simulation phase (green line), for two different input datasets of 7.7 × 10⁸ and 1.5 × 10⁹ lattice sites on ▇▇▇▇▇▇ and Blue Waters, respectively. Extracted from [27] and [28]. Figure 4 contains the results of studies performed on the two systems. On ▇▇▇▇▇▇ (EPCC, UEDIN) (top panel), HemeLB showed excellent scalability, with about a 20-fold speed-up at 96,000 cores and 80% parallel efficiency up to 48,000 cores with respect to the reference configuration on that system (3,000 cores). On Blue Waters (NCSA) (bottom panel), scalability was investigated from 288 up to 18,432 compute nodes. To fit within the memory available per node on this system, only 13 of the 16 cores per node were used, for a total of about 240,000 MPI ranks per single simulation. The results for the simulation phase show 80% efficiency and a speed-up by a factor of 13 on 59,904 cores compared to the baseline used for this system (3,744 cores). A maximum relative speed-up of 19.2 was achieved with 239,615 cores, but in this case efficiency was much lower. For this work the authors collaborated with the SCALASCA [29] developers in order to use this profiling tool at core counts over 30,000. The study has been useful in identifying bottlenecks in current MPI-2 and MPI-3 implementations which, using only 32-bit counts in communications, are inadequate when running on hundreds of thousands of cores. For this reason, HemeLB has become a "use case" for the MPI Forum in the release of MPI-4, which contains clean 64-bit communications. UCL is currently working on a coupled full-human blood flow simulation, planned to run on SuperMUC-NG (LRZ) using up to 160,000 cores, which will provide new insight into the scalability of the code towards exascale machines.
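The speed-up and efficiency figures quoted above follow directly from the standard strong-scaling definitions, sketched below (the baseline walltime is a placeholder; only the ratios matter):

```python
def strong_scaling(t_base, n_base, t, n):
    """Speed-up and parallel efficiency of a run on n cores
    relative to a baseline run on n_base cores.

    t_base, t: walltimes of the baseline run and the scaled run.
    """
    speedup = t_base / t
    efficiency = speedup / (n / n_base)   # ideal speed-up is n / n_base
    return speedup, efficiency

# Blue Waters figures quoted above: a 13x speed-up going from the
# 3,744-core baseline to 59,904 cores (a 16x increase in cores)
# implies ~81% parallel efficiency.
s, e = strong_scaling(t_base=16.0, n_base=3744, t=16.0/13, n=59904)
```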
More details about this research can be found in the WP2 deliverable D2.4 [1].
Mode of operation: One single extreme parallel run for each problem required.
Type of parallelism: MPI
Number of cores per run: Typical run 1,000–10,000; Large run >50,000
Number of GPUs per run: Typical run 0; Large run 0
Input data — Format: STL (for surface geometry), XML (config file), GMY (HemeLB's own format)
Coming from: The XML and GMY are genera...
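As an illustration of the XML configuration format listed above, a hypothetical fragment is sketched below; the element and attribute names are invented for illustration and do not reproduce HemeLB's actual schema:

```xml
<!-- Hypothetical sketch only: names are illustrative, not HemeLB's schema. -->
<simulation steps="10000" step_length="1e-4">
  <geometry file="vessel.gmy"/>                 <!-- voxelised domain (GMY) -->
  <inlet>
    <!-- time-varying velocity profile applied at this inlet -->
    <velocity profile="inlet_velocity.txt" peak="0.5"/>
  </inlet>
</simulation>
```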
HemeLB.
Application description
Technical specifications
HPC usage and parallel performance
HemeLB. Strong scaling behaviour of walltime and speed-up of the HemeLB CPU and GPU codes on large fractions of the Tier-0 supercomputers SuperMUC-NG (Germany) and Summit (USA), using the same test geometry. The CPU code was run on up to 309,696 cores of SuperMUC-NG (99.6% of capacity). The GPU code was run on up to 18,432 GPUs on Summit (66.6% of capacity); in further testing we have run on up to 88.9% of Summit's capacity. For comparison we have treated 1 CPU core as equivalent to a single GPU Streaming Multiprocessor (SM), the measure used by the Top500 list.
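The capacity percentages quoted above follow from the machine sizes. In the sketch below the totals (about 311,040 CPU cores on SuperMUC-NG and 27,648 GPUs on Summit, i.e. 4,608 nodes with 6 GPUs each) are our assumptions, not figures stated in this document:

```python
def capacity_fraction(used, total):
    """Fraction of a machine's resources occupied by a run."""
    return used / total

# Assumed machine sizes: SuperMUC-NG ~311,040 CPU cores;
# Summit ~27,648 GPUs (4,608 nodes x 6 GPUs).
supermuc = capacity_fraction(309_696, 311_040)   # ~0.996
summit   = capacity_fraction(18_432, 27_648)     # ~0.667
```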
HemeLB. This code simulates blood flow through a stent (or other flow-diverting device) inserted in a patient's brain. The aim is to discover how different stent designs (surface patterns) affect the stress the blood applies to the blood vessel, particularly in the region of the aneurysm being treated. The pipeline also allows the motion of magnetically steered particles, for example coated with drugs, to be simulated, and estimates to be made of where they are statistically likely to end up. More technically, the pipeline takes as input an STL file of the surface geometry of the patient, generally obtained via segmentation of DICOM images from a CT scan. Also required is the (peak) velocity-time profile of fluid flow at each of the inlets to the simulated region. If inserting a stent, the start and end points of the stent in the vessel must be specified, as well as an image file containing a black-and-white representation of the surface pattern (black signifying 'solid'). The HemeLB setup tool voxelises the geometry bounded by the input STL at the given resolution, and HemeLB (a lattice-Boltzmann CFD solver) then simulates the fluid flow within that geometry, using the given velocity-time profiles for each inlet. Once complete, the simulation output is analysed with the hemeXtract utility, which can produce images of cross-sectional flow, or 3D renderings of the wall shear stress distribution in the geometry, using the ParaView visualisation software.
Non-clinical research; Clinical research ✔; Clinical decision support ✔; Drug discovery; Design & optimisation; In silico clinical trials ✔; Personal health forecasting
Contact email: ▇▇▇▇▇.▇▇▇▇▇▇▇▇▇▇@▇▇▇.▇▇.▇▇
End user name: ▇▇▇▇▇ ▇▇▇▇▇▇▇'▇ group | ▇▇▇▇▇▇ ▇▇▇▇▇▇▇▇'s group | ▇▇▇▇▇ ▇▇▇▇▇'s group
Affiliation: UCL | Hamed Medical Corporation, Qatar | Qatar University
Application area: Cardiovascular | Cardiovascular | Cardiovascular
No. of associated users: 4 | 3 | 3
Target use: Research | Research / clinical | Research
Improvements implemented: Fully automated input file generation pipeline | Integrated clinical pipeline with image segmentation | Real-time visualisation with GPU; FPGA implementation
Impact: Papers
Use of any e-infrastructure available via CompBioMed: EPCC
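The voxelisation step performed by the setup tool can be illustrated with a toy sketch. The code below is not the HemeLB setup tool: it voxelises an analytic sphere instead of an STL surface, simply to show the idea of marking grid cells as fluid sites at a chosen resolution.

```python
import numpy as np

def voxelise_sphere(radius, resolution):
    """Toy voxelisation: mark grid cells whose centres lie inside a sphere.

    A real setup tool voxelises an arbitrary STL surface; a sphere stands
    in here so that the inside/outside test is analytic.
    """
    n = int(np.ceil(2 * radius / resolution))            # cells per axis
    centres = (np.arange(n) + 0.5) * resolution - radius  # cell-centre coords
    x, y, z = np.meshgrid(centres, centres, centres, indexing='ij')
    return (x**2 + y**2 + z**2) <= radius**2             # boolean fluid mask

mask = voxelise_sphere(radius=1.0, resolution=0.05)
# fraction of the bounding box marked as fluid; for a sphere this
# approaches pi/6 ~ 0.5236 as the resolution is refined
fill = mask.mean()
```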
