SLinCA@Home

SLinCA@Home
Developer(s) IMP NASU
Initial release September 14, 2010 (2010-09-14)
Development status Alpha
Operating system Linux, Windows
Platform BOINC, SZTAKI Desktop Grid, XtremWeb-HEP, OurGrid
Type Grid computing, Volunteer computing
Website dg.imp.kiev.ua

SLinCA@Home (Scaling Laws in Cluster Aggregation) is a research project that uses Internet-connected computers to do research in the fields of physics and materials science.

Introduction

SLinCA@Home is based at the G.V. Kurdyumov Institute for Metal Physics (IMP) of the National Academy of Sciences of Ukraine (NASU) in Kiev, the capital of Ukraine. It runs on the Berkeley Open Infrastructure for Network Computing (BOINC) software platform, the SZTAKI Desktop Grid platform, and the Distributed Computing API (DC-API) by SZTAKI. SLinCA@Home hosts several scientific applications dedicated to the search for scale-invariant dependencies in experimental data and in the results of computer simulations.

History

The SLinCA@Home project was launched in January 2009 as part of the EDGeS project within the Seventh Framework Programme (FP7) of the European Union for the funding of research and technological development in Europe. During 2009-2010 it used the power of a local IMP Desktop Grid (DG), but since December 2010 it has used volunteer-driven distributed computing to solve computationally intensive problems related to the search for scale-invariant dependencies in experimentally obtained and simulated scientific data. It is now operated by a group of scientists from IMP NASU in close cooperation with partners from IDGF and the distributed computing team 'Ukraine'. Since June 2010, SLinCA@Home has worked within the framework of the DEGISCO FP7 EU project.

Current status

SLinCA@Home is currently in alpha-test status, reflecting ongoing upgrades of its server and client components.

According to informal statistics at the BOINCstats site (as of 16 March 2011), over 2,000 volunteers in 39 countries have participated in the project, making it the second most popular BOINC project in Ukraine (after the Magnetism@Home project, which is no longer active).[1] About 700 active users contribute roughly 0.5–1.5 teraFLOPS[2] of computational power, which would have ranked SLinCA@Home among the top 20 of the TOP500 list of supercomputers as of June 2005.[3]

Currently, one application (SLinCA) runs on the global public IMP Desktop Grid (DG) infrastructure (SLinCA@Home), and three others (MultiScaleIVideoP, CPDynSG, and LAMMPS over DCI) are being tested on the local private IMP DG infrastructure.

Scientific Applications

The SLinCA@Home project was created to search for previously unknown scale-invariant dependencies in publicly available experimental and simulation data, using the following scientific applications.

Scaling Laws in Cluster Aggregation (SLinCA)

SLinCA
Developer(s) IMP NASU
Initial release July 24, 2007 (2007-07-24)
Development status Active
Written in C, C++
Operating system Linux (32-bit), Windows (32-bit)
Platform BOINC, SZTAKI Desktop Grid, XtremWeb-HEP, OurGrid
Type Grid computing, Volunteer computing

SLinCA (Scaling Laws in Cluster Aggregation) was the first application ported to the DG infrastructure by the Laboratory of Physics of Deformation Processes of the IMP NASU. Its aim is to find scale-invariant laws in kinetic scenarios of monomer aggregation into clusters of various kinds in different scientific domains.

The processes of agent aggregation into clusters are investigated in many branches of science: defect aggregation in materials science, population dynamics in biology, city growth and evolution in sociology, etc. Experimental data confirm that the evolving structures are hierarchical over many scales. The available theories propose many scenarios of cluster aggregation, formation of hierarchical structures, and their scaling properties, but testing them requires powerful computational resources for hierarchical processing of huge databases of experimental data. A typical simulation of one cluster aggregation process with 10⁶ monomers takes approximately 1–7 days on a single modern CPU, depending on the number of Monte Carlo steps (MCS). Deploying SLinCA on a Grid computing infrastructure and utilising hundreds of machines at the same time provides enough computational power to undertake the simulations on a larger scale and in a much shorter timeframe.
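
The kind of kinetic Monte Carlo simulation described above can be illustrated with a minimal sketch (an illustration only, not the actual SLinCA code): clusters merge irreversibly at random, and the resulting cluster-size distribution is then examined for a power-law (scale-invariant) form.

    // Minimal sketch of irreversible cluster aggregation (Smoluchowski-type
    // kinetics with a constant kernel). This is NOT the SLinCA code itself,
    // only an illustration of the kind of Monte Carlo simulation described above.
    #include <cstdio>
    #include <map>
    #include <random>
    #include <vector>

    int main() {
        const int n_monomers = 100000;  // 10^5 monomers here (10^6 in real runs)
        const long long mcs  = 90000;   // number of Monte Carlo merge steps

        std::vector<long long> cluster(n_monomers, 1);  // every cluster starts as a monomer
        std::mt19937_64 rng(42);

        for (long long step = 0; step < mcs && cluster.size() > 1; ++step) {
            // pick two distinct clusters uniformly at random and merge them
            std::uniform_int_distribution<size_t> pick(0, cluster.size() - 1);
            size_t i = pick(rng), j = pick(rng);
            if (i == j) { --step; continue; }
            cluster[i] += cluster[j];
            cluster[j] = cluster.back();
            cluster.pop_back();
        }

        // histogram of cluster sizes, n(s) vs s, to be checked for scale invariance
        std::map<long long, long long> hist;
        for (long long s : cluster) ++hist[s];
        std::printf("# size  count\n");
        for (const auto& kv : hist) std::printf("%lld %lld\n", kv.first, kv.second);
        return 0;
    }

Since each realisation of such a kinetic scenario is independent of the others, the problem maps naturally onto a bag-of-tasks desktop grid.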

The typical technical parameters for running the DG-enabled version of the SLinCA application on the global public IMP Desktop Grid (DG) infrastructure (SLinCA@Home) are:

Scientific Results

The first scientific results of the SLinCA application, obtained on EGEE computing resources at the CETA-CIEMAT and XtremWeb-HEP LAL test infrastructures, were reported in 2009 during the poster session of the 4th EDGeS training event and 3rd AlmereGrid Workshop, Almere, Netherlands (29–30 March 2009).[4]

Future Plans

The current version of the SLinCA application is planned to be upgraded with stable checkpointing, new functionality, and support for NVIDIA GPU computing, which is estimated to speed up the analysis by 50 to 200%.

Multiscale Image and Video Processing (MultiScaleIVideoP)

MultiScaleIVideoP
Developer(s) IMP NASU (wrapper for DCI), Mathworks (MATLAB libraries)
Initial release January 11, 2008 (2008-01-11)
Development status Alpha
Written in C, C++, 4GL MATLAB
Operating system Linux (32-bit), Windows (32-bit)
Platform MATLAB, BOINC, SZTAKI Desktop Grid, XtremWeb-HEP
Type Grid computing, Volunteer computing

Optical microscopy is usually used for structural characterization of materials over narrow ranges of magnification, within a small region of interest (ROI), and in a static regime. However, many crucial processes of damage initiation and propagation take place dynamically, over an observable time range from 10⁻³ s to 10³ s and over many scales from 10⁻⁶ m (solitary defects) to 10⁻² m (correlated networks of linked defects). Multiscale Image and Video Processing (MultiScaleIVideoP) is designed to process the recorded evolution of a material under mechanical deformation in a loading machine. The calculations involve many parameters of the physical process (process rate, magnification, illumination conditions, hardware filters, etc.) and of the image processing (size distribution, anisotropy, localization, scaling parameters, etc.), so they are very slow and create an acute need for more powerful computational resources. Deploying this application on a Grid computing infrastructure and utilising hundreds of machines at the same time provides enough computational power to perform image and video processing on a larger scale and in a much shorter timeframe.
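
As an illustration of the multiscale side of such an analysis, the sketch below applies box counting to a synthetic binary "defect map" and prints the number of occupied boxes at each scale. The real MultiScaleIVideoP application is MATLAB-based and works on recorded video frames, so this stand-alone example only conveys the general idea of extracting a scaling parameter from one frame.

    // Minimal sketch of one multiscale measurement: box counting over a binary
    // "defect map". Synthetic data only; not the MultiScaleIVideoP code itself.
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        const int N = 512;                        // frame size N x N pixels
        std::vector<unsigned char> img(N * N, 0);

        // synthetic "defects": a random walk leaving a correlated track
        std::mt19937 rng(1);
        std::uniform_int_distribution<int> dir(0, 3);
        int x = N / 2, y = N / 2;
        for (int t = 0; t < 200000; ++t) {
            img[y * N + x] = 1;
            switch (dir(rng)) {
                case 0: x = (x + 1) % N; break;
                case 1: x = (x + N - 1) % N; break;
                case 2: y = (y + 1) % N; break;
                default: y = (y + N - 1) % N; break;
            }
        }

        // count occupied boxes at box sizes 1, 2, 4, ..., N
        std::printf("# box_size  occupied_boxes\n");
        for (int b = 1; b <= N; b *= 2) {
            long long occupied = 0;
            for (int by = 0; by < N; by += b)
                for (int bx = 0; bx < N; bx += b) {
                    bool hit = false;
                    for (int dy = 0; dy < b && !hit; ++dy)
                        for (int dx = 0; dx < b && !hit; ++dx)
                            if (img[(by + dy) * N + (bx + dx)]) hit = true;
                    if (hit) ++occupied;
                }
            std::printf("%d %lld\n", b, occupied);
        }
        return 0;  // slope of log(occupied) vs log(1/b) estimates a scaling exponent
    }

In a grid-enabled setting, each work unit would typically cover one frame or a short sequence of frames, and the per-frame measurements are then assembled into the time evolution of the deformed structure.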

The typical technical parameters for running the DG-enabled version of the MultiScaleIVideoP application on the local private IMP Desktop Grid (DG) infrastructure are:

Scientific Results

The first scientific results of the MultiScaleIVideoP application, obtained on EGEE computing resources at the CETA-CIEMAT and XtremWeb-HEP LAL test infrastructures, were reported in 2009 during the poster session of the 4th EDGeS training event and 3rd AlmereGrid Workshop, Almere, Netherlands (29–30 March 2009).[5]

In January 2011, further scientific results were obtained and reported for experiments with cyclic constrained tension of aluminium foils under video monitoring.[6]

Future Plans

The current version of the MultiScaleIVideoP application is planned to be upgraded with stable checkpointing, new functionality, and support for NVIDIA GPU computing, which is estimated to speed up the analysis by 300 to 600%.

City Population Dynamics and Sustainable Growth (CPDynSG)

CPDynSG
Developer(s) IMP NASU
Initial release April 14, 2010 (2010-04-14)
Development status Alpha
Written in C, C++
Operating system Linux (32-bit), Windows (32-bit)
Platform BOINC, SZTAKI Desktop Grid
Type Grid computing, Volunteer computing

In the social sciences, it has been found that the growth of cities (municipalities, lands, counties, etc.) can be explained by migration, mergers, population growth, and similar processes. For example, the literature shows that the city population distribution in many countries is consistent with a power-law form in which the exponent t is close to 2. This is confirmed qualitatively by data for the populations of various cities during their early histories. The population of essentially every major city grows much faster than that of its country as a whole over a considerable time range. However, as cities reach maturity, their growth may slow or their population may even decline for reasons unrelated to preferential migration to still larger cities. Different theories predict different growth rates, asymptotics, and distributions of such populations. An important feature of the application is the comparison of the available theories with each other and with observations, and the prediction of scenarios of population dynamics and sustainable growth for different national and international regions. The City Population Dynamics and Sustainable Growth (CPDynSG) application makes it possible to investigate the connections between vast volumes of observational data and to find a qualitative correspondence between model predictions and the available long-term historical data.
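
As a minimal example of the kind of scale-invariance check involved (using an illustrative synthetic sample of city populations rather than the real census data processed by CPDynSG), the power-law exponent of a city-size distribution can be estimated with the standard maximum-likelihood (Hill) estimator:

    // Minimal sketch: maximum-likelihood (Hill) estimate of the power-law
    // exponent of a city-size distribution. The populations below are an
    // illustrative synthetic sample, not real census data used by CPDynSG.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // t_hat = 1 + n / sum_i ln(x_i / x_min), taken over all x_i >= x_min
    double power_law_exponent(const std::vector<double>& x, double x_min) {
        double sum_log = 0.0;
        int n = 0;
        for (double v : x)
            if (v >= x_min) { sum_log += std::log(v / x_min); ++n; }
        return 1.0 + n / sum_log;
    }

    int main() {
        // synthetic "city populations", roughly Zipf-like (illustrative only)
        std::vector<double> pop = {2900000, 1430000, 1010000, 975000, 730000,
                                   650000,  510000,  480000,  390000, 330000,
                                   290000,  265000,  245000,  230000, 215000};
        double x_min = 200000.0;  // fit only the tail above this threshold
        std::printf("estimated exponent t = %.2f (literature: t close to 2)\n",
                    power_law_exponent(pop, x_min));
        return 0;
    }

Applied to the tail of a real city-size dataset, an estimate close to 2 would be consistent with the power-law form mentioned above.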

The typical technical parameters for running the DG-enabled version of the CPDynSG application on the local private IMP Desktop Grid (DG) infrastructure are:

Scientific Results

In June–September 2010, results on the concept and implementation of porting the CPDynSG application to DCI on the basis of the SZTAKI DG/BOINC platform were obtained for city size distributions in several Central and Eastern European countries. The distinctive isolation of the city size distribution in Hungary was noted, and a very high similarity in the evolution of city size distributions in Ukraine and Poland was found. These results were reported during the Cracow Grid Workshop'10 (October 11–13, 2010) in oral and poster[7] presentations. The poster presentation received the award "The Best Poster of the Cracow Grid Workshop'09".

Future Plans

The current version of the CPDynSG application is planned to be upgraded with stable checkpointing, new functionality, and support for NVIDIA GPU computing, which is estimated to speed up the analysis by 50 to 200%.

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) over DCI

LAMMPS over DCI
Developer(s) IMP NASU (wrapper for DCI), Sandia National Laboratories (LAMMPS itself)
Initial release June 4, 2010 (2010-06-04)
Development status Alpha
Written in C, C++
Operating system Linux (32-bit), Windows (32-bit)
Platform BOINC, SZTAKI Desktop Grid
Type Grid computing, Volunteer computing

The search for new nanoscale functional devices is considered an "El Dorado" that drives a modern "gold rush" in materials science. However, controlled fabrication of nanoscale functional devices requires careful selection and tuning of the critical parameters (elements, interaction potentials, regimes of external influence, temperature, etc.) of atomic self-organization into the designed patterns and structures. Molecular dynamics simulations of nanofabrication processes with decomposition over physical parameters, i.e. parameter sweeping in a brute-force manner, are therefore very promising. For this purpose, the popular non-commercial open-source package Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) by Sandia National Laboratories was selected as a candidate for porting to DCI on the basis of a DG, using "parameter sweeping" parallelism. Simulation of nanoobjects with many parameters usually requires powerful computational resources: a typical simulation of one investigated nanostructure under a single configuration of physical parameters (for instance, a metal single crystal of Al, Cu, or Mo with 10⁷ atoms, embedded-atom potentials, and 1–10 picoseconds of simulated physical time) takes approximately 1–7 days on a single modern CPU. Deploying LAMMPS on a Grid computing infrastructure and utilising hundreds of machines at the same time provides enough computational power to undertake the simulations over a wider range of physical parameters (configurations) and in a much shorter timeframe.
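
The "parameter sweeping" parallelism can be sketched as follows: a small generator writes one independent LAMMPS input script per point of the parameter grid, and each script then becomes a separate work unit. The script template, the swept parameters (temperature and lattice constant), and the potential file name (Al99.eam.alloy) below are illustrative assumptions, not the actual IMP NASU work units.

    // Minimal sketch of "parameter sweep" work-unit generation for LAMMPS:
    // one independent input script per (temperature, lattice constant) pair.
    // The script template and potential file are illustrative assumptions.
    #include <cstdio>
    #include <fstream>
    #include <vector>

    int main() {
        const std::vector<double> temperatures = {100, 300, 500, 700};  // K
        const std::vector<double> lattice_a    = {4.00, 4.05, 4.10};    // Angstrom

        for (double T : temperatures)
            for (double a : lattice_a) {
                char name[64];
                std::snprintf(name, sizeof(name), "in.sweep_T%.0f_a%.2f", T, a);
                std::ofstream in(name);
                in << "# auto-generated LAMMPS input: T = " << T << " K, a = " << a << " A\n";
                in << "units        metal\n";
                in << "atom_style   atomic\n";
                in << "lattice      fcc " << a << "\n";
                in << "region       box block 0 20 0 20 0 20\n";
                in << "create_box   1 box\n";
                in << "create_atoms 1 box\n";
                in << "pair_style   eam/alloy\n";
                in << "pair_coeff   * * Al99.eam.alloy Al\n";
                in << "velocity     all create " << T << " 12345\n";
                in << "fix          1 all nvt temp " << T << " " << T << " 0.1\n";
                in << "timestep     0.001\n";
                in << "run          10000\n";
                std::printf("generated work unit: %s\n", name);
            }
        return 0;
    }

Because the generated jobs share no data, the whole sweep maps directly onto a bag-of-tasks desktop grid.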

The typical technical parameters for running the DG-enabled version of the LAMMPS over DCI application on the local private IMP Desktop Grid (DG) infrastructure are:

Scientific Results

In September–October 2010, the first results were obtained and reported in an oral presentation at the International Conference "Nanostructured Materials-2010", Kiev, Ukraine.[8]

Future Plans

The current version of the LAMMPS over DCI application is planned to be upgraded with stable checkpointing, new functionality, and support for NVIDIA GPU computing, which is estimated to speed up the analysis by 300 to 500%.

An additional goal is migration to the OurGrid platform in order to test and demonstrate potential mechanisms of interoperation between worldwide communities using different DCI paradigms. OurGrid targets peer-to-peer desktop grids, which are fundamentally different in nature from volunteer-computing desktop grids such as SZTAKI Desktop Grid.

Partners

SLinCA@Home collaborates with

Awards

See also

References

  1. ^ "BOINCstats project statistics", http://boincstats.com/stats/project_graph.php?pr=SLinCA, retrieved March 16, 2011.
  2. ^ SLinCA@Home Server Status.
  3. ^ "Comparison with TOP500 supercomputers, June 2005", http://www.top500.org/list/2005/06/500, retrieved March 16, 2011.
  4. ^ O. Gatsenko; O. Baskova; Y. Gordienko (March 2009). "Kinetics of Defect Aggregation in Materials Science Simulated in Desktop Grid Computing Environment Installed in Ordinary Material Science Lab". Proceedings of the 3rd AlmereGrid Workshop. http://www.edges-grid.eu:8080/c/document_library/get_file?p_l_id=11065&folderId=80175&name=DLFE-1626.pdf. Retrieved March 16, 2011.
  5. ^ O. Baskova; O. Gatsenko; Y. Gordienko (March 2009). "Porting Multiparametric MATLAB Application for Image and Video Processing to Desktop Grid for High-Performance Distributed Computing". Proceedings of the 3rd AlmereGrid Workshop. http://www.edges-grid.eu:8080/c/document_library/get_file?p_l_id=11065&folderId=80175&name=DLFE-1627.pdf. Retrieved March 16, 2011.
  6. ^ O. Baskova; O. Gatsenko; O. Lodygensky; G. Fedak; Y. Gordienko (January 2011). "Statistical Properties of Deformed Single-Crystal Surface under Real-Time Video Monitoring and Processing in the Desktop Grid Distributed Computing Environment". Key Engineering Materials 465: 306–309. http://www.scientific.net/KEM.465.306. Retrieved March 16, 2011.
  7. ^ a b O. Gatsenko; O. Baskova; Y. Gordienko (February 2011). "Simulation of City Population Dynamics and Sustainable Growth in Desktop Grid Distributed Computing Infrastructure". Proceedings of the Cracow Grid Workshop'10. https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B9xsF99eBx6rMzM5MWRmYWMtMGIyYi00NTAxLWFjNDgtYmI2NTEzNWM1ZWE0&hl=en&authkey=CMTypskG. Retrieved March 16, 2011.
  8. ^ O. Baskova; O. Gatsenko; O. Gontareva; E. Zasimchuk; Y. Gordienko (19–22 October 2010). "Scale-Invariant Aggregation Kinetics of Nanoscale Defects of Crystalline Structure" (Russian: Масштабно-инвариантная кинетика агрегации наноразмерных дефектов кристаллического строения). Online Proceedings of "Nanostructured Materials-2010" (in Russian). http://www.nas.gov.ua/conferences/nano2010/program/22/Documents/u79_Gordiyenko.pdf. Retrieved March 16, 2011.
  9. ^ O. Baskova; O. Gatsenko; Y. Gordienko (February 2010). "Scaling-up MATLAB Application in Desktop Grid for High-Performance Distributed Computing - Example of Image and Video Processing". pp. 255–263. ISBN 9788361433019. http://www.cyfronet.krakow.pl/cgw09/img-posters/18.pdf. Retrieved March 16, 2011.

External links