From intelligent data acquisition via smart data management to confident predictions
Images contain very rich information, and digital cameras combined with image processing and analysis can detect and quantify a range of patterns and processes. The valuable information is, however, often sparse, and the ever-increasing speed at which data is collected results in data volumes that exceed the available computational resources.
The HASTE project takes a hierarchical approach to acquisition, analysis, and interpretation of image data. We develop computationally efficient measurements for data description, confidence-driven machine learning for determination of interestingness, and a theory and framework to apply intelligent spatial and temporal information hierarchies, distributing data to computational resources and storage options based on low-level image features.
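As a minimal sketch of the idea of routing data by low-level image features, the snippet below scores each frame with a cheap "interestingness" measure (here, normalised intensity variance) and assigns it to a storage tier. The function names, thresholds, and the variance-based score are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def interestingness(image: np.ndarray) -> float:
    """Cheap low-level feature: intensity variance normalised by mean^2.
    (An illustrative proxy, not HASTE's actual measurement.)"""
    return float(image.var() / (image.mean() ** 2 + 1e-9))

def route(image: np.ndarray, hot_threshold: float = 0.5,
          warm_threshold: float = 0.1) -> str:
    """Assign an image to a storage/compute tier by its score."""
    score = interestingness(image)
    if score >= hot_threshold:
        return "hot"    # full analysis on fast compute
    if score >= warm_threshold:
        return "warm"   # compressed, queued for later analysis
    return "cold"       # archived cheaply, e.g. near-empty frames

rng = np.random.default_rng(0)
flat = np.full((64, 64), 100.0)          # uniform frame, zero variance
noisy = rng.uniform(0, 255, (64, 64))    # high-variance frame
print(route(flat), route(noisy))         # flat -> cold, noisy -> warm
```

In a full pipeline such a score would be computed at acquisition time, so that uninteresting data never consumes expensive compute or fast storage.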
Everyone presented their latest work, and discussed the latest image datasets from AstraZeneca and Vironova. During the software workshop session, we discussed linking the HASTE cloud pipeline to the Vironova MiniTEM. Thanks to: Carolina Wählby, Ola Spjuth, Andreas Hellander, Ida-Maria Sintorn, Alan Sabirsh, Ernst Ahlberg Helgee, Johan Karlsson, Håkan Wieslander, Philip Harrison, Salman Toor, Ben …
HASTE has been featured in ‘Framtidens Forskning’: “As more and more instruments are generating more and more data, we need new methods to avoid drowning in data volumes. Our tools make it possible to know in advance where to focus the analysis, which greatly reduces time-consuming work and streamlines resource usage,” said Prof. Carolina Wählby, …
The HASTE team are pleased to announce a new publication on the arXiv pre-print service: ‘Apache Spark Streaming and HarmonicIO: A Performance and Architecture Comparison’. We performed a benchmark analysis comparing two stream-processing frameworks: the popular Apache Spark framework, widely used in industry, and our own framework HarmonicIO (presented …
The HASTE project takes a holistic approach to new, intelligent ways of processing and managing very large amounts of microscopy images to leverage the imminent explosion of image data from modern experimental setups in the biosciences. One central idea is to represent datasets as intelligently formed and maintained information hierarchies, and to prioritize data acquisition and analysis to certain regions/sections of data based on automatically obtained metrics for usefulness and interestingness. To arrive at such smart systems for scientific discovery in image data, we will pursue a range of topics such as efficient data mining in image data, machine learning models with quantifiable confidence that learn an object’s interestingness, and development of intelligent and efficient cloud systems capable of mapping data and compute to a variety of cloud computing and data storage e-infrastructure based on the quality and interestingness of the data.
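One ingredient mentioned above is machine learning with quantifiable confidence. As a toy illustration of the principle (the threshold and three-way decision are assumptions for this sketch, not HASTE's actual confidence framework), a classifier can commit to a label only when its estimated probability is decisive, and otherwise defer the object to more expensive analysis:

```python
def classify_with_confidence(p_interesting: float,
                             threshold: float = 0.8) -> str:
    """Label an object only when the model is confident enough.

    p_interesting -- model-estimated probability that the object is
    interesting; uncertain cases are deferred rather than guessed.
    """
    if p_interesting >= threshold:
        return "interesting"
    if p_interesting <= 1 - threshold:
        return "not interesting"
    return "defer"  # low confidence: route to fuller analysis

print(classify_with_confidence(0.95))  # decisively positive
print(classify_with_confidence(0.50))  # ambiguous, so deferred
```

The point of such a scheme is that prioritisation decisions are only made automatically where the model's uncertainty is low, keeping the human or the full pipeline in the loop for borderline data.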
We will focus our efforts on microscopy data, and work in three specific areas where image collection results in data volumes difficult to handle with today’s computational resources, namely:
Large-scale time-lapse experiments exploring the dynamics of cells and drug-delivery particles, in collaboration with AstraZeneca.
Nanometer-resolution transmission electron microscopy data, in collaboration with Vironova AB.
Multi-modal digital pathology data, in collaboration with SciLifeLab Sweden.
We expect the resulting methodologies and frameworks to be highly relevant also for other scientific and industrial applications, including surveillance, predictive maintenance and quality control.
The project is a collaboration between the Wählby lab (PI) and the Hellander lab (co-PI), both at the Department of Information Technology, Uppsala University; the Spjuth lab (co-PI) at the Department of Pharmaceutical Biosciences, Uppsala University; the Nilsson lab at the Department of Biochemistry and Biophysics, Stockholm University and SciLifeLab; Vironova AB; and AstraZeneca AB.
The HASTE project is funded by the Swedish Foundation for Strategic Research (SSF), under the call “Big Data and Computational Science”. See the press release here. The publications arising from the project are solely the responsibility of the authors and do not necessarily reflect the views of this agency.