Description
Powerful detectors at modern experimental facilities routinely collect data at multiple GB/s. Online analysis methods are needed to capture only the interesting subsets of such massive data streams, for example by explicitly discarding some data elements or by directing instruments to relevant areas of experimental space. Such online analyses require methods for configuring and running high-performance distributed computing pipelines, which we call flows, that link instruments, data center computers (e.g., for analysis, simulation, and AI model training), edge computing (for analysis), data stores, metadata catalogs, and high-speed networks. In this talk, I review common patterns associated with such flows, describe methods for instantiating those patterns, and present experiences applying these methods to data from a range of experimental facilities, each of which engages HPC resources for data inversion, machine learning model training, or other purposes. I also discuss the implications of these new methods for operators and users of scientific facilities.
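To make the flow pattern above concrete, the following is a minimal Python sketch that models a flow as an ordered sequence of steps, each consuming the previous step's output. The step functions (capture_scan, analyze_on_edge, train_on_hpc, register_metadata) and the run_flow driver are hypothetical placeholders for illustration only; they do not represent the speaker's actual framework or any particular flow-orchestration API.

# Illustrative sketch only: a "flow" modeled as an ordered list of steps,
# each consuming the previous step's output. All functions below are
# hypothetical stand-ins for real instrument, edge, HPC, and catalog services.

from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class Step:
    name: str                  # human-readable label, e.g. "edge-analysis"
    run: Callable[[Any], Any]  # action applied to the previous step's output


def run_flow(steps: List[Step], initial_input: Any) -> Any:
    """Execute steps in order, passing each result to the next step."""
    data = initial_input
    for step in steps:
        print(f"Running step: {step.name}")
        data = step.run(data)
    return data


# Hypothetical step implementations (placeholders for detector readout,
# edge-based data reduction, HPC model training, and metadata cataloging).
def capture_scan(scan_id):
    return {"scan": scan_id, "frames": list(range(5))}

def analyze_on_edge(raw):
    # e.g., keep only "interesting" frames to reduce the data stream
    raw["frames"] = [f for f in raw["frames"] if f % 2 == 0]
    return raw

def train_on_hpc(reduced):
    reduced["model"] = f"model-trained-on-{len(reduced['frames'])}-frames"
    return reduced

def register_metadata(result):
    result["catalog_entry"] = f"catalog://scans/{result['scan']}"
    return result


if __name__ == "__main__":
    flow = [
        Step("capture", capture_scan),
        Step("edge-analysis", analyze_on_edge),
        Step("hpc-training", train_on_hpc),
        Step("catalog", register_metadata),
    ]
    print(run_flow(flow, initial_input="scan-001"))

In practice, each step would invoke a remote service (instrument control, edge analysis cluster, HPC scheduler, metadata catalog) rather than a local function, but the chaining of steps shown here is the essence of the flow pattern the abstract describes.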