processing and machine learning, with scalability as a common thread.
I have a background in parallel computing and foundational problems
motivated by HPC, which I investigated during my PhD and my years as a
postdoc in Vienna.
In recent years, at the CRS4 research center, I have worked on
building efficient and scalable big data and machine learning
workflows. I have processed streams of genomic and industrial data
(using Apache Kafka and Flink) and, within the DeepHealth European
project, worked on the classification of gigapixel medical images
using both TensorFlow and the specialized EDDL ML library. In this
context I built a scalable pipeline to efficiently manage datasets
with Apache Spark and Cassandra.