Seminar held by Matteo Bregonzio: "AI applied to healthcare and cybersecurity"

Astrobicocca
Room U1-12 and online: https://meet.google.com/bgj-schz-ymf

 

AI applied to healthcare and cybersecurity

Today Datrix helps public and private companies find and accelerate innovation paths in the digital sphere. Thanks to AI it is possible to gather and analyse large volumes of data in order to better understand processes and extract insights on how to improve services. As an example, Datrix is currently working on multiple healthcare and cybersecurity projects where AI is adopted to overcome limitations and speed up processing; in both cases, the ability to process and extract information from Big Data is a game changer in helping professionals make the right decisions in a timely fashion.

Specifically, in the healthcare/oncology sector we are researching and developing a medical imaging platform capable of collecting, processing and analysing hyperspectral data acquired by coherent Raman microscopes. These next-generation bio-photonic devices are based on vibrational spectroscopy and have the potential to revolutionise the study of the cellular origin of diseases, opening the way to novel approaches towards personalised therapy. Deep learning algorithms are developed to address multiple challenges, including spectral denoising, removal of the acquisition equipment's signature, cellular classification, and disease detection. On this subject, we recently released a Cloud-based platform widely used by researchers and healthcare professionals (https://ramapp.io/).
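As an illustration of the spectral denoising task mentioned above, the sketch below shows a minimal 1D convolutional denoising autoencoder for Raman-like spectra. It is a generic example under assumed shapes and names (SpectralDenoiser, n_wavenumbers = 1024), not the actual RamApp model.

```python
# Hypothetical sketch: a 1D convolutional denoising autoencoder for Raman spectra.
# All names, shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class SpectralDenoiser(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # Encoder: learn noise-robust features along the wavenumber axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(channels, channels * 2, kernel_size=7, padding=3),
            nn.ReLU(),
        )
        # Decoder: reconstruct the clean spectrum at the original resolution.
        self.decoder = nn.Sequential(
            nn.Conv1d(channels * 2, channels, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=7, padding=3),
        )

    def forward(self, x):  # x: (batch, 1, n_wavenumbers)
        return self.decoder(self.encoder(x))

# Training outline: pairs of (noisy, reference) spectra, MSE reconstruction loss.
model = SpectralDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

noisy = torch.randn(8, 1, 1024)   # stand-in for noisy acquired spectra
clean = noisy.clone()             # stand-in for reference spectra
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
```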
Within the cybersecurity space, Datrix is working on AIA Guard (http://aiaguard.com), the first European solution for assessing Adversarial Machine Learning vulnerabilities. Artificial intelligence can be both a blessing and a curse for cybersecurity: adding AI modules to business products also means introducing new vulnerabilities that can lead to data theft, model drift, and performance degradation. In this context, statistical learning and deep learning are used to analyse both the AI algorithm and the datasets used for training in order to identify vulnerabilities. This is achieved via a multi-step analysis that includes data privacy and sanitization at the dataset level, static program analysis, penetration testing and data poisoning. Importantly, this hybrid infrastructure is designed to be GDPR compliant and capable of monitoring, detecting and mitigating AI model vulnerabilities.
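To give a concrete flavour of one kind of adversarial check, the sketch below probes a classifier for evasion weaknesses using the fast gradient sign method (FGSM) and compares clean versus adversarial accuracy. The model, data and threshold logic are placeholders chosen for illustration; this is not AIA Guard's implementation.

```python
# Hypothetical sketch: probing a classifier for evasion vulnerabilities with FGSM.
# Victim model and data are toy placeholders, not AIA Guard code.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.05):
    """Return an adversarially perturbed copy of x within an L-infinity ball."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy victim model standing in for a customer's AI module under assessment.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(128, 20)
y = torch.randint(0, 2, (128,))

clean_acc = (model(x).argmax(dim=1) == y).float().mean()
adv_acc = (model(fgsm_perturb(model, x, y)).argmax(dim=1) == y).float().mean()
print(f"accuracy clean: {clean_acc:.2f}, under FGSM: {adv_acc:.2f}")
# A large gap between the two accuracies flags a robustness issue to report
# alongside the data-privacy, static-analysis and poisoning checks.
```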
