September 30, 2020, 13:00 – 14:00 CET

At the end of September, AIT is organizing a webinar together with YEAR. The webinar is about explainable and interpretable Artificial Intelligence. Register here and share the event with your friends and co-workers.

Artificial Intelligence, and specifically machine learning, is used in many different domains. These models promise high precision and cost efficiency. However, they have a significant drawback: they act as black-box models, meaning their decisions are not apparent to the user and cannot be explained. This is why the field of Interpretable Machine Learning (IML), also known as Explainable AI (XAI), exists. This research field focuses on approaches that can explain and interpret the decisions made by AI models.

OVERVIEW OF THE WEBINAR

In this webinar you will learn about:

  • Background of XAI and state-of-the-art approaches
  • How to bring transparency to Predictive Maintenance?
  • How to bring transparency to Computer Vision?
  • How to bring transparency to Fake News Detection?

BIOGRAPHIES

Anahid Jalali

Anahid is a data scientist at the Austrian Institute of Technology, Center for Digital Safety and Security. She researches and develops machine learning applications for multiple fields such as Predictive Maintenance (smart industry) and audio and speech analysis. Her current research involves the transparency of black-box models on industrial time-series data: explaining the decisions made by these models, detecting faults and biases in the data, and improving model performance by understanding when the model fails and when it succeeds. These steps help narrow the gap between research and industry.

  • Title of the Talk: Explainable and Interpretable AI: The story so far.
  • Abstract of the Talk: Machine learning offers robust performance and is becoming increasingly present in many different applications. We must continue researching its performance improvement and optimization problems. However, we should also highlight its disadvantages, such as its lack of transparency. This downside raises the question: “Why should we trust such networks over our experts and shallow machine learning algorithms, which can explain their decisions and how they arrived at their judgments?” This talk covers the explainability problem, expectations, and state-of-the-art approaches used to interpret models; a small illustrative example of one such approach is sketched below.
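
To give a flavour of what such approaches look like in practice, here is a minimal sketch (not taken from the talk) that uses the open-source shap and scikit-learn packages to explain individual predictions of a black-box model; the dataset and model below are placeholders chosen only for illustration.

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Placeholder data and an opaque ensemble model standing in for a real black box.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Ask SHAP which features pushed an individual prediction up or down.
explainer = shap.Explainer(model.predict_proba, X)
explanation = explainer(X.iloc[:5])

# Per-feature contributions for the first instance and the positive class.
print(explanation.values[0, :, 1])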

Denis Katic

Denis is a research engineer at the Austrian Institute of Technology, Center for Digital Safety and Security. His research includes applications of machine learning for industrial data science and health care, which are considered critical domains and require explanations of a model’s behavior and decisions.

  • Title of the Talk: Explainable machine learning and its capabilities in the field of computer vision.
  • Abstract of the Talk: Methods of explainable ML are generally used to understand individual decisions and the factors that led to them. The results and experiences gained from these methods can be used in various post-processing steps. In this talk we will show an example of how we trace individual explainable ML results back to the training process in order to obtain a basic understanding of the model and of possible improvements; a bare-bones illustration of one such explanation technique follows below.
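
As a point of reference only (this is not Denis's method), here is a bare-bones gradient saliency map in PyTorch, one of the simplest ways to see which pixels influenced a CNN's decision; the untrained ResNet-18 and the random image are placeholders.

import torch
import torchvision.models as models

# Placeholder model and input: an untrained ResNet-18 and a random RGB image.
model = models.resnet18().eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate from the top-scoring class.
scores = model(image)
scores[0, scores.argmax()].backward()

# The per-pixel gradient magnitude serves as a crude importance heat map.
saliency = image.grad.abs().max(dim=1)[0]  # shape: (1, 224, 224)
print(saliency.shape)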

Mina Schütz

Mina is currently working as a PhD student at the Austrian Institute of Technology (AIT) in Vienna, Austria. Before starting her dissertation in May, she wrote her master’s thesis at AIT, titled “Detection and Identification of Fake News: Binary Content-Classification with Pre-Trained Language Models”. She completed both her Master’s and Bachelor’s degrees in Information Science at the University of Applied Sciences in Darmstadt, Germany, where she specialized in machine learning, natural language processing, visual analytics, and information extraction/architecture.

  • Title of the Talk: Explainability of Transformers on Textual Content with a Fake News Dataset
  • Abstract of the Talk: Recent studies of fake news classification have shown that Transformers and language models are a promising approach for natural language processing downstream tasks because they capture the context of a word in a sentence. They are pre-trained on a neutral corpus and can be fine-tuned for a specific task, such as automatic fake news detection (a minimal sketch of this setup follows below). The results achieved in this work show that they already reach high accuracy on short titles of news articles, but provide no information about what a decision is based on. Therefore, this talk presents an approach for gaining further insight into the prediction outcomes of those models.
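
For readers unfamiliar with this setup, the following minimal sketch (not the thesis pipeline) shows how a pre-trained Transformer is loaded with a two-label classification head using the Hugging Face transformers library; the checkpoint name and the example headline are placeholders, and the classification head remains untrained until fine-tuning.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint; the 2-label head is randomly initialised until fine-tuned.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# Score a single (placeholder) news headline.
inputs = tokenizer("Example news headline", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Class probabilities (e.g. real vs. fake) once the model has been fine-tuned.
print(torch.softmax(logits, dim=-1))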

Interested? Register for the webinar!