BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//6.3//EN
X-WR-TIMEZONE:Asia/Jerusalem
BEGIN:VEVENT
UID:265@dds.technion.ac.il
DTSTART;TZID=Asia/Jerusalem:20221117T133000
DTEND;TZID=Asia/Jerusalem:20221117T143000
DTSTAMP:20221102T111358Z
URL:https://dds.technion.ac.il/iemevents/transformer-explainability-beyond
 -accountability/
SUMMARY:Transformer Explainability beyond Accountability
DESCRIPTION:By:  Hila Chefer\n Advisors: \n Where: Cooper 112 From:\nTel Av
 iv University\nAbstract:\n\nTransformers have revolutionized deep learning
  research across many disciplines\, starting from NLP and expanding to vis
 ion\, speech\, and more. In my talk\, I will explore several milestones to
 ward interpreting all families of Transformers\, including unimodal\, bi-m
 odal\, and encoder-decoder Transformers. I will present working examples a
 nd results that cover some of the most prominent models\, including CLIP\,
  ViT\, and LXMERT.\n\nI will then present our recent explainability-driven
  fine-tuning technique that significantly improves the robustness of Visio
 n Transformers (ViTs). The loss we employ ensures that the model bases its
  prediction on the relevant parts of the input rather than supportive cues
 (e.g.\, background).
CATEGORIES:Computational Data Science (CDS) Seminar,Seminars
LOCATION:Cooper 112
END:VEVENT
BEGIN:VTIMEZONE
TZID:Asia/Jerusalem
X-LIC-LOCATION:Asia/Jerusalem
BEGIN:STANDARD
DTSTART:20221030T010000
TZOFFSETFROM:+0300
TZOFFSETTO:+0200
TZNAME:IST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR