2nd webinar, 6th July 2022, from 4:00 to 6:20 pm

Topics: Cloud & Software Architecture, Artificial Intelligence (AI), Subtitles & Metadata

The second webinar of the DeafIT Academy will take place online on Wednesday, July 6th, from 4:00 pm to 6:20 pm (CEST) on the DeafIT Academy website. In this webinar, three presentations from the canceled DeafIT Conference Online 2022 in March will be made up, covering cloud & software architecture, artificial intelligence (AI), subtitles, and metadata. During the webinar, you will have the opportunity to ask the speakers questions about their lectures. The webinar is held in German Sign Language (DGS) and German or English spoken language, with German and English subtitles. If you are interested in participating in this webinar, get a ticket here!

These are the speakers & lectures:

CQRS 3.0 – A concept goes with the times

Over years of using CQRS in various projects, it has become clear that CQRS is not one single pattern but appears in different forms, depending on the application.

CQRS 3.0 represents this form in a serverless world: the consistent separation of application concerns into reading and writing within a serverless architecture. It turns the previous technical model on its head. It is simpler, clearer, easier to implement, and requires no framework.

In this session, I will show how CQRS 3.0 is implemented in the cloud, using Microsoft Azure serverless components, C#, and perhaps some JavaScript.
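The core idea of the talk, separating an application's write side from its read side, can be illustrated with a minimal, language-agnostic sketch. The talk itself uses C# and Azure serverless components; the Python below (with hypothetical `CreateOrder` and `OrderReadModel` names) only shows the pattern: commands go through a command handler to an append-only write store, a projection updates a denormalized read model, and queries read only from that model.

```python
from dataclasses import dataclass

# Write side: state changes only via commands handled by a command handler.
@dataclass
class CreateOrder:
    order_id: str
    amount: float

class CommandHandler:
    def __init__(self, event_log, projections):
        self.event_log = event_log          # append-only write store
        self.projections = projections      # read-model updaters

    def handle(self, cmd: CreateOrder):
        event = {"type": "OrderCreated",
                 "order_id": cmd.order_id, "amount": cmd.amount}
        self.event_log.append(event)
        # In a serverless architecture this step would typically be an
        # asynchronous trigger (e.g. a queue- or event-driven function).
        for project in self.projections:
            project(event)

# Read side: queries hit a denormalized read model, never the write store.
class OrderReadModel:
    def __init__(self):
        self.orders = {}

    def project(self, event):
        if event["type"] == "OrderCreated":
            self.orders[event["order_id"]] = event["amount"]

    def get_order(self, order_id):
        return self.orders.get(order_id)
```

The separation is what makes the serverless mapping natural: the command handler, each projection, and each query can become independent functions connected by events.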

Cloud & Software Architecture

Janek Fellien, hearing, German Spoken Language
medialesson GmbH, Cloud Software Architect & Consultant, Berlin (GER)

Automated speaker identification for subtitles

Subtitles for audiovisual media are often produced without speaker identification. This information is an important prerequisite for SDH (Subtitles for the Deaf and Hard of Hearing). The lecture explores which ways AI-based technologies could offer to provide missing speaker information. It also describes how the annotation of speaker metadata could be used for different strategies of speaker identification.
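One way AI-based speaker information can be attached to existing subtitles is to align the output of a speaker diarization model (segments of "who spoke when") with the subtitle cues by time overlap. The sketch below assumes hypothetical diarization output and cue data; the overlap-matching logic is the part that carries the idea.

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length in seconds of the intersection of two time intervals."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def annotate(cues, diarization):
    """Assign each subtitle cue the speaker whose diarization segment
    overlaps it the most."""
    annotated = []
    for start, end, text in cues:
        speaker = max(diarization,
                      key=lambda seg: overlap(start, end, seg[1], seg[2]))[0]
        annotated.append((speaker, text))
    return annotated

# Hypothetical diarization output: (speaker_label, start_s, end_s)
diarization = [("SPEAKER_1", 0.0, 4.2), ("SPEAKER_2", 4.2, 9.0)]
# Subtitle cues without speaker info: (start_s, end_s, text)
cues = [(0.5, 3.8, "Hello, welcome."),
        (4.5, 8.0, "Thanks for having me.")]
```

The resulting labels could then be written back as speaker metadata, for example as WebVTT voice spans (`<v Speaker>`), so downstream SDH renderers can display them.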

Artificial Intelligence (AI), Subtitles, Metadata

Andreas Tai, hard of hearing, German Spoken Language
IT Consultant for Accessible Media Technologies, Andreas Tai, Munich (GER)

Real-time Marketing Attribution with AI and Google Cloud

In this session, we will show how you can establish real-time marketing attribution with the help of Vertex AI (Google Cloud).
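Marketing attribution means distributing credit for a conversion across the touchpoints (channels) in a user's journey. The talk covers an AI-based, real-time setup on Vertex AI; as background, a minimal rule-based baseline such as linear attribution can be sketched in a few lines (the `journeys` data below is invented for illustration).

```python
def linear_attribution(journeys):
    """Distribute each conversion's credit equally across its touchpoints.

    journeys: list of (touchpoint_channels, converted) pairs.
    Returns a dict mapping channel -> total attributed credit.
    """
    credit = {}
    for touchpoints, converted in journeys:
        if not converted or not touchpoints:
            continue
        share = 1.0 / len(touchpoints)
        for channel in touchpoints:
            credit[channel] = credit.get(channel, 0.0) + share
    return credit

# Hypothetical customer journeys: channels touched, and whether a
# conversion followed.
journeys = [
    (["search", "email", "display"], True),
    (["display"], True),
    (["email"], False),
]
```

Model-based (data-driven) attribution, as in the session, replaces such fixed rules with credit learned from observed conversion data.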

Artificial intelligence (AI)

Marcus Stade & Patrick Mohr, hearing, German Spoken Language
Head of Analytics, Co-Founder & Managing Partner, Co-Founder – Mohrstade, Munich (GER)

The Programme on Wednesday, 6th July 2022:

4:00 – 4:15 pm: Greeting
4:15 – 4:55 pm: Cloud & Software Architecture by Janek Fellien “CQRS 3.0 – A concept goes with the times”
4:55 – 5:00 pm: Break
5:00 – 5:35 pm: Artificial Intelligence (AI), Subtitles, Metadata by Andreas Tai “Automated speaker identification for subtitles”
5:35 – 5:40 pm: Break
5:40 – 6:10 pm: Artificial Intelligence (AI) by Marcus Stade & Patrick Mohr “Real-time Marketing Attribution with AI and Google Cloud”
6:10 – 6:20 pm: Closing