
Zein Al Abidin Kamel Ibrahim

Associate professor
Computer Science - Statistics department - Section I - Hadath
Speciality: Computer Science
Specific Speciality: Computer Science: Multimedia and Pattern Recognition

Positions
2012 - 2014 : Teacher

Lebanese International University
Lebanon

2012 - 2014 : Teacher

Islamic University of Lebanon
Lebanon

2010 - 2011 : ATER (temporary teaching and research associate)

University of Angers
Angers - France

2009 - 2010 : ATER

University of Caen
Caen - France

2008 - 2009 : Postdoctoral Researcher

INRIA - IRISA
Rennes - France

2005 - 2008 : Part-time Lecturer (vacataire)

Institute of Limayrac
Toulouse - France

Teaching: 7 courses taught
(2014-2015) Info 404 - Advanced Object-Oriented Programming

M1 Computer Sciences

(2014-2015) Info 446 - Advanced Web Technologies

M1 Computer Sciences

(2014-2015) Info 430 - Multimedia

M1 Computer Sciences

(2014-2015) Info 447 - Machine Learning

M1 Computer Sciences

(2014-2015) Info 449 - Image, Video and Audio

M1 Computer Sciences

(2014-2015) Info 205 - Data structures

BS Computer Sciences

(2014-2015) Info 306 - Language Theory

BS Computer Sciences

Education
2004 - 2007: PhD

Image, Information, Hypermedia, Toulouse, France
Computer Science

Doctorate

2002 - 2003: M2R

Faculty of Science, Lebanese University
Computer Science

2001 - 2002:

Faculty of Science, Lebanese University
Computer Science

1998 - 2001:

Faculty of Science, Lebanese University
Computer Science

Publications: 13 publications
Ali Jezzini, Mohammed Ayache, Zein Al Abidin Ibrahim, and Lina Elkhansa. ECG Classification for Sleep Apnea Detection. ICABME, 2015.

Sleep apnea is a sleep-related breathing disorder that involves a decrease or complete halt in airflow despite an ongoing effort to breathe. The most common form of sleep apnea is obstructive sleep apnea (OSA), which is currently diagnosed using polysomnography (PSG) in sleep labs. This diagnostic technique is both expensive and inconvenient, as it requires a human expert to observe the patient overnight. New automated methods have been developed for sleep apnea detection using artificial intelligence algorithms, which are more convenient and comfortable for patients. The aim of this paper is twofold: first, to compare the well-known methods proposed in the literature, which may not have used the same features and/or the same dataset; second, to use a variety of classifiers that may not have been previously explored. In this paper, we explore different types of classifiers for sleep apnea detection. Statistical features are extracted and fed into different types of classifiers. The results show that the KNN classifier reaches an accuracy of 98.7%, outperforming all the other classifiers that have already been used in the literature.
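
As a rough companion to this abstract (and not the authors' code), the Python sketch below shows the general shape of such an approach: statistical features computed per ECG segment and fed to a KNN classifier. The feature set, the synthetic data, and the scikit-learn pipeline are illustrative assumptions rather than the paper's actual setup.

    # Minimal sketch (not the authors' code): KNN on per-segment statistical
    # features extracted from ECG recordings. Feature extraction and the data
    # are placeholders; the paper does not specify them here.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    def statistical_features(segment: np.ndarray) -> np.ndarray:
        """Simple per-segment statistics (illustrative choice of features)."""
        return np.array([segment.mean(), segment.std(),
                         segment.min(), segment.max(),
                         np.median(segment)])

    # Hypothetical data: one fixed-length ECG segment per row, label 1 = apnea.
    rng = np.random.default_rng(0)
    segments = rng.normal(size=(200, 6000))   # 200 one-minute segments (placeholder)
    labels = rng.integers(0, 2, size=200)     # placeholder annotations

    X = np.vstack([statistical_features(s) for s in segments])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.3, random_state=0)

    clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))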

Zein Al Abidin Ibrahim and Patrick Gros. About TV Stream Macro-segmentation: Approaches and Results. CRC Press, Taylor & Francis LLC, 2012.

In the last few decades, digital TV broadcasting has attracted considerable interest from users compared with traditional analog transmission. Many facilities are already available for capturing, storing, and sharing digital video content. However, navigating within TV streams is still considered an important challenge. From a user's point of view, a TV stream is a sequence of programs (P) and breaks (B). From a signal point of view, this stream is a continuous flow of video and audio frames, with no external markers of the start and end points of the included programs and no apparent structure. Most TV streams have no associated metadata to describe their structure, except the program guides produced by the TV channels. These program guides lack precision, since TV channels cannot predict the exact duration of live programs, for example. In addition, they do not provide any information about breaks such as commercials or trailers. To cope with this problem, TV stream macro-segmentation or structuring has been proposed as a promising solution in the domain of video indexing. TV stream macro-segmentation consists in precisely detecting the first and the last frames of all programs and breaks (commercials, trailers, station identification, bumpers) of a given stream, and then in annotating all these segments with metadata. This can be performed by (1) analyzing the metadata provided with the stream (EPG, EIT), or (2) analyzing the audiovisual stream to detect the precise start and end of programs and breaks. In this chapter, we provide a survey of the existing approaches in this field. We then discuss and compare the results of the different approaches to see how to benefit from the advantages and overcome the limitations of each of them.

Zein Al Abidin Ibrahim, Isabelle Ferrane, and Philippe Joly. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis. EURASIP Journal on Image and Video Processing, 2011.

We propose a novel approach for video classification based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that may well characterize its content and its structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments are conducted on a set of 242 video documents, and the results show the effectiveness of our proposals.
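
The paper builds the TRM from its own parametric representation of temporal relations, which is not reproduced here. As a loose illustration only, the sketch below tallies coarse Allen-style relations between every pair of segments drawn from two segmentations of the same document; the relation grouping and the example segmentations are assumptions made for brevity, not the paper's definition.

    # Illustrative sketch only: count coarse Allen-style relations between two
    # segmentations of the same document. The actual TRM uses a parametric
    # representation; this simplification just tallies qualitative relations.
    from collections import Counter

    def allen_relation(a, b):
        """Coarse qualitative relation between intervals a=(s,e) and b=(s,e).
        Allen's 13 relations are grouped into a few bins for brevity."""
        (s1, e1), (s2, e2) = a, b
        if e1 < s2:
            return "before"
        if e2 < s1:
            return "after"
        if e1 == s2:
            return "meets"
        if e2 == s1:
            return "met-by"
        if (s1, e1) == (s2, e2):
            return "equals"
        if s2 <= s1 and e1 <= e2:
            return "during"        # includes "starts" and "finishes"
        if s1 <= s2 and e2 <= e1:
            return "contains"      # includes "started-by" and "finished-by"
        return "overlaps"          # partial overlap in either direction

    def relation_histogram(seg_a, seg_b):
        """Tally the relation that holds for every pair of segments."""
        return Counter(allen_relation(a, b) for a in seg_a for b in seg_b)

    # Hypothetical segmentations (times in seconds): speaker turns vs. applause.
    speech   = [(0, 10), (12, 30), (35, 50)]
    applause = [(10, 12), (30, 34), (48, 55)]
    print(relation_histogram(speech, applause))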

Zein Al Abidin Ibrahim and Patrick Gros. TV Stream Structuring. EURASIP ISRN Journal on Signal Processing, 2011.

TV stream structuring consists in precisely detecting the first and the last frames of all the programs and breaks (commercials, trailers, station identification, bumpers) of a given stream and then in annotating all these segments with metadata. Usually, breaks are broadcast several times during a stream. Thus, the detection of these repetitions can be considered a key tool for stream structuring. After the detection stage, a classification method is applied to separate the repetitions into programs and breaks. In turn, break repetitions are then used to classify the segments that appear only once in the stream. Finally, the stream is aligned with an electronic program guide (EPG) in order to annotate the programs. Our experiments were carried out on a 22-day-long TV stream, and the results show the efficiency of the proposed method for TV stream structuring.
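
For intuition only, the sketch below shows the simplest possible form of the repetition-detection idea: grouping segments of a long stream that share an identical, precomputed signature. The signature scheme and the input data are hypothetical, and the paper's detection, classification, and EPG-alignment stages are considerably more elaborate than this.

    # Rough illustration only: group repeated segments in a long stream by
    # hashing coarse, precomputed segment signatures. The paper's repetition
    # detection, program/break classification, and EPG alignment go well
    # beyond this sketch.
    from collections import defaultdict

    def find_repetitions(signatures, min_repeats=2):
        """signatures: list of (start_time, end_time, signature_bytes).
        Returns the groups of segments sharing an identical signature."""
        groups = defaultdict(list)
        for start, end, sig in signatures:
            groups[sig].append((start, end))
        return {sig: occ for sig, occ in groups.items() if len(occ) >= min_repeats}

    # Hypothetical input: a commercial broadcast three times during the day.
    stream = [
        (100, 130, b"ad-42"), (5000, 5030, b"ad-42"), (90000, 90030, b"ad-42"),
        (200, 3800, b"news-morning"),
    ]
    for sig, occurrences in find_repetitions(stream).items():
        print(sig, "repeated at", occurrences)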

Zein Al Abidin Ibrahim, Patrick Gros, and Sebastien Campion. AVSST: An Automatic Video Stream Structuring Tool. Third Networked and Electronic Media Summit, 2010.

The aim of this paper is to present the tool that we have developed to automatically structure TV streams. The objective is to determine precisely the start and the end of broadcast TV programs (P). Usually, TV channels separate programs with breaks (B). These breaks can be commercials, trailers, station identification breaks (monochrome frames, for example), or bumpers. They may be broadcast several times in the stream. The detection of these repetitions is the key to our method for structuring the TV stream. After the detection step, a classification method is applied to separate repeated program content from repeated breaks. The latter are used to segment the stream into a program/break sequence. Finally, the segmented stream is aligned with the metadata provided with the stream, such as the Electronic Program Guide (EPG), in order to provide labeled programs. Experiments conducted on a 22-day-long TV stream show the effectiveness of our method.

Benjamin Bigot, Isabelle Ferrane, and Zein Al Abidin Ibrahim. Towards the Detection and the Characterization of Conversational Speech Zones in Audiovisual Documents. International Workshop on Content-Based Multimedia Indexing (CBMI), 2008.

Giving access to the semantically rich content of large amounts of digital audiovisual data using an automatic and generic method is still an important challenge. The aim of our work is to address this issue while focusing on temporal aspects. Our approach is based on a method previously developed for analyzing temporal relations from a data mining point of view. This method is used to detect zones of a document in which two characteristics are active. These characteristics can result from low-level segmentations of the audio or video components, or from more semantic processing. Once "activity zones" have been detected, we propose to compute a set of additional descriptors in order to better characterize them. The method is applied in the scope of the EPAC project, which focuses on the detection and the characterization of conversational speech.
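
As a minimal illustration of the "activity zone" idea, and assuming each characteristic is given as a sorted list of active (start, end) intervals (an assumption, not something the paper mandates), the sketch below intersects two interval lists to find the zones where both characteristics are active at the same time.

    # Minimal sketch, not the EPAC tooling: zones where two characteristics
    # are simultaneously active, each given as a sorted list of (start, end)
    # intervals in seconds.
    def activity_zones(intervals_a, intervals_b):
        zones, i, j = [], 0, 0
        while i < len(intervals_a) and j < len(intervals_b):
            s = max(intervals_a[i][0], intervals_b[j][0])
            e = min(intervals_a[i][1], intervals_b[j][1])
            if s < e:                      # the two intervals overlap
                zones.append((s, e))
            # advance whichever interval ends first
            if intervals_a[i][1] < intervals_b[j][1]:
                i += 1
            else:
                j += 1
        return zones

    # Hypothetical segmentations: one speaker's turns vs. another's.
    speech_a = [(0, 12), (20, 45)]
    speech_b = [(10, 25), (40, 60)]
    print(activity_zones(speech_a, speech_b))  # -> [(10, 12), (20, 25), (40, 45)]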

Benjamin Bigot, Isabelle Ferrane, and Zein Al Abidin Ibrahim. Caractérisation des Zones d'Interactivité entre Locuteurs: vers la Détection de Zones de Parole Conversationnelle pour le Projet EPAC. Journées d'Etudes sur la Parole (JEP), 2008.

Our work focuses on the detection and the characterization of conversational speech zones in audio documents. We want to provide enriched annotations of large sets of audio data to people working on tools for conversational speech processing. To address this issue, we adopt a data mining point of view and use a method based on temporal relation analysis. This method enables the detection of zones in which two characteristics are both active, and to better characterize them we propose to compute a set of additional descriptors. These descriptors bring to the fore interesting information about speaker profiles and give indications of their potential role in audio documents. This method is applied in the scope of the EPAC project.

Zein Al Abidin Ibrahim, Isabelle Ferrane, and Philippe Joly. Audio Data Analysis Using Parametric Representation of Temporal Relations. IEEE International Conference on Information and Communication Technologies: from Theory to Applications (ICTTA), 2006.

The aim of our work is the automatic analysis of audiovisual documents to retrieve their structure by studying the temporal relations between the events occurring in each of them. Different elementary segmentations of the same document are necessary. Then, starting from a parametric representation of temporal relations, a Temporal Relation Matrix (TRM) is built. In order to analyze its content, a classification step is carried out to identify relevant relation classes or to observe predefined relations such as Allen's relations. The use of segmentation tools raises the problem of the reliability of segmentation results. Through a first experiment, we analyze the effect of segmentation errors on the study of temporal relations. Then, as a second experiment, we apply our parametric method to a TV game document in order to analyze audio events and to see if the observations made can reveal information about the document's content or its structure.

Zein Al Abidin Ibrahim, Isabelle Ferrane, and Philippe Joly. Conversation Detection in Audiovisual Documents: Temporal Relation Analysis and Error Handling. International Conference on Information Processing and Management of Uncertainty in Knowledge-based Systems (IPMU), 2006.

The aim of our work is to study temporal relations between events that occur in the same audiovisual document. Our underlying purpose is to propose a method to reveal more significant events about the document content, such as conversation sequences. We start from elementary features such as those produced by basic segmentation tools. Based on the new parametric representation we propose, we build a matrix called the TRM that represents all the temporal relations detected between a pair of segmentation results. To analyze the TRM content, a classification step is necessary. To go further with TRM analysis, we extend our parametric representation to observe conjunctions of temporal relations. Allen's temporal relations are used to illustrate our aim. We then address the question of the lack of reliability of segmentation results and its effect on our analysis step. Finally, focusing on the processing of audio segmentations, we apply our method to a French TV program to see what kind of high-level information can be observed between speakers.

Zein Al Abidin Ibrahim, Isabelle Ferrane, and Philippe Joly. Représentation Paramétrique des Relations Temporelles Appliquée à l'Analyse de Données Audio pour la mise en Evidence de Zones de Parole Conversationnelle. Journées d'Etudes sur la Parole (JEP), 2006.

The general aim of our work is the automatic analysis of audiovisual documents to characterize their structure by studying the temporal relations between the events occurring in them. For this purpose, we have proposed a parametric representation of temporal relations. From this representation, a TRM (Temporal Relation Matrix) can be computed and analyzed to identify relevant relation classes. In this paper, we apply our method to audio data, mainly to speaker and applause segmentations from a TV game program. Our purpose is to analyze these basic audio events to see if the observations automatically highlighted could reveal higher-level information such as speaker exchanges or conversations, which may be relevant in a structuring or indexing process.

Zein Al Abidin Ibrahim, Isabelle Ferrane, and Philippe Joly. Temporal Relation Analysis in Audiovisual Documents for Complementary Descriptive Information. Third International Workshop on Adaptive Multimedia Retrieval (AMR), 2005.

Relations among temporal intervals can be used to detect semantic events in audiovisual documents. The aim of our work is to study all the relations that can be observed between different segmentations of the same document. These segmentations are automatically provided by a set of tools, each of which determines temporal units according to specific low- or mid-level features. All this work is achieved without any prior information about the document type (sport, news, ...), its structure, or the type of the semantic events we are looking for. Considering binary temporal relations between each pair of segmentations, a parametric representation is proposed. Using this representation, observations are made about temporal relation frequencies. When relevant segmentations are used, some semantic events can be inferred from these observations. If they are not relevant enough, or if we are looking for semantic events of a higher level, conjunctions between two temporal relations can turn out to be more effective. In order to illustrate how observations can be made in the parametric representation, an example is given using Allen's relations. Finally, we present the first results of an experimental phase carried out on TV news programs.

Zein Al Abidin Ibrahim, Isabelle Ferrane, and Philippe Joly. Temporal Relation Mining between Events in Audiovisual Documents. Fourth International Workshop on Content-Based Multimedia Indexing (CBMI), 2005.

The aim of our work is to detect semantic segments in multimedia documents, and especially in videos. To do so, we first need to obtain different segmentations of the document and then to study all the relations that can be observed between those segmentations. In our case, no prior information is available about the document type (sport, news, ...), the structure, or the type of the semantic segments we are looking for. We use segmentation processes that automatically provide temporal units according to specific low- or mid-level features. Then, for each pair of segmentations, an analysis is made by taking into account the temporal relations between segments and the frequency or rarity of each relation. In order to give a semantic interpretation to these observations, we propose a new representation for temporal relations, applying it, as an example, to Allen's relations. After mentioning the segmentation error problem, we comment on the first results of an experimental phase carried out on TV news programs and TV games.

Zein Al Abidin Ibrahim, Isabelle Ferrane, and Philippe Joly. Exploitation des Relations Temporelles entre Evènements présents dans les Documents Audiovisuels. Atelier Connaissances et Documents Temporels (Plate-forme AFIA), 2005.

The goal of our work is to characterize intentional structures in multimedia documents, and particularly in videos. To do so, we must first obtain different segmentations of the document and then study all the relations that can be observed between these segmentations. In our case, we have no prior information about the video type (sport, news, ...), its structure, or the type of structure to look for. Our work is based on different segmentation tools that automatically provide temporal units according to specific, essentially low-level, features. From the segmentations obtained, considered in pairs, we analyze the temporal relations that may exist between the different segments and study how frequently these relations occur. In order to give a semantic interpretation to these observations, we propose a new representation of temporal relations, which we apply, as an example, to Allen's relations. After mentioning the problem of segmentation errors, we present the first results obtained from an initial experimental phase carried out on TV programs, notably TV news.

Languages
Arabic

Native or bilingual proficiency

English

Professional working proficiency

French

Professional working proficiency