Dr. Mohamed El-Helw is the director of the Centre for Informatics Science (CIS). He joined Nile University as an Assistant Professor in 2008, where he has led the Ubiquitous and Visual Computing Group (UbiComp) at CIS. Before moving to NU, Dr. El-Helw worked as a post-doctoral researcher at the Department of Computing and the Institute of Biomedical Engineering, Imperial College London, where he carried out work on image-based modeling and rendering techniques for medical simulation, understanding visual perception, and the development of wireless body sensor networks.
His research interests focus on ubiquitous systems, computer vision, 3D computer graphics, deep neural networks, and scientific computing. He has a proven research and development track record in these areas, with more than 60 refereed publications and major research grants totaling more than EGP 10 million. Dr. El-Helw received his B.Sc. in Computer Science from the American University in Cairo, his M.Sc. in Computer Science from the University of Hull, UK, and his Ph.D. in Computer Science from Imperial College London, University of London, in 2006. He also holds a Diploma in Visual Information Processing (DIC) from Imperial College London. He is a full Professor and a Senior Member of the IEEE.
1) Mohamed El-Helw received the Cairo Innovates Award 2014 for Innovation from the Academy for Scientific Research and Innovation (ASRT).
2) Best Paper Award at the International Conference on Pervasive Computing Technologies for Healthcare, London, UK, 2009.
3) Certificate of Recognition, Microsoft Research, 2010.
4) 3rd place winner of the International AMD OpenCL Innovation Challenge Competition 2011.
5) Creator and leader of the Ubiquitous and Visual Computing Group (UbiComp).
Multi projection fusion for real-time semantic segmentation of 3D LiDAR point clouds
Semantic segmentation of 3D point cloud data is essential for enhanced high-level perception in autonomous platforms. Furthermore, given the increasing deployment of LiDAR sensors onboard cars and drones, special emphasis is placed on non-computationally-intensive algorithms that operate on mobile GPUs. Previous efficient state-of-the-art methods relied on 2D spherical projection of
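The 2D spherical projection such methods rely on can be sketched with a minimal NumPy illustration. The function name, image resolution, and field-of-view values below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project a LiDAR point cloud (N, 3) onto a 2D range image (h, w).

    fov_up / fov_down are the sensor's vertical field of view in degrees
    (illustrative values, roughly matching a 64-beam automotive LiDAR).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                  # range of each point
    yaw = np.arctan2(y, x)                              # horizontal angle
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    # Map angles to pixel coordinates: yaw -> column, pitch -> row.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (fov_up_r - pitch) / fov * h

    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = r                                     # store range per pixel
    return image
```

A 2D network can then segment this range image and the labels can be projected back to the original points, which is what makes such pipelines fast on mobile GPUs.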
Improved Semantic Segmentation of Low-Resolution 3D Point Clouds Using Supervised Domain Adaptation
One of the key challenges in applying deep learning to solve real-life problems is the lack of large annotated datasets. Furthermore, for a deep learning model to perform well on the test set, all samples in the training and test sets should be independent and identically distributed (i.i.d.), which means that test samples should be similar to the samples that were used to train the model. In many
Robust real-time pedestrian detection on embedded devices
Detection of pedestrians on embedded devices, such as those on board robots and drones, has many applications, including road intersection monitoring, security, crowd monitoring, and surveillance, to name a few. However, the problem can be challenging due to the continuously changing camera viewpoint and varying object appearances, as well as the need for lightweight algorithms suitable for embedded
Combined regional and spatio-temporal approach improves hepatic tumors classification in Multiphase CT
In this work, we investigate the effect of using spatio-temporal features on a regional basis on liver focal lesion classification performance in multiphase CT images. Texture, density, and temporal feature sets and their different combinations over spatially partitioned ROIs were investigated to better characterize five hepatic pathologies from multiphase contrast-enhanced CT scans
Deep convolutional neural network based autonomous drone navigation
This paper presents a novel approach for aerial drone autonomous navigation along predetermined paths using only visual input from an onboard camera, without reliance on a Global Positioning System (GPS). It is based on a deep Convolutional Neural Network (CNN) combined with a regressor that outputs the drone steering commands. Furthermore, multiple auxiliary navigation paths that form a
AutoDLCon: An Approach for Controlling the Automated Tuning for Deep Learning Networks
Neural networks have become the main building block in revolutionizing artificial-intelligence-aided applications. With the wide availability of data and the increasing capacity of computing resources, they have triggered a new era of state-of-the-art results in diverse directions. However, building neural network models is domain-specific, and figuring out the best architecture and hyper
D-SmartML: A distributed automated machine learning framework
Nowadays, machine learning is playing a crucial role in harnessing the value of the massive amounts of data produced every day. The process of building a high-quality machine learning model is an iterative, complex, and time-consuming process that requires solid knowledge of the various machine learning algorithms, in addition to experience with effectively tuning their hyper
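The kind of hyperparameter tuning such frameworks automate can be illustrated with a minimal exhaustive grid search. The function names and the toy objective below are hypothetical stand-ins for training and validating a real model, not part of D-SmartML itself:

```python
import itertools

def grid_search(train_eval, param_grid):
    """Evaluate every hyperparameter combination in param_grid and
    return the best-scoring configuration (higher score is better)."""
    keys = list(param_grid)
    best_score, best_params = float("-inf"), None
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_eval(params)   # stand-in: train a model, return validation score
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy objective standing in for "train a model and report validation accuracy";
# it peaks at lr=0.1, depth=3.
toy = lambda p: -(p["lr"] - 0.1) ** 2 - (p["depth"] - 3) ** 2
best, score = grid_search(toy, {"lr": [0.01, 0.1, 1.0], "depth": [1, 3, 5]})
# best == {"lr": 0.1, "depth": 3}
```

Distributed AutoML frameworks replace this serial loop with parallel evaluation of configurations across a cluster and with smarter search strategies than exhaustive enumeration.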
Motion and depth augmented semantic segmentation for autonomous navigation
Motion and depth provide critical information in autonomous driving, and they are commonly used for generic object detection. In this paper, we leverage them to improve semantic segmentation. Depth cues can be useful for detecting the road, as it lies below the horizon line. There is also a strong structural similarity across different instances of objects such as buildings and trees. Motion
Depth Augmented Semantic Segmentation Networks for Automated Driving
In this paper, we explore the augmentation of depth maps to improve the performance of semantic segmentation, motivated by the geometric structure in automotive scenes. Typically, depth is already computed in an automotive system for object localization and path planning, and thus can be leveraged for semantic segmentation. We construct two networks that serve as a baseline for comparison, which are “RGB
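Depth augmentation of a network input can be sketched as stacking the depth map onto the RGB image as a fourth channel. This is a minimal NumPy illustration; the function name and the normalization choice are assumptions for the sketch, not the paper's exact pipeline:

```python
import numpy as np

def rgbd_input(rgb, depth):
    """Stack a depth map as a fourth channel onto an RGB image,
    producing the RGB-D tensor a depth-augmented network would consume.

    rgb:   (H, W, 3) float array in [0, 1]
    depth: (H, W) depth map, e.g. in metres
    """
    d = depth.astype(np.float32)
    # Min-max normalize depth to [0, 1] so all channels share a scale.
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-8)
    return np.concatenate([rgb, d[..., None]], axis=-1)   # (H, W, 4)
```

The only architectural change this requires in a standard segmentation network is widening the first convolution to accept four input channels instead of three.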
- Computer Vision
- Deep Learning
- EO Analytics