Scientific Program

Conference Series Ltd invites participants from across the globe to attend the 3rd International Conference and Business Expo on Wireless & Telecommunication in Munich, Germany.

Day 1:

  • Wireless and Telecommunication

Session Introduction

Sitharama Iyengar

FIU School of Computing and Information Sciences

Title: Impact of Brooks-Iyengar Distributed Sensor Network Algorithm for the Next Decade
Speaker
Biography:

S. S. Iyengar is a Distinguished Ryder Professor and Director of the School of Computing and Information Sciences at Florida International University, and the founding Director of the FIU-Discovery Lab. Iyengar is a pioneer in the fields of distributed sensor networks/sensor fusion, computational aspects of robotics, and high-performance computing. He has published over 500 research papers and has authored/co-authored/edited 20 books published by MIT Press, John Wiley & Sons, Prentice Hall, CRC Press, Springer Verlag, etc. These publications are used in major universities all over the world.

His research publications cover the design and analysis of efficient algorithms, parallel computing, sensor networks, and robotics. He is a member of the European Academy of Sciences, a Fellow of the IEEE, a Fellow of the ACM, a Fellow of the AAAS, a Fellow of the NAI, a Fellow of the Society for Design and Process Science (SDPS), and a Fellow of the Institution of Engineers (FIE). He was awarded the Distinguished Alumnus Award of the Indian Institute of Science, Bangalore, and received the IEEE Computer Society Technical Achievement Award for his contributions to sensor fusion algorithms and parallel algorithms. He is a Golden Core member of the IEEE Computer Society and has received a Lifetime Achievement Award conferred by the International Society of Agile Manufacturing (ISAM) in recognition of his illustrious career in teaching, research, and lifelong contribution to the fields of Engineering and Computer Science at the Indian Institute of Technology (BHU). Iyengar and Nulogix were recognized in the 2012 Innovation 2 Industry (i2i) Florida competition. Iyengar received a Distinguished Research Award from Xiamen University, China, for his research in sensor networks, computer vision, and image processing. Among the landmark contributions of Iyengar and his research group are the development of grid coverage for surveillance and target location in distributed sensor networks and the Brooks–Iyengar fusion algorithm.

He has also been awarded an honorary Doctorate of Science and Engineering. He serves on the advisory boards of many corporations and universities around the world. He has served on many national science boards, including the NIH National Library of Medicine in Bioinformatics, National Science Foundation review panels, NASA Space Science, the Department of Homeland Security, the Office of Naval Research, and many others. His contributions were a centerpiece of the pioneering effort to develop image analysis for science and technology and for the goals of the US Naval Research Laboratory. The impact of his research can be seen in companies and national labs such as Raytheon, Telcordia, Motorola, the United States Navy, and DARPA. His contributions were also part of a DARPA program demonstration with BBN (Cambridge, Massachusetts) and of MURI collaborations with researchers from PSU/ARL, Duke, the University of Wisconsin, UCLA, Cornell University, and LSU.

Abstract:

The Brooks–Iyengar algorithm is a seminal work and a major milestone in distributed sensing, and can be used as a fault-tolerant solution in many redundancy scenarios. It is also easy to implement and embed in any networked system. In 1996, the algorithm was used in MINIX to provide greater accuracy and precision, which led to the development of the first version of RT-Linux. In 2000, the algorithm was also central to the DARPA SensIT program's distributed tracking effort, where acoustic, seismic and motion-detection readings from multiple sensors were combined and fed into a distributed tracking system. In addition, it was used to combine heterogeneous sensor feeds in applications fielded by BBN Technologies, BAE Systems, the Penn State Applied Research Laboratory (ARL), and USC/ISI.

The Thales Group, a UK defense manufacturer, used this work in its Global Operational Analysis Laboratory. It has been applied in Raytheon programs where many systems need to extract reliable data from unreliable sensor networks, which avoids ever-increasing investment in improving sensor reliability. The research behind the algorithm also resulted in tools used by the US Navy in its maritime domain awareness software.

In education, the Brooks–Iyengar algorithm has been widely taught in classes at institutions such as the University of Wisconsin, Purdue, Georgia Tech, Clemson University, the University of Maryland, and others.

Beyond sensor networks, other fields such as time-triggered architecture, safety of cyber-physical systems, data fusion, robot convergence, high-performance computing, software/hardware reliability, and ensemble learning in artificial intelligence systems could also benefit from the Brooks–Iyengar algorithm.
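To make the fusion step concrete, the following is a minimal, simplified sketch of the interval-fusion idea behind the Brooks–Iyengar algorithm, seen from a single processing element that has already collected the other elements' interval estimates; the function name, example readings, and fault bound are illustrative assumptions, not taken from the talk.

import itertools

def brooks_iyengar_fuse(intervals, f):
    """Fuse interval estimates from n sensors, tolerating up to f faulty ones.

    intervals: list of (low, high) tuples, one per sensor.
    Returns (fused_value, fused_bounds): a weighted point estimate and the
    span of the regions where at least n - f intervals overlap.
    """
    n = len(intervals)
    # Sweep adjacent endpoint pairs to find regions covered by >= n - f intervals.
    points = sorted({p for lo, hi in intervals for p in (lo, hi)})
    regions = []  # (lo, hi, count) for sufficiently overlapped regions
    for lo, hi in zip(points, points[1:]):
        mid = (lo + hi) / 2.0
        count = sum(1 for a, b in intervals if a <= mid <= b)
        if count >= n - f:
            regions.append((lo, hi, count))
    if not regions:
        raise ValueError("no region is covered by at least n - f sensors")
    # Weighted average of region midpoints, weighted by how many sensors agree.
    weight = sum(c for _, _, c in regions)
    value = sum((lo + hi) / 2.0 * c for lo, hi, c in regions) / weight
    bounds = (regions[0][0], regions[-1][1])
    return value, bounds

# Example: five sensors reporting intervals, at most one assumed faulty.
readings = [(2.7, 6.7), (0.0, 3.2), (1.5, 4.5), (0.8, 2.8), (1.9, 3.1)]
print(brooks_iyengar_fuse(readings, f=1))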

Mario Marques da Silva

Director, Universidade Autónoma de Lisboa, Lisbon

Title: A novel Massive MIMO for 5G Systems
Speaker
Biography:

Mário Marques da Silva is an Associate Professor and the Director of the Department of Sciences and Technologies at Universidade Autónoma de Lisboa. He is also a Researcher at Instituto de Telecomunicações in Lisbon, Portugal. He received his B.Sc. in Electrical Engineering in 1992, and his M.Sc. and PhD degrees in Electrical and Computer Engineering (Telecommunications) in 1999 and 2005, respectively, both from Instituto Superior Técnico, University of Lisbon. Between 2005 and 2008 he was with NATO in Brussels (Belgium), where he managed the deployable communications of the new Air Command and Control System Program. He has been involved in multiple networking and telecommunications projects. He is the author of five books published by CRC Press and of several dozen journal and conference papers, a member of IEEE and AFCEA, and a reviewer for a number of international scientific IEEE journals and conferences. Finally, he has chaired many conference sessions and has served on the organizing committees of relevant EURASIP and IEEE conferences.

Abstract:

The evolution from 4G to 5G wireless systems is driven by the expected huge growth in user bit rates and overall system throughput. This requires a substantial increase in spectral efficiency, while maintaining or even improving power efficiency. To accomplish this, new transmission techniques are needed, the most promising being millimeter waves (mm-Waves) and massive Multiple-Input Multiple-Output (m-MIMO). Moreover, the small wavelength means small antennas, allowing small-sized transmitters and receivers with a very high number of antenna elements and, therefore, enabling m-MIMO implementations. However, these frequencies present considerable challenges both in terms of propagation (high free-space path losses, small diffraction effects and almost total absorption losses due to obstacles) and in terms of implementation difficulties in both the analog and digital domains (design, efficient amplification, signal-processing requirements for equalization and user separation, etc.), which can be particularly challenging for m-MIMO systems. We consider the use of m-MIMO combined with single-carrier with frequency-domain equalization (SC-FDE) modulations, which reduce the peak-to-average power ratio compared with other block transmission techniques (e.g., OFDM). A low-complexity iterative frequency-domain receiver based on the equal gain combining approach is proposed. This receiver does not require matrix inversions and has excellent performance, which can be very close to the matched filter bound after just a few iterations, even when the number of receive antennas is not very high.
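As a rough illustration of this kind of receiver structure (not the proposed receiver itself), the sketch below combines the receive antennas per frequency bin with phase-only, equal-gain weights and then iteratively cancels the residual interference using the previous iteration's decisions, with no matrix inversion; the QPSK decision step, normalization, and variable names are illustrative assumptions.

import numpy as np

def iterative_egc_sc_fde(Y, H, n_iter=4):
    """Toy iterative SC-FDE receiver sketch with equal-gain combining.

    Y: (n_rx, N) received frequency-domain block, one row per antenna.
    H: (n_rx, N) channel frequency responses.
    Antennas are combined per frequency bin with phase-only weights, so no
    matrix inversion is needed; the residual interference caused by the
    non-flat combined channel is cancelled iteratively.
    """
    eps = 1e-12
    w = np.conj(H) / (np.abs(H) + eps)     # equal-gain (phase-only) weights
    combined = np.sum(w * Y, axis=0)       # per-bin combined signal
    gain = np.sum(np.abs(H), axis=0)       # effective channel after combining
    s_freq = np.zeros(Y.shape[1], dtype=complex)
    for _ in range(n_iter):
        # Remove the ISI caused by gain(k) not being flat across bins.
        z = (combined - (gain - gain.mean()) * s_freq) / gain.mean()
        s_time = np.fft.ifft(z)
        # Hard QPSK decisions (assumed constellation).
        dec = (np.sign(s_time.real) + 1j * np.sign(s_time.imag)) / np.sqrt(2)
        s_freq = np.fft.fft(dec)
    return dec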

Rune Hylsberg Jacobsen

Aarhus University, Denmark

Speaker
Biography:

Rune Hylsberg Jacobsen holds an MSc (1995) in physics and chemistry and a PhD (1997) in optoelectronics from Aarhus University, Denmark. He has been an associate professor in the Electronics and Computer Engineering section at Aarhus University since 2010. His professional career also includes 15 years in the telecommunications industry, where he managed research & development products and teams. His main research interests embrace wireless networking, network security, smart grid communication, and distributed computing for the Internet of Things. He is the author of more than 50 scientific papers and has been a speaker at several conferences.

Abstract:

Internet of Things (IoT) and cloud computing technology are advancing digitization and efficiency across application domains. Sensor technology provides insights into system performance and other observables. Sensor devices range from simple electronics to highly sophisticated multi-spectral cameras capable of measuring earth-surface reflectance in different spectral bands. Such remote sensors can be carried by drones or satellites to produce streams of geospatial data. A satellite such as Sentinel-2, with revisit times of 5-10 days, delivers open data volumes that surpass gigabytes per week for a region such as Denmark. Open satellite data is provided as bulk downloads and calls for efficient processing with the aim of gaining new knowledge. In this talk, I present results from a feasibility study on the use of elastic cloud computing to efficiently provide data analytics on bulk satellite data from multi-spectral camera sensors. The performance of the processing cloud infrastructure is analyzed with the aim of determining a cost-optimal elastic cloud computing infrastructure. The infrastructure is based on Open Geospatial Consortium (OGC) standards, and OpenSensorHub is used to provide a dynamic sensor web application combining geospatial maps and time-series data. Furthermore, I present an efficient caching mechanism that minimizes the need for data downloads. The cloud infrastructure allows us to prepare knowledge gained from data and present it to stakeholders in a cost-efficient manner. Results from a case study in precision agriculture demonstrate several conceivable applications of calculated vegetation indices, such as field classification and harvesting detection.
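As an illustration of the kind of per-field analytics the case study points to, here is a minimal sketch of a vegetation-index computation (NDVI) over Sentinel-2-style red and near-infrared reflectance arrays; the function names and the simple harvest-detection heuristic are illustrative assumptions, not the talk's actual pipeline.

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    nir, red: 2-D reflectance arrays (e.g. Sentinel-2 bands B08 and B04
    resampled to a common grid).
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def mean_index_for_field(index, field_mask):
    """Average index value over one field, given a boolean field mask."""
    return float(index[field_mask].mean())

# Hypothetical heuristic: a sharp drop in a field's mean NDVI between two
# satellite revisits may indicate harvesting.
def harvested(prev_mean, curr_mean, drop_threshold=0.3):
    return prev_mean - curr_mean > drop_threshold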

Romeo Kienzler

Chief Data Scientist, IBM Watson IoT, IBM Academy of Technology, Switzerland

Title: How to capture, analyse and react on IoT generated sensor data in real time
Speaker
Biography:

Romeo Kienzler holds a Master's degree in Information Systems with specialisation in Applied Statistics and Bioinformatics from the Swiss Federal Institute of Technology. He works for IBM Watson IoT as Chief Data Scientist. His current research focus is large-scale machine learning on resilient cloud infrastructures, in addition to the application of deep learning technologies to time-series data. Romeo Kienzler is a member of the IBM Academy of Technology, the IBM Technical Expert Council and the IBM BigData BlackBelts team.

Abstract:

In this hands-on tutorial session you will learn how to connect various IoT devices (exemplified using a Raspberry Pi) to an MQTT broker, ingest data in real time into a NoSQL database (for historic data processing), train a machine learning model (e.g., a neural network) on this historic data, and then use this model to react to live data. Finally, the result is translated into an action and sent back to the IoT device.
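A minimal sketch of the final "react to live data" step, assuming a paho-mqtt client, a hypothetical broker address and topic names, and a placeholder model_predict function standing in for the model trained on the historic data:

import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"              # hypothetical broker address
SENSOR_TOPIC = "iot/raspberrypi/sensors"   # hypothetical topics
ACTION_TOPIC = "iot/raspberrypi/actions"

def model_predict(features):
    # Placeholder for a model trained on the historic NoSQL data.
    return "alert" if features.get("temperature", 0) > 80 else "ok"

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)       # live sensor reading
    action = model_predict(reading)         # score with the trained model
    client.publish(ACTION_TOPIC, json.dumps({"action": action}))

# paho-mqtt 1.x style constructor; 2.x additionally takes a callback API version.
client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(SENSOR_TOPIC)
client.loop_forever()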

Celestine Iwendi

Federal University Ikwo, Nigeria

Speaker
Biography:

Dr. Celestine Iwendi is a Senior Lecturer at Federal University Ikwo, Nigeria, a sensor researcher, and Director of WSN Consults Ltd, a technology consulting company headquartered in Aberdeen, Scotland, specializing in sensors, integrated circuits, wireless sensor network security, and module and system manufacturing. He obtained a BSc and an MSc in Electronics and Computer Engineering from Nnamdi Azikiwe University, Nigeria, an MSc in Communication Hardware and Microsystems from Uppsala University, Sweden, and a PhD in Engineering from the University of Aberdeen, Scotland. He has carried out many independent and supervised designs that apply knowledge of wireless sensor networks, signal processing and communications engineering to analyze and solve problems, at Nnamdi Azikiwe University, Awka, Nigeria, Nigerian Telecommunications (Nitel), Uppsala University, Sweden, the Norwegian University of Science and Technology, and the University of Aberdeen, Scotland. He is a member of the IEEE (Institute of Electrical and Electronics Engineers), the IEEE Communications Society, Swedish Engineers, and the Nigerian Society of Engineers. He is also an Associate at the Centre for Sustainable International Development.

Abstract:

According to the Commonwealth Telecommunications Organisation, Sub-Saharan Africa has succeeded over the last decade in bringing voice services within reach of some three quarters of the population, but the vast majority of the region is falling further behind the rest of the world in terms of the broadband connectivity that will empower the Internet of Things. There are two main reasons for this: supply is limited, and prices have been very high. Broadband is the delivery of Internet IP bandwidth and of all the content, services and applications that consume this bandwidth. The essential underpinning of broadband is therefore a high-capacity transmission backbone network capable of delivering this bandwidth. Providing an entry-level 256 Kbps broadband service to hundreds, thousands or millions of customers requires a backbone transmission network with sufficient capacity to do so, and each time an operator increases its broadband service from 256 Kbps to 512 Kbps, 2 Mbps, or even 100 Mbps, this in turn escalates the capacity requirements of the transmission backbone network. Broadband is not just a consequence of economic growth; it is also a cause. This is the problem currently facing most Internet service providers in Africa.
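To make that scaling concrete, a back-of-the-envelope sketch of how backbone capacity grows with the access tiers mentioned above; the 20:1 contention ratio and the 100,000-subscriber figure are illustrative assumptions, not taken from the talk.

def backbone_gbps(subscribers, access_rate_kbps, contention=20):
    """Approximate backbone capacity (Gbps) needed for a given access tier,
    assuming a hypothetical contention (oversubscription) ratio."""
    return subscribers * access_rate_kbps / contention / 1e6

for rate_kbps in (256, 512, 2_000, 100_000):   # tiers mentioned in the talk
    print(rate_kbps, "Kbps ->", backbone_gbps(100_000, rate_kbps), "Gbps")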

Many people in Africa still rely only on the landline Internet option, which is a "painfully slow" dial-up service. Cell phone service is erratic because of the thick trees. African consumers need satellite Internet to do their online banking, emailing, bill paying, social media (for example, Facebook), and general Internet surfing. Many would also like to watch television shows online and occasionally download files for research, irrespective of where they are in the country.

Zuriati Ahmad Zukarnain

Universiti Putra Malaysia

Speaker
Biography:

Zuriati Ahmad Zukarnain is an Associate Professor at the Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, and head of the high-performance computing section at the Institute for Mathematical Research (INSPEM), Universiti Putra Malaysia. She received her PhD from the University of Bradford, UK. Her research interests include efficient multiparty QKD protocols for classical networks and the cloud, load balancing in wireless ad hoc networks, quantum processor units for quantum computers, authentication time of IEEE 802.15.4 with a multiple-key protocol, intra-domain mobility handling schemes for wireless networks, efficiency and fairness of new AIMD algorithms, and a kernel model to improve computation speedup and workload performance. She has published more than 100 papers in reputed journals and has been actively involved as a member of the editorial boards of several international peer-reviewed and cited journals. She is currently undertaking nationally funded projects on QKD protocols for cloud environments as well as routing and load balancing in wireless ad hoc networks.

Abstract:

For decades now, researchers have been trying to figure out how to use the enormous potential of quantum mechanics to build a whole new generation of computers. According to Microsoft's research lab, we could crack the quantum computing code within the next 10 years. Both Google and Microsoft are heavily invested in the idea of quantum computers, and they believe this quantum technology will be in vast demand in the near future. Quantum cryptography sets its goal towards the holy grail of absolute security in the cryptography domain. However, the lack of efficient simulation and performance analysis tools may delay achieving that goal. At the same time, real quantum experiments and quantum communication require expensive components and are inefficient. Hence, a powerful and easy-to-use performance analysis tool will benefit all researchers in the area of quantum communication.

We are proposing an efficient simulation tool called the Quantum Communication Simulator. The Quantum Communication Simulator is a tailored simulation tool for quantum communication experiments that may benefit theorists, experimentalists, hardware developers and end users. It should be both performance oriented and cost oriented. The Quantum Communication Simulator is based on drag-and-drop interfaces with complementary functions such as budget estimation, cost planning and online collaboration, together with inbuilt models that align with quantum communication experiments. Cost estimation is included to assist the budgeting and decision-making process. Modelling, performance analysis and testing of real quantum experiments are expensive due to the nature of optical components; to overcome this problem, a Quantum Communication Simulator is needed to model and simulate real quantum experiments and quantum communication. The motivation for this particular Quantum Communication Simulator is the culmination of the lead researchers' fruitful work in digital security, high-speed networks and quantum computation.
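As a toy example of the kind of experiment such a simulator would model, the sketch below simulates BB84 key sifting and estimates the quantum bit error rate with and without an intercept-resend eavesdropper; the function name, qubit count, and attack model are illustrative assumptions, not features of the proposed tool.

import random

def bb84_sift(n_qubits=1000, eavesdrop=False):
    """Toy BB84 simulation: Alice sends random bits in random bases, Bob
    measures in random bases, and they keep only the positions where the
    bases match (key sifting). Returns the estimated QBER."""
    alice_bits  = [random.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [random.randint(0, 1) for _ in range(n_qubits)]
    bob_bases   = [random.randint(0, 1) for _ in range(n_qubits)]
    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:  # intercept-resend attack in a random basis
            e_basis = random.randint(0, 1)
            bit = bit if e_basis == a_basis else random.randint(0, 1)
            a_basis = e_basis
        # Bob sees the sent bit if his basis matches, otherwise a random outcome.
        bob_bits.append(bit if b_basis == a_basis else random.randint(0, 1))
    sifted_a = [b for b, x, y in zip(alice_bits, alice_bases, bob_bases) if x == y]
    sifted_b = [b for b, x, y in zip(bob_bits,  alice_bases, bob_bases) if x == y]
    return sum(a != b for a, b in zip(sifted_a, sifted_b)) / len(sifted_a)

# Roughly 0 without an eavesdropper, roughly 0.25 with intercept-resend.
print(bb84_sift(), bb84_sift(eavesdrop=True))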