Invited Speakers 2016

Prof. Amnon Shashua
Hebrew University
Co-founder, CTO and Chairman, Mobileye
Co-founder, CTO and Chairman, OrCam

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. For his academic achievements he received the Marr Prize Honorable Mention in 2001, the Kaye Innovation Award in 2004, and the Landau Award in exact sciences in 2005.

In 1999 Prof. Shashua co-founded Mobileye, an Israeli company developing a system-on-chip and computer vision algorithms for a driving assistance system, providing a full range of active safety features using a single camera. Today, approximately 10 million cars from 23 automobile manufacturers rely on Mobileye technology to make their vehicles safer to drive. In August 2014, Mobileye claimed the title of the largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In addition, Mobileye is developing autonomous driving technology with more than a dozen car manufacturers. An early version of Mobileye's autonomous driving technology was deployed in series as an "autopilot" feature in October 2015, and will evolve to support more autonomous features in 2016 and beyond. The introduction of autonomous driving capabilities is of a transformative nature and has the potential to change the way cars are built, driven and owned in the future.

In 2010 Prof. Shashua co-founded OrCam with a mission to harness the power of artificial vision to assist people who are visually impaired or blind. Based on advanced machine perception and artificial intelligence capabilities, the OrCam device is unique in its ability to provide visual aid to hundreds of millions of people, through a discreet wearable platform. Within its wide-ranging scope of capabilities, OrCam’s device can read most texts (both indoors and outdoors) and learn to recognize thousands of new items and faces.

Wearable AI: What if Our Digital Assistants had Eyes and Ears?

Imagine that our digital personal assistant had eyes and ears watching and listening all day - such a capability would take our real life experiences to a new level. I will describe an activity at OrCam to build a wearable device that is able to run the most advanced deep network technologies for image categorization, face recognition, product and bar-code search and voice-to-text analysis - all running in a continuous manner on a single charge throughout the day.
Prof. Gerard G. Medioni
Computer Science and Electrical Engineering Departments
University of Southern California
Los Angeles, CA

Professor Gérard Medioni received the Diplôme d'Ingénieur from ENST, Paris, in 1977, and a Ph.D. from USC in 1983. He is currently at Amazon, on leave from his position of Professor of CS at USC. He served as Chairman of the Computer Science Department from 2001 to 2007. He has published 4 books, many articles, and is the recipient of 18 patents.

Prof. Medioni is on the editorial board of several journals, and served as Chair of many conferences (CVPR, ICCV, WACV, ICPR).

He is a Fellow of IAPR, a Fellow of the IEEE, and a Fellow of AAAI.

Computer Vision Now and Then: A Personal Journey

Computer Vision is a fairly recent discipline, rooted in academic research, now emerging as a red hot field. A wealth of commercial applications is being pursued by both startups and large corporations. I have the privilege of straddling both the academic and industry fields, as a professor at USC, now on leave at Amazon, and have acted as a consultant to a number of companies: OptiCopy, Geometrix, Poseidon, DXO, Bigstage, PrimeSense. I will review some of the technology developed along the way, and the lessons learned, often the hard way.

 Prof. David Frakes

Technical Project Lead of Mobile Imaging

Google ATAP

Associate Professor

Arizona State University

USA

David H. Frakes received the B.S. and M.S. degrees in electrical engineering, the M.S. degree in mechanical engineering, and the Ph.D. degree in bioengineering, all from the Georgia Institute of Technology. In 2008 he joined the faculty at Arizona State University (ASU), where he is the Fulton Entrepreneurial Professor and a jointly appointed associate professor in the School of Biological and Health Systems Engineering and the School of Electrical, Computer, and Energy Engineering. He received the 2009 ASU Centennial Professor of the Year Award, the 2012 National Science Foundation CAREER Award, the 2014 Innovator of the Year Award in the state of Arizona, and the 2014 World Technology Network Award in Health and Medicine. His general research interests include computer vision, medical devices, and fluid dynamics. Dr. Frakes' work is currently funded by Google, the National Institutes of Health, the National Science Foundation, the U.S. Department of Energy, and Mayo Clinic, among others. Dr. Frakes is also currently serving as a Program Leader in the Advanced Technologies and Projects group at Google.

Computer Vision from Academia to ATAP

Computer Vision is a long-standing field that many researchers have focused on for decades. Among the plethora of tasks fundamental to computer vision is segmentation, or the partitioning of an image into meaningful parts. This talk will present segmentation work from the Image Processing Applications Laboratory at Arizona State University focused in the biomedical field. Specific applications to be highlighted will include segmentation as a precursor to surgical planning for congenital heart defect repair and endovascular intervention. The basis for this work and the underlying approach taken will then be examined to compare and contrast the very different innovation models pursued in academia and industry. Particular emphasis will be given to the innovation model employed at the Google Advanced Technologies and Projects (ATAP) group. That model will be exemplified through presentation of current and emerging projects being pursued at Google ATAP.
Prof. Eyal Shimoni
CTO & VP Technology
Strauss Group
Israel
Prof. Eyal Shimoni is one of Israel's top food engineering and technology experts, with vast academic and industrial experience. He has served as the CTO of Strauss Group since 2010. He is in charge of the technological strategy for the group, as well as identifying, evaluating, and developing innovative technologies with Alpha-Strauss, the FoodTech community. Currently he is also a board member of the FoodTech incubator newly established in Israel by Strauss. He is a world-renowned researcher in food science and technology, with a global network in these fields. To date he has published over 70 peer-reviewed scientific papers, book chapters and patents. Eyal is a proud Technion graduate (BSc, MSc, DSc), with a postdoc at the University of Minnesota. He was a research professor in the Department of Biotechnology and Food Engineering at the Technion from 2000 to 2013, performing many studies and projects with industrial partners, as well as basic research, mostly in food biophysics and food nanoscience. Over the years he has consulted for numerous companies and led various tech transfer projects.

FoodTech, New Frontier for Technology

Today, more than ever, advanced technology plays a key role in the food and beverage industry. This is mainly evident in the impact on cutting production costs, while creating consumer value.
Up until this decade, food companies worked primarily to increase their volume of activity in response to constant population growth and the resulting increase in food consumption. In recent years, we have seen growing awareness of health and well-being; stricter regulation in the developed world; diminishing natural resources and raw materials; and awareness of sustainability. In addition, there is growing demand for products that possess considerable value and adapt to current trends in quality of life. 

In light of these trends, a viable solution over time can only be found with the use of innovative and groundbreaking technologies in all technological dimensions of the value chain. This need is a fertile ground for technological innovation in the food industry. Solutions enable us to increase the value for consumers through the development/identification of smart ingredients, increasing product freshness and bringing customers products that are as close to their natural form as possible, while improving their nutritional value. FOODTECH pertains to those technologies that have food-related applications through the entire value chain – from growing agricultural raw materials through various processing stages to packaging. Companies that employ such new and advanced technologies will be able to manufacture improved-value products for their consumers.

At Strauss Group we realized that in order to compete successfully in the global and domestic markets we must stand at the forefront of food technology. To this end, we started the Alpha venture in recent years, which aims to promote and create a complete ecosystem in technologies relevant to the food industry. The venture was established with the understanding that a large industrial entity was needed in order to link the numerous research institutes, researchers, inventors and entrepreneurs to the market; help them understand consumer trends and challenges of the industry; and enable them to use its assets (laboratories, technologists, production lines, etc.) as a test site for new technologies before they are turned into products. The venture also engages in the connection between the "technology manufacturers" and venture capital funds, market service providers, government representatives, our strategic partners and more. This reflects the understanding that the creation of a FOODTECH community in Israel can only occur if all players in the ecosystem take part in it. We have recently added another tool to the ecosystem: "The Kitchen" FOODTECH incubator, led and supported by Strauss Group, including the Alpha Strauss venture. Our work will create a suitable ecosystem for developing food technologies and groundbreaking food production technologies that have relevance for the entire world. Our vision is to create here in Israel the "Silicon Valley of food technologies".

 Dr. Gershom Kutliroff

Principal Engineer

Intel (Perceptual Computing)

Israel

Over the last 15 years, Gershom Kutliroff has held several positions leading R&D efforts in the field of computer vision. Most recently, he was the CTO and co-founder of Omek Interactive, which developed hand tracking and gesture control technology and was acquired by Intel in 2013. Today, he is a Principal Engineer at Intel, responsible for the technology roadmap of the PerC software group, and is an inventor on over 25 patents and patents pending. He earned his Ph.D. and M.Sc. in Applied Mathematics from Brown University, and his B.Sc. in Applied Mathematics from Columbia University. Gershom is married with five children, but he still hopes to one day hike the Appalachian Trail.

On the Way to Visual Understanding…

In the past several years, the field of computer vision has chalked up significant achievements, fueled by new algorithms (such as deep neural networks), new hardware (such as consumer 3D cameras) and newly available processing power (such as GPUs). When we consider the problems that tomorrow's household robots and autonomous vehicles will have to solve, however, there is evidently still a way to go. In this talk, I will discuss current work within Intel's Perceptual Computing group on a scene understanding pipeline, the aim of which is to enable a far more comprehensive understanding of an environment than existing techniques currently provide. Key elements of the pipeline are a 3D reconstruction of the scene geometry, followed by segmentation of likely candidates, classification, and finally 3D registration to align models to the scene data. The overall effect is to move from a pixel-based reconstruction of the scene to one that integrates semantic understanding into the capture process.
 Shaul Gelman

President, Founder and VP of R&D

RealView Imaging 

Israel


Mr. Shaul Gelman is a highly skilled R&D and business executive, with vast hands-on experience and multiple inventions in the field of multidisciplinary electro-optical and display technologies. Before founding RealView Imaging, Mr. Gelman worked for Elbit Systems (NASDAQ: ESLT), one of Israel’s largest defense companies, leading large scale R&D programs in the field of high-performance Augmented Reality Head Mounted Display (HMD) systems for aviation/pilot applications, actively used by leading Air Forces around the world.

Over the years, Shaul has gained substantial system engineering and leadership skills, working on cutting-edge projects and leading multiple complex programs and operations in the fields of interference-based Live Holography and See-Through Augmented Reality. Mr. Gelman earned his Executive MBA (cum laude) from the University of Haifa, a B.Sc. (cum laude) in Industrial Engineering (IT) & Management from the Technion, Israel Institute of Technology, and is a graduate of the Merage Executive Program in Irvine, California.

Live Interactive Holography - From Medical Imaging to Augmented Reality

A novel technology of true, wide-viewing-angle, multi-depth-plane and "touchable" digital holography will be presented. The technology is developed by RealView Imaging Ltd., which is introducing the world's first interference-based 3D holographic display and interface systems, initially for medical imaging applications. RealView's proprietary technology projects hyper-realistic, dynamic 3D holographic images "floating in the air", with configurations and applications ranging from medical imaging to Holographic Augmented Reality. The projected 3D volumes appear in free space, allowing the user to literally touch and interact precisely within the image while naturally focusing on the image and on the user's hand simultaneously, as well as projecting far and close objects in parallel in multiple depth planes - a unique and proprietary breakthrough in digital holography and real-time 3D interaction capabilities. For more information please visit www.realviewimaging.com. An overview of the company, technology, capabilities and recent accomplishments will be presented.
 Dr. Leonid Karlinsky

Research Staff Member

Cognitive Vision and Interaction (CVI) group

IBM Research

Dr. Karlinsky leads the AR research in the Computer Vision and Interaction (CVI) group at IBM Research. He is a computer vision and machine learning expert with years of hands-on experience. He has published research papers in leading CV and ML venues such as ECCV, CVPR and NIPS, and has been actively reviewing for these conferences for the past 7 years. Dr. Karlinsky holds a PhD degree in CV from the Weizmann Institute of Science, supervised by Prof. Shimon Ullman.

Practical 3D Recognition for Augmented Reality

In this talk we describe a practical approach to object recognition for Augmented Reality (AR). Using a single input image, our recognition engine is capable of accurate localization and recognition of 2D objects (e.g., retail products on shelves), as well as the 3D pose of objects and environments modeled by large-scale 3D point clouds. By combining our recognition engine with video input and additional CV algorithms, we have built robust and scalable AR applications. We demonstrate our engine capabilities on various examples from the retail and industrial domains, and welcome the audience to view a live demo at our booth.
 Dr. Tali Treibitz

Head of the Marine Imaging Lab

School of Marine Sciences

University of Haifa


Tali Treibitz heads the Marine Imaging Lab in the School of Marine Sciences at the University of Haifa. She received the BA degree in computer science and the PhD degree in electrical engineering from the Technion-Israel Institute of Technology in 2001 and 2010, respectively.

Between 2010 and 2013 she was a post-doctoral researcher in the Department of Computer Science and Engineering at the University of California, San Diego, and in the Marine Physical Lab at Scripps Institution of Oceanography.

How Can Computer Vision Advance Ocean Exploration?

The ocean covers 70% of the Earth's surface and influences almost every aspect of our lives. It is a complex, foreign environment that is hard to explore, and therefore much about it is still unknown. As human access to most of the ocean is very limited, novel imaging systems and computer vision methods are needed to reveal new information about the ocean. However, the ocean poses numerous challenges such as handling optics through a medium, movement, limited resources, communications, power management, and autonomous decisions, while operating in a large-scale environment. In the talk I will give an overview of current efforts and challenges in this field.
 Erez Nur

Technical Manager

Omek Consortium

Israel

Presentation of Omek Consortium
  • Short presentation of the MAGNET program that we work under
  • Short presentation of Omek members
  • Omek Vision
    • Why 3D point cloud research
    • To what Needs
  • Omek fields of research
    • Point cloud registration
    • 3D modeling
    • 3D classification
  • Conclusion
 Prof. Guy Gilboa

Assistant Professor

Electrical Engineering Department

Technion

Israel

Guy Gilboa has been an Assistant Professor at the Department of Electrical Engineering, Technion, since 2013. His research is on image processing based on variational methods. He was previously at Microsoft and Philips Healthcare, conducting research related to depth cameras and CT imaging. He received his PhD from the Technion and held a postdoctoral position at the UCLA Math Department. He is a member of the editorial boards of JMIV, SPL and JVCIR.
Estimating Dense LiDAR Measurements by Depth and Color Fusion

Depth sensors, such as ones based on LiDAR technology, are highly informative for robust navigation and obstacle avoidance of autonomous vehicles. However, high quality, high resolution systems are costly and power consuming. We explore the potential of significantly improving LiDAR data through side information given by RGB cameras. The proposed method is based on advanced regularization formulations coupled with super-pixel techniques. This research is still in progress; preliminary experiments show very promising results.
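
The abstract does not detail the regularization used; as a rough illustration of the general idea of densifying sparse LiDAR depth with RGB side information, here is a minimal weighted-least-squares sketch. The function name, parameters and affinity weights are assumptions for illustration, not the authors' method.

```python
# Minimal sketch (assumption): densify sparse LiDAR depth with an RGB-guided
# smoothness regularizer, solved as a sparse linear system. This illustrates a
# generic "regularization + color side information" formulation, not
# necessarily the method described in the talk.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def densify_depth(sparse_depth, valid_mask, rgb, lam=10.0, sigma=0.1):
    """sparse_depth: HxW depth, valid only where valid_mask is True.
    rgb: HxWx3 float image in [0, 1] used to weight the smoothness term."""
    h, w = sparse_depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)

    rows, cols, weights = [], [], []
    # 4-connected neighbors; the weight drops where the color edge is strong.
    for dy, dx in [(0, 1), (1, 0)]:
        a = idx[: h - dy, : w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        diff = rgb[: h - dy, : w - dx] - rgb[dy:, dx:]
        wgt = np.exp(-np.sum(diff ** 2, axis=2).ravel() / (2 * sigma ** 2))
        rows.append(a); cols.append(b); weights.append(wgt)

    W = sp.coo_matrix((np.concatenate(weights),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n))
    W = W + W.T                                          # symmetric affinity
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W  # graph Laplacian

    m = valid_mask.ravel().astype(float)                 # data-term mask
    A = sp.diags(m) + lam * L
    b = m * sparse_depth.ravel()
    return spsolve(A.tocsr(), b).reshape(h, w)
```

Here `lam` trades off fidelity to the sparse measurements against color-guided smoothness; a super-pixel variant would define the smoothness graph over segments rather than pixels.
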
 Daniël Van Nieuwenhove

Chief Technical Officer & Co-Founder

SoftKinetic

Daniël co-founded SoftKinetic Sensors and holds an engineering degree in electronics from the VUB (Free University of Brussels). He is an inventor on multiple patents and the author of several scientific papers. In 2009, he obtained a PhD degree on the subject of CMOS circuits and devices for 3D time-of-flight imagers. Since then he has been focusing on making effective 3D sensing solutions a reality in many applications. Since 2011 he has been the CTO of the SoftKinetic group and has overseen its technology and engineering roadmaps. SoftKinetic was acquired in 2015 by Sony Corporation.

3D Time-of-Flight Sensing: Challenges and Solutions

For ages engineers have been trying to enable depth sensing for machines. To date, none of the 3D sensor technologies fully meet the combined requirements on power, size, robustness, resolution and cost. 

In this presentation we will illustrate the needs of 3D sensing solutions and position the different 3D technologies. We will further zoom in on 3D Time-of-Flight and present an outlook of where this technology will go in the near to not-so-near future. 

The content of the talk will be illustrated based on use cases and examples from the field.

 Orna Bregman Amitai

Research Leader

Zebra Medical Vision


Orna Bregman Amitai is a Research Leader of medical imaging informatics at Zebra Medical Vision Ltd. She develops automated analysis of medical imaging for diagnostics and population health, enabling clinical decision enhancement. With more than 15 years of experience leading research teams and projects in computer vision, machine learning and image processing, she initiated the medical imaging lab at the Samsung R&D center in Israel.

Orna is an inventor of more than 17 patents covering algorithms and applications, as well as inventions in the medical domain. Orna graduated in Physics and Math from Tel Aviv University, and is an open water swimming enthusiast.

From Idea to Patient Care in 8 Months: DEXA T-Score Prediction Based on CT

Osteoporosis is a leading cause of fractures in the elderly population in the USA. DEXA is currently the diagnostic gold standard for assessing bone mineral density. Our target was to develop an automatic algorithm for osteoporosis screening based on CT scans.

Using Zebra's research platform, a qualitative dataset of >2,500 CTs with known DEXA scores was allocated. An innovative algorithm was developed, emulating DEXA results using CT. The entire process, including the FDA process, took less than 8 months from research initiation to commercialization on Zebra's Image Analytics platform. The Analytics platform reaches over 1,100 hospitals.

Using traditional methods, research at this scale would have required a tedious process of data collection and significant costs.

Zebra invites researchers to join the Zebra research community.

 Aviv Hod

Content Manager

Samsung Israel


Aviv Hod has been around the media market for over 20 years now.

He started in radio and TV, continued to online and print, and for the last 11 years has worked in content management.

After 5.5 years managing HOT VOD, he came to Samsung Israel, where he manages content for Samsung smart devices (Smart TVs, smartphones, tablets, smartwatches and Gear VR).
Always looking for more interesting ideas and interesting people to work with, Aviv has never said no to a meeting request.

Gear VR

VR has been around since the '80s, in Dizengoff Center.

It has evolved, mainly due to Oculus Rift's efforts over the past few years.
The connection between Oculus (now a Facebook company) and Samsung Electronics, using the smartphone as screen and sensors, allowed the birth of Gear VR and brought this amazing experience to the masses.

Now everyone in the video, gaming and other industries is trying to find the best fit and scenario for their business.

In my short lecture, I will briefly explain the options and try to show how VR will affect your life. Very soon.

 Prof. Amir Amedi

Department of Medical Neurobiology

Hebrew University and

ELSC Brain Center

Amir is an internationally acclaimed brain scientist with 15 years of experience in the field of brain plasticity and multisensory integration. He has a particular interest in visual rehabilitation.
 
Amir is an Associate Professor at the Department of Medical Neurobiology at the Hebrew University and the ELSC brain center. He is also an Adjunct Research Professor at Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision. He holds a PhD in Computational Neuroscience (ICNC, Hebrew University) and completed a postdoctoral fellowship and an instructorship in Neurology at Harvard Medical School.
 
Amir has won several international awards and fellowships, such as the Krill Prize for Excellence in Scientific Research of the Wolf Foundation (2011), the international Human Frontier Science Program Organization postdoctoral fellowship and later a Career Development Award (2004, 2009), and the JSMF Scholar Award in Understanding Human Cognition (2011), and was recently selected as a European Research Council (ERC) fellow (2013).
New Frontiers in Sensory Substitution and Sensory Recovery: Practical Musical Visual Rehabilitation and Underlying Brain Specialization & Connectivity

Sensory substitution devices (SSDs) transfer information from one sense into another. For example, they can give the blind access to visual information by translating it into auditory cues. Over the past years we have developed several such devices, including the EyeMusic, on which my talk will focus, which transfers the visual information of location, shape and color from an entire scene using musical cues.
We then utilized the EyeMusic both for practical visual rehabilitation in the blind and for researching a series of neurobiological questions which the brains of blind users offer us a unique opportunity to explore.

Many 'visual' brain regions respond preferentially to certain categories. For example, the Visual Word Form Area (VWFA) shows selective preference for letter shapes and the Visual Number Form Area (VNFA) for number shapes. But what happens in congenital blindness? How do shape biases contribute to their formation? How are there consistent regions dedicated specifically to letters and numbers if these concepts are only 10,000 years old?
 
We used congenital blindness as a model for brain development without visual experience.
During fMRI, we presented blind subjects with shapes encoded via the EyeMusic. We find greater activation in the rITG when subjects process symbols as numbers compared with control tasks on the same symbols.

Using resting-state fMRI we further show that the numeral, letter and body-image areas (rITG, VWFA, EBA) exhibit distinct patterns of functional connectivity with quantity-processing areas, language-processing areas, and the body and mirror system, respectively.

 Erez Natan

Imaging & Computer Vision Team Leader

Imaging & Video BU

CEVA


Erez Natan is CEVA's Deep Learning team manager, leading the development of Deep Neural Network libraries at CEVA. Erez has more than 10 years' experience in developing video and imaging algorithms and applications such as MP4, H.264, Real Video, ISP, image compression and noise reduction. He represents CEVA in the Khronos OpenVX group, which is developing a new computer vision standard with a dedicated language for embedded platforms.

Erez holds a B.Sc. in electrical engineering from Ben-Gurion University.

Challenges of Deep Neural Networks in Embedded and Real-Time Systems

In the last four years, Deep Neural Network algorithms, and especially a computer vision algorithm called the "convolutional neural network" (CNN), have seen phenomenal success. They bring to applications targeting smart cameras, mobile devices and the automotive industry the promise of running impressive image tracking and recognition tasks on real-time embedded systems. The implementation of CNNs on embedded platforms creates new challenges due to the amount of data that needs to be processed and loaded in a very short time under the constraints of low bandwidth and power. Furthermore, the variety of CNN networks such as AlexNet, VGG, NIN, GoogLeNet and others raises the demand for programmable, generic solutions capable of running CNNs, where a DSP is preferable due to its ability to outperform GPU solutions.
 Yangyan Li

Postdoctoral Fellow

Tel Aviv University

Yangyan Li is a postdoctoral fellow at Tel Aviv University. Before that, Yangyan was a postdoctoral scholar at Stanford University; he received his PhD degree from the University of the Chinese Academy of Sciences in 2013, and his bachelor's degree from Sichuan University in 2008. His primary research interests fall in the field of Computer Graphics and Computer Vision, with an emphasis on 3D reconstruction.

Joint Embeddings of Shapes and Images via CNN Image Purification

We propose a joint embedding space populated by both 3D shapes and 2D images, where the distance between embedded entities reflects the similarity between the underlying objects represented by the image or 3D model, unaffected by nuisance factors. This joint embedding space facilitates comparison between entities of either form, and allows for cross-modality retrieval.
 Noam Babayof

Sr. Staff Processors Solution Architect

Synopsys  


Noam Babayof joined Synopsys in 2011 and is currently the Sr. Staff Processors Solution Architect in Israel, responsible for ARC processors and subsystems and for embedded vision products. Prior to this role, Noam was the technical consultant for Synopsys foundation IP products.

Noam has over 23 years of experience in the semiconductor design and IP industries, and has held VLSI project manager and SoC architect roles at PMC-Sierra, ASIC project manager roles at Zen Research and Silicon Value, and an R&D manager role during his IAF service.

Noam holds an Executive MBA in Finance from the Hebrew University of Jerusalem, a B.A. in Mathematics and Computer Science from the Open University of Israel, and a Practical Engineering diploma in Electronics from the Ort Givat Ram College in Jerusalem.

Enabling Cars to See with Efficient Vision Processors

Automotive vision capabilities are advancing rapidly because of their potential to enhance safety and simplify driving. Embedding vision in cars requires powerful multicore processors and specialized software that is optimized for object detection and classification of HD video coming from multiple cameras. A challenge is to offer efficient support for the required vision performance without burning watts of power. This presentation describes the specialized vision processors, software, and IP solutions that Synopsys is developing for automotive applications.
 Prof. Shai Dekel

Principal Scientist

GE Global Research and

Visiting Associate Professor

School of Mathematical Sciences, TAU

Shai is a visiting associate professor at the School of Mathematical Sciences, Tel-Aviv University. He also serves as a principal scientist in GE Global Research. His research interests are theoretical approximation theory, harmonic analysis and their applications in data science.

When Sensors are Out of Line!... Deep Learning on Manifolds – Theory and Applications

Deep learning (DL) methods have become very popular in recent years due to their success in a variety of "human like" tasks in areas such as computer vision and natural language processing. However, there are many data science problems in which data is collected by sensors located on non-planar geometric structures. In these cases, conventional DL architectures such as Convolutional Neural Networks (CNNs) are unusable in their standard form, because the notions of localized convolutions on the data are not obvious. We show how modelling DL architectures based on harmonic analysis on graphs and manifolds provides better classification and estimation than previous work on several data sets. Joint work with Leon Gugel, Michael Rotman and Yoel Shkolnisky.
 Ziv Mhabary

Director of Computer Vision

Trax Image Recognition


Ziv Mhabary is the Computer Vision Director at Trax Image Recognition. Ziv is responsible for leading the company's focus on fine-grained recognition and the Computer Vision group at Trax.
His team is focused on the development of a scalable recognition system by using deep learning technology.

Prior to Trax, Ziv was a computer vision team leader at Samsung R&D, where he was responsible for various computer vision projects.

Ziv is a PhD candidate in computer vision from Ben-Gurion University, where he also received his M.Sc. and B.Sc. with honors.

His research deals with image processing algorithms in high dimensions using non-uniform fast Fourier transforms.

Convolutional Neural Networks – from Theory to Practice

Convolutional Neural Networks (CNNs) are an approach to machine learning that has revolutionized computer vision in the last four years and outperformed the previous state-of-the-art methods. Recently, open-source libraries such as Google TensorFlow, Microsoft CNTK and Samsung VELES have made deep learning training much more accessible. While there has been major progress in the accessibility of CNN training, it remains challenging to understand how they do what they do, especially what computations they perform at intermediate layers, and how one can improve trained models.

Trax is built to revolutionize the retail industry using cutting-edge computer vision techniques; our challenges include fine-grained detection in cluttered and noisy scenes using fuzzily labeled training data. This challenge drove us to build tools to better understand one of the least understood techniques used in both academia and industry.

In this lecture, we will share our journey from theory to practice: we will examine some of the challenges we face and the techniques and best practices we've developed. To gain a better understanding of the network, we used several debugging and visualization tools that show what a neuron "sees" and thus what computations the networks are doing.
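
The talk does not name the specific debugging and visualization tools; as a generic illustration of inspecting what intermediate CNN layers compute, here is a minimal sketch using forward hooks in PyTorch. The model and layer names are assumptions, not Trax's tooling.

```python
# Minimal sketch (assumption): capture intermediate feature maps of a CNN with
# PyTorch forward hooks, as a starting point for layer/neuron visualization.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register hooks on a few layers of interest (names are illustrative).
model.layer1.register_forward_hook(save_activation("layer1"))
model.layer3.register_forward_hook(save_activation("layer3"))

with torch.no_grad():
    _ = model(torch.randn(1, 3, 224, 224))   # dummy image batch

for name, feat in activations.items():
    # Each captured feature map can now be visualized channel by channel.
    print(name, tuple(feat.shape))
```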

 Prof. Shai Shalev-Shwartz

Associate Professor

School of Computer Science and Engineering, HUJI and

VP of technology

Mobileye

Shai Shalev-Shwartz is an associate professor at the School of Computer Science and Engineering at the Hebrew University, and a VP of Technology at Mobileye, where he is working on autonomous driving. Shai is the author of the book "Online Learning and Online Convex Optimization" and a co-author of the book "Understanding Machine Learning: From Theory to Algorithms".

Deep Learning for Autonomous Driving

In recent years, deep learning based systems have led to breakthroughs in computer vision, speech recognition, and other hard AI tasks. The application of autonomous driving has some unique features, and I will describe theoretical and practical work on deep learning for autonomous driving.
 Riki Sheinin

SW Engineer

Intel PerC SW

Approach for Evaluating Point Cloud Quality in SLAM

As our world is becoming 3D-oriented, the demand for cloning real-life scenes into 3D models is growing. Many SLAM algorithms have been developed and are being used for various use cases, from Indoor Navigation through Augmented Reality (AR) to Home Décor. Evaluation of SLAM quality performance, like many algorithm evaluations, is a key requirement for choosing the best algorithm for a specific use case.

Current evaluation methods focus on pose estimation quality (tracking quality). Although this is very important and crucial for any type of SLAM use case, for many SLAM applications, such as home décor, AR games and others, the quality of the point cloud itself is as important as the tracking.
Our model presents an innovative approach for assessing the quality of the point cloud created during the SLAM process. The model provides an RMS error estimate for the difference between the real-world scene and the generated point cloud. Additionally, it rates the blob-to-holes ratio.
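
The abstract does not specify how the RMS error is computed; one straightforward estimate uses nearest-neighbor distances between the generated cloud and a reference scan. The sketch below makes that assumption (and assumes the clouds are already aligned); it is illustrative, not the model described in the talk.

```python
# Minimal sketch (assumption): RMS error between a SLAM point cloud and a
# reference scan via nearest-neighbor distances, for clouds already aligned
# in the same coordinate frame.
import numpy as np
from scipy.spatial import cKDTree

def point_cloud_rms_error(generated, reference):
    """generated: (N, 3) XYZ points from SLAM; reference: (M, 3) ground truth."""
    tree = cKDTree(reference)
    # Distance from each generated point to its closest reference point.
    dists, _ = tree.query(generated, k=1)
    return float(np.sqrt(np.mean(dists ** 2)))

# Example with synthetic data: a noisy copy of a reference cloud.
ref = np.random.rand(10_000, 3)
gen = ref + np.random.normal(scale=0.01, size=ref.shape)
print("RMS error:", point_cloud_rms_error(gen, ref))
```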

 Yehonatan Sela

Validation Architect

Intel PerC SW


Yehonatan is a Validation Architect in Intel's Perceptual Computing group.

He has been at Intel for 10 years, working on various teams and areas, and has gained expertise mainly in video encoding and computer vision.

Yehonatan holds a B.Sc. and an M.Sc. in Computational Biology from the Hebrew University of Jerusalem, with a thesis focused on medical imaging.

He’s a proud father of 5 young children, and enjoys reading history, geography, and science fiction.

Approach for Evaluating Point Cloud Quality in SLAM

As our world is becoming 3D-oriented, the demand for cloning real-life scenes into 3D models is growing. Many SLAM algorithms have been developed and are being used for various use cases, from Indoor Navigation through Augmented Reality (AR) to Home Décor. Evaluation of SLAM quality performance, like many algorithm evaluations, is a key requirement for choosing the best algorithm for a specific use case.

Current evaluation methods focus on pose estimation quality (tracking quality). Although this is very important and crucial for any type of SLAM use case, for many SLAM applications, such as home décor, AR games and others, the quality of the point cloud itself is as important as the tracking.
Our model presents an innovative approach for assessing the quality of the point cloud created during the SLAM process. The model provides an RMS error estimate for the difference between the real-world scene and the generated point cloud. Additionally, it rates the blob-to-holes ratio.

 Koby Simana

CEO

IVC Research Center

Koby Simana, CEO of IVC Research Center, is considered a leading expert and speaker on the high-tech and venture capital industries in Israel. With vast academic and industrial experience, Koby leads IVC Research Center, which provides data, insights, facts and figures on the local high-tech ecosystem. Koby joined the IVC team in 2001 as Director of Marketing & Sales, and until 2008 was responsible for operations in Israel and global business development. In this role, Koby supervised IVC's close partnerships with Israeli VC funds and business partners in Europe and the US, coupled with managing production and sales of the IVC Yearbook. Prior to joining IVC, Koby was a senior analyst and consultant to Israel's Minister of Housing, Natan Sharansky, under the auspices of a fellowship at the Institute for Advanced Strategic and Political Studies. Koby holds an MBA with a major in Finance from Bar Ilan University and a BA in Economics and Communications from Tel Aviv University.
The Israeli Machine Vision Industry Overview

A macro overview of the machine vision cluster within the Israeli high-tech industry. The overview will include the size and growth rate of this cluster, segments within the cluster, breakdown by stage and geographical regions within Israel, investment trends throughout the years, leading strategic and financial investors, and exits.
 Avram Golbert

3D Understanding Group Leader

Rafael

Avram Golbert leads Rafael's 3D Understanding group, which researches 3D scene reconstruction and the applications of geometric understanding to algorithms such as mapping, real-time urban navigation and object detection. Avram holds a BSc in Mathematics and an MSc in Computer Science from the Hebrew University, where he researched object detection using geometric and semantic context. This work was funded in part by the Omek consortium and was done in part while Avram was a guest researcher at the Deep Vision Lab at Tel Aviv University with Prof. Lior Wolf.

What from Where. In 3D! Learning Semantic Segmentation from 3D Models

We address the challenge of classifying pixels in aerial images of urban areas, which can provide crucial data for various applications such as mapping, 3D modeling and navigation. Fully Convolutional Networks are very well suited to handle image segmentation; however, generating enough annotated data is very costly.

We present a semi-supervised system to annotate an entire urban 3D model generated from Multi-View Stereo. The annotated model is then back-projected onto thousands of images that are then used to train the network. The exact world location of every pixel, provided by the model, enables a variety of geometric features, simplifying the semantic annotation. We introduce a novel loss function to optimize the F1 score calculated over the entire batch. We present results on multiple datasets and compare results with and without depth information.
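
The abstract does not give the loss formula; a common differentiable surrogate for a batch-level F1 score replaces hard TP/FP/FN counts with soft counts from predicted probabilities. The sketch below follows that assumption and is not necessarily the authors' loss.

```python
# Minimal sketch (assumption): a differentiable "soft F1" loss computed over
# the entire batch, as one common way to optimize F1 directly.
import torch

def soft_f1_loss(logits, targets, eps=1e-7):
    """logits, targets: tensors of the same shape; targets are 0/1 labels.
    Returns 1 - F1, where F1 is computed from soft counts over the batch."""
    probs = torch.sigmoid(logits)
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return 1 - f1

# Example: per-pixel binary segmentation scores for a batch of images.
logits = torch.randn(4, 1, 64, 64, requires_grad=True)
targets = torch.randint(0, 2, (4, 1, 64, 64)).float()
loss = soft_f1_loss(logits, targets)
loss.backward()
print(float(loss))
```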

 Dan Levi

Senior Researcher

GM ATC-I

I received the B.Sc. degree (with honors) in mathematics and computer science from Tel-Aviv University in 2000, and the M.Sc. and PhD degrees in applied mathematics and computer science from the Weizmann Institute in 2004 and 2009, respectively. At the Weizmann Institute I conducted research in human and computer vision under the instruction of Professor Shimon Ullman. From 2007 I have been conducting industrial computer vision research and development at several companies, including Elbit Systems, Israel. Since 2009 I am a senior researcher at the Smart Sensing and Vision Group, General Motors Advanced Technical Center Israel (ATC-I).

StixelNet: A Deep Convolutional Network for Obstacle Detection and Road Segmentation

General obstacle detection is a key enabler for obstacle avoidance in mobile robotics and autonomous driving. We address the task of detecting the closest obstacle in each direction from a driving vehicle. As opposed to existing methods based on 3D sensing, we use a single color camera. In our approach the task is reduced to a column-wise regression problem. The regression is then solved using a deep convolutional neural network (CNN). In addition, we introduce a new loss function based on a semi-discrete representation of the obstacle position probability to train the network. The network is trained using ground truth automatically generated from a laser-scanner point cloud. Using the KITTI dataset, we show that our monocular-based approach outperforms existing camera-based methods, including ones using stereo. We also apply the network to the related task of road segmentation, achieving among the best results on the KITTI road segmentation challenge.
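
The abstract does not spell the loss out; one way to realize a semi-discrete position loss is to spread the probability of the continuous ground-truth obstacle position over the two nearest discrete bins and maximize the interpolated log-probability. The sketch below follows that assumption and is illustrative rather than the exact formulation used in the talk.

```python
# Minimal sketch (assumption): a "semi-discrete" column-wise position loss
# that linearly interpolates the ground-truth obstacle row between the two
# nearest bins. Shapes, names and binning are illustrative only.
import torch
import torch.nn.functional as F

def semi_discrete_loss(logits, y, bin_centers, eps=1e-7):
    """logits: (B, K) per-column scores over K vertical bins.
    y: (B,) continuous ground-truth obstacle positions (same units as bins).
    bin_centers: (K,) monotonically increasing bin centers."""
    probs = F.softmax(logits, dim=1)
    # Index of the bin just below (or equal to) each ground-truth position.
    lo = torch.clamp(torch.searchsorted(bin_centers, y) - 1,
                     0, len(bin_centers) - 2)
    hi = lo + 1
    c_lo, c_hi = bin_centers[lo], bin_centers[hi]
    # Linear interpolation weight of the lower bin.
    w_lo = ((c_hi - y) / (c_hi - c_lo)).clamp(0, 1)
    p = w_lo * probs.gather(1, lo.unsqueeze(1)).squeeze(1) \
        + (1 - w_lo) * probs.gather(1, hi.unsqueeze(1)).squeeze(1)
    return -(p + eps).log().mean()

# Example: 8 image columns, obstacle row discretized into 50 bins.
bins = torch.linspace(0, 370, 50)
logits = torch.randn(8, 50, requires_grad=True)
y = torch.rand(8) * 370
semi_discrete_loss(logits, y, bins).backward()
```
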
 Dr. Tammy Riklin Raviv

Electrical and Computer Engineering Department

Ben Gurion University of the Negev

Tammy Riklin Raviv's research focuses on the development of mathematical and algorithmic tools for processing, analysis and understanding of natural, biological and medical images. She holds a B.Sc. in Physics and an M.Sc. in Computer Science from the Hebrew University of Jerusalem, Israel. She received her Ph.D. from the School of Electrical Engineering of Tel-Aviv University. In the years 2008-2012 she was a post-doctoral associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) and a research fellow at Harvard Medical School and at the Broad Institute of MIT and Harvard.

Analysis of High-throughput Microscopy Videos: Catching Up with Cell Dynamics

High-throughput live cell imaging is a versatile platform for quantitative analysis of biological processes. Thanks to advances in automation, thousands of cell populations can be perturbed and recorded by automated microscopes, making imaging suitable for comprehensive and systematic experiments. While numerous algorithms for analyzing specific experimental setups exist, the construction of a generally applicable tool, without exhaustive training, remains a challenge. In this talk I will present an unsupervised framework that ties together three fundamental aspects of live-cell analysis, namely cell segmentation, tracking and mitosis detection, via Bayesian inference of dynamic models. Successful application to different datasets acquired in a variety of laboratory settings is demonstrated.
 Carlo Nardone

Senior Solution Architect

NVIDIA


A physicist by background, Carlo Nardone is an HPC professional with more than 25 years of experience in mathematical modelling and numerical simulation, scientific data analysis, parallel and distributed computing, and accelerated computing.

He joined NVIDIA about 1.5 years ago as a Senior Solution Architect in the EMEA Enterprise team. His current focus, alongside traditional HPC projects, is on the latest "killer app" in this field, namely Deep Learning, helping partners and customers adopt NVIDIA Deep Learning technologies, particularly for autonomous driving applications.

NVIDIA Deep Learning Platform

Researchers, enterprises and start-ups rely on NVIDIA GPUs to solve applied AI problems such as computer vision, speech recognition, and natural language processing. The recent advances in Deep Learning algorithms and frameworks are pushing AI capabilities to a new level. NVIDIA is helping this effort with a set of hardware platforms and software toolkits, including its Deep Learning SDK, which provides high-performance tools and libraries to power innovative GPU-accelerated machine learning applications. NVIDIA is also building the DRIVE PX product series for the automotive industry, enabling Deep Learning and advanced AI algorithms for autonomous driving. We will discuss these platforms as well as briefly describe future trends of the industry.