March 18, 2019 Pavilion 10, EXPO Tel Aviv
The recent success in mapping between two domains in an unsupervised way and without any existing knowledge, other than network hyperparameters, is nothing less than extraordinary and has far-reaching consequences. As far as we know, nothing in the existing machine learning or cognitive science literature suggests that this would be possible.
We conjecture that functions of minimal complexity play a pivotal role in this success. If our hypothesis is correct, simply by training networks that are not too complex, the "correct" target mapping stands out from all other alternative mappings. Our analysis leads directly to a new unsupervised cross-domain mapping algorithm that is able to avoid the ambiguity of such mapping, yet enjoy the expressiveness of deep neural networks.
Taking this approach a step further, we define a general Occam’s razor property and employ it in order to obtain generalization bounds for unsupervised learning. The bounds hold both in expectation, with application to hyperparameter selection, and per sample, thus supporting dynamic confidence-based runtime behavior. The latter is crucial for real-world computer vision systems and has never been shown for functions learned in an unsupervised way.
I will also present new results on the AI task of identifying analogies across domains without supervision. Recent advances in cross-domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity often does not suffice for identifying the matching sample from the other domain. Our work tackles this very task of finding exact analogies between datasets, e.g., for every image in domain A, finding an analogous image in domain B.
Research Scientist, Facebook AI Research (FAIR)
Full professor, School of Computer Science at Tel-Aviv University
Prof. Wolf is a research scientist in Facebook AI Research (FAIR) and a full professor at the School of Computer Science at Tel-Aviv University. Prof. Wolf’s work has received several awards including the best paper awards at ICANN'16 and at the CVPR'13 workshop on action recognition.
Prof. Wolf has extensive experience in forming, advising and heading R&D at multiple computer vision startups and his research focuses on computer vision and deep learning and includes topics such as face identification, document analysis, natural language processing, digital paleography, and video action recognition.
Deep Learning has been amazingly successful in applications such as speech recognition, image and video analysis, and machine translation. Yet, compared with the human brain, it is still extremely inefficient, both in terms of data and power. In this talk we will discuss a number of directions for improvement in both these dimensions. First, I will discuss how symmetries in the data can be exploited to extract more information from each data point, through the use of group convolutional networks. Then we will discuss how a Bayesian view of deep learning can help us compress neural networks, sometimes by a very large amount, thus improving their power efficiency. Finally, we will discuss how spiking neural networks can improve the efficiency of deep learning in the temporal domain.
Vice President Technologies, Qualcomm
Professor of Machine Learning, University of Amsterdam
Max Welling is a research chair in Machine Learning at the University of Amsterdam and a Vice President Technologies at Qualcomm. He has a secondary appointment at the Canadian Institute for Advanced Research (CIFAR). In the past he held postdoctoral positions at Caltech (’98-’00), UCL (’00-’01) and the University of Toronto (’01-’03). He received his PhD in ’98 under the supervision of Prof. G. 't Hooft. Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015. He has served on the board of the NIPS Foundation since 2015 and was program chair and general chair of NIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016. He has served on the editorial boards of JMLR and JML and was an associate editor for Neurocomputing, JCGS and TPAMI. He received an NSF CAREER award in 2005 and is the recipient of the ECCV Koenderink Prize in 2010. He is on the board of the Data Science Research Center in Amsterdam. Besides AMLAB, he co-directs deep learning labs at UvA funded by Qualcomm, Bosch, Philips, Microsoft and SAP. He has co-authored over 200 publications in machine learning.
Recent work has shown impressive success in automatically synthesizing new images with desired properties, such as transferring painterly style, modifying facial expressions or manipulating the center of attention of the image. In this talk I will discuss two of the standing challenges in image synthesis and how we tackle them.
Associate Professor, EE
Technion - Israel Institute of Technology
Lihi Zelnik-Manor is an Associate Professor in the Faculty of Electrical Engineering at the Technion, Israel. Between 2014 and 2016 she was a visiting Associate Professor at CornellTech. Prior to the Technion, she worked as a postdoctoral fellow in the Department of Engineering and Applied Science at the California Institute of Technology (Caltech). She holds a PhD and MSc (with honors) in Computer Science from the Weizmann Institute of Science and a BSc (summa cum laude) in Mechanical Engineering from the Technion.
Prof. Zelnik-Manor’s awards and honors include the Israeli higher-education planning and budgeting committee (Vatat) scholarship for outstanding Ph.D. students, the Sloan-Swartz postdoctoral fellowship, the Best Student Paper Award at IEEE SMI'05, the AIM@SHAPE Best Paper Award 2005 and the Outstanding Reviewer Award at CVPR'08. She is also a recipient of the Gutwirth prize. Prof. Zelnik-Manor has served as Area Chair for ECCV and CVPR multiple times, as Program Chair of CVPR’16 and as Associate Editor at TPAMI.
A city is an aggregate of a huge amount of heterogeneous data. However, extracting meaningful values from that data remains challenging. City Brain is an end-to-end system whose goal is to glean irreplaceable values from big-city data, specifically videos, with the assistance of rapidly evolving AI technologies and fast-growing computing capacity. From cognition to optimization, to decision-making, from search to prediction and ultimately, to intervention, City Brain improves the way we manage the city, as well as the way we live in it. In this talk, we will introduce what we can do to further this goal and make it a reality, step by step.
Distinguished Engineer / VP of
Alibaba DAMO Academy
Dr. Xian-Sheng Hua is now a Distinguished Engineer/VP of Alibaba DAMO Academy, leading the visual computing team. Dr. Hua is an IEEE Fellow, ACM Distinguished Scientist, and MIT TR35 Young Innovator Award Recipient. The team he is leading is focused on visual intelligence on the cloud, which includes but is not limited to: large scale image and video analysis, recognition, search, and reconstruction, and related cloud-based applications such as health care, transportation, communication, education, sports, entertainment, etc. He has authored or coauthored more than 200 research papers and has filed more than 90 patents. He served as a Program Co-Chair for IEEE ICME 2013, ACM Multimedia 2012, and IEEE ICME 2012, and will be serving as general co-chair of ACM Multimedia 2020.
Much progress has been made in the last two years on efficient object detection networks (e.g., YOLO9000, SqueezeDet and MobileNet). In this talk, we will address the unique challenges of autonomous driving applications that go beyond the traditional object detection methods. First, we will introduce a unified network that jointly performs various autonomous driving tasks in real-time on mobile to protect drivers on the road. Then, we will address the challenges that emerge when training a single mobile network for multiple tasks such as object detection, object attributes recognition, classification, and tracking. Next, we will describe a scalable pipeline for continuous training of mobile networks through hard negative mining. Finally, we will go over some of our advanced driver assistance applications that aim to make driving safer worldwide.
Director of Deep Learning
Ilan Kadar is the Director of Deep Learning at Nexar. Ilan is responsible for leading the deep learning team and the effort to leverage Nexar's large-scale datasets of real-world driving environments for automotive safety applications. Prior to Nexar, Ilan led the deep learning group at Cortica and was responsible for building the company's machine vision technology. Ilan received his BSc, MSc and PhD degrees in computer science from Ben-Gurion University of the Negev, Israel, in 2006, 2008, and 2012, respectively (summa cum laude). His research thesis focused on machine learning algorithms for scene recognition and image retrieval, while employing insights from behavioral and psychophysical experiments. His work was published in leading conferences and journals in the area of machine vision and was awarded best research project at IMVC 2013, the Intel award for excellent Israeli PhD students in 2012, and the Friedman award for outstanding PhD students in 2012.
We will examine and shed new light on several common practices and beliefs in Deep Learning: the effect of batch size on generalization, the use of early-stopping and the role of the final classifier in convolutional networks. Both theoretical and empirical arguments will be used to show that current methods and understanding may prove misguided.
Technion - Israel Institute of Technology
Elad Hoffer is a PhD candidate at the Technion - Israel Institute of Technology. His research focuses on deep learning of representations. Elad holds B.Sc. and M.Sc. degrees in Electrical Engineering from the Technion.
Deploying Deep Neural Networks on everyday embedded devices poses a challenge that is currently addressed through the speed-accuracy tradeoff. This talk will focus on understanding the different components of the deep learning "stack" (with emphasis on algorithms) and their impact on the final embedded application's accuracy and run-time performance. During the talk I will go over several use cases in IoT and ADAS applications which we are solving at Brodmann17.
Co-founder and CTO
The CTO and co-founder of Brodmann17, a pioneering startup that took it upon itself to solve deep learning compute on everyday devices. Prior to co-founding Brodmann17, Amir led highly professional deep-learning research teams at Adience and Superfish, which was one of the first companies in Israel to adopt deep learning. He specializes in deep learning, machine learning and computer vision, and holds a PhD in Engineering from Bar-Ilan University under the supervision of Prof. Jacob Goldberger.
Collecting and labeling training data for vision-based road scene understanding is a major challenge. The most prominent approach is to use manual labeling, though it is clear that scalability of this approach is limited. More scalable alternatives are simulated data and cross-sensor label transfer. In this talk I will present automatically generated ground truth using one or more sensors, primarily dense Lidar. Specifically, I will present the benefits and challenges of this approach for road scene understanding tasks, including general and category-based obstacle detection, free space and curb detection.
I received my B.Sc. degree (with honors) in mathematics and computer science from Tel-Aviv University in 2000, and my M.Sc. and PhD degrees in applied mathematics and computer science from the Weizmann Institute, in 2004 and 2009 respectively. At the Weizmann Institute I conducted research in human and computer vision under the supervision of Professor Shimon Ullman. Since 2007 I have been conducting industrial computer vision research and development at several companies, including General Motors and Elbit Systems, Israel.
Sparse approximation is a well-established theory, with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model, and then turn to describe two special cases of it – the convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to … deep-learning. This talk is meant for newcomers to these fields - no prior knowledge of sparse approximation is assumed.
Computer Science Department
Technion - Israel Institute of Technology
Michael Elad holds a B.Sc. (1986), M.Sc. (1988) and D.Sc. (1997) in Electrical Engineering from the Technion. After several years in industrial research, Michael served as a research-associate at Stanford University during 2001-2003. Since 2003 he is a Computer-Science Professor at the Technion. Michael works in the fields of signal and image processing, specializing in sparse representations. He has authored hundreds of technical publications in leading venues, many of which have led to exceptional impact. Since January 2016, Prof. Elad is serving as the Editor-in-Chief for SIIMS.
In this talk, you will get an exposure to the various types of deep learning frameworks – declarative and imperative frameworks such as TensorFlow and PyTorch. After a broad overview of frameworks, you will be introduced to the PyTorch framework in more detail. We will discuss your perspective as a researcher and a user, formalizing the needs of research workflows (covering data pre-processing and loading, model building, etc.). Then, we shall see how the different features of PyTorch map to helping you with these workflows.
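As a taste of the workflows above, here is a minimal, hypothetical PyTorch sketch (the toy dataset and model are illustrative, not taken from the talk) covering data loading, model building, and a training step:

```python
# Hypothetical PyTorch research workflow sketch: data loading,
# model building, and one training epoch. The toy data and
# architecture are illustrative assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Data pre-processing and loading: wrap tensors in a Dataset and
# batch/shuffle them with a DataLoader.
xs = torch.randn(64, 10)
ys = torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(xs, ys), batch_size=16, shuffle=True)

# Model building: modules compose imperatively, so ordinary Python
# control flow works inside the model.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One training epoch: forward pass, backward pass, parameter update.
for xb, yb in loader:
    opt.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    opt.step()
```

The imperative style is what the talk contrasts with declarative frameworks: the graph is built on the fly as the Python code runs.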
Facebook AI Research
Adam Polyak is a PhD student under the supervision of Prof. L. Wolf from Tel-Aviv University and a Research Engineer in the Facebook AI Research (FAIR) group. He received his bachelor's degree in computer science and mathematics from Bar-Ilan University as part of the program for mathematically talented youth, and his master's degree under the guidance of Prof. L. Wolf.
His research focuses on deep learning and includes topics such as cross domain image generation, speech synthesis and voice conversion.
Facebook AI Research
Eliya Nachmani is a PhD student under the supervision of Prof. Lior Wolf and a researcher at Facebook AI Research (FAIR). He received the B.Sc. degree from the Technion - Israel Institute of Technology, and the M.Sc. degree from Tel-Aviv University, both in electrical engineering. His research interests include machine learning, deep learning, reinforcement learning, signal processing, and error control coding.
A realistic automotive simulation platform, where virtual cars travel virtual roads in virtual cities under remarkably true-to-life conditions, will be a vital part of developing and testing autonomous vehicles. The technology behind the Cognata simulation engine relies heavily on deep learning, computer vision, and other advanced AI methods. We'll present a cloud-based simulation engine and discuss how it works and how to develop with it.
CEO & Founder
Danny Atsmon, an expert in ADAS and deep learning, is the CEO of Cognata Ltd., a dynamic young technology company that brings the disruptive power of artificial intelligence, deep learning, and computer vision to simulated testing for autonomous cars. He has been in the business of launching high-tech products for more than 20 years. Before joining Cognata, Danny served as Harman’s (NYSE: HAR, now Samsung) Director of ADAS and Senior Director of Machine Learning. He co-founded two startup companies: Picitup, a computer visual shopping suite, and iOnRoad, which uses a phone’s native camera and sensors to detect vehicles in front of a car (later acquired by Harman International). Danny holds several United States utility patents and has created a pipeline of dozens of patent-pending applications. He has also won many top industry awards, including the Design and Engineering Showcase Award (2012) for innovation in design, the CTIA award (2012) for Best Mobile Application for Automotive, Safe Driving & Transportation, Microsoft Think Next (2012), and the QPrize (2013), awarded in Qualcomm Ventures’ seed investment competition. Danny is a graduate of the prestigious Israeli Defense Forces (IDF) Haman Talpiot program, where he served in the elite Unit 8200, and holds a B.Sc. degree in Physics from Tel-Aviv University.
Video Indexer (https://vi.microsoft.com) is a new AI cognitive service by Microsoft which analyzes and indexes visual and audio data from media files. Video, predicted to account for 80% of internet traffic by 2021, is everywhere, and even small organizations accumulate large amounts of it, giving rise to a pressing need to search and manage video assets.
Video Indexer uses many cutting-edge and traditional ML techniques for various applications. In this talk, we present Video Indexer at a high level, with a demo, and continue with a technical presentation of two machine learning models.
Principal Data Science Manager
Azure Media Services
Dr. Royi Ronen manages the Video Indexer Data Science team at Microsoft ILDC. Previously, he managed the Azure Security Center Data Science team. Prior to Microsoft, he was with Adobe and with IBM Research, working on data modeling. He earned his PhD, MSc and BSc at the Technion.
The past six years have seen a dramatic increase in the performance of recognition systems due to the introduction of deep architectures for feature learning and classification. However, the mathematical reasons for this success remain elusive. In this talk, we will briefly survey some existing theory of deep learning. In particular, we will focus on data-structure-based theory and discuss two recent developments. The first studies the generalization error of deep neural networks. The second focuses on solving minimization problems with neural networks.
Tel Aviv University
Raja Giryes is a senior lecturer (assistant professor) in the School of Electrical Engineering at Tel Aviv University. He received his B.Sc, M.Sc., and PhD degrees from the Computer Science Department at the Technion, and was a postdoc at the lab of Prof. G. Sapiro at Duke University. His research interests include signal and image processing and machine learning, and in particular, deep learning and sparse representations.
Raja has received numerous grants and awards for his research, including the ERC-StG grant and the Azrieli Fellowship. He has organized workshops and tutorials on deep learning at leading conferences such as ICML, ICCV, CVPR, CDC and ACCV.
Generative adversarial networks (GAN) have recently shown major progress in generating images, as in cross-model generation or applying a specific style to images. We utilize this progress in order to improve generation of simulated images required for computer vision tasks. Our objective function is based on principles acquired from recently published work: preserving key attributes between the input and the translated image; balancing the power of the discriminator against the generator in order to better achieve Nash equilibrium; and using a task-dedicated loss in order to ensure that the generated images are valuable for the desired task at hand.
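The three principles in the objective above can be sketched as a weighted sum of loss terms. The following is an illustrative generator-loss sketch under assumed weights and with an L1 reconstruction term standing in for the attribute-preservation loss, not the authors' actual code:

```python
# Illustrative sketch (not the authors' implementation) of a generator
# objective combining: an adversarial term, an attribute-preservation
# term, and a task-dedicated term. Weights lam_attr/lam_task are assumed.
import torch

def generator_loss(x, g_x, d_fake_logits, task_loss_fn,
                   lam_attr=10.0, lam_task=1.0):
    # Adversarial term: the generator tries to make the discriminator
    # score its outputs as real (target = 1).
    adv = torch.nn.functional.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Attribute preservation: keep key content of the input in the
    # translated image (an L1 term stands in for the attribute loss here).
    attr = (x - g_x).abs().mean()
    # Task-dedicated term: ensure the generated image is useful for the
    # downstream computer vision task.
    task = task_loss_fn(g_x)
    return adv + lam_attr * attr + lam_task * task
```

Balancing the discriminator against the generator, the talk's second principle, is handled in the training loop (e.g., by adapting how often each network is updated) rather than in this loss expression.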
Computer Vision Researcher
Michal Holtzman Gazit has been a senior computer vision researcher at Rafael Ltd. since 2013, with nearly 20 years of experience in the field of computer vision and image processing. She received her B.Sc. (1998) and M.Sc. (2004) in Electrical Engineering from the Technion, and her PhD (2010) in Computer Science from the Technion. During 2010-2012, she was a postdoctoral fellow in the Computer Science Department at the University of British Columbia, Vancouver, Canada. Her main research interests are computer vision, image processing, and deep learning.
In recent times, there have been many advances in anomaly detection for computer vision applications. Despite this, the problem of anomaly detection on a vehicle's undercarriage remains very challenging for two main reasons:
First, the data domain for a vehicle undercarriage is very unique; there is no publicly available dataset, and it is not readily available online.
Second, there is no dataset of the threats to be detected, which can appear in any place or form (explosive devices, weapons, contraband, etc.). Essentially, this is a semi-supervised anomaly detection problem, where the anomaly class does not exist in the dataset.
In this presentation, we will describe the steps we took to solve this problem, including deep learning models for representations of vehicles, similarity metrics, segmentation, anomaly detection, and how all these models are combined into a single system that analyzes a vehicle in just a few seconds. We will also show how models trained for security purposes have great value in the automotive industry: using similar systems, we can detect various types of mechanical problems and damage to the exterior of any vehicle. By using such technologies for anomaly detection in vehicles in an automotive/civilian context, we can enable and streamline predictive maintenance practices and consequently ensure safe and reliable mobility.
Deep Learning Team Lead
Ilya joined UVeye to lead the deep learning algorithms team.
Ilya is responsible for developing the visual threat assessment and anomaly detection algorithms at UVeye, providing state-of-the-art vehicle inspection capabilities for both the security and automotive industries.
Prior to his current position, Ilya was responsible for developing deep learning algorithms for robotics visual perception and decision-making technology.
Ilya holds a B.Sc. in Electrical Engineering from Ben-Gurion University.
Current ultrasound evaluations are, in most cases, performed visually, manually or semi-automatically.
These methods are subjective, time-consuming, error-prone, cumbersome and highly dependent on the experience of the physician.
DiA Imaging Analysis is demonstrating how the use of cognitive image processing technology, which is based on advanced pattern recognition and machine learning algorithms, automatically imitates the way the human eye identifies borders and motion and provides quick, accurate and automated data and scoring for diagnosis.
DiA Imaging Analysis
Mrs. Goldman-Aslan is the Co-Founder and CEO of DiA Imaging Analysis Ltd.
Mrs. Goldman-Aslan has previously founded several start-ups, and prior to joining DiA served as Managing Director of several startups in the biomed and security industries.
Mrs. Goldman-Aslan began her career as an attorney focused on capital markets, the high-tech industry and commercial law. Mrs. Goldman-Aslan holds a B.A. in Business and an LL.M. in Commercial Law from Tel-Aviv University.
A main challenge in Magnetic Resonance Imaging (MRI) for clinical applications is speeding up scan time. Beyond the improvement of patient experience and the reduction of operational costs, faster scans are essential for time-sensitive imaging, where target movement is unavoidable, yet must be significantly lessened, e.g., fetal MRI, cardiac cine, and lungs imaging. Moreover, short scan time can enhance temporal resolution in dynamic scans, such as functional MRI or dynamic contrast enhanced MRI. Current imaging methods facilitate MRI acquisition at the price of lower spatial resolution and costly hardware solutions.
We introduce a practical, software-only framework, based on deep learning, for accelerating MRI acquisition while maintaining anatomically meaningful imaging. This is accomplished by partial MRI sampling, while using an adversarial neural network to directly estimate the missing k-space samples. The interplay between the generator and the discriminator networks enables the introduction of an adversarial cost in addition to a fidelity loss used for optimizing the peak signal-to-noise ratio (PSNR). Promising image reconstruction results are obtained for 3T and 1.5T brain MRI from a large publicly available dataset, where only 40%, 25% and 16.6% of the raw samples of each scan are used. To assess the clinical usability of the reconstructed images, we also performed tissue segmentation and compared the results to those obtained using the original fully-sampled images.
Segmentation compatibility, measured in terms of Dice scores and Hausdorff distances, demonstrates the quality of the proposed MRI reconstruction relative to other methods, including the widely used Compressed Sensing.
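For readers unfamiliar with the metric, the Dice score used to measure segmentation compatibility can be sketched in a few lines (an illustrative implementation, not the authors' evaluation code):

```python
# Minimal sketch of the Dice score, which compares segmentations of
# reconstructed vs. fully-sampled images. Illustrative only.
import numpy as np

def dice_score(seg_a, seg_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Identical masks score 1.0 and disjoint masks score 0.0, so a reconstruction whose tissue segmentation scores close to 1.0 against the fully-sampled reference preserves the anatomy that matters clinically.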
Department of Electrical and Computer Engineering
Ben-Gurion University of the Negev
Tammy Riklin Raviv is a Senior Lecturer in the Department of Electrical and Computer Engineering of Ben-Gurion University of the Negev. Her research focuses on the development of computational tools for medical and biomedical imaging. She holds a B.Sc. in Physics (magna cum laude) and an M.Sc. in Computer Science (magna cum laude) from the Hebrew University of Jerusalem. She received her Ph.D. from the School of Electrical Engineering of Tel-Aviv University. During 2010-2012 she was a research fellow at Harvard Medical School and the Broad Institute. Prior to this (2008-2010) she was a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT).
We present an autonomous PTZ camera solution which is able to detect, classify, and track objects in real time and estimate their real-world position. Our solution combines new advances in deep learning with a classical computer vision approach.
A fully autonomous detection camera able to replace a human guard was historically limited by low-quality classification, low tracking accuracy in moving cameras and low image quality at night. Today’s evolution of DNNs, laser cameras and strong GPUs paves the way for huge progress towards a fully autonomous guarding system. Our work combines state-of-the-art classification networks with new approaches to semi-automatic calibration to create a camera-based radar for perimeter defense.
JCI Innovation Garage
Lior Kirsch is a machine learning researcher at the JCI Innovation Garage. His work focuses on extracting meaningful insights from RGB cameras, depth cameras, and various other sensors in order to solve problems in the field of security and building automation. Prior to JCI, Lior worked as a machine learning consultant to Israel’s defense sector. Lior did his Ph.D. in machine learning and brain science at Bar-Ilan University, and his main research interests are deep learning, computer vision, and network theory.
Coronary artery disease (CAD) is the most common cause of death globally. Coronary computed tomography angiography (CCTA) is a non-invasive technique commonly used to rule out CAD due to its high negative predictive value. However, CCTA provides mainly an anatomical characterization of CAD rather than assessing its clinical significance, resulting in unnecessary invasive cardiac catheterizations. We present the CT-FFR application, which enables on-site assessment of the clinical relevance of CAD from a CCTA exam by combining automatic machine-learning-based coronary tree modeling with advanced real-time flow simulation. On-site CT-FFR has the potential to reduce the number of unnecessary invasive cardiac catheterizations and, subsequently, the involved risk and hospitalization in patients with suspected coronary artery disease.
Staff Research Scientist, Global Advanced Technology, CT/AMI
Moti Freiman is a staff research scientist at Philips Healthcare where he is developing advanced algorithms with the aim of improving the capacity of medical imaging devices to provide clinically meaningful information by leveraging machine learning, computer vision and image processing algorithms.
Prior to Philips, Dr. Freiman was an Instructor in Radiology at Harvard Medical School where he developed advanced algorithms for quantitative analysis of diffusion-weighted MRI data.
Dr. Freiman is the recipient of an NIH R01 research grant and the 2012 Crohn's and Colitis foundation of America research fellow award. He is the author and co-author of more than 40 journal and full-length conference papers and holds several patents and patent applications.
Technical support service over the phone can be a frustrating and costly process for both the client and the technical support agent.
At TechSee Augmented Vision Ltd., we have developed a platform that adds visual aids using a video stream over the smartphone, accompanied by augmentation of symbols and annotations superimposed on the video stream in real time. In addition, we are using machine vision and deep learning algorithms, empowered by transfer learning, for device classification and segmentation. This enables the platform to support the agents and the clients by proposing solutions based on lessons learned in previous technical support sessions.
Chief Scientist, TechSee - Augmented Vision Ltd.
Researcher, Electro-optics Engineering, Ben-Gurion University
Gabby Sarusi is a faculty member of the electro-optic engineering department at Ben-Gurion University. His main research areas are quantum structure photonic devices, band-gap engineering, and augmented reality. He is a co-founder of several startup companies: Imagine-AR, Ride-on and TechSee. Prior to his academic and entrepreneurial career, he worked at Elbit Systems Electro-optics (ElOp) as V.P. and Head of the Space and Air Imagery Intelligence Division, Chief Scientist, and Director of Thermal Imaging Systems. He holds a double B.Sc. with honors, an M.Sc. with honors and a Ph.D. from Tel Aviv University in Physical Electronics. He did his postdoc at AT&T Bell Labs and NASA-JPL.
As “smart” products progress to mass adoption, companies must ensure that the “smart” continues to evolve as much as the “technology”. Nanit is doing exactly that, in the smart nursery space. Merging advanced computer vision with proven sleep science technologies, Nanit provides the most in-depth data available for helping babies, and parents, sleep well in those crucial early months and years of a baby’s development.
This technology is expandable to the greater population as well, as tracking and understanding sleep patterns and anomalies can lead to early detection of other disease states like sleep apnea, seizures, autism and more.
Dr. Assaf Glazer is the CEO and Founder of Nanit. Nanit is a smart baby monitor that uses machine learning algorithms to provide sleep insights through first-of-its-kind camera vision. Assaf completed his Ph.D. at the Technion, specializing in the fields of machine learning and computer vision, and later worked as a postdoctoral researcher in the Runway Program at the Jacobs Institute at Cornell-Tech.
Before his Ph.D., Assaf joined Applied Materials as an algorithm researcher, working on image classification methods for process control in the semiconductor industry. Prior to that, Assaf worked at Wales Ltd. as an operational researcher, providing solutions for defense systems, while applying a variety of professional and academic skills.
The talk will have two parts. In the first, we will give a bird's-eye overview of some of Google's latest accomplishments in Machine Perception. Then, we will focus on the Israeli group and discuss the relation between audio and video.
Bar Ilan University & Google
Professor Avinatan Hassidim is a Talpiot alumnus with over 15 years of research experience. After graduating from the Hebrew University and completing a postdoc at MIT, he joined the faculty of Bar-Ilan University, where his work was used to design the Israeli medical internship lottery and the Israeli matching system for admittance to psychology programs. He now leads Google Research in Israel.
Avinatan has received numerous prizes, including the Israeli Chief of Staff award for excellent officers, winning the MIT 100K mobile track, runner-up for the best paper award at INFOCOM 2012 and 2013, and the best paper award at SIGMETRICS 2011.
Medical image acquisition has improved substantially over recent years, with devices acquiring data at faster rates and increased resolution. The image interpretation process, however, has only recently begun to benefit from computer technology. Most interpretation of medical images is performed by radiologists; however, image interpretation by humans is limited by large variations across interpreters and by fatigue. The radiologist's main tasks include an initial search process to detect abnormalities, segmentation to quantify measurements, and characterization of findings into categories such as benign vs. malignant.
In this talk I will give an overview of the deep learning computer-aided detection and diagnosis tools we are developing, which can support the detection, segmentation, and characterization tasks. Examples will be presented in chest X-ray, liver CT, and brain MRI analysis. Obtaining large-scale annotated datasets is a key challenge in the medical domain, and I will present novel methods we are developing to address these data challenges. I will conclude with an overview of possible translations of these tools toward augmented radiology reports and more efficient radiologist workflows.
Faculty of Engineering
Tel Aviv University
Hayit Greenspan is a Professor of Biomedical Engineering in the Faculty of Engineering, Tel-Aviv University. She is also the Chief Scientist of RADLogics Inc. Dr. Greenspan received the B.S. and M.S. degrees in Electrical Engineering (EE) from the Technion, and the Ph.D. degree in EE from Caltech (California Institute of Technology). She was a postdoc in the CS Division at U.C. Berkeley, after which she joined Tel-Aviv University. From 2008 until 2011, she was a visiting professor in the Department of Radiology, Faculty of Medicine, Stanford University. She was also a visiting researcher in the Multi-modal Mining for Healthcare group at IBM Research in Almaden, CA.
Dr. Greenspan has over 150 publications in leading international journals and conferences and has received several awards and patents. She is a member of several journal and conference program committees, including SPIE Medical Imaging, IEEE ISBI, and MICCAI. She serves as an Associate Editor for the IEEE Transactions on Medical Imaging (TMI) journal. In 2016 she was the lead co-editor of a special issue on deep learning in medical imaging in IEEE TMI. In 2017 she co-edited an Elsevier Academic Press book on deep learning for medical image analysis.
We’ll give an overview of Apple’s iPhone X TrueDepth camera system, its design and capabilities. We’ll also describe the algorithmic layers used in some of the features it enables, and show how it can be used by developers.
Eitan Hirsh leads a team focused on depth sensing research and development at Apple, helping to bring to life new features and technologies such as the TrueDepth camera in iPhone X. Eitan came to Apple in 2013 with more than 15 years in various technology development, innovation, and management roles at PrimeSense, modu, and the IDF. He holds an M.Sc. in Computer Science from Tel Aviv University (2007) and a B.A. in Computer Science from the Technion (2001).
Intel’s AI revolution spans a range of AI solutions that meet each customer’s unique requirements, from general-purpose compute to specialized acceleration.
Intel Israel - at the center of Intel’s AI revolution - has created a spearhead by building a strong AI portfolio spanning H/W, S/W, Apps/Services, and Research.
Come and learn about Intel’s AI strategy and see how it enables AI innovation across all markets.
AIPG Inference Lead Architect
Ofri Wechsler is an Intel Fellow and the lead architect of Intel’s AIPG global inference product line. Five years ago Ofri established the Computer Vision Group, which later morphed into AIPG (Artificial Intelligence Product Group). Prior to that he led Intel’s CPU architecture.
We consider the task of learning from subjectively labeled data. Labels of this type typically arise in tasks that judge beauty and other aesthetic traits of images. In this work, we are interested in understanding the aesthetic traits of an outfit worn by an individual, given its image. Due to the subjective nature of the data, the labels tend to be noisy. One way to reduce this noise is to have each example annotated numerous times by several different human subjects. However, this approach does not scale, because of the number of human annotations required. Therefore, for practical reasons, large datasets contain a varying number of annotations per example, with the majority of examples annotated by only a few human subjects. This introduces sampling uncertainty in the labels that varies from example to example. In this work, we provide a closed-form expression that models the label uncertainty induced by sub-sampling. We show that for fashion-related traits, the uncertainty model is directly linked to a neural network's ability to learn from noisy data. We further use our model to construct a custom neural network loss function that generalizes better when learning fashion-related traits from noisy labels.
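To make the idea concrete, here is a minimal sketch of how per-example sampling uncertainty can enter a loss function. It is not the closed-form expression or loss from the talk: it assumes binary annotations, uses the binomial variance of the annotator-averaged label as the uncertainty estimate, and down-weights each example's cross-entropy by that variance. The smoothing constant and the inverse-variance weighting are illustrative choices.

```python
import numpy as np

def label_uncertainty(p_hat, n):
    """Sampling variance of a soft label estimated from n binary annotations.

    For a trait rated by n annotators with empirical positive rate p_hat,
    the variance of the estimate is p(1 - p) / n (binomial). A small
    Laplace-style smoothing keeps the estimate away from 0 when all
    annotators happen to agree.
    """
    p = (p_hat * n + 1.0) / (n + 2.0)  # smoothed positive rate
    return p * (1.0 - p) / n

def weighted_bce(p_hat, q, n):
    """Cross-entropy against the soft label, down-weighted by uncertainty.

    p_hat: soft labels (fraction of positive annotations), shape (batch,)
    q:     model-predicted probabilities, shape (batch,)
    n:     number of annotators per example, shape (batch,)
    """
    eps = 1e-7
    q = np.clip(q, eps, 1.0 - eps)
    ce = -(p_hat * np.log(q) + (1.0 - p_hat) * np.log(1.0 - q))
    # Examples with few annotators (high label variance) get less weight.
    w = 1.0 / (1.0 + label_uncertainty(p_hat, n))
    return float(np.mean(w * ce))
```

Under this weighting, an example rated by 50 annotators contributes nearly its full cross-entropy, while an example rated by 2 annotators is discounted, so the network is pulled less strongly toward labels that may simply be sampling noise.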
Head of CVML Team
94 Yigal Alon St., Tel Aviv 6109202