Dr. Dan Levi is currently a technical fellow at General Motors R&D Israel, leading the computer vision and perception research group. He received a BSc degree with honours in mathematics and computer science from Tel-Aviv University in 2000, and MSc and PhD degrees in applied mathematics and computer science from the Weizmann Institute in 2004 and 2009, respectively. During his studies he researched human and computer vision under the supervision of Prof. Shimon Ullman. Since 2007, he has been conducting industrial computer vision research and development at several companies, including General Motors and Elbit Systems, Israel.
The Pop in Your Job – What drives you? Why do you love your job?
I am excited to tackle, on a day-to-day basis, the challenging research problems involved in developing perception for autonomous driving. The most intriguing question guiding my research is: how do we, as humans, perceive the world around us?
Case Study
Monday, September 29
09:45 am - 10:15 am
Live in Berlin
The task of open-vocabulary object-centric image retrieval involves the retrieval of images containing a specified object of interest, delineated by an open-set text query. As working on large image datasets becomes standard, solving this task efficiently has gained significant practical importance. Applications include targeted performance analysis of retrieved images using ad-hoc queries and hard example mining during training. Recent advancements in contrastive-based open vocabulary systems have yielded remarkable breakthroughs, facilitating large-scale open vocabulary image retrieval. However, these approaches use a single global embedding per image, thereby constraining the system’s ability to retrieve images containing relatively small object instances. Alternatively, incorporating local embeddings from detection pipelines faces scalability challenges, making it unsuitable for retrieval from large databases.
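To make the limitation above concrete, here is a minimal sketch of the single-global-embedding retrieval baseline, using the open-source CLIP package (github.com/openai/CLIP). The model variant, example ranking code, and top-k retrieval are illustrative assumptions, not the specific systems discussed in this session.

```python
# Baseline: global-embedding open-vocabulary image retrieval with CLIP.
# One vector per image means small object instances barely influence
# the embedding -- the limitation discussed above.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_images(paths):
    """Compute one L2-normalized global embedding per image."""
    batch = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
    with torch.no_grad():
        feats = model.encode_image(batch)
    return feats / feats.norm(dim=-1, keepdim=True)

def retrieve(query, image_feats, paths, k=5):
    """Rank the whole database by cosine similarity to an open-set text query."""
    with torch.no_grad():
        q = model.encode_text(clip.tokenize([query]).to(device))
    q = q / q.norm(dim=-1, keepdim=True)
    scores = (image_feats @ q.T).squeeze(1)
    top = scores.topk(min(k, len(paths))).indices.tolist()
    return [paths[i] for i in top]
```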
In this work, we present a simple yet effective approach to object-centric open-vocabulary image retrieval. Our approach aggregates dense embeddings extracted from CLIP into a compact representation, essentially combining the scalability of image-retrieval pipelines with the object identification capabilities of dense detection methods. We show the effectiveness of our scheme on the task, achieving significantly better results than global feature approaches on three datasets and increasing accuracy by up to 15 mAP points. We further integrate our scheme into a large-scale retrieval framework and demonstrate our method's advantages in terms of scalability and interpretability.
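As a hedged sketch of the aggregation idea described above: assume patch-level (dense) embeddings have already been extracted from CLIP's vision transformer, then compress them into a few prototype vectors per image. The k-means clustering and max-over-prototypes query scoring used here are illustrative stand-ins chosen for brevity, not necessarily the exact aggregation scheme presented in the session.

```python
# Sketch: aggregate dense CLIP embeddings into a compact per-image
# representation. Assumes dense_feats were extracted upstream; k-means
# and max-over-prototypes scoring are illustrative assumptions, not
# necessarily the speaker's exact method.
import numpy as np
from sklearn.cluster import KMeans

def aggregate_dense_embeddings(dense_feats: np.ndarray, k: int = 8) -> np.ndarray:
    """dense_feats: (num_patches, dim) L2-normalized patch embeddings from
    CLIP's vision encoder. Returns (k, dim) prototype vectors: compact
    enough for large-scale retrieval, yet localized enough to preserve
    small object instances."""
    k = min(k, len(dense_feats))
    centers = KMeans(n_clusters=k, n_init=10).fit(dense_feats).cluster_centers_
    return centers / np.linalg.norm(centers, axis=1, keepdims=True)

def object_centric_score(query_emb: np.ndarray, prototypes: np.ndarray) -> float:
    """Score an image by its best-matching prototype: the image matches if
    ANY region matches the query, so small objects are not washed out by
    the global background, and the winning prototype points to the region
    responsible for the match (aiding interpretability)."""
    return float((prototypes @ query_emb).max())
```

Because each image contributes only a handful of vectors, the database can still be indexed with standard approximate nearest-neighbour tooling, which is consistent with the scalability claim above.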
In this session, you will learn more about: