Dr. Ben Backus

Vivid Vision

June 6, 2018 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: Mobile VR for vision testing and treatment

Talk Abstract: Consumer-level HMDs are adequate for many medical applications. Vivid Vision (VV) takes advantage of their low cost, light weight, and large VR gaming code base to make vision tests and treatments. The company's software is built using the Unity engine, allowing it to run on many hardware platforms. New headsets become available every six months or less, which creates interesting challenges in the medical device space. VV's flagship product is the commercially available Vivid Vision System, used by more than 120 clinics to test and treat binocular dysfunctions such as convergence difficulties, amblyopia, strabismus, and stereo blindness. VV has recently developed a new, VR-based visual field analyzer.

Speaker's Biography: Ben Backus was Empire Innovation Associate Professor at the Graduate Center of the SUNY College of Optometry in Manhattan until October 2017, when he gave up tenure and moved to San Francisco to be Chief Science Officer at Vivid Vision, Inc. He still teaches and leads NIH-funded research in New York. He has training in mathematics (BA, Swarthmore), human vision (PhD, UC Berkeley), and visual neuroscience (postdoc, Stanford). He was a math teacher in the Oakland Public Schools and a professor of Psychology at the University of Pennsylvania. He makes a good Meyer limoncello.


Dr. Boyd Fowler


May 9, 2018 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: Advances in automotive image sensors

Talk Abstract: In this talk, I will present recent advances in 2D and 3D image sensors for automotive applications such as rear-view cameras, surround-view cameras, ADAS cameras, and in-cabin driver-monitoring cameras. This includes developments in high-dynamic-range image capture, LED flicker mitigation, high-frame-rate capture, global shutter, near-infrared sensitivity, and range imaging. I will also describe sensor developments for short-range and long-range LIDAR systems.

Speaker's Biography: Boyd Fowler joined OmniVision in December 2015 and is its CTO. Prior to joining OmniVision, he was a founder and VP of Engineering at Pixel Devices, where he focused on developing high-performance CMOS image sensors. After Pixel Devices was acquired by Agilent Technologies, Dr. Fowler was responsible for advanced development of their commercial CMOS image sensor products. In 2005, Dr. Fowler joined Fairchild Imaging as CTO and VP of Technology, where he developed sCMOS image sensors for high-performance scientific applications. After Fairchild Imaging was acquired by BAE Systems, Dr. Fowler was appointed technology director of the CCD/CMOS image sensor business. He has authored numerous technical papers, book chapters, and patents. Dr. Fowler received his M.S. and Ph.D. degrees in Electrical Engineering from Stanford University in 1990 and 1995, respectively.


Dr. Seishi Takamura


April 25, 2018 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: Video Coding before and beyond HEVC

Talk Abstract: We enjoy video content in many situations. Although video is already compressed to between 1/10 and 1/1000 of its original size, it has been reported that video traffic over the internet is growing 31% per year and will account for 82% of all internet traffic by 2020. This is why better compression technology is in strong demand. ITU-T and ISO/IEC jointly developed the latest video coding standard, High Efficiency Video Coding (HEVC), in 2013, and are about to start work on the next-generation standard. Corresponding proposals will be evaluated at the April 2018 meeting in San Diego, just a week before this talk.

In this talk, we will first review the advances in video coding technology over the last several decades, and then present the latest topics, including a report on the San Diego meeting and new approaches such as deep learning techniques.

Speaker's Biography: Seishi Takamura received his B.E., M.E., and Ph.D. from the Department of Electronic Engineering, Faculty of Engineering, the University of Tokyo in 1991, 1993, and 1996, respectively. He joined NTT Corporation in 1996 and was appointed a Distinguished Technical Member in 2009. From 2005 to 2006, he was a visiting scientist at Stanford University, California, USA. Currently, he is a Senior Distinguished Engineer of NTT Media Intelligence Laboratories. His current research interests include efficient video coding and ultrahigh-quality video coding.

He has fulfilled various duties in the research and academic community in current and prior roles, including Associate Editor of IEEE Trans. CSVT (2006-2014), Executive Committee Member of the IEEE Tokyo Section and the IEEE Japan Council, Chair of the Institute of Electronics, Information and Communication Engineers (IEICE) Image Engineering SIG, and member of the Board of Directors of the Institute of Image Information and Television Engineers (ITE). He has also served as Japan National Body Chair and Japan Head of Delegates of ISO/IEC JTC 1/SC 29, and as an International Steering Committee Member of the Picture Coding Symposium.

He has received 41 academic awards including the ITE Niwa-Takayanagi Best Paper Award in 2002, the Information Processing Society of Japan (IPSJ) Nagao Special Researcher Award in 2006, PCSJ (Picture Coding Symposium of Japan) Frontier/Best Paper Awards in 2004, 2008 and 2015, the ITE Fujio Frontier Award in 2014, and TAF (Telecommunications Advancement Foundation) Telecom System Technology Awards in 2004, 2008 and 2015 (with highest honor). Dr. Takamura is a senior member of IEEE, IEICE, and IPSJ and a member of APSIPA, SID, ITE and MENSA.


Professor Kyros Kutulakos

University of Toronto

April 18, 2018 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: Transport-Aware Cameras

Talk Abstract: Conventional cameras record all light falling onto their sensor regardless of its source or its 3D path to the camera. In this talk I will present an emerging family of coded-exposure video cameras that can be programmed to record just a fraction of the light coming from an artificial source---be it a common street lamp or a programmable projector---based on the light path's geometry or timing. Live video from these cameras offers a very unconventional view of our everyday world, in which refraction and scattering can be selectively blocked or enhanced, visual structures too subtle to notice with the naked eye can become apparent, and the flicker of electric lights can be turned into a powerful cue for analyzing the electrical grid from room to city scale.

I will discuss the unique optical properties and power efficiency of these "transport-aware cameras" through three case studies: the ACam for analyzing the electrical grid, EpiScan3D for robust structured-light 3D imaging, and EpiToF for robust time-of-flight imaging. I will also discuss our initial progress toward designing a computational CMOS sensor for coded two-bucket imaging---a novel capability that promises much more flexible and powerful transport-aware cameras compared to existing off-the-shelf solutions.

Speaker's Biography: Kyros Kutulakos is a Professor of Computer Science at the University of Toronto. He received his PhD degree from the University of Wisconsin-Madison in 1994 and his BS degree from the University of Crete in 1988, both in Computer Science. In addition to the University of Toronto, he has held appointments at the University of Rochester (1995-2001) and Microsoft Research Asia (2004-05 and 2011-12). He is the recipient of an Alfred P. Sloan Fellowship, an Ontario Premier's Research Excellence Award, a Marr Prize in 1999, a Marr Prize Honorable Mention in 2005, and four other paper awards (CVPR 1994, ECCV 2006, CVPR 2014, CVPR 2017). He also served as Program Co-Chair of CVPR 2003, ICCP 2010 and ICCV 2013.


Thomas Burnett

FoVI3D

April 11, 2018 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: Light-field Display Architecture and the Heterogeneous Display Ecosystem

Talk Abstract: Human binocular vision and acuity, and the accompanying 3D retinal processing of the human eye and brain, are specifically suited to promote situational awareness and understanding in the natural 3D world. The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of the scene and, as a result, reduces the cognitive load that accompanies analysis of and collaboration on complex tasks.

A light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows for perspective correct visualization within the display’s projection volume. Binocular disparity, occlusion, specular highlights and gradient shading, and other expected depth cues are correct from the viewer’s perspective as in the natural real-world light-field.

Light-field displays are no longer a science fiction concept and a few companies are producing impressive light-field display prototypes. This presentation will review:
· The application agnostic light-field display architecture being developed at FoVI3D.
· General light-field display properties and characteristics such as field of view, directional resolution, and their effect on the 3D aerial image.
· The computation challenge for generating high-fidelity light-fields.
· A display agnostic ecosystem.

Demo after the talk: The FoVI3D Light-field Display Developer Kit (LfD DK2) is a prototype wide field-of-view, full-parallax, monochrome light-field display capable of projecting ~100,000,000 unique rays to fill a 9 cm x 9 cm x 9 cm projection volume. The particulars of the light-field computation, photonics subsystem, and hogel optics will be discussed during the presentation.

Speaker's Biography: Thomas graduated from Texas A&M University in 1989 with a bachelor’s degree in Computer Science. He has spent 25+ years developing, architecting, and managing computer software and hardware projects including processor logic synthesis and simulation, 2D image processing pipelines, 2D/3D and light-field rendering solutions, 3D physics engines and 2D/3D games.

Thomas has been a developer/manager with multiple visualization start-up companies in and around Austin's Silicon Hills. At Applied Science Fiction (ASF) he co-developed image processing libraries and a processing pipeline to render images from exposed yet undeveloped 35mm film. As the software lead at Zebra Imaging, Thomas was a key contributor in the development of static light-field topographic maps for use by the Department of Defense in Iraq and Afghanistan. He was the computation architect for the DARPA Urban Photonic Sandtable Display (UPSD) program which produced several wide-area light-field display prototypes for human factors testing and research.

More recently, Thomas launched a new light-field display development program at FoVI3D, where he serves as President and CTO. FoVI3D is developing a next-generation light-field display architecture and display prototype to further socialize the cognitive benefits of spatially accurate 3D aerial imagery.


Dr. Anna-Karin Gustavsson

Stanford University

May 2, 2018 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: 3D single-molecule super-resolution microscopy using a tilted light sheet

Talk Abstract: To obtain a complete picture of subcellular structures, cells must be imaged with high resolution in all three dimensions (3D). In this talk, I will present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super-localization of single molecules as well as 3D super-resolution imaging in thick cells. Here the axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The result is simple and flexible 3D super-resolution imaging with localization precision of tens of nm throughout thick mammalian cells. We validated TILT3D for 3D super-resolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSFs for fiducial bead tracking and live axial drift correction. We believe that TILT3D will become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.

Speaker's Biography: Dr. Anna-Karin Gustavsson is a postdoctoral fellow in the Moerner Lab at the Department of Chemistry at Stanford University, and she also holds a postdoctoral fellowship from the Karolinska Institute in Stockholm, Sweden. Her research is focused on the development and application of 3D single-molecule super-resolution microscopy for cell imaging, and includes the implementation of light sheet illumination for optical sectioning. She has a background in physics and received her PhD in Physics in 2015 from the University of Gothenburg, Sweden. Her PhD project was focused on studying dynamic responses in single cells by combining and optimizing techniques such as fluorescence microscopy, optical tweezers, and microfluidics. Dr. Gustavsson has received several awards, most notably the FEBS Journal Richard Perham Prize for Young Scientists in 2012 and the PicoQuant Young Investigator Award in 2018.