Professor Roarke Horstmeyer

Biomedical Engineering Department at Duke University

May 22, 2019 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: Towards intelligent computational microscopes

Talk Abstract: Deep learning algorithms offer a powerful means to automatically analyze the content of biomedical images. However, many biological samples of interest are difficult to resolve with a standard optical microscope: they are too large to fit within the microscope's field-of-view, too thick, or moving too quickly. In this talk, I will discuss our recent work in addressing these challenges by using deep learning algorithms to design new experimental strategies for microscopic imaging. Specifically, we use deep neural networks to jointly optimize the physical parameters of our computational microscopes (their illumination settings, lens layouts, and data transfer pipelines, for example) for specific tasks. Examples include learning specific illumination patterns that can improve classification of the malaria parasite by up to 15%, and establishing fast methods to automatically track moving specimens across gigapixel-sized images.

Speaker's Biography: Roarke Horstmeyer is a new assistant professor within the Biomedical Engineering Department at Duke University. He develops microscopes, cameras and computer algorithms for a wide range of applications, from forming 3D reconstructions of organisms to detecting neurons deep within tissue. Most recently, Dr. Horstmeyer was a guest professor at the University of Erlangen in Germany and an Einstein International Postdoctoral Fellow at Charité Medical School in Berlin. Prior to his time in Germany, Dr. Horstmeyer earned a PhD from Caltech's EE department (2016), an MS from the MIT Media Lab (2011), and bachelor's degrees in physics and Japanese from Duke in 2006.


Professor Denis Kalkofen

Institute of Computer Graphics and Vision at Graz University of Technology, Austria.

June 5, 2019 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: Augmented Reality Handbooks

Talk Abstract: Handbooks are essential for understanding and using many artifacts found in our daily life. We use handbooks to understand how things work and how to maintain them. Most handbooks still exist on paper, relying on graphical illustrations and accompanying textual explanations to convey the relevant information to the reader. With the success of video sharing platforms, a large body of video tutorials has become available for nearly every aspect of life. Video tutorials can often expand printed handbooks with demonstrations of the actions required to solve certain tasks. However, interpreting printed manuals and video tutorials often requires a certain mental effort, since users have to match printed images or video frames with the physical object in their environment.
Augmented Reality (AR) has been demonstrated to be effective at presenting information traditionally provided in printed handbooks and video tutorials. However, creating interactive illustrative graphics for AR is costly and requires specially trained authors. In this talk, I will present research towards automating the authoring of AR handbooks by interactively retargeting conventional, two-dimensional image and video data into three-dimensional AR handbooks. In addition, I will present interaction, visualization and rendering techniques tailored for AR handbooks.

Speaker's Biography: Dr. Denis Kalkofen is an Assistant Professor at the Institute of Computer Graphics and Vision at Graz University of Technology, Austria. His research focuses on developing visualization, interaction and authoring techniques for Mixed Reality environments. He is especially interested in combining computer graphics and computer vision techniques to enable comprehensible and easily accessible Mixed Reality experiences. Denis received his Dipl.-Ing. from the University of Magdeburg, Germany and his Dr. techn. from Graz University of Technology, Austria. Before joining ICG, he worked at the Virtual Reality Laboratory at the University of Michigan and was a member of the Wearable Computing Laboratory at the University of South Australia.


David Lindell


May 8, 2019 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: Computational Imaging with Single-Photon Detectors

Talk Abstract: Active 3D imaging systems, such as LIDAR, are becoming increasingly prevalent for applications in autonomous vehicle navigation, remote sensing, human-computer interaction, and more. These imaging systems capture distance by directly measuring the time it takes for short pulses of light to travel to a point and return. Emerging sensor technology can detect single arriving photons and timestamp their arrival at picosecond timescales, enabling new and exciting imaging modalities. In this talk, I will discuss trillion-frame-per-second imaging, efficient depth imaging with sparse photon detections, and imaging objects hidden from direct line of sight.

Speaker's Biography: David is a PhD student in the Stanford Computational Imaging Lab. He received his bachelor's and master's degrees in EE from Brigham Young University (BYU), where he worked on satellite remote sensing of sea ice and soil moisture. His current research involves developing new computational algorithms for non-line-of-sight imaging, single-photon imaging, and 3D imaging with sensor fusion.


Kihwan Kim


May 1, 2019 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: 3D Computer Vision: Challenges and Beyond

Talk Abstract: 3D Computer Vision (3D Vision) techniques have been the key solutions to various scene perception problems such as depth estimation from images, camera/object pose estimation, localization and 3D reconstruction of a scene. These solutions are a major part of many AI applications, including AR/VR, autonomous driving and robotics. In this talk, I will first review several categories of 3D Vision problems and their challenges. Within the category of static scene perception, I will introduce several learning-based depth estimation methods such as PlaneRCNN and Neural RGBD, camera pose estimation methods including MapNet, and a few registration algorithms deployed in NVIDIA's products. I will then introduce more challenging real-world scenarios where scenes contain non-stationary rigid changes, non-rigid motions, or appearance that varies with reflectance and lighting, whose view-dependent properties can cause scene reconstruction to fail. I will discuss several solutions to these problems and conclude by summarizing the future directions for 3D Vision research being pursued by NVIDIA's learning and perception research (LPR) team.

Speaker's Biography: Kihwan Kim is a senior research scientist in the learning and perception research group at NVIDIA Research. He received his Ph.D. in Computer Science from the Georgia Institute of Technology in 2011 and his BS from Yonsei University in 2001. Prior to joining Georgia Tech, he spent five years as an R&D engineer at Samsung and also worked at Disney Research Pittsburgh as a visiting research associate. His research interests span computer vision, graphics, machine learning and multimedia. A common thread in his research is understanding scenes from images, and estimating the motion and structure of geometric information extracted from the scene. He led NVIDIA's SLAM project (NVSLAM) and currently leads various 3D Vision projects at NVIDIA.


Professor Harish Bhaskaran

University of Oxford, UK

May 15, 2019 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: Phase change materials as functional photonic elements in future computing and displays

Talk Abstract: Photonics has always been the technology of the future. "Light is faster" and "light can be multiplexed" have been good arguments for several decades, and the ushering in of optical computing has perpetually been just a few years away. However, over the last decade, with the advent of micro- and nanofabrication techniques and phenomenal advances in photonics, that era seems to have finally arrived. The ability to create integrated optical circuits on a chip is near. But (and yes, there's always a but) you need "functional" materials that can control and manipulate this flow of information. In electronics, doped silicon is one of the most versatile functional materials ever employed by humanity, and it can be used to efficiently route electrical signals. How do you do that optically? I hope to convince you that whatever route photonics takes, a class of materials known as phase change materials will play a key role in its commercialization. These materials can be addressed electrically, and while this can be used to control optical signals on photonic circuits, it can also be used to create displays and smart windows. In this talk, I hope to give a whistle-stop tour of the applications of these materials, with a view towards their near-term applications in displays and their longer-term potential ranging from integrated photonic memories to machine-learning hardware components.

Speaker's Biography: Harish Bhaskaran is Professor of Applied Nanomaterials at the University of Oxford, UK, and an entrepreneur, having co-founded Bodle Technologies. He enjoys working on challenging technologies that have a shot at disruptive commercialization, which often involves a combination of device design and new functional materials at the nanoscale. He enjoys cricket (a bat and ball game) and discussing philosophy over coffee. He also hates writing bios about himself in the third person, but then opportunistically uses this to claim to be very humble. He holds a PhD from the University of Maryland, College Park and a BE from the College of Engineering, Pune.


Radek Grzeszczuk


May 29, 2019 4:30 pm to 5:30 pm

Location: Packard 101

Talk Title: Computational Imaging at Light

Talk Abstract: Light develops computational imaging technologies that utilize heterogeneous constellations of small cameras to create sophisticated imaging effects. This enables the company to provide hardware solutions that are compact: they can easily fit into a cell phone or a similar small form factor. In this talk, I will review the recent progress of computational imaging research done at the company.

Speaker's Biography: Radek is the Senior Director of Computational Imaging at Light. He received his PhD ('98) in Computer Science from the University of Toronto. He moved to Silicon Valley in 1997, where he has worked as an individual contributor and managed teams of scientists and engineers in the areas of computer graphics, 3D modeling, augmented reality and visual search. Before joining Light, he worked at Intel Research Labs ('97-'06), Nokia Research Center ('06-'12), Microsoft ('12-'15), Uber ('15-'16), and Amazon's A9 ('16-'18).


Previous SCIEN Colloquia

To see a list of previous SCIEN colloquia, please click here.