ISVC Tutorials

Tutorials are intended to (1) provide a comprehensive review of the current state of the art in a specific topic aimed at researchers and practitioners who are knowledgeable, but not necessarily experts in the topic, (2) provide a hands-on introduction to one or more software tools or other resources of broad interest to the symposium participants, and (3) introduce new research problems, new application areas, or new or emerging technologies of relevance to visual computing.


ISVC’24 Tutorials

TBA

ISVC’23 Tutorials

 

T1: Explainable Deep Few-shot Learning on the Cloud and its Application in Medical Imaging Informatics

Summary

This tutorial aims to provide a professional forum for sharing the state of the art in few-shot learning research for medical image analysis, showing how the problem can be tackled with high-performance cloud infrastructure while also providing model explainability and interpretability. In addition, this hands-on tutorial teaches how to implement explainable deep few-shot learning on the cloud for two clinical use cases: (1) object localization and (2) image segmentation. Additional information can be found here.
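To make "few-shot" concrete, the sketch below shows nearest-prototype classification, the core idea behind prototypical networks; in a real deep few-shot pipeline the embeddings would come from a trained backbone. This toy version is illustrative only and is not the organizers' method.

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Assign the query to the class with the nearest prototype, where
    each prototype is the mean of that class's support embeddings
    (the core idea of prototypical networks)."""
    classes = np.unique(support_labels)
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in classes])
    distances = np.linalg.norm(prototypes - query, axis=1)
    return classes[distances.argmin()]

# Toy 2-way, 3-shot episode with 4-dimensional "embeddings".
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 1, (3, 4)), rng.normal(5, 1, (3, 4))])
labels = np.array([0, 0, 0, 1, 1, 1])
query = rng.normal(5, 1, 4)
print(prototype_classify(support, labels, query))  # expected: 1
```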

Organizers:

Ahmad P. Tafti, Ph.D., University of Pittsburgh, US (tafti.ahmad@pitt.edu)
Soheyla Amirian, Ph.D., University of Georgia, US (amirian@uga.edu)
Arvind Rao, Ph.D., University of Michigan, US (ukarvind@med.umich.edu)
Iman Zadeh, Ph.D., Oracle, US (iman.zadeh@oracle.com)
Bryan Barker, Ph.D., Oracle, US (bryan.barker@oracle.com)
Johannes Plate, MD, Ph.D., University of Pittsburgh, US (platefj2@upmc.edu)
Shandong Wu, Ph.D., University of Pittsburgh, US (wus3@upmc.edu)

 

T2: Ethics, Bias and Responsible AI: Challenges and Mitigation Strategies

Summary

This tutorial will discuss how deep learning methods can enhance visual knowledge discovery and image processing. It will then turn to privacy risks and ethical considerations, discussing cancellability and de-identification as two mechanisms for mitigating privacy concerns in the sharing and storage of visual data. It will focus on technical approaches to addressing bias and unfairness in AI, covering ways to identify, explain, mitigate, and communicate bias. It will conclude with a demo of advanced AI tools from Microsoft, Google, and IBM for assessing and improving fairness and mitigating bias.
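As a concrete example of the kind of metric such fairness tools report, the sketch below computes the demographic parity difference (the gap in positive-prediction rates between two groups) on toy data; it is illustrative only and not taken from the tutorial or from any specific toolkit.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups,
    one of the standard bias metrics reported by fairness toolkits."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy data (hypothetical): the model flags 60% of group 0 but 40% of group 1.
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(round(demographic_parity_difference(y_pred, group), 3))  # 0.2
```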

Organizer:

Marina L. Gavrilova, Department of Computer Science, University of Calgary, mgavrilo@ucalgary.ca

Marina L. Gavrilova is a Full Professor, an inductee of the Order of the University of Calgary, and head of the Biometric Technologies and SPARKS Laboratories in the Faculty of Science. Her publications include over 250 refereed articles, edited special issues, books, and book chapters in the areas of machine learning, information fusion, knowledge discovery, and cybersecurity. She serves as Founding Editor-in-Chief of the Transactions on Computational Science journal (Springer) and Editor-in-Chief of the International Journal of Digital Human (Inderscience). A globally renowned, award-winning researcher and educator, Dr. Gavrilova has given over 50 keynotes, invited lectures, and tutorials at major scientific gatherings worldwide, including Stanford University, Purdue University, Fordham University, Microsoft Research USA, Oxford University UK, Samsung Research South Korea, and Nanyang Technological University, Singapore. Dr. Gavrilova is a passionate advocate of equity, diversity, and inclusion in academia, industry, and society.

 

T3: Immersive ParaView: New immersive visualization capabilities

Summary

ParaView is a well-established tool in the scientific visualization community, equipped with many rendering techniques, selection controls, and data format readers. Many experts already rely on ParaView to accomplish a variety of data analysis tasks. ParaView's interface to virtual reality systems has recently expanded to cover consumer VR systems, mid-range systems, and larger CAVE-style installations. The primary focus of this tutorial is the immersive visualization features of ParaView's latest releases: how to set up and configure ParaView for virtual/extended reality, the basic immersive user interface, the shared-user collaborative interface, and the more advanced opportunities afforded by the Python connection to the VR capabilities.
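For a taste of that Python connection, here is a minimal pvpython-style sketch. The commented plugin name for the VR capabilities is version-dependent and given as an assumption to verify against your release.

```python
# Minimal sketch using ParaView's Python scripting layer (run with
# pvpython, or any Python that has the paraview package on its path).
from paraview.simple import *

# Any pipeline works the same way; a procedural source keeps this
# self-contained, but OpenDataFile("mydata.vtk") is the usual entry point.
source = Sphere(ThetaResolution=32, PhiResolution=32)
Show(source)
Render()

# The immersive features live in a plugin whose name varies by release
# ("OpenVR" in older versions, "XRInterface" in newer ones) -- treat the
# exact name as an assumption to check against your installation.
# LoadDistributedPlugin("XRInterface", remote=False, ns=globals())
```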

Organizers:

William Sherman, National Institute of Standards and Technology (NIST), william.sherman@nist.gov

William Sherman is a Computer Scientist at the National Institute of Standards and Technology (NIST) in the High-Performance Computing and Visualization Group. He is interested in all types of immersive technology, scientific visualization methodologies, and the merging of the two. Prior to joining NIST, William worked on visualizations, both immersive and non-immersive, for the Indiana University Advanced Visualization Lab. He established the Center for Advanced Visualization, Computation, and Modelling, which housed both a 4-sided and a 6-sided CAVE. At the National Center for Supercomputing Applications (NCSA), he led the technical efforts of the VR lab starting in 1993. In 1994 the NCSA VR lab constructed CAVE #2 with the assistance of the EVL team at the University of Illinois, Chicago. He has been working in VR for 30 years. William has also taught courses on virtual reality and scientific visualization to undergraduate and graduate students at the University of Illinois at Urbana-Champaign, the University of Nevada, Reno, and Indiana University. (The VR courses used the prevailing technology of the time, from CAVE systems to Google Cardboard to HTC Vive HMDs.) Sherman is also the co-author or editor of four books on virtual reality.

Simon Su, National Institute of Standards and Technology (NIST), simon.su@nist.gov

Simon Su (Ph.D., Houston, 2001) is a Computer Scientist in the High-Performance Computing Visualization Group at the National Institute of Standards and Technology (NIST). His research focuses on immersive visualization. He is responsible for research and development in data visualization and 3D interaction using advanced immersive and interactive technologies. Before joining NIST, he was a Computer Scientist at the CCDC Army Research Laboratory, working on immersive visualization and analysis of data generated by users of the Department of Defense Supercomputing Resource Center. He has been working in the VR field for 21 years.

ISVC’22 Tutorials

Visualizing Spatial Data on the Web Using RStudio, Leaflet, and Shiny

Summary

This tutorial will provide a hands-on introduction to visualizing spatial data with interactive maps that can be deployed as public web pages. We will use a combination of RStudio, the Shiny package, and the Leaflet open-source library to show how to combine data and maps to create public web pages. Attendees will gain an overview of RStudio, Leaflet, and Shiny applications. They will learn how to install the Leaflet and Shiny packages, create and customize different types of Leaflet maps including a choropleth, and develop a Shiny application deployable on the web.

Maps provide an intuitive interface for communicating elements that are related spatially and presented visually. Any data with a spatial component lends itself to presentation on a map. Interactive maps allow us to explore spatial data, build layers, identify patterns, drill down to reveal additional information, and inform data-driven decisions. This tutorial will introduce attendees to the seamless integration between RStudio, Shiny, and the Leaflet mapping library to enable the creation of spatially mapped data on the web, with minimal friction.

This tutorial is suitable for attendees who would like to learn more about mapping data and about using Shiny with RStudio to deploy maps to the web. Attendees should have some programming experience, though not necessarily in R.

Organizer:

Ann McNamara, Department of Visualization, Texas A&M University

Ann McNamara is the Associate Dean for Research in the College of Architecture and an Associate Professor in the Department of Visualization at Texas A&M University. She is the founding director of the VIVID Lab, an interdisciplinary lab devoted to the advancement of data visualization and information design. Her research focuses on advancing computer graphics and scientific visualization through novel approaches for optimizing an individual’s experience when creating, viewing, and interacting with virtual and augmented spaces.

ISVC’20 Tutorials

Evolutionary Computer Vision

Summary

This tutorial will explain the theory and application of evolutionary computer vision, a new paradigm in which challenging vision problems are approached using the techniques of evolutionary computing. The objectives of the tutorial are to introduce the subject under the umbrella of goal-oriented vision, explain the relationship between artificial evolution and mathematical optimization, and introduce the idea of symbolic learning through genetic programming for visual computing tasks. By merging evolutionary computation with mathematical optimization, this methodology defines fitness functions and problem representations that enable the automatic creation of emergent visual behaviors.

In the first part of the tutorial, we will survey the literature in a concise form, define the relevant terminology, and offer historical and philosophical motivations for the key research problems in the field. For researchers from the computer vision community, we will offer a simple introduction to the evolutionary computing paradigm. The second part of the tutorial will focus on implementing evolutionary algorithms that solve given problems using working programs in the major fields of low-, intermediate- and high-level computer vision.
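As a minimal illustration of artificial evolution as optimization (a toy sketch, not the tutorial's material), the following evolves a single segmentation threshold under a pixel-agreement fitness function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "vision" problem (hypothetical): recover the threshold that best
# reproduces a known target segmentation of a synthetic grayscale image.
image = rng.random((64, 64))
target_mask = image > 0.6  # ground truth produced by the "true" threshold

def fitness(threshold):
    """Pixel-wise agreement between a candidate and the target segmentation."""
    return np.mean((image > threshold) == target_mask)

# A simple (mu + lambda) evolutionary loop over a population of thresholds.
population = rng.random(20)
for generation in range(50):
    scores = np.array([fitness(t) for t in population])
    parents = population[np.argsort(scores)[-5:]]             # select the 5 fittest
    population = np.clip(np.repeat(parents, 4)                # clone each parent 4x
                         + rng.normal(0.0, 0.05, 20), 0, 1)   # Gaussian mutation

best = max(population, key=fitness)
print(f"evolved threshold ~ {best:.3f} (true value 0.6)")
```

Real evolutionary computer vision evolves far richer representations (e.g., genetic programs for feature extraction), but the select-mutate-evaluate loop is the same.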

This tutorial will be of value to researchers, engineers, and students in the fields of computer vision, evolutionary computing, robotics, biologically inspired visual computing, machine learning, and artificial intelligence.

Organizer:

Gustavo Olague, CICESE Research Center, Mexico, email: gustavo.olague@me.com

Gustavo Olague received the B.S. and M.S. degrees in industrial and electronics engineering from the Instituto Tecnológico de Chihuahua (ITCH), in 1992 and 1995, respectively, and the Ph.D. degree in computer vision, graphics, and robotics from the Institut Polytechnique de Grenoble (INPG) and the Institut National de Recherche en Informatique et Automatique (INRIA) in France. He is currently a Professor with the Department of Computer Science, Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), México, and also the Director of the EvoVisión Research Team. He is also an Adjunct Professor of engineering with the Universidad Autónoma de Chihuahua (UACH).

He has authored over 100 conference papers and journal articles and has co-edited special issues of Pattern Recognition Letters, Evolutionary Computation (MIT Press), and Applied Optics (OSA). He authored the book Evolutionary Computer Vision (Springer) in the Natural Computing Series. His main research interests are evolutionary computing and computer vision. He is a member of the editorial teams of IEEE Access and Neural Computing and Applications (Springer), and served as Co-Chair of the Real-World Applications track at the main international evolutionary computing conference, GECCO (the ACM SIGEVO Genetic and Evolutionary Computation Conference), in 2012 and 2013.

He has received numerous distinctions, among them the Talbert Abrams Award (first honorable mention, 2003), presented by the American Society for Photogrammetry and Remote Sensing (ASPRS) for authorship and recording of current and historical engineering and scientific developments in photogrammetry; Best Paper Awards at major conferences such as GECCO, EvoIASP (European Workshop on Evolutionary Computation in Image Analysis, Signal Processing, and Pattern Recognition), and EvoHOT (European Workshop on Evolutionary Hardware Optimization); and, twice, the Bronze Medal at the Humies (the GECCO award for human-competitive results produced by genetic and evolutionary computation).

ISVC’19 Tutorials

T1: Analysis and visualization of 3D data in Python

Summary

This hands-on tutorial teaches how to analyze three-dimensional stacked/volumetric images at scale in Python, primarily using scikit-image and scikit-learn. The material is formatted as a sequence of interactive Jupyter notebooks designed to investigate aspects of analysis such as counting, object relationships, and shape measurements. Real-world examples are drawn from domains such as materials science and biomedicine, and all data and code are freely available. For each topic, we show how to implement the solution and then provide several hands-on exercises so that attendees can become familiar with the techniques while applying the new concepts to the provided datasets.
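In the spirit of the notebooks (though not taken from them), a minimal counting-and-measurement sketch with scikit-image might look like this; the synthetic volume stands in for a real image stack:

```python
import numpy as np
from skimage import filters, measure

# Synthetic 3D volume standing in for a stacked image: a few bright blobs.
rng = np.random.default_rng(0)
volume = rng.normal(0.1, 0.05, (64, 64, 64))
z, y, x = np.ogrid[:64, :64, :64]
for cz, cy, cx in [(16, 16, 16), (40, 40, 40), (50, 12, 30)]:
    volume[((z - cz)**2 + (y - cy)**2 + (x - cx)**2) < 36] = 1.0

# Threshold, label connected components, and measure each object --
# the counting / shape-measurement workflow the notebooks walk through.
binary = volume > filters.threshold_otsu(volume)
labels = measure.label(binary)
print(f"{labels.max()} objects found")
for p in measure.regionprops(labels):
    print(f"label {p.label}: volume={p.area} voxels, centroid={p.centroid}")
```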

Organizers:

Daniela Ushizima, Berkeley Institute for Data Science, UC Berkeley, USA, dani.lbnl@berkeley.edu

Alexandre de Siqueira, Berkeley Institute for Data Science, UC Berkeley, USA, alex.desiqueira@berkeley.edu

Stéfan van der Walt, Berkeley Institute for Data Science, UC Berkeley, USA, stefanv@berkeley.edu

 

T2: Computer Vision for Underwater Environmental Monitoring

Summary

Monitoring marine ecosystems is of critical importance for gaining a better understanding of their complexity and of their delicate balancing processes, which are significantly affected by climate change and other anthropogenic influences.

Recently, oceanographic data acquisition has been greatly facilitated by the establishment of seafloor cabled observatories, whose co-located sensors enable interdisciplinary studies and real-time observations. Prior to the advent of cabled observatories, the majority of deep-sea video data was acquired by ROVs (remotely operated vehicles) and was analyzed and annotated manually. In contrast, seafloor cabled observatories such as those operated by Ocean Networks Canada (http://www.oceannetworks.ca) offer a 24/7 presence, resulting in unprecedented volumes of visual data. Scheduled recordings of underwater imagery are gathered with Internet-connected fixed and PTZ cameras, which observe a variety of biological processes.

The analysis of underwater imagery imposes a series of unique interdisciplinary challenges, which need to be tackled by computer vision researchers in collaboration with biologists and ocean scientists. This tutorial will present the state of the art in computer vision and image processing approaches for

  • underwater image enhancement (see the sketch after this list)
  • underwater scene understanding
  • detection and monitoring of marine life
  • fish behaviour analysis
  • automated analysis for fisheries research
  • video summarization
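
For the first of these topics, image enhancement, the following is a minimal illustrative sketch of one classical pipeline (per-channel percentile contrast stretching followed by CLAHE, via scikit-image). It stands in for, and is far simpler than, the state-of-the-art methods the tutorial surveys.

```python
import numpy as np
from skimage import exposure

def enhance(frame):
    """frame: float RGB image in [0, 1], e.g. one decoded video frame.
    Stretch each channel between its 2nd and 98th percentiles (to
    counter the color cast from wavelength-dependent attenuation),
    then apply CLAHE for local contrast."""
    out = np.empty_like(frame)
    for c in range(3):
        p2, p98 = np.percentile(frame[..., c], (2, 98))
        stretched = exposure.rescale_intensity(frame[..., c],
                                               in_range=(p2, p98))
        out[..., c] = exposure.equalize_adapthist(stretched, clip_limit=0.01)
    return out
```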

Organizers:

Alexandra Branzan Albu, Electrical and Computer Engineering, University of Victoria, BC, Canada, aalbu@uvic.ca

Maia Hoeberechts, Ocean Networks Canada, Canada, maiah@uvic.ca

 

T3: Visual Object Tracking Using Deep Learning

Summary

Visual object tracking has become a significant research area, with a huge number of tracking approaches proposed each year. The objective of this tutorial is to introduce and review recent progress in object tracking, and to discuss, motivate, and encourage future research on deep-learning-based trackers. The tutorial gives a broad overview of object tracking techniques, focusing on deep-learning-based trackers, their architectures, and how various deep network architectures can be applied to the tracking problem. The first part will cover classical object tracking methods, lay out a taxonomy of deep-learning-based trackers, and explain each category conceptually and mathematically. The second part will explain how to design deep-learning-based trackers and how to pre-process the input data.
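As a conceptual anchor for that taxonomy, the sketch below implements the matching operation at the heart of many Siamese-style trackers: dense normalized cross-correlation of a target template over a search region. Real deep trackers apply this correlation to learned feature maps rather than raw pixels; this pixel-domain version is illustrative only.

```python
import numpy as np

def correlate(search, template):
    """Dense normalized cross-correlation of a template patch over a
    search region. Siamese-style deep trackers compute essentially this,
    but between learned feature maps of the template and search crops."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    scores = np.empty((search.shape[0] - th + 1, search.shape[1] - tw + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            window = search[i:i + th, j:j + tw]
            w = (window - window.mean()) / (window.std() + 1e-8)
            scores[i, j] = np.mean(w * t)
    return scores

# The predicted target location is the peak of the response map:
# scores = correlate(search_region, template_patch)
# dy, dx = np.unravel_index(scores.argmax(), scores.shape)
```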

Organizers:

Mohamed H. Abdelpakey, Memorial University of Newfoundland, St. John’s, NL, Canada, mha241@mun.ca

Mohamed S. Shehata, Memorial University of Newfoundland, St. John’s, NL, Canada, mshehata@mun.ca