Chun-Hsiao (Daniel) Yeh
email: daniel_yeh_at_berkeley.edu

| CV | Google Scholar | GitHub | LinkedIn |

I am a third-year Ph.D. student at the University of California, Berkeley, co-advised by Professor Yi Ma (EECS / HKU) and Professor Meng C. Lin (Vision Science). I received my M.Sc. from National Taiwan University (NTU).

Throughout my time at Berkeley, I have collaborated closely with Dr. Yubei Chen of Meta AI Research / UC Davis. I also worked closely with Professor Stella Yu during the first two years of my Ph.D. Prior to joining Berkeley, I had the privilege of working with Dr. Tyng-Luh Liu at IIS, Academia Sinica, and I spent half a year as a Visiting Researcher at UC Berkeley / ICSI from 2018 to 2019.

In 2022, I was a Research Intern at Adobe Inc., where I had the opportunity to work with Simon Jenni, Fabian Caba, Bryan Russell, and Josef Sivic.

I am passionate about building universal models that integrate information from multiple modalities, with a particular focus on aligning vision with language. I am also interested in self-supervised representation learning and understanding, with an emphasis on applications to image and video tasks.


UC Berkeley
Ph.D. Student
Sept. 2021 - Present


Adobe Inc.
Research Intern
May 2024 - Present
May 2022 - Nov. 2022


IIS, Academia Sinica
Research Assistant
Apr. 2020 - Aug. 2021


UC Berkeley / ICSI
Visiting Researcher
Sept. 2018 - Mar. 2019


NTU
M.Sc. Student
Sept. 2015 - Mar. 2019

  News
  • [06/2024] One paper, MDPipe, is accepted to MICCAI 2024 @ Morocco!
  • [05/2024] Started my summer research internship at Adobe Research @ San Jose, CA!
  • [06/2023] One paper is accepted to the CVPR 2023 MultiEarth Workshop!
  • [02/2023] One paper is accepted to CVPR 2023! Congratulations to all the Adobe co-authors!
  • [12/2022] Passed the qualifying exam and became a Ph.D. candidate @ UC Berkeley!
  • [11/2022] Finished my first internship at Adobe Inc.!
  • [07/2022] One paper accepted to ECCV 2022! Can't wait to go to Israel!
  • [05/2022] Started my internship at Adobe Research, working with Simon Jenni, Fabian Caba, Bryan Russell, and Josef Sivic.
  • [01/2022] One paper accepted to ICASSP 2022.
  • [08/2021] Joined UC Berkeley as a Ph.D. student!
  Publications

Insight: A Multi-Modal Diagnostic Pipeline using LLMs for Ocular Surface Disease Diagnosis
Chun-Hsiao Yeh, Jiayun Wang, Andrew D. Graham, Andrea J. Liu, Bo Tan, Yubei Chen, Yi Ma, and Meng C. Lin
27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2024.

| Project Page | Abstract | Bibtex | Preprint | PDF | Poster | šŸ¤— Huggingface Model | Code |

Accurate diagnosis of ocular surface diseases is critical in optometry and ophthalmology, which hinge on integrating clinical data sources (e.g., meibography imaging and clinical metadata). Traditional human assessments lack precision in quantifying clinical observations, while current machine-based methods often treat diagnoses as multi-class classification problems, limiting the diagnoses to a predefined closed set of curated answers without reasoning about the clinical relevance of each variable to the diagnosis. To tackle these challenges, we introduce an innovative multi-modal diagnostic pipeline (MDPipe) that employs large language models (LLMs) for ocular surface disease diagnosis. We first employ a visual translator to interpret meibography images by converting them into quantifiable morphology data, facilitating their integration with clinical metadata and enabling the communication of nuanced medical insight to LLMs. To further advance this communication, we introduce an LLM-based summarizer to contextualize the insight from the combined morphology and clinical metadata and generate clinical report summaries. Finally, we refine the LLMs' reasoning ability with domain-specific insight from real-life clinician diagnoses. Our evaluation across diverse ocular surface disease diagnosis benchmarks demonstrates that MDPipe outperforms existing standards, including GPT-4, and provides clinically sound rationales for diagnoses.
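
The pipeline described above has three stages: a visual translator that converts meibography images into quantifiable morphology data, an LLM-based summarizer that fuses that data with clinical metadata into a report, and a diagnosis step by an LLM refined on clinician rationales. The Python sketch below only illustrates that flow; every function name, field, and prompt is a hypothetical placeholder, not the released MDPipe code.

# Minimal sketch of the three-stage flow described in the abstract.
# All names, fields, and prompts are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Morphology:                 # output of the "visual translator"
    gland_density: float          # hypothetical quantified features
    gland_tortuosity: float

def visual_translator(image_path: str) -> Morphology:
    # In the paper this is a learned model that turns a meibography image
    # into quantifiable morphology data; stubbed here with fixed values.
    return Morphology(gland_density=0.62, gland_tortuosity=0.31)

def summarize(morph: Morphology, metadata: dict, llm) -> str:
    # LLM-based summarizer: contextualizes morphology + clinical metadata
    # into a clinical report summary.
    prompt = (f"Morphology: density={morph.gland_density}, "
              f"tortuosity={morph.gland_tortuosity}. Metadata: {metadata}. "
              "Write a brief clinical summary.")
    return llm(prompt)

def diagnose(report: str, llm) -> str:
    # Diagnosis by an LLM whose reasoning is refined on real clinician
    # diagnoses (the fine-tuning itself is not shown).
    return llm(f"Given this report, provide a diagnosis with rationale:\n{report}")

if __name__ == "__main__":
    echo_llm = lambda prompt: f"[LLM output for: {prompt[:60]}...]"   # stand-in LLM
    report = summarize(visual_translator("meibography.png"),
                       {"age": 54, "tear_breakup_time_s": 4.2}, echo_llm)
    print(diagnose(report, echo_llm))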

      @inproceedings{yeh2024insight,
        title={Insight: A Multi-modal Diagnostic Pipeline Using LLMs for Ocular Surface Disease Diagnosis},
        author={Yeh, Chun-Hsiao and Wang, Jiayun and Graham, Andrew D and Liu, Andrea J and Tan, Bo and Chen, Yubei and Ma, Yi and Lin, Meng C},
        booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
        pages={711--721},
        year={2024},
        organization={Springer}
      }

        

Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition
Chun-Hsiao Yeh*, Ta-Ying Cheng*, He-Yen Hsieh*, Chuan-En Lin, Yi Ma, Andrew Markham, Niki Trigoni, H.T. Kung, and Yubei Chen
Tech Report, 2024.

| Project Page | Abstract | Bibtex | Preprint | Code |

Recent text-to-image diffusion models are able to learn and synthesize images containing novel, personalized concepts (e.g., their own pets or specific items) with just a few examples for training. This paper tackles two interconnected issues within this realm of personalizing text-to-image diffusion models. First, current personalization techniques fail to reliably extend to multiple concepts --- we hypothesize this to be due to the mismatch between complex scenes and simple text descriptions in the pre-training dataset (e.g., LAION). Second, given an image containing multiple personalized concepts, there lacks a holistic metric that evaluates performance on not just the degree of resemblance of personalized concepts, but also whether all concepts are present in the image and whether the image accurately reflects the overall text description. To address these issues, we introduce Gen4Gen, a semi-automated dataset creation pipeline utilizing generative models to combine personalized concepts into complex compositions along with text-descriptions. Using this, we create a dataset called MyCanvas, that can be used to benchmark the task of multi-concept personalization. In addition, we design a comprehensive metric comprising two scores (CP-CLIP and TI-CLIP) for better quantifying the performance of multi-concept, personalized text-to-image diffusion methods. We provide a simple baseline built on top of Custom Diffusion with empirical prompting strategies for future researchers to evaluate on MyCanvas. We show that by improving data quality and prompting strategies, we can significantly increase multi-concept personalized image generation quality, without requiring any modifications to model architecture or training algorithms.
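
The exact definitions of CP-CLIP and TI-CLIP are given in the paper; the sketch below only illustrates the underlying idea of scoring (a) whether each personalized concept is detectable in the generated image and (b) whether the image matches the overall prompt, using CLIP-style embeddings. The encoders here are random stand-ins for a real CLIP model, so the printed numbers are meaningless.

# Illustrative only: generic CLIP-style scoring for multi-concept
# personalization; embed_image/embed_text stand in for a real CLIP encoder.
import numpy as np

def embed_image(image) -> np.ndarray:      # placeholder image encoder
    return np.random.default_rng(0).standard_normal(512)

def embed_text(text: str) -> np.ndarray:   # placeholder text encoder
    return np.random.default_rng(abs(hash(text)) % (2**32)).standard_normal(512)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def concept_presence_score(image, concept_names):
    # CP-CLIP-like idea: every personalized concept should be detectable
    # in the image, so average the per-concept similarities.
    img = embed_image(image)
    return float(np.mean([cosine(img, embed_text(c)) for c in concept_names]))

def text_image_score(image, full_prompt):
    # TI-CLIP-like idea: the image should also match the overall description.
    return cosine(embed_image(image), embed_text(full_prompt))

print(concept_presence_score(None, ["my dog", "my mug"]),
      text_image_score(None, "my dog sitting next to my mug on a beach"))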

      @article{yeh2024gen4gen,
        title={Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition},
        author={Yeh, Chun-Hsiao and Cheng, Ta-Ying and Hsieh, He-Yen and Lin, Chuan-En and Ma, Yi and Markham, Andrew and Trigoni, Niki and Kung, Hsiang-Tsung and Chen, Yubei},
        journal={arXiv preprint arXiv:2402.15504},
        year={2024}
      }

        

Magic-Me: Identity-Specific Video Customized Diffusion
Ze Ma*, Daquan Zhou*, Chun-Hsiao Yeh, Xue-She Wang, Xiuyu Li, Huanrui Yang, Zhen Dong, Kurt Keutzer, Jiashi Feng
ECCV AI4VA Workshop, 2024; Tech Report, 2024.

| Project Page | Abstract | Bibtex | Preprint | šŸ¤— Huggingface Demo | Code |

Creating content for a specific identity (ID) has attracted significant interest in the field of generative models. In the field of text-to-image generation (T2I), subject-driven content generation has achieved great progress with the ID in the images controllable. However, extending it to video generation is not well explored. In this work, we propose a simple yet effective subject identity controllable video generation framework, termed Video Custom Diffusion (VCD). With a specified subject ID defined by a few images, VCD reinforces the identity information extraction and injects frame-wise correlation at the initialization stage for stable video outputs with identity preserved to a large extent. To achieve this, we propose three novel components that are essential for high-quality ID preservation: 1) an ID module trained with the cropped identity by prompt-to-segmentation to disentangle the ID information and the background noise for more accurate ID token learning; 2) a text-to-video (T2V) VCD module with a 3D Gaussian Noise Prior for better inter-frame consistency; and 3) video-to-video (V2V) Face VCD and Tiled VCD modules to deblur the face and upscale the video for higher resolution. Despite its simplicity, we conducted extensive experiments to verify that VCD is able to generate stable and high-quality videos with better ID preservation than the selected strong baselines. Besides, due to the transferability of the ID module, VCD also works well with publicly available finetuned text-to-image models, further improving its usability.
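
For intuition on the 3D Gaussian Noise Prior mentioned above, the sketch below shows one simple way to give per-frame latents a shared Gaussian component so that neighbouring frames start from correlated noise; the mixing weight and the exact covariance used by VCD are assumptions here, not taken from the released code.

# Correlated per-frame noise initialization (illustrative sketch).
import numpy as np

def correlated_video_noise(num_frames, latent_shape, frame_corr=0.5, seed=0):
    """Gaussian noise with pairwise inter-frame correlation ~ frame_corr."""
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal(latent_shape)                 # shared across frames
    per_frame = rng.standard_normal((num_frames, *latent_shape))
    # Mixing in variance keeps each frame's noise approximately N(0, 1).
    return np.sqrt(frame_corr) * shared + np.sqrt(1.0 - frame_corr) * per_frame

noise = correlated_video_noise(num_frames=16, latent_shape=(4, 64, 64))
print(noise.shape)   # (16, 4, 64, 64): per-frame latents to seed a T2V sampler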

      @article{ma2024magic,
        title={Magic-Me: Identity-Specific Video Customized Diffusion},
        author={Ma, Ze and Zhou, Daquan and Yeh, Chun-Hsiao and Wang, Xue-She and Li, Xiuyu and Yang, Huanrui and Dong, Zhen and Keutzer, Kurt and Feng, Jiashi},
        journal={arXiv preprint arXiv:2402.09368},
        year={2024}
      }

        

Meta-Personalizing Vision-Language Models to Find Named Instances in Video
Chun-Hsiao Yeh, Bryan Russell, Josef Sivic, Fabian Caba Heilbron, and Simon Jenni
Conference on Computer Vision and Pattern Recognition (CVPR), 2023.

| Project Page | Abstract | Bibtex | Preprint | PDF | Project Video | Dataset |

Large-scale vision-language models (VLM) have shown impressive results for language-guided search applications. While these models allow category-level queries, they currently struggle with personalized searches for moments in a video where a specific object instance such as ``My dog Biscuit'' appears. We present the following three contributions to address this problem. First, we describe a method to meta-personalize a pre-trained VLM, learning how to learn to personalize a VLM at test time to search in video. Our method extends the VLM's token vocabulary by learning novel word embeddings specific to each instance. To capture only instance-specific features, we represent each instance embedding as a combination of shared and learned global category features. Second, we propose to learn such personalization without explicit human supervision. Our approach automatically identifies moments of named visual instances in video using transcripts and vision-language similarity in the VLM's embedding space. Finally, we introduce This-Is-My, a personal video instance retrieval benchmark. We evaluate our approach on This-Is-My and DeepFashion2 and show that we obtain a 15% relative improvement over the state of the art on the latter dataset.
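
The key modeling idea above, extending the VLM's token vocabulary with an instance embedding built from shared global category features plus an instance-specific part, can be sketched as follows. The shapes, the softmax mixing rule, and the residual scale are assumptions for illustration; the paper defines the actual parameterization.

# Sketch: a personalized token as a mixture of shared category features
# plus an instance-specific residual, appended to the text vocabulary.
import numpy as np

rng = np.random.default_rng(0)
d, num_categories = 512, 32

category_features = rng.standard_normal((num_categories, d))   # shared features
mixing_logits = rng.standard_normal(num_categories)            # learned per instance
instance_residual = 0.1 * rng.standard_normal(d)                # learned per instance

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

instance_token = softmax(mixing_logits) @ category_features + instance_residual

vocab = rng.standard_normal((49408, d))                # stand-in pretrained embeddings
extended_vocab = np.vstack([vocab, instance_token])    # new token, e.g. "my dog Biscuit"
print(extended_vocab.shape)                            # (49409, 512)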

      @inproceedings{yeh2023meta,
        title={Meta-Personalizing Vision-Language Models To Find Named Instances in Video},
        author={Yeh, Chun-Hsiao and Russell, Bryan and Sivic, Josef and Heilbron, Fabian Caba and Jenni, Simon},
        booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
        pages={19123--19132},
        year={2023}
      }
        

Decoupled Contrastive Learning
Chun-Hsiao Yeh, Cheng-Yao Hong, Yen-Chi Hsu, Tyng-Luh Liu, Yubei Chen, and Yann LeCun
European Conference on Computer Vision (ECCV), 2022.

| Abstract | Bibtex | Preprint | PDF | Project Video | Code (Lightly-ai) |

Contrastive learning (CL) is one of the most successful paradigms for self-supervised learning (SSL). In a principled way, it considers two augmented "views" of the same image as positive to be pulled closer, and all other images as negative to be pushed further apart. However, behind the impressive success of CL-based techniques, their formulation often relies on heavy-computation settings, including large sample batches, extensive training epochs, etc. We are thus motivated to tackle these issues and establish a simple, efficient, yet competitive baseline of contrastive learning. Specifically, we identify, from theoretical and empirical studies, a noticeable negative-positive-coupling (NPC) effect in the widely used InfoNCE loss, leading to unsuitable learning efficiency with respect to the batch size. By removing the NPC effect, we propose the decoupled contrastive learning (DCL) loss, which removes the positive term from the denominator and significantly improves the learning efficiency. DCL achieves competitive performance with less sensitivity to sub-optimal hyperparameters, requiring neither the large batches of SimCLR, the momentum encoding of MoCo, nor long training epochs. We demonstrate this on various benchmarks and show robustness to suboptimal hyperparameters. Notably, SimCLR with DCL achieves 68.2% ImageNet-1K top-1 accuracy using batch size 256 within 200 epochs of pre-training, outperforming its SimCLR baseline by 6.4%. Further, DCL can be combined with the SOTA contrastive learning method, NNCLR, to achieve 72.3% ImageNet-1K top-1 accuracy with a 512 batch size in 400 epochs, which represents a new SOTA in contrastive learning. We believe DCL provides a valuable baseline for future contrastive SSL studies.
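
Since the whole point of DCL is a small change to the loss, here is a minimal per-anchor NumPy sketch of InfoNCE versus DCL: the only difference is that the positive similarity is dropped from the denominator. Real implementations (e.g., the Lightly code linked above) operate on batches of normalized embeddings; the temperature and similarity values here are arbitrary.

# Per-anchor InfoNCE vs. DCL loss (illustrative sketch).
import numpy as np

def infonce_loss(pos_sim, neg_sims, temperature=0.1):
    logits = np.concatenate([[pos_sim], neg_sims]) / temperature
    return -pos_sim / temperature + np.log(np.exp(logits).sum())

def dcl_loss(pos_sim, neg_sims, temperature=0.1):
    # Decoupled: the positive term no longer appears in the denominator,
    # removing the negative-positive coupling (NPC) effect.
    neg_logits = np.asarray(neg_sims) / temperature
    return -pos_sim / temperature + np.log(np.exp(neg_logits).sum())

pos, negs = 0.8, np.array([0.1, -0.2, 0.05])
print(infonce_loss(pos, negs), dcl_loss(pos, negs))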

@inproceedings{yeh2022decoupled,
  title={Decoupled contrastive learning},
  author={Yeh, Chun-Hsiao and Hong, Cheng-Yao and Hsu, Yen-Chi and Liu, Tyng-Luh and Chen, Yubei and LeCun, Yann},
  booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part XXVI},
  pages={668--684},
  year={2022},
  organization={Springer}
}

SAGA: Self-Augmentation with Guided Attention for Representation Learning
Chun-Hsiao Yeh, Cheng-Yao Hong, Yen-Chi Hsu, and Tyng-Luh Liu
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022.

| Abstract | Bibtex | IEEE Webpage | PDF |

Self-supervised training that elegantly couples contrastive learning with a wide spectrum of data augmentation techniques has been shown to be a successful paradigm for representation learning. However, current methods implicitly maximize the agreement between differently augmented views of the same sample, which may perform poorly in certain situations. For example, considering an image comprising a boat on the sea, one augmented view is cropped solely from the boat and the other from the sea, whereas linking these two to form a positive pair could be misleading. To resolve this issue, we introduce a Self-Augmentation with Guided Attention (SAGA) strategy, which augments input data based on predictive attention to learn representations rather than simply applying off-the-shelf augmentation schemes. As a result, the proposed self-augmentation framework enables feature learning to enhance the robustness of representation.
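
To make the boat-and-sea example concrete, the toy function below samples a crop center in proportion to an attention map, so augmented views stay on the salient object rather than drifting onto the background. How SAGA actually predicts and uses its attention is specified in the paper; this only illustrates the guided-sampling idea.

# Attention-guided cropping (toy illustration).
import numpy as np

def attention_guided_crop(image, attention, crop_size, rng=None):
    """image: (H, W, C); attention: (H, W) nonnegative saliency map."""
    rng = rng or np.random.default_rng()
    h, w = attention.shape
    probs = attention.ravel() / attention.sum()
    idx = rng.choice(h * w, p=probs)              # crop center ~ attention mass
    cy, cx = divmod(idx, w)
    cy = int(np.clip(cy, crop_size // 2, h - crop_size // 2))
    cx = int(np.clip(cx, crop_size // 2, w - crop_size // 2))
    top, left = cy - crop_size // 2, cx - crop_size // 2
    return image[top:top + crop_size, left:left + crop_size]

img = np.zeros((64, 64, 3))
att = np.zeros((64, 64)); att[20:40, 20:40] = 1.0            # pretend the boat is here
print(attention_guided_crop(img, att, crop_size=16).shape)   # (16, 16, 3)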

@inproceedings{yeh2022saga,
  title={SAGA: Self-Augmentation with Guided Attention for Representation Learning},
  author={Yeh, Chun-Hsiao and Hong, Cheng-Yao and Hsu, Yen-Chi and Liu, Tyng-Luh},
  booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={3463--3467},
  year={2022},
  organization={IEEE}
}

Scene Novelty Prediction from Unsupervised Discriminative Feature Learning
Arian Ranjbar*, Chun-Hsiao Yeh*, Sascha Hornauer, Stella X. Yu, and Ching-Yao Chan
IEEE International Conference on Intelligent Transportation Systems (ITSC), 2020.
(* indicates equal contribution)

| Abstract | Bibtex | IEEE Webpage | PDF |

Deep learning approaches are widely explored in safety-critical autonomous driving systems on various tasks. Network models, trained on big data, map input to probable prediction results. However, it is unclear how to get a measure of confidence on this prediction at test time. Our approach to gain this additional information is to estimate how similar test data is to the training data that the model was trained on. We map training instances onto a feature space that is the most discriminative among them. We then model the entire training set as a Gaussian distribution in that feature space. The novelty of the test data is characterized by its low probability of being in that distribution, or equivalently a large Mahalanobis distance in the feature space. Our distance metric in the discriminative feature space achieves a better novelty prediction performance than the state-of-the-art methods on most classes in CIFAR-10 and ImageNet. Using semantic segmentation as a proxy task often needed for autonomous driving, we show that our unsupervised novelty prediction correlates with the performance of a segmentation network trained on full pixel-wise annotations. These experimental results demonstrate potential applications of our method upon identifying scene familiarity and quantifying the confidence in autonomous driving actions.
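
The novelty score itself is simple enough to show directly: fit a single Gaussian to the training features and score a test sample by its Mahalanobis distance. The sketch below assumes the discriminative features have already been extracted; the regularization constant is an assumption.

# Gaussian fit + Mahalanobis novelty score in feature space (sketch).
import numpy as np

def fit_gaussian(train_features):
    mean = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])           # regularize for invertibility
    return mean, np.linalg.inv(cov)

def mahalanobis_novelty(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))       # large distance => likely novel

rng = np.random.default_rng(0)
train = rng.standard_normal((1000, 16))          # stand-in training features
mean, cov_inv = fit_gaussian(train)
print(mahalanobis_novelty(rng.standard_normal(16), mean, cov_inv),        # in-distribution
      mahalanobis_novelty(rng.standard_normal(16) + 5.0, mean, cov_inv))  # shifted sample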

@inproceedings{ranjbar2020scene,
  title={Scene novelty prediction from unsupervised discriminative feature learning},
  author={Ranjbar, Arian and Yeh, Chun-Hsiao and Hornauer, Sascha and Yu, Stella X and Chan, Ching-Yao},
  booktitle={2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)},
  pages={1--7},
  year={2020},
  organization={IEEE}
}

Face Liveness Detection Based on Perceptual Image Quality Assessment Features with Multi-scale Analysis
Chun-Hsiao Yeh, and Herng-Hua Chang
IEEE Winter Conference on Applications of Computer Vision (WACV), 2018.

| Abstract | Bibtex | IEEE Webpage | PDF |

Vulnerability of recognition systems to spoofing attacks (presentation attacks) is still an open security issue in the biometrics domain. Among all biometric traits, face is exposed to the most serious threat since it is particularly easy to access and reproduce. In this paper, an effective approach against face spoofing attacks based on perceptual image quality assessment features with multi-scale analysis is presented. First, we demonstrate that the recently proposed blind image quality evaluator (BIQE) is effective in detecting spoofing attacks. Next, we combine the BIQE with an image quality assessment model called effective pixel similarity deviation (EPSD), which we propose to obtain the standard deviation of the gradient magnitude similarity map by selecting effective pixels in the image. A total of 21 features acquired from the BIQE and EPSD constitute the multi-scale descriptor for classification. Extensive experiments based on both intra-dataset and cross-dataset protocols were performed using three existing benchmarks, namely, Replay-Attack, CASIA, and UVAD. The proposed algorithm demonstrated its superiority in detecting face spoofing attacks over many state-of-the-art methods. We believe that the incorporation of image quality assessment knowledge into face liveness detection is promising for improving the overall accuracy.
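
As a rough illustration of an EPSD-style feature, the sketch below builds a gradient-magnitude-similarity map between an image and a smoothed reference, keeps only high-gradient "effective" pixels, and takes the standard deviation. The actual EPSD definition, the effective-pixel selection rule, and the BIQE features are specified in the paper; the constants here are assumptions.

# EPSD-style feature (toy sketch); requires NumPy and SciPy.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def gradient_magnitude(img):
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def epsd_like_feature(gray, c=170.0, tau=0.1):
    reference = gaussian_filter(gray, sigma=1.5)          # smoothed self-reference
    m1, m2 = gradient_magnitude(gray), gradient_magnitude(reference)
    gms = (2 * m1 * m2 + c) / (m1**2 + m2**2 + c)         # similarity map in (0, 1]
    effective = gms[m1 > tau * m1.max()]                  # keep textured pixels only
    return float(effective.std())

gray = np.random.default_rng(0).random((128, 128))
print(epsd_like_feature(gray))   # one scalar; the paper's descriptor combines 21 BIQE/EPSD features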

@inproceedings{yeh2018face,
  title={Face liveness detection based on perceptual image quality assessment features with multi-scale analysis},
  author={Yeh, Chun-Hsiao and Chang, Herng-Hua},
  booktitle={2018 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  pages={49--56},
  year={2018},
  organization={IEEE}
}

Face Liveness Detection with Feature Discrimination between Sharpness and Blurriness
Chun-Hsiao Yeh, and Herng-Hua Chang
International Conference on Machine Vision Applications (MVA), 2017. (Oral)

| Abstract | Bibtex | IEEE Webpage | PDF |

Face recognition has been extensively used in a wide variety of security systems for identity authentication for years. However, many security systems are vulnerable to spoofing face attacks (e.g., 2D printed photo, replayed video). Consequently, a number of anti-spoofing approaches have been proposed. In this study, we introduce a new algorithm that addresses the face liveness detection based on the digital focus technique. The proposed algorithm relies on the property of digital focus with various depths of field (DOFs) while shooting. Two features of the blurriness level and the gradient magnitude threshold are computed on the nose and the cheek subimages. The differences of these two features between the nose and the cheek in real face images and spoofing face images are used to facilitate detection. A total of 75 subjects with both real and spoofing face images were used to evaluate the proposed framework. Preliminary experimental results indicated that this new face liveness detection system achieved a high recognition rate of 94.67% and outperformed many state-of-the-art methods. The computation speed of the proposed algorithm was the fastest among the tested methods.
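
The depth-of-field cue described above can be illustrated with a simple sharpness comparison between the nose and cheek sub-images; the variance-of-Laplacian score used here is a generic stand-in, not the paper's blurriness or gradient-magnitude-threshold features.

# Nose-vs-cheek sharpness cue (toy illustration); requires NumPy and SciPy.
import numpy as np
from scipy.ndimage import laplace

def sharpness(patch):
    return float(laplace(patch.astype(float)).var())      # higher = sharper

def liveness_cue(nose_patch, cheek_patch):
    # A real 3D face tends to show a larger nose/cheek sharpness gap than a
    # flat printed photo or a replayed video.
    return abs(sharpness(nose_patch) - sharpness(cheek_patch))

rng = np.random.default_rng(0)
nose, cheek = rng.random((32, 32)), rng.random((32, 32))   # stand-in sub-images
print(liveness_cue(nose, cheek))   # feed this cue (with others) to a classifier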

@inproceedings{yeh2017face,
  title={Face liveness detection with feature discrimination between sharpness and blurriness},
  author={Yeh, Chun-Hsiao and Chang, Herng-Hua},
  booktitle={2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA)},
  pages={398--401},
  year={2017},
  organization={IEEE}
}
  Projects

Comics Generation
NTU CSIE ADLxMLDS 2017 Fall Project
TensorFlow implementation of a Conditional Generative Adversarial Network (CGAN) that automatically generates anime images based on given constraints (e.g., green hair, blue eyes).


Video Captioning
NTU CSIE ADLxMLDS 2017 Fall Project
TensorFlow implementation of a sequence-to-sequence model (S2VT) with an attention mechanism that generates descriptions (captions) for a given video.

  Professional Activities
  • Conference Reviewer: CVPR 2023, 2024; ICCV 2023; ECCV 2022, 2024; MICCAI 2024; CPAL 2024; ACCV 2024; IV 2022
  • Journal Reviewer: Pattern Recognition Letters, IEEE Access, Elsevier ESWA
  Awards and Honors
  • UC Berkeley Conference Travel Grant | $900, 2023
  • UC Berkeley Conference Travel Grant | $1500, 2022
  • National Taiwan University Exchange Program Application - Top 3.5% (17/501), 2016
  • Top-3 in IEEE International Conference on Robotics and Automation (ICRA) Challenge, USA, 2015
  • First Prize in Undergraduate Project Competition, 2015
  • Third Place in Federation of International Robot-Soccer Association (FIRA) Competition, China, 2014
  • First Prize in International Competition on Intelligent Humanoid Robotics (HuroCup), Taiwan, 2014
  • Dean's List Award, 2014
  • Top-5 in IEEE International Robot Hands-on Competition & Symposium Robot Bowling Competition (IRHOCS), Taiwan, 2013

Many thanks to webpage, webpage, and website for the awesome template.