About Zhiheng
Hi, this is Zhiheng. Welcome to my personal website, nice to see you😆~ I'm currently seeking a PhD position for Fall 2024. My research interests focus on structural text planning and generation, causality in natural language processing, and the development of efficient large language models (LLMs). I am deeply passionate about advancing AI by building systems that merge language structures with logical reasoning to address complex real-world problems efficiently. For more details, please see my SoP and CV.
Currently I'm a senior undergraduate student majoring in Computer Science at the University of Hong Kong, supervised by Lingpeng Kong. Last summer, I worked at the Berkeley NLP group, supervised by Kevin Yang and Prof. Dan Klein, focusing on fact tracking and contradiction detection in narrative structures such as story outlines. Previously, I worked at the ETH NLPED Lab under the supervision of Zhijing Jin and Dr. Mrinmaya Sachan, and as a research intern at Megvii.
Research Highlight
- Fact Tracking and Contradiction Detection in Story Outlines
- Probing the Understanding of Large Language Models in Causality
- Exploring a New Training Paradigm for Causal Graphs Based on Conditional Independence Tests
- Constructing and Pre-processing Datasets for NLP for Social Good
Publications
Can Large Language Models Infer Causation from Correlation?
Published in 2023
This research introduces the first benchmark dataset, Corr2Cause, to test large language models' (LLMs') pure causal inference skills.
Recommended citation: Jin Z, Liu J, Lyu Z, et al. Can Large Language Models Infer Causation from Correlation? arXiv preprint arXiv:2306.05836, 2023. https://arxiv.org/abs/2306.05836
Can Large Language Models Distinguish Cause from Effect?
Published in 2023
Our paper conducts a post-hoc analysis to check whether large language models can be used to distinguish cause from effect.
Recommended citation: Jin, Z., Lalwani, A., Vaidhya, T., Shen, X., Ding, Y., Lyu, Z., Sachan, M., Mihalcea, R., & Schölkopf, B. (2022). Logical Fallacy Detection. https://openreview.net/forum?id=ucHh-ytUkOH
Psychologically-Inspired Causal Prompts
Published in 2023
This paper proposes a prompting method that embeds causal direction and analyzes the resulting performance gap of LLMs.
Recommended citation: Lyu, Z., Jin, Z., Mattern, J., Mihalcea, R., Sachan, M., & Schoelkopf, B. (2023). Psychologically-Inspired Causal Prompts. arXiv preprint arXiv:2305.01764. https://arxiv.org/pdf/2305.01764
Logical Fallacy Detection
Published in 2022
This paper introduces a dataset for logical fallacy detection along with a baseline model.
Recommended citation: Jin, Z., Lalwani, A., Vaidhya, T., Shen, X., Ding, Y., Lyu, Z., Sachan, M., Mihalcea, R., & Schölkopf, B. (2022). Logical Fallacy Detection. https://arxiv.org/abs/2202.13758
More
If you're a potential collaborator with similar interests, feel free to email me at zhihenglyu.cs@gmail.com. I'm open to discussing my current projects or exploring potential cooperation in any of the following areas: research projects, developing open-source projects, co-founding a community/association, or start-up companies.
For freshmen and sophomores interested in pursuing CS research, please don’t hesitate to reach out. I’m excited to meet new people from diverse backgrounds, share my experiences, and potentially start a mentorship.