I'm a 4th-year undergraduate student majoring in Information and Computing Science at Beijing Jiaotong University. I am currently a research intern at Microsoft under the guidance of Principal Researcher Dr. Justin Ding. Previously, I was a research intern at Peking University under the supervision of Prof. Xiaojun Wan.

With a multidisciplinary background in Mathematics and Artificial Intelligence, and strong foundations in research and leadership, I am keen to explore opportunities that deepen my understanding of Generative AI.

šŸ”„ News

- 2024.08: šŸŽ‰šŸŽ‰ I joined Microsoft as a Research Intern under the guidance of Principal Researcher Justin Ding.

šŸ“ Publications

Manuscript submitted to AAAI 2025

DSGram: Dynamic Weighting Sub-Metrics for Grammatical Error Correction in the Era of Large Language Models

Jinxiang Xie, Yilin Li, Xunjian Yin, Xiaojun Wan

Project | Link to paper

  • We introduce new sub-metrics for GEC evaluation, diverging from previous categorical approaches.
  • We propose a novel dynamic weighting-based GEC evaluation method, DSGram, which integrates the Analytic Hierarchy Process (Saaty, 1987) with large language models to ascertain the relative importance of different evaluation criteria.
  • We present two datasets: DSGram-Eval, created through human scoring, and DSGram-LLMs, a larger dataset designed to simulate human scoring for fine-tuning.

šŸŽ– Honors and Awards

  • 2024.05 Honorable Mention of the 2024 Mathematical Contest in Modeling
  • 2023.04 First Prize of the 2023 Beijing Jiaotong University Mathematical Modeling Competition

šŸ“– Education

  • 2021.09 - 2025.06, Bachelor of Science in Information and Computing Science, Beijing Jiaotong University.

šŸ’» Internships

šŸ“š Blogs


LLMs: Cutting-Edge Technology and Future Applications

My notes from a presentation on LLMs at the Gaoling School of Artificial Intelligence, Renmin University of China, which featured speakers from top research institutions.

Prompt Engineering: How to Ask LLMs Better

An introduction to several methods for optimizing the output of large language models and reducing the likelihood of irrelevant or incorrect responses.